V$SQLAREA statistics

After a reload of a SQL statement in the shared SQL pool, V$SQLAREA will show
LOADS > 1.
Will V$SQLAREA also show cumulative statistics for the other columns
(e.g. SORTS, BUFFER_GETS) for executions prior to the 2nd or
later load?
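
One way to sanity-check this on your own system is to pull LOADS alongside the cumulative counters for a statement known to have aged out and come back. A minimal sketch (the LIKE filter and comment tag are only illustrations):

select sql_text,
       loads,          -- incremented each time the cursor is (re)loaded
       invalidations,
       executions,
       sorts,
       buffer_gets
  from v$sqlarea
 where loads > 1
   and sql_text like 'SELECT /* my_tag */%';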

Similar Messages

  • Query Tuning - Response time Statistics collection

    Our application is load tested for a period of 1 hour at peak load.
    During this specific period, say thousands of queries get executed in the database.
    What we need, for one particular query "select XYZ from ABC" within this span of 1 hour, is statistics like:
    Number of times executed
    Average response time
    Maximum response time
    Minimum response time
    90th percentile response time (the 90th-percentile value when sorted in ascending order)
    All these statistics are possible if I can get all the response times for that particular query for that period of 1 hour...
    I tried using SQL trace and TKPROF but was unable to get all these statistics...
    The application uses connection pooling, so connections are taken as and when needed...
    Any thoughts on this?
    Appreciate your help.

    I don't think v$sqlarea can help me out with the exact stats I need, but it certainly has a lot of other stats to offer. By the way, there is no dictionary view called v$sqlstats.
    There are other applications sharing the same database where I am trying to capture stats for my application, so flushing the cache, which currently has 30K rows, is not a feasible solution.
    Any more thoughts on this?
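
    If the statement is still cached, the cumulative counters in v$sqlarea can at least give an average response time, though not min/max/percentiles (those would require capturing every individual execution). A minimal sketch, assuming your release exposes the cumulative ELAPSED_TIME column (microseconds) in v$sqlarea:

    select sql_text,
           executions,
           elapsed_time / greatest(executions, 1) / 1e6 as avg_resp_secs
      from v$sqlarea
     where sql_text like 'select XYZ from ABC%'
     order by executions desc;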

  • V$sqlarea : 34769 rows

    Hi expert,
    I don't know if this is a problem, but how is it possible that v$sqlarea has 34769 records?
    SQL> set timing on
    SQL> select count(0) from v$sqlarea;
      COUNT(0)
         34769
    Elapsed: 00:01:20.61
    I'm looking at the Oracle documentation, which says:
    "V$SQLAREA lists statistics on shared SQL area and contains one row per SQL string. It provides statistics on SQL statements that are in memory, parsed, and ready for execution."
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_2129.htm
    So there are 34,000 SQL statements in memory?
    Where can I find some documentation on this view?
    many thanks, as usual.
    Cheers,
    Lain

    Depending on the nature of your application, the size of your SGA, and how well you or your developers use bind variables, the number of statements is either perfectly fine or really horrible. Like so many things in Oracle, it depends.
    One of our databases has several hundred concurrent users, but they are all using a single application that uses bind variables everywhere and has a fairly limited range of functionality. It has about 10,000 rows in v$sqlarea. Another database supports a dozen or so concurrent users using many related but different applications, most of which have never heard of a bind variable. It has about 60,000 rows, most of them being multiple copies of various SQL statements differing only by the literal strings used as predicates.
    John
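
    One way to see John's "multiple copies differing only by literals" pattern is to group on a fixed-length prefix of the statement text. A minimal sketch; the 60-character prefix and the threshold of 10 are arbitrary illustrative choices:

    select substr(sql_text, 1, 60) as sql_prefix,
           count(*)                as copies
      from v$sqlarea
     group by substr(sql_text, 1, 60)
    having count(*) > 10
     order by copies desc;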

  • SQL statements in v$sqlarea

    Hi
    When I get the SQLs from v$sqlarea, have these SQLs already executed, or are they currently running?

    I suggest you check the Oracle documentation.
    While they show similar content, you might also note the differences between the V$SQLAREA and V$SQL views:
    V$SQL
    V$SQL lists statistics on shared SQL area without the GROUP BY clause and contains one row for each child of the original SQL text entered. Statistics displayed in V$SQL are normally updated at the end of query execution. However, for long running queries, they are updated every 5 seconds. This makes it easy to see the impact of long running SQL statements while they are still in progress.
    V$SQLAREA
    V$SQLAREA lists statistics on shared SQL area and contains one row per SQL string. It provides statistics on SQL statements that are in memory, parsed, and ready for execution.
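
    The parent/child relationship between the two views can be seen directly: each V$SQLAREA row (parent) carries a VERSION_COUNT that should match the number of V$SQL rows (child cursors) joined on ADDRESS and HASH_VALUE. A minimal sketch:

    select a.version_count,
           count(s.child_number) as child_rows,
           a.sql_text
      from v$sqlarea a, v$sql s
     where s.address = a.address
       and s.hash_value = a.hash_value
     group by a.address, a.hash_value, a.version_count, a.sql_text
    having a.version_count > 1;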

  • Gather statistics in OEM

    Hi,
    I launched a job to gather statistics on just one schema of my database in OEM. How can I identify its session? Many thanks in advance.

    You can try the query below:
    select s.username, s.sid, s.serial#, sql.sql_text
      from v$session s, v$sqlarea sql
     where s.sql_address = sql.address
       and s.sql_hash_value = sql.hash_value
       and s.username = '&username'
       and lower(sql.sql_text) like '%dbms_stats%'
    Best Regards
    Krystian Zieja / mob
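
    On 10g and later, where both views expose SQL_ID, the same join can be written more simply (a hedged variant of the query above):

    select s.username, s.sid, s.serial#, q.sql_text
      from v$session s, v$sqlarea q
     where q.sql_id = s.sql_id
       and s.username = '&username'
       and lower(q.sql_text) like '%dbms_stats%';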

  • Execution Statistics

    I was looking at some statistics from an ADDM report and the executions of the statement made me curious. When a Statspack (AWR in 10g) report is run, are the executions of a particular statement accurate for the time frame of that report? Also, if I were to look at v$sqlarea or v$sql and pull out the executions, is that cumulative since database startup?
    I had a hard time finding that info in the documentation so any assistance would be greatly appreciated.
    Thanks,
    Brian

    And a follow up to this for those of you familiar with RAC. Are the statistics specific for a particular instance or are they an indication of the amount of times a statement was executed in the database regardless of instance?
    Thanks,
    Brian
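
    On the RAC point: the V$ views are populated per instance, while the GV$ counterparts add an INST_ID column so each instance's figures can be compared side by side. A minimal sketch (the &sql_id substitution variable is just a placeholder); note the counters accumulate only for as long as the cursor stays in the shared pool, so "since startup" is an upper bound:

    select inst_id, executions, loads, invalidations
      from gv$sqlarea
     where sql_id = '&sql_id'
     order by inst_id;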

  • How to get performance statistics like the OEM 9i console

    Hi All Gurus,
    I am having trouble getting the server load and seeing different sessions' statistics. As we all know, in the OEM 9i Java console, we were able to see the following useful columns for all the active sessions:
    CPU_TIME, PGA_MEMORY, PHYSICAL_DISK READS, LOGICAL READS, etc.
    But in the OEM 10g Java console, there are no such columns (I don't know why); for each session, the 10g console only gives basic info like USER_NAME, OSUSER, MACHINE NAME, LOGON TIME, etc., but no tuning statistics.
    So I was looking at the dynamic performance views to find all this information. But I am confused; there are a number of views: v$sql, v$sqlarea, v$sql_workarea, v$sqlstats, v$session, v$sesstat, v$open_cursors, etc.
    Can somebody explain the relationships between them and exactly which ones are needed to produce columns like those in OEM 9i? I am not sure whether v$sql or v$sqlarea represents one SQL statement, or how they relate to v$session. Lots of confusion! Please help. Thanks in advance.

    You have far more questions than can possibly be answered in a few short paragraphs.
    Pick up a copy of Jonathan Lewis' book on the CBO and you will find the answers to all of these questions and many more. Also check Jonathan's website and FAQ.
    You will find the link to it on the Links page of the PSOUG's website: www.psoug.org.
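
    As a starting point for the OEM 9i-style columns specifically, the usual building blocks are V$SESSION for the session list, V$SESSTAT for the per-session counters (sid, statistic#, value), and V$STATNAME to translate statistic# into a name. A minimal sketch; the four statistic names are common ones, so pick whichever your OEM columns map to:

    select se.sid,
           se.username,
           st.name,
           ss.value
      from v$session se, v$sesstat ss, v$statname st
     where ss.sid = se.sid
       and st.statistic# = ss.statistic#
       and st.name in ('CPU used by this session',
                       'session logical reads',
                       'physical reads',
                       'session pga memory')
       and se.username is not null
     order by se.sid, st.name;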

  • Statistics Currency Error while posting Sales Order (No: V1453)

    Hi Sappers,
    I am in the process of setting up a company in SAP ERP SD - IDES, and while posting a sales order, after I input Sold-to Party, PO number, delivering plant, and Incoterms and press ENTER, the error comes up as:
    Statistics: The currency from in INR for date 25.11.2011 could not be determined.
    Kindly suggest a possible solution.
    Thanks
    Rahul Tikku

    Hi,
    Just go to "OBD2", select the Accounts group of your customer and "Double Click" on it, then "Double  cliick" on "Sales Data" > Further goto,  "Sales" and check if "Currency" field is suppressed. if so then change it to required or optional Entry. save and update you customer master in XD02. Then try your process.
    Reagrds
    DSR

  • How Oracle decides whether to use an index or a full scan (statistics)

    Hi Guys,
    Let's say I have an index on a column.
    The table and index statistics have been gathered (without histograms).
    Let's say I perform: select * from table where a=5;
    Oracle will perform a full scan.
    But from which statistics will it be able to know that most of the column values = 5? (histograms not used)
    After analyzing, we get the below:
    Table Statistics :
    (NUM_ROWS)
    (BLOCKS)
    (EMPTY_BLOCKS)
    (AVG_SPACE)
    (CHAIN_COUNT)
    (AVG_ROW_LEN)
    Index Statistics :
    (BLEVEL)
    (LEAF_BLOCKS)
    (DISTINCT_KEYS)
    (AVG_LEAF_BLOCKS_PER_KEY)
    (AVG_DATA_BLOCKS_PER_KEY)
    (CLUSTERING_FACTOR)
    thanks
    Index Column (A)
    ======
    1
    1
    2
    2
    5
    5
    5
    5
    5
    5

    I have prepared some explanation and had not noticed that the topic has been marked as answered.
    That sentence of mine is not completely true.
    A column "without histograms" means that the column has only one bucket. More correctly: even without histograms there is data in dba_tab_histograms which we can consider as one bucket for the whole column. In fact this data is retrieved from hist_head$, not from histgrm$ as usual buckets are.
    Technically, there are no buckets without gathered histograms.
    Let's create a table with skewed data distribution.
    SQL> create table t as
      2  select least(rownum,3) as val, '*' as pad
      3    from dual
      4  connect by level <= 1000000;
    Table created
    SQL> create index idx on t(val);
    Index created
    SQL> select val, count(*)
      2    from t
      3   group by val;
           VAL   COUNT(*)
             1          1
             2          1
             3     999998
    So, we have a table with a very skewed data distribution.
    Let's gather statistics without histograms.
    SQL> exec dbms_stats.gather_table_stats( user, 'T', estimate_percent => 100, method_opt => 'for all columns size 1', cascade => true);
    PL/SQL procedure successfully completed
    SQL> select blocks, num_rows  from dba_tab_statistics
      2   where table_name = 'T';
        BLOCKS   NUM_ROWS
          3106    1000000
    SQL> select blevel, leaf_blocks, clustering_factor
      2    from dba_ind_statistics t
      3   where table_name = 'T'
      4     and index_name = 'IDX';
        BLEVEL LEAF_BLOCKS CLUSTERING_FACTOR
             2        4017              3107
    SQL> select column_name,
      2         num_distinct,
      3         density,
      4         num_nulls,
      5         low_value,
      6         high_value
      7    from dba_tab_col_statistics
      8   where table_name = 'T'
      9     and column_name = 'VAL';
    COLUMN_NAME  NUM_DISTINCT    DENSITY  NUM_NULLS      LOW_VALUE      HIGH_VALUE
    VAL                     3 0,33333333          0           C102            C104
    So, Oracle suggests that values between 1 and 3 (raw C102 and C104) are distributed uniformly and the density of the distribution is 0.33.
    Let's try to explain plan
    SQL> explain plan for
      2  select --+ no_cpu_costing
      3         *
      4    from t
      5   where val = 1
      6  ;
    Explained
    SQL> @plan
    | Id  | Operation         | Name | Rows  | Cost  |
    |   0 | SELECT STATEMENT  |      |   333K|   300 |
    |*  1 |  TABLE ACCESS FULL| T    |   333K|   300 |
    Predicate Information (identified by operation id):
       1 - filter("VAL"=1)
    Note
       - cpu costing is off (consider enabling it)
    Below is an excerpt from the 10053 trace:
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table:  T  Alias:  T
        #Rows: 1000000  #Blks:  3106  AvgRowLen:  5.00
    Index Stats::
      Index: IDX  Col#: 1
        LVLS: 2  #LB: 4017  #DK: 3  LB/K: 1339.00  DB/K: 1035.00  CLUF: 3107.00
    SINGLE TABLE ACCESS PATH
      BEGIN Single Table Cardinality Estimation
      Column (#1): VAL(NUMBER)
        AvgLen: 3.00 NDV: 3 Nulls: 0 Density: 0.33333 Min: 1 Max: 3
      Table:  T  Alias: T
        Card: Original: 1000000  Rounded: 333333  Computed: 333333.33  Non Adjusted: 333333.33
      END   Single Table Cardinality Estimation
      Access Path: TableScan
        Cost:  300.00  Resp: 300.00  Degree: 0
          Cost_io: 300.00  Cost_cpu: 0
          Resp_io: 300.00  Resp_cpu: 0
      Access Path: index (AllEqRange)
        Index: IDX
        resc_io: 2377.00  resc_cpu: 0
        ix_sel: 0.33333  ix_sel_with_filters: 0.33333
        Cost: 2377.00  Resp: 2377.00  Degree: 1
      Best:: AccessPath: TableScan
           Cost: 300.00  Degree: 1  Resp: 300.00  Card: 333333.33  Bytes: 0
    The cost of the FTS here is 300 and the cost of the Index Range Scan here is 2377.
    I have disabled cpu costing, so selectivity does not affect the cost of FTS.
    The cost of the Index Range Scan is calculated as
    blevel + (leaf_blocks * selectivity + clustering_factor * selectivity) = 2 + (4017*0.33333 + 3107*0.33333) = 2377.
    Oracle considers that it has to read 2 root/branch blocks of the index, 1339 leaf blocks of the index and 1036 blocks of the table.
    Pay attention that selectivity is the major component of the cost of the Index Range Scan.
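
    As a cross-check, the same arithmetic can be reproduced from the dictionary statistics quoted above, taking selectivity as 1/NUM_DISTINCT (a sketch against this T/IDX example only):

    select i.blevel
           + ceil(i.leaf_blocks * (1 / c.num_distinct))
           + ceil(i.clustering_factor * (1 / c.num_distinct)) as est_cost
      from dba_ind_statistics i, dba_tab_col_statistics c
     where i.table_name  = 'T'
       and i.index_name  = 'IDX'
       and c.table_name  = 'T'
       and c.column_name = 'VAL';
    -- 2 + ceil(4017/3) + ceil(3107/3) = 2 + 1339 + 1036 = 2377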
    Let's try to gather histograms:
    SQL> exec dbms_stats.gather_table_stats( user, 'T', estimate_percent => 100, method_opt => 'for columns val size 3', cascade => true);
    PL/SQL procedure successfully completed
    If you look at dba_tab_histograms you will see the following:
    SQL> select endpoint_value,
      2         endpoint_number
      3    from dba_tab_histograms
      4   where table_name = 'T'
      5     and column_name = 'VAL'
      6  ;
    ENDPOINT_VALUE ENDPOINT_NUMBER
                 1               1
                 2               2
                 3         1000000
    ENDPOINT_VALUE is the column value (as a number, for any type of data) and ENDPOINT_NUMBER is the cumulative number of rows.
    Number of rows for any ENDPOINT_VALUE = ENDPOINT_NUMBER for this ENDPOINT_VALUE - ENDPOINT_NUMBER for the previous ENDPOINT_VALUE.
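
    Since ENDPOINT_NUMBER is cumulative, the per-value row counts can be recovered by differencing, for example with an analytic LAG (a minimal sketch):

    select endpoint_value,
           endpoint_number
             - lag(endpoint_number, 1, 0)
               over (order by endpoint_number) as num_rows
      from dba_tab_histograms
     where table_name  = 'T'
       and column_name = 'VAL'
     order by endpoint_value;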
    explain plan and 10053 trace of the same query:
    | Id  | Operation                   | Name | Rows  | Cost  |
    |   0 | SELECT STATEMENT            |      |     1 |     4 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T    |     1 |     4 |
    |*  2 |   INDEX RANGE SCAN          | IDX  |     1 |     3 |
    Predicate Information (identified by operation id):
       2 - access("VAL"=1)
    Note
       - cpu costing is off (consider enabling it)
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table:  T  Alias:  T
        #Rows: 1000000  #Blks:  3106  AvgRowLen:  5.00
    Index Stats::
      Index: IDX  Col#: 1
        LVLS: 2  #LB: 4017  #DK: 3  LB/K: 1339.00  DB/K: 1035.00  CLUF: 3107.00
    SINGLE TABLE ACCESS PATH
      BEGIN Single Table Cardinality Estimation
      Column (#1): VAL(NUMBER)
        AvgLen: 3.00 NDV: 3 Nulls: 0 Density: 5.0000e-07 Min: 1 Max: 3
        Histogram: Freq  #Bkts: 3  UncompBkts: 1000000  EndPtVals: 3
      Table:  T  Alias: T
        Card: Original: 1000000  Rounded: 1  Computed: 1.00  Non Adjusted: 1.00
      END   Single Table Cardinality Estimation
      Access Path: TableScan
        Cost:  300.00  Resp: 300.00  Degree: 0
          Cost_io: 300.00  Cost_cpu: 0
          Resp_io: 300.00  Resp_cpu: 0
      Access Path: index (AllEqRange)
        Index: IDX
        resc_io: 4.00  resc_cpu: 0
        ix_sel: 1.0000e-06  ix_sel_with_filters: 1.0000e-06
        Cost: 4.00  Resp: 4.00  Degree: 1
      Best:: AccessPath: IndexRange  Index: IDX
           Cost: 4.00  Degree: 1  Resp: 4.00  Card: 1.00  Bytes: 0
    Pay attention to the selectivity, ix_sel: 1.0000e-06
    Cost of the FTS is still the same = 300,
    but cost of the Index Range Scan is 4 now: 2 root/branch blocks + 1 leaf block + 1 table block.
    Thus, the conclusion: histograms allow selectivity to be calculated more accurately. The aim is to get more efficient execution plans.
    Alexander Anokhin
    http://alexanderanokhin.wordpress.com/

  • Forms 6.0 - how to get forms utilization statistics?

    I would like to know how I can retrieve statistics on form utilization over a period, without recompiling any form. I want to know, for instance, the 20 most-used forms in an Oracle Forms 6.0 app. Is it possible?
    thanks
    Luis Reis

    Frank
    thanks for your answer, but that is exactly the problem: I have 800 forms and we have no time to open them all and add this script. (I am a CIO, not a tech guy, so I presume you are saying that we must write code in each form, no?)
    Luis Reis

  • Upload data from excel to Ztable with statistics

    Hi,
    I have a requirement to upload data from an Excel sheet to a Z-table.
    Here I need to provide the user with execution statistics like:
    1. Number of records read from the Excel spreadsheet
    2. Number of records processed successfully
    3. Number of records with errors
    4. Name and location of the error log file (text-file format)
    5. Name and location of the file containing error records (Excel spreadsheet format)
    I would appreciate it if any of you have code written for the same.

    See the example code below for uploading from an Excel file to SAP.
    REPORT ZLWMI151_UPLOAD no standard page heading
                           line-size 100 line-count 60.
    *tables : zbatch_cross_ref.
    data : begin of t_text occurs 0,
           werks(4) type c,
           cmatnr(15) type c,
           srlno(12) type n,
           matnr(7) type n,
           charg(10) type n,
           end of t_text.
    data: begin of t_zbatch occurs 0,
          werks like zbatch_cross_ref-werks,
          cmatnr like zbatch_cross_ref-cmatnr,
          srlno like zbatch_cross_ref-srlno,
          matnr like zbatch_cross_ref-matnr,
          charg like zbatch_cross_ref-charg,
          end of t_zbatch.
    data : g_repid like sy-repid,
           g_line like sy-index,
           g_line1 like sy-index,
           $v_start_col         type i value '1',
           $v_start_row         type i value '2',
           $v_end_col           type i value '256',
           $v_end_row           type i value '65536',
           gd_currentrow type i.
    data: itab like alsmex_tabline occurs 0 with header line.
    data : t_final like zbatch_cross_ref occurs 0 with header line.
    selection-screen : begin of block blk with frame title text.
    parameters : p_file like rlgrap-filename obligatory.
    selection-screen : end of block blk.
    initialization.
      g_repid = sy-repid.
    at selection-screen on value-request for p_file.
      CALL FUNCTION 'F4_FILENAME'
           EXPORTING
                PROGRAM_NAME = g_repid
           IMPORTING
                FILE_NAME    = p_file.
    start-of-selection.
    * Uploading the data into Internal Table
      perform upload_data.
      perform modify_table.
    top-of-page.
    * The original post calls a custom header function here with the
    * FLEX_TEXT values left blank; it is shown commented out so the report compiles.
    *  CALL FUNCTION 'Z_HEADER'
    *    EXPORTING
    *      FLEX_TEXT1       =
    *      FLEX_TEXT2       =
    *      FLEX_TEXT3       =
    *&      Form  upload_data
    *       text
    FORM upload_data.
      CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
           EXPORTING
                FILENAME                = p_file
                I_BEGIN_COL             = $v_start_col
                I_BEGIN_ROW             = $v_start_row
                I_END_COL               = $v_end_col
                I_END_ROW               = $v_end_row
           TABLES
                INTERN                  = itab
           EXCEPTIONS
                INCONSISTENT_PARAMETERS = 1
                UPLOAD_OLE              = 2
                OTHERS                  = 3.
      IF SY-SUBRC <> 0.
        write:/10 'File '.
      ENDIF.
      if sy-subrc eq 0.
        read table itab index 1.
        gd_currentrow = itab-row.
        loop at itab.
          if itab-row ne gd_currentrow.
            append t_text.
            clear t_text.
            gd_currentrow = itab-row.
          endif.
          case itab-col.
            when '0001'.
              t_text-werks = itab-value.
            when '0002'.
              t_text-cmatnr = itab-value.
            when '0003'.
              t_text-srlno = itab-value.
            when '0004'.
              t_text-matnr = itab-value.
            when '0005'.
              t_text-charg = itab-value.
          endcase.
        endloop.
      endif.
      append t_text.
    ENDFORM.                    " upload_data
    *&      Form  modify_table
    *       Modify the table ZBATCH_CROSS_REF
    FORM modify_table.
      loop at t_text.
        t_final-werks = t_text-werks.
        t_final-cmatnr = t_text-cmatnr.
        t_final-srlno = t_text-srlno.
        t_final-matnr = t_text-matnr.
        t_final-charg = t_text-charg.
        t_final-erdat = sy-datum.
        t_final-erzet = sy-uzeit.
        t_final-ernam = sy-uname.
        t_final-rstat = 'U'.
        append t_final.
        clear t_final.
      endloop.
      delete t_final where werks = ''.
      describe table t_final lines g_line.
      sort t_final by werks cmatnr srlno.
    * Deleting the Duplicate Records
      perform select_data.
      describe table t_final lines g_line1.
      modify zbatch_cross_ref from table t_final.
      if sy-subrc ne 0.
        write:/ 'Updation failed'.
      else.
        Skip 1.
        Write:/12 'Updation has been Completed Successfully'.
        skip 1.
        Write:/12 'Records in file ',42 g_line .
        write:/12 'Updated records in Table',42 g_line1.
      endif.
      delete from zbatch_cross_ref where werks = ''.
    ENDFORM.                    " modify_table
    *&      Form  select_data
    *       Deleting the duplicate records
    FORM select_data.
      select werks
             cmatnr
             srlno from zbatch_cross_ref
             into table t_zbatch for all entries in t_final
             where werks = t_final-werks
             and  cmatnr = t_final-cmatnr
             and srlno = t_final-srlno.
      sort t_zbatch by werks cmatnr srlno.
      loop at t_zbatch.
        read table t_final with key werks = t_zbatch-werks
                                    cmatnr = t_zbatch-cmatnr
                                    srlno = t_zbatch-srlno.
        if sy-subrc eq 0.
          delete table t_final .
        endif.
        clear: t_zbatch,
               t_final.
      endloop.
    ENDFORM.                    " select_data
    Reward Points if it is helpful
    Thanks
    Seshu

  • UCCE 8.5.3/8.5.4 call volume statistics not matching in interval tables

    hello,
    We have just migrated a call center to UCCE 8.5.3 that runs roggers and ICM call flow scripts that contain very basic flows. In each flow there is basically a one-to-one ratio of call type elements to select skill group elements.
    Generally you would expect (as our scripts are written) almost the same call volume in the call type interval table and the skill group interval table, with allowable differences because of RONAs, etc., but basically somewhat close.
    In general this does work, but after a reboot of the roggers (A and B sides) (logger down first, then router, and reversed to start), the skill group interval data lags greatly behind the call type interval data, at maybe 10% of the call type interval volume. We have learned that completely shutting down both the A and B sides and then bringing them back up in order from scratch seems to alleviate the problem (until the next restart). We cannot just leave the servers alone, though, due to company security patching policies.
    Cisco TAC had recommended patching to UCCE 8.5.4, which we have recently done, but the problem persists.
    I was wondering if this data discrepancy has ever been seen by anyone else, or if possibly some system config issue might be self-defeating us? Rebooting the roggers leaves the phone system itself working just fine for the call centers, but the recording of statistics to the skill group interval table is greatly impacted, with almost no volume recorded in relation to the call type interval (and, for comparison, the call type skill group interval table as well).
    We would generally not worry about it, but the workforce management vendor that takes its feed from UCCE only uses the skill group interval, which is reporting almost no volume in the mentioned scenarios.
    If anyone can provide any information it would be most appreciated.
    Thanks.
    Greg

    Thank you for the response. The time source check is a great idea. We ran into problems before when internal web service servers did not match the PGs (CTIOS service) and CTI-provided stats did not match.
    We will continue to work through TAC, but I was just wondering if anyone else had seen this (as 8.5.4 did not fix it) and if maybe it could have been something self-defeating in our system configuration or scripting. We did not immediately know this was happening until our 3rd-party workforce management vendor made us aware.
    Thanks,
    Greg

  • Test cases required for BW Statistics to test in QA and DEV

    Hi All,
    I am currently working on a support project. My client completed installing BW Statistics in DEV and transported it to QA back in 2006. Currently, before moving the BI Statistics data to PRD, we have to test it in DEV and QA.
    How do I prepare sample test cases for testing it in DEV and QA? Please suggest.

    Hi,
    this forum is for the SAP BusinessObjects BI Solution architecture. I would suggest you post your question to the BW forum.
    ingo

  • Contract net value for Header Statistics is not correct

    There is an issue with a value contract. The net value in the header statistics is not showing correctly for some contracts, especially when we delete PO line items or reverse all entries (GR and IR) for a PO line item.
    The contract has one line with account category 'U'. The target value is 300,000.00 and the total value released to date is 160,000. The net value in the header statistics should be 140,000, but it is showing 600,000, which is over (double) the target value, and the user cannot release any further POs referencing this contract.
    Earlier I defined a net price of 300,000 for the line item; I then changed the net price to zero and executed report RM06ENP0, but it doesn't work.
    Please share your experience and thoughts.
    Thanks,
    Shah.

    Hi Jurgen,
    There are a few purchase orders with multiple line items, each line item referencing the same line of the same contract.
    Only one purchase order has two deleted lines against this contract.
    These deleted lines' net price has been changed to zero and there is no PO history.
    The contract's released order value is correct, as there is only one line, but the net price is wrong, and the user gets the error 'target value is exceeded by $nnn' when trying to create a purchase order.
    Thanks,
    Shah.

  • Report using BI Statistics

    People, I activated BI Statistics in SAP BW.
    The MultiCube 0BWTC_C10 is active.
    Can I generate a report on load data (loading time, load status, etc.) using this MultiCube (0BWTC_C10)?

    Hi Akash,
    The standard queries below will provide the required details:
    InfoProvider  Bex Query Technical Name   Bex Query Description
    0TCT_MC11     0TCT_MC11_Q0140            InfoCube Status
    0TCT_MC11     0TCT_MC11_Q0141            InfoCube Correctness
    0TCT_MC11     0TCT_MC11_Q0240            InfoCube Status: Analysis
    0TCT_MC22     ZCS_CUBE_DATADETAIL_STAT   ZCS_CUBE_DATADETAIL_STAT
    0TCT_MC21     ZTCT_MC21_Q_FB_01          Dashboard - process chain historical loading time
    0TCT_MC22     ZCS_STAT_SPEND_LOADS       Statistics Spend Overview Loads
    0TCT_MC22     ZCS_STAT_SPEND_LOADS_PERF  Statistics Spend Overview Loads Performance
    0TCT_MC22     ZTCT_MC22_Q_FB_02          Dashboard - DSO loading time top 10
    0TCT_MC22     ZTCT_MC22_Q_FB_04          Dashboard - IC loading time top 10
    0TCT_MC22     ZTCT_MC22_Q_FB_06          Dashboard - IO loading time top 10
    -Arun.M.D
