OCM statistics

Hi,
Could we get some statistics about OCM exams? How many exams have been delivered, and how many people are certified, in EMEA / America / Asia?
Thanks for your answer!
Laurent

Hi Joel and thanks for your answer.
I expected a more precise answer. By reading more Oracle articles, I found out there were about 62 OCMs at the beginning of the year ( http://www.oracle.com/hk/corporate/index.html?press_040105.html ).
From that article, I can figure out there were 4 OCMs in Asia.
In EMEA, I was wondering how many of us there are. About a dozen candidates have taken the exam in England or Germany so far.
For OCP, I think the first-try pass rate is about 50%-60%. For OCM, I would say 20%-30%, but that is pure guesswork.
By the way, the http://otn.oracle.com/ocm page only lists the very first OCMs. That is not what I would call a statistic. They all took the exam in the first sessions in Chicago, didn't they?
Kind regards
Laurent

Similar Messages

  • ACE OCM counters increasing

    Hi,
    OCM (Outbound Connection Manager) counters are increasing, and we have found that the ACE is resetting client connections. I have found the following bug. We are currently running A2(1.1a). Has anyone else been hit by the same bug? Your experiences with this would be appreciated.
    CSCsr96168
    Connection Free List Corruption
    Symptom: New TCP connections are sometimes reset.
    Conditions: ACE module running A2(1.1a)
    Workaround: Downgrade to A2(1.1)
    Further Problem Description: This problem is suspected when `show np 2 me-stats -socm` shows the following counter incrementing:
    OCM Statistics: (Current)
    --------------
    Drop [out of connections]: 14455 16
    # show np 2 me-stats -socm
    OCM Statistics: (Current)
    Connection create received:                30526367            34
    LB dest decision received:                  6724048            15
    Drop [LB dest decision fail]:                    51             0
    NAT Pool Alloc [addr/port]:                   25586             0
    NAT Pool Free [addr/port]:                    25584             0
    OCM Packet count (Hi & Lo):                37250415            49
    (Context 4 Statistics)
    Drop [mac lookup fail]:                           1             0
    Connection inserted:                         853174             1
    Packet message transmitted:                  853149             1
    # show np 2 me-stats -socm
    OCM Statistics: (Current)
    Connection create received:                30526681            33
    LB dest decision received:                  6724153             8
    Drop [LB dest decision fail]:                    51             0
    NAT Pool Alloc [addr/port]:                   25586             0
    NAT Pool Free [addr/port]:                    25584             0
    OCM Packet count (Hi & Lo):                37250834            41
    (Context 4 Statistics)
    Drop [mac lookup fail]:                           1             0
    Connection inserted:                         853195             6
    Packet message transmitted:                  853170             6
    # show np 2 me-stats -socm
    OCM Statistics: (Current)
    Connection create received:                30528663            24
    LB dest decision received:                  6724773            11
    Drop [LB dest decision fail]:                    51             0
    NAT Pool Alloc [addr/port]:                   25586             0
    NAT Pool Free [addr/port]:                    25584             0
    OCM Packet count (Hi & Lo):                37253436            35
    (Context 4 Statistics)
    Drop [mac lookup fail]:                           1             0
    Connection inserted:                         853328             2
    Packet message transmitted:                  853303             2
    Regards,
    Akhtar

    Hello Akhtar,
    You've probably noticed that the "Drop [out of connections]" counter referenced in the bug is not shown in the output of your commands. This counter is only displayed when it has a non-zero value, so it is currently at zero and is not the one incrementing.
    Also, from the output you supplied, nothing looks alarming to me that would indicate the cause of the resets.  Here is how I would recommend you proceed:
    If possible, upgrade to a newer software release.  The release you are on is quite dated now and there have been hundreds of software fixes since that release.  Further, the A2(1.x) software train will no longer have any maintenance updates.  Upgrading to A2(2.3) may be the quickest solution for you if you are hitting a bug on the ACE.
    If you cannot upgrade, or you upgrade and the issue persists, then you should gather the following information:
    Get a showtech from the context in which the VIP resides
    Start a capture of the ACE TenGig port (see note below)
    Run some test connections to replicate the RST
    Stop the capture
    Get a second showtech
    Now you will be able to look at the connections in Wireshark and compare your two showtechs to see which error (or drop) counters, if any, increment. If necessary, please open an SR with Cisco TAC for expert analysis.
    NOTE: In order to do the TenGig capture, you would need to configure SPAN on the Supervisor. The source interface would be Te<slot>/1. For example, if your ACE is in slot 3, the source interface of your monitor session would be Te3/1; a rough configuration sketch follows below.
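    As an illustration only (the session number and the destination port are assumptions, not taken from this thread), such a SPAN session on the Supervisor could look roughly like this for an ACE in slot 3:
    Sup(config)# monitor session 1 source interface TenGigabitEthernet3/1 both
    Sup(config)# monitor session 1 destination interface GigabitEthernet2/3
    Sup(config)# end
    Sup# show monitor session 1
    Here GigabitEthernet2/3 simply stands in for whatever port the capture station is connected to.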
    Regards,
    Sean

  • ACE Drop (Dest nat fail):

    Hi All,
    I'm using ACE module A2(2.4)
    I'm trying to use the parameter server-conn reuse, but clients sometimes get status code 503.
    A#1/Test1# show np 1 me-stats "-socm -v"
    OCM Statistics: (Current)
    Errors:                                           0             0
    Connection create received:               231121503          1142
    LB dest decision received:                365473159          1473
    Nat app fixup recieved:                           0             0
    Connection unproxy received:               52997475           393
    Connection reproxy received:               51249279           375
    IPCP received:                                83227             2
    ACK trigger received:                      52733008           390
    TCP connected received                    218498529          1065
    Unknown message received:                         0             0
    Drop [LB dest decision fail]:                 29392             0
    Drop [invalid ifid]                               0             0
    Drop [Out of buffers]:                            0             0
    Dest decision transmitted:                248735645          1174
    TCP connect transmitted:                  212827881           828
    ACK trigger transmitted:                         12             0
    IPCP transmitted:                             83227             2
    NAT[static mapped]:                               0             0
    NAT[static real]:                                 0             0
    NAT[xlate alloc fail]:                            0             0
    NAT[xlate real hit]:                              0             0
    NAT[xlate mapped hit]:                            0             0
    NAT[invalid xlate]:                               0             0
    NAT[dump xlate]:                                  0             0
    NAT[xlate release failed]:                        0             0
    NAT Pool Alloc [fail]:                            0             0
    NAT Pool Alloc [addr]:                            0             0
    NAT Pool Alloc [addr/port]:                33689970            81
    NAT Pool Free [addr]:                             0             0
    NAT Pool Free [addr/port]:                 33689214            88
    NAT Pool Free [orphan IP]:                        0             0
    Reuse retrieve link update conn invalid           0             0
    Reuse retrieve link update conn not on r          0             0
    Reuse retrieve success but conn invalid:          0             0
    Drop [Next Hop queue full]:                       0             0
    Reuse retrieve miss:                         845627             3
    OCM Packet count (Hi & Lo):               976499360          4850
    Packet forward received:                    4343180            10
    NAF Error [no route or unresolved adjace          0             0
    NAF Error [nat resp fail]:                        0             0
    UDP Chaser received:                          10406             0
    (Context 1 Statistics)
    Drop [out of connections]:                        0             0
    Drop [out of proxies]:                            0             0
    Drop [out of ssl]:                                0             0
    Drop [mac lookup fail]:                           0             0
    Drop [route lookup fail]:                         0             0
    Drop [nat fail]                                   0             0
    Drop [ip sanity check fail]                       0             0
    Drop [acl deny]:                                  0             0
    Drop [redundant connection]:                      0             0
    Connection inserted:                         862670             3
    Packet message transmitted:                 6409302           230
    Reuse conns retrieved:                      6390611           238
    Drop [Reproxy fail]:                            171             0
    Drop [dest nat fail]:                         58286             2
    The last counter is increasing. What does it mean? Can this be the problem?
    I do not get 503 in the retcode map of the servers.
    Regards
    Mats

    Hi Mats,
    I find it very strange that the ACE is sending a 503 message back to the client because, in case of issues, it normally just resets the connection. With that in mind, we should also investigate the server itself. This is not trivial, so you should open a TAC case.
    Let me just explain the meaning of the "Drop [dest nat fail]" counter. It is incremented if, after a connection has been NATted, one of the servers tries to open a new connection against the NATted IP and port. This shouldn't happen unless you are using a protocol composed of several connections (for example, FTP).
    Regards
    Daniel

  • Statistics Currency Error while posting Sales Order (No: V1453)

    Hi Sappers,
    I am in the process of setting up a company in SAP ERP SD (IDES). While posting a sales order, after I input the sold-to party, PO number, delivering plant and Incoterms and press ENTER, the following error comes up:
    Statistics: The currency from in INR for date 25.11.2011 could not be determined.
    Kindly suggest a possible solution.
    Thanks
    Rahul Tikku

    Hi,
    Just go to OBD2, select the account group of your customer and double-click on it, then double-click on "Sales Data", go further to "Sales", and check whether the "Currency" field is suppressed. If so, change it to a required or optional entry, save, and update your customer master in XD02. Then try your process again.
    Regards
    DSR

  • How does Oracle decide whether to use an index or a full scan (statistics)

    Hi Guys,
    Let's say I have an index on a column.
    The table and index statistics have been gathered (without histograms).
    Let's say I run: select * from table where a=5;
    Oracle will perform a full scan.
    But from which statistics is it able to know that most of the column values are indeed 5? (Histograms are not used.)
    After analyzing, we get the below:
    Table Statistics :
    (NUM_ROWS)
    (BLOCKS)
    (EMPTY_BLOCKS)
    (AVG_SPACE)
    (CHAIN_COUNT)
    (AVG_ROW_LEN)
    Index Statistics :
    (BLEVEL)
    (LEAF_BLOCKS)
    (DISTINCT_KEYS)
    (AVG_LEAF_BLOCKS_PER_KEY)
    (AVG_DATA_BLOCKS_PER_KEY)
    (CLUSTERING_FACTOR)
    thanks
    Index Column (A)
    ======
    1
    1
    2
    2
    5
    5
    5
    5
    5
    5

    I had prepared some explanation and had not noticed that the topic had already been marked as answered.
    This sentence of mine is not completely true:
    A column "without histograms" means that the column has only one bucket. More correctly: even without histograms there are data in dba_tab_histograms which we can consider as one bucket for the whole column. In fact these data are retrieved from hist_head$, not from histgrm$ as usual buckets.
    Technically, there are no buckets at all without gathered histograms.
    Let's create a table with skewed data distribution.
    SQL> create table t as
      2  select least(rownum,3) as val, '*' as pad
      3    from dual
      4  connect by level <= 1000000;
    Table created
    SQL> create index idx on t(val);
    Index created
    SQL> select val, count(*)
      2    from t
      3   group by val;
           VAL   COUNT(*)
             1          1
             2          1
             3     999998
    So, we have a table with a very skewed data distribution.
    Let's gather statistics without histograms.
    SQL> exec dbms_stats.gather_table_stats( user, 'T', estimate_percent => 100, method_opt => 'for all columns size 1', cascade => true);
    PL/SQL procedure successfully completed
    SQL> select blocks, num_rows  from dba_tab_statistics
      2   where table_name = 'T';
        BLOCKS   NUM_ROWS
          3106    1000000
    SQL> select blevel, leaf_blocks, clustering_factor
      2    from dba_ind_statistics t
      3   where table_name = 'T'
      4     and index_name = 'IDX';
        BLEVEL LEAF_BLOCKS CLUSTERING_FACTOR
             2        4017              3107
    SQL> select column_name,
      2         num_distinct,
      3         density,
      4         num_nulls,
      5         low_value,
      6         high_value
      7    from dba_tab_col_statistics
      8   where table_name = 'T'
      9     and column_name = 'VAL';
    COLUMN_NAME  NUM_DISTINCT    DENSITY  NUM_NULLS      LOW_VALUE      HIGH_VALUE
    VAL                     3 0,33333333          0           C102            C104
    So, Oracle assumes that the values between 1 and 3 (raw C102 and C104) are distributed uniformly and that the density of the distribution is 0.33.
    Let's try to explain plan
    SQL> explain plan for
      2  select --+ no_cpu_costing
      3         *
      4    from t
      5   where val = 1
      6  ;
    Explained
    SQL> @plan
    | Id  | Operation         | Name | Rows  | Cost  |
    |   0 | SELECT STATEMENT  |      |   333K|   300 |
    |*  1 |  TABLE ACCESS FULL| T    |   333K|   300 |
    Predicate Information (identified by operation id):
       1 - filter("VAL"=1)
    Note
       - cpu costing is off (consider enabling it)
    Below is an excerpt from the 10053 trace:
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table:  T  Alias:  T
        #Rows: 1000000  #Blks:  3106  AvgRowLen:  5.00
    Index Stats::
      Index: IDX  Col#: 1
        LVLS: 2  #LB: 4017  #DK: 3  LB/K: 1339.00  DB/K: 1035.00  CLUF: 3107.00
    SINGLE TABLE ACCESS PATH
      BEGIN Single Table Cardinality Estimation
      Column (#1): VAL(NUMBER)
        AvgLen: 3.00 NDV: 3 Nulls: 0 Density: 0.33333 Min: 1 Max: 3
      Table:  T  Alias: T
        Card: Original: 1000000  Rounded: 333333  Computed: 333333.33  Non Adjusted: 333333.33
      END   Single Table Cardinality Estimation
      Access Path: TableScan
        Cost:  300.00  Resp: 300.00  Degree: 0
          Cost_io: 300.00  Cost_cpu: 0
          Resp_io: 300.00  Resp_cpu: 0
      Access Path: index (AllEqRange)
        Index: IDX
        resc_io: 2377.00  resc_cpu: 0
        ix_sel: 0.33333  ix_sel_with_filters: 0.33333
        Cost: 2377.00  Resp: 2377.00  Degree: 1
      Best:: AccessPath: TableScan
           Cost: 300.00  Degree: 1  Resp: 300.00  Card: 333333.33  Bytes: 0
    The cost of the FTS here is 300 and the cost of the Index Range Scan is 2377.
    I have disabled cpu costing, so selectivity does not affect the cost of FTS.
    The cost of the Index Range Scan is calculated as
    blevel + (leaf_blocks * selectivity + clustering_factor * selectivity) = 2 + (4017*0.33333 + 3107*0.33333) = 2377.
    Oracle considers that it has to read 2 root/branch blocks of the index, 1339 leaf blocks of the index and 1036 blocks of the table.
    Note that selectivity is the major component of the cost of the Index Range Scan.
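    Purely as an illustration (this only re-evaluates the classic I/O cost formula quoted above from the dictionary views already used in this thread; the optimizer itself does not run such a query), the same arithmetic can be reproduced in SQL:
    select i.blevel
           + ceil(i.leaf_blocks       * (1 / c.num_distinct))  -- leaf blocks to visit
           + ceil(i.clustering_factor * (1 / c.num_distinct))  -- table blocks to visit
             as estimated_range_scan_cost
      from dba_ind_statistics i, dba_tab_col_statistics c
     where i.table_name  = 'T'
       and i.index_name  = 'IDX'
       and c.table_name  = 'T'
       and c.column_name = 'VAL';
    With the statistics gathered above, this evaluates to 2 + 1339 + 1036 = 2377, the resc_io figure from the trace.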
    Let's try to gather histograms:
    SQL> exec dbms_stats.gather_table_stats( user, 'T', estimate_percent => 100, method_opt => 'for columns val size 3', cascade => true);
    PL/SQL procedure successfully completed
    If you look at dba_tab_histograms, you will see the following:
    SQL> select endpoint_value,
      2         endpoint_number
      3    from dba_tab_histograms
      4   where table_name = 'T'
      5     and column_name = 'VAL'
      6  ;
    ENDPOINT_VALUE ENDPOINT_NUMBER
                 1               1
                 2               2
                 3         1000000
    ENDPOINT_VALUE is the column value (as a number, for any data type) and ENDPOINT_NUMBER is the cumulative number of rows.
    The number of rows for any ENDPOINT_VALUE is the ENDPOINT_NUMBER for this ENDPOINT_VALUE minus the ENDPOINT_NUMBER for the previous ENDPOINT_VALUE; a small query illustrating this follows below.
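    As a small illustrative sketch (assuming the same table T and column VAL as above), the per-value row counts can be derived from the cumulative endpoint numbers with LAG:
    select endpoint_value,
           endpoint_number
             - lag(endpoint_number, 1, 0)
                 over (order by endpoint_number) as rows_for_value  -- delta of the cumulative counts
      from dba_tab_histograms
     where table_name  = 'T'
       and column_name = 'VAL'
     order by endpoint_value;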
    explain plan and 10053 trace of the same query:
    | Id  | Operation                   | Name | Rows  | Cost  |
    |   0 | SELECT STATEMENT            |      |     1 |     4 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T    |     1 |     4 |
    |*  2 |   INDEX RANGE SCAN          | IDX  |     1 |     3 |
    Predicate Information (identified by operation id):
       2 - access("VAL"=1)
    Note
       - cpu costing is off (consider enabling it)
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table:  T  Alias:  T
        #Rows: 1000000  #Blks:  3106  AvgRowLen:  5.00
    Index Stats::
      Index: IDX  Col#: 1
        LVLS: 2  #LB: 4017  #DK: 3  LB/K: 1339.00  DB/K: 1035.00  CLUF: 3107.00
    SINGLE TABLE ACCESS PATH
      BEGIN Single Table Cardinality Estimation
      Column (#1): VAL(NUMBER)
        AvgLen: 3.00 NDV: 3 Nulls: 0 Density: 5.0000e-07 Min: 1 Max: 3
        Histogram: Freq  #Bkts: 3  UncompBkts: 1000000  EndPtVals: 3
      Table:  T  Alias: T
        Card: Original: 1000000  Rounded: 1  Computed: 1.00  Non Adjusted: 1.00
      END   Single Table Cardinality Estimation
      Access Path: TableScan
        Cost:  300.00  Resp: 300.00  Degree: 0
          Cost_io: 300.00  Cost_cpu: 0
          Resp_io: 300.00  Resp_cpu: 0
      Access Path: index (AllEqRange)
        Index: IDX
        resc_io: 4.00  resc_cpu: 0
        ix_sel: 1.0000e-06  ix_sel_with_filters: 1.0000e-06
        Cost: 4.00  Resp: 4.00  Degree: 1
      Best:: AccessPath: IndexRange  Index: IDX
           Cost: 4.00  Degree: 1  Resp: 4.00  Card: 1.00  Bytes: 0
    Pay attention to the selectivity: ix_sel = 1.0000e-06.
    The cost of the FTS is still the same (300),
    but the cost of the Index Range Scan is now 4: 2 root/branch blocks + 1 leaf block + 1 table block.
    Thus the conclusion: histograms allow the optimizer to calculate selectivity more accurately. The aim is to get more efficient execution plans.
    Alexander Anokhin
    http://alexanderanokhin.wordpress.com/

  • Forms 6.0 - how to get form utilization statistics?

    I would like to know how I can retrieve statistics about form utilization during a given period without recompiling any form. I want to know, for instance, the 20 most-used forms in an Oracle Forms 6.0 application. Is that possible?
    thanks
    Luis Reis

    Frank
    thanks for your answer, but that is exactly the problem: I have 800 forms and we have no time to open them all and add this script (I am a CIO and not a tech guy, so I presume you are saying that we must write code in the form, no?).
    Luis Reis

  • Upload data from Excel to a Z-table with statistics

    Hi,
    I have a requirement to upload data from an Excel sheet to a Z-table.
    Here I need to provide the user with execution statistics such as:
    1. Number of records read from the Excel spreadsheet
    2. Number of records processed successfully
    3. Number of records with errors
    4. Name and location of the error log file (text-file format)
    5. Name and location of the file containing the error records (Excel spreadsheet format)
    I would appreciate it if any of you have code written for the same.

    See the example code below for uploading from an Excel file to SAP.
    REPORT ZLWMI151_UPLOAD no standard page heading
                           line-size 100 line-count 60.
    *tables : zbatch_cross_ref.
    data : begin of t_text occurs 0,
           werks(4) type c,
           cmatnr(15) type c,
           srlno(12) type n,
           matnr(7) type n,
           charg(10) type n,
           end of t_text.
    data: begin of t_zbatch occurs 0,
          werks like zbatch_cross_ref-werks,
          cmatnr like zbatch_cross_ref-cmatnr,
          srlno like zbatch_cross_ref-srlno,
          matnr like zbatch_cross_ref-matnr,
          charg like zbatch_cross_ref-charg,
          end of t_zbatch.
    data : g_repid like sy-repid,
           g_line like sy-index,
           g_line1 like sy-index,
           $v_start_col         type i value '1',
           $v_start_row         type i value '2',
           $v_end_col           type i value '256',
           $v_end_row           type i value '65536',
           gd_currentrow type i.
    data: itab like alsmex_tabline occurs 0 with header line.
    data : t_final like zbatch_cross_ref occurs 0 with header line.
    selection-screen : begin of block blk with frame title text.
    parameters : p_file like rlgrap-filename obligatory.
    selection-screen : end of block blk.
    initialization.
      g_repid = sy-repid.
    at selection-screen on value-request for p_file.
      CALL FUNCTION 'F4_FILENAME'
           EXPORTING
                PROGRAM_NAME = g_repid
           IMPORTING
                FILE_NAME    = p_file.
    start-of-selection.
    * Uploading the data into the internal table
      perform upload_data.
      perform modify_table.
    top-of-page.
    * Optional custom header output via the Z_HEADER function module:
    *  CALL FUNCTION 'Z_HEADER'
    *    EXPORTING
    *      FLEX_TEXT1 =
    *      FLEX_TEXT2 =
    *      FLEX_TEXT3 =
    *&      Form  upload_data
    *       text
    FORM upload_data.
      CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
           EXPORTING
                FILENAME                = p_file
                I_BEGIN_COL             = $v_start_col
                I_BEGIN_ROW             = $v_start_row
                I_END_COL               = $v_end_col
                I_END_ROW               = $v_end_row
           TABLES
                INTERN                  = itab
           EXCEPTIONS
                INCONSISTENT_PARAMETERS = 1
                UPLOAD_OLE              = 2
                OTHERS                  = 3.
      IF SY-SUBRC <> 0.
        write:/10 'File '.
      ENDIF.
      if sy-subrc eq 0.
        read table itab index 1.
        gd_currentrow = itab-row.
        loop at itab.
          if itab-row ne gd_currentrow.
            append t_text.
            clear t_text.
            gd_currentrow = itab-row.
          endif.
          case itab-col.
            when '0001'.
              t_text-werks = itab-value.
            when '0002'.
              t_text-cmatnr = itab-value.
            when '0003'.
              t_text-srlno = itab-value.
            when '0004'.
              t_text-matnr = itab-value.
            when '0005'.
              t_text-charg = itab-value.
          endcase.
        endloop.
      endif.
      append t_text.
    ENDFORM.                    " upload_data
    *&      Form  modify_table
    *       Modify the table ZBATCH_CROSS_REF
    FORM modify_table.
      loop at t_text.
        t_final-werks = t_text-werks.
        t_final-cmatnr = t_text-cmatnr.
        t_final-srlno = t_text-srlno.
        t_final-matnr = t_text-matnr.
        t_final-charg = t_text-charg.
        t_final-erdat = sy-datum.
        t_final-erzet = sy-uzeit.
        t_final-ernam = sy-uname.
        t_final-rstat = 'U'.
        append t_final.
        clear t_final.
      endloop.
      delete t_final where werks = ''.
      describe table t_final lines g_line.
      sort t_final by werks cmatnr srlno.
    * Deleting the duplicate records
      perform select_data.
      describe table t_final lines g_line1.
      modify zbatch_cross_ref from table t_final.
      if sy-subrc ne 0.
        write:/ 'Updation failed'.
      else.
        Skip 1.
        Write:/12 'Updation has been Completed Sucessfully'.
        skip 1.
        Write:/12 'Records in file ',42 g_line .
        write:/12 'Updated records in Table',42 g_line1.
      endif.
      delete from zbatch_cross_ref where werks = ''.
    ENDFORM.                    " modify_table
    *&      Form  select_data
    *       Deleting the duplicate records
    FORM select_data.
      select werks
             cmatnr
             srlno from zbatch_cross_ref
             into table t_zbatch for all entries in t_final
             where werks = t_final-werks
             and  cmatnr = t_final-cmatnr
             and srlno = t_final-srlno.
      sort t_zbatch by werks cmatnr srlno.
      loop at t_zbatch.
        read table t_final with key werks = t_zbatch-werks
                                    cmatnr = t_zbatch-cmatnr
                                    srlno = t_zbatch-srlno.
        if sy-subrc eq 0.
          delete table t_final .
        endif.
        clear: t_zbatch,
               t_final.
      endloop.
    ENDFORM.                    " select_data
    Reward Points if it is helpful
    Thanks
    Seshu
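    The report above covers points 1 and 2 of the requirement (records read and records updated). Purely as a minimal, hypothetical sketch that reuses the internal tables from that example (t_text for the rows read from Excel, t_final for the rows actually written), the statistics output could be extended roughly like this; the error log and error file from points 4 and 5 are not shown:
    * Sketch only: counters derived from the internal tables of the example above
    data: lv_read  type i,
          lv_ok    type i,
          lv_error type i.
    describe table t_text  lines lv_read.   " records read from the spreadsheet
    describe table t_final lines lv_ok.     " records actually written to the Z-table
    lv_error = lv_read - lv_ok.             " records dropped as blank or duplicate
    write: / 'Records read from Excel     :', lv_read,
           / 'Records updated successfully:', lv_ok,
           / 'Records in error            :', lv_error.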

  • UCCE 8.5.3/8.5.4 call volume statistics not matching in interval tables

    hello,
    We have just migrated a call center to UCCE 8.5.3 that runs roggers and ICM call flow scripts that contain very basic flows. In each flow there is basically a one-to-one ratio of call type elements to select skill group elements.
    Generally (as our scripts are written) you would expect almost the same call volume in the call type interval table and the skill group interval table, with allowable differences because of RONAs, etc., but basically somewhat close.
    In general this does work, but after a reboot of the roggers (A and B sides) (logger down first, then router, and the reverse order to start), the skill group interval data lags greatly behind the call type interval data, showing maybe 10% of the call type interval volume. We have learned that completely shutting down both the A and B sides and then bringing everything back up in order from scratch seems to alleviate the problem (until the next restart). We cannot just leave the servers alone, though, due to company security patching policies.
    Cisco TAC had recommended patching to UCCE 8.5.4, which we have recently done, but the problem persists.
    I was wondering if this data discrepancy has ever been seen by anyone else, or if possibly some system config issue might be self-defeating us? Rebooting the roggers leaves the phone system itself working just fine for the call centers, but the recording of statistics to the skill group interval table is greatly impacted, with almost no recorded volume in relation to the call type interval (and, for comparison, the call type skill group interval table as well).
    We would generally not worry about it, but the workforce management vendor that takes its feed from UCCE only uses the skill group interval, which is basically reporting almost no volume in the mentioned scenarios.
    If anyone can provide any information it would be most appreciated.
    Thanks.
    Greg

    Thank you for the response. The time source check is a great idea. We ran into problems before when the internal web service servers did not match the PGs (CTI OS service) and the CTI-provided stats did not match.
    We will continue to work through TAC, but I was just wondering if anyone else had seen this (as 8.5.4 did not fix it), and if maybe it could have been something self-defeating in our system configuration or scripting. We did not immediately know this was happening until our third-party workforce management vendor made us aware.
    Thanks,
    Greg

  • Test cases required for BW Statistics to test in QA and DEV

    Hi All,
    I am currently working on a support project. My client completed installing BW Statistics in DEV and transported it to QA way back in 2006. Currently, before moving the BI Statistics content to PRD, we have to test it in DEV and QA.
    How do we prepare sample test cases for testing it in DEV and QA? Please suggest.

    Hi,
    this forum is for the SAP BusinessObjects BI Solution architecture. I would suggest you post your question to the BW forum.
    ingo

  • Contract net value for Header Statistics is not correct

    There is an issue with a value contract. The net value in the header statistics is not showing correctly for some contracts, especially when we delete PO line items or reverse all entries (GR and IR) for a PO line item.
    The contract has one line with account assignment category 'U'. The target value is 300,000.00 and the total value released to date is 160,000. The net value in the header statistics should be 140,000, but it is showing 600,000, which is over (double) the target value, and the user cannot release any further POs referencing this contract.
    Earlier I had defined a net price of 300,000 for the line item; I changed the net price to zero and executed report RM06ENP0, but it didn't work.
    Please share your experience and thoughts.
    Thanks,
    Shah.

    Hi Jurgen,
    There are a few purchase orders with multiple line items, and each purchase order line item references the same line of the same contract.
    Only one purchase order has two deleted lines against this contract.
    These deleted lines' net price has been changed to zero and there is no PO history.
    The contract's released order value is correct, as there is only one line, but the net price is wrong, and the user gets the error "target value is exceeded by $nnn" when trying to create a purchase order.
    Thanks,
    Shah.

  • Report using BI Statistics

    People, I have activated BI Statistics in SAP BW.
    The MultiCube 0BWTC_C10 is active.
    Will I be able to generate a report related to data loads (loading time, load status, etc.) using this MultiProvider (0BWTC_C10)?

    Hi Akash,
    The standard queries below will provide you with the required details:
    InfoProvider   BEx Query Technical Name     BEx Query Description
    0TCT_MC11      0TCT_MC11_Q0140              InfoCube Status
    0TCT_MC11      0TCT_MC11_Q0141              InfoCube Correctness
    0TCT_MC11      0TCT_MC11_Q0240              InfoCube Status: Analysis
    0TCT_MC22      ZCS_CUBE_DATADETAIL_STAT     ZCS_CUBE_DATADETAIL_STAT
    0TCT_MC21      ZTCT_MC21_Q_FB_01            Dashboard - process chain historical loading time
    0TCT_MC22      ZCS_STAT_SPEND_LOADS         Statistics Spend Overview Loads
    0TCT_MC22      ZCS_STAT_SPEND_LOADS_PERF    Statistics Spend Overview Loads Performance
    0TCT_MC22      ZTCT_MC22_Q_FB_02            Dashboard - DSO loading time top 10
    0TCT_MC22      ZTCT_MC22_Q_FB_04            Dashboard - IC loading time top 10
    0TCT_MC22      ZTCT_MC22_Q_FB_06            Dashboard - IO loading time top 10
    -Arun.M.D

  • Index Statistics Update - Problem

    We had a performance problem yesterday with FI report FAGLL03: it timed out in online execution, and in background mode it took 5000+ seconds to execute. The result was no more than 100 records.
    With some investigation, the problem drilled down to index usage on table FAGLFLEXA. We then updated the index statistics of the table from DB02. After that, the report worked fine with an execution time of 10-15 seconds for the same set of input.
    However, in the morning the user was complaining again about a performance problem with the same report, FAGLL03. We updated the index statistics again and, as was the case yesterday, it fixed the problem.
    Later today I checked SQL Server: the job SAP CCMS_xxx_xxx_Update_Tabstats, which I guess updates index statistics daily at 04:00, is working fine. I can't see any error log there. The daily job to check database consistency is also not reporting anything.
    Any idea what could be going wrong?
    Basis consultants are looking into the problem; however, I am posting this case here in case any of you had the same problem and fixed it.
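    For reference only, a minimal sketch of what such a manual statistics refresh amounts to on the database side, assuming SQL Server as in this system (the WITH FULLSCAN option is an assumption, not something stated in the post):
    -- refresh all statistics on the table used by FAGLL03
    UPDATE STATISTICS FAGLFLEXA WITH FULLSCAN;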
    Thanks,
    Pawan.
    Edited by: Pawan Kesari on Dec 11, 2009 4:05 PM

    Hi,
    It appears the stats are dropped every time the job runs at 04:00.
    Have a look at the table DBSTATC in transaction DB21 to see whether it is set up to drop the stats.
    Mark

  • BW Statistics 0BWTC_C10 in BW 3.5

    We have the BW Statistics MultiCube (0BWTC_C10), which currently consists of six cubes:
    1. BW Statistics – OLAP                                              0BWTC_C02
    2. BW Statistics - OLAP, Detail Navigation                  0BWTC_C03
    3. BW Statistics - Aggregates                                       0BWTC_C04
    4. BW Statistics - WHM                        0BWTC_C05
    5. BW Statistics: Condensing InfoCubes           0BWTC_C09
    6. BW Statistics: Deleting Data from InfoCubes      0BWTC_C11
    We have data in all of them except 0BWTC_C02.
    Now I want to load data into it and develop some queries on the MultiProvider (0BWTC_C10).
    Can anyone help me with how to load data into 0BWTC_C02?
    Any help much appreciated.
    Kind Regards,
    BH

    Hi Sesh,
    1)
    0BWTC_C02 is replaced by 3 cubes: 0TCT_MC01, 0TCT_MC02 and 0TCT_MC03
    (Here each MultiProvider is based on 2 cubes: one basic cube and one virtual cube.
    For example, 0TCT_MC01 has basic cube 0TCT_C01 and virtual cube 0TCT_VC01; the naming convention is the same for the rest, 0TCT_C* and 0TCT_VC*.)
    Hope the above is clear
    0BWTC_C03 referred to the RSDDSTAT table before, but this table has become obsolete in BI 7.0. Corrections have been made to the extractor in BI 7.0 SAP_BW SP 9 so that the extractor refers to RSDDSTAT_OLAP. So you can use this cube the way you were using it in BW 3.x if you are on SP 9 or above.
    0BWTC_C04 - no migration is required, technical content 3.x is still valid.
    0BWTC_C05 - is replaced by 0TCT_MC23 (0TCT_C23 and 0TCT_VC23)
    0BWTC_C09 - no migration is required, technical content 3.x is still valid.
    0BWTC_C11 - no migration is required, technical content 3.x is still valid.
    2) There is a virtual provider for each of the replaced cubes, but the old cubes for which no migration is required don't have virtual cubes.
    3) The new MultiProviders already have virtual cubes.
    Hope this helps.
    Best Regards,
    Rajani

  • How to remove the ALV Statistics option from accounts users' user IDs

    Dear Sir/Madam,
    In our accounts department, one extra page comes out each time. When the ALV Statistics option is unchecked, it functions normally, that is, a single page comes out.
    Is there any way to make this option permanently unchecked? If possible, please advise, and also whether there would be any unwanted implications.
    Please advise.
    Thanks and Regards,
    Pranab

    Option 1 - Use SAP report RSPRIPARADMIN; you can set this for all users.
    Option 2 - Set the user parameter SP01_FROM to 0000000002; this means the print job will start from page 2.
    Option 3 - List -> Print -> click the Properties button in the print window -> expand Cover Sheets -> double-click on ALV Statistics, uncheck the ALV Statistics check box, then click the Settings button and click the Copy Settings button.
    Edited by: Savo on Jul 1, 2011 4:06 PM

  • SAPinst error during phase Create DB statistics (sapcheck)

    Hello all,
    I am currently installing an SAP ERP 6.0 EhP4/NW 7.01 Java system and encountered this error during the phase Create Database Statistics.
    The error says:
    BR0801I BRCONNECT 7.00 (40)
    BR0805I Start of BRCONNECT processing: cefukrjh.sta 2011-05-03 12.29.57
    BR0252E Function fopen() failed for 'D:\oracle\EA1\sapcheck\cefukrjh.sta' at location main-11
    BR0253E errno 2: No such file or directory
    BR0121E Processing of log file D:\oracle\EA1\sapcheck\cefukrjh.sta failed
    BR0806I End of BRCONNECT processing: cefukrjh.sta 2011-05-03 12.29.58
    BR0280I BRCONNECT time stamp: 2011-05-03 12.29.58
    BR0804I BRCONNECT terminated with errors
    I've checked this directory and did not find the sapcheck folder. That is because my sapcheck is located on a different drive (E:). Why is SAPinst looking at the wrong directory, and how can I point SAPinst to drive E instead?
    I was thinking maybe I could copy the sapcheck folder from drive E to drive D and continue with the installation. What would be the impact if I did this?
    By the way, my OS is Windows 2003.
    Please advise.
    Thank you very much!
    regards,
    cha cha

    BR0252E Function fopen() failed for 'D:\oracle\EA1\sapcheck\cefukrjh.sta' at location main-11
    BR0253E errno 2: No such file or directory
    BR0121E Processing of log file D:\oracle\EA1\sapcheck\cefukrjh.sta failed
    It seems that something went wrong while defining the paths for SAP and Oracle; that is the reason it is looking somewhere else. If you copy from the E drive, the installation will continue, but it will keep searching the D drive every time. As you are at the initial stage of installing a new system, I would suggest stopping the installation master, starting it from scratch, and defining the file system directories once again as you want.
    Regards,
    Subhash
