Metrics analysis

Hi All,
System: Oracle 10.2.0.4 on Solaris 10. The application is JD Edwards (many ad hoc queries, heavy batch/ETL jobs at night, plus normal OLTP).
Using RAID 1+0
In my AWR reports I have noticed the following for some time:
1> High db file parallel read for the past few months
2> High log file switch completion
3> Very low buffer cache hit ratio, 38-55%
Can anyone please explain this? What needs to be done?
AWR report and findings below; I have posted only the high values.
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:  100.00
            Buffer  Hit   %:   55.66    In-memory Sort %:  100.00
            Library Hit   %:   98.34        Soft Parse %:   99.25
         Execute to Parse %:   57.56         Latch Hit %:   99.97
Parse CPU to Parse Elapsd %:   90.02     % Non-Parse CPU:   98.34
Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
CPU time                                          4,472          67.5
db file parallel read                 8,003         166     21    2.5   User I/O
                                                                   Avg
                                             %Time  Total Wait    wait     Waits
Event                                 Waits  -outs    Time (s)    (ms)      /txn
db file parallel read                 8,003     .0         166      21       0.0
SQL*Net more data to client          834,185     .0          52       0       4.0
SQL*Net more data from clien        186,379     .0          16       0       0.9
log file sequential read                423     .0           4       8       0.0
cursor: pin S wait on X                  50  100.0           0      10       0.0
SGA: allocation forcing comp             39   79.5           0      10       0.0
log file switch completion                9     .0           0      35       0.0
latch free                               11     .0           0       8       0.0
latch: shared pool                       11     .0           0       7       0.0
SQL*Net message from client       2,691,087     .0   1,471,156     547      12.8
Streams AQ: qmn slave idle w            385     .0       7,065   18351       0.0
virtual circuit status                  121  100.0       3,542   29273       0.0
Streams AQ: qmn coordinator             264   50.4       3,532   13380       0.0
Streams AQ: waiting for mess            721  100.0       3,520    4882       0.0
wait for unread message on b          3,601  100.0       3,516     976       0.0
jobq slave wait                       1,192   98.9       3,473    2914       0.0
Streams AQ: waiting for time              4  100.0       1,058  264428       0.0
class slave wait                          2     .0           0       0       0.0
Background Wait Events             DB/Inst: HQJDDB/hqjddb  Snaps: 66170-66171
-> ordered by wait time desc, waits desc (idle events last)
                                                                   Avg
                                             %Time  Total Wait    wait     Waits
Event                                 Waits  -outs    Time (s)    (ms)      /txn
os thread startup                        48     .0           4      83       0.0
log file sequential read                423     .0           4       8       0.0
control file parallel write           1,808     .0           3       2       0.0
latch: shared pool                        1     .0           0      28       0.0
log file single write                     6     .0           0       4       0.0
Streams AQ: qmn slave idle w             385     .0       7,065   18351       0.0
Streams AQ: qmn coordinator             264   50.4       3,532   13380       0.0
smon timer                              317     .0       3,507   11064       0.0
pmon timer                            1,689  100.0       3,499    2071       0.0
Streams AQ: waiting for time              4  100.0       1,058  264428       0.0
init.ora Parameters              
                                                                End value
Parameter Name                Begin value                       (if different)
_addm_auto_enable             TRUE
_optimizer_mjc_enabled        FALSE
_spin_count                   6000
db_32k_cache_size             4294967296
db_block_size                 8192
db_keep_cache_size            1073741824
db_recovery_file_dest_size    2147483648
db_writer_processes           4
filesystemio_options          ASYNCH
hash_area_size                20971520
job_queue_processes           10
large_pool_size               83886080
log_buffer                    14242816
max_dump_file_size            UNLIMITED
open_cursors                  300
parallel_adaptive_multi_user  FALSE
parallel_execution_message_si 8192
parallel_max_servers          640
parallel_threads_per_cpu      2
pga_aggregate_target          3221225472
processes                     800
recyclebin                    ON
remote_login_passwordfile     EXCLUSIVE
session_cached_cursors        300
sga_max_size                  23622320128
sga_target                    23622320128
undo_management               AUTO
My log switches over the past 2 weeks:
Date      Day 00:00 1am 2am 3am 4am 5am 6am 7am 8am 9am 10am 11am 12:00 1pm 2pm 3pm 4pm 5pm 6pm 7pm 8pm 9pm 10pm 11pm
02-MAY-12 Wed     1   2  11   6   4   1   1   1   1   2    3    7     1
01-MAY-12 Tue     2   4  13   6   5   1   1           1               3               1   1   2   1       2         1
30-APR-12 Mon     1   2  10   8   3   1   1       2   3    2    6     2   1   2   2   2   4   2   2   1   2    1
29-APR-12 Sun     2   2  12   6   5   1   1       2   1               3   2   2   2       1   2   1   1   1    1
28-APR-12 Sat     1   3  15   4   3   8   5   2   1   2    2    4     3   1   2   1   2   1   3   2       4    1
27-APR-12 Fri     1   4  14   4   3   1   1   2   1   3    6    8     4   1   3   2   2   2   2   1       2    2
26-APR-12 Thu     2   3  11   5   5       1   1      16    6    4     2   1   1   5   5   5   2   1   1   2    1
25-APR-12 Wed     2   2  14   4   5       1   1       3    2    2     2   1   1   1   1   2   2   1   1   2    1
24-APR-12 Tue     2   5  12   5   4       1   1       2    2    2     3   1   4   2   2   2   2   1   2   2    1
23-APR-12 Mon     1   2  10   4   2   1   1       1   2    2    4     6   3   2   2   2   2   2   2   3   2    1
22-APR-12 Sun     2   2  12   4   2   1   1   1  39             1     1   2   2   1       2   1   1   1   1    1
21-APR-12 Sat     2   2  13   4   3   7   5   2   1   1    3    4     3   1       2   1   2   2   2   1   2    1
20-APR-12 Fri     1   3  11   3   3   2   1   1       2    3    1     2   1   4   2   3   3   2   1   2   2    1
19-APR-12 Thu     2   4  14   3   4       1   1       1    3    1     2       2   2   1   8   3   1       2    1
18-APR-12 Wed     2   2  11   3   4       1   1   1   2    3    6     8   1   2   2   3   2   2   1   1   2    1
My redo log file sizes:
SQL> select group#, members, bytes/1024/1024 "Size (MB)" from v$log;
    GROUP#    MEMBERS  Size (MB)
         1          1        150
         2          1        150
         3          1        150
         4          1        150

Iordan Iotzov wrote:
1> High db file parallel read for past few months
What are the criteria for “high”? This event can show up during recovery and buffer pre-fetching. At 2.5% of your total call time, why do you think this event presents a systemic issue?
In general the response time has decreased. Is there any way to reduce this wait? I did not find much information on it on the net.
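One way to narrow this down (a sketch; DBA_HIST views require a Diagnostics Pack license) is to ask which SQL statements accumulate "db file parallel read" samples. On 10.2 this event is largely buffer prefetching for index-driven multi-block reads, so the top SQL will usually be the ad hoc or batch queries:

```sql
-- Top SQL by "db file parallel read" ASH samples over the last day
-- (requires an Oracle Diagnostics Pack license to query DBA_HIST views)
SELECT sql_id,
       COUNT(*) AS ash_samples
FROM   dba_hist_active_sess_history
WHERE  event = 'db file parallel read'
AND    sample_time > SYSDATE - 1
GROUP  BY sql_id
ORDER  BY ash_samples DESC;
```

Once you have the top sql_id values, you can pull their plans from AWR and decide whether the reads are worth tuning at all.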
2> High log file switch completion
The report shows 9 waits, each lasting around 35 ms, so I do not see this as a serious performance issue at this time. Many would find 38 log switches per hour too high. You could reduce that number by increasing the size of the redo log files. From the report I see, this would likely not help your performance now, but it could be a good preventive measure.
I was thinking along the same lines. But how do I determine the optimal redo log size?
SQL> select optimal_logfile_size from v$instance_recovery;
OPTIMAL_LOGFILE_SIZE
-- NO DATA.
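OPTIMAL_LOGFILE_SIZE is only populated when FAST_START_MTTR_TARGET is set, which is why the query returns nothing. A common rule of thumb is to size the logs for a switch every 15-20 minutes at peak; with 10-15 switches per hour on 150 MB logs during the 2am batch window, something in the 512 MB to 1 GB range would be a reasonable starting point. A sketch of the change (512 MB is illustrative; file placement assumes OMF, otherwise specify file names; you may also want a second member per group, since v$log shows MEMBERS = 1):

```sql
-- Add larger groups (size illustrative; assumes OMF for file placement)
ALTER DATABASE ADD LOGFILE GROUP 5 SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 6 SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 7 SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 8 SIZE 512M;

-- Cycle out of the old groups, then drop each once it shows INACTIVE in v$log
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;   -- repeat for groups 2, 3 and 4
```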
3> Very low Buffer Hit Ratio from 38-55 %
Buffer hit ratio is pretty much meaningless. This blog entry by Jonathan Lewis sums it up nicely: http://jonathanlewis.wordpress.com/2007/09/05/hit-ratios-2/
I concur. But I was looking into potential problems, since the BCHR decreased from an earlier value of 65-75%.
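Rather than tracking the BCHR, one option (a sketch, assuming db_cache_advice is left at its default of ON) is to read V$DB_CACHE_ADVICE, which estimates how physical reads would change at different cache sizes:

```sql
-- Predicted physical reads at candidate sizes of the 8K DEFAULT buffer pool
SELECT size_for_estimate          AS cache_mb,
       estd_physical_read_factor  AS read_factor,
       estd_physical_reads        AS estd_phys_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
AND    block_size = 8192
ORDER  BY size_for_estimate;
```

A read factor near 1.0 at larger sizes suggests growing the cache would not help, which is common for ad hoc/ETL workloads that scan more data than any reasonable cache can hold.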

Similar Messages

  • Sap BI Retail Implementation

    Hi All,
    I have experience with implementation projects, but not from scratch.
    Now I have a chance to work on an implementation from scratch. We have gathered requirements.
    I don't have functional specs.
    How do we know which DataSources and cubes we need to use, and how do we know whether we have to load data into a cube or an ODS?
    Please share any information that would be helpful.
    Thank you,
    Subbu.

    Hi,
    Depending on the client requirement, first we check whether the standard DataSources and cubes or ODS objects are useful for the requirement.
    For example:
    If the client's business is mostly sales, we go for SD Business Content extractors and DataSources.
    For Retail: Analytics for SAP for Retail.
    In an area challenged by slim profit margins, decreasing customer loyalty and high costs associated with investing in exclusive real estate, retailers are continuously looking for ways to improve their businesses by measuring performance against goals. With Analytics for SAP for Retail, retailers can begin to understand the rapidly changing market needs; knowledge they can use to increase overall profitability. Analytics for SAP for Retail  supports decision making in:
    Price and Promotion optimization
    Demand Forecasting
    Product Affinity Metrics analysis
    Loss Prevention
    Merchandise and Assortment Planning
    Point of Sale analysis
    Go through the links for retail
    http://help.sap.com/saphelp_nw04/helpdata/EN/8b/39d6386e24f90ae10000000a114084/frameset.htm
    https://wiki.sdn.sap.com/wiki/display/Retail/Analytics%20for%20SAP%20for%20Retail
    Regards,
    Marasa.

  • Modularity index computation

    Hello,
    The company I work for is now getting into the VI metrics analysis of our programs and I am struggling with the Modularity Index. According to the Style book, the value is computed by
    # of VIs / total node count * 100.
    In the image, I get a value with the calculator that varies from what the Analyzer reports. Can someone tell me how the Analyzer computes that value?
    Tay
    Attachments:
    Mod Index issue.png ‏342 KB

    Here it is...
    The VI Analyzer Results folder contains the config file and not results. Not my call on the name...
    Message Edited by slipstick on 06-08-2010 09:14 AM
    Attachments:
    Final Inspection Project.zip ‏2829 KB

  • Reliability analysis metric calculation executable stopped working and was closed

    Just browsing when receiving this error: "Reliability analysis metric calculation executable stopped working and was closed".
    I have to completely turn off the computer to resolve it.


  • Re: Metrics error-Please help me to analyse the error

    Metric=Error Rate (%)
    Metric Value=9.09
    Timestamp=Aug 18, 2010 1:48:35 AM EDT
    Severity=Critical
    Message=The percentage of requests that resulted in errors is 9.09%
    We found errors in the web logs. Does anyone have any idea what this is about?
    orclas/esbprd01web01/Apache/Apache/htdocs/g6n4qvwp.htm
    [Tue Aug 17 22:57:11 2010] [error] [client 10.20.130.42] [ecid: 1282100231:10.95.85.44:5814:0:1980,0] File does not exist: /app/oracle/prod/Apache/Apache/htdocs/g6n4qvwp.idc
    [Tue Aug 17 22:57:12 2010] [error] [client 10.20.130.42] [ecid: 1282100232:10.95.85.44:5814:0:1981,0] File does not exist: /app/oracle/prod/Apache/Apache/htdocs/g6n4qvwp.idc
    [Tue Aug 17 22:57:12 2010] [error] [client 10.20.130.42] [ecid: 1282100232:10.95.85.44:5814:0:1982,0] File does not exist: /app/orclas/prod/Apache/Apache/htdocs/g6n4qvwp.x
    [Tue Aug 17 22:57:12 2010] [error] [client 10.20.130.42] [ecid: 1282100232:10.95.85.44:5814:0:1983,0] File does not exist: /app/orclas/prod/Apache/Apache/htdocs/g6n4qvwp.x
    [Tue Aug 17 22:57:12 2010] [error] [client 10.20.130.42] [ecid: 1282100232:10.95.85.44:5814:0:1984,0] File does not exist: /app/orclas/prod/Apache/Apache/htdocs/<script>document.cookie="testobxt=6838;"</script>
    [Tue Aug 17 22:57:12 2010] [error] [client 10.20.130.42] [ecid: 1282100232:10.95.85.44:5814:0:1985,0] mod_wchandshake: incorrect uri: <script>document.cookie="testobxt=6838;"</script> passed in.
    [Tue Aug 17 22:57:12 2010] [error] [client 10.20.130.42] [ecid: 1282100232:10.95.85.44:5814:0:1985,0] Invalid URI in request GET <script>document.cookie=%22testobxt=6838;%22</script> HTTP/1.1
    [Tue Aug 17 22:57:12 2010] [error] [client 10.20.130.42] [ecid: 1282100232:10.95.85.44:26886:0:10576,0] File does not exist: /app/orclas/esbprd01web01/Apache/Apache/htdocs/<meta http-equiv=Set-Cookie content="testobxt=6838">
    [Tue Aug 17 22:57:12 2010] [error] [client 10.20.130.42] [ecid: 1282100232:10.95.85.44:26886:0:10577,0] mod_wchandshake: incorrect uri: <meta http-equiv=Set-Cookie content="testobxt=6838"> passed in.
    [Tue Aug 17 22:57:12 2010] [error] [client 10.20.130.42] [ecid: 1282100232:10.95.85.44:26886:0:10577,0] Invalid URI in request GET <meta%20http-equiv=Set-Cookie%20content=%22testobxt=6838%22> HTTP/1.1

    You need to post this question in HTTP Server/Apache Forum

  • Workload Analysis and Metric charts

    Hi,
    I can see Memory Demand, Usage and Capacity from the workload container. How can I see these metrics under All Metrics?
    Does Demand mean what the VM needs?
    Is Usage what the VM gets from vCenter?
    Is Usage the active memory?
    Many thanks
    vSohill

    useramit,
    You are in the consumer end products forum.  You will also want to ask your question over at the Enterprise Business Community.
    Click the plus sign (+) next to Discussion Boards to drop down all the options for servers, networking and any other professionally related problems.
    http://h30499.www3.hp.com/

  • Trying to consolidate multiple custom date metrics in a simple summary view

    I'm working with leads and campaigns and trying to provide analysis of the inflow, processing and output of the leads associated with a campaign. (There is some out-of-the-box analysis that does this, but it's not what I'm looking for.) The intent is to measure key process steps and their associated dates, and to bucket these events into the fiscal month in which they occur: for instance, Lead.Create Date, Lead.Accepted Date, Lead.Converted Date and Opportunity.Close Date.
    (I use CASE statements to identify the fiscal month in which the date occurred when it's not directly available for use.)
    So, the base data display is something like this:
    Lead ID     | Create Month | Accepted Month | Converted Month | Oppty Close Month
    12345 | 01 | 02 | 03 | 03
    12346 | 01 | 01 | 01 | 02
    12347 | 02 | 02 | |     
    With the objective being to create a display along the lines of this:
    Lead Metrics | M01 | M02 | M03 |     ...
    Created     | 2 | 1 | 0 |     
    Accepted | 1 | 2 | 0 |
    Converted | 1 | 0 | 1 |
    Oppty Closed | 0 | 0 | 1 |
    I can create the base data, but am struggling to present it in a clean single table format as outlined above.
    Any suggestion on how to do this or perhaps an alternative approach?
    Thanks.

    Great idea - thanks Max. I provided some details above, but to summarize with Max's suggestion included, here's what I've done (I've used calendar month rather than fiscal to keep it simple):
    Lead ID = Lead."Lead ID"
    Create Month = Month(Lead.Created)
    Accepted Month = Month(Lead.DATE_28)
    Converted Month = Month(Lead.DATE_29)
    Closed/Won Month = Month(Opportunity."Close Date")
    Metric 1 = 'Created'
    Metric 2 = 'Accepted'
    Metric 3 = 'Converted'
    Metric 4 = 'Closed/Won'
    # Created = Metrics."# of Leads"
    # Accepted = FILTER(Metrics."# of Leads" USING (Lead.DATE_28 IS NOT NULL))
    # Converted = FILTER(Metrics."# of Leads" USING (Lead.DATE_29 IS NOT NULL))
    # Closed/Won = FILTER(Metrics."# of Leads" USING (Opportunity."Current Sales Stage" = 'Closed/Won'))
    Then to display:
    Pivot table 1:
    Row = Metric 1
    Column = Create Month (sort)
    Measure = # Created
    Pivot table 2:
    Row = Metric 2
    Column = Accepted Month (sort)
    Measure = # Accepted
    Pivot table 3:
    Row = Metric 3
    Column = Converted Month (sort)
    Measure = # Converted
    Pivot table 4:
    Row = Metric 4
    Column = Closed/Won Month (sort)
    Measure = # Closed/Won
    The only thing I don't like is that I have a null month at the end of pivots 2-4 with a zero value. I added the filter on the # metrics to try to suppress the zero, but it didn't work. If you have any suggestions for that, please let me know.
    Thanks.

  • EMOD HELP- analytics on campaign recipient metrics

    Hi,
    I have sent a Campaign out to 27k customers on the 19th January (first time EMOD user).
    I need to create reports and analyses based on recipient metrics. So far, I have loaded a report using the Campaign Response History in CRM Reports and downloaded this into excel. What I am trying to find out is how many out of the 27k have received the email (delivered) and how many have bounced back.
    1. Firstly, I'd like to understand what exactly # of Recipients, # of Hard Bounces, # of Open Responses, # of Responders and # of Responses mean. So far I have figured out that soft + hard + open = # responses. Not sure if that is correct?
    2. Some lines show blank in the count of # recipients, #responses etc... however there is an email address in the appropriate column. Has the email been sent or not? Has it been delivered? How can we make sure?
    3. Then, looking at the 'Email' columns in my excel spreadsheet (this is who we sent the campaign to), it looks like some email addresses are missing (blank) however they are in CRM? Has the campaign been sent or not?
    4. I'd also like to understand some individual lines reading as:
    # recipient: 44
    # soft bounces: 0
    # hard bounces 15
    # responders 28
    # responses 35
    # opened 20
    email address: [email protected]
    Is all of this for just 1 single email? Does this mean that the email was sent multiple times to just one single address? Why do the # recipients show 44?
    5. Why do some lines show responses metrics if the email is blank?
    6. On the overall campaign, how can I accurately measure the number of emails that have been sent and delivered?
    Any answer to these questions would be great - sorry I know there are loads of questions.
    Kind regards
    Carine

    Carine, here is my response:
    1) See below for definitions.
    2) All email recipients who have an email address and who do not have the Never Email flag checked are sent an email. Whether they receive it or not is not always known. See below for an explanation of why we don't always know if an email was received.
    3) I'd have to see the report to understand what you are describing.
    4) # of Recipients - Count of Campaign Recipients
    # of Responses - Count of All Campaign Responses (Opt in to List, Opt out from List, Global Opt-out, Global Opt-in, Click-through on trackable url, opened email with images turned on)
    # of Responders - Count of All respondents for a campaign (how many recipients clicked something?)
    # Hard Bounces - Count of responses where response type equal to ‘Hard Bounce’
    # Soft Bounces - Count of responses where response type equal to ‘Soft Bounce’
    # of Open Responses - Count of responses where response type equal to ‘Message Opened’
    # of Click Through - Count of responses where response type equal to ‘Click-through’
    # of Opt Ins - Count of responses where response type equal to ‘Opt-in’
    # of Opt Outs - Count of responses where response type equal to ‘Opt-out’
    # of Global Opt Ins - Count of responses where response type equal to ‘Global Opt-in’
    # of Global Opt Outs - Count of responses where response type equal to ‘Global Opt-out’
    If your report was set up to report on one email campaign, then this is what the metrics are reporting on.
    5) I'd have to see the report to understand what you are describing.
    6) The receiving email server does not always tell EMOD that a message was received. If the email contained the Track Message Open tag, and the recipient receives HTML email and has images turned on, then EMOD will be notified that this email was opened. Otherwise, EMOD does not know if the message was opened (unless the recipient clicked something).
    Hope this helps.

  • Campaign Optimization using RFM Analysis

    I'm interested in the tools offered by BI for the <i>RFM Marketing Analysis</i> (Recency, Frequency, Monetary Value Analysis). I've read this information:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/c5/bd493b13bab710e10000000a11402f/frameset.htm
    http://help.sap.com/saphelp_crm40/helpdata/en/d3/11b71b8182ed4995eaad3dc644c771/frameset.htm
    I understand that the whole process needs a CRM deployment as well as BI. However, I'm wondering if this kind of scenario can be implemented without a full CRM installation. In particular, I'd need to use an Internet Sales - BI landscape.
    When calculating the Response Rate, I have to determine the <i>Addressed Business Partners</i>. One way to provide this data is using the ODS object <i>Campaign/Target Group/Business partner Assignements</i>
    (<a href="http://help.sap.com/saphelp_nw2004s/helpdata/en/36/df333bf1047532e10000000a114084/content.htm">0CRM_TGCT</a>). This object is filled by data from the CRM application. Is it possible to take data from Internet Sales instead?
    Otherwise, can you briefly explain the <b>Intelligent Web Analytics</b> feature of Internet Sales? I know it requires BI. How are the two applications integrated?
    At http://www.sap.com/solutions/business-suite/crm/internetsales/featuresfunctions/index.epx I read:
    <i>Take advantage of a broad range of Web analytics that enable you to track online sales, capture customer behavior, analyze Web site metrics, and improve online sales performance.</i>
    What kind of <i>customer behavior</i> characterization is offered by these Web analytics?
    Cheers, Davide

    Prem
    The PP/DS Optimiser has quite a few restrictions associated with it, and can cause unpredictable results if any of these restrictions applies. See SAP Note 712066 for full details, as I am not sure of the exact conditions you are trying to optimise. The key one may be:
    The campaign optimization is based on the bottleneck resources. There must be no direct or indirect time link (in the form of constraints or material dependencies) between the operations or orders that are planned on the bottleneck resources. In particular, bottleneck resources may be used only once in the production flow.
    Hope this helps
    Regards
    Ian

  • Metrics - How they are defined / calculated ?

    My customer is analyzing a couple of standard reports. They are asking for the definition or calculation of some metrics, such as:
    a) In the Active Campaign Status report:
    ROI - return on investment
    Cost per closed sale
    Cost per lead
    b) In the Team Sales Effectiveness Analysis report:
    Average business volume
    Success rate
    Please, is there a way / document that has this information?
    I know that these are in the 'model' or 'OBI repository' used, but I don't think we have access to this model.
    Txs.
    Antonio

    ISA and GS values are inserted from party configuration.
    If you had schema deployed, EDI sendpipeline will resolve the parties depending upon message type of outgoing message which is Targetnamespace#Rootnode.
    In your first case you might have added send port inside agreement.
    For Dynamic Send port, please read below:
    A dynamic send port enables you to send an interchange to any one of multiple destinations, because it resolves the agreement and determines the destination address based upon the value in the
    DestinationPartyName context property.
    Note
    If you are sending an EDI interchange that is based upon an XML message received, and you are using a passthrough receive pipeline to receive that application, you will have to promote the DestinationPartyName context property. For more information, see Agreement
    Resolution and Schema Determination for Outgoing EDI Messages.

  • Re: Forte Estimating Metrics

    Greg,
    In my experience, the class-count metric is a poor one for time estimation,
    for four reasons:
    1. The actual time/class is very domain- and implementation-
    sensitive. Industry averages are fairly unhelpful, unless
    you happen to employ average practices and work in domains
    of "average" complexity (whatever that means!)
    2. It does not account for the overall size of the project; the
    larger the project, the lower overall productivity is.
    3. It requires that a fairly detailed design be done already, which
    represents a fair amount of the total effort; as an estimation
    tool, it is only useful in the latter stages of a project.
    4. Lorenz's research is based on a very small sample; 18 projects are
    hardly enough to have statistical validity.
    (I almost included a fifth point, but it is hardly worthy: class-counting
    can be spoofed. Awareness of the use of class-count as a metric can give
    developers a motive to either artificially increase or decrease the class
    Greg,
    In my experience, the class-count metric is a poor one for time estimation,
    for four reasons:
    1. The actual time/class is very domain- and implementation-
    sensitive. Industry averages are fairly unhelpful, unless
    you happen to employ average practices and work in domains
    of "average" complexity (whatever that means!)
    2. It does not account for the overall size of the project; the
    larger the project, the lower overall productivity is.
    3. It requires that a fairly detailed design be done already, which
    represents a fair amount of the total effort; as an estimation
    tool, it is only useful in the latter stages of a project.
    4. Lorenz's research is based on a very small sample; 18 projects are
    hardly enough to have statistical validity.
    (I almost included a fifth point, but it is hardly worthy: class-counting
    can be spoofed. Awareness of the use of class-count as a metric can give
    developers a motive to either artificially increase or decrease the class
    count, depending on other motivating factors.)
    I would recommend function point analysis as a more general purpose tool
    for project management. Using a good statement of the requirements, FPA
    generates a measure of the size of the application in terms of function
    points. This is independent of the implementation; it measures the
    behaviours of the application, rather than the particulars of design
    or implementation, such as class count or lines-of-code. Note that
    expected productivity varies as a function of the overall size of a project.
    FPA has been used as part of the study of literally thousands of
    software projects, so the conclusions drawn from FPA studies done
    by Capers Jones at Software Productivity Research have some
    statistical credibility.
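As an illustration of what an FPA count looks like, here is a minimal sketch of an unadjusted function point (UFP) total using the standard IFPUG average-complexity weights. The component counts are made up for illustration; they are not from SPR's data.

```python
# Average-complexity IFPUG weights per component type
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def unadjusted_fp(counts):
    """Sum each component count times its complexity weight."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Illustrative counts taken from a hypothetical requirements statement
counts = {
    "external_inputs": 20,
    "external_outputs": 15,
    "external_inquiries": 10,
    "internal_files": 8,
    "external_interfaces": 4,
}
print(unadjusted_fp(counts))  # 20*4 + 15*5 + 10*4 + 8*10 + 4*7 = 303
```

Note this measures only size; the productivity (function points per staff-month) you divide by is the organisation-specific number you still have to calibrate.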
    Looking at the general productivity numbers gathered by SPR
    (http://www.spr.com), there are some productivity measures that
    cover many languages, including Forte. In terms of lines-of-code
    per function point, Forte is 3 times more productive than C++.
    In terms of overall productivity (function points / man month), there
    are too many variable factors to make a simple statement of relative
    productivity levels between C++ and Forte meaningful. This has a lot
    to do with the Forte object model and run-time system, which, in certain
    problem domains, provides a lot of function points (or equivalent)
    "for free." In other words, if you can use Forte's plumbing package,
    you're automatically more productive. Providing the equivalent in
    C++ would be a daunting task (I should know -- I've done it!)
    The best method of determining your productivity is to measure it. There
    are many factors that affect the overall productivity of your software
    organisation; the programming language choice is just one. Using metrics
    drawn on industry averages concerning just ONE variable in the product
    development process is not terribly useful in project management.
    If you are looking for an extremely rough number that is more defensible
    than the good old "gut feel" technique, then class counting is probably
    the least intrusive on the development process.
    -Ron
    At 08:02 AM 6/5/97 +1000, you wrote:
    In his book Object-Oriented Software Metrics (Prentice Hall, 1994), Mark
    Lorenz proposes an approach for estimating OO projects. Lorenz recommends
    the average amount of effort spent on a single class as the best indicator of
    the amount of work required on a new project. He suggests Smalltalk developers
    average 5-10 person-days per class, and C++ developers average 25-35
    person-days per class.
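Lorenz's per-class rule of thumb reduces to multiplying a class count by an effort range; a minimal sketch (the 100-class project is a made-up example, and the function name is my own):

```python
def class_effort_range(n_classes, days_per_class):
    """Return (low, high) person-day estimates for a given class count."""
    low, high = days_per_class
    return n_classes * low, n_classes * high

# Lorenz's figures: Smalltalk 5-10 days/class, C++ 25-35 days/class
print(class_effort_range(100, (5, 10)))   # (500, 1000)
print(class_effort_range(100, (25, 35)))  # (2500, 3500)
```

As Ron notes above, a linear multiplier like this ignores the size of the project and the experience mix, so treat the output as a rough band, not a plan.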
    Has anyone a view on average effort to build Forte classes?
    Note, the metrics quoted above relate to Design and Implementation. They
    assume Analysis has been completed, and do not include the time for project
    management, systems testing, and other support personnel. They assume a ratio
    of 1:6 OO experts to novices. Higher ratios should result in greater
    productivity (lower average efforts per class). The guidelines also assume no
    library of reusable components.
    Lorenz is an ex-head of IBM's Object Technology Center and a respected OO
    consultant and author. His method is based on the results of 8-16 Smalltalk and
    C++ projects (not all statistics are available from all projects). Projects
    ranged in size from 60 to 700+ project-specific classes. Project durations
    ranged from 6 months to 2.5 years, with teams of 2 to 35 developers.
    Thanks for any help
    gjb.

  • Business Process Management - End-to-End Analysis and Performance KPI's

    Hello,
    I am from WB and we are looking to expand our footprint with Business Process Monitoring.
    For critical business processes (order to cash, release to accounting, procure to pay, create shopping cart, etc.), how can we map and determine the expected run times for the business units, as well as capture and report historical trend analysis? We are looking at this from a company business-process level, not an SAP transaction level.
    We were disappointed that we were unable to find this detailed functionality in SolMan (dashboards, metrics, customizable reports, etc.) that could tell us how long a WB business process takes to execute from start to finish.
    Does anyone have any info as to whether NetWeaver BPM can capture and trend the complete A-Z run times of a business process?
    Am I missing something here?! I would think something this basic would automatically be a part of SolMan BPM Monitoring...
    Appreciate all and any feedback.
    Thanks!
    Bruce L.
    Warner Bros. Pictures, Burbank Calif.

    Hi Bruce,
    You can find many of the Process Analytics capabilities of the NW BPM 7.2 release in this How-to Webinar.
    Here you can see the Process Analytics-related documentation:
    http://help.sap.com/saphelp_nwce72/helpdata/en/60/794d3f1e5a4443b5f714b28f6f5fa1/frameset.htm
    Here you can find what data is exposed within the BPM specific BI content http://help.sap.com/saphelp_nw70/helpdata/en/ec/d897e16ccd4efd951a5fe708734bd3/frameset.htm
    Best regards,
    Radost

  • An error occurred when loading the Cube ; Analysis services 2012

    Hi All
    We are facing issue on our SQL Analysis services 2012 (11.0.3381.0) on windows 2008 R2.
    Not all cubes are getting loaded on the server. When we restart the services, sometimes 2/8 cubes, sometimes 4/8 cubes, and sometimes all cubes get loaded. We are not sure what could be the reason for such inconsistency. Below are the logs:
    Failed to load server plug-in extension defined in assembly System. The following error(s) have been raised during the plug-in initialization. Loading of the System assembly failed with the following error: Microsoft::AnalysisServices::AdomdServer::AdomdException;Could
    not load file or assembly 'msmdspdm, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. Strong name validation failed. (Exception from HRESULT: 0x8013141A)
    Strong name validation failed. (Exception from HRESULT: 0x8013141A). Enumeration of types or functions through reflection in managed code failed with the following error: Microsoft::AnalysisServices::AdomdServer::AdomdException.
    OLE DB or ODBC error: Query timeout expired; HYT00.
    LOGS :
    (12/6/2013 7:45:12 AM) Message: Service started. Microsoft SQL Server Analysis Services 64 Bit Enterprise (x64) SP1 11.0.3381.0. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 1, Category: 289, Event ID: 0x41210000)
    (12/6/2013 7:46:37 AM) Message: An error occurred when loading the Claim Industry Summary Metrics Current. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 3, Category: 289, Event ID: 0xC1210013)
    (12/6/2013 7:46:37 AM) Message: An error occurred when loading the AW Cube. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 3, Category: 289, Event ID: 0xC1210013)
    (12/6/2013 7:46:41 AM) Message: An error occurred when loading the AW Cube. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 3, Category: 289, Event ID: 0xC1210013)
    (12/6/2013 7:46:41 AM) Message: An error occurred when loading the AW Cube. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 3, Category: 289, Event ID: 0xC1210013)
    (12/6/2013 7:46:42 AM) Message: An error occurred when loading the AW Cube. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 3, Category: 289, Event ID: 0xC1210013)
    (12/6/2013 7:46:44 AM) Message: An error occurred when loading the AW Cube. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 3, Category: 289, Event ID: 0xC1210013)
    (12/6/2013 7:46:45 AM) Message: An error occurred when loading the AW Cube. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 3, Category: 289, Event ID: 0xC1210013)
    (12/6/2013 7:46:55 AM) Message: Service stopped. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 1, Category: 289, Event ID: 0x41210001)
    (12/6/2013 7:47:04 AM) Message: The Query thread pool now has 1 minimum threads, 40 maximum threads, and a concurrency of 20.  Its thread pool affinity mask is 0x00000000000fffff. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 1, Category: 289, Event ID: 0x4121000A)
    (12/6/2013 7:47:04 AM) Message: The ParsingShort thread pool now has 4 minimum threads, 4 maximum threads, and a concurrency of 20.  Its thread pool affinity mask is 0x00000000000fffff. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 1, Category: 289, Event ID: 0x4121000A)
    (12/6/2013 7:47:04 AM) Message: The ParsingLong thread pool now has 4 minimum threads, 4 maximum threads, and a concurrency of 20.  Its thread pool affinity mask is 0x00000000000fffff. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 1, Category: 289, Event ID: 0x4121000A)
    (12/6/2013 7:47:04 AM) Message: The Processing thread pool now has 1 minimum threads, 64 maximum threads, and a concurrency of 20.  Its thread pool affinity mask is 0x00000000000fffff. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 1, Category: 289, Event ID: 0x4121000A)
    (12/6/2013 7:47:04 AM) Message: The IOProcessing thread subpool with affinity 0x000000000000001f now has 1 minimum threads, 50 maximum threads, and a concurrency of 10. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 1, Category: 289, Event ID: 0x4121000B)
    (12/6/2013 7:47:04 AM) Message: The IOProcessing thread subpool with affinity 0x00000000000003e0 now has 1 minimum threads, 50 maximum threads, and a concurrency of 10. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 1, Category: 289, Event ID: 0x4121000B)
    (12/6/2013 7:47:04 AM) Message: The IOProcessing thread subpool with affinity 0x0000000000007c00 now has 1 minimum threads, 50 maximum threads, and a concurrency of 10. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 1, Category: 289, Event ID: 0x4121000B)
    (12/6/2013 7:47:04 AM) Message: The IOProcessing thread subpool with affinity 0x00000000000f8000 now has 1 minimum threads, 50 maximum threads, and a concurrency of 10. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 1, Category: 289, Event ID: 0x4121000B)
    (12/6/2013 7:47:11 AM) Message: The flight recorder was started. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 1, Category: 289, Event ID: 0x41210005)
    (12/6/2013 7:47:11 AM) Message: Service started. Microsoft SQL Server Analysis Services 64 Bit Enterprise (x64) SP1 11.0.3381.0. (Source:
    \\?\L:\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log, Type: 1, Category: 289, Event ID: 0x41210000)
    Thanks
    Saurabh Sinha
    http://saurabhsinhainblogs.blogspot.in/

    Hi Saurabh,
    Please describe your scenario and OLAP server environment in more detail. What method did you use to load the SSAS cube?
    For this issue, I would suggest opening a case with Microsoft Customer Support Services (CSS) (http://support.microsoft.com), so that a dedicated Support Professional can assist you in a more efficient manner.
    Regards,
    Elvis Long
    TechNet Community Support

  • Health app, is it possible to display weight and distance in imperial rather than metric ?

    Health app: is it possible to display weight and distance in imperial rather than metric?

    Hao,
    first of all, you are using a chart which has three options for updates if the chart is "full":
    Strip chart (default)
    Scope chart
    Sweep chart
    These are called "update modes". Test the modes yourself.
    Also you have to know that you will not likely have an integer number of periods of your signal in the display of the chart. Therefore, a continuous signal will "move" the graph from update to update.
    You can implement some algorithm to discard data to maintain a static "trigger" level for display, but as stated, it will leave gaps in the signal. These gaps are not a concern unless you use the displayed signal for analysis (e.g. FFT).
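    The "discard data to hold a static trigger level" idea can be sketched outside LabVIEW like this (the function name and the rising-edge convention are my own assumptions, not LabVIEW API):

```python
def align_to_trigger(samples, level=0.0):
    """Drop leading samples up to the first rising crossing of `level`,
    so each chart update starts at the same signal phase.
    The dropped samples are the display "gaps" mentioned above."""
    for i in range(1, len(samples)):
        if samples[i - 1] < level <= samples[i]:
            return samples[i:]
    return []  # no trigger crossing in this block of data

print(align_to_trigger([0.5, -0.5, -0.1, 0.2, 0.8]))  # [0.2, 0.8]
```

Each chunk you pass in loses everything before its first rising zero-crossing, which keeps the displayed waveform phase-stable but makes the retained data unsuitable for analysis such as an FFT.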
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.
