BPM performance - data aggregation persistence

Hi,
I have a situation where large volumes of records must be evaluated, aggregated, and split into different scenarios.
Q1. What is the best way to persist this aggregated data?
Q2. Has anyone found, or can anyone suggest, the best way to make this run optimally?
Regards
Ian

Hi Ian,
I have implemented the same services twice: on XI 2.0 using ABAP proxies on an XI application system / cluster databases, and on XI 3.0 with BPM. The proxy solution was much more performant, but of course the BPM solution has better monitoring and the advantage of being standard. For a proxy solution you have to copy an XI client and configure it as an "Application System"; it then serves as a message allocator. The ABAP (or Java) code is executed in the inbound proxies, from which you can call outbound proxies or implement database operations.
Regards Udo
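The proxy approach Udo describes boils down to: receive a batch of records, aggregate them per scenario, and persist the aggregates in one database round-trip. A minimal sketch of that pattern (Python with sqlite3 standing in for the target database; the ABAP inbound proxy would do the equivalent with internal tables and an array INSERT; the table and field names are illustrative):

```python
import sqlite3
from collections import defaultdict

def aggregate_and_persist(records, conn):
    """Aggregate (scenario, amount) records and bulk-insert the totals."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for scenario, amount in records:
        totals[scenario] += amount
        counts[scenario] += 1
    rows = [(s, totals[s], counts[s]) for s in sorted(totals)]
    # One array insert instead of one INSERT per record keeps DB round-trips low.
    conn.executemany(
        "INSERT INTO scenario_totals (scenario, total, record_count) VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()
    return rows

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE scenario_totals (scenario TEXT, total REAL, record_count INTEGER)"
)
rows = aggregate_and_persist([("A", 10.0), ("B", 5.0), ("A", 2.5)], conn)
```

The same shape (aggregate in memory, persist in bulk) is what makes the proxy variant fast relative to per-message BPM processing.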

Similar Messages

  • BPM performance question

    Guys,
I do understand that ccBPM is very resource hungry, but what I was wondering is this:
Once you use BPM, does an extra step decrease the performance significantly? Or does it just need slightly more resources?
More specifically, we have quite complex mappings in 2 BPM steps. Combining them would make the mapping less clear, but would it be worth doing from a performance point of view?
    Your opinion is appreciated.
    Thanks a lot,
    Viktor Varga

    Hi,
In SXMB_ADM you can raise the timeout for synchronous processing.
Go to Integration Processing in SXMB_ADM and set the parameter SA_COMM CHECK_FOR_ASYNC_RESPONSE_TIMEOUT to 120 (seconds). You can also increase the number of parallel processes if you have more messages waiting now: raise SA_COMM CHECK_FOR_MAX_SYNC_CALLS from 20 to a higher value. It all depends on your hardware, but this helped me go from the standard 60 seconds to maybe 70 in some cases.
Make sure that your calling system does not have a timeout below the one you set in XI; otherwise XI will carry on and finish while your partner times out and may end up sending the message twice.
When you go for BPM, the whole workflow has to come into action. For example, where your mapping takes less than 1 second without BPM, inside a BPM the transformation step can take 2 seconds plus the one-second mapping (that's just an example). So the workflow gives you many design possibilities (bridge, error handling), but it can slow down the process, and if you have thousands of messages the performance can be much worse than the same scenario without BPM.
See the links below:
    http://help.sap.com/bp_bpmv130/Documentation/Operation/TuningGuide.pdf
    http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/3.0/sap%20exchange%20infrastructure%20tuning%20guide%20xi%203.0.pdf
    BPM Performance tuning
    BPM Performance issue
    BPM performance question
BPM performance - data aggregation persistence
    Regards
    Chilla..

  • Oracle BPM Process Data mart

I am required to create audit reports on BPM workflows.
I am new to this and need some guidance on configuring the BPM Process Data Mart: what are the prerequisites, and what are the steps to do it?
Also, I need some input on the BAM database: what is the frequency of data upload, and is it an update or an insert in BAM?

    Hi,
    You might want to check out the Administration and Configuration Guides on http://download.oracle.com/docs/cd/E13154_01/bpm/docs65/index.html.
I suspect you might find the BAM and Data Mart portions of this documentation a bit terse, so I've added the steps below to provide more detail. I wrote this for ALBPM 6.0, but believe it will still work for Oracle BPM 10g. It was created from an earlier ALBPM 5.7 document Support wrote called "ALBPM 5_7 Configuring and Troubleshooting the BAM and DataMart Updater.pdf".
    You can define how often you want the contents in both databases updated (actually inserted) and how long you want to persist the contents of the BAM database during the configuration.
    Here's the contents of the document:
    1. Introduction
    The use of BAM (Business Activity Monitoring) and Data Mart (or Warehouse) information is becoming more and more widespread in today’s BPM project implementations for the obvious benefits they bring to the management and tuning of processes.
BAM is basically composed of a collection of measurements of current process load and execution times. This gives us an idea of how the business is doing at this moment (in a pseudo real-time fashion).
Data Mart, on the other hand, is a historical view of the same process load and execution times, which gives us an idea of how the business has developed since the projects were put in place.
    In this document we are not going to describe exhaustively all configuration aspects of the BAM and Data Mart Updater, but rather we will quickly move from one configuration step to another paying more attention to subjects that have presented some difficulties in real-life projects.
    2. Creating the Service Endpoints
    The databases for BAM and for Data Mart first have to be defined in the External Resources section of the BPM Process Administrator.
    In this following example the service endpoint ‘BAMJ2EEWL’ is being defined. This definition is going to be used later as BAM storage. At this point nothing is created.
    Add an External Resource with the name ‘BAMJ2EEWL’ and, as we use Oracle, select the Oracle driver, then click <Next>:
    On the following screen, specify:
    ·     the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
    ·     the port for the Oracle service
·     the SID – here I have used Oracle Express so the SID is ‘XE’
    ·     the new user to create / use in Oracle for this database – here I have specified ‘BPMBAM’. This user, and its database, will be created later
    ·     the password for the user
    Scroll down to the bottom of the page and click <Save>.
    In addition to a standard JDBC connection that is going to be used by the Updater Service, a remote JDBC configuration needs to be added as the Engine runs in a WebLogic J2EE container. This Data Source is needed to grant the Engine access over BAM tables thru the J2EE Connection Pool instead of thru a dedicated JDBC. The following is an example of how to set this up.
    Add an External Resource with the name ‘BAMremote’ and select the Oracle driver, then click <Next>
    On the following screen, specify:
    ·     the Lookup Name that will be used subsequently in WebLogic - here I have given it the name ‘XAbamDS’
    Then click <Save>.
    In the next example the definition ‘DWHJ2EEWL’ is created to be used later as Data Mart storage. If you are not going to use a Data Mart storage you can skip this step.
    Add an External Resource with the name ‘DWHJ2EEWL’ and select the Oracle driver, then click <Next>:
    On the following screen, specify:
    ·     the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
    ·     the port for the Oracle service
·     the SID – here I have used Oracle Express so the SID is ‘XE’
    ·     the new user to create / use in Oracle for this database – here I have specified ‘BPMDWH’. This user, and its database, will be created later
    ·     the password for the user
    3. Configuring BAM Updater Service
    Once the service endpoint has been created the next step is to enable the BAM update, select the service endpoint to be used as BAM storage and configure update frequency and others. Here the “Updater Database Configuration” is the standard JDBC we configured earlier and the “Runtime Database Configuration” is the Remote JDBC as we are using the J2EE Engine.
    So, here’s the example of how to set up the BAM Updater service….
    Go into ‘Process Monitoring’ and select the ‘BAM’ tab and enter the relevant information (using the names created earlier – use the drop down list to select):
    Note that here, to allow me to quickly test BAM reporting, I have set the update frequency to 1 minute. This would not be the production setting.
    Once the data is input, click <Save>.
    We now have to create the schema and related tables. For this we will open the “Manage Database” page that has appeared at the bottom of the BAM screen (you may have to re-select that Tab) and select to create the database and the data structure. The user required to perform this operation is the DB system administrator:
    Text showing the successful creation of the database and data structures should appear.
    Once we are done with the schema creation, we can move to the Process Data Mart configuration screen to set up the Common Updater Service parameters. Notice that the service has not been started yet… We will get to that point later.
    4. Configuring Process Data Mart Updater Service
    In the case that Data Mart information is not going to be used, the “Enable Automatic Update” checkbox must be left off and the “Runtime Database Configuration” empty for this service. Additionally, the rest of this section can be skipped.
    In the case it is going to be used, the detail level, snapshot time and the time of update should be configured; in addition to enabling the updater and choosing the storage configuration. An example is shown below:
    Still in ‘Process Monitoring’, select the ‘Process Data Mart’ tab and enter the name created earlier (use the drop down list to select).
    Also, un-tick the Generate O3 Cubes (see later notes):
    Then click <Save>.
Once those properties have been configured, the database and the data structure have to be created. This is performed on the “Manage Database” page, for which the link has appeared at the bottom of the page (as with BAM). Even though this page is identical to the one shown above (for the BAM configuration), it has been opened from the link on the “Process Data Mart” page, and that makes it different.
    Text showing the successful creation of the database and data structures should appear.
    5. Configuring Common Updater Service Parameters
    In the “Process Data Mart” tab of the Process Monitoring section -along with the parameters that are specific to the Data Mart - we will find some parameters that are common to all services. These parameters are:
    • Log directory: location of the log file
    • Messages logged from Data Store Updater: severity level of the Updater logs
    • Language
    • Generate Performance Metrics: enables performance metrics generation
    • Generate Workload Metrics: enables workload metrics generation
    • Generate O3 Cubes: enables O3 Cubes generation
    In this document we are not going to describe in detail each parameter. But we will mention a few caveats:
    a. the Log directory must be specified in order for the logs to be generated
    b. the Messages logged from Data Store Updater, which indicates the level
    of the logs, should be DEBUG for troubleshooting and WARNING otherwise
    c. Performance and Workload Metrics need to be on for the typical BAM usage and, even when either metric might not be used on the initial project releases, it is recommended to leave them on in case they turn out to be useful in the future
d. Generate O3 Cubes must be off if this service is not used; otherwise the Data Mart Updater service might not work properly.
The only change required on this screen was to de-select ‘Generate O3 Cubes’, as shown in the last section.
    6. Set up the WebLogic configuration
    We need to set up the JDBC data source specified above, so go to Services / JDBC / Data Sources.
    Click on <Lock and Edit> and then <New> to add a New data source.
    Specify:
    ·     the Name – use the name you set up in the Process Administrator
    ·     the JNDI Name – again use the name you set up in the Process Administrator
    ·     the Database Type – Oracle
    ·     use the default Oracle Database Driver
    Then click <Next>
    On the next screen, click <Next>
    On the next screen specify:
    ·     the Database Name – this is the SID – for me that is XE
    ·     the Host Name – as I am running on my laptop, I’ve just specified ‘localhost’
    ·     the Database User Name and Password – this is the BAM database user specified in the Process Administrator
    Then click <Next>
    On the next screen, you can test the configuration to make sure you have got it right, then click <Next>
    On the next screen, select your server as the target server and click <Finish>:
    Finally, click <Activate Changes>.
    7. The Last Step: Starting Up and Shutting Down the Updater Service
The ALBPM distribution differs depending on the operating system. In the case of the Updater Service:
    -     For Unix like Operating Systems the service is started or stopped with the albpmwarehouse.sh shell script. The command in this case is going to look like this:
    $ALBPM_HOME/bin$ ./albpmwarehouse.sh start
    -     For Windows Operating Systems the service is installed or uninstalled as a Windows Service with the albpmwarehouse.bat batch file. The command will look like:
    %ALBPM_HOME%\bin> albpmwarehouse.bat install
After installing the service, it has to be started or stopped from the Microsoft Management Console. Note also that Windows will automatically start the installed service when the computer starts. In either case the location of the script is ALBPM_HOME/bin, where ALBPM_HOME is the ALBPM installation directory. An example would be:
    C:\bea\albpm6.0\j2eewl\bin\albpmwarehouse.bat
    8. Finally: Running BAM dashboards to show it is Working
    Now we have finally got the BAM service running, we can run dashboards from within Workspace and see the results:
    9. General BAM and Data Mart Caveats
a. The basic difference between these two collections of measurements is that BAM keeps track of current process load and execution times while Data Mart contains a historical view of those same measurements. This is why BAM information is collected frequently (every minute) and cleared out every several hours (or every day), and why Data Mart is updated infrequently (once a day) and grows indefinitely. Moreover, BAM measurements can be thought of as a minute-by-minute sequence of Engine Events snapshots, while Data Mart measurements will be a daily sequence of Engine Events snapshots.
    b. BAM and Data Mart table schemas are very similar but they are not the same. Thus, it is important not to use a schema created with the Manage Database for BAM as Data Mart storage or vice-versa. If these schemas are exchanged by mistake, the updater service will run anyway but no data will be added to the tables and there will be errors in the log indicating that the schema is incorrect or that some tables could not be found.
    c. BAM and Data Mart Information and Services are independent from one another. Any of them can be configured and running without the other one. The information is extracted directly from the Engine Database (PPROCINSTEVENT table is the main source of info) for both of them.
    d. So far there has not been a mention of engines, projects or processes in any of the BAM or Data Mart configurations. This is because the metrics of all projects published under the current Process Administrator (or, more precisely, FDI Directory) are going to be collected.
    e. It is also important to note that only activities for which events are generated are going to be measured (and therefore, shown in the metrics). The project default is to generate events only for Interactive activities. This can be changed for any particular activity and for the whole process (where the activity setting, when specified, overrides the process setting). Unfortunately, there is no project setting for events generation so far; thus, remember to edit the level of event generation for every new process that is added to the project.
    f. BAM and Data Mart metrics are usually enriched with Business Variables. These variables are a special type of External Variables. An External Variable is a process variable with the scope of an Instance and whose value is stored on a separate column in the Engine Instances table. This allows the creation of views and filters based on this variable. A Business Variable, then, shares all the properties of an External Variable plus the fact that its value is collected in all BAM and Data Mart measurements (in some cases the value is shown as it is for a particular instance and in others the value is aggregated).
The caveat here is that there is a maximum of 256 Business Variables per FDI. Therefore, when publishing several projects into a single FDI directory it is recommended to reuse business variables. This is achieved by mapping similar Business Variables of different projects to a unique real Variable (in the variable mapping performed at publish time).
    g. Configuring the Updater Service Log
    In section 5. Configuring Common Updater Service Parameters we have seen that there are two common Updater properties related to logging. These properties are “Log directory” and “Messages logged from Data Store Updater”, and they specify the location and level of these two files:
    - dwupdater.log: which is the log for the Data Mart updater service
    - bam-dwupdater.log: which is the log for the BAM updater service
    In addition to these two properties, there is a configuration file called ‘WarehouseService.conf’ that allows us to modify these other properties:
    - wrapper.console.loglevel: level for the updater service log
    - wrapper.logfile.loglevel: level for the updater service log
    - wrapper.java.additional.n: additional argument to the service JVM
    - wrapper.logfile.maxsize: maximum size of the updater service log files
    - wrapper.logfile.maxfiles: maximum number of updater service log files
    - wrapper.logfile: updater service log file name (the default value is dwupdater-service.log)
    9.1. Updater Service Log Configuration Caveats
a. The first three parameters listed above have to be modified when increasing the log level to DEBUG (the default is WARNING). The loglevel parameters have to be set to DEBUG, and a wrapper.java.additional.n (where n is the next unused integer) has to be set to -ea to enable asserts, since without this option no DEBUG messages are generated.
    b. Of the other arguments, maxfiles might need to be increased to hold a few more days of data when the log level is set to DEBUG (with the default value up to two days are stored).
    c. The updater service has to be stopped, uninstalled, installed and then started for any of these changes to take effect.
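Putting caveats (a) and (b) together, a hypothetical WarehouseService.conf fragment for DEBUG-level troubleshooting might look like the following (the slot number in wrapper.java.additional.3 is illustrative - use the next unused integer in your own file):

```conf
wrapper.console.loglevel=DEBUG
wrapper.logfile.loglevel=DEBUG
# next unused additional-argument slot; -ea enables asserts, without which
# no DEBUG messages are generated
wrapper.java.additional.3=-ea
# keep more rotated log files, since DEBUG fills them much faster
wrapper.logfile.maxfiles=10
```

Remember (caveat c) that the service must be stopped, uninstalled, installed, and started again for these changes to take effect.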
    Hope this helps,
    Dan

  • Is there any documentation which throws light on how data aggregation happens in data warehouse grooming? what algorithm exactly it follows in different aggregation type (raw, hourly, daily)?

How exactly does it pick a specific data value during hourly and daily aggregations? How is the value chosen - does it average the samples, or simply pick the value at the start or end of the hour/day?

    I'll try one more time. :)
    Views in the operations console are derived from data in the operational database. This is always raw data, and typically does not go back more than 7 days.
    Reports get data from the data warehouse. Unless you create a custom report that uses raw data, you will never see raw data in a report - Microsoft and probably all 3rd party vendors do not develop reports that fetch raw data.
Reports use aggregated data - hourly and daily. The data is aggregated by min, max, and avg sample for that particular aggregation. If it's hourly data, then you will see the min, max, and avg for that entire hour. Same goes for daily - you will see the min, max, and avg data sample for that entire day.
And to try clarifying even more, the values you see plotted on the report are avg samples. If you drill into the performance detail report, then you can see the min, max, and avg samples, as well as standard deviation (which is calculated based on these three values).
    Jonathan Almquist | SCOMskills, LLC (http://scomskills.com)
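The hourly aggregation Jonathan describes can be sketched as follows: group raw samples by the hour they fall in and keep the min, max, and average per bucket. This is a simplified model of what the warehouse stores (the real grooming jobs also keep sample counts and standard deviation):

```python
from collections import defaultdict
from datetime import datetime

def aggregate_hourly(samples):
    """samples: list of (timestamp, value). Returns {hour: (min, max, avg)}."""
    buckets = defaultdict(list)
    for ts, value in samples:
        # Truncate each timestamp to the hour it belongs to.
        buckets[ts.replace(minute=0, second=0, microsecond=0)].append(value)
    return {
        hour: (min(vs), max(vs), sum(vs) / len(vs))
        for hour, vs in buckets.items()
    }

samples = [
    (datetime(2024, 1, 1, 9, 5), 10.0),
    (datetime(2024, 1, 1, 9, 40), 30.0),
    (datetime(2024, 1, 1, 10, 15), 20.0),
]
hourly = aggregate_hourly(samples)
```

Daily aggregation is the same idea with the bucket truncated to the day; so the answer to the original question is "none of the above" - the value is neither the first nor the last sample, but a min/max/avg summary of all samples in the interval.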

  • JAVA Performance data in EWA is missing.

    Hi SAP-experts,
our EWA is missing the Java performance data. In transaction
"SOLMAN_WORKCENTER -> Root Cause Analysis -> End-to-End Analysis -> JAVA-System -> <one of our Java systems> -> Workload Analysis -> <SID> - Application Server Java"
no data is available for E2E Metric Type Variable (multiple values) SERVLETS. And in transaction
"SOLMAN_WORKCENTER -> Root Cause Analysis -> End-to-End Analysis -> JAVA-System -> <one of our Java systems> -> Workload Analysis -> Overview"
all KPIs are "n/a".
    The transaction LISTCUBE (Infoprovider=0SMD_PE2H) shows no entries for selection :
    Metric Type: HTTP SESSIONS
    System ID: <SID of Java System>
    Calendar Day: <startday> to <endday>
(Recommended by the document: End-to-End Diagnostics Troubleshooting Guide - missing data in service session from BI/CCDB)
    The Extractors E2E_JAVA_EXTRACTOR_* are running.
Introscope extractors are scheduled for the managed system (checked in table E2E_ACTIVE_WLI).
Records are being loaded (checked in table E2E_EFWK_STATUS).
(Recommended by the document: End-to-End Diagnostics Troubleshooting Guide - missing data in service session from BI/CCDB)
    The spool of job E2E HKCONTROLLER shows this error:
    Target Infocube: 0SMD_PE2D
    Source Infocube: 0SMD_PE2H
    Destination    : NONE
    Aggregator     :
    E2E_HK_AGGREGATE
    HK: Calling SMD_HOUSEKEEPER at NONE
    HK: Error detected by SMD_HK_AGGREGATE
    HK: Status :     0
    HK: Message:
    RSAPO_CLOSE_TRANS_REQUEST failed
    HK: New Requests found:
    APO_R4NGX6OEQD7GGFVSO540DFAIRW
    APO_R4NGSA2CB8MO33L1833NBXTV9O
    Runtime [s]:           9
In my opinion there is a problem with the data transfer between the InfoProviders 0SMD_PE2H and 0SMD_PE2D. What further checks can I do, and how can I resolve this problem?
    Best regards
    Willi Eimler

    last  part of the version data
    (IA64)[hp10202:root]/usr/sap/hostctrl/exe> saphostexec -status                             
    saphostexec running (pid = 20880)                                                          
    sapstartsrv running (pid = 20894)                                                          
    09:31:00 13.10.2011     LOG: Using PerfDir (DIR_PERF) = /usr/sap/tmp                       
    saposcol running (pid = 12780)                                                                               
    (IA64)[hp10202:root]/usr/sap/hostctrl/exe> saphostexec -version                            
    Component ********************                                 
    ./saphostexec: 720, patch 92, changelist 1256713, hpia64, opt (Jun 15 2011, 22:42:14)      
    ./sapcimc: 720, patch 92, changelist 1256713, hpia64, opt (Jun 15 2011, 22:42:14)          
    SAPHOSTAGENT information                                                                   
    kernel release                720                                                                               
    kernel make variant           720_REL                                                                               
    compiled on                   HP-UX B.11.23 U ia64 for hpia64                                                                               
    compiled for                  64 BIT                                                                               
    compilation mode              Non-Unicode                                                                               
    compile time                  Jun 15 2011 22:37:28                                                                               
    patch number                  68                                                                               
    latest change number          1256713                                                                               
    supported environment                                                                      
    operating system                                                                           
    HP-UX B.11

  • Not able to extract performance data from .ETL file using xperf commands. getting error "Events were lost in this trace. Data may be unreliable ..."

Not able to extract performance data from an .ETL file using xperf commands.
Xperf command:
xperf -i C:\TempFolder\Test.etl -o C:\TempFolder\BootData.csv -a process
Getting the following error after executing the above command:
"33288636 Events were lost in this trace. Data may be unreliable.
This is usually caused by insufficient disk bandwidth for ETW logging.
Please try increasing the minimum and maximum number of buffers and/or the buffer size. Doubling these values would be a good first attempt.
Please note, though, that this action increases the amount of memory reserved for ETW buffers, increasing memory pressure on your scenario.
See "xperf -help start" for the associated command line options."
I changed the page file size, but it does not work for me.
Does anyone have an idea how to solve this problem and extract the ETL file data?

I want to mention one point here: I have 4 machines in total, and on 3 of them the above commands work properly. Only one machine has this problem.
    Hi,
You can try using xperf to collect the trace ETL file and see if it can be extracted on this computer.
    Refer to following articles:
    start
    http://msdn.microsoft.com/en-us/library/windows/hardware/hh162977.aspx
    Using Xperf to take a Trace (updated)
    http://blogs.msdn.com/b/pigscanfly/archive/2008/02/16/using-xperf-to-take-a-trace.aspx
    Kate Li
    TechNet Community Support
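The error's suggested fix is to re-record the trace with larger and/or more ETW buffers. xperf's start options include -BufferSize (in KB), -MinBuffers, and -MaxBuffers; a sketch that builds such a command (the kernel flags and the doubled sizes are illustrative; the command is only constructed here, not executed):

```python
def xperf_start_cmd(providers, buffer_kb=128, min_buffers=128, max_buffers=256):
    """Build an xperf start command line with explicit ETW buffer settings."""
    return [
        "xperf", "-on", "+".join(providers),
        "-BufferSize", str(buffer_kb),   # size of each ETW buffer, in KB
        "-MinBuffers", str(min_buffers),
        "-MaxBuffers", str(max_buffers),
    ]

cmd = xperf_start_cmd(["PROC_THREAD", "LOADER", "CSWITCH"])
# On the target machine you would run this with subprocess.run(cmd)
# from an elevated prompt, then stop with "xperf -d trace.etl".
```

Since the problem appears on only one of the four machines, slower disk I/O on that machine is the likely culprit, which is exactly the case the larger buffers are meant to absorb.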

  • Report/Widget for combined performance data

    Hello.
I have created a dashboard view in SCOM 2012 R2 for power consumption of a number of servers. These servers are all part of a group. The power figure consists of a couple of different performance monitors (a VMware sensor for VMs, and Power Meter from the Power Consumption MP).
    The dashboard view works well, however I am after a total figure for each server in the group. Ideally this figure would be on the dashboard view as well.
    E.g. if I have 3 servers with the below power draw
    Server 1 – 150 Watts
    Server 2 – 200 Watts
    Server 3 – 300 Watts
    I would like a box stating total power 650 Watts
    If this is not available in a dashboard view, a report would be okay
    Thanks

    Hi,
As far as I know, there is no direct way; you could create a SQL query and sum the values for each server.
    You may refer to the helpful article below:
    http://blogs.technet.com/b/drewfs/archive/2014/08/17/scom-performance-data-and-power-view.aspx
    Regards,
    Yan Li
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]
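Whatever the data source (dashboard widget or SQL against the warehouse), the arithmetic is just a sum of the latest sample per server in the group. A trivial sketch of that step, mirroring the example above (the sample values would come from your query):

```python
def group_total(latest_samples):
    """latest_samples: {server: watts}. Returns the group total in watts."""
    return sum(latest_samples.values())

samples = {"Server 1": 150, "Server 2": 200, "Server 3": 300}
total = group_total(samples)  # 650
```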

  • Performance data from system and DMS logs on PDW.

I want all of the performance data listed below for a particular completed job, with the respective login ID, from the system or DMS logs on PDW:
What total memory and CPU were used to complete the job?
How many rows were processed?
What are the read and write data sizes?
What is the disk I/O?
How much time did it take to complete the job?
    -Prasad KVSV

    Hi Prasad, you may want to have a look at the following links:
    http://download.microsoft.com/download/5/0/1/5015A62E-06BF-4DEC-B90A-37D52E279DE5/SQL_Server_2012_Parallel_Data_Warehouse_Breakthrough_Platform_White_Paper.pdf
    http://saldeloera.wordpress.com/2012/08/27/pdw-tip-query-optimization-techniques-to-avoid-shufflemoves-and-partition-moves/
    Regards, Leo

  • How to pass simple parameter to XSLT from BPM 11g Data Association

    Is it possible to pass a simple parameter to an XSLT from a BPM 11g Data Association (11.1.1.6.0)?  Here is why I asked.  I have a transform that I want to reuse across two service activities in the same process.  The only difference is one integer ID value between the two activities.  (Happens to be running an IPM saved search, each with a different ID.)
I tried mapping the ID value in the Data Association dialog, in addition to the XSLT mapping, but it didn't work. Is there another method I could use to reuse this transform (besides just copying it, as I've done for now)? Thanks.

If I understood your scenario, the point is: you always have to pass the same dataObjects as parameters to the XSL, but you can change their values in an association step before the transformation.
Let's see if the following fits your needs:
    1) your project has three BusinessObjects: "SearchParam", "CommonData" and "Result"; and five process dataObjects:
    - "search1", "search2" and "search" of type "SearchParam";
    - "commonData" of type "CommonData";
    - "result" of type "Result"
    2) your XSL has "result" as target and two sources: "search" and "commonData"
    3) In some part of the process flow you have a scriptTask with:
    - a data association from "search1" to "search";
    - a transformation with "search" and "commonData" as parameters
    4) In other part of the process flow you can use the same transformation, but using the data from "search2" as parameter, adding a scriptTask with:
    - a data association from "search2" to "search";
    - the same transformation with "search" and "commonData" as parameters
    In this scenario you are reusing the same XSL file only changing the values for one parameter.
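The pattern above, restated outside BPM: keep a single transformation and vary only the input object that is copied into the shared "search" slot before each call. A minimal sketch, with plain Python standing in for the XSL (object names follow the example above):

```python
def transform(search, common_data):
    """Stand-in for the shared XSL: builds 'result' from 'search' + 'commonData'."""
    return {"query_id": search["id"], "locale": common_data["locale"]}

common_data = {"locale": "en_US"}
search1 = {"id": 101}  # e.g. the first IPM saved-search ID
search2 = {"id": 202}  # e.g. the second IPM saved-search ID

# scriptTask 1: data association copies search1 -> search, then transforms
search = dict(search1)
result1 = transform(search, common_data)

# scriptTask 2: same transformation, but search2 -> search first
search = dict(search2)
result2 = transform(search, common_data)
```

The transformation itself never changes; only the association step that fills "search" differs, which is exactly what makes it reusable.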

  • Performance data collection issue

    Hi,
We are using SCOM 2007 R2. We have some servers that are not collecting performance data. These servers are up and running fine and generating alerts (monitoring is working fine). Can anyone please suggest a workaround for this?
Thanks & Regards,
    Padmaja M.

Try clearing the management server health service cache by stopping the System Center Management service, renaming the Health Service State folder, and starting the service again.
    Juke Chou
    TechNet Community Support

  • Check performance data on IE status bar

    Hi, experts.
I'm trying to check the performance data of Web Dynpro Java (WDJ) by using the parameter sap.session.ssr.showInfo=true.
I can see some data flow across the IE status bar, but it goes by so quickly that I can't analyze the result.
I know about using the WD Console to check performance, but I just want a quick result in IE.
Any suggestion would be helpful. Thanks to all in advance.

    Hi,
Have you referred to the following blog:
    /people/bertram.ganz/blog/2006/11/21/analyzing-web-dynpro-java-application-performance-with-a-simple-url-parameter
    Regards,
    Apurva

  • BPM performance improving

    Hi everyone
Does anyone know of a way to improve BPM performance, or how to speed it up?
I don't know - maybe applying a note, setting a parameter, etc.
Thanks in advance.
    Emmanuel

This may sound a bit off-topic, but if you reckon you have done the appropriate steps for BPM and still need a performance boost, chuck the BPM and go for a PROXY. Being directly on the ABAP stack, it gives a significant performance boost... and going forward I see proxies being used for almost any interface that has complex logic.
High time to get a hold on ABAP fundamentals!
    Cheers!!!

  • Log files/troubleshooting performance data collection

    Hello: 
    Trying to use MAP 9.0, 
When doing performance data collection, I am getting errors. Is there a log file or event log that captures why the errors are occurring?
One posting said to look in bin\log, but there seems to be no log directory under bin in this version.
    Thank you, 
    Mustafa Hamid, System Center Consultant

    Hi Mark,
There's no CLEANER_ADJUST_UTILIZATION in EnvironmentConfig for BDB JE 5.0.43, which I'm currently using. I also tried
    envConfig.setConfigParam("je.cleaner.adjustUtilization", "false");
but it fails to start up with the error below:
    Caused by: java.lang.IllegalArgumentException: je.cleaner.adjustUtilization is not a valid BDBJE environment parameter
        at com.sleepycat.je.dbi.DbConfigManager.setConfigParam(DbConfigManager.java:412) ~[je-5.0.43.jar:5.0.43]
        at com.sleepycat.je.EnvironmentConfig.setConfigParam(EnvironmentConfig.java:3153) ~[je-5.0.43.jar:5.0.43]

  • Confirmation's field - delivery date/Performance date

    Hi All,
When I approve a confirmation in SRM, a service entry sheet is created in R3.
In SRM there is a field called "delivery date / performance date".
It is currently mapped to the R3 document date (ESSR-BLDAT) in the service entry sheet.
1) May I know which function modules are involved in mapping this field? Please kindly advise. Thanks.

    We did a small testing.
It works well for getting the date on the Friday of each week, but the delivery date category is still "day". Is there a way to create it with the indicator set to "week"?
    Link: to image (PREQ screenshot)
    Regards,
    Michae
    Edited by: Michael Fröhlich on Apr 22, 2010 11:53 AM

Why can't I enable capture of statistics performance data?

    Hi,
    Following some documents suggested here, I tried to enable capture of statistics performance data.
    This is what I did:
    rsa1,
    Tools (from menu)
    Settings for BI statistics
    Then I selected the Infoprovider tab; Under the “statistics” column, it looked like:
    Default Value    X
    Cube1               D
    Cube2               D
    Cube3               D
    I tried to change the “D”s to X, and the options were:
    X   – On
    D   – Default
         -- Off
But when I switch a cube to X, the check mark to save the change from D to X remains gray, and I am unable to turn on statistics for Cube1, Cube2, Cube3, etc.
What could be the reason I can't turn on statistics in our BI7 development and quality environments?
    Thanks

    Hi,
    Refer.
    How to Set up BW Statistics
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5401ab90-0201-0010-b394-99ffdb15235b
    Activating Data Transfers for BW Statistics 
    http://help.sap.com/saphelp_nw04s/helpdata/en/e5/2d1e3bd129be04e10000000a114084/frameset.htm
    Installing BW Statistics
    http://help.sap.com/saphelp_nw04/helpdata/en/8c/131e3b9f10b904e10000000a114084/frameset.htm
    Monitoring BI Statistics
    http://help.sap.com/saphelp_nw70/helpdata/en/46/f9bd5b0d40537de10000000a1553f6/frameset.htm
    Hope this helps.
    Thanks,
    JituK
