OSB Reporting, Performance Metrics, Throttling

Hi Friends,
Can someone give me an idea of how I can achieve the following for my OSB services? Pointers on whether this is available out of the box, or whether I should use some APIs to generate these statistics, would be very helpful.
•     Reporting / Analytics
o     Ability to report service consumption per service, per application
      -  Calls per sec/min/hour/day
      -  Error counts
o     Performance metrics?
      -  Anything we can learn from this perspective? Can we get performance metrics for business services?
      -  We are collecting this in ou
•     Quotas / Throttling
o     Max requests per second/minute/hour/day
o     Ability to restrict per consuming application
o     Auto shut-off and re-enable when the time period expires
Thanks.

I have tried to answer what I know.
You have to enable the Monitoring feature for the proxy/business services. This lets you collect request counts, response times, etc.
http://docs.oracle.com/cd/E14571_01/doc.1111/e15867/monitoring_ops.htm
Ability to restrict per consuming application - for this you can use the throttling work manager of OSB.
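OSB covers the quota requirements through its operational settings, but as a rough, OSB-independent sketch of the behavior being asked for (per-application limits with automatic re-enable once the window expires), here is a minimal fixed-window limiter in Python; the class and names are hypothetical, not an OSB API:

    import time
    from collections import defaultdict

    class QuotaThrottle:
        """Fixed-window quota: at most max_requests per window_seconds, per caller."""

        def __init__(self, max_requests, window_seconds):
            self.max_requests = max_requests
            self.window = window_seconds
            self.counts = defaultdict(int)          # caller -> requests in current window
            self.window_start = defaultdict(float)  # caller -> start of current window

        def allow(self, caller):
            now = time.monotonic()
            # Auto re-enable: once the time period expires, the counter resets.
            if now - self.window_start[caller] >= self.window:
                self.window_start[caller] = now
                self.counts[caller] = 0
            if self.counts[caller] >= self.max_requests:
                return False                        # shut off: quota exhausted
            self.counts[caller] += 1
            return True

    throttle = QuotaThrottle(max_requests=100, window_seconds=60)
    if not throttle.allow("app-A"):
        print("reject request from app-A (quota exceeded)")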

Similar Messages

  • Application Server (Pertinent Report values for performance metrics)

    Hi
    With dmstool, what are the most relevant values to collect? My objective is to compare performance metrics across two environments. The first is on 9iAS (or 10g AS), the second is a WebLogic environment.
    Regards
    Den

    When running dmstool -dump, I noticed that some requests were throwing "connection refused" errors.
    I verified the server properties, focusing on the DMS port 7200.
    I noticed that the listening address and port were missing, cf. httpd.conf.
    After adding this info, the metrics are displayed.
    Thanks for the clue.
    chris

  • Capture performance metrics across multiple servers

    Hello. I'm still very new to PowerShell, but does anyone know of a good PowerShell v3-4 script that can capture performance metrics across multiple servers, with an emphasis on HPC (high performance computing), and generate a helpful report, perhaps in HTML or Excel
    format?
    The closest thing I've found and used is this PowerShell approach:
    http://www.microsoftpro.nl/2013/11/21/powershell-performance-monitor-on-multiple-remote-computers/
    Maybe figure out a way to present that in a better format, such as HTML or Excel.
    Also, can someone suggest some performance metrics to look at from an HPC perspective? For example, if a CPU is running at 100% utilization, figure out which cores are running high, see how many threads are queued waiting for CPU time, etc.

    As far as formatting is concerned:
    ConvertTo-HTML is a basic HTML output format, but you can spice it up as much as you like:
    http://technet.microsoft.com/en-us/library/ff730936.aspx
    Out-GridView is very functional and pretty simple:
    http://powertoe.wordpress.com/2011/09/19/out-gridview-now-has-a-passthru-parameter/
    There are also examples of writing the results to Excel worksheets.
    This might be a good reference for HPC; I don't have access to an HPC environment, so I can't offer much advice there.
    http://technet.microsoft.com/en-us/library/ff950195.aspx
    It might be better to keep unrelated questions separate, so a thread doesn't focus on one question while you lose time getting an answer to another.
    I hope this post has helped!
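    The thread is PowerShell-specific, but as a quick language-neutral sketch of the per-core check asked about above (find which cores are running high when overall CPU is pegged), here is a minimal Python version; it assumes the third-party psutil package, and the 90% threshold is an arbitrary choice:

        import psutil

        # Sample per-core CPU utilization over a one-second interval.
        per_core = psutil.cpu_percent(interval=1, percpu=True)
        overall = sum(per_core) / len(per_core)

        print(f"Overall CPU: {overall:.1f}%")
        for core, pct in enumerate(per_core):
            marker = "  <-- hot" if pct > 90 else ""
            print(f"  core {core}: {pct:5.1f}%{marker}")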

  • Performance Metrics

    Hi All ,
    EBS Version 12.1.3
    DB Version : 11.2.0.3
    Today in one of our meetings we were asked whether our system is capable of handling the growth expected in the next 6-8 months.
    Before I get inputs from them, we decided to do some homework.
    I am looking for some points/suggestions on what I can check and compare against the past to answer the question.
    Regards
    Karan Kukreja

    I found it!
    Environment -> Performance Metrics -> Performance Metrics Report
    -- Cedric GEORGEOT [MVP]

  • What can we do to improve reporting performance?


    Hi,
    General tips:
    Use aggregates and compression.
    Use fewer and less complex cell definitions where possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs (restricted and calculated key figures) where possible.
    3. Avoid too many characteristics in the rows.
    Use T-codes ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
    Try table RSDDSTATS to get the statistics.
    Using the cache will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions retrieve their results faster from the OLAP cache.
    Also try
    1. Use the different parameters in ST03 to see two important figures: the aggregation ratio, and the records transferred to the front end versus the records selected from the database.
    2. Use the program SAP_INFOCUBE_DESIGNS (performance of BW InfoCubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try running RSRV checks on the cube and its aggregates.
    Go to SE38 > run the program SAP_INFOCUBE_DESIGNS.
    It shows the dimension vs. fact table sizes in percent. If by the cube's performance metric you mean the speed of queries on the cube, measure query runtime.
    3. To check the performance of the aggregates, look at the valuation and usage columns:
    Open the Aggregates screen and observe the VALUATION and USAGE columns.
    "---" sign is the valuation of the aggregate. You can say -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and access is also more (in effect, performance is good). If you check its compression ratio, it must be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good).The more is the positives...more is useful the aggregate and more it satisfies the number of queries. The greater the number of minus signs, the worse the evaluation of the aggregate. The larger the number of plus signs, the better the evaluation of the aggregate.
    if "-----" then it means it just an overhead. Aggregate can potentially be deleted and "+++++" means Aggregate is potentially very useful.
    In valuation column,if there are more positive sign it means that the aggregate performance is good and it is useful to have this aggregate.But if it has more negative sign it means we need not better use that aggregate.
    In usage column,we will come to know how far the aggregate has been used in query.
    Thus we can check the performance of the aggregate.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This tells you whether the query hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Query performance can also depend on the selection criteria; since you have a selection on only one InfoProvider, check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X, or H); the advisable read mode is X.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    Generate Report in RSRT  
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Hope this helps.
    Thanks,
    JituK

  • TopLink Performance Metrics

    I am looking for performance metrics comparing application deployment using TopLink vs. not using TopLink (coding the old-fashioned way). Can you point me in the right direction?

    This would depend on how optimized your "coding the old fashion way" was, and how optimized your TopLink usage was, as well as the environment, architecture, hardware, object model, etc., so generic metrics are difficult to provide.
    In general TopLink offers a robust and sophisticated set of performance features that depending on the use case can improve your application's performance dramatically.
    These include:
    - Object caching (a cache hit can be 100-1,000 times faster than a database read)
    - Batch reading (a batched read can be 1-10 times faster than a non-batched read)
    - Joining (a joined read can be 1-10 times faster than a non-joined read)
    - Batch writing (batch writing can be 1-10 times faster than non-batched writing)
    - Parameterized SQL (parameterized SQL can be 1-5 times faster than dynamic SQL; see the sketch after this list)
    - Fetch size
    - Cursors
    - Indirection
    - Change tracking
    - Fetch groups and partial object reading
    - Report queries
    - Read-only
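    As a generic illustration of the parameterized SQL point above (this is not TopLink code; it uses Python's built-in sqlite3 module, and the table and values are made up): a parameterized statement is one string the database can parse and plan once, then execute with many bindings, whereas dynamic SQL produces a distinct statement text per value.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT)")
        conn.executemany("INSERT INTO emp VALUES (?, ?)",
                         [(i, f"name{i}") for i in range(1000)])

        # Dynamic SQL: a different statement string per lookup, re-parsed each time.
        for i in range(100):
            conn.execute(f"SELECT name FROM emp WHERE id = {i}").fetchone()

        # Parameterized SQL: one statement, many bindings; the plan can be reused
        # (and the bind value is never spliced into the SQL text).
        for i in range(100):
            conn.execute("SELECT name FROM emp WHERE id = ?", (i,)).fetchone()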

  • 9iAS Performance Metrics Tables

    I want to generate my own custom report based on the performance metrics available in
    Oracle 9iAS. However, I am not able to locate the metrics tables (i.e. ohs_server, oc4j_web_module,
    oc4j_servlet, etc.) mentioned in the Oracle 9iAS performance guide.
    Can anybody help me locate the tables: under what username should we look for them, and
    do any special packs need to be installed for them?
    Regards
    Sriram.

    Sriram,
    If you find a solution, please do let me know, for I too am facing the same problem.
    rgds,
    --rash

  • Report Performance - timeout short dump

    Hello Experts,
    I am trying to improve the performance of a report that was developed a long time ago.
    Issues I found:
    1. The report has many SELECT...ENDSELECT combinations, and SELECTs inside LOOP statements.
    2. Most of the SELECTs use the addition INTO CORRESPONDING FIELDS OF to select a few fields, without the TABLE addition.
    3. A few SELECTs use the SELECT * FROM syntax.
    DATA: BEGIN OF itab OCCURS 0,
            f1,
            f2,
            f3,
            " ... further fields up to fn
            fn,
          END OF itab.
    Example of the problematic pattern:
    LOOP AT itab.
      " One SELECT ... ENDSELECT per loop pass means one DB round trip per row.
      SELECT f1 f2 f3 FROM table1
             INTO CORRESPONDING FIELDS OF itab1.
        COLLECT itab1.
      ENDSELECT.
      SELECT f4 f5 FROM table2
             INTO CORRESPONDING FIELDS OF itab2.
      ENDSELECT.
    ENDLOOP.
    All this leads to performance issues.
    I have checked ST05 and got the details of the error.
    My question is: which of the reasons I mentioned above is the major factor delaying the report?
    Which of the above should I concentrate on first to get the long runtime down? My goal is to keep my changes to a minimum and improve the performance. Please advise.

    > My question is which one of the reasons i mentioned above are a major factor in delaying the report
    > performance?
    Don't ask people for guesses, if you can see the facts!
    Run the SQL Trace several times, then go to 'Trace List' -> 'Summarize Trace by SQL Statement'.
    => Shows you total DB time and time per statement (all executions), the problems are on top of the list.
    Check ABAP, detail, and explain!
    Read more here:
    /people/siegfried.boes/blog/2007/09/05/the-sql-trace-st05-150-quick-and-easy
    Siegfried
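    As a generic, non-ABAP illustration of the 'Summarize Trace by SQL Statement' idea (ST05 does this for you; the trace rows below are made up), group the trace by statement text and total the DB time, so the problems sort to the top:

        from collections import defaultdict

        trace = [  # (statement, duration_ms): hypothetical trace rows
            ("SELECT f1 f2 f3 FROM table1 WHERE f1 = ?", 12.0),
            ("SELECT f4 f5 FROM table2", 3.5),
            ("SELECT f1 f2 f3 FROM table1 WHERE f1 = ?", 11.2),
        ]

        totals = defaultdict(lambda: [0.0, 0])  # statement -> [total ms, executions]
        for stmt, ms in trace:
            totals[stmt][0] += ms
            totals[stmt][1] += 1

        for stmt, (total_ms, n) in sorted(totals.items(),
                                          key=lambda kv: kv[1][0], reverse=True):
            print(f"{total_ms:8.1f} ms  ({n} exec)  {stmt}")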

  • Report Performance degradation

    hi,
    We are using around 16 entities in CRM On Demand R16, which include both default and custom entities.
    Since custom entities are not visible in the historical subject area, we decided to stick to real-time reporting.
    Now the issue is, we have a total of 45 lakh (4.5 million) records across these entities. We have reports where we need to retrieve data across all the entities in one report. Initially we tested the reports with a smaller number of records and the performance was not that bad, but it has gradually degraded as we loaded more and more data over a period of time. The reports now take approx. 5-10 minutes and then finally display an error message. In fact, after creating a report structure in Step 1 - Define Criteria and moving to Step 2 - Create Layout, it takes an abnormal amount of time to display. As far as the reports are concerned, we have built them using best practice, except for the "Historical Subject Area issue".
    Ideally, for best performance, how many records should there be in one entity?
    What could be the other reasons for such performance?
    We are working in a multi-tenant environment.
    Edited by: Rita Negi on Dec 13, 2009 5:50 AM

    Rita,
    Any report built over the real-time subject areas will timeout after a period of 10 minutes. Real-time subject areas are really not suited for large reports and you'll find running them also degrades the application performance.
    Things that will degrade performance are:
    * Joins to other dimensions
    * Custom calculations
    * Number of records
    * Number of fields returned
    There are some things that just can't be done in real time. I would look to remove joins to other dimensions, e.g. Accounts/Contacts/Opportunities all in the same report. Apply more restrictive filters, e.g. current week/month, to reduce the number of records required. Alternatively, have a very simple report, extract it to Excel, and modify it from there. Hopefully in R17 this will be added as a feature, but it seems you're stuck till then.
    Thanks
    Oli @ Innoveer

  • Report performance while creating report on BEx

    All all!
    I am creating a report in BOE 4.0 on top of a BEx connection as the source. I have developed reports on top of universes in the past, and I know that keeping calculations on the reporting end hampers report performance. Is this the same case with BEx? Following best practices, is it fair to say that we should keep all heavy calculations/aggregations in BEx or the back end for better report performance?
    Can you please share your opinion based on your experience and knowledge. Any feedback will help! Thanks.

    Hi,
    It is definitely best practice to delegate as many CKFs as possible to the cube, put RKFs in the BEx query, and filters too.
    Also, add default values to your variables (this will speed up generation of the BICS transient universe).
    Also, since Patch 2.10 we are seeing some significant performance improvements, reducing 'document initialization' and 'time to prompts' by up to 50% (steps such as these often took 1.5 minutes, even on properly sized systems).
    Also, make sure you have BW corrections like this one implemented: Note 1593802 (Performance optimization when loading query views).
    In the BusinessObjects landscape - especially with BI 4.0 - it's all about sizing and tuning. Here is your bible, the 'sizing companion' guide: http://service.sap.com/~form/sapnet?_SHORTKEY=01100035870000738725&_OBJECT=011000358700000307202011E
    Pay particular attention to BICSChunkSize registry settings
    Also, the  -Xmx JVM Heap Size for the Adaptive Processing Server  that is running the DSL_Bridge service.
    Regards,
    H

  • Report Performance for GL item level report.

    Hi All,
    I have a requirement to produce a GL line-item
    report, so I have created a data model like 0FI_GL_4 -> DSO -> cube and tested it; everything is fine, but when executed in production the report performance is very bad.
    The report contains document number, GL account, company code, and posting date objects.
    I have decided to do the following to improve reporting performance:
    - Create an aggregate on the document and GL characteristics
    - Compression
    Can I build the aggregates first and then do the compression?
    Please let me know if I am missing anything.
    Regards,
    Naani.

    Hi Naani,
    First fill the aggregates, then do the compression. Run SAP_INFOCUBE_DESIGNS and check the size of the dimensions; maintain the line item and high cardinality flags on the dimensions where appropriate; and set the cache for the query in RSRT.
    Also try to reduce the navigational attributes in the report. The document below may help you.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/6071ed5f-1057-2e10-deb6-d3426fec0219?QuickLink=index&…
    Regards,
    Jagadeesh

  • Bad reporting performance after compressing infocubes

    Hi,
    as I have learned, we should compress requests in our InfoCubes. And since we're using Oracle 9.2.0.7 as the database, we can use partitioning on the E-fact table to increase reporting performance further. So far, all theory...
    After getting complaints about worsening reporting performance, we tested this theory. I created four InfoCubes (same data model):
    A - no compression
    B - compression, but no partitioning
    C - compression, one partition for each year
    D - compression, one partition for each month
    After loading 135 requests and compressing the cubes, we get this amount of data:
    15.6 million records in each cube
    Cube A: 135 partitions (one per request)
    Cube B:   1 partition
    Cube C:   8 partitions
    Cube D:  62 partitions
    Then I copied one query onto each cube and tested the performance with it (transaction RSRT, without aggregates and cache, comparing the database times QTIMEDB and DMTDBBASIC). In the query I always selected one month, some hierarchy nodes, and one branch.
    With this selection on each cube, I expected cube D to be fastest, since there we only have one (small) partition with relevant data. But reality shows a different picture:
    Cube A is fastest with an avg. time of 8.15, followed by cube B (8.75, +8%), cube C (10.14, +24%) and finally cube D (26.75, +228%).
    Does anyone have an idea what's going wrong? Are there some DB parameters needed to "activate" the partitioning for the optimizer? Or do we have to do some other customizing?
    Thanks for your replies,
    Knut

    Hi Björn,
    thanks for your hints.
    1. After compressing the cubes I refreshed the statistics in the InfoCube administration.
    2. Cube C is partitioned using 0CALMONTH, cube D is partitioned using 0FISCPER.
    3. Here we are: all queries are filtered using 0FISCPER. Therefore I could increase the performance on cube C, but still not on D. I will change the query on cube C and do a retest at the end of this week.
    4. The loaded data covers 10 months. The records are nearly equally distributed over these 10 months.
    5. Partitioning was done for the period 01.2005 - 14.2009 (01.2005 - 12.2009 on cube C). So I have 5 years; the 8 partitions on cube C are the result of a slight miscalculation on my side: 5 years + 1 partition before + 1 partition after => I set the max. number of partitions to 7, not thinking of BI, which always adds one partition for the data after the requested period... So each partition on cube C does not contain one full year but roughly 8 months.
    6. Since I tested the cubes one after another without much time in between, the system load should be nearly the same (on top of that, it was a Friday afternoon...). Our BI is clustered with several other SAP installations on a big Unix server, so I cannot see the overall system load. But I did several runs with each query, and the times mentioned are averages over all runs - and the averages show the same picture as the single runs (cube A is always fastest, cube D always the worst).
    Any further ideas?
    Greets,
    Knut

  • Item/Drill Report Performance hinderance

    I am having a problem with report performance. I have a report that must have 5 drop-down menus at the top of the report. It seems the more drop-down menus I add, the slower the response time when the report is actually navigated. One of the drop-downs has over 1,000 options, but the other 4 drop-down menus have 4-5 options each. Is there a way to improve performance?

    And this is different from yesterday how?
    Please help, Discoverer Performance.
    Russ provided a few possible reasons and asked for a bit of detail. Instead of asking the same question again, respond to Russ and the others, and provide a bit more information about how things are set up.

  • Crystal Report Performance for dbf files.

    We have a report which was designed 5-6 years ago. This report has 4 linked Word docs and a dbf file as its data source. The report also has 3 subreports. The field size in the dbf is 80 characters, and a couple of fields are memo fields. The report performance was excellent before we migrated to Crystal Reports 2008. After CR 2008 the system changed and it is suddenly really slow. We have not changed our reports so much that it should influence performance. When the user presses the preview button in the printing tool window, control is transferred to Crystal. Something has happened inside the black box of Crystal (IMO). The dll we have is crdb_p2bxbse.dll 12.00.0000.0549. The issue seems to be in the xBase driver (it is not possible to use the latest version of crdb_p2bxbse.dll with dBASE files that have memo fields).

    Hi Kamlesh,
    Odd that the Word doc is opened before the RPT; I would think the RPT would need to be opened first so it sees that the doc also needs to be opened. Once it has been loaded, the connection can be closed; CR embeds the DOC in the RPT, so the original is no longer required.
    Also, you should upgrade to Service Pack 3; it appears you are still using the original release. SP1 is required first, but then you should be able to skip SP2 and install SP3.
    You did not say which earlier version of CR you were using. After CR 8.5 we went to full Unicode support, at which time the report designer was completely rebuilt and the database engines were removed from the EXE and made into separate dll's. OLE object handling also changed: you can now use a formula and a database field to point to linked objects, so they can be refreshed at any time. Previously they were only refreshed when the report was opened.
    You may want to see if linking them using a database field speeds up the process. Other than that, I can't suggest any other workaround.
    Thank you
    Don

  • 2004s Web report performance is not good, though that of 3.x Web is OK

    Hi,
    I feel 2004s Web report performance is bad, though 3.x Web is no problem (the same query is used); it is also worse than BEx Analyzer.
    The query has more than 1,000 records, and queries that return many records all show the same bad performance.
    Of course there are many possible reasons for bad performance; please tell me the solutions by which you solved problems like this.
    The SIDs of EP and BI are different here.
    CPU is not consumed when the 2004s Web report is executed.
    And I have disabled virus scanning for this web report...
    Kind regards,
    Masaaki

    It is bad; I'm sure it's down to the new .NET and Java based technology. Aggregates are a way forward, though from what I've heard of the BI Accelerator, that is the real way forward.
