How to improve BSP application performance

Hi,
My BSP application's performance is very bad; it takes 15 minutes to open the webpage. Please tell me how to improve it.

Dear sapVeeru,
Are you sure that it is the BSP that is taking so much time to show?
Could it be a function module, method, or something else that is triggered every time this BSP is launched?
You see, maybe the performance problem is related to the ABAP code (SELECTs, authorizations, etc.) and not to the BSP itself.
I strongly doubt that a BSP takes 15 minutes to render.
It has to be something else.
Have you run a performance analysis?
Traces?
Where exactly is this application spending the most time?
Is it on a method call?
On a function module?
What type of BSP is this?
Custom or standard?
Does this happen with all your BSPs, or just this one?
Is this an MVC-type BSP?
Or a simple BSP?
What does your BSP page contain?
Too many includes?
Please let me know so that I can better help you.
Kind Regards
/Ricardo Quintas

Similar Messages

  • How to improve query & loading performance.

    Hi All,
    How can I improve query and loading performance?
    Thanks in advance.
    Regards
    shoba

    Hi Shoba
    There are a lot of things you can do to improve query and loading performance.
    Please refer to OSS Note 557870: Frequently asked questions on query performance.
    Also refer to these weblogs:
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    Performance docs on queries:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
    Here is the content of the OSS note FAQ on query performance:
    1. What kind of tools are available to monitor the overall Query Performance?
    1. BW Statistics
    2. BW Workload Analysis in ST03N (Use Export Mode!)
    3. Content of Table RSDDSTAT
    2. Do I have to do something to enable such tools?
    Yes, you need to turn on the BW Statistics:
    RSA1, choose Tools -> BW statistics for InfoCubes
    (Choose OLAP and WHM for your relevant Cubes)
    3. What kind of tools are available to analyze a specific query in detail?
    1. Transaction RSRT
    2. Transaction RSRTRACE
    4. Do I have an overall query performance problem?
    i. Use ST03N -> BW System load values to recognize the problem. Use the number given in the table 'Reporting - InfoCubes: Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number for all InfoCubes.
    ii. You need to run ST03N in expert mode to get these values.
    5. What can I do if the database proportion is high for all queries?
    Check:
    1. If the database statistics strategy is set up properly for your DB platform (above all for the BW-specific tables)
    2. If the database parameter setup accords with SAP Notes and SAP Services (EarlyWatch)
    3. If buffers, I/O, CPU, or memory on the database server are exhausted
    4. If Cube compression is used regularly
    5. If Database partitioning is used (not available on all DB platforms)
    6. What can I do if the OLAP proportion is high for all queries?
    Check:
    1. If the CPUs on the application server are exhausted
    2. If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
    3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
    7. What can I do if the client proportion is high for all queries?
    Check whether most of your clients are connected via a WAN connection and the amount of data which is transferred is rather high.
    8. Where can I get specific runtime information for one query?
    1. Again you can use ST03N -> BW System Load
    2. Depending on the time frame you select, you get historical data or current data.
    3. To get to a specific query you need to drill down using the InfoCube name
    4. Use Aggregation Query to get more runtime information about a single query. Use tab All data to get to the details. (DB, OLAP, and Frontend time, plus Select/ Transferred records, plus number of cells and formats)
    9. What kind of query performance problems can I recognize using ST03N values for a specific query?
    (Use Details to get the runtime segments)
    1. High Database Runtime
    2. High OLAP Runtime
    3. High Frontend Runtime
    10. What can I do if a query has a high database runtime?
    1. Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would be an indicator for query performance improvement using an aggregate)
    2. Check if the database statistics are up to date for the Cube/Aggregate; use TX RSRV output (use the database check for statistics and indexes)
    3. Check if the read mode of the query is unfavourable - Recommended (H)
    11. What can I do if a query has a high OLAP runtime?
    1. Check if a high number of cells is transferred to the OLAP engine (use "All data" to get the value "No. of Cells")
    2. Use RSRT technical information to check if any extra OLAP processing is necessary (Stock Query, Exception Aggregation, Calc. before Aggregation, Virtual Char. Key Figures, Attributes in Calculated Key Figs, Time-dependent Currency Translation) together with a high number of records transferred.
    3. Check whether a user exit is involved in the OLAP runtime.
    4. Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Values feature on the columns successor and predecessor to see which entry level of the hierarchy is used.
    5. Check if a proper index on the inclusion table exists
    12. What can I do if a query has a high frontend runtime?
    1. Check if a very high number of cells and formats is transferred to the frontend (use "All data" to get the value "No. of Cells"), which causes high network and frontend (processing) runtime.
    2. Check if the frontend PCs are within the recommendations (RAM, CPU MHz)
    3. Check if the bandwidth for WAN connection is sufficient
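    As a rough illustration of the RSDDSTAT check mentioned under question 1 above, a simple aggregation over that table (via SE16 or directly at the database level) can show where the time goes. This is only a hedged sketch: the column names (INFOCUBE, QTIMEDB, QTIMEOLAP, QTIMECLIENT) are assumptions and vary by BW release, so check the table definition in SE11 first.
    -- hedged sketch: verify the RSDDSTAT columns for your release before using
    SELECT infocube,
           COUNT(*)         AS executions,
           AVG(qtimedb)     AS avg_db_time,
           AVG(qtimeolap)   AS avg_olap_time,
           AVG(qtimeclient) AS avg_frontend_time
      FROM rsddstat
     GROUP BY infocube
     ORDER BY avg_db_time DESC;
    A high DB time points back to the checks in questions 5 and 10, a high OLAP time to question 11, and a high frontend time to question 12.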
    Also see these threads:
    how can i increse query performance other than creating aggregates
    How to improve query performance ?
    Query performance - bench marking
    They may be helpful.
    Regards
    C.S.Ramesh
    [email protected]

  • Does un-checking "Allow Debugging" in VI Properties -> Execution improve overall application performance?

    Does un-checking "Allow Debugging" in VI Properties -> Execution improve overall application performance?

    > Does un-checking "Allow Debugging" in VI Properties -> Execution improve
    > overall application performance?
    Yes, though it is hard to predict by how much. When debugging is
    enabled, after each node on the diagram there will be a small amount of
    compiled code to test if the VI has a probe, a breakpoint, execution
    highlighting, or the pause button is pressed. The code is quite small,
    about two or three instructions, but if you have twenty nodes in a loop
    and the nodes are doing simple math, these instructions could be 25%
    overhead. For more complex nodes or datatypes, these instructions turn
    into a fraction of a percent.
    Turn it off and measure. If you have other performance problems, read the documentation on the profiler and use it to help you find your bottlenecks.
    Greg McKaskle

  • How to improve database link performance?

    Hello all,
    We use db links to do DML operations on remote databases. For OLTP applications we are facing performance problems for transactions that depend on data in the remote database.
    For legal and business reasons we cannot store all the data locally.
    Could anybody suggest how to improve database link performance, or suggest methods/procedures/techniques to enhance the speed of OLTP applications going against remote databases?
    Thanks
    Sky

    AQ is as reliable as Oracle-- the guarantees about delivery of queued messages are the same as the guarantees about committed transactions (i.e. ACID). AQ is designed for asynchronous operation, though. If you are batching transactions, it sounds like you are already doing some sort of asynchronous operations-- I've generally found AQ a lot easier to administer & maintain than rolling your own batching system.
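    For reference, here is a minimal sketch of what the AQ route can look like on the local database. All type, queue, and payload names below are made up for illustration, and the consumer job that dequeues the messages and applies the DML over the db link is not shown.
    -- one-time setup (illustrative names only)
    CREATE TYPE remote_dml_msg AS OBJECT (payload VARCHAR2(4000));
    /
    BEGIN
      DBMS_AQADM.CREATE_QUEUE_TABLE(
        queue_table        => 'remote_dml_qt',
        queue_payload_type => 'REMOTE_DML_MSG');
      DBMS_AQADM.CREATE_QUEUE(
        queue_name  => 'remote_dml_q',
        queue_table => 'remote_dml_qt');
      DBMS_AQADM.START_QUEUE(queue_name => 'remote_dml_q');
    END;
    /
    -- inside the OLTP transaction: enqueue locally instead of waiting on the db link
    DECLARE
      enq_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
      msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
      msg_id    RAW(16);
    BEGIN
      DBMS_AQ.ENQUEUE(
        queue_name         => 'remote_dml_q',
        enqueue_options    => enq_opts,
        message_properties => msg_props,
        payload            => remote_dml_msg('describe the remote change here'),
        msgid              => msg_id);
      COMMIT;  -- the local transaction commits without a round trip to the remote DB
    END;
    /
    A background job (or an AQ notification callback) then drains the queue and performs the remote DML in batches, which is essentially the managed version of a home-grown batching table.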
    If you want to tune the Oracle side of things, you'll need to explain more about the system(s) involved here. Architecture, data flow, operations that involve the dblink, etc. If you're not comfortable posting that sort of information to a public forum, feel free to send me mail directly [email protected]
    As an aside, I'm interested in how you can legally pull data from the remote system to display to your users but can't legally cache that data in your system via replication. Sounds like an odd constraint.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • How to improve query performance at the report level and designer level

    How can I improve query performance at the report level and at the designer level?
    Please let me know in detail.

    First of all, it is based on the design of the database, the universe, and the report.
    At the universe level, you have to check your contexts very well to get the optimal performance of the universe, and also your joins; keeping your joins on key fields will give you the best performance (see the small SQL sketch below).
    At the report level, try to make the reports as dynamic as you can (parameters and so on).
    And when you create a parameter, try to match it with the key fields in the database.
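    As a small, invented illustration of the "joins on key fields" point, the SQL a universe generates should join key column to key column so the database can use its indexes:
    -- key-to-key join (table and column names are made up for illustration)
    SELECT o.order_id, c.customer_name
      FROM orders o
      JOIN customers c
        ON c.customer_id = o.customer_id;
    -- joining on non-key columns (e.g. free-text name fields) tends to force full scans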
    good luck
    Amr

  • How to get the application to perform actions when it exits?

    How do I get the application to perform actions when the user clicks on the "X" icon in the top right hand corner?
    OR
    If I place an Exit button, what do I need to do to allow my application to perform a certain action when it exits?
    Thanks

    frame.addWindowListener(new WindowAdapter() {
      public void windowClosing(WindowEvent e) {
        // do your stuff here
      }
    });
    The WindowAdapter, WindowListener, and WindowEvent classes can be found in the java.awt.event package.
    //David

  • How to improve the load performance while using DataSources for the invoice

    HI All,
    How can I improve the load performance while using DataSources for the invoice? My invoice load (approx. 0.4 M records) is taking a very long time, nearly ~16 to 18 hours, to update data from R/3 to 0ASA_DS01.
    If I load through a flat file it loads within ~20 minutes for the same amount of data.
    Please suggest how to improve load performance.
    PS: I have done the InfoPackage settings as per the OSS note.
    Regards
    Srininivasarao.Namburi.

    Hi Srinivas,
    Please refer to my blog posting /people/divyesh.jain/blog/2010/07/20/package-size-in-spend-performance-management-extraction, which gives the details about the package size setting for extractors. I am sure that will be helpful in your case.
    Thanks,
    Divyesh

  • How to improve the OpenGL performance for AE

    I upgraded my display card from Nvidia 8600GT to GTX260+ hoping to have a better and smoother scrubbing of the timeline in AE. But to my disappointment, there is absolutely no improvement at all. I checked the OpenGL benchmark of the 2 cards with the Cinebench software and the results are almost the same for the 2 cards.
    I wonder why the GTX260+ costs as much as about 3 times the cost of the 8600GT, but the OpenGL performance is almost the same.
    Any idea how to improve the OpenGL performance, please?
    Regards

    juskocf wrote:
    But to scrub the timeline smoothly, I think OpenGL plays an important role.
    No, not necessarily. General things like footage I/O performance can be much more critical in that case. Generally speaking, AE only uses OpenGL in 2 specific situations: when navigating 3D space and with hardware-accelerated effects. It doesn't do so consistently, though, as any non-accelerated function, such as a specific effect or exhaustion of the available resources, can negate that.
    juskocf wrote:
    Also, some 3D plugins such as Boris Continuum 6 need OpenGL to smoothly maneuver the 3D objects.  Just wonder why the OpenGL Performance of such an expensive card should be so weak.
    It's not the card, it's what the card does. See my above comment. Specific to the Boris stuff: geometry manipulation is far simpler than pixel shaders. Most cards will allow you to manipulate bazillions of polygons - as long as they are untextured and only use simple shading, you will not see any impact on performance. Things get dicey when it needs to use textures and load those textures into the graphics card's memory. Either loading those textures takes longer than the shading calculations, or, if you use multitexturing (different images combined with transparencies or blend modes), you'll at some point reach the maximum. It's really a mixed bag. Ultimately the root of all evil is that AE is not built around OpenGL (it did not exist at the time); rather, OpenGL was plugged on at some point, and now there are a number of situations where one gets in the way of the other...
    Mylenium

  • How to Reuse BSP Application in CRM UI

    Hi All,
    I have a custom BSP application which was used in PCUI as part of a search help display. After migration of the system we would like to reuse this BSP application for the search help for the same field. Can anyone let me know the steps involved?
    Objective: the steps involved in reusing a BSP application in the CRM UI.
    Regards,
    Harish P M

    Hi Harish,
    One idea is to use the transaction launcher.
    1. Check the relevant URL parameters that you need to define. You can find the parameters at the end of the URL.
    2. Define those parameters at IMG -> CRM -> Interaction Center WebClient -> Basic Functions -> Define URLs and Parameters.
    3. Assign the values of the relevant parameters in the Transaction Launcher wizard, which can be accessed through IMG -> CRM -> Interaction Center WebClient -> Basic Functions -> Transaction Launcher Wizard, at the step "Transaction parameters".
    Best,
    Levente

  • How to improve slow PowerPivot performance when adding/modifying measures, calculated columns or Relationships?

    I have been using PowerPivot for a couple of months now and whilst it is extremely quick when pulling in data to populate Pivot Tables, it is extremely slow to make the following kind of changes to the Data Model:
    - Add a Measure / Calculated Field
    - Add a Calculated Column
    - Rename a Calculated Field
    - Re-name a Calculated Column
    - Modify a relationship
    - Change a table's properties
    - Update a table
    In the status bar of Excel I get a very quick 'calculating', then it spends a lot of time 'reading data', then it 'finalises', after which nothing is in the status bar but it still takes approx. 45 seconds before the program becomes responsive again. This waiting time does not change depending on the action; it is the same if I rename a column as it is if I add a new measure.
    My question is what affects performance of these actions and how do I improve it?
    To give you an idea of where my data comes from, I have:
    - 7 tables that feed into the Data Model directly from within the workbook which contains the data model itself. These are a combination of static tables and tables that connect to a MySQL database.
    - 6 separate workbooks which contain static data that is updated manually periodically (copied and pasted from another source)
    - 5 separate workbooks which contain dynamic tables that are linked to our MySQL database and update when opened.
    Now I realise that this is probably where my issue is, however I have no idea how to fix it. You do not seem to be able to connect to a MySQL database directly within the PowerPivot window itself, so there is no way to generate and update tables without first creating them either in a worksheet or a separate workbook (as far as I know). If I try to create all of the tables directly within the single workbook containing the Data Model I get performance and crashing issues, which is why I separate tables into individual workbooks.
    Any advice on how to improve performance would be tremendously appreciated. I'm new and keen to learn, I'm aware this set-up is far from best practice.
    Hardware wise I am using:
    - Windows 8 64-bit
    - Excel 2013 64-bit
    - Intel Core i7 processor
    - 6 GB Ram
    Thanks,
    James

    Darren,
    I think the point I was making is that it's in memory, geez... BTW, what do all applications do when they run out of paged memory? If PowerPivot is using all available memory, wouldn't this force the other applications to use virtual memory, essentially writing back and forth to the disks? I think virtual memory writes to disk?? lol. Also, there are parts of the architecture of Excel 2013 that require memory when importing data into PowerPivot, and when working in SharePoint the PowerPivot data is cached to disk unless recently refreshed... But this conversation isn't helping James, who asked the question, and as much as I would love to continue, it's become a little boring.
    Hi James,
    If you download one of the ODBC MySQL Connectors from http://dev.mysql.com/downloads/connector/odbc/ (I believe yours is the first one listed for x64 systems) and connect directly to the data, you should be able to reduce the number of workbooks you're opening. As you can see in the following graphic, these connections are automatically refreshed by default; the parts in red are the differences between PowerPivot 2010 and 2013.
    You should notice a lot of improvement, especially when refreshing data. Please let us know how it goes...
    After registering the ODBC driver:
    Click Add on the User DSN sheet, choose the "MySQL ODBC 5.x driver", fill in the credentials, choose a database (from the select menu) and a data source name, and you're done.
    Back in Excel, go to the PowerPivot section of the ribbon and open the PowerPivot window (the green icon on the left side). In the 'Home' section of that window you will see a small gray cylindrical symbol (the international symbol for "database") which will suggest different data sources to choose from. Take the one where it says "ODBC".
    In the next dialog click Create, choose the adapter, and then OK. Back in the assistant you can check the connection and proceed.
    Now you have the choice between importing the data from tables using the import assistant or using a query, depending on your skill set.
    Cheers,
    Ivan
    Ivan Sanders - My LinkedIn: http://www.linkedin.com/in/iasanders | My Blog: http://msmvps.com/blogs/ivansanders | Twitter: @iasanders | BI in SP2013: http://shop.oreilly.com/product/0790145372703.do | SP2013 Content Packs: http://sharepointdemobuilds.codeplex.com

  • ANN: Learn How to Enhance J2EE Application Performance with ESI

    OTN's newly enhanced Virtual Shopping Mall application illustrates how ESI technology can improve J2EE 1.3 application performance and availability.
    http://otn.oracle.com/sample_code/products/ias/web_cache/index.html
    Cheers,
    -Srikanth

    This solution does not seem to work if it is CDSSO.

  • How to improve the write performance of the database

    Our application is a write-intensive application; it may write 2 MB/second of data to the database. How can we improve the performance of the database? We mainly write to 5 tables of the database.
    Currently, the database gets no response and the CPU is 100% used.
    How do we tune this? Thanks in advance.

    Your post says more by what is not provided than by what is provided. The following is the minimum list of information needed to even begin to help you.
    1. What hardware (server, CPU, RAM, and NIC and HBA cards if any pointing to storage).
    2. Storage solution (DAS, iSCSI, SAN, NAS). Provide manufacturer and model.
    3. If RAID which implementation of RAID and on how many disks.
    4. If NAS or SAN how is the read-write cache configured.
    5. What version of Oracle software ... all decimal points ... for example 11.1.0.6. If you are not fully patched then patch it and try again before asking for help.
    6. What, in addition to the Oracle database, is running on the server?
    2MB/sec. is very little. That is equivalent to inserting 500 VARCHAR2(4000)s. If I couldn't do 500 inserts per second on my laptop I'd trade it in.
    SQL> create table t (
      2  testcol varchar2(4000));
    Table created.
    SQL> set timing on
    SQL> BEGIN
      2    FOR i IN 1..500 LOOP
      3      INSERT INTO t SELECT RPAD('X', 3999, 'X') FROM dual;
      4    END LOOP;
      5  END;
      6  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.07
    SQL>
    Now what to do with the remaining 0.93 seconds? <g> And this was on a T61 Lenovo with a slow little 7500RPM drive and 4GB RAM running Oracle Database 11.2.0.1. But I will gladly repeat it using any currently supported version of the product.
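    If the application can batch its writes, bulk binding plus committing per batch rather than per row usually buys far more than server tuning. Here is a rough sketch against the same demo table t from the session above (not your real schema):
    DECLARE
      TYPE t_rows IS TABLE OF t.testcol%TYPE;
      l_rows t_rows := t_rows();
    BEGIN
      -- build a batch of rows in memory first
      FOR i IN 1 .. 500 LOOP
        l_rows.EXTEND;
        l_rows(l_rows.COUNT) := RPAD('X', 3999, 'X');
      END LOOP;
      -- one bulk-bound statement instead of 500 single-row executions
      FORALL i IN 1 .. l_rows.COUNT
        INSERT INTO t (testcol) VALUES (l_rows(i));
      COMMIT;  -- commit once per batch, not once per row
    END;
    /
    The same idea applies from the application side (array inserts / batched statements through the driver), reducing per-statement overhead and commit-time waits.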

  • Improve Windows application performance in C#

    My project's core logic clusters documents: it takes a number of documents and groups the similar ones. For this I have methods that read the data and apply fuzzy clustering algorithms. While this processing function runs, the WinForms UI stays frozen until the result is returned.
    Whenever I perform these operations on the UI it takes a long time to respond to the form; it takes 15-20 minutes before it gives the result. Please help me reduce the time taken; I want to improve the performance of the application in C#.

    Hi Ramesh,
    I think you can debug your code to test which lines of code take too long.
    In addition, here are some tips about how to improve the performance of C# code. They will help you.
    >>5 Tips to Improve Performance of C# Code
    http://www.c-sharpcorner.com/UploadFile/dacca2/5-tips-to-improve-performance-of-C-Sharp-code/
    >>How to increase performance for C#.net windows application
    http://forums.asp.net/t/1641252.aspx?How+to+increase+performance+for+C+net+windows+application
    Best regards,
    Kristin

  • How to Improve Report View performance

    Hi all, I have a WebI report which takes about 3 minutes to run. But when I click on View, the report takes about 21 seconds (on average) to open up. Any ideas on how to improve the report view performance? Does it have anything to do with server load? Any server settings to tweak to speed it up? Any ideas are appreciated.
    The requirement is that my web team has to strip off the Business Objects logo etc. (using the SDK) and display the report in my company web page, so it is looking sort of ugly as the web page takes about 21 seconds just to display the report.
    Some Report statistics:
    Report size is about 90 MB, as it has about 300k rows of data (which I am aggregating using formulas)
    Report has about 15 simple division formulas
    Report is in Drill Mode. There are about 5 drill filters
    Thanks,
    Kon

    Hi Larry,
    I'll assume you are scheduling this report and viewing the instance in ~21 seconds.  Is that correct?
    We definitely need some environment info to go along with this post.  Like Simone said, Product Version, Patch Level, and other OS, Hardware, App Server details would help as well.
    There are certain properties of a document that can slow down the rendering of a report but we generally have to look at the logs to determine what part of the report is taking the longest time to process.  Assuming this is an instance, I would be curious to know if it is quicker to come up if you immediately view it a second time?
    If you were to turn on a trace, you would see a number of lines like this:
    2011/06/15 20:11:54.153|>=| | | 7676|7436|{|||||||||||||||C3_DPSerialization:ContextPromptList_StreamUnit_SerializeOut
    2011/06/15 20:11:54.153|>=| | | 7676|7436|}|||||||||||||||C3_DPSerialization:ContextPromptList_StreamUnit_SerializeOut: 0
    2011/06/15 20:11:54.153|>=| | | 7676|7436|{|||||||||||||||C3_DPSerialization:cdbSQLStreamUnit_SerializeOut
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_DPSerialization:cdbSQLStreamUnit_SerializeOut: 0.015
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_DPSerialization:QTDP_StreamUnit_SerializeOut: 0.015
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_QTDataprovider:SaveMe_Serial: 0.015
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_QTDataprovider:SaveAll_Serial: 0.015
    The numbers at the end are how long the function took to run.  Generally the function gives us an idea of what the engine was doing.
    When evaluating performance issues, you can occasionally find a function that is taking long to run within the logs and based on the function and module names, it can sometime lead you to the reason it is taking longer than expected.
    Another good test might be to run a very basic report to see how long it takes to come up.  Even a report without a datasource would suffice as that will give you your baseline time on how long it takes to load the viewer, convert the WID file to XML and send it up through the application server to your browser.  If a test report takes 15 seconds to view, then you are really only looking at 6 seconds for this other report.
    Hope this helps and gets you started.  More environment info would help take it further.
    Thanks
    Jb

  • How to improve and maintain performance of droid phones

    I've read bits and pieces about how to make the phone faster, but what's the best way of improving the phone's performance, and maintaining that performance, without overclocking or putting on custom ROMs?

    The biggest thing to do is keep the application cache cleared out. I recommend a check once a week, depending on usage.
    Keep an eye on your internal storage. Anything below 30 MB needs some serious cleaning of applications, cache, call history, and text messages, in that order. I try to keep my internal storage at 50 MB or higher.
    Also try to pay attention to your Dialer Storage. It holds the call history and text messages, but it can grow quickly. I found out the hard way. I had some ringtones saved in a text message thread but rarely looked at them. Then one weekend, almost six months after I got the phone, I was looking at the thread a lot because a new message had been sent from that number. The Dialer Storage went from 5 MB to 21 MB in a couple of days. Even after deleting the entire thread it only went down 1 MB. There was no way to clear data for that app, so I ended up doing a factory reset. Now Dialer Storage is a baby size of 64 KB!
    I never used a task killer, only task managers.
    I have a battery monitor and have seen no big difference. However, I don't use Facebook or Twitter, so I don't have those constantly updating.
