Performance Tuning Best Practices/Recommendations

We recently went live on an ECC 6.0 system. We have 3 application servers that are showing a lot of swaps in ST02.
Our buffers were initially set based on the SAP Go-Live Analysis checks, but it is becoming apparent that we will need to enlarge some of them.
Are there any tips and tricks I should be aware of when tuning the buffers?
Does making them too big decrease performance?
I just want to adjust the system to allow the best performance possible, so any recommendations or best practices would be appreciated.
Thanks.

Hi,
Please increase the value of parameters in small increments. If you set the parameters too large, memory is wasted. This can result in paging if too much memory is taken from the operating system and allocated to SAP buffers.
For example, if abap/buffersize is 500000, change this to 600000 or 650000. Then analyze the performance and adjust parameters accordingly.
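For illustration only, a change like this would be captured in the instance profile (e.g. via transaction RZ10); the value is just the example number above, not a recommendation, and the instance must be restarted before a new buffer size takes effect:

    # Instance profile excerpt -- illustrative value only
    abap/buffersize = 650000

After the restart, watch the swaps and hit ratios in ST02 for a few days before deciding on the next increment.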
Please check out http://help.sap.com/saphelp_nw04/helpdata/en/c4/3a6f4e505211d189550000e829fbbd/content.htm and all embedded links. The documentation provided there is fairly elaborate. Moreover, the thread mentioned by Prince Jose is very good as a guideline as well.
Best regards

Similar Messages

  • Where does one find the Oracle Best Practice/recommendations for how to DR

    What is the Oracle Best Practice for install/deployment and configuration of ODI 11g for Disaster Recovery?
    We have a project that is using Oracle ODI 11g (11.1.1.5).
    We have configured all the other Oracle FMW components as per the Oracle DR EDG guides, basically using the host IP name/aliasing concept to 'trick' the secondary site into thinking
    it is primary and continue working with minimal (or no) manual reconfiguration. But will this work for ODI? The FMW DR guide has sections for SOA, WebCenter and IdM, but nothing for ODI.
    Since ODI stores so much configuration information in the Master Repository, when this DB gets 'data guarded' to the secondary site and promoted to primary, ODI will still think it is at the 'other' site. Will this break the actual agents running the scenarios?
    Where does one find the Oracle Best Practice/recommendations for how to DR ODI properly?
    We are looking for a solution that will allow a graceful switchover/failover with minimal manual re-configuration.

    user8804554 wrote:
    > Hi all,
    > I'm currently testing external components with Windows Server and I want to test Oracle 11g R2.
    > The only resource I have is this website and the only binaries seem to be for Linux OS.
    You have one other HUGE resource that, while it won't answer your current question, you'd better start getting familiar with if you are going to use Oracle. That is the complete and official documentation, found at tahiti.oracle.com
    > Does anybody know how I can upgrade my Oracle 11.1.0.7 version to the R2 release?
    > Thanks,
    > Bertrand

  • What are the best practices recommended by Microsoft to give access to an intranet portal externally from the internet

    Hi,
    What are the best practices recommended by Microsoft?
    I have an intranet portal in my organization used by employees, and I want to give employees access externally from the internet as well.
    Can I use the same URL for employees accessing the intranet portal internally and externally, or different URLs,
    like (https://extranet.xyz.com.in) and (http://intranet.xyz.com.in)?
    The internal URL accessed by employees is (http://intranet.xyz.com.in),
    and this portal is configured with claims-based authentication.
    We have an F5 for load balancing.
    A request from external users to the F5 is an HTTPS request, and from the F5 to the SharePoint server it is an HTTP request;
    the response from the SharePoint server to the F5 is HTTP, but from the F5 to external users it is an HTTPS response.
    When I change the settings below in Alternate Access Mappings, all links change to HTTPS,
    but the authentication link still shows HTTP and the authentication page does not open.
    adil

    Hi,
    One of my clients has an environment similar to yours, with an internal pair of F5s and a pair used for access from the internet.
    I am only going to focus on the method using an F5 load balancer and SSL offloading. The setup of the F5 will not be covered in detail, but a reference to the documentation on SharePoint and SSL offloading is provided below.
    Since you are going to be using SSL offloading, you do not need to extend your WebApps to separate IIS websites with unique IP addresses:
    Configure the F5 with SSL offloading.
    Configure an internal AAM for SSL (HTTPS) for each WebApp that maps to the public HTTP FQDN AAM setting for that WebApp.
    Our environment has an additional component: we require RSA authentication for all internet-facing sites, so we have the extra step of extending each WebApp to a separate IIS website and configuring RSA for each extended website.
    Reference SharePoint F5 configuration:
    http://www.f5.com/featured/video/ssl-offloading/
    -Ivan

  • Highest Quality Live Video Streaming | Best Practice Recommendations?

    Hi,
    When using FlashCollab Server, how can we achieve best quality publishing a live stream?
    Can you provide a bullet list for best practice recommendations?
    Our requirement is publishing a single presenter to many viewers (generally around 5 to 50 viewers).
    Also, can it make any difference if the publisher is running Flash Player 10 vs Flash Player 9?
    Thanks,
    g

    Hi Greg,
    For achieving the best quality:
    a) You should use an RTMFP connection instead of RTMPS; RTMFP has lower latency.
    b) You should use the Player 10 SWC.
    c) If bandwidth is not a restriction for you, you can use the highest quality values. The WebcamPublisher class has a property for setting quality.
    d) You can use a lower keyframeInterval value, which sends full frames more often instead of relying mostly on interframe compression.
    e) You should use the Speex codec, which is again provided with the Player 10 SWC.
    These are some suggestions that can improve your quality, depending on your requirements.
    Thanks
    Hironmay Basu

  • Could you point me to an ADF development best practice recommendation doc

    Could you point me to an ADF development best practice recommendations document for ADF 11g that could be used as a guideline for developers?
    Naming conventions
    Usage of Models, Implement validation in BC...
    Best practices for the UI with ADF Faces...
    Recommendations.
    Thanks

    The right place to start:
    http://groups.google.com/group/adf-methodology
    Also you may take a look at this:
    http://www.oracle.com/technology/products/jdev/collateral/4gl/papers/Introduction_Best_Practices.pdf
    Also
    http://groups.google.com/group/adf-methodology/browse_thread/thread/e7c9d557ab03b1cb?hl=en#
    There are some interesting tips there.

  • What are Best Practice Recommendations for Java EE 7 Property File Configuration?

    Where does application configuration belong in modern Java EE applications? What best practice(s) recommendations do people have?
    By application configuration, I mean settings like connectivity settings to services on other boxes, including external ones (e.g. Twitter and our internal Cassandra servers...for things such as hostnames, credentials, retry attempts) as well as those relating business logic (things that one might be tempted to store as constants in classes, e.g. days for something to expire, etc).
    Assumptions:
    We are deploying to a Java EE 7 server (Wildfly 8.1) using a single EAR file, which contains multiple wars and one ejb-jar.
    We will be deploying to a variety of environments: unit testing, local dev installs, cloud-based infrastructure for UAT, stress testing and production environments. Many of our properties will vary with each of these environments.
    We are not opposed to coupling property configuration to a DI framework if that is the best practice people recommend.
    All of this is for new development, so we don't have to comply with legacy requirements or restrictions. We're very focused on the current, modern best practices.
    Does configuration belong inside or outside of an EAR?
    If outside of an EAR, where and how best to reliably access them?
    If inside of an EAR, we can store it anywhere in the classpath to ease access during execution. But we'd have to re-assemble (and maybe re-build) with each configuration change. And since we'll have multiple environments, we'd need a means to differentiate the files within the EAR. I see two options here:
    Utilize expected file names (e.g. cassandra.properties) and then build multiple environment-specific EARs (e.g. appxyz-PROD.ear).
    Build one EAR (e.g. appxyz.ear) and put all of our various environment configuration files inside it, appending an environment name to each config file name (e.g. cassandra-PROD.properties), and of course adding an environment variable (to the VM or otherwise) so that the code will know which file to pick up; see the sketch after this post.
    What are the best practices people can recommend for solving this common challenge?
    Thanks.
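    A minimal sketch of the second option above (one EAR, environment-suffixed property files selected by an environment variable). The variable name APP_ENV and the helper class are hypothetical, not part of any standard API:

        import java.io.IOException;
        import java.io.InputStream;
        import java.util.Properties;

        /** Loads environment-suffixed property files bundled inside the EAR. */
        public final class EnvConfig {
            public static Properties load(String baseName) {
                String env = System.getenv("APP_ENV");                   // assumed variable, e.g. DEV, UAT, PROD
                if (env == null) env = "DEV";                            // fall back for local development
                String resource = baseName + "-" + env + ".properties"; // e.g. cassandra-PROD.properties
                Properties props = new Properties();
                try (InputStream in = Thread.currentThread().getContextClassLoader()
                        .getResourceAsStream(resource)) {
                    if (in == null) throw new IllegalStateException("Missing " + resource);
                    props.load(in);
                } catch (IOException e) {
                    throw new IllegalStateException("Cannot read " + resource, e);
                }
                return props;
            }
        }

    A CDI producer method could wrap EnvConfig.load("cassandra") so beans simply inject the resulting Properties instead of reading files themselves.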

    Hi Bob,
    Sometimes when you create a model using a local WSDL file, the logical port refers to, say, the "C:\temp" folder from where you picked up that file instead of the URL mentioned in the WSDL file. You can check the target address of the logical port. Because of this, when you deploy the application on the server, it tries to search the "C:\temp" path instead of the path specified at the soap:address location in the WSDL file.
    The best way is to re-import your Adaptive Web Services model using the URL specified in the WSDL file as the soap:address location,
    like http://<IP>:<PORT>/XISOAPAdapter/MessageServlet?channel<xirequest>
    or you can ask your XI developer to give you the URL for the web service and the username and password for the server.

  • Just updated to CC 2014. Interested in best practice recommendations for converting INDD hi-resolution print layout files to .jpeg for use on a portfolio preview website

    Seeking recommendations for best practices for converting hi-res magazine INDD docs to .jpgs for web portfolio

    Export to a hi-res PDF, then do your conversion in Photoshop where you have more control.

  • High performance website, best practices?

    Hello all,
    I'm working on a system with a web service/Hibernate (Java code linking web pages to the database) front-end which is expected to process up to 12,000 transactions per second with zero downtime. We're at the development/demonstration stage for phase 1 functionality but I don't think there has been much of a planning stage to make sure the metrics can be reached. I've not worked on a system with this many transactions before and I've always had downtime where database and application patches can be applied. I've had a quick look into the technologies available for Oracle High Availability and, since we are using 11g with RAC I know we have at least paid for them even if we're not using them.
    There isn't a lot of programming logic in the system (no 1000-line packages accessing dozens of tables, in fact there are only about 20 tables) and there are very few updates. It's mostly inserts and small queries getting a piece of data for use in the front-end.
    What I'd like to know is the best practice development for this type of system. As far as I know, the only person on the team with authority and an opinion on technical architecture wants to use the database as a store of data and move all the logic into the front-end. The thinking behind this is
    1) it's easier to load balance or increase capacity in the front-end
    2) the database will be the bottleneck in the system so should have as little demand placed on it as possible
    3) pl/sql packages cannot always be updated without downtime (I'm not sure if this is true or if it can be managed -- the concern is that packages become invalid whilst the upgrade script is running -- or how updates in the front-end could be managed any better, especially if they need to be coordinated with changes to tables)
    4) reference tables can be cached in the front-end to cut down on data access
    Views please!

    A couple of thoughts:
    - Zero downtime (or at least very close to it) can be achievable, but there is a rapidly diminishing return on cost in squeezing the last few percent out of uptime. If you can have the odd planned maintenance window, you can make your life a lot easier.
    - If you decide ahead of time that the database is going to be the bottleneck, then it probably will be!
    - I can understand where they are coming from with their thinking. The web tier will be easier to scale out, but eventually all that data still needs to get into the database. The database layer is where you need to start the design to get the most out of the platform. Can it handle 12,000 TPS? If it can't, then it doesn't matter how quickly your application layer can service those requests.
    - If this is mainly inserts, could these be queued in some sort of message queue? Allow the clients to get an instant (well, almost) 'Done' confirmation, where the database will be eventually consistent. It very much depends on what this is being used for, of course, but this could help with both the performance (at least the 'perceived' performance) and the uptime requirement; see the sketch below.
    - Caching fairly static data sounds like a good idea to me.
    Carl
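    As a rough illustration of the message-queue idea above: a JMS producer returns to the caller as soon as the message is handed off, and a separate consumer drains the queue into the database. The JNDI names and payload here are assumptions, not anything from this thread:

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.MessageProducer;
        import javax.jms.Queue;
        import javax.jms.Session;
        import javax.naming.InitialContext;

        public class InsertQueuer {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext();
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // assumed JNDI name
                Queue queue = (Queue) ctx.lookup("jms/InsertQueue");                            // assumed JNDI name
                Connection con = cf.createConnection();
                try {
                    Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(queue);
                    // The client gets its 'Done' as soon as send() returns; a consumer
                    // applies the insert to the database later (eventually consistent).
                    producer.send(session.createTextMessage("{\"orderId\":42,\"amount\":9.99}"));
                } finally {
                    con.close();
                }
            }
        }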

  • How often should the Cisco 6509 and 3750 switches be rebooted? Does Cisco have a best practice recommendation?

    How often should 6509 and 3750 switches be rebooted?
    Does Cisco have a best practice document on this and recommendation how long the switch should be up before it gets rebooted?
    Why is a reboot needed if there are no indications of issues on the log?

    I'd agree with Larry here.
    If you're not seeing any issues with your IOS revision and there are no relevant PSIRTs (security notices applicable to features and/or exposure of your device, requiring an IOS upgrade), then you can go a very long time without rebooting, if ever.
    I'm sure it's far from a record, but our corporate distribution router that supports >1000 downstream devices day in and day out has never been rebooted since installation just over 5 years ago. I have a top of rack Layer 2 switch (2900 series running CatOS) that's almost at 10 years.
    That said, you should have some monitoring scheme that assures you everything is healthy. But as long as memory and cpu are happy, the device will run forever.

  • Best practice recommendation for locale-specific text/labels

    What is the recommended best practice approach to supporting locale-specific
    text for labels, messages when using Jdeveloper to create applets and applications.
    I am familiar with resource bundles, but wonder if there is a better approach within
    JDeveloper. Are there any plans to enhance this area in 9.0.3?

    > I am familiar with resource bundles, but wonder if there is a better approach within JDeveloper.
    Resource bundles are the Java-native way of handling locale-specific texts.
    > Are there any plans to enhance this area in 9.0.3?
    For BC4J, in 9.0.3, all control hints and custom validation messages (a new feature) are generated in resource bundles rather than XML files to make it easier to "extend" for multiple locales.
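    For reference, a minimal sketch of the plain resource-bundle approach; the bundle name Labels and the key greeting are made-up examples, assuming Labels.properties and Labels_de.properties are on the classpath:

        import java.util.Locale;
        import java.util.ResourceBundle;

        public class LabelDemo {
            public static void main(String[] args) {
                // Resolves Labels_de.properties first, falling back to Labels.properties
                ResourceBundle labels = ResourceBundle.getBundle("Labels", Locale.GERMAN);
                System.out.println(labels.getString("greeting"));
            }
        }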

  • Best practice recommendation--BC set

    Dear friends,
    I am using the BC set concept to capture my configurations. I am a PP consultant.
    Let us consider one scenario: configuring plant parameters in t-code OPPQ.
    My requirement is:
    A. Define floats (schedule margin key):
    SM key: 001
    Opening period: 1 day
    Float before production: 2 days
    Float after production: 1 day
    Release period: 1 day
    B. Number range:
    Maintain the internal number range as: 10-from:010000000999999999 (for planned orders).
    This is my configuration requirement.
    Method M1:
    Name of the BC set: ZBC_MRP1
    While creating the BC set the first time, while defining the floats, I wrongly captured/activated the opening period as 100 instead of 001, but I correctly captured the value for the number range (for my planned orders).
    Now if you look at the activation log for my BC set, it is in "GREEN" light -- version 1, successfully activated, but the activated values are wrong.
    So I want to change my BC set values and reactivate the BC set with the correct value. I am now activating the same BC set again with the correct opening period value (001). After reactivating the BC set, if I go into my BC set activation log, one more version (version 2) has appeared with "GREEN" light.
    So in my activation log, two BC set versions are visible:
    If I activate version 1, the wrong values will be updated in configuration.
    If I activate version 2, the correct values will be activated in configuration.
    But both versions can be activated at any point in time. The latest activated version is always on top.
    So method 1 (M1) is about maintaining different versions under one BC set name and activating whichever version your requirement calls for.
    Method 2 (M2):
    Instead of creating versions within the same BC set, create one more BC set to capture the new values.
    So if I activate the second BC set, the configuration will be updated.
    Please suggest which method is best practice (M1 or M2).
    Thanks
    Senthil


  • Best practice recommendations for Payables month end inbound jobs

    During Payables month end we hold off inbound invoice files and release them once the new period is open, so that invoices get created in the new fiscal period. Is this an efficient way to do it? Please advise on the best practice for this business process.
    Thanks

    Hi,
    Can someone provide your valuable suggestions?
    Thanks
    Rohini.

  • Any best practice recommendations for controlling access to dashboards?

    Everyone,
         I understand that an Xcelsius dashboard compiled into a .swf file contains no means for providing access control to limit who can or how many times they can run the dashboard. Basically, if they have a copy of the .swf they can use it as much as they'd like. To protect access to sensitive data I'd like to be able to control who can access the dashboard and how many times or how long they can access it for.
         From what I've read it seems the simplest way to do this is to embed the swf file into a web portal that requires a user to authenticate before accessing the file. I suppose I can then handle how long they can access it from the back end.
         If I do this, is there any way a user can do something like <right click - save as> on the Flash file to save it on their local machine? Is there a best practice means of properly protecting the dashboard?
    Any advice would be appreciated,
    Jerry Winner


  • Dual 7010 - Layer 3 Peering Best Practice/Recommendation

    I have 2 Nexus 7010s with 2 Nexus 5548s dual connected to each 7K. The 7010s are acting as redundant core devices. Dual Sup2E in each.
    Can someone tell me what the best practice is for layer 3 peering (EIGRP) between these devices. I can't seem to find any example documents.
    VPCs are used
    Approx 20 VLANs. Multiple functions, lots of virtualized servers (200+) on UCS and VMware.
    A firewall HA pair will be connected - 1 to each 7K. This leads to the Internet and DMZ.
    1 MPLS WAN router will be connected to the primary 7K.
    Let me know if you need any additional info. Thanks!

    I'm not sure that I understand your question. Is it EIGRP peering across the vPC links between the core switches?
    If so, see below an example for OSPF; the concepts are the same for EIGRP:
    http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
    Don't forget to rate all posts that are helpful
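    As a rough sketch of one design that article discusses (forming the EIGRP adjacency over a dedicated point-to-point L3 link between the two 7010s rather than over a vPC VLAN); the interface, AS number and addressing below are assumptions:

        ! Illustrative NX-OS excerpt -- one side of a dedicated L3 peering link
        feature eigrp
        !
        interface Ethernet3/1
          description L3 EIGRP peering link to the other 7010
          no switchport
          ip address 10.255.255.1/30
          ip router eigrp 100
        !
        router eigrp 100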

  • ETL processing Performance and best practices

    I have been tasked with enhancing an existing ETL process. The process includes dumping data from a flat file to staging tables and processing records from the initial tables into the permanent tables. The first step, extracting data from the flat file to staging tables, is done by BizTalk; no problems here. The second part, processing records from the staging tables and updating/inserting the permanent tables, is done in .NET. I find this process inefficient and prone to deadlocks because the code loads the data from the initial tables (using stored procs), loops through each record in .NET, makes several subsequent calls to stored procedures to process the data, and then updates the record. I see a variety of problems here; the process is very chatty with the database, which is a big red flag. I need some opinions from ETL experts so that I can convince my co-workers that this is not the best solution.
    Anonymous

    I'm not going to call myself an ETL expert, but you are right on the money that this is not an efficient way to work with the data. Indeed very chatty. Once you have the data in SQL Server, keep it there. (Well, if you are interacting with another data source, it's a different game.)
    Erland Sommarskog, SQL Server MVP, [email protected]
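    To make the "keep it in SQL Server" point concrete: one set-based statement can replace the whole per-row loop. This is a hedged sketch in Java/JDBC with made-up table and column names (the same single statement could equally be issued from .NET, or better, wrapped in one stored procedure):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class StagingMerge {
            public static void main(String[] args) throws Exception {
                // Connection string and credentials are placeholders; requires the
                // Microsoft JDBC driver on the classpath.
                try (Connection con = DriverManager.getConnection(
                         "jdbc:sqlserver://dbhost;databaseName=etl", "etl_user", "secret");
                     Statement stmt = con.createStatement()) {
                    // Update existing rows and insert new ones in a single round trip
                    int rows = stmt.executeUpdate(
                        "MERGE dbo.Permanent AS t " +
                        "USING dbo.Staging AS s ON t.BusinessKey = s.BusinessKey " +
                        "WHEN MATCHED THEN UPDATE SET t.Amount = s.Amount " +
                        "WHEN NOT MATCHED THEN INSERT (BusinessKey, Amount) " +
                        "VALUES (s.BusinessKey, s.Amount);");
                    System.out.println("Rows affected: " + rows);
                }
            }
        }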
