DMVPN Performance vs Latency best practice

I am currently investigating an issue which I find rather difficult to "catch" and find information about.
We are running a DMVPN environment based on Cisco 2951 hub routers and 1941 spoke routers all over the globe (20 locations).
The hub routers are connected to 200 Mbit Internet lines; the spokes are connected at lots of different speeds, most of them 10 or 20 Mbit, and 95% are performing well (getting 80 to 90% of the offered Internet line rate over the VPN).
Recently we added a new location which is having performance issues. From my perspective it is a problem with the local ISP, but it also made me more aware of the rather high latency (220 ms), which might call for tweaking the TCP window size.
I did find some info about setting the ip tcp window-size on the routers, but this made absolutely no change in performance whatsoever (and I tried lots of different calculations / values).
So this gives me the impression that there is already a mechanism active which optimizes the TCP window size.
I am trying to find more information on optimizing DMVPN connections for high latency, as our new location is connected via a 30 Mbit line but over the VPN we do not even get 5 to 7 Mbit.
We did some serious testing with the ISP and from my perspective it is still an issue on their side / the routing / peering we are getting from these guys. But the ISP keeps pointing at latency vs. performance and advises adjusting the TCP window size.
As performance has never been an issue and has always met our expectations, I am new to debugging our VPN networks with regard to performance.
I would love to hear some thoughts here, or be pointed in the right direction / to the right place to find documentation. I want to be able to give my ISP an educated answer showing that the issue is on their side / the Internet.
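
For what it's worth, here is a back-of-envelope check of the ISP's window-size argument (the line rate and RTT are from the post above; the rest is generic TCP math, nothing DMVPN-specific). One caveat worth knowing: on IOS, ip tcp window-size only affects TCP sessions terminated on the router itself (e.g., SSH to the router), not transit traffic through the tunnel, which would explain why changing it made no visible difference; what matters is the window size and window scaling on the end hosts.

    // Back-of-envelope check: a single TCP flow's throughput is bounded by
    // (receive window / RTT), independent of the line rate.
    public class BdpCheck {
        public static void main(String[] args) {
            double rttSeconds = 0.220;          // observed 220 ms round-trip time
            double lineRateBps = 30_000_000.0;  // 30 Mbit/s access line

            // Bandwidth-delay product: bytes that must be in flight to fill the pipe.
            double bdpBytes = (lineRateBps / 8.0) * rttSeconds;
            System.out.printf("BDP: %.0f KB%n", bdpBytes / 1024.0);   // ~806 KB

            // A host stuck at the classic 64 KB window (no RFC 1323 scaling) tops out at:
            double maxBps = (65_535 * 8.0) / rttSeconds;
            System.out.printf("Max throughput at a 64 KB window: %.1f Mbit/s%n",
                    maxBps / 1_000_000.0);      // ~2.4 Mbit/s
        }
    }

If the end hosts negotiate window scaling (RFC 1323, enabled by default in most modern OSes), a single flow can in principle fill the 30 Mbit line despite the 220 ms RTT, so a sustained 5 to 7 Mbit ceiling points more toward loss or policing on the path than toward window size alone.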

Similar Messages

  • General Oracle Database Performance trouble solving best practice Steps

We use Oracle 11g DB on Windows 2008 R2 as a web application backend DB.
    We have performance trouble in that DB.
    I would like to know general best-practice steps for Oracle Database performance troubleshooting.
    Is there any good general best-practice document for performance troubleshooting on the internet?

    @Girish Sharma:  I disagree with this. Many people say things like your phrase "..first identify the root cause and then move forward" but that is not the first step. Any such technique is nothing more than looking at some report, finding a number that you don't like, and attempting to "fix" it. Some people use that supposedly funny term "compulsive tuning disorder" (first used by Gaja Krishna Vaidyanatha) to describe this approach (also advocated in this topic by @Supriyo Dey). The first step must be to determine what the problem is. Until you know that, all those reports you mentioned (which, remember, require EE plus pack licences) are useless.
@teradata0802, your best practice starts by finding the problem. Is it, for example, that the overnight batch jobs don't finish until lunchtime? A screen takes 10 seconds to refresh, and your target is one second? A report takes half an hour, but you need to run it every five minutes? Determine what business function is causing your client to lose money because it is too slow. Then investigate what it is doing, how, and why. You have to begin by focussing on the problem, not by running database-wide reports.

  • Performance/Isolation/Reliability Best Practices

    I tried to find a blog or something about this in the docs, but could not seem to locate information on it. So if someone could share some words of wisdom, it would be appreciated.
    If you are going to run say ten web sites on one physical server.
    Each web site will be a virtual host with its own ip address.
    Most of the web sites will have https.
    Each web site will have a running Java application.
    Is it best to have all the web sites in one configuration?
    Should they each have their own configuration?
    Should they be split into groups like ecommerce/forum etc.
    Traffic to each site would be moderate. Nothing high volume.
    The server is a low range server. AMD64 with 2G of Ram.
    Any suggestions would be appreciated.
    Thanks,
    Tony Z

    A lot of the answer to this question will hinge on a great big "Well that depends on what your goals are."
Is it best for each web site to have its own configuration?
    "Best" is relative. Having each site have its own configuration also means each site has its own server process, heap space, JVM, etc. Some will tell you that this is better from a security point of view; it's much harder for data from Site A to bleed into Site B if they're running in different processes (side note - I've been using Web Server for over 10 years and have never seen a problem with content bleeding from one virtual server to another within a single process). Of course the down side to this configuration is that you have a lot of wasted memory and process space - multiple JVMs, multiple file caches, etc.
    Is it best to have all the sites live within a single configuration?
Again, that depends. The server itself is very well designed to handle this configuration, but security auditors may give you heartburn over it (see above - I don't think it's a very valid concern), and you run the risk that if, for some reason, one of the sites leaks resources (memory growth, file descriptors, whatever) it could negatively impact all the sites if/when that resource is exhausted. This scenario is typically the result of third-party NSAPI code doing bad things, or applications managing database connections poorly.
    I run a dual AMD-64 server on a 64-bit Linux kernel, 2GB RAM. My sites use PHP quite a bit (about 40% of all the HTTP requests are for PHP pages). I run all the sites in a single configuration with multiple obj.conf files (a dozen sites total, run through six obj.conf files). Some of the virtual sites exist only as redirects to other sites, and these are simply accomplished with a Client or If statement in the default obj.conf file. Due to PHP's tendency to be a little finicky when running in a multi-threaded environment (more of an issue for the PHP modules than PHP itself), I hang the PHP engine off of Web Server via the FastCGI interface. This is not a configuration that is easy to duplicate with the JVM, due to the way it's architected into the server, but it's not impossible.
    For my dozen sites, handling around 300,000 HTTP hits a day, my HTTP process is around 50M. And I have roughly a bajillion PHP processes hanging off the back, and each of those is around 20M. The PHP processes are in constant churn (they crash, or their own internal timers bounce them once certain limits are hit), but the Web Server process typically goes >60 days without my restarting it (and that's usually due to my upgrading something, or changing a config that requires a restart, etc). I'm using a single WS process with the default thread configurations (48 min threads, 128 max, etc).
    Now I'm just rambling.
    As I said - "It depends on what your goals are."

  • ASM on SAN datafile size best practice for performance?

Is there a 'Best Practice' for datafile size for performance?
    In our current production, we have 25 GB datafiles for all of our tablespaces in ASM on 10gR1, but I was wondering what the difference would be if I used, say, 50 GB datafiles? Is 25 GB a kind of midpoint so the data can be striped across multiple datafiles for better performance?

We will be using Red Hat Linux AS 4 update u on 64-bit AMD Opterons. The complete database will be on ASM... not the binaries, though. All of the datafiles we currently have in our production system are 25 GB files. We will be using RMAN --> Veritas tape backup and RMAN --> disk backup. I just didn't know if anybody out there was using smallfile tablespaces with 50 GB datafiles or not. I can see that one of our tablespaces will probably be close to 4 TB.

  • What is the best Practice to improve MDIS performance in setting up file aggregation and chunk size

    Hello Experts,
In our project we have planned some parameter changes to improve MDIS performance, and we want to know the best practice for setting up file aggregation and chunk size when importing large numbers of small files (one file contains one record, and each file is about 2 to 3 KB) through the automatic import process.
    Below are the current settings in production:
    Chunk Size = 2000
    No. of Chunks Processed in Parallel = 40
    File Aggregation = 5
    Records per Minute Processed = 37
    And these are the settings we made in the development system:
    Chunk Size = 70000
    No. of Chunks Processed in Parallel = 40
    File Aggregation = 25
    Records per Minute Processed = 111
    After making the above changes the import process improved, but we want an expert opinion before making these changes in production, because there is a huge difference between what is in production and what we changed in development.
    thanks in advance,
    Regards
    Ajay

    Hi Ajay,
The SAP default values are as below:
    Chunk Size = 50000
    No. of Chunks Processed in Parallel = 5
    File Aggregation: depends largely on the data. If you have only one or two records being sent at a time, it is better to cluster them together and send them in one shot, instead of sending one record at a time.
    Records per Minute Processed: same as above.
    Regards,
    Vag Vignesh Shenoy

  • Best practice to monitor 10gR3 OSB performance using JMX API?

    Hi guys,
    I need some advice on the best practice to monitor 10gR3 OSB performance using JMX API.
Just to show I have done my homework, I managed to get the JMX sample code from
    http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jmx_monitoring/example.html#wp1109828
    working.
    The following is the list of options I am think about:
* Set up: I have a cluster of 1 admin server with 2 managed servers, where each managed server runs an instance of OSB
    * What I try to achieve:
- use the JMX API to collect OSB stats data periodically as in the sample code above, then save the data as a record to a database table
    Options/ideas:
1. Simplest approach: run a modified version of the JMX sample on the Admin Server to save stats data to the database regularly. I can't see problems with this one ...
    2. Use WLI to schedule the task of collecting stats data regularly. May be overkill if option 1 above is good for production
    3. Deploy a simple web app on the Admin Server, say a simple servlet that displays a simple page to start/stop data collection and configure the interval for the timer
    What approach would you experts recommend?
BTW, the caveats of using JMX in http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jmx_monitoring/concepts.html#wp1095673
    say
         Oracle strongly discourages using this API in a concurrent manner with more than one thread or process. This is because a reset performed in one thread or process is not visible to other threads or processes. This caveat also applies to resets performed from the Monitoring Dashboard of the Oracle Service Bus Console, as such resets are not visible to this API.
    Under what scenario would I be breaking this rule? I am a little worried about its statement
         discourages using this API in a concurrent manner with more than one thread or process
    Thanks in advance,
    Sam

    Hi Manoj,
Thanks for getting back. I am afraid configuring the aggregation interval from the Dashboard doesn't solve the problem, as I need to collect stats data per endpoint URI on an hourly or daily basis, then output it to CSV files so line graphs can be drawn for chosen applications.
    Just for those who may be interested. It's not possible to use SQL to query database tables to extract OSB stats for a specified time period, say 9am - 5pm. I raised a support case already and the response I got back is 'No'.
    That means using JMX API will be the way to go :)
    Has anyone actually done this kind of OSB stats report and care to give some pointers?
I am thinking of using 7 days or 1 day as the aggregation interval set in the Dashboard of the OSB admin console, then collecting stats data using JMX (as described in the previous link) hourly, using the WebLogic Server JMX Timer Service as described in
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/jmxinst/timer.html instead of Java's Timer class.
    Not sure if this is the best practice.
    Thanks,
    Regards,
    Sam
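
    In case it helps anyone weighing option 1: below is a minimal, hedged sketch of a single-threaded periodic collector. collectOsbStats() and saveSnapshot() are hypothetical placeholders for the ServiceDomainMBean calls from the sample code and for the JDBC insert. The one design point it illustrates is using a single-threaded scheduler, so collections can never run concurrently and you stay inside the documented caveat.

        // Option 1 sketch: poll OSB stats on a fixed interval from ONE thread
        // and hand each snapshot to a DB writer.
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class OsbStatsCollector {

            public static void main(String[] args) {
                // One thread only: the OSB monitoring API discourages concurrent
                // use, so never schedule overlapping collections from several threads.
                ScheduledExecutorService scheduler =
                        Executors.newSingleThreadScheduledExecutor();

                scheduler.scheduleAtFixedRate(() -> {
                    try {
                        Object snapshot = collectOsbStats(); // JMX calls go here
                        saveSnapshot(snapshot);              // JDBC insert goes here
                    } catch (Exception e) {
                        e.printStackTrace(); // log and carry on; skip this interval
                    }
                }, 0, 1, TimeUnit.HOURS);
            }

            private static Object collectOsbStats() { return null; /* placeholder */ }

            private static void saveSnapshot(Object snapshot) { /* placeholder */ }
        }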

  • Function Module performance in Crystal Reports - Best practices

    Hi all,
    We are following a function module based approach for our crystal reporting needs. We tried to follow an infoset approach, but found that most of the critical fields required for reports were retrieved from function modules and bapis.
Our reports contain some project filters/parameter fields based on which the task reports would be created. I was wondering what would be the best approach / best practices to consider while designing the FM so as not to impact Crystal Reports performance?
    We created a sample FM in our test system with just the table descriptions (without the input parameters) which would retrieve all the projects, and found that Crystal Reports crashed while trying to retrieve all the records. I am not sure if this is the right approach, since this is our first project using FMs for Crystal Reports.
    Thank you
    Vinnie

Yes. We did try following the infoset approach against the tables; however, since our project reports contain long text fields and status texts (retrieved via FMs), we opted for the FM approach. Do you know how texts can be handled from ABAP to Crystal Reports?

  • Performance Tuning Best Practices/Recommendations

We recently went live on an ECC 6.0 system. We have 3 application servers that are showing a lot of swaps in ST02.
    Our buffers were initially set based on the SAP Go-Live Analysis checks, but it is becoming apparent that we will need to enlarge some of our buffers.
    Are there any tips and tricks I should be aware of when tuning the buffers? 
    Does making them too big decrease performance?
    I am just wanting to adjust the system to allow the best performance possible, so any recommendations or best practices would be appreciated.
    Thanks.

    Hi,
    Please increase the value of parameters in small increments. If you set the parameters too large, memory is wasted. This can result in paging if too much memory is taken from the operating system and allocated to SAP buffers.
    For example, if abap/buffersize is 500000, change this to 600000 or 650000. Then analyze the performance and adjust parameters accordingly.
Please check out this link: http://help.sap.com/saphelp_nw04/helpdata/en/c4/3a6f4e505211d189550000e829fbbd/content.htm and all the embedded links. The documentation provided there is fairly elaborate. Moreover, the thread mentioned by Prince Jose is a very good guideline as well.
    Best regards

  • Reflection Performance / Best Practice

    Hi List
Is reflection best practice in the following situation, or should I head down the factory path? Having read http://forums.sun.com/thread.jspa?forumID=425&threadID=460054 I'm now wondering.
    I have a Web servlet application with a backend database. The servlet currently handles 8 different types of JSON data (there is one JSON data type for each table in the DB).
    Because JSON data is well structured, I have been able to write a simple handler, all using reflection, to dynamically invoke the Data Access Object and CRUD methods. So one class replaces 8 DAOs and 4 CRUD methods = 32 methods - this will grow as the application grows.
    Works brilliantly. It's also dynamic. I can add a new database table by simply subclassing a new DAO.
    Question is, is this best practice? Is there a better way? There are two sets of Class.forName(), newInstance(), getClass().getMethod(), invoke(); one for getting the DAO and one for getting the CRUD method.....
    What is best practice here? Performance is important.
    Thanks, Len

bocockli wrote:
         What is best practice here? Performance is important.
    I'm going to ignore the meat of your question (sorry, there are others who probably have better insights there) and focus on this point, because I think it's important.
    A best practice, when it comes to performance, is: have clear, measurable goals.
    If your only performance-related goal is "it has to be fast", then you never know when you're done. You can always optimize some more. But you almost never need to.
    So you need to have a goal that can be verified. If your goal is "I need to be able to handle 100 update requests for Foo and 100 update requests for Bar and 100 read-only queries for Baz at the same time per second", then you have a definite goal and can check if you reached it (or how far away you are).
    If you don't have such a goal, then you'll be optimizing until the end of time and still won't be "done".
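
    On the mechanics of Len's original question, one common pattern (a hedged sketch, not from this thread) is to pay the reflective lookup once and cache the resolved objects: Class.forName() and getMethod() dominate the cost, while invoke() on an already-resolved Method is cheap. The DAO names and the single-Object method signature below are illustrative assumptions.

        // Cache Class/Method lookups so reflection cost is paid once per DAO,
        // not once per request.
        import java.lang.reflect.Method;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class DaoDispatcher {

            private static final Map<String, Object> DAO_CACHE = new ConcurrentHashMap<>();
            private static final Map<String, Method> METHOD_CACHE = new ConcurrentHashMap<>();

            // Resolve (and cache) the DAO instance and CRUD method for a JSON type.
            public static Object dispatch(String daoClassName, String crudMethod, Object json)
                    throws Exception {
                Object dao = DAO_CACHE.computeIfAbsent(daoClassName, name -> {
                    try {
                        return Class.forName(name).getDeclaredConstructor().newInstance();
                    } catch (ReflectiveOperationException e) {
                        throw new IllegalStateException("cannot load DAO " + name, e);
                    }
                });
                Method m = METHOD_CACHE.computeIfAbsent(daoClassName + "#" + crudMethod, key -> {
                    try {
                        // Illustrative assumption: every CRUD method takes one Object argument.
                        return dao.getClass().getMethod(crudMethod, Object.class);
                    } catch (NoSuchMethodException e) {
                        throw new IllegalStateException("no method " + key, e);
                    }
                });
                return m.invoke(dao, json); // cheap once the Method is cached
            }
        }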

  • What is the best practice to perform DB Backup on Sun Cluster using OSB

    I have a query on OSB 10.4.
    I want to configure OSB 10.4 on 2 Node Sun Cluster where the oracle database is running.
When I'm performing a DB backup, my DB backup job should not fail if node1 fails. What is the best practice to achieve this?

    Hi,
Each host that participates in an OSB administrative domain must also have some pre-configured way to resolve a host name to an IP address. Use DNS, NIS, etc. to do this.
    Specify the cluster IP in OSB, so that OSB always looks for the cluster IP only, instead of the physical IPs of each node.
    Explanation:
    Whether it is a 2-node or a 4-node cluster, when the cluster software is installed on these nodes we have to configure a cluster IP, so that when one node fails the cluster IP automatically moves to another node.
    This cluster IP has to be specified whether it is an RMAN backup or an application JDBC connection. Failing over to the second node / another node is the job of the cluster IP. So wherever we have a cluster configuration, we have to specify the CLUSTER IP in all the failover places.
    Hope it helps..
    Thanks
    LaserSoft
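
    To make the advice concrete, here is a minimal, hedged illustration: the client (whether RMAN or JDBC) addresses the cluster virtual IP or its DNS name, never a physical node. The host name, port, service name and credentials below are placeholders, not from the original post.

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class ClusterVipConnect {
            public static void main(String[] args) throws Exception {
                // "db-cluster-vip" must resolve (via DNS, NIS, etc.) to whichever node
                // currently holds the cluster IP; the client never needs to know which.
                String url = "jdbc:oracle:thin:@//db-cluster-vip:1521/ORCL";
                try (Connection conn = DriverManager.getConnection(url, "scott", "tiger")) {
                    System.out.println("Connected via cluster VIP: "
                            + conn.getMetaData().getURL());
                }
            }
        }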

  • High performance website, best practices?

    Hello all,
    I'm working on a system with a web service/Hibernate (Java code linking web pages to the database) front-end which is expected to process up to 12,000 transactions per second with zero downtime. We're at the development/demonstration stage for phase 1 functionality but I don't think there has been much of a planning stage to make sure the metrics can be reached. I've not worked on a system with this many transactions before and I've always had downtime where database and application patches can be applied. I've had a quick look into the technologies available for Oracle High Availability and, since we are using 11g with RAC I know we have at least paid for them even if we're not using them.
    There isn't a lot of programming logic in the system (no 1000-line packages accessing dozens of tables, in fact there are only about 20 tables) and there are very few updates. It's mostly inserts and small queries getting a piece of data for use in the front-end.
    What I'd like to know is the best practice development for this type of system. As far as I know, the only person on the team with authority and an opinion on technical architecture wants to use the database as a store of data and move all the logic into the front-end. The thinking behind this is
    1) it's easier to load balance or increase capacity in the front-end
    2) the database will be the bottleneck in the system so should have as little demand placed on it as possible
    3) pl/sql packages cannot always be updated without downtime (I'm not sure if this is true or if it can be managed -- the concern is that packages become invalid whilst the upgrade script is running -- or how updates in the front-end could be managed any better, especially if they need to be coordinated with changes to tables)
    4) reference tables can be cached in the front-end to cut down on data access
    Views please!

    Couple of thoughts
- Zero downtime (or at least very close to it) can be achievable, but there is a rapidly diminishing return on cost in squeezing the last few percent out of uptime. If you can have the odd planned maintenance window, you can make your life a lot easier.
    - If you decide ahead of time that the database is going to be the bottleneck, then it probably will be!
    - I can understand where they are coming from with their thinking: the web tier will be easier to scale out, but eventually all that data still needs to get into the database. The database layer is where you need to start the design to get the most out of the platform. Can it handle 12,000 TPS? If it can't, then it doesn't matter how quickly your application layer can service those requests.
    - If this is mainly inserts, could these be queued in some sort of message queue? Allow the clients to get an instant (well, almost) 'Done' confirmation, where the database will be eventually consistent? It very much depends on what this is being used for, of course, but this could help with both the performance (at least the 'perceived' performance) and the uptime requirement (see the sketch after this list).
    - Caching fairly static data sounds like a good idea to me.
    Carl
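
    A hedged, in-process illustration of the queuing idea above; a real deployment would use a durable queue (JMS, Oracle AQ, etc.), and all names here are invented for the example. The request thread enqueues and returns "done" immediately, while a single background writer drains the queue in batches.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class WriteBehindQueue {

            private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

            // Called on the request path: O(1), no database round-trip.
            public void submit(String row) {
                pending.add(row);
            }

            // Single background writer: batch rows and insert them together.
            public void startWriter() {
                Thread writer = new Thread(() -> {
                    List<String> batch = new ArrayList<>();
                    while (!Thread.currentThread().isInterrupted()) {
                        try {
                            batch.add(pending.take());   // block until a row arrives
                            pending.drainTo(batch, 499); // then grab up to 500 total
                            insertBatch(batch);          // one multi-row INSERT
                            batch.clear();
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
                writer.setDaemon(true);
                writer.start();
            }

            private void insertBatch(List<String> batch) {
                // placeholder for a JDBC batch insert
            }
        }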

  • CE Benchmark/Performance Best Practice Tips

    We are in the early stages of starting a CE project where we expect a high volume of web service calls per day (e.g. customer master service, material master service, pricing service, order creation service etc).
Are there any best-practice guidelines which could be taken into account to avoid any possible performance problems within the web service "infrastructure"?
    Should master data normally residing in the backend ECC server be duplicated outside ECC?
    e.g. if individual reads of the master data in the backend system take 2 seconds per call, would it be more efficient to duplicate the master data on the SAP AS Java server, or elsewhere, if the master data is expected to be read thousands of times each day?
    Also, what kind of benchmarking tools (SAP std or 3rd party) are available to assess the performance of the different layers of the infrastructure during the integration + volume testing phases?
    I've tried looking for any such documentation on SDN, OSS, help.sap.com, but to no avail.
    Many thanks in advance for any help.
    Ali Crawshaw

    Hi Ali,
For performance and benchmarking, have you had a look at Wily Introscope?
    The following presentation has some interesting information [Wily Introscope supports CE 7.1|http://www.google.co.za/url?sa=t&source=web&ct=res&cd=7&ved=0CCEQFjAG&url=http%3A%2F%2Fwww.thenewreality.be%2Fpresentations%2Fpdf%2FDay2Track6%2F265CTAC.pdf&ei=BUGES-yyBNWJ4QaN7KzXAQ&usg=AFQjCNE9qA310z2KKSMk4d42oyjuXJ_TfA&sig2=VD1iQvCUmWZMB5OB-Z4gEQ]
    With regards to best practice guidelines, if you are using PI for service routing try to keep to asynch services as far as possible, asynch with acknowledgments if need be. Make sure your CE Java AS is well tuned according to the SAP best practice.
Will you be using SAP Global Data Types for your service development? If you are, then the one performance tip I have regarding the use of GDTs is to keep your GDT structures as small (in number of fields) as possible, as large GDT structures have an impact on memory consumption at runtime.
    Cheers
    Phillip
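
    On the question of duplicating slow-to-read master data: one common shape for this is a read-through cache with a time-to-live, sketched below under stated assumptions. fetchFromEcc() is a hypothetical stand-in for the 2-second backend call; whether the copy lives on the AS Java server or elsewhere is a deployment choice.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class MasterDataCache {

            private record Entry(String value, long loadedAtMillis) {}

            private final Map<String, Entry> cache = new ConcurrentHashMap<>();
            private final long ttlMillis;

            public MasterDataCache(long ttlMillis) {
                this.ttlMillis = ttlMillis;
            }

            public String get(String key) {
                Entry e = cache.get(key);
                if (e == null || System.currentTimeMillis() - e.loadedAtMillis > ttlMillis) {
                    String fresh = fetchFromEcc(key); // slow backend read (~2 s)
                    cache.put(key, new Entry(fresh, System.currentTimeMillis()));
                    return fresh;
                }
                return e.value; // fast path: in-memory hit
            }

            private String fetchFromEcc(String key) {
                return "value-for-" + key; // placeholder for the real backend call
            }
        }

    With, say, a 15-minute TTL, data read thousands of times a day costs one backend call per key per 15 minutes; the trade-off is serving master data that can be up to one TTL stale.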

  • XDK -Performance best practices etc

All,
    I am looking for some best practices, with specific emphasis on performance, for the Oracle XDK.
    Can anyone share any such doc or point me to white papers, etc.?
    Thanks

    The following article discusses how to choose the most performant parsing strategy based on your application requirements.
    Parsing XML Efficiently
    http://www.oracle.com/technology/oramag/oracle/03-sep/o53devxml.html
    -Blaise
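
    As a generic, hedged illustration of the streaming strategy such articles typically weigh against DOM (this sketch is not taken from the article): SAX processes elements as they stream past and keeps memory flat, which is usually the performant choice for large documents that don't need random access.

        import java.io.StringReader;
        import javax.xml.parsers.SAXParser;
        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.Attributes;
        import org.xml.sax.InputSource;
        import org.xml.sax.helpers.DefaultHandler;

        public class SaxCountExample {
            public static void main(String[] args) throws Exception {
                String xml = "<orders><order id=\"1\"/><order id=\"2\"/></orders>";
                SAXParser parser = SAXParserFactory.newInstance().newSAXParser();

                // Count elements as they stream past instead of building a DOM tree.
                final int[] count = {0};
                parser.parse(new InputSource(new StringReader(xml)), new DefaultHandler() {
                    @Override
                    public void startElement(String uri, String localName, String qName,
                                             Attributes attributes) {
                        if ("order".equals(qName)) count[0]++;
                    }
                });
                System.out.println("orders parsed: " + count[0]); // prints 2
            }
        }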

  • OWB Repository Performance, Best Practice

    Hi
We are considering installing the OWB repository in its own database, dedicated solely to the design repository, to achieve maximum performance in the Design Center.
    Does anyone have knowledge of best practice in setting up the database to OWB repository? (db parameters, block size and so on).
    We are currently using Release 11.1.
    BR
    Klaus

You can find all this information in the documentation, right here:
    http://download.oracle.com/docs/cd/B31080_01/doc/install.102/b28224/reqs01.htm#sthref48
You will find all the initialization parameters for the runtime instance and for the design instance.
    Success
    Nico

  • Coherence Best Practices and Performance

I'm starting to use Coherence and I'd like to know if someone could point me to some docs on best practices and performance optimizations when using it.
    BTW, I haven't had the time to go through the entire Oracle documentation.
    Regards

    Hi
If you are new to Coherence (or even for people who are not that new), one of the best things you can do is read this book: http://www.packtpub.com/oracle-coherence-35/book - I know it says Coherence 3.5 and we are currently on 3.7, but it is still very relevant.
    You don't need to go through all the documentation, but at least try the introductions and try out some of the examples. You need to know the basics; otherwise it makes it harder for people to either understand what you want or give you detailed enough answers to questions.
    For performance optimizations, it depends a lot on your use cases and what you are doing; there are a number of things you can do with Coherence to help performance, but as with anything there are trade-offs. Coherence on the server side is a Java process, and when tuning and sorting out performance issues I spend a lot of time with the usual tools for Java such as VisualVM (or JConsole), tuning GC, and looking at thread dumps and stack traces.
    Finally, there are plenty of people on these forums happy to answer your questions in return for a few forum points, so just ask.
    JK
