Performance bottleneck on Production

Hi,
We have a Development -> Production landscape. The Development system is rarely used.
The Production system has 1.8 processing units, 9.5 GB RAM, and over 60% free disk space. We are experiencing performance issues. During peak hours the CPU usage is around 40% most of the time, but at times it suddenly rises above 150%. There are 140 users, and all of them have to wait a long time to carry out their day-to-day processing.
We experience long-running jobs, and response time is much higher than the normal threshold.
Please advise whether the hardware is sufficient to handle the system load, and help us fine-tune the system.
Regards,
Sai R.

Hi Sai,
As with EVERY performance problem:
It needs to be investigated FIRST, and only then should you make a decision!
You are talking about 1.8 CPUs, but you see 150% CPU load. That is good news in one sense, because the CPUs can clearly still take on work. On the other hand, we do not know how many CPUs this LPAR really has: 150% of 1.8 processing units is about 2.7 CPUs' worth of work, so obviously at least that many are available...
But you never know whether the workload itself is the issue, or wrong customizing, missing indexes, or bad ABAP code. This needs to be investigated, and then you can do the appropriate thing. Without that, you are just doing something without any useful reason.
The interesting thing here is that even we, who do these kinds of analyses quite often, are often wrong with our first ideas...
=> I would not believe anybody who gives you a recommendation based on the above data (or only a bit more)...
Regards
Volker Gueldenpfennig, consolut international ag
http://www.consolut.net - http://www.4soi.de - http://www.easymarketplace.de

Similar Messages

  • Performance bottleneck with subreports

    I have an SSRS performance bottleneck on my production server that we have diagnosed as being related to the use of subreports.
    Background facts:
    * Our Production and Development servers are identically configured
    * We've tried the basic restart/reboot activities; they didn't change anything about the performance.
    * The Development server was "cloned" from the Production server about a month ago, so all application settings (memory usage, logging, etc.) are identical between the two
    * For the bottlenecked report the underlying stored procedure executes in 3 seconds, returning 901 rows, in both environments with the same parameters. The execution plan is identical between the two servers, and the underlying tables and indexing are identical. Stats run regularly on both servers.
    * In the development environment the report runs in 12 seconds. But on Production the report takes well over a minute to return, ranging from 1:10 up to 1:40.
    * If I point the Development SSRS report to the PROD datasource I get a return time of 14 seconds (the additional two seconds due to the transfer of data over the network).
    * If I point the Production SSRS report to the DEV datasource I get a return time of well over a minute.
    * I have tried deleting the Production report definition and uploading it as new to see if there was a corruption issue, this didn't change the runtimes.
    * Out of the hundreds of Production SSRS reports that we have, the only two that exhibit dramatically different performance between Dev and Prod are the ones that contain subreports.
    * Queries against the ReportServerTempDB also confirm that these two reports are the major contributors to TempDB utilization.
    * We have verified that the ReportServerTempDB is being backed up and shrunk on a regular basis.
    These factors tell me that the issue is not with the database or the SQL.  The tests on the Development server also prove that the reports and subreports are not an issue in themselves - it is possible to get acceptable performance from them in the
    Development environment, or when they are pointed from the Dev reportserver over to the Prod database.
    Based on these details, what should we check on our Prod server to resolve the performance issue with subreports on this particular server?

    Hi GottaLoveSQL,
    According to your description, you want to improve the performance of a report that contains subreports, right?
    In Reporting Services, using subreports impacts report performance because the report server processes each instance of a subreport as a separate report. So the best approach is to avoid subreports by using the Lookup, MultiLookup, and LookupSet functions, which can bridge different datasets. In this scenario, we also suggest you cache the report that contains the subreport. We can create a cache refresh plan for the report in Report Manager. Please refer to the link below:
    http://technet.microsoft.com/en-us/library/ms155927.aspx
    Reference:
    Report Performance Optimization Tips (Subreports, Drilldown)
    Performance, Snapshots, Caching (Reporting Services)
    Performance Issue in SSRS 2008
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou

  • How do I perform a "Suite Product Activation" so that Acrobat will start working on my new Retina MacBook Pro?

    Like others, I have recently upgraded my MacBook Pro to the next generation and migrated all my information from old to new. Everything works seamlessly EXCEPT Acrobat. Photoshop, InDesign, Bridge and Lightroom all seem functional. Acrobat hangs with an error message: "Suite Product Activation Needed. Acrobat was installed as part of a suite. To enable Acrobat, please start another element of the suite (such as Photoshop)." Needless to say, starting another element of the suite has NO impact on Acrobat.
    This has to be a common problem. Now that Apple has made it really easy to migrate information from old to new machines, it has to come up all the time. What surprises me greatly is that I can't find any coherent answer in these forums.
    So how do I get Acrobat running again?

    Hi Anubha,
    I do not understand what you said below.
    I am running Adobe Acrobat Pro Version 9.5.5. I do not remember whether it came with Photoshop or InDesign.
    When I open Photoshop, it opens without my having to follow any instructions to activate the software. As a matter of fact, I cannot find my Photoshop serial number anywhere in the Photoshop program itself. I do know it from my profile at Adobe.com. Are you suggesting I deactivate Photoshop on the new computer and then reactivate it using my serial number? Will it reactivate?
    When you say
    "/Library/Application Support/Adobe" at the root level of the startup disk (not the Library folder inside a user's Home folder)
    what do you mean? I do not have a startup disk. I have the original installation disk but that version of Photoshop has been updated a few times.
    After staring at your instructions for a while, I realized that you might be talking about the Library/Application Support folders resident on my Macintosh HD, although why you called it a startup disk is unclear to me. IAC, I went into those folders and duly moved the three folders into a new folder I called “Acrob1”, restarted Adobe Acrobat 9, and got the following error message: “AMT Subsystem Failure. The licensing subsystem has failed catastrophically. You must reinstall or call customer support.” with a small (6).
    By undoing my actions I am back to the status quo ante.
    Now what?
    Regards, Robert

  • OWB Performance Bottleneck

    Is there any session log produced by an OWB mapping execution, other than the results visible in the OWB Runtime Audit Browser?
    Suppose the mapping is doing a hash join that is consuming too much time, and I would like to see which tables are being joined at that instant. This would help me identify the exact problem area in the mapping. Does OWB provide a session log, or any other place, where I can get information about the operation that is causing a performance bottleneck?
    regards
    -AP
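
    One place where such in-flight information does surface is Oracle's v$session_longops view, which reports long-running operations such as hash joins together with their target. A minimal JDBC sketch of polling it (the view and its columns are standard Oracle; the connection URL and credentials below are placeholders):

        import java.sql.*;

        // Sketch: list operations still in progress (e.g. "Hash Join")
        // from v$session_longops. URL/credentials are placeholders.
        public class LongOpsMonitor {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "monitor", "secret");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                         "SELECT sid, opname, target, sofar, totalwork " +
                         "FROM v$session_longops WHERE sofar < totalwork")) {
                    while (rs.next()) {
                        System.out.printf("sid=%d op=%s target=%s %d/%d%n",
                            rs.getLong("sid"), rs.getString("opname"),
                            rs.getString("target"), rs.getLong("sofar"),
                            rs.getLong("totalwork"));
                    }
                }
            }
        }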

    Thanks for all your suggestions. The mapping was using a join between some 4-5 tables, and I think this was where the mapping was getting stuck during execution in set-based mode. Moreover, the mapping loads some 70 million records into the target table. Loading such a huge volume of data, in set-based mode, and with a massive join at the beginning, the mapping was bound to get stuck somewhere.
    The solution that came up was to create a table from the join condition and use that table as input to the mapping. This gets rid of the joiner at the very beginning and also lets the mapping run in row-based (target only) mode. The data (70 million records) got loaded in some 4 hours.
    regards
    -AP

  • Will RAC's performance bottleneck be the shared disk storage ?

    Hi All
    I'm studying RAC and I'm concerned about RAC's I/O performance bottleneck.
    If I have 10 nodes and they all use the same storage disks to hold the database, then they will do I/O to those disks simultaneously.
    Maybe we get more latency...
    Will that be a performance problem?
    How does RAC solve this kind of problem?
    Thanks.

    J.Laurence wrote:
    > I see FC can solve the problem with bandwidth (throughput).
    There are a couple of layers in the I/O subsystem for RAC.
    There is Cache Fusion, as already mentioned. Why read a data block from disk when another node has it in its buffer cache and can provide that instead (over the interconnect communication layer)?
    Then there are the actual pipes between the server nodes and the storage system. Fibre is slow and not what the latest RAC architecture (such as Exadata) uses.
    Traditionally, you pop an HBA card into the server that provides you with 2 fibre channel pipes to the storage switch. These usually run at 2Gb/s, and the I/O driver can load balance and fail over. So in theory it can scale to 4Gb/s and provide redundancy should one pipe fail.
    Exadata and more modern RAC systems use HCA cards running InfiniBand (IB). This provides scalability of up to 40Gb/s. They are also dual port, which means that you have 2 cables running into the storage switch.
    IB supports a protocol called RDMA (Remote Direct Memory Access). This essentially allows memory to be shared across the IB fabric layer, and is used to read data blocks from the storage array's buffer cache into the local Oracle RAC instance's buffer cache.
    Port-to-port latency for a properly configured IB layer running QDR (quad data rate) can be lower than 70ns.
    And this does not stop there. You can of course add a huge memory cache in the storage array (which is essentially a server with a bunch of disks). Current x86-64 motherboard technology supports up to 512GB RAM.
    Exadata takes it even further as special ASM software on the storage node reconstructs data blocks on the fly to supply the RAC instance with only relevant data. This reduces the data volume to push from the storage node to the database node.
    So fibre channels in this sense is a bit dated. As is GigE.
    But what about the hard drives' read and write I/O? Not a problem, as the storage array deals with that. A RAC instance that writes a data block writes it into the storage buffer cache, where the storage array software manages that cache and does the physical write to disk.
    Of course, it will stripe heavily and will have 24+ disk controllers available to write that data block, so do not think of I/O latency in terms of the actual speed of a single disk.

  • Major performance bottleneck in JSF RI 1.0

    We've been doing some load testing this week, and have come up with what I believe is a major performance bottleneck in the reference implementation.
    Our test suite was run on two different application servers (JBoss and Oracle), and we found that in both cases response time degraded dramatically when hitting about 25-30 concurrent users.
    On analyzing a thread dump when the application server was in this state we noticed that close to twenty threads were waiting on the same locked resource.
    The resource is the 'descriptors' static field in the javax.faces.component.UIComponentBase class. It is a WeakHashMap. The contention occurs in the getPropertyDescriptors method, which has a large synchronized block.
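
    A simplified sketch of the pattern described above (an illustration, not the actual RI source) shows why every request thread ends up queuing on a single monitor:

        import java.beans.PropertyDescriptor;
        import java.util.Map;
        import java.util.WeakHashMap;

        // Illustration of the contention pattern: one static map shared by
        // all component instances, guarded by a single lock.
        public abstract class ComponentBaseSketch {

            // One map for the whole JVM.
            private static final Map<Class<?>, PropertyDescriptor[]> descriptors =
                    new WeakHashMap<Class<?>, PropertyDescriptor[]>();

            protected PropertyDescriptor[] getPropertyDescriptors() {
                // Every caller, on every request thread, serializes here.
                synchronized (descriptors) {
                    PropertyDescriptor[] pd = descriptors.get(getClass());
                    if (pd == null) {
                        pd = introspect(getClass()); // expensive work under the lock
                        descriptors.put(getClass(), pd);
                    }
                    return pd;
                }
            }

            protected abstract PropertyDescriptor[] introspect(Class<?> type);
        }

    With 25-30 concurrent users, threads pile up behind that one lock, which matches the thread dump described above.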

    Well not the answer I was hoping for. But at least that's clear.
    Jayashri, I'm using the JSF RI for an application that will be delivered to testing in August. Can you give advice on whether I can expect an update for this bottleneck problem within that timeframe?
    Sincerely,
    Joost de Vries
    ps hi netbug. Saw you at theserverside! :-)

  • J2EE application performance bottlenecks

    For anyone interested in learning how to resolve J2EE application performance bottlenecks, I found a great resource:
    http://www.cyanea.com/email/throttle_form2.html
    Registering with them can have you win 1 of 3 iPod minis.

    I agree with yawmark's response #1 in one of your evil spams http://forum.java.sun.com/thread.jsp?thread=514026&forum=54&message=2446641

  • Array as Shared Memory - performance bottleneck

    Hello,
    I am currently working on a multi-threaded application in which many threads work on shared memory.
    I am wondering why the application doesn't become faster by using many threads (I have an i7 machine).
    Here is an example of the initialization in a single thread:
              final int arrayLength = (int) 1e7;
              final int threadNumber = Runtime.getRuntime().availableProcessors();
              final int offset = arrayLength / threadNumber; // used by the multi-threaded version below
              long startTime;

              // init array in single thread
              Integer[] a1 = new Integer[arrayLength];
              startTime = System.currentTimeMillis();
              for (int i = 0; i < arrayLength; i++) {
                   a1[i] = i;
              }
              System.out.println("single thread=" + (System.currentTimeMillis() - startTime));
    and here the initialization with many threads:
              // init array in many threads
              final Integer[] a3 = new Integer[arrayLength];
              List<Thread> threadList = new ArrayList<Thread>();
              for (int i = 0; i < threadNumber; i++) {
                   final int iF = i;
                   Thread t = new Thread(new Runnable() {
                        @Override
                        public void run() {
                             int end = (iF + 1) * offset;
                             if (iF == (threadNumber - 1))
                                  end = a3.length; // last thread covers the remainder
                             for (int i = iF * offset; i < end; i++) {
                                  a3[i] = i;
                             }
                        }
                   });
                   threadList.add(t);
              }
              startTime = System.currentTimeMillis();
              for (Thread t : threadList)
                   t.start();
              for (Thread t : threadList)
                   t.join();
    After execution it looks like this:
    single thread=2372
    many threads List=3760
    I have an i7 with 4 GB RAM.
    System + parameters:
    JVM 64-bit, JDK 1.6.0_14
    -Xmx3g
    Why is the execution with one thread faster than the execution with many threads?
    As you can see, I didn't use any synchronization.
    Maybe I have to configure the JVM in some way to gain the desired performance (I expected a performance gain of about 8x on the i7)?

    Hello,
    I'm from happy-guys (http://www.happy-guys.com), and we developed a new sorting algorithm to sort an array on a multi-core machine.
    But after the algorithm was implemented, it was a little bit slower than the standard sorting algorithm from the JDK (Arrays.sort(...)). While searching for the reason, I created performance tests which show that arrays in Java don't allow many threads to access them at the same time.
    The bad news is: different threads slow each other down even if they use different array objects.
    I believe all array objects are natively managed by a global manager in the JVM, so this manager creates a global lock for all threads.
    Only one thread can access any array at a time!
    I used:
    Software:
    1)Windows Vista 64bit,
    2) java version "1.6.0_14"
    Java(TM) SE Runtime Environment (build 1.6.0_14-b08)
    Java HotSpot(TM) 64-Bit Server VM (build 14.0-b16, mixed mode)
    Hardware:
    Intel(R) Core(TM) i7 CPU 920 @ 2.67 GHz (2.79 GHz), 6 GB RAM
    Test1: initialization of array in a single thread
    Test2: the array initialization in many threads on the single array
    Test3: array initialization in many threads on many arrays
    Results in ms:
    Test1 = 5588
    Test2 = 4976
    Test3 = 5429
    Test1:
    package org.happy.concurrent.sort.forum;
    * simulates the initialization of array in a single thread
    * @author Andreas Hollmann
    public class ArraySingleThread {
         public static void main(String[] args) throws InterruptedException {
              final int arrayLength = (int)2e7;
              long startTime;
                    * init array in single thread
                   Integer[] a1 = new Integer[arrayLength];
                   startTime = System.currentTimeMillis();
                   for(int i=0; i<arrayLength; i++){
                        a1[i] = i;
                   System.out.println("single thread=" + (System.currentTimeMillis()-startTime));
    }Test2:
    package org.happy.concurrent.sort.forum;

    import java.util.ArrayList;
    import java.util.List;

    /**
     * Simulates the array initialization in many threads on a single array.
     * @author Andreas Hollmann
     */
    public class ArrayManyThreads {
         public static void main(String[] args) throws InterruptedException {
              final int arrayLength = (int) 2e7;
              final int threadNumber = Runtime.getRuntime().availableProcessors();
              long startTime;
              final int offset = arrayLength / threadNumber;

              // init array in many threads
              final Integer[] a = new Integer[arrayLength];
              List<Thread> threadList = new ArrayList<Thread>();
              for (int i = 0; i < threadNumber; i++) {
                   final int iF = i;
                   Thread t = new Thread(new Runnable() {
                        @Override
                        public void run() {
                             int end = (iF + 1) * offset;
                             if (iF == (threadNumber - 1))
                                  end = a.length; // last thread covers the remainder
                             for (int i = iF * offset; i < end; i++) {
                                  a[i] = i;
                             }
                        }
                   });
                   threadList.add(t);
              }
              startTime = System.currentTimeMillis();
              for (Thread t : threadList)
                   t.start();
              for (Thread t : threadList)
                   t.join();
              System.out.println("many threads List=" + (System.currentTimeMillis() - startTime));
         }
    }
    Test3:
    package org.happy.concurrent.sort.forum;

    import java.util.ArrayList;
    import java.util.List;

    /**
     * Simulates the array initialization in many threads on many arrays.
     * @author Andreas Hollmann
     */
    public class ArrayManyThreadsManyArrays {
         public static void main(String[] args) throws InterruptedException {
              final int arrayLength = (int) 2e7;
              final int threadNumber = Runtime.getRuntime().availableProcessors();
              long startTime;
              final int offset = arrayLength / threadNumber;

              // init many arrays in many threads
              final ArrayList<Integer[]> list = new ArrayList<Integer[]>();
              for (int i = 0; i < threadNumber; i++) {
                   int size = offset;
                   if (i == (threadNumber - 1))
                        size = offset + arrayLength % threadNumber; // last array takes the remainder
                   list.add(new Integer[size]);
              }
              List<Thread> threadList = new ArrayList<Thread>();
              for (int i = 0; i < threadNumber; i++) {
                   final int index = i;
                   Thread t = new Thread(new Runnable() {
                        @Override
                        public void run() {
                             Integer[] a = list.get(index);
                             int value = index * offset;
                             for (int i = 0; i < a.length; i++) {
                                  value++;
                                  a[i] = value;
                             }
                        }
                   });
                   threadList.add(t);
              }
              startTime = System.currentTimeMillis();
              for (Thread t : threadList)
                   t.start();
              for (Thread t : threadList)
                   t.join();
              System.out.println("many threads - many List=" + (System.currentTimeMillis() - startTime));
         }
    }
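
    One variable the tests above do not isolate is autoboxing: every a[i] = i allocates a new Integer object, so the threads may be competing for the allocator and garbage collector rather than for the arrays themselves. A quick way to check this (a sketch along the lines of Test2, not one of the original tests) is a primitive int[] variant:

         import java.util.ArrayList;
         import java.util.List;

         // Sketch: same workload as Test2, but with a primitive int[] so no
         // Integer objects are allocated. If this variant scales with the
         // thread count while Test2 does not, allocation/GC pressure - not a
         // global array lock - is the likelier bottleneck.
         public class ArrayManyThreadsPrimitive {
              public static void main(String[] args) throws InterruptedException {
                   final int arrayLength = (int) 2e7;
                   final int threadNumber = Runtime.getRuntime().availableProcessors();
                   final int offset = arrayLength / threadNumber;
                   final int[] a = new int[arrayLength];
                   List<Thread> threads = new ArrayList<Thread>();
                   for (int i = 0; i < threadNumber; i++) {
                        final int iF = i;
                        threads.add(new Thread(new Runnable() {
                             public void run() {
                                  int end = (iF == threadNumber - 1) ? a.length : (iF + 1) * offset;
                                  for (int j = iF * offset; j < end; j++) {
                                       a[j] = j;
                                  }
                             }
                        }));
                   }
                   long startTime = System.currentTimeMillis();
                   for (Thread t : threads) t.start();
                   for (Thread t : threads) t.join();
                   System.out.println("primitive int[]=" + (System.currentTimeMillis() - startTime));
              }
         }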

  • Performance bottleneck of hard drive: assets vs. cache vs. render-to drive?

    So I'm beefing up my old Mac Pro tower (5,1) and was wondering which combination of hard drives is fastest to use, if anyone has any firsthand or theoretical suggestions...
    if someone has all three of these hard drives:
    A) PCIe SSD (OWC Mercury Accelsior_E2 PCI Express SSD)
    B) internal drive bay SSD
    C) external SSD connected via 600MB/s eSATA port of the above linked card
    … which is best to use in combination for the following in After Effects CC/CC2014?
    1) storage of assets files used in the AE project (ie. 1080/4k/RAW/etc footage, PSD files)
    2) AE disk cache
    3) the drive that AE is rendering to
    … for example, is 1A + 2C + 3B the fastest combination for rendering? And is 1AC + 2B the fastest while working in AE?
    between assets, disk cache, and render location, which are more of a performance bottleneck?
    and does the optimal combination vary if someone had 16 GB vs 64GB vs 128GB of RAM?
    thanks in advance for any insight!

    Well, the long and short answer is: it won't matter. All your system buses only have so much overall transfer bandwidth, and ultimately they all end up being piped in some way through your PCI bus, which in addition is shared by your graphics card, audio devices and what have you. There are going to be wait states and data collisions, and whether or not you can make your machine fly to Mars is ultimately not relevant. There may perhaps be some tiny advantage in using a native PCI card SSD for the cache, but otherwise the overall combined data transfer rates will be way above and beyond what your system can handle, so it will throttle one way or the other.
    Mylenium

  • Performing SPAU in Production System

    Dear Team,
    We are in the process of upgrading our Production system from EHP3 to EHP7.
    We upgraded our Quality system with SPDD and SPAU transport requests from Development system and it finished successfully.
    But after upgrading our Kernel in Production system and before performing the EHP7 upgrade, we applied two SAP notes in Production system, these notes were transported from our temporary development(un-upgraded) system(parallel landscape).
    While performing the EHP upgrade in the Production system, SUM prompted us with the message below:
    “Decide about Incomplete Modification Transport”
    CAUTION: The program has found the transport "MEDK906662"(our SPAU transport) for SPAU but it does not contain all objects which need adjustment.
    When we checked the log file, it shows that two objects are missing from the SPAU transport. We suspect these are the SAP Notes which we applied after the kernel upgrade and which are missing from the SPAU transport request.
    Please let me know how we can correct this issue in the Production system. I know that SPAU will appear in the Production system, but will the system allow me to re-apply/reset the SAP Notes in SPAU, as this system will be non-modifiable?
    Regards
    Imran

    SE06.
    System Change Option: global setting "Modifiable", software components "Modifiable", namespaces "Modifiable".
    Client setting -> select client -> automatic recording of changes, changes to repository and cross-client customising allowed, protection level 0.
    If that doesn't work, give yourself SAP_ALL and try again.
    If that doesn't work, contact SAP, because something is messed up. I've successfully applied notes in P systems during an upgrade, including ones that modify SAP_APPL.

  • How can we improve performance while selecting production orders from RESB

    Dear all,
    There is a performance issue in a report which compares sales orders and production orders.
    Below is the code; it reads production order data from RESB with the SELECT statement shown.
    Can anybody tell me how we can improve the performance? Should we use an index, and if yes, how?
    * read sales order data
      SELECT vbeln posnr arktx zz_cl zz_qty
        INTO (itab-vbeln, itab-sposnr, itab-arktx, itab-zz_cl, itab-zz_qty)
        FROM vbap
        WHERE vbeln = p_vbeln
        AND   uepos = p_posnr.

        itab-so_qty = itab-zz_cl * itab-zz_qty / 1000.
        CONCATENATE itab-vbeln itab-sposnr
          INTO itab-document SEPARATED BY '/'.
        CLEAR total_pro.

    * read production order data
        SELECT aufnr posnr roms1 roanz
          INTO (itab-aufnr, itab-pposnr, itab-roms1, itab-roanz)
          FROM resb
          WHERE kdauf = p_vbeln
          AND   ablad = itab-sposnr+2.
        " ...
        ENDSELECT.
      ENDSELECT.

    Himanshu,
    Put a breakpoint before these two SELECT statements and execute the report in production. This way you will find out which SELECT statement is taking the most time to execute.
    In both SELECT statements the WHERE clause does not use the primary keys.
    Coming to the point of selecting the data from VBAP, check SAP Note 185530 and modify the SELECT statement accordingly.
    As far as the table RESB is concerned, here also the WHERE clause doesn't have the primary keys. Check SAP Note 187906.
    I guess not using the primary keys is what is marring the performance.
    K.Kiran.

  • Performance bottleneck with 2.2.1 and 2008 R2 OS VMs

    Hi,
    I have a DL370 G6 with Oracle VM Server 2.2.1 installed:
    72 GB of memory and 2 dual + quad core processors.
    All VMs are installed on local disk (6 x 300 GB in RAID 5).
    I have 2 NICs connected to a switch for LAN traffic.
    We have 10 VMs with the 2008 R2 OS on them.
    The overall performance of these VMs is really horrible.
    They are very, very slow.
    Installing a database on one takes 4 hours, even though each VM has 6 GB of RAM.
    Restarting a system takes around 20 minutes.
    Has anybody tried this many VMs on one server?
    Is there any tool, or any way, to see what the issue is, or whether there is a bottleneck on the server?
    2008 R2 is generally a resource-hungry OS, but still, the overall performance is really horrible.

    hi,
    hdparm -T /dev/cciss/c0d0 (the whole drive) gives:
    /dev/cciss/c0d0
    Timing cached reads: 31088 MB in 1.99 seconds = 15595.54 MB/sec
    HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
    HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
    hdparm -T /dev/cciss/c0d0p5 (which is /OVS) gives:
    /dev/cciss/c0d0p5
    Timing cached reads: 30636 MB in 1.99 seconds = 15364.10 MB/sec
    HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
    HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
    All my VM guests run Windows 2008 R2 64-bit, which is a very new Windows OS. Oracle came out with new PV drivers for it, and I suspect that could be the reason for all this resource bottleneck?
    For the I/O tests on the VM server:
    1) iostat
    avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
               0.11   0.00     0.03     0.24    0.00  99.61
    Device              tps  Blk_read/s  Blk_wrtn/s
    cciss/c0d0 (disk)  41.90     186.32       26.05
    cciss/c0d0p1 /boot  0.00       0.00        0.00
    cciss/c0d0p2        0.00       0.00        0.00
    cciss/c0d0p3 /      1.90       1.83       43.06
    cciss/c0d0p4        0.00       0.00        0.00
    cciss/c0d0p5 /OVS  40.00     184.49      295.82
    2) vmstat
    swpd    free    buff   cache
      92  166632   94940   53444
    3) sar
    To display block I/O activity:
    sar -b 3 100
    Average: tps 41.18, rtps 7.17, wtps 34.01, bread/s 188.99, bwrtn/s 588.64
    To display block I/O activity for each block device:
    sar -d 3 100
    Does this look OK? Is there any way to improve the overall performance, like enabling or disabling something? We are facing a very bad problem here with all the VMs with the 2008 R2 OS on them.

  • Structured approach to debugging performance bottlenecks for 3rd Party apps

    Hi All,
    I am facing a situation which I believe most app support personnel and DBAs in IT organizations face, but I haven't found a structured approach to solving the problem. I am hoping this thread can help filter and pull together the varied chunks of information out there in one place.
    Here is the situation. I am avoiding making it too specific, as the idea is to identify a good approach that is repeatable in other scenarios.
    We are in the process of implementing a solution using a third-party application (SAP's BPC), which sits on an Oracle database. The application implementation team has some control over how the application is used to design the solution, but no direct access to the underlying queries that the app generates. We are starting to find that as the underlying database grows (from a couple of million to tens of millions of records), the performance of certain operations is becoming very unpredictable. Sometimes an operation goes through relatively fast, while at other times it gets stuck for over an hour and then times out.
    In such situations it is a classic battle between the Oracle DBAs and the App implementation team to try and push the ball in each other's court to try and identify and "fix" the problem.
    What in your opinion would be a structured approach between the two teams to help solve the problem? For each step of the approach, please also try and see if you can point to links which further dive into specifics of executing that step.
    For example, one approach might be to ...
    1. DBA team to find a way to identify specific queries/DB operations that are taking too long. (add references here)
    2. App team to collaborate with the app manufacturer's support organization to see what design changes or parameters could alter the nature of the queries being generated or affect the size of the underlying tables. (too specific for each 3rd-party app)
    3. After exhausting (2), DBA team to analyze the remaining culprit queries and find ways to obtain better performance without changing the query or the size of the database tables, via indexes/DB parameters/etc. (add references here)
    4. After exhausting (3), DBA/Unix admin team to identify which specific hardware bottlenecks are being faced (CPUs/storage/memory) to see if hardware changes can help obtain better performance.
    Thoughts?

    >
    1. DBA team to find a way to identify specific queries/DB operations that are taking too long. (add references here)
    2. App team to collaborate with the app manufacturer's support organization to see what design changes or parameters could alter the nature of the queries being generated or affect the size of the underlying tables. (too specific for each 3rd-party app)
    3. After exhausting (2), DBA team to analyze the remaining culprit queries and find ways to obtain better performance without changing the query or the size of the database tables, via indexes/DB parameters/etc. (add references here)
    4. After exhausting (3), DBA/Unix admin team to identify which specific hardware bottlenecks are being faced (CPUs/storage/memory) to see if hardware changes can help obtain better performance.
    >
    In general your approach is correct.
    However, I'd put the priorities a different way.
    1. DBA team to find a way to identify specific queries/DB operations that are taking too long. (add references here; one illustrative sketch follows below)
    2. DBA team to analyze the culprit queries and find ways to obtain better performance without changing the query or the size of the database tables, via indexes/DB parameters/etc. (add references here)
    In collaboration with the app manufacturer's support if required.
    Indexes are transparent to application logic. They do not affect result data, only performance.
    Note that the indexes should be regular b-tree indexes, not unique or bitmap.
    Edited by: user11181920 on Nov 7, 2012 3:20 PM
    Changes to queries can be allowed here too, using Oracle query-substitution techniques (Plan Stability, Plan Management...).
    3. After exhausting (2), DBA/Unix admin team to identify which specific hardware bottlenecks are being faced (CPUs/storage/memory) to see if hardware changes can help obtain better performance.
    Not only because beefing up hardware is today a less expensive way to improve performance than software optimization (especially a redesign of the app), but mainly because, in the case of SAP, poor performance that can be improved by hardware means the sizing of the system was done incorrectly.
    SAP has a methodology to size your hardware depending on the volume of data, the number of users and the quantity of transactions.
    Sizing should be re-done if your data has grown beyond the volume that was used for the initial SAP sizing.
    4. After exhausting (3), App team to collaborate with the app manufacturer's support organization to see what design changes or parameters could alter the nature of the queries being generated or affect the size of the underlying tables. (too specific for each 3rd-party app)
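
    For step 1, one common starting point on Oracle is the v$sql view, which tracks cumulative elapsed time per statement. A minimal JDBC sketch of pulling the top statements (the view and its columns are standard Oracle; the connection URL and credentials are placeholders):

        import java.sql.*;

        // Sketch: list the top 10 SQL statements by total elapsed time.
        // v$sql and its columns are standard Oracle; URL/credentials are
        // placeholders for illustration only.
        public class TopSqlReport {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "perfmon", "secret");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                         "SELECT * FROM (" +
                         " SELECT sql_id, executions, elapsed_time, sql_text" +
                         " FROM v$sql ORDER BY elapsed_time DESC" +
                         ") WHERE ROWNUM <= 10")) {
                    while (rs.next()) {
                        System.out.printf("%s execs=%d elapsed(us)=%d %s%n",
                            rs.getString("sql_id"), rs.getLong("executions"),
                            rs.getLong("elapsed_time"), rs.getString("sql_text"));
                    }
                }
            }
        }

    Once a candidate statement is identified, a 10046 SQL trace of the affected session, formatted with tkprof, is the usual next step.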

  • Does BPM - for a synchronous interface have a performance bottleneck

    Hi All,
    Just have a small query.
    We have a scenario in which we need to receive PO details from a legacy system, create a sales order in ecc and send back a response table to the legacy system.
    Our understanding is that this can be achieved using synchronous ABAP proxies, and that it also involves BPM and abstract mappings.
    I believe that this should not pose any problems. My concern is that we are confused as to whether BPM would have performance bottlenecks. Do we have any SAP document or article which mentions that for synchronous interfaces BPM is the only way to go, and that it would not have a significant impact on performance?
    Another approach to the problem would be to create an asynchronous inbound proxy, write ABAP code within it, and call a separate outbound asynchronous proxy within the inbound proxy method. This approach looks and sounds very clumsy.
    Kindly let me know your thoughts or any links which would be useful.
    Thanks & Regards,
    Mz

    Hi Aashish,
    Thanks for your quick reply; it was helpful, but I am not using RFCs. Correct me if I am wrong, but I have explained the scenarios in detail below.
    Scenario 1. Synchronous
    1) PI Picks file from a common folder.
    2) PI does a data mapping and sends the data to ECC.
    3) ECC contains an inbound interface which receives the data and in which abap proxy code is written.
    4) The abap proxy code executes a function module and sends the response as an internal table back to PI.
    5) PI receives the response and places it in a text/csv file and places it back to another folder.
    I assume that the above would be possible only using BPM. What I understand is that in order for an interface to receive and send data, abstract mappings are to be used, and for this BPM is required. We do not have any conversions etc.; it's just a simple matter of receiving an internal table from ECC and creating a file to place in the folder.
    I also understand that BPM could have bottlenecks due to queue and cache issues: messages might be pending, or lost, etc.
    Scenario 2. Asynchronous
    1) PI Picks file from a common folder.
    2) PI does a data mapping and sends the data to ECC.
    3) ECC contains an inbound interface which receives the data and in which abap proxy code is written.
    4) ABAP Proxy code executes the same function module and calls a seperate outbound interface and passes the values to it. This would be used in sending the response back.
    5)  PI receives the response from the second interface and places it in a text/csv file and places it back to another folder.
    I would like to know which would be the better approach. Documentation/references to support your claims would be much appreciated.
    Cheers,
    Mz

  • Severe performance issues in production database

    Hi Experts,
    We recently configured RMAN in our production database using a 3rd-party tool, CommVault.
    The problem is that a database of just 47 GB is taking around 6 hours to complete the backup job. Please let me know what could be the reason.
    Further to this issue, I found out a few things: our application vendor commissioned this database server, and it seems they made some changes to the TIMEZONE settings.
    When I query some dictionary tables, for example dba_scheduler_jobs, I get the following error:
    ORA-01882: timezone region %s not found
    Whenever I connect to the database, the connection itself is very slow, and we are suffering severe performance issues in my production database.
    Your help would be much appreciated.
    Regards,
    Salai

    Hi,
    Also let us know whether you use ASM or a local file system for the datafiles.
    Your backup strategy will also be helpful:
    Do you make compressed backups?
    Full or incremental?
    Where do you back up the database to? The local filesystem, a SAN volume, offsite storage, tape storage?
    Are any other jobs running while the RMAN job is running?
    Please post the stats that you have gathered over the time period when the backup is running.
    Thanks.
