Synch Process Performance issues.

Hi,
I have a BPEL process that reads a file and then, for each transaction within the file, calls a Synch BPEL process that processes it. The Synchronous process takes about 2 min. So, if there are 25 transactions in the file, it takes about 50 min to complete the entire processing. Earlier the Synch process was an Asynch process, but we had problems when all the records were processed at once, which caused data duplication issues. Hence we made changes to make it a Synch process, but then performance has become a problem.
Can we improve the performance in any way?
Otherwise, if I merge everything into one process, moving the entire logic of the Synch process into the main one and processing each record in a while loop, will that perform better than the above?
Need suggestions. Thanks!
-Prapoorna

We are on 10.1.3.4. So can't try on 10.1.3.5.
And we are using a specific schema.
Regarding the duplication: this is an EDI BPEL process, so what was happening was, when we get the file and the customer mentioned in the file does not exist in our system (Oracle DB), the BPEL process calls the DB adapter to invoke a custom procedure that creates the customer. Because the process was Asynch earlier, if there were 4 transactions in the file, the Asynch process would get invoked 4 times all at the same time, and then we would see 2 or 3 customers created in the system with the same name, and a couple of instances would fail in the console when trying to find out whether the customer already exists, because the query returns multiple rows. That's when I thought that maybe I have to make the process wait until the 1st record is completely processed (which will create the customer record if it does not exist), so that when the 2nd record is processed it will not create the customer again as it already exists. Hence I made it a Synch process.
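Just to illustrate, here is a minimal sketch of how the create-customer procedure could be made safe against concurrent calls, assuming a unique constraint on the customer name (all object names below are hypothetical, not taken from our actual process):

    -- Hypothetical objects; assumes a unique constraint on customers(name).
    CREATE OR REPLACE PROCEDURE get_or_create_customer (
       p_name  IN  customers.name%TYPE,
       p_id    OUT customers.id%TYPE)
    AS
    BEGIN
       INSERT INTO customers (id, name)
       VALUES (customers_seq.NEXTVAL, p_name)
       RETURNING id INTO p_id;
    EXCEPTION
       WHEN DUP_VAL_ON_INDEX THEN
          -- another instance already created this customer; reuse it
          SELECT id INTO p_id FROM customers WHERE name = p_name;
    END get_or_create_customer;

With a constraint-backed procedure like this, concurrent Asynch instances could not create duplicate customers, whichever order they ran in.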
As I said earlier, do you think I can merge these two processes (the one that reads the file and the one called synchronously for each record) into one process with a while loop? Will that work faster? I would appreciate it if you can give me suggestions on what my options are.
Thanks
-Prapoorna

Similar Messages

  • Image Processing Performance Issue | JAI

    I am processing TIFF images to generate several JPG files out of it after applying image processing on it.
    Following are the transformations applied:
    1. Read TIFF image from disk. The tiff is available in form of a PlanarImage object
    2. Scaling
         /* Following is the code snippet */
         PlanarImage origImg;
         ParameterBlock pb = new ParameterBlock();
         pb.addSource(origImg);
         pb.add(scaleX);
         pb.add(scaleY);
         pb.add(0.0f);
         pb.add(0.0f);
         pb.add(Interpolation.getInstance(Interpolation.INTERP_BILINEAR));
         PlanarImage scaledImage = JAI.create("scale", pb);
     3. Conversion of planar image to buffered image. This operation is done because we need a buffered image.
         /* Following is the code snippet used */
         bufferedImage = planarImage.getAsBufferedImage();
     4. Cropping
         /* Following is the code snippet used */
         bufferedImage = bufferedImage.getSubimage(artcleX, artcleY, 302, 70);
     The performance bottleneck in the above algorithm is step 3, where we convert the planar image to a buffered image before carrying out cropping.
    The operation typically takes about 1120ms to complete and, considering the data set I am dealing with, this is a very expensive operation. Is there an
    alternative to the above-mentioned approach?
    I presume that if I could carry out the operation mentioned under step 4 above on a planar image object instead of a buffered image, I would be able to save
    considerable processing time, as in that case step 3 (which seems to be the bottleneck) would not be required. I have also noticed that the processing
    time of the operation mentioned in step 3 above is proportional to the size of the planar image object.
    Any pointers around this would be appreciated.
    Thanks,
    Anurag
    Edited by: anurag.kapur on Oct 4, 2007 10:17 PM

    It depends on whether you want to display the data or not.
    PlanarImage (the superclass of all RenderedOps) has a method that returns a Graphics object you can use to draw on the image. This allows you to do things like write on an image.
    PlanarImage also has a getAsBufferedImage that will return a copy of the data in a format that can be used to write to Graphics objects. This is used for simply drawing processed images to a display.
    There is also a widget called ImageCanvas (and ScrollingImagePanel) shipped with JAI (although it is not a committed part of the API). These derive from awt.Canvas/Panel and know how to render RenderedImage instances. This may use less copying/memory than getting the data as a BufferedImage and drawing it via a Graphics object. I can't say for sure though, as I have never used them.
    Another way may be to extend JComponent (or another class) and customize it to use calls to PlanarImage/RenderedOp instances directly. This can help with large tiled images when you only want to display a small portion.
    matfud

  • SAP BI 7 - Background process performance Issue

    This question is about the efficiency of dialog processes compared to the inefficiency of background processes, and I want suggestions from SAP Basis/BI experts on this.
    We have observed that our background processes on the BI server are not running as efficiently as we would like them to.
    For example
    DSO activation:
    When I use dialog mode for DSO activation with 3 parallel processes, I can activate a certain DSO within 15 seconds. But the same DSO and the same data package, when activated in background mode with 3 parallel processes, takes about 15 minutes (there are plenty of background processes available on the system while this activation is running). So the dialog process runs 60 times faster than the background process.
    In BI 7 most of the operations can be executed only in background parallel mode. And from the activation example we know that our background processes are not as efficient as dialog processes. It is understood that in general background processes will not be as efficient as dialog processes, but in our case the difference is a factor of 60. We want help identifying why the background processes are not as efficient as the dialog processes, and which SAP Basis settings need to be changed to make them as efficient as possible. Any suggestion or help will be highly appreciated.

    Hello Eswaran
    Generally a dialog process is not faster by any means than a background work process. Nevertheless I trust your observations, so there must be some difference. I just did a small test: I ran SE16, chose a big table and selected 10000 rows, and did 3 tries in dialog and 3 in background. The results:
    dialog: 10.2s, 9.4s, 9.6s
    background: 26s, 24s, 24s
    So even in my simple example the background execution took more time. The issue here is that the resulting output (which is pretty large) had to be saved additionally in a spool (a total of 167 pages) when executed in background. The selection of the data certainly did not take more time in either case.
    The most important difference between dialog and batch processes is the memory management. Dialog processes work with a shared memory segment (extended memory). Background processes have their private heap memory.
    Nevertheless, background processes are not "slower" in general, not at all. You will need to observe closer, what your processes are doing. Maybe watching SM50 while running the DSO activation is enough to see it, maybe you need to trace a process. If you see large spools generated, or huge amounts of memory consumed, we might already have an answer.
    Regards
    Michael

  • Line Split Interface to feed Integration Process - Performance Issues

    Hi All
    We have a scenario whereby we receive an XML message from a 3rd party through an exposed SOAP adapter service. The XML message has multiple lines that need to be split up and processed as individual messages. We need to create a Line Split interface in order to achieve this. The Line Split interface would feed different Integration Processes depending on a specific payload value. The Integration Processes would then perform certain specific logic and rules as well as transformations to specific message formats (e.g. IDoc, XML, flat file). The Line Split interface also maps from an XML structure that caters for multiple lines to a flattened XML structure which only contains one line. The upper range of a message we may need to split into individual messages is 30 000 lines.
    We first used an Interface Map and used SplitByValue to achieve this, however we ran into the constraint that we could not feed the output split messages to an Integration Process - you can only feed it to Adapters that reside on the J2EE engine.
    We then decided to build a separate Integration Process whose sole purpose was to split the message and route the individual messages to other Integration Processes to perform the logic, business rules and specific transformations. However, the performance of the ccBPM line-splitting Integration Process was nowhere near that of the Interface Map.
    e.g. Interface Map Split 1000 Lines = 13 seconds - BPM Integration Process 1000Lines = 100 seconds.
    Does anybody have any suggestions on how we can perform the line split outside of BPM, or how we can improve the performance of the line splitting within BPM?
    Thanks for your assistance.
    Edited by: CostaC on Aug 24, 2009 11:53 AM

    hi,
    >>>We first used an Interface Map and used SplitByValue to achieve this, however we ran into the constraint that we could not feed the output split messages to an Integration Process - you can only feed it to Adapters that reside on the J2EE engine.
    the easiest (not the only) way:
    do the split as you did here and post the results to different folders (file adapter),
    then set up scenarios that will pick up the files from those folders
    (many additional objects, but it will be much, much faster and better than a BPM)
    you could also split the messages in the adapter module but this is more advanced
    and officially SAP does not recommend it - even though it's possible
    Regards,
    Michal Krawczyk

  • Process flow/map performance issues

    We have some issues with our OWB-based application and we're looking to find out if there are different ways we could be using the tool, or features/options we've missed.
    We are trying to maintain a near real-time feed of data from a front-end system into our warehouse, which was built using OWB 10.2.0.3 over a 10.2.0.4 database. The bulk of the application consists of OWB maps with a few hand-written PL/SQL objects, all executed in a series of hierarchical OWB process flows. Maps/transformations are executed either sequentially or in parallel where the referential integrity of the model allows.
    The problem is that we have around 150 tables in the datamart which could potentially require updating on each refresh cycle, although in practice only a few tables have any activity on a typical refresh cycle. The cycle consists of loading data into a set of staging tables, and from there the data is transformed into the main schema, often with multiple maps per target table.
    On every cycle we run hundreds of maps, the vast majority of which process zero rows. Each map runs quickly and efficiently in its own right but collectively they add up to a 5 - 10 min cycle even if there is no data to process.
    There are 2 avenues which we'd like to explore and would be grateful if anyone could provide any pointers/suggestions :-
    1) It appears that each map opens and closes its own database session when it executes. I presume this was done because a single process flow could be constructed with maps executing in different target schemas, but we know that's not the case for us. We'd like to know if there is any way to configure the database connection at a higher level (e.g. process flow) so it opens a connection once and executes each of the maps (database packages) in that one session.
    Our DBAs are experimenting with 'shared server' settings at a database level which may help to some degree but won't be the whole story.
    2) Another option is simply to run fewer maps, e.g. load the staging area as now, collect stats on which staging tables contain new data, and then apply some logic so that subsequent maps only execute if the relevant staging table(s) contain some new data, otherwise bypass that map (see the sketch at the end of this post).
    We tried experimenting with the 'Pre Mapping Process' operator, but essentially that just generates another function call from the map package, so we still have the overhead of opening a database session for each map to run the package. Minimal gain.
    We thought about adding a function call in the process flow before each map and then branching to either execute or bypass the map as appropriate, but the function call still requires opening/closing a database session each time, so, once again, minimal gain.
    What we really want is some way for a map or process flow to check without logging onto the database repeatedly.
    Any ideas on the above, or other potential solutions anyone could suggest, would be greatly appreciated.
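    A minimal sketch of the gating check described in option 2 above, assuming the staging load records its row counts in a small control table (the table, column and function names below are hypothetical):
         -- Hypothetical control table populated at the end of each staging load.
         CREATE OR REPLACE FUNCTION staging_has_rows (p_table IN VARCHAR2)
            RETURN NUMBER   -- 1 = run the dependent maps, 0 = bypass them
         AS
            l_rows NUMBER;
         BEGIN
            SELECT row_count
              INTO l_rows
              FROM stage_load_control
             WHERE table_name = p_table;
            RETURN CASE WHEN l_rows > 0 THEN 1 ELSE 0 END;
         END staging_has_rows;
    The process flow would call this once per staging table and branch on the result; it does not remove the per-call session overhead, but it stops the zero-row maps from running at all.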

    Hi,
    Please see if these documents help.
    Note: 554635.1 - Create Accounting Process Performs Poorly When 100K + Distributions are Passed for an Event
    Note: 954273.1 - Multiple Create Accounting Requests Result In Poor Performance For Online Accruals
    Note: 763500.1 - R12: Performance Issue with Create Accounting
    Note: 733637.1 - R12:Performance Issue When Running Accounting Program Xlaaccup
    Note: 781311.1 - Create Accounting Process Taking A Long Time To Complete After Appying Critical Patches
    Note: 557869.1 - EBS: R12 Oracle Financials Critical Patches
    Regards,
    Hussein

  • RKKBABS0 Performance Issues (Background Processing of CO99) for PM Orders

    We are experiencing extremely long run times when batch processing through program RKKBABS0 in ECC 6.0 (just upgraded). The issue appears to be that the program is using the production order numbers to search against the ESKN table, which contains no AUFNR or AUFPL information.
    Has anyone experienced this same issue and how was it resolved?
    Edited by: Ken Lundeen on Apr 9, 2010 9:17 PM
    Edited by: Ken Lundeen on Apr 9, 2010 9:17 PM Table ESKN

    (I'm sorry you've waited over a year for a reply.)
    We also have a performance issue. In our case we do not use service entry sheets with maintenance or production orders, so AUFNR is not populated in table ESKN.  We are unable to 'complete business' on our maintenance and production orders using batch processing because of the performance.
    We use an Oracle database, which uses a full table scan in this situation.  But a secondary index on (MANDT, AUFNR) is of no value anyway: we have about 12 million records with the client filled and a blank AUFNR field.
    Our solution is a combination of a modification and a new index.  OSS pilot note "1532483 - Performance of RKKBABS0 CHECK_ENTRYSHEET when reading ESKN" is a modification which introduces code improvements, especially when running in background and closing several orders.  Because we only have one client, we also created a new index consisting only of AUFNR.  Oracle will not add a row to a secondary index if all fields of the index are null, making our new index very small. We then updated the Oracle statistics to ensure Oracle would choose our new index.
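    A sketch of the index and statistics steps described above (the index name is a placeholder, and on an SAP system the index would normally be created through the ABAP Dictionary rather than directly in SQL):
         -- Single-column index: rows with a NULL AUFNR get no index entry at all,
         -- which keeps the index very small on a table dominated by blank AUFNR values.
         CREATE INDEX zeskn_aufnr ON eskn (aufnr);
         -- Refresh optimizer statistics so Oracle will consider the new index.
         BEGIN
            DBMS_STATS.GATHER_TABLE_STATS(
               ownname => USER,
               tabname => 'ESKN',
               cascade => TRUE);
         END;
         /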
    We can now 'complete business' a single order online in under a minute, and the batch program runs much more efficiently.
    This is not a perfect solution, but it has been a useful workaround for us.  I hope this is useful to you.

  • Performance issues executing process flows after upgrading db to 10G

    We have installed OWF 2.6.2, and initially our database was at 9.2. Last week we updated our database to 10g, and process flow executions are taking a lot longer, from 1 minute to 15 minutes.
    Any ideas anyone what could be the cause of this performance issue?
    Thanks,
    Yanet

    Hi,
    The Oracle 10g database behaves differently with regard to the statistics of tables and indexes. So check these, and check whether the mappings are updating these statistics at the right moments with respect to the ETL process and at the right interval.
    Also, check your generated sources for how statistics are gathered (dbms_stats.gather....). Does the index that might play a vital role in Oracle9i get new statistics, or only the table? Or only the table whose row count was doubled by this mapping?
    You can always take matters into your own hands by letting OWB NOT generate the source for gathering statistics, and calling your own procedure in a post-mapping.
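    For example, a post-mapping procedure along these lines could gather the statistics explicitly (a sketch; the procedure name and parameters are placeholders):
         CREATE OR REPLACE PROCEDURE gather_target_stats (p_table IN VARCHAR2)
         AS
         BEGIN
            DBMS_STATS.GATHER_TABLE_STATS(
               ownname          => USER,
               tabname          => p_table,
               estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
               cascade          => TRUE);   -- refresh index statistics as well
         END gather_target_stats;
    Registered as a transformation, it could then be called from the post-mapping step with the target table name.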
    Regards,
    André

  • Info Package in Process Chain - Performance Issue

    Dear All,
    We have some InfoPackages in a process chain which, when triggered, initially turn red, then yellow again, and after some time red/green. The status keeps changing and finally ends up green. These InfoPackages take 15-30 mins to complete.
    And the info package error message shows "Processing time overdue". The chain runs daily and this issue also occurs on a daily basis.
    Does anyone one of you have an idea of how to deal with this performance issue?
    Regards.

    Hi
    As you said--
    "initially time out time was set to 10 mins, so it was turning red at 10 mins. Later i changed it to 30 mins, then it is turning red at 30 mins, this info package takes maximum 20 mins to complete.
    thatsy i asked whether there will be any problem if we remove this time out time."
    I guess it's not mandatory to include the timeout value. Just try to execute the IP without entering the timeout value.
    It should work.
    Let me know if this helps.
    Regards
    SaiPrasad

  • Annotation process tools build performance issue

    Annotation process tools are taking 85 minutes to compile page flow files. Are there any suggestions to improve performance?

    As you probably already have gathered, you have different execution plans in the two environments. There could be different reasons why this happens, for instance different profiles of the data. But more likely it is because in the slow environment the
    plan was built from some atypical set of input parameters, which causes a performance issue when the procedure is run in a normal fashion.
    When you say that the query has a lot of isnull and convert functions I get a little worried. If it is only in the SELECT list, it's not evil, but if functions are used in the WHERE or ON clauses, the query might benefit from a review, and possibly also
    the data model. But this is pure speculation at this point.
    I would suggest more than one course of action. As a short-term solution, you run UPDATE STATISTICS WITH FULLSCAN on all tables involved, as out-dated statistics can also be explanation for why you got a bad plan. New statistics should trigger a recompilation.
    But even if this resolves the issue, it may not be feasible in the long run, because the events indicate that the query is sensitive to something and the bad plan could come back any day. For this reason, it might be a good idea to make sure that the query
    does not include inappropriate constructs, and you also need to review that there are good indexes in place.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Interested by performance issue ?  Read this !  If you can explain, you're a master Jedi !

    This is the question we will try to answer...
    What is the bottleneck (hardware) of Adobe Premiere Pro CS6?
    I used PPBM5 as a benchmark testing template.
    All the data and logs have been collected using performance counters.
    First of all, let me describe my computer...
    Operating System
    Microsoft Windows 8 Pro 64-bit
    CPU
    Intel Xeon E5 2687W @ 3.10GHz
    Sandy Bridge-EP/EX 32nm Technology
    RAM
    Corsair Dominator Platinum 64.0 GB DDR3
    Motherboard
    EVGA Corporation Classified SR-X
    Graphics
    PNY Nvidia Quadro 6000
    EVGA Nvidia GTX 680   // Yes, I created bench stats for both card
    Hard Drives
    16.0GB Romex RAMDISK (RAID)
    556GB LSI MegaRAID 9260-8i SATA3 6GB/s 5 disks with Fastpath Chip Installed (RAID 0)
    I have other RAID installed, but not relevant for the present post...
    PSU
    Cosair 1000 Watts
    After many days of tests, I want to share my results with the community and comment on them.
    CPU Introduction
    I tested my CPU and pushed it to maximum speed to understand where the limit is and whether I can reach it, and I've logged all results precisely in a graph (see picture 1).
    Intro: I tested my E5 Xeon 2687W (8 cores with Hyper-Threading - 16 threads) to know whether programs can use all of it.  I used Prime95 to get the result.  // I know this seems ordinary, but you will understand soon...
    The result: Yes, I can get 100% of my CPU with 1 program using 20 threads in parallel.  The CPU gives everything it can!
    Comment: I put 3 IO counters (CPU, disk, RAM) on the graph of my computer during the test...
    (picture 1)
    Disk Introduction
    I tested my disk and pushed it to maximum speed to understand where the limit is, and I've logged all results precisely in a graph (see picture 2).
    Intro: I tested my 556GB RAID 0 (LSI MegaRAID 9260-8i SATA3 6GB/s, 5 disks with FastPath chip installed) to know whether I can reach maximum disk usage (0% idle time).
    The result: As you can see in picture 2, yes, I can get the max out of my drive at ~1.2 GB/sec read/write, steady!
    Comment: I put 3 IO counters (CPU, disk, RAM) on the graph of my computer during the test to see the impact of transferring many GB of data during ~10 sec...
    (picture 2)
    Now I know my limits!  It's time to go deeper into the subject!
    PPBM5 (H.264) Result
    I rendered the sequence (H.264) using Adobe Media Encoder.
    The result :
    My CPU is not used at 100%; it hovers around 50%.
    My disk is totally idle!
    All the processes are idle except the Adobe Media Encoder process.
    The transfer rate seems to be a wave (up and down), probably caused by (encrypt time.... write.... encrypt time.... write...)  // It's ok, ~5 MB/sec transfer rate!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
    RAM: more than enough!  39 GB of RAM free after the test!  // Excellent
    ~65 threads opened by Adobe Media Encoder (good, many threads are a sign that the program tries to use many cores!)
    GPU load on the card also seems to be a wave (up and down): ~40% GPU usage during the encoding process.
    GPU RAM usage is 1.2 GB (but with the GTX 680, no problem, and the Quadro 6000 with 6 GB RAM, no problem!)
    Comment/Question: CPU is free (50%), disks are free (99%), GPU is free (60%), RAM is free (62%); my computer is not pushed to its limit during the encoding process.  Why????  Is there some time delay in the encoding process?
    Other : Quadro 6000 & GTX 680 gives the same result !
    (picture 3)
    PPBM5 (Disk Test) Result (RAID LSI)
    I rendered the sequence (Disk Test) using Adobe Media Encoder on my RAID 0 LSI disk.
    The result :
    My CPU is not used at 100%.
    My disk waves and waves again, but is far, far from its limit!
    All the processes are idle except the Adobe Media Encoder process.
    The transfer rate waves and waves again (up and down), probably caused by (buffering time.... write.... buffering time.... write...)  // It's ok, ~375 MB/sec peak transfer rate!  Easy!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
    RAM: more than enough!  40.5 GB of RAM free after the test!  // Excellent
    ~48 threads opened by Adobe Media Encoder (good, many threads are a sign that the program tries to use many cores!)
    GPU load on the card = 0 (this kind of encoding does not use the GPU).
    GPU RAM usage is 400 MB (no usage for encoding).
    Comment/Question: CPU is free (65%), disks are free (60%), GPU is free (100%), RAM is free (63%); my computer is not pushed to its limit during the encoding process.  Why????  Is there some time delay in the encoding process?
    (picture 4)
    PPBM5 (Disk Test) Result (Direct in RAMDrive)
    I rendered the same sequence (Disk Test) using Adobe Media Encoder directly on my RAM drive.
    Comment/Question: Look at the transfer rate in picture 5. It's exactly the same speed as with my RAID 0 LSI controller.  Impossible!  Look in the same picture at the transfer rate I can reach with the RAM drive (> 3.0 GB/sec steady), and I don't go under 30% of disk usage. CPU is idle (70%), disk is idle (100%), GPU is idle (100%) and RAM is free (63%).  // These kinds of results leave me really confused. It smells like a bug and a big problem with hardware and IO usage in CS6!
    (picture 5)
    PPBM5 (MPEG-DVD) Result
    I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
    The result :
    My CPU is not used at 100%.
    My disk is totally idle!
    All the processes are idle except the Adobe Media Encoder process.
    The transfer rate waves and waves again (up and down), probably caused by (encoding time.... write.... encoding time.... write...)  // It's ok, ~2 MB/sec transfer rate!  A real joke!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
    RAM: more than enough!  40 GB of RAM free after the test!  // Excellent
    ~80 threads opened by Adobe Media Encoder (a lot of threads, but that's ok in multi-threaded apps!)
    GPU load on the card = 100 (this uses the maximum of my GPU).
    GPU RAM usage is 1 GB.
    Comment/Question: CPU is free (70%), disks are free (98%), GPU is loaded (MAX), RAM is free (63%); my computer is pushed to its limit during the encoding process for the GPU only. So for this kind of encoding, the speed limit is set by the slowest component (the video card's GPU).
    Other : Quadro 6000 is slower than GTX 680 for this kind of encoding (~20 s slower than GTX).
    (picture 6)
    Encoding single clip FULL HD AVCHD to H.264 Result (Premiere Pro CS6)
    You can look the result in the picture.
    Comment/Question: CPU is free (55%), disks are free (99%), GPU is free (90%), RAM is free (65%); my computer is not pushed to its limit during the encoding process.  Why????  Adobe Premiere seems to have some bug with thread management. My hardware is idle!  I understand AVCHD can be very difficult to decode, but where is the waste?  My computer is willing, but the software is not!
    (picture 7)
    Render composition using 3D Raytracer in After Effects CS6
    You can look the result in the picture.
    Comment: The GPU seems to be the bottleneck when using After Effects.  CPU is free (99%), disks are free (98%), memory is free (60%), and it depends on the settings and type of project.
    Other : Quadro 6000 & GTX 680 gives the same result in time for rendering the composition.
    (picture 8)
    Conclusion
    There is nothing you can do (I think) with CS6 to get better performance right now.  The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card.  Both cards give really similar results (I will probably return my GTX 680 since I don't really get any better performance).  I have not used a Tesla card with my Quadro, but currently neither Premiere Pro nor After Effects uses multiple GPUs.  I tried using both cards together (GTX & Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
    Premiere Pro, I'm speechless!  Premiere Pro is not able to get the maximum performance out of my computer.  Not just 10% or 20% short, but 60% on average.  I'm a programmer; multi-threaded apps are difficult to manage and I can understand Adobe's programmers.  But if anybody has comments about this post, tricks or any kind of solution, please comment.  It seems to be a bug...
    Thank you.

    Patrick,
    I can't explain everything, but let me give you some background as I understand it.
    The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
    The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
    As to your test results, there are some other noteworthy things to mention. 3D Ray tracing in AE is not very good in using all CUDA cores. In fact it is lousy, it only uses very few cores and the threading is pretty bad and does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
    The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
    You see very low GPU usage during the H.264 test, since there are only very few accelerated parts in the timeline, in contrast to the MPEG2-DVD test, where there is rescaling going on and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above and is the reason that my own Disk I/O results are only 33 seconds with the current test, but when I extend the duration of that timeline to 3 hours, the direct export method gives me 22 seconds, although the amount of data to be written, 37,092 MB has increased threefold. An effective write speed of 1,686 MB/s.
    There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
    Just my $ 0.02

  • Performance Issue for BI system

    Hello,
    We are facing performance issues with our BI system. It's a pre-production system and its performance is degrading badly every day. While checking the system I found that the program buffer is swapping heavily and the swaps are increasing every day, so the parameter abap/buffersize was changed from 300 MB to 500 MB. But still no major improvement is seen in the system.
    There is 16 GB of RAM available, and the server is HP-UX with NetWeaver 2004s and Oracle 10.2.0.4.0 installed on it.
    The main problem is that running a report or creating a query takes way too long.
    Kindly help me.

    Hello SIva,
    Thanks for your reply, but I have checked ST02 and ST03 and also SM50, and it's normal.
    We have 9 dialog processes, 3 background, 2 update and 1 spool.
    No one is using the system currently, but in ST02 I can see the swaps are in red.
    Buffer                 HitRatio   % Alloc. KB  Freesp. KB   % Free Sp.   Dir. Size  FreeDirEnt   % Free Dir    Swaps    DB Accs
    Nametab (NTAB)                                                                                0
       Table definition     99,60     6.798                                                   20.000                                            29.532    153.221
       Field definition     99,82      31.562        784                 2,61           20.000      6.222          31,11          17.246     41.248
       Short NTAB           99,94     3.625      2.446                81,53          5.000        2.801          56,02             0            2.254
       Initial records      73,95        6.625        998                 16,63          5.000        690             13,80             40.069     49.528
                                                                                    0
    Program                97,66     300.000     1.074                 0,38           75.000     67.177        89,57           219.665    725.703
    CUA                    99,75         3.000        875                   36,29          1.500      1.401          93,40            55.277      2.497
    Screen                 99,80         4.297      1.365                 33,35          2.000      1.811          90,55              119         3.214
    Calendar              100,00       488            361                  75,52            200         42              21,00               0            158
    OTR                   100,00         4.096      3.313                  100,00        2.000      2.000          100,00              0
                                                                                    0
    Tables                                                                                0
       Generic Key          99,17    29.297      1.450                  5,23           5.000        350             7,00             2.219      3.085.633
       Single record        99,43    10.000      1.907                  19,41           500         344            68,80              39          467.978
                                                                                    0
    Export/import          82,75     4.096         43                      1,30            2.000        662          33,10            137.208
    Exp./ Imp. SHM         89,83     4.096        438                    13,22         2.000      1.482          74,10               0    
    SAP Memory      Curr.Use %    CurUse[KB]    MaxUse[KB]    In Mem[KB]    OnDisk[KB]    SAPCurCach      HitRatio %
    Roll area               2,22                5.832               22.856             131.072     131.072                   IDs           96,61
    Page area              1,08              2.832                24.144               65.536    196.608              Statement     79,00
    Extended memory     22,90       958.464           1.929.216          4.186.112          0                                         0,00
    Heap memory                                    0                  0                    1.473.767          0                                         0,00
    Call Stati             HitRatio %     ABAP/4 Req      ABAP Fails     DBTotCalls         AvTime[ms]      DBRowsAff.
      Select single     88,59               63.073.369        5.817.659      4.322.263             0                         57.255.710
      Select               72,68               284.080.387          0               13.718.442             0                        32.199.124
      Insert                 0,00                  151.955             5.458             166.159               0                           323.725
      Update               0,00                    378.161           97.884           395.814               0                            486.880
      Delete                 0,00                    389.398          332.619          415.562              0                             244.495
    Edited by: Srikanth Sunkara on May 12, 2011 11:50 AM

  • RE: Case 59063: performance issues w/ C TLIB and Forte3M

    Hi James,
    Could you give me a call, I am at my desk.
    I had meetings all day and couldn't respond to your calls earlier.
    -----Original Message-----
    From: James Min [mailto:jmin@brio.forte.com]
    Sent: Thursday, March 30, 2000 2:50 PM
    To: Sharma, Sandeep; Pyatetskiy, Alexander
    Cc: sophia@forte.com; kenl@forte.com; Tenerelli, Mike
    Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
    Hello,
    I just want to reiterate that we are very committed to working on
    this issue, and that our goal is to find out the root of the problem. But
    first I'd like to narrow down the avenues by process of elimination.
    Open Cursor is something that is commonly used in today's RDBMS. I
    know that you must test your query in ISQL using some kind of execute
    immediate, but Sybase should be able to handle an open cursor. I was
    wondering if your Sybase expert commented on the fact that the server is
    not responding to commonly used command like 'open cursor'. According to
    our developer, we are merely following the API from Sybase, and open cursor
    is not something that particularly slows down a query for several minutes
    (except maybe the very first time). The logs show that Forte is waiting for
    a status from the DB server. Actually, using prepared statements and open
    cursor ends up being more efficient in the long run.
    Some questions:
    1) Have you tried to do a prepared statement with open cursor in your ISQL
    session? If so, did it have the same slowness?
    2) How big is the table you are querying? How many rows are there? How many
    are returned?
    3) When there is a hang in Forte, is there disk-spinning or CPU usage in
    the database server side? On the Forte side? Absolutely no activity at all?
    We actually have a Sybase set-up here, and if you wish, we could test out
    your database and Forte PEX here. Since your queries seem to be running
    off of only one table, this might be the best option, as we could look at
    everything here, in house. To do this:
    a) BCP out the data into a flat file. (character format to make it portable)
    b) we need a script to create the table and indexes.
    c) the Forte PEX file of the app to test this out.
    d) the SQL staement that you issue in ISQL for comparison.
    If the situation warrants, we can give a concrete example of
    possible errors/bugs to a developer. Dial-in is still an option, but to be
    able to look at the TOOL code, database setup, etc. without the limitations
    of dial-up may be faster and more efficient. Please let me know if you can
    provide this, as well as the answers to the above questions, or if you have
    any questions.
    Regards,
    At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
    James, Ken:
    FYI, see attached response from our Sybase expert, Dani Sasmita. She has
    already tried what you suggested and results are enclosed.
    ++
    Sandeep
    -----Original Message-----
    From: SASMITA, DANIAR
    Sent: Wednesday, March 29, 2000 6:43 PM
    To: Pyatetskiy, Alexander
    Cc: Sharma, Sandeep; Tenerelli, Mike
    Subject: Re: FW: Case 59063: Select using LIKE has performance
    issues
    w/ CTLIB and Forte 3M
    We did that trick already.
    When it is hanging, I can see what is doing.
    It is doing OPEN CURSOR. But not clear the exact statement of the cursor
    it is trying to open.
    When we run the query directly to Sybase, not using Forte, it is clearly
    not opening any cursor.
    And running it directly to Sybase many times, the response is always
    consistently fast.
    It is just when the query runs from Forte to Sybase, it opens a cursor.
    But again, in the Forte code, Alex is not using any cursor.
    In trying to capture the query, we even tried to audit any statement coming
    to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
    ==============================================
    James Min
    Technical Support Engineer - Forte Tools
    Sun Microsystems, Inc.
    1800 Harrison St., 17th Fl.
    Oakland, CA 94612
    james.min@sun.com
    510.869.2056
    ==============================================
    Support Hotline: 510-451-5400
    CUSTOMERS open a NEW CASE with Technical Support:
    http://www.forte.com/support/case_entry.html
    CUSTOMERS view your cases and enter follow-up transactions:
    http://www.forte.com/support/view_calls.html

  • Performance Issues with SDV03V02

    Hi,
  The order rescheduling program (SDV03V02) runtime is too long. We have searched for notes in the OSS database but have not found any suitable notes. One of the OSS notes suggested implementing indexes on VAPMA and VBUK, but that has not helped either. We are thinking of writing a Z program which would submit the standard SAP program as multiple jobs - the Z program would schedule the program into multiple jobs using different sets of materials.
      Has anyone faced runtime issues with this program? Any other suggestions?
    Regards,
    Aravind

    Aravind,
    the program invokes the availability check for all backorders based on material.
    I would check the following :
    - how many documents it's going to process ?
    - if the number of documents is too big and there are really old docs - you may have a problem with order status update (your order never goes to "complete" status even if all items are rejected or fully shipped - just an example)....
    - if you really need to reschedule many orders - you'll have to consider parallel rescheduling.... BUT it's not as simple as submitting parallel jobs for different materials (material ranges).... as jobs will very likely fail due to contention.
    - another option to speed up the process (it may really help) is to change the timing of all order outputs which are set to process immediately to '1' -> to be processed via RSNAST00, then schedule a job after your rescheduling is complete to process all these outputs. It separates output processing from the availability check/order save, and since the same order may be saved several times during rescheduling, it may really help. You can do it by defining a "timing" routine for your output types.
    - always good to run SQL trace or to just monitor this job in SM50 to see what it's doing... OR run it under new USER ID and check statistics for that ID to see which programs/tables took the most time.
    It would be good to measure the average order processing time (check how many orders were processed in this job... I hope this info is in the spool, or you can calculate it based on messages in the job log). It may help to determine whether the problem is caused by too-slow order processing or just by increased order volume.
    There are many options to look at regarding rescheduling...but I would start from order processing performance.

  • Performance issues with pipelined table functions

    I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from [url http://www.oracle-developer.net/display.php?id=429]improving performance with pipelined table functions.
    Edit: The underlying query returns 500,000 rows in about 3 minutes. So there are no performance issues with the query itself.
    Many thanks in advance.
    CREATE OR REPLACE PACKAGE pipeline_example
    IS
       TYPE resultset_typ IS REF CURSOR;
       TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
       TYPE table_typ IS TABLE OF row_typ;
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ;
       c_default_limit   CONSTANT PLS_INTEGER := 100;  
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ);
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ);
    END pipeline_example;
    CREATE OR REPLACE PACKAGE BODY pipeline_example
    IS
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ
       IS
          o_resultset   resultset_typ;
       BEGIN
          OPEN o_resultset FOR
             SELECT colC, colD, colE
               FROM some_table
              WHERE colA = ArgA AND colB = argB;
          RETURN o_resultset;
       END base_query;
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
       IS
          aa_source_data   table_typ;-- := table_typ ();
       BEGIN
          LOOP
             FETCH p_source_data
             BULK COLLECT INTO aa_source_data
             LIMIT p_limit_size;
             EXIT WHEN aa_source_data.COUNT = 0;
             /* Process the batch of (p_limit_size) records... */
             FOR i IN 1 .. aa_source_data.COUNT
             LOOP
                PIPE ROW (aa_source_data (i));
             END LOOP;
          END LOOP;
          CLOSE p_source_data;
          RETURN;
       END processor;
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT /*+ PARALLEL(t, 5) */ colC,
                      SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
                      SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
                      SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM TABLE (processor (base_query (argA, argB),100)) t
             GROUP BY colC
              ORDER BY colC;
       END with_pipeline;
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT colC,
                      SUM (CASE WHEN colD > colE AND colE  != '0' THEN colD / ColE END)de,
                      SUM (CASE WHEN colE > colD AND colD  != '0' THEN colE / ColD END)ed,
                      SUM (CASE WHEN colD = colE AND colD  != '0' THEN 1 END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM (SELECT colC, colD, colE
                         FROM some_table
                        WHERE colA = ArgA AND colB = argB)
             GROUP BY colC
             ORDER BY colC;
       END no_pipeline;
    END pipeline_example;
     ALTER PACKAGE pipeline_example COMPILE;
     Edited by: Earthlink on Nov 14, 2010 9:47 AM
    Edited by: Earthlink on Nov 14, 2010 11:31 AM
    Edited by: Earthlink on Nov 14, 2010 11:32 AM
    Edited by: Earthlink on Nov 20, 2010 12:04 PM
    Edited by: Earthlink on Nov 20, 2010 12:54 PM

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701

  • Performance issues with Homesharing?

    I have a Time Capsule as the base station for my wireless network, then two AirPort Express units set up to extend the network around the house, an iMac i7 as the main iTunes library, a couple of iPads, and a couple of Apple TVs. Everything has the latest software, but I have several performance issues with Home Sharing. I've done several tests making sure nothing else is taking additional bandwidth, so here is the list of issues:
    1) With nothing else running, when trying to play a movie located on my iMac via Home Sharing on an iPad 2, it stops and I have to keep pressing the play button over and over again. Typically the iPad tries to download part of the movie first and then starts playing so that it copes with the bandwidth, but in many cases it doesn't.
    2) When trying to play any iTunes content (movies, music, photos, etc.) from my Apple TV I can see my computer's library, but when I go into any of the menus, it says there's no content. I have to reboot the Apple TV and then the problem is fixed. It's just annoying that I have to reboot.
    3) When watching a Netflix movie on my iPad, I send the sound to some speakers through an AirPort Express via AirPlay. At times I lose the connection to the speakers.
    I've complained about Wi-Fi's instability, but here I tried to keep everything Apple products to avoid any compatibility issues and stay within N wireless technology, which I understood to be much more stable.
    Does anyone have any suggestions?

    Hi,
    you should analyze the db after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    make sure your sequence caches (alter sequence s cache 10000)
    Drop all unneeded indexes while loading and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct load? Or do you already use direct load?
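    A rough sketch of those suggestions in SQL (all object names are placeholders):
         -- Larger sequence cache to cut down SEQ$ updates during the load.
         ALTER SEQUENCE s CACHE 10000;
         -- Drop or disable what is not needed while loading.
         ALTER TABLE t DISABLE ALL TRIGGERS;
         DROP INDEX t_ix1;
         -- Direct-path load writes above the high-water mark and bypasses the buffer cache.
         INSERT /*+ APPEND */ INTO t SELECT * FROM staging_t;
         COMMIT;
         -- Rebuild afterwards.
         CREATE INDEX t_ix1 ON t (col1);
         ALTER TABLE t ENABLE ALL TRIGGERS;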
    Dim
