APEX 4.1 Application Performance Issue

Problem:
Virtual Circuit wait is running high.
APEX application pages load much more slowly than in our old environment (APEX 3.0 on 10g behind the Apache HTTP Server).
Since installing APEX with the embedded PL/SQL gateway, OEM shows the Virtual Circuit wait running high, and APEX application pages load with significant delays, especially when first loaded.
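For what it's worth, the embedded PL/SQL gateway serves APEX through XML DB shared server dispatchers, so a high Virtual Circuit wait usually points at the dispatcher / shared server configuration rather than at APEX itself. A minimal diagnostic sketch, assuming access to the V$ views (the 'virtual circuit%' pattern covers the slightly different event names across releases):
-- time lost to virtual circuit waits since instance startup
select event, total_waits, round(time_waited_micro / 1e6, 1) as seconds_waited
from   v$system_event
where  event like 'virtual circuit%';
-- how busy the shared server dispatchers used by the embedded gateway are
select name, status, round(busy / nullif(busy + idle, 0) * 100, 2) as busy_pct
from   v$dispatcher;
-- current dispatcher / shared server settings
select name, value
from   v$parameter
where  name in ('dispatchers', 'shared_servers', 'max_shared_servers');
If the dispatchers are saturated, raising the dispatcher count (or moving off the embedded gateway to the APEX Listener or Apache, as discussed further down) usually removes the wait.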
Environment:
New 11g server with APEX 4.0 upgraded to 4.1, using the embedded PL/SQL gateway, on Windows Server 2008 64-bit.
Migrated the old application from 3.0 to 4.0 -- no problems other than the performance issue: slow page loads.
The reason I went with the embedded PL/SQL gateway and not Apache is that the Apache HTTP Server (companion CD) could not be installed on Windows Server 2008 R2.
We did not have this issue with 10g APEX 3.0 running behind an Apache server.
Where can I find an HTTP server for Windows Server 2008 R2 64-bit?
Is this an embedded PL/SQL gateway issue on Windows Server 2008 x64?
Others ...
There is no Companion Disk for Oracle Database 11g Release 1 (11.1.0.7.0) for Microsoft Windows Server 2008 x64, so the HTTP server is not included in the install or in the packaged software (companion or examples). Oracle Database 10g had the HTTP server on the companion disks, but the 11g Examples download does not include the Apache HTTP Server. Did anyone find a solution for this problem, or is the HTTP server from the Oracle Database 10g R2 companion disk for Windows Server 2008 x64 compatible with an 11g Release 1 install? Can I install the HTTP server from the 10g R2 companion disk alongside Oracle 11g Release 1, again on Windows Server 2008 x64?
Thanks,
Mhnd

Dimitri,
Thanks for taking the time to answer my question. So you are saying the listener, which is my third choice, should eliminate this issue. I know Oracle offers three choices of web server, but I'm only familiar with the two mentioned above; the APEX installation document provides information about the listener, but I haven't gone that route yet. Apache 2 only comes packaged for 32-bit systems (2.16, last I checked); there is no 64-bit version. I'm going to try the listener approach and see how it goes. Hopefully this will solve the slow page loading. Should I uninstall the embedded PL/SQL gateway, or just continue with the listener install?
Mobra,
Thanks for your suggestions, but I'm going to stick with the Oracle-suggested methods for now. Hopefully one of them will work out for our environment.
Thanks,
Muhannad.

Similar Messages

  • Performance issues with APEX reports in version 3.1

    Hello All,
    I am using APEX 3.1 on Oracle 10g.
    I am facing performance issues with APEX. I am generating interactive reports, and the number of records is huge -- 30 to 40 thousand records -- and the report takes almost 30 minutes.
    How can I improve the performance of this kind of report? I am using APEX collections.
    How does APEX work in terms of retrieving the records?
    Please let me know.
    Thanks/kumar
    Edited by: kumar73 on Jun 18, 2010 10:21 AM
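    For background on why this is slow: an APEX collection is seeded once per session (typically with APEX_COLLECTION.CREATE_COLLECTION_FROM_QUERY), and every report refresh then re-reads the generic collection tables, so a 30,000-40,000 row collection is expensive both to build and to filter. A minimal sketch of the usual seeding pattern, with a placeholder collection name and query:
    begin
      if not apex_collection.collection_exists(p_collection_name => 'REPORT_DATA') then
        apex_collection.create_collection_from_query(
          p_collection_name => 'REPORT_DATA',
          p_query           => 'select empno, ename, sal from emp');  -- placeholder query
      end if;
    end;
    An interactive report that selects directly from indexed base tables, letting APEX paginate, is usually much faster than one built on a collection of this size.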

    Hello Tony ,
    The following are the sequence of steps to run the test case.
    Note: all the schemas, tables and variables are populated from the database.
    From Schema and Relations tab choose the following:
    1)     Select P3I2008Q4 as schema.
    2)     Choose Relation as query path.
    3)     Select ECLA, ECLB, MTAB as relations.
    From Variables choose the following:
    4)     Choose the variables AGE_SEXA,CLODESCA,ALCNO from ECLA relation.
    5)     Choose the variables AGE_SEXB, ALCNO, CLODESCB from ECLB relation.
    6)     Choose the variables EXPNAME, ALCNO, COST_, COST from MTAB relation.
    From Conditions: click the Run Report button; this generates the standard report (total number of records in the report: 30,150).
    Click the Interactive Report button to generate an interactive report. (An error occurred.)
    We are using a returned SQL statement to generate the standard report, and collections for the interactive report.
    thanks/kumar

  • Select List Performance Issue in Apex 3.1.2

    Hi
    It would be of great help if you could advise us on the following.
    We are facing some performance issues with select lists in APEX 3.1.2.
    We have 6 select lists, all cascaded, i.e. the values of each select list depend on the previous list's value.
    The values in each select list are huge; because of this, the performance is very slow.
    It takes a long time to fetch the data based on the other list values.
    We cross-verified on the backend: the same query takes less time than it does in the application.
    Any recommendation on how to fine-tune this?
    Thanks in advance
    Vijay
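    One thing worth checking: a cascading select list's LOV query should push the parent value in as a bind variable, so each list only fetches its own subset instead of the full set. A sketch with illustrative item and column names only:
    select dname  as display_value,
           deptno as return_value
    from   dept
    where  region_id = :P1_REGION   -- page item of the parent select list (illustrative)
    order  by dname;
    If each list still returns thousands of rows, the browser has to build a huge select element, which is often the real bottleneck (as the reply below suggests).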

    If your select lists are very large, it could be the browser that is causing the slowdown.
    Try creating an example using a static HTML file containing a select list of the same size; you may find the same performance issue.
    A quick way to create the test is to save the source of your APEX page as an HTML file.
    Craig
    [http://www.oracleapplicationexpress.com]

  • Performance issue in Internet facing APEX application

    Hi APEX gurus,
    at peak times we encounter some very, very poor response times in our application. If I look in the Administration tab (Home > Administration > Monitor Activity > Page Views by Hour), I can see very nice average response times (0.33 s max), yet rendering a page takes more than 10 seconds. The load on the machine also looks fine: there is plenty of headroom for the CPU, the memory and the disk. Despite all of this, the performance of the application is very bad.
    Any suggestions where I should dig into the issue?
    Further details from the Home>Administration>Monitor Activity>Page Views by Hour page at peak times:
    Hour             Views     Pages    Elapsed     Average Elapsed    Sessions    Users    Report Rows
    2010.03.05 14    19,151    62       5,899.68    0.3081             1,019       246      404,905
    2010.03.05 13    17,568    44       2,235.51    0.1272             923         230      329,742
    Software configuration (all on the same box):
    Windows 2003 R2 64bit
    DB 10.2.0.4
    Oracle-Application-Server-10g/10.1.2.0.0 Oracle-HTTP-Server.
    Hardware details:
    HP Xeon Quad-core E5420 2.50GHz with 10GB RAM
    Thanks,
    Florin

    Looking at the SQL Developer Top SQL report, I can see that the top 3 statements by elapsed seconds include things I did not write.
    For example, SELECT VALUE FROM SYS.V$PARAMETER WHERE NAME = 'compatible' is called ~650,000 times from my APEX application, even though I have no idea where it is called from, and it accounts for 1,570 elapsed seconds.
    Also I have some code like:
    DECLARE
      rc__ NUMBER;
      simple_list__ owa_util.vc_arr;
      complex_list__ owa_util.vc_arr;
    BEGIN
      owa.init_cgi_env(:n__,:nm__,:v__);
      htp.HTBUF_LEN := 255;
      NULL;
      NULL;
      simple_list__(1) := 'sys.%';
      simple_list__(2) := 'dbms\_%';
      simple_list__(3) := 'utl\_%';
      simple_list__(4) := 'owa\_%';
      simple_list__(5) := 'owa.%';
      simple_list__(6) := 'htp.%';
      simple_list__(7) := 'htf.%';
      IF ((owa_match.match_pattern('f', simple_list__, complex_list__, true))) THEN
        rc__ := 2;
      ELSE
      ....
    This comes from module Apache and accounts for a lot of elapsed seconds (31,684 and 1,828). BTW, what on Earth is this piece of code doing? We do not serve images from the DB; they are placed directly on the filesystem and served directly by Apache. Can anything be done to make these consume less time, or should I focus on other areas to get rid of the performance issues under high load on the application?
    Thanks,
    Florin
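    As far as I can tell, that anonymous DECLARE ... owa_match block is the standard wrapper mod_plsql generates around every call to the APEX f procedure (the simple_list entries are the request exclusion list), so its elapsed time mostly reflects the page rendering it invokes rather than work of its own. To confirm where the V$PARAMETER statement is issued from, the cursor cache records the calling module and parsing schema; a diagnostic sketch, assuming access to V$SQL:
    select sql_id, executions, round(elapsed_time / 1e6) as elapsed_sec,
           module, parsing_schema_name
    from   v$sql
    where  sql_text like '%V$PARAMETER%'
    and    sql_text like '%compatible%'
    order  by elapsed_time desc;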

  • How to update this query and avoid performance issues?

    Hi, guys:
    I wonder how to update the following query to make it aware of weekend days. My boss wants the query to consider business days only. Below is just a portion of the query:
    select count(distinct cmv.invoicekey ) total ,'3' as type, 'VALID CALL DATE' as Category
    FROM cbwp_mv2 cmv
    where cmv.colresponse=1
    And Trunc(cmv.Invdate)  Between (Trunc(Sysdate)-1)-39 And (Trunc(Sysdate)-1)-37
    And Trunc(cmv.Whendate) Between cmv.Invdate+37 And cmv.Invdate+39
    CBWP_MV2 is a materialized view created to tune the query. This query is written for a data warehouse application, and CBWP_MV2 is refreshed every evening. My boss wants the condition in the query to consider only business days: for example, if (Trunc(Sysdate)-1)-39 falls on a weekend, I need to move the start of the range to the next business day, and if (Trunc(Sysdate)-1)-37 falls on a weekend, I need to move the end of the range to the next business day, but I should always keep the range within 3 business days. If there is an overlap with a weekend, always push to the later business days.
    Question: how do I implement this and avoid a performance issue? I am afraid that if I use a function it will greatly reduce performance. This view already contains more than 100K rows.
    thank you in advance!
    Sam
    Edited by: lxiscas on Dec 18, 2012 7:55 AM
    Edited by: lxiscas on Dec 18, 2012 7:56 AM
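    One way to express "move a weekend date to the next business day" without a separate PL/SQL function is a plain CASE on the day of the week, which stays inline in the SQL. A sketch for the lower bound only, ignoring public holidays (the upper bound would be handled the same way):
    select case to_char(trunc(sysdate) - 1 - 39, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH')
             when 'SAT' then trunc(sysdate) - 1 - 39 + 2   -- Saturday -> Monday
             when 'SUN' then trunc(sysdate) - 1 - 39 + 1   -- Sunday   -> Monday
             else trunc(sysdate) - 1 - 39
           end as range_start
    from   dual;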

    You are already using a function, since you're using TRUNC on invdate and whendate.
    If you have indexes on those columns, then they will not be used because of the TRUNC.
    Consider omitting the TRUNC or testing with Function Based Indexes.
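    A function-based index is quick to test here, since both predicates wrap their columns in TRUNC; a sketch (the index names are arbitrary):
    create index cbwp_mv2_trunc_invdate_ix  on cbwp_mv2 (trunc(invdate));
    create index cbwp_mv2_trunc_whendate_ix on cbwp_mv2 (trunc(whendate));
    Note that the WHENDATE predicate compares against cmv.Invdate + 37, i.e. another column of the same row, so only the INVDATE index is likely to drive an index range scan; testing both is cheap, though.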
    Regarding business days:
    If you search this forum, you'll find lots of examples.
    Here's another 'golden oldie': http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:185012348071
    Regarding performance:
    Steps to take are explained from the links you find here: {message:id=9360003}
    Read them, they are more than worth it for now and future questions.

  • Performance issue of "clobagg"

    create or replace
    type clobagg_type as object(
      text clob,
      static function ODCIAggregateInitialize(
        sctx in out clobagg_type
      ) return number,
      member function ODCIAggregateIterate(
        self in out clobagg_type,
        value in clob
      ) return number,
      member function ODCIAggregateTerminate(
        self in clobagg_type,
        returnvalue out clob,
        flags in number
      ) return number,
      member function ODCIAggregateMerge(
        self in out clobagg_type,
        ctx2 in clobagg_type
      ) return number
    );
    /
    create or replace
    type body clobagg_type
    is
      static function ODCIAggregateInitialize(
        sctx in out clobagg_type
      ) return number
      is
      begin
        sctx := clobagg_type(null);
        return ODCIConst.Success;
      end;
      member function ODCIAggregateIterate(
        self in out clobagg_type,
        value in clob
      ) return number
      is
      begin
        self.text := self.text || value;
        return ODCIConst.Success;
      end;
      member function ODCIAggregateTerminate(
        self in clobagg_type,
        returnvalue out clob,
        flags in number
      ) return number
      is
      begin
        returnvalue := self.text;
        return ODCIConst.Success;
      end;
      member function ODCIAggregateMerge(
        self in out clobagg_type,
        ctx2 in clobagg_type
      ) return number
      is
      begin
        self.text := self.text || ctx2.text;
        return ODCIConst.Success;
      end;
    end;
    /
    -- user-defined aggregate that concatenates CLOB values (no 4000-character limit)
    create or replace
    function clobagg(
      input clob
    ) return clob
    deterministic
    parallel_enable
    aggregate using clobagg_type;
    /
    SQL> select trim(',' from clobagg(ename||',')) as enames from emp;
    ENAMES
    SMITH,ALLEN,WARD,JONES,MARTIN,BLAKE,CLARK,SCOTT,KING,TURNER,ADAMS,JAMES,FORD,MILLER
    SQL>
    I use the above function in my queries, but it takes ages. Does anyone have a better solution for this?
    The requirement is that I need LISTAGG functionality without the 4000-character limit.

    "Both the threads are not talking about the performance issue." Yes, they are (but they are large threads); see for example:
    "It works perfectly fine for small data. But for large data sets, it takes forever e.g"
    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:15637744429336#50979409688013
    "search - you'll find an implementation that does clobs - but I would recommend against it, the amount of data is probably just too long at that point."
    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:15637744429336#1225142800346989581
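    For completeness, a commonly cited alternative that avoids both the custom aggregate and LISTAGG's 4000-character limit is XMLAGG with getclobval(); a sketch against the same EMP table (it is often faster than repeated CLOB concatenation, though results vary with data volume):
    select rtrim(
             xmlagg(xmlelement(e, ename || ',').extract('//text()') order by ename).getclobval(),
             ',') as enames
    from   emp;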

  • Performance issue on swing application

    Hi,
    I have a Swing application and I am facing a performance issue after minimizing the application: it takes 2-5 seconds to redraw the screen when I try to maximize it again. Is there any way to improve this redraw delay? Is there any way to tweak the Event Dispatch Thread to overcome this (correct me if I am wrong)?
    If anybody else is facing this kind of issue, please help me.
    Thanks in Advance,
    Anish.

    Encephalopathic wrote:
    good guess. What do you think? Loading the images in the painting methods?
    That is a common mistake. To the OP, some common errors when overriding paintComponent include:
    1. Loading an image each time paintComponent() is called. Just load it once, cache it, and only draw it in paintComponent, this saves you from the IO of loading the image on repaints.
    2. Doing application logic/complex operations in paintComponent. paintComponent should be kept fast, only do painting operations, keep long-running tasks, IO, etc. out of it.
    3. To a lesser extent, doing lots of complex painting on a repaint call, maybe lots of transforms, translucency, xor-ing, whatever. For most common painting (at least in my work, YMMV), this isn't a problem, but if it is you should consider drawing all "static" stuff to a buffer (such as a BufferedImage) and just painting the BufferedImage, as opposed to drawing all the static stuff each paintComponent call.
    The idea is just to keep the painting logic as fast as possible, to avoid slow repaints as you're seeing. Sometimes people don't realize that paintComponent() will be called every time your component needs to be even partially repainted (minimized/maximized window, partially hidden by another window then revealed again, stretched by window resizing, etc.). That's why things like IO are out of the question.
    Of course, I'm babbling now. This may not even be your problem, or you may already know all this stuff. If so, just ignore this post!

  • Report Performance Issue - Activity

    Hi gurus,
    I'm developing an Activity report using the transactional database (online real-time objects).
    The purpose of the report is to list all contact-related activities, and activities NOT related to a contact, by activity owner (user id).
    To fulfil that requirement I've created 2 reports:
    1) All activities related to a contact -- Report A
    pulls in Activity ID, Activity Type, Status, Contact ID
    2) All activities not related to a contact UNION all activities related to a contact (base report) -- Report B
    To get the list of activities not related to a contact I'm using an advanced filter based on the result of another request, which I think is the part that slows down the query:
    <Activity ID not equal to any Activity ID in Report B>
    Has anyone encountered performance issues due to advanced filters in Analytics before?
    Any input is really appreciated.
    Thanks in advance,
    Fina

    Fina,
    Union is always the last option. If you can get all the records in one report, do not use a union.
    Since all the records you are targeting are in the Activity subject area, it is not necessary to combine reports. Add a column with the following logic:
    if contact id is null (or = 'Unspecified') then owner name else contact name
    Hopefully this helps.
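    Expressed as a column formula in Answers, that logic is just a CASE expression; the column names below are placeholders that depend on your subject area:
    CASE WHEN "Contact"."Contact ID" IS NULL OR "Contact"."Contact ID" = 'Unspecified'
         THEN "Employee"."Employee Name"
         ELSE "Contact"."Contact Full Name"
    END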

  • Report performance Issue in BI Answers

    Hi All,
    We have a performance issue with a report. The report runs for more than 10 minutes. We took the query from the session log and ran it in the database, where it took no more than 2 minutes. We have verified that there are proper indexes on the WHERE-clause columns.
    Could anyone suggest how to improve the performance in BI Answers?
    Thanks in advance,

    I hope you don't have many CASE statements and complex calculations done in Answers.
    The next thing to monitor is how many rows of data you are trying to retrieve with the query. If the volume is huge, the formatting in Answers takes time because you are dumping huge volumes of data. A database (like Teradata) initially returns something like 1-2,000 records; if you hit "show all records", then even the database will take a fair amount of time if you are dumping many records.
    hope it helps
    thanks
    Prash

  • BW BCS cube(0bcs_vc10 ) Report huge performance issue

    Hi Masters,
    I am working on a solution for a BW report developed on the 0bcs_vc10 virtual cube.
    Some of the queries take 15 to 20 minutes to execute the report.
    This is a huge performance issue. We are using BW 3.5, and the report was developed in BEx and published through the portal. If anyone has faced a similar problem, please advise how you tackled it, and please give the detailed analysis approach you used to resolve it.
    The current service packs we are using are:
    SAP_BW 350 0016 SAPKW35016
    FINBASIS 300 0012 SAPK-30012INFINBASIS
    BI_CONT 353 0008 SAPKIBIFP8
    SEM-BW 400 0012 SAPKGS4012
    Best of Luck
    Chris

    Ravi,
    I already did that; it is not helping performance much. Reports are taking 15 to 20 minutes. I wanted to know whether anybody in this forum has had the same issue and how they resolved it.
    Regards,
    Chris

  • Interested in performance issues? Read this! If you can explain it, you're a master Jedi!

    This is the question we will try to answer...
    What is the hardware bottleneck of Adobe Premiere Pro CS6?
    I used PPBM5 as a benchmark testing template.
    All the data and logs were collected using Windows performance counters.
    First of all, let me describe my computer...
    Operating System
    Microsoft Windows 8 Pro 64-bit
    CPU
    Intel Xeon E5 2687W @ 3.10GHz
    Sandy Bridge-EP/EX 32nm Technology
    RAM
    Corsair Dominator Platinum 64.0 GB DDR3
    Motherboard
    EVGA Corporation Classified SR-X
    Graphics
    PNY Nvidia Quadro 6000
    EVGA Nvidia GTX 680   // Yes, I created bench stats for both cards
    Hard Drives
    16.0GB Romex RAMDISK (RAID)
    556GB LSI MegaRAID 9260-8i SATA3 6GB/s 5 disks with Fastpath Chip Installed (RAID 0)
    I have other RAID arrays installed, but they are not relevant to this post...
    PSU
    Corsair 1000 Watts
    After many days of testing, I want to share my results with the community and comment on them.
    CPU Introduction
    I tested my CPU and pushed it to maximum speed to understand where the limit is and whether I can reach it, and I logged all the results precisely in a graph (see picture 1).
    Intro: I tested my E5 Xeon 2687W (8 cores with Hyper-Threading - 16 threads) to find out whether programs can use all of it. I used Prime95 to get the result. // I know this seems ordinary, but you will understand soon...
    The result: yes, I can get 100% of my CPU with 1 program using 20 threads in parallel. The CPU gives everything it can!
    Comment: I put the 3 I/O measures (CPU, disk, RAM) on the graph of my computer during the test...
    (picture 1)
    Disk Introduction
    I tested my disk, pushed it to maximum speed to understand where the limit is, and logged all the results precisely in a graph (see picture 2).
    Intro: I tested my 556 GB RAID 0 (LSI MegaRAID 9260-8i SATA3 6 Gb/s, 5 disks, with the FastPath chip installed) to find out whether I can reach maximum disk usage (0% idle time).
    The result: as you can see in picture 2, yes, I can max out my drive at ~1.2 GB/sec read/write, steady!
    Comment: I put the 3 I/O measures (CPU, disk, RAM) on the graph of my computer during the test to see the impact of transferring many GB of data over ~10 seconds...
    (picture 2)
    Now I know my limits! It's time to go deeper into the subject!
    PPBM5 (H.264) Result
    I rendered the sequence (H.264) using Adobe Media Encoder.
    The result :
    My CPU is not used at 100%; it hovers around 50%.
    My disk is totally idle!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate looks like a wave (up and down), probably caused by (encode... write... encode... write...). // It's OK, ~5 MB/sec transfer rate!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough! 39 GB of RAM free after the test! // Excellent
    ~65 threads opened by Adobe Media Encoder (good; many threads is a sign that the program tries to use many cores!)
    GPU load on the card also looks like a wave (up and down), ~40% GPU usage during the encoding process.
    GPU RAM usage is 1.2 GB (but with the GTX 680, no problem, and the Quadro 6000 has 6 GB of RAM, so no problem there either!)
    Comment/Question: the CPU is free (50%), the disks are free (99%), the GPU is free (60%), the RAM is free (62%); my computer is not pushed to its limit during the encoding process. Why? Is there some time delay in the encoding process?
    Other: the Quadro 6000 and GTX 680 give the same result!
    (picture 3)
    PPBM5 (Disk Test) Result (RAID LSI)
    I rendered the sequence (Disk Test) using Adobe Media Encoder on my RAID 0 LSI disk.
    The result :
    My CPU is not used at 100%.
    My disk waves up and down, but is far, far from its limit!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate waves up and down, probably caused by (buffering... write... buffering... write...). // It's OK, ~375 MB/sec peak transfer rate! Easy!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough! 40.5 GB of RAM free after the test! // Excellent
    ~48 threads opened by Adobe Media Encoder (good; many threads is a sign that the program tries to use many cores!)
    GPU load on the card = 0 (this kind of encoding does not use the GPU).
    GPU RAM usage is 400 MB (not used for encoding).
    Comment/Question: the CPU is free (65%), the disks are free (60%), the GPU is free (100%), the RAM is free (63%); my computer is not pushed to its limit during the encoding process. Why? Is there some time delay in the encoding process?
    (picture 4)
    PPBM5 (Disk Test) Result (Direct in RAMDrive)
    I rendered the same sequence (Disk Test) using Adobe Media Encoder directly to my RAM drive.
    Comment/Question: look at the transfer rate in picture 5. It's exactly the same speed as with my RAID 0 LSI controller. Impossible! In the same picture, look at the transfer rate I can reach with the RAM drive (> 3.0 GB/sec steady), and I don't go under 30% of disk usage. The CPU is idle (70%), the disk is idle (100%), the GPU is idle (100%) and the RAM is free (63%). // This kind of result leaves me REALLY confused. It smells like a bug and a big problem with hardware and I/O usage in CS6!
    (picture 5)
    PPBM5 (MPEG-DVD) Result
    I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
    The result :
    My CPU is not used at 100%.
    My disk is totally idle!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate waves up and down, probably caused by (encoding... write... encoding... write...). // It's OK, ~2 MB/sec transfer rate! A real joke!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough! 40 GB of RAM free after the test! // Excellent
    ~80 threads opened by Adobe Media Encoder (a lot of threads, but that's OK in multi-threaded apps!)
    GPU load on the card = 100 (this uses the maximum of my GPU).
    GPU RAM usage is 1 GB.
    Comment/Question: the CPU is free (70%), the disks are free (98%), the GPU is loaded (MAX), the RAM is free (63%); my computer is pushed to its limit during the encoding process for the GPU only. So for this kind of encoding, the speed limit is set by the slowest component (the video card's GPU).
    Other: the Quadro 6000 is slower than the GTX 680 for this kind of encoding (~20 s slower than the GTX).
    (picture 6)
    Encoding single clip FULL HD AVCHD to H.264 Result (Premiere Pro CS6)
    You can look the result in the picture.
    Comment/Question: the CPU is free (55%), the disks are free (99%), the GPU is free (90%), the RAM is free (65%); my computer is not pushed to its limit during the encoding process. Why? Adobe Premiere seems to have some bug with thread management. My hardware is idle! I understand AVCHD can be very difficult to decode, but where is the waste? My computer is willing, but the software is not!
    (picture 7)
    Render composition using 3D Raytracer in After Effects CS6
    You can look the result in the picture.
    Comment: the GPU seems to be the bottleneck when using After Effects. The CPU is free (99%), the disks are free (98%), the memory is free (60%), and it depends on the settings and the type of project.
    Other: the Quadro 6000 and GTX 680 give the same render time for the composition.
    (picture 8)
    Conclusion
    There is nothing you can do (I think) with CS6 to get better performance right now. The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card. Both cards give really similar results (I will probably return my GTX 680 since I don't really get any better performance from it). I have not used a Tesla card with my Quadro, but currently neither Premiere Pro nor After Effects uses multiple GPUs. I tried to use both cards together (GTX and Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
    Premiere Pro, I'm speechless! Premiere Pro is not able to get the maximum performance out of my computer. Not just 10% or 20% short, but 60% on average. I'm a programmer; multi-threaded apps are difficult to manage and I can understand Adobe's programmers. But if anybody has comments about this post, tricks or any kind of solution, please comment. It seems to be a bug...
    Thank you.

    Patrick,
    I can't explain everything, but let me give you some background as I understand it.
    The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
    The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
    As to your test results, there are some other noteworthy things to mention. 3D Ray tracing in AE is not very good in using all CUDA cores. In fact it is lousy, it only uses very few cores and the threading is pretty bad and does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
    The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
    You see very low GPU usage during the H.264 test, since there are only very few accelerated parts in the timeline, in contrast to the MPEG2-DVD test, where there is rescaling going on and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above and is the reason that my own Disk I/O results are only 33 seconds with the current test, but when I extend the duration of that timeline to 3 hours, the direct export method gives me 22 seconds, although the amount of data to be written, 37,092 MB has increased threefold. An effective write speed of 1,686 MB/s.
    There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
    Just my $ 0.02

  • Performance Issue for BI system

    Hello,
    We are facing performance issues with our BI system. It is a pre-production system and its performance is degrading badly every day. While checking the system I found that program buffer swaps are increasing every day, hurting the hit ratio, so the parameter abap/buffersize was increased from 300 MB to 500 MB. But still no major improvement is seen in the system.
    There is 16 GB of RAM available, the server runs HP-UX, and NetWeaver 2004s with Oracle 10.2.0.4.0 is installed on it.
    The main problem is that running a report or creating a query takes far too long.
    Kindly help me.

    Hello Siva,
    Thanks for your reply, but I have checked ST02, ST03 and also SM50, and they look normal.
    We have 9 dialog processes, 3 background, 2 update and 1 spool.
    No one is using the system currently, but in ST02 I can see the swaps are in red.
    Buffer                 HitRatio   % Alloc. KB  Freesp. KB   % Free Sp.   Dir. Size  FreeDirEnt   % Free Dir    Swaps    DB Accs
    Nametab (NTAB)                                                                                0
       Table definition     99,60     6.798                                                   20.000                                            29.532    153.221
       Field definition     99,82      31.562        784                 2,61           20.000      6.222          31,11          17.246     41.248
       Short NTAB           99,94     3.625      2.446                81,53          5.000        2.801          56,02             0            2.254
       Initial records      73,95        6.625        998                 16,63          5.000        690             13,80             40.069     49.528
                                                                                    0
    Program                97,66     300.000     1.074                 0,38           75.000     67.177        89,57           219.665    725.703
    CUA                    99,75         3.000        875                   36,29          1.500      1.401          93,40            55.277      2.497
    Screen                 99,80         4.297      1.365                 33,35          2.000      1.811          90,55              119         3.214
    Calendar              100,00       488            361                  75,52            200         42              21,00               0            158
    OTR                   100,00         4.096      3.313                  100,00        2.000      2.000          100,00              0
                                                                                    0
    Tables                                                                                0
       Generic Key          99,17    29.297      1.450                  5,23           5.000        350             7,00             2.219      3.085.633
       Single record        99,43    10.000      1.907                  19,41           500         344            68,80              39          467.978
                                                                                    0
    Export/import          82,75     4.096         43                      1,30            2.000        662          33,10            137.208
    Exp./ Imp. SHM         89,83     4.096        438                    13,22         2.000      1.482          74,10               0    
    SAP Memory      Curr.Use %    CurUse[KB]    MaxUse[KB]    In Mem[KB]    OnDisk[KB]    SAPCurCach      HitRatio %
    Roll area               2,22                5.832               22.856             131.072     131.072                   IDs           96,61
    Page area              1,08              2.832                24.144               65.536    196.608              Statement     79,00
    Extended memory     22,90       958.464           1.929.216          4.186.112          0                                         0,00
    Heap memory                                    0                  0                    1.473.767          0                                         0,00
    Call Stati             HitRatio %     ABAP/4 Req      ABAP Fails     DBTotCalls         AvTime[ms]      DBRowsAff.
      Select single     88,59               63.073.369        5.817.659      4.322.263             0                         57.255.710
      Select               72,68               284.080.387          0               13.718.442             0                        32.199.124
      Insert                 0,00                  151.955             5.458             166.159               0                           323.725
      Update               0,00                    378.161           97.884           395.814               0                            486.880
      Delete                 0,00                    389.398          332.619          415.562              0                             244.495
    Edited by: Srikanth Sunkara on May 12, 2011 11:50 AM

  • RE: Case 59063: performance issues w/ C TLIB and Forte3M

    Hi James,
    Could you give me a call, I am at my desk.
    I had meetings all day and couldn't respond to your calls earlier.
    -----Original Message-----
    From: James Min [mailto:jminbrio.forte.com]
    Sent: Thursday, March 30, 2000 2:50 PM
    To: Sharma, Sandeep; Pyatetskiy, Alexander
    Cc: sophiaforte.com; kenlforte.com; Tenerelli, Mike
    Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
    Hello,
    I just want to reiterate that we are very committed to working on
    this issue, and that our goal is to find out the root of the problem. But
    first I'd like to narrow down the avenues by process of elimination.
    Open Cursor is something that is commonly used in today's RDBMS. I
    know that you must test your query in ISQL using some kind of execute
    immediate, but Sybase should be able to handle an open cursor. I was
    wondering if your Sybase expert commented on the fact that the server is
    not responding to commonly used command like 'open cursor'. According to
    our developer, we are merely following the API from Sybase, and open cursor
    is not something that particularly slows down a query for several minutes
    (except maybe the very first time). The logs show that Forte is waiting for
    a status from the DB server. Actually, using prepared statements and open
    cursor ends up being more efficient in the long run.
    Some questions:
    1) Have you tried to do a prepared statement with open cursor in your ISQL
    session? If so, did it have the same slowness?
    2) How big is the table you are querying? How many rows are there? How many
    are returned?
    3) When there is a hang in Forte, is there disk-spinning or CPU usage in
    the database server side? On the Forte side? Absolutely no activity at all?
    We actually have a Sybase set-up here, and if you wish, we could test out
    your database and Forte PEX here. Since your queries seems to be running
    off of only one table, this might be the best option, as we could look at
    everything here, in house. To do this:
    a) BCP out the data into a flat file. (character format to make it portable)
    b) we need a script to create the table and indexes.
    c) the Forte PEX file of the app to test this out.
    d) the SQL staement that you issue in ISQL for comparison.
    If the situation warrants, we can give a concrete example of
    possible errors/bugs to a developer. Dial-in is still an option, but to be
    able to look at the TOOL code, database setup, etc. without the limitations
    of dial-up may be faster and more efficient. Please let me know if you can
    provide this, as well as the answers to the above questions, or if you have
    any questions.
    Regards,
    At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
    James, Ken:
    FYI, see attached response from our Sybase expert, Dani Sasmita. She has
    already tried what you suggested and results are enclosed.
    ++
    Sandeep
    -----Original Message-----
    From: SASMITA, DANIAR
    Sent: Wednesday, March 29, 2000 6:43 PM
    To: Pyatetskiy, Alexander
    Cc: Sharma, Sandeep; Tenerelli, Mike
    Subject: Re: FW: Case 59063: Select using LIKE has performance
    issues
    w/ CTLIB and Forte 3M
    We did that trick already.
    When it is hanging, I can see what is doing.
    It is doing OPEN CURSOR. But not clear the exact statement of the cursor
    it is trying to open.
    When we run the query directly to Sybase, not using Forte, it is clearly
    not opening any cursor.
    And running it directly to Sybase many times, the response is always
    consistently fast.
    It is just when the query runs from Forte to Sybase, it opens a cursor.
    But again, in the Forte code, Alex is not using any cursor.
    In trying to capture the query, we even tried to audit any statement coming
    to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
    ==============================================
    James Min
    Technical Support Engineer - Forte Tools
    Sun Microsystems, Inc.
    1800 Harrison St., 17th Fl.
    Oakland, CA 94612
    james.minsun.com
    510.869.2056
    ==============================================
    Support Hotline: 510-451-5400
    CUSTOMERS open a NEW CASE with Technical Support:
    http://www.forte.com/support/case_entry.html
    CUSTOMERS view your cases and enter follow-up transactions:
    http://www.forte.com/support/view_calls.html

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
    Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
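    A minimal way to capture such a trace for the current session (10g and later) before re-running both procedures; the trace file then lands in user_dump_dest (or the diag trace directory on 11g):
    begin
      dbms_monitor.session_trace_enable(waits => true, binds => true);
    end;
    /
    -- run the with_pipeline and no_pipeline tests here, then:
    begin
      dbms_monitor.session_trace_disable;
    end;
    /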

  • Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?

    Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?

    Thanks Kelly,
    The answers would be the following:
    1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
    Assuming all will be populated, and this would apply to all final material specs in the system which could be ~25% of all material specs.
    The cells will be numeric, free text, drop downs, and some calculated numeric.
    Are we reaching the limits for UI performance?
    Thanks

  • IOS 8.1+ Performance Issue

    Hello,
    I encountered a serious performance bug in an Adobe AIR iOS application on devices running iOS 8.1 or later. Within approximately 1-2 minutes the fps drops to 7 or lower without any interaction with the app. This is very noticeable: the app looks frozen for about 0.5 seconds. The bug doesn't appear in every session.
    Devices tested: iPad Mini iOS 8.1.1, iPhone 6 iOS 8.2. iPod Touch 4 iOS 6 is working correctly.
    Air SDK versions: 15 and 17 tested.
    I can track the bug using Adobe Scout. There is a noticeable spike at frame time 1.16, and the framerate drops to 7.0. The app spends much time in the function "Runtime overhead". Sometimes the top activity is "Running AS3 attached to frame" or "Waiting For Next Frame" instead of "Runtime overhead".
    The bug can be reproduced with an empty application that has one bitmap on the stage. Open the app, wait two minutes, and the bug should appear. If not, just close and relaunch the app.
    Bugbase link: Bug#3965160 - iOS 8.1+ Performance Issue
    Miska Savela

    Hi
    I'd already activated Messages and entered the 6-digit code I was presented with into my iPhone. I can receive text messages from non-iOS users on my iMac and can reply to those messages.
    I just can't send a new message from scratch to a non-iOS user :-s
    Thanks
    Baz
