Front Controller Performance Issue

We are considering using a front controller servlet as the entrance to the web server.
We could also allow direct access to more than one controller.
We would like advice regarding performance: would it be better to use a few controllers to reduce the load on the controller servlet, or to use one controller servlet?
Would the load on the web server be the same? Are there any other pros/cons when choosing between one or a few front controllers?
Thanks
Avi Inbar

Oops. Spoke too soon. I haven't solved the issue completely.
Ok, more info:
When I first start FR it works flawlessly. Then, after I select and play a movie and it plays completely (I chose a 5-minute Pixar short), I go back and try to scroll through to find another option, and I get preview errors and the selection jumps to the last played item. Repairing permissions only works if you quit FR and then restart. So I tried just quitting and restarting FR, and that solved the problem for the first show, but it came back after the video completed its run. So the workaround is to quit and restart FR after every video. That kinda *****.
I checked the console and got this:
2/14/10 10:22:13 AM com.apple.RemoteUI[1016] CoreAnimation: rendering error 501
Don't know what that means or how to fix it.
Any thoughts?

Similar Messages

  • Performance issue - application running on front

Hi, I have a strange performance issue:
    - when I launch my app from Flash Builder without touching anything, it is slow,
    - when I launch it from Flash Builder and immediately open another window and keep it in front, it is really fast
    It is a windowed application, full screen, displaying multiple objects moving around.
    Has anyone already had this issue?
    thanks,
    YAnn

    For monitoring Azure using SCOM 2012 R2, you can refer to the link below:
    http://blogs.technet.com/b/dcaro/archive/2012/05/02/how-to-monitor-your-windows-azure-application-with-system-center-2012.aspx

  • DB Performance Issues after 10g Upgrade in EBS Instance

    We have upgraded our database from 9i to 10g as the first part of an EBS 11.5.9 to 11.5.10.2 upgrade. Currently our production is running 11.5.9 apps with a 10g DB.
    We are facing performance problems now. One of them is a Value Set query that does not use a function-based index when fired from the front end, but the same query, when taken from the SQL trace (tkprof'd) file and executed from SQL*Plus, uses all the proper indexes. We have not found the cause of this.
    Has anyone faced the same kind of issue before? Please suggest.
    thanks,
    Raj.

    Make sure you have all of the recommended performance patches for 11.5.9, and gather stats for SYS and SYSTEM in the following manner:
    Oracle E-Business Suite Recommended Performance Patches
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=244040.1
    Collecting Statistics with Oracle Apps 11i
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=368252.1
    execute dbms_stats.unlock_schema_stats('SYS');
    execute dbms_stats.unlock_schema_stats('SYSTEM');
    exec dbms_stats.gather_schema_stats('SYSTEM',options=>'GATHER', estimate_percent => 100, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
    exec dbms_stats.gather_schema_stats('SYS',options=>'GATHER', estimate_percent => 100, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
    exec dbms_stats.gather_fixed_objects_stats();
    commit;
    exec dbms_stats.DELETE_TABLE_STATS('SYS','X$KCCRSR');
    exec dbms_stats.LOCK_TABLE_STATS('SYS','X$KCCRSR');
    commit;
    The last 3 commands resolve problems with RMAN, in case you are using it.
    Rman Backup is Very Slow selecting from V$RMAN_STATUS
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=375386.1
    Poor performance when accessing V$RMAN_BACKUP_JOB_DETAILS
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=420200.1
    Troubleshooting Oracle Applications Performance Issues
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=169935.1
    Debugging General Performance Issues with Oracle Apps
    http://blogs.oracle.com/schan/newsItems/departments/optimizingPerformance/2007/05/18#a1548
    Performance Tuning the Apps Database Layer
    http://blogs.oracle.com/schan/newsItems/departments/optimizingPerformance/2007/05/17#a1562
    Preventing Apps 11i Performance Issues in Four Steps
    http://blogs.oracle.com/schan/newsItems/departments/optimizingPerformance/2007/05/21#a1566

  • Performance issue as the number of Oracle connections grows

    Hello,
    We are using Oracle 9i as the backend and VB 6.0 as the frontend. One connection is opened in the database server per client, so if 200 clients open the application then 200 connections are opened in the database server, and that is causing a performance issue.
    What we do today: if the frontend application is not in use for 10 minutes, the connection is closed and the frontend application is terminated; likewise, if the client terminates the application, the connection is closed on the server.
    Even though 200 clients connect to the server, only 4 to 5 clients at a time send a request to the server for retrieving or saving data.
    I want to open a fixed number of connections in the database server, say around 10 connections, and use these 10 connections for 200 clients.
    At first, 10 connections are opened and 10 clients use them. When one of those clients releases its connection after finishing its job, an 11th client should reuse that released connection instead of opening a new one. In the same way, whenever connections become free, other clients opening the application or performing a transaction should reuse them, so that the number of connections to be created can be reduced.
    Please give me a suggestion on how to reuse released connections for transactions and for opening the application.
    Thanking U with Regards,
    Sravan,
    Hyderabad.

    As Satish mentioned, Shared Server can be a good solution, but it would also be worth telling us how you determined that the number of connections is causing the performance issue. 200 is not that big a number, I guess. What's the OS, the exact database version, and the system and database details? Also, if you have a Statspack report, post that here too.
    HTH
    Aman....
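    For illustration, the Shared Server route amounts to capping the number of server processes so that all clients share them. A minimal sketch, assuming DBA access; the parameter values here are illustrative, not recommendations:
    ALTER SYSTEM SET SHARED_SERVERS = 10;        -- ~10 server processes shared by all clients
    ALTER SYSTEM SET MAX_SHARED_SERVERS = 20;    -- upper bound under peak load
    ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=2)';
    Clients then connect through a TNS entry with (SERVER=SHARED) instead of a dedicated server. Since only 4-5 of the 200 clients are active at any moment, a small shared pool like this matches the workload well.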

  • Oracle Forms6i Query Performance issue - Urgent

    Hi All,
    I'm using Oracle Forms 6i and Oracle DB 9i.
    I'm facing a performance issue in a query form.
    The detail block takes a long time to load the data.
    The form contains 2 non-data blocks:
    1.HDR - 3 input parameters
    2.DETAILS - Grid - Details
    HDR input fields:
    1.Company Code
    2.Company Account No
    3.Customer Name
    The Details grid displays the details.
    There are 2 tables involved:
    1.Table1 - 1 crore (10 million) records
    2.Table2 - 4 crore (40 million) records
    In a form procedure, one cursor is built and fetched directly, and the values are assigned to the form block fields.
    Below I've pasted the query (commas restored, and t1.entry_dt added to the GROUP BY so the statement is valid):
    SELECT
        t1.entry_dt,
        t2.authoriser_code,
        t1.company_code,
        t1.company_ac_no,
        initcap(t1.customer_name) cust_name,
        t2.agreement_no,
        t1.customer_id
    FROM
        table1 t1,
        table2 t2
    WHERE
        (t2.trans_no = t1.trans_no OR t2.temp_trans_no = t1.trans_no)
        AND t1.company_code = nvl(:hdr.l_company_code, t1.company_code)
        AND t1.company_ac_no = nvl(:hdr.l_company_ac_no, t1.company_ac_no)
        AND lower(t1.customer_name) LIKE lower(nvl('%'||:hdr.l_customer_name||'%', t1.customer_name))
    GROUP BY
        t1.entry_dt,
        t2.authoriser_code,
        t1.company_code,
        t1.company_ac_no,
        t1.customer_name,
        t2.agreement_no,
        t1.customer_id;
    Where clause analysis:
    1.Condition 1 uses an OR operator (in table2, two different columns are compared with one column in table1)
    2.LIKE operator
    3.All the columns have indexes, but they are not used properly; it is always a full table scan
    4.NVL check
    5.If I run the query in the back end it is a little faster; from the front end it is very slow
    Input parameters vs. retrieved row counts:
    Only company code: record count will be 50 - 500 records
    Only company code and company account number: record count will be 1 - 5 records
    Company code, company account number and customer name: record count will be 1 - 5 records
    I have tried the following (see the sketch below):
    1.Split the query using UNION (OR clause separated) - nested loops COST 850, nested loops COST 750 - index by row id - cost is 160, index by row id - cost is 152, full table access.................................
    2.Dynamic SQL build - 'DBMS_SQL.DEFINE_COLUMN .....
    3.Gave only one input parameter - nested loops COST 780, nested loops COST 780 - index by row id - cost is 148, index by row id - cost is 152, full table access.................................
    I'm still facing the same issue.
    Please help me out on this.
    Thanks and Regards,
    Oracle1001
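    For reference, attempt 1's UNION split usually looks like the sketch below (the poster's aliases are kept, the select list is shortened, and the remaining NVL/LIKE filters and the GROUP BY would be repeated in each branch). Each branch carries only one of the OR-ed join conditions, so the optimizer can drive each branch from its own index:
    SELECT t1.entry_dt, t2.authoriser_code, t1.company_code, t1.company_ac_no
    FROM   table1 t1, table2 t2
    WHERE  t2.trans_no = t1.trans_no           -- first OR branch
    UNION
    SELECT t1.entry_dt, t2.authoriser_code, t1.company_code, t1.company_ac_no
    FROM   table1 t1, table2 t2
    WHERE  t2.temp_trans_no = t1.trans_no;     -- second OR branch
    UNION (rather than UNION ALL) removes the rows matched by both conditions, at the cost of an extra sort.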

    Sudhakar P wrote:
    "the below query takes more than one minute while updating the records through Pro*C.
    Execute 562238 161.03 174.15 7 3932677 2274833 562238"
    Hi Sudhakar,
    If the database is capable of executing 562,238 update statements in one minute, then that's pretty good, don't you think?
    Your real problem is in the application code, which probably looks something like this in pseudocode:
    for i in (some set containing 562,238 rows)
    loop
      <your update statement with all the bind variables>
    end loop;
    If you transform your code to do a single update statement, you'll gain a lot of seconds.
    Regards,
    Rob.
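    As a sketch of Rob's point, with hypothetical table and column names: one set-based statement replaces the 562,238 loop iterations, so the whole row set is processed in a single execution:
    -- one statement instead of 562,238 single-row executions (names are hypothetical)
    UPDATE orders o
    SET    o.status = 'PROCESSED'
    WHERE  o.order_id IN (SELECT s.order_id FROM staging_orders s);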

  • Process flow/map performance issues

    We have some issues with our OWB-based application and we're looking to find out if there are different ways we could be using the tool, or features/options we've missed.
    We are trying to maintain a near real-time feed of data from a front-end system into our warehouse, which was built using OWB 10.2.0.3 over a 10.2.0.4 database. The bulk of the application consists of OWB maps with a few hand-written PL/SQL objects, all executed in a series of hierarchical OWB process flows. Maps/transformations are executed either sequentially or in parallel where the referential integrity of the model allows.
    The problem is that we have around 150 tables in the datamart which could potentially require updating on each refresh cycle, although in practice only a few tables have any activity on a typical refresh cycle. The cycle consists of loading data into a set of staging tables, and from there the data is transformed into the main schema, often with multiple maps per target table.
    On every cycle we run hundreds of maps, the vast majority of which process zero rows. Each map runs quickly and efficiently in its own right but collectively they add up to a 5 - 10 min cycle even if there is no data to process.
    There are 2 avenues which we'd like to explore and would be grateful if anyone could provide any pointers/suggestions :-
    1) It appears that each map opens and closes its own database session when it executes. I presume this was done because a single process flow could be constructed with maps executing in different target schemas, but we know that's not the case for us. We'd like to know if there is any way to configure the database connection at a higher level (e.g. process flow) so it opens a connection once and executes each of the maps (database packages) in that one session.
    Our DBAs are experimenting with 'shared server' settings at a database level which may help to some degree but won't be the whole story.
    2) Another option is simply to run fewer maps, e.g. load the staging area as now, collect stats on which staging tables contain new data, and then apply some logic such that subsequent maps only execute if the relevant staging table(s) contain some new data, otherwise bypass that map (see the sketch after this post).
    We tried experimenting with the 'Pre Mapping Process' operator, but essentially that just generates another function call from the map package, so we still have the overhead of opening a database session for each map to run the package. Minimal gain.
    We thought about adding a function call in the process flow before each map and then branching to either execute or bypass the map as appropriate, but the function call still requires opening/closing a database session each time so, once again, minimal gain.
    What we really want is some way for a map or process flow to check without logging onto the database repeatedly.
    Any ideas on the above, or other potential solutions anyone could suggest, would be greatly appreciated.
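    A hedged sketch for option 2: a small PL/SQL helper the flow could call to test whether a staging table received any rows this cycle (the function name and approach are assumptions, not OWB features):
    CREATE OR REPLACE FUNCTION stg_has_new_rows (p_table IN VARCHAR2)
      RETURN NUMBER
    IS
      l_cnt NUMBER;
    BEGIN
      -- ROWNUM = 1 stops the scan at the first row found;
      -- DBMS_ASSERT guards the concatenated identifier
      EXECUTE IMMEDIATE
        'SELECT COUNT(*) FROM ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table) ||
        ' WHERE ROWNUM = 1'
      INTO l_cnt;
      RETURN l_cnt;  -- 1 if the staging table has data this cycle, else 0
    END stg_has_new_rows;
    The poster's caveat still applies: each activity in the flow opens its own session, so it may pay to make a single call that checks all staging tables and returns one combined status string for the flow to branch on.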

    Hi,
    Please see if these documents help.
    Note: 554635.1 - Create Accounting Process Performs Poorly When 100K + Distributions are Passed for an Event
    Note: 954273.1 - Multiple Create Accounting Requests Result In Poor Performance For Online Accruals
    Note: 763500.1 - R12: Performance Issue with Create Accounting
    Note: 733637.1 - R12:Performance Issue When Running Accounting Program Xlaaccup
    Note: 781311.1 - Create Accounting Process Taking A Long Time To Complete After Appying Critical Patches
    Note: 557869.1 - EBS: R12 Oracle Financials Critical Patches
    Regards,
    Hussein

  • Performance Issue while Joining two Big Tables

    Hello Experts,
    We have the following scenario: we need the sales rep associated with a sales order. This information is available in the VBPA table, with Sales Order, Sales Order Item and Partner Function being the key.
    Now I'm interested in only one Partner Function, e.g. 'ZP'. This table has around 120 million records.
    I tried both options:
    Option 1 - Join this table(VBPA) with Sales Order Item table(VBAP) within the Data Foundation Layer of the Analytic View and doing the filtering on Partner Function
    Option 2 - Create a Attribute View for VBPA having filtering on Partner Function and then join this Attribute View in the Logical Join Layer with the Data Foundation table.
    Both these options are killing the performance.
    Is there any way to achieve this?
    Your expert opinion is greatly appreciated!!
    Thanks & regards,
    Jomy

    Hi,
    Lars is correct. You may have to spend a little bit more time and give a bigger picture.
    I have used this join. It takes about 2 to 3 seconds to execute this join for me. My data volume is less than yours.
    You must have used a left outer join when joining the attribute view (with the constant filter ZP as specified in your first post) to the data foundation. Please cross-check once again, as sometimes my fat finger inadvertently changed the join type and I had to go back and fix it. If this is a left outer join or a referential join, HANA does not perform the join if you are not requesting any field from the attribute view on table VBPA. This used to be a problem due to a bug in SP4 but got fixed in SP5.
    However, if you have performed this join in the data foundation, it does enforce the join even if you did not ask for any fields from the VBPA table. The reason is that you have put a constant filter ZR there (the LIPS->VBPA join in the data foundation, as specified in one of your later replies).
    If any measure you are selecting in the analytic view is a restricted measure or a calculated measure that needs some field from VBPA, then the join will be enforced, as you would agree. This is where I had the most trouble. My join itself is not bad, but my business requirement to get the current value of a partner attribute on a higher-level calculation view sent too much data from the analytic view to the calculation view.
    Please send the full diagram of your model and the vizplan. Also, if you are using a front end (like Analysis Office), please trap the SQL sent from this front-end tool and include it in the message. Even the straight SQL in which you detected this performance issue would be helpful.
    Ramana

  • Performance issues - v7.0.112 microsoft

    Hi,
    I have been experiencing severe performance issues with BPC, whereby response times have been extremely slow and BPC for Excel often hangs with the following message:
    "microsoft office excel is waiting for another application to complete an ole action"
    Other issues have taken the form of a modify/process of an application taking over 20 minutes to complete the "Make OLAP database and Journal/Audit reports" step instead of around 4, or not completing at all, giving the message:
    "An unhandled exception has occurred in your application........ Object reference not set to an instance of an object"
    Is there a top 5 of things to investigate, given that I have access to the application and database servers?
    I am not familiar with the BPC install process and architecture, as I primarily work with the front end, so forgive me if I am missing something obvious.
    Questions I am trying to answer are
    - Is it the database server
    - Is it the application server
    - Is it the application configuration
    - Is it the design of the application and input schedules
    - Is it the network ie communication between the app and SQL server
    Any help would be greatly appreciated
    Phil

    Hi all,
    You have to make the following changes:
    1. In the AppServer database:
    - In tblappsetinfo, for your appsets, add pooling=false; to the connection string.
    It should look like:
    Initial Catalog=ApShell;Data Source="yourserver";Connect Timeout=90;integrated security=SSPI;pooling=false;
    - In tblserverdefaults, add pooling=false; at the end of the connection string with keyid "Constr".
    It should look something like:
    Initial Catalog=%DBNAME%;Data Source=%SQLServer%;Connect Timeout=90;integrated security=SSPI;pooling=false;
    2. In Outlooksoft.config, in folder ...BPC\Websrvr\bin, change:
    <add key="Database_AppServerDBConn" value="Server=IWDF0119;Database=AppServer;Trusted_Connection=True;"/>
    to
    <add key="Database_AppServerDBConn" value="Server=IWDF0119;Database=AppServer;Trusted_Connection=True;pooling=false"/>
    I think there is also a note specifying this, but for the moment I haven't found the number.
    Regards
    Sorin Radulescu

  • PERFORMANCE ISSUE IN LOV(ORACLE FORMS)

    I have a requirement to populate an LOV in a form which is taking a LOT of TIME (performance issue).
    The record group query is as follows:
    select segment1 INVENTORY_ITEM ,
    inventory_item_id,
    description,
    primary_uom_code,
    decode(service_item_flag, 'Y', service_duration, NULL) service_duration,
    service_duration_period_code,
    shippable_item_flag,
    Decode(bom_item_type ,
    1,'MDL',2,'OPT',3,'PLN',4,
    Decode( service_item_flag,'Y','SRV',
    Decode( serviceable_product_flag,'Y','SVA','STD'))) item_type_code
    from mtl_system_items_b --table name
    where organization_id = :QOTLNDET_LINES.ORGANIZATION_ID
    AND (bom_item_type = 1 or bom_item_type = 4)
    AND vendor_warranty_flag = 'N'
    AND primary_uom_code <> 'ENR'
    AND ((:QOTLNDET_LINES.LINE_CATEGORY_CODE = 'ORDER' and customer_order_enabled_flag = 'Y') OR
    (:LINE_CATEGORY_CODE = 'RETURN' and NVL(returnable_flag, 'Y') = 'Y'))
    AND segment1 like :QOTLNDET_LINES.INVENTORY_ITEM || '%'
    Whenever I give :QOTLNDET_LINES.INVENTORY_ITEM from the front end, this LOV needs to be displayed.
    IT IS TAKING MORE THAN 3 MINUTES, DEPENDING ON THE ITEM GIVEN.
    PLEASE SUGGEST HOW TO REDUCE THIS TIME.
    Thanks,
    Durga Srinivas
    Edited by: DurgaSrinivas_886836 on May 31, 2012 5:14 PM

    I had an idea:
    record_group1=
    select segment1 INVENTORY_ITEM ,
    inventory_item_id,
    description,
    primary_uom_code,
    decode(service_item_flag, 'Y', service_duration, NULL) service_duration,
    service_duration_period_code,
    shippable_item_flag,
    Decode(bom_item_type ,
    1,'MDL',2,'OPT',3,'PLN',4,
    Decode( service_item_flag,'Y','SRV',
    Decode( serviceable_product_flag,'Y','SVA','STD'))) item_type_code
    from mtl_system_items_b --table name
    where organization_id = :QOTLNDET_LINES.ORGANIZATION_ID
    AND (bom_item_type = 1 or bom_item_type = 4)
    AND vendor_warranty_flag = 'N'
    AND primary_uom_code <> 'ENR'
    AND ((:QOTLNDET_LINES.LINE_CATEGORY_CODE = 'ORDER' and customer_order_enabled_flag = 'Y') OR
    (:LINE_CATEGORY_CODE = 'RETURN' and NVL(returnable_flag, 'Y') = 'Y'))
    AND segment1 like :QOTLNDET_LINES.INVENTORY_ITEM
    Record_group2 =
    select segment1 INVENTORY_ITEM ,
    inventory_item_id,
    description,
    primary_uom_code,
    decode(service_item_flag, 'Y', service_duration, NULL) service_duration,
    service_duration_period_code,
    shippable_item_flag,
    Decode(bom_item_type ,
    1,'MDL',2,'OPT',3,'PLN',4,
    Decode( service_item_flag,'Y','SRV',
    Decode( serviceable_product_flag,'Y','SVA','STD'))) item_type_code
    from mtl_system_items_b --table name
    where organization_id = :QOTLNDET_LINES.ORGANIZATION_ID
    AND (bom_item_type = 1 or bom_item_type = 4)
    AND vendor_warranty_flag = 'N'
    AND primary_uom_code <> 'ENR'
    AND ((:QOTLNDET_LINES.LINE_CATEGORY_CODE = 'ORDER' and customer_order_enabled_flag = 'Y') OR
    (:LINE_CATEGORY_CODE = 'RETURN' and NVL(returnable_flag, 'Y') = 'Y'))
    AND segment1 like :QOTLNDET_LINES.INVENTORY_ITEM || '%'
    If the full item name is given, then I will dynamically assign Record_group1; otherwise I will assign Record_group2, using Set_LOV_Property(), so that when the full item name is given the LOV is populated quickly.
    Please suggest which triggers I should use.
    Edited by: DurgaSrinivas_886836 on May 31, 2012 6:49 PM
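    A hedged sketch of the trigger side: a KEY-LISTVAL trigger on the item can switch the LOV's record group before showing it. The LOV and record group names below are assumptions, and "full item name" is approximated here as a value entered without a wildcard:
    DECLARE
      lv_lov LOV := FIND_LOV('INVENTORY_ITEM_LOV');
    BEGIN
      IF INSTR(NVL(:QOTLNDET_LINES.INVENTORY_ITEM, '%'), '%') = 0 THEN
        -- full item name entered: the exact-match group populates fastest
        SET_LOV_PROPERTY(lv_lov, GROUP_NAME, 'RECORD_GROUP1');
      ELSE
        -- partial name (or empty): fall back to the wildcard group
        SET_LOV_PROPERTY(lv_lov, GROUP_NAME, 'RECORD_GROUP2');
      END IF;
      LIST_VALUES;
    END;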

  • Performance issue using WDPortalUtils.getService

    Hi,
    I have an application with multiple componnets inside a main component.
    If I perform an ADD using the ADD component, it does not refresh in the front end, but it does refresh in the back end.
    But if I refresh the application, the data gets refreshed in the front end too.
    I am refreshing my application for every operation with the following code.
    Will it cause any performance issue?
    if (WDPortalUtils.isRunningInPortal()) {
        // restart the application via the portal page service
        IWDPageService pageService = (IWDPageService) WDPortalUtils.getService(WDPortalServiceType.PAGE_SERVICE);
        pageService.restartApplication();
    }
    Regards
    Padma

    Anup,
    What will happen behind the scenes when the application restarts?
    What are the other ways of achieving the same behavior, like getting the application state back to its initial state?

  • Report Performance Issue - Activity

    Hi gurus,
    I'm developing an Activity report using the transactional database (online real-time objects).
    The purpose of the report is to list all contact-related activities, and activities NOT related to a contact, by activity owner (user ID).
    In order to fulfill that requirement I've created 2 reports:
    1) All activities related to a contact -- Report A
    pulls in Activity ID, Activity Type, Status, Contact ID
    2) All activities not related to a contact UNION all activities related to a contact (base report) -- Report B
    To get the list of activities not related to a contact I'm using an advanced filter based on the result of another request, which I think is the part that slows down the query:
    <Activity ID not equal to any Activity ID in Report B>
    Has anyone encountered performance issues due to an advanced filter in Analytics before?
    Any input is really appreciated.
    Thanks in advance,
    Fina

    Fina,
    Union is always the last option. If you can get all records in one report, do not use a union.
    Since all the records you are targeting are in the Activity subject area, it is not necessary to combine reports. Add a column with the following logic:
    if contact id is null (or = 'Unspecified') then owner name else contact name
    Hopefully, this helps.
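    That column, written as a hedged CASE-expression sketch (the exact column names depend on the subject area and are assumptions here):
    CASE WHEN Contact."Contact ID" IS NULL OR Contact."Contact ID" = 'Unspecified'
         THEN Employee."Owner Name"
         ELSE Contact."Contact Name"
    END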

  • Report performance Issue in BI Answers

    Hi All,
    We have a performance issue with reports. A report runs for more than 10 minutes. We took the query from the session log and ran it in the database, where it took no more than 2 minutes. We have verified that there are proper indexes on the WHERE clause columns.
    Could anyone suggest how to improve performance in BI Answers?
    Thanks in advance,

    I hope you don't have many CASE statements and complex calculations in Answers.
    The next thing to monitor is how many rows of data you are trying to retrieve from the query. If the volume is huge, then it takes time to do the formatting in Answers, since you are dumping huge volumes of data. A database (like Teradata) initially returns around 1-2000 records; if you hit "show all records", then even the database will take a fair amount of time if you are returning many records.
    hope it helps
    thanks
    Prash

  • BW BCS cube(0bcs_vc10 ) Report huge performance issue

    Hi Masters,
    I am working on a solution for a BW report developed on the 0bcs_vc10 virtual cube.
    Some of the queries take 15 to 20 minutes to execute the report.
    This is a huge performance issue. We are using BW 3.5, and the report is developed in BEx and published through the portal. If anyone has faced a similar problem, please advise how you tackled this issue. Please describe in detail the analysis approach you used to resolve it.
    The current service pack levels we are using are:
    SAP_BW 350 0016 SAPKW35016
    FINBASIS 300 0012 SAPK-30012INFINBASIS
    BI_CONT 353 0008 SAPKIBIFP8
    SEM-BW 400 0012 SAPKGS4012
    Best of Luck
    Chris

    Ravi,
    I already did that; it is not helping the performance much. Reports are taking 15 to 20 minutes. I wanted to know whether anybody in this forum has had the same issue and how they resolved it.
    Regards,
    Chris

  • Interested in performance issues?  Read this!  If you can explain it, you're a master Jedi!

    This is the question we will try to answer...
    What is the hardware bottleneck of Adobe Premiere Pro CS6?
    I used PPBM5 as a benchmark testing template.
    All the data and logs were collected using performance counters.
    First of all, let me describe my computer...
    Operating System
    Microsoft Windows 8 Pro 64-bit
    CPU
    Intel Xeon E5 2687W @ 3.10GHz
    Sandy Bridge-EP/EX 32nm Technology
    RAM
    Corsair Dominator Platinum 64.0 GB DDR3
    Motherboard
    EVGA Corporation Classified SR-X
    Graphics
    PNY Nvidia Quadro 6000
    EVGA Nvidia GTX 680   // Yes, I created bench stats for both cards
    Hard Drives
    16.0GB Romex RAMDISK (RAID)
    556GB LSI MegaRAID 9260-8i SATA3 6GB/s 5 disks with Fastpath Chip Installed (RAID 0)
    I have other RAID installed, but not relevant for the present post...
    PSU
    Corsair 1000 Watts
    After many days of tests, I want to share my results with the community and comment on them.
    CPU Introduction
    I tested my CPU and pushed it to maximum speed to understand where the limit is and whether I can reach it, and I've logged all results precisely in a graph (see picture 1).
    Intro: I tested my E5 XEON 2687W (8 cores hyperthreaded - 16 threads) to see whether programs can use all of it. I used Prime95 to get the result.  // I know this seems ordinary, but you will understand soon...
    The result: Yes, I can get 100% of my CPU with 1 program using 20 threads in parallel. The CPU gives everything it can!
    Comment: I put 3 IO counters (CPU, disk, RAM) on the graph of my computer during the test...
    (picture 1)
    Disk Introduction
    I tested my disk and pushed it to maximum speed to understand where the limit is, and I've logged all results precisely in a graph (see picture 2).
    Intro: I tested my 556GB RAID 0 (LSI MegaRAID 9260-8i SATA3 6GB/s, 5 disks, with FastPath chip installed) to see whether I can reach maximum disk usage (0% idle time).
    The result: As you can see in picture 2, yes, I can get the max out of my drive at ~1.2 Gb/sec read/write, steady!
    Comment: I put 3 IO counters (CPU, disk, RAM) on the graph of my computer during the test to see the impact of transferring many GB of data during ~10 sec...
    (picture 2)
    Now I know my limits! It's time to go deeper into the subject!
    PPBM5 (H.264) Result
    I rendered the sequence (H.264) using Adobe Media Encoder.
    The result:
    My CPU is not used at 100%; it hovers around 50%.
    My disk is totally idle!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate seems to be a wave (up and down). Probably caused by (encrypt time....  write.... encrypt time.... write...)  // It's ok, ~5Mb/sec transfer rate!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
    RAM: more than enough! 39 GB of RAM free after the test!  // Excellent
    ~65 threads opened by Adobe Media Encoder (good, threads are a sign that the program tries to use many cores!)
    GPU load on the card also seems to be a wave (up and down): ~40% GPU usage during the encoding process.
    GPU RAM usage is 1.2 GB (but with the GTX 680, no problem, and the Quadro 6000 with 6 GB RAM, no problem!)
    Comment/Question: CPU is free (50%), disks are free (99%), GPU is free (60%), RAM is free (62%); my computer is not pushed to its limit during the encoding process. Why???? Is there some time delay in the encoding process?
    Other: Quadro 6000 & GTX 680 give the same result!
    (picture 3)
    PPBM5 (Disk Test) Result (RAID LSI)
    I rendered the sequence (Disk Test) using Adobe Media Encoder on my RAID 0 LSI disk.
    The result:
    My CPU is not used at 100%.
    My disk waves and waves again, but far, far from the limit!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate waves and waves again (up and down). Probably caused by (buffering time....  write.... buffering time.... write...)  // It's ok, ~375Mb/sec peak transfer rate!  Easy!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
    RAM: more than enough! 40.5 GB of RAM free after the test!  // Excellent
    ~48 threads opened by Adobe Media Encoder (good, threads are a sign that the program tries to use many cores!)
    GPU load on the card = 0 (this kind of encoding is GPU-irrelevant)
    GPU RAM usage is 400 MB (no usage for encoding)
    Comment/Question: CPU is free (65%), disks are free (60%), GPU is free (100%), RAM is free (63%); my computer is not pushed to its limit during the encoding process. Why???? Is there some time delay in the encoding process?
    (picture 4)
    PPBM5 (Disk Test) Result (Direct to RAMDrive)
    I rendered the same sequence (Disk Test) using Adobe Media Encoder directly to my RAM drive.
    Comment/Question: Look at the transfer rate in picture 5. It's exactly the same speed as with my RAID 0 LSI controller. Impossible! Look in the same picture at the transfer rate I can reach with the RAM drive (> 3.0 Gb/sec steady), and I don't go under 30% of disk usage. CPU is idle (70%), disk is idle (100%), GPU is idle (100%) and RAM is free (63%).  // This kind of result leaves me REALLY confused. It smells like a bug and a big problem with hardware and IO usage in CS6!
    (picture 5)
    PPBM5 (MPEG-DVD) Result
    I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
    The result:
    My CPU is not used at 100%.
    My disk is totally idle!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate waves and waves again (up and down). Probably caused by (encoding time....  write.... encoding time.... write...)  // It's ok, ~2Mb/sec transfer rate!  A real joke!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
    RAM: more than enough! 40 GB of RAM free after the test!  // Excellent
    ~80 threads opened by Adobe Media Encoder (a lot of threads, but that's ok in multi-threaded apps!)
    GPU load on the card = 100 (this uses the maximum of my GPU)
    GPU RAM usage is 1 GB.
    Comment/Question: CPU is free (70%), disks are free (98%), GPU is loaded (MAX), RAM is free (63%); my computer is pushed to its limit during the encoding process for the GPU only. So for this kind of encoding, the speed limit is set by the slowest IO (the video card's GPU).
    Other: The Quadro 6000 is slower than the GTX 680 for this kind of encoding (~20 s slower than the GTX).
    (picture 6)
    Encoding a single FULL HD AVCHD clip to H.264 (Premiere Pro CS6) Result
    You can look at the result in the picture.
    Comment/Question: CPU is free (55%), disks are free (99%), GPU is free (90%), RAM is free (65%); my computer is not pushed to its limit during the encoding process. Why????   Adobe Premiere seems to have some bug with thread management. My hardware is idle! I understand AVCHD can be very difficult to decode, but where is the waste? My computer is willing, but the software is not!
    (picture 7)
    Render composition using the 3D ray tracer in After Effects CS6
    You can look at the result in the picture.
    Comment: The GPU seems to be the bottleneck when using After Effects. CPU is free (99%), disks are free (98%), memory is free (60%), and it depends on the settings and type of project.
    Other: The Quadro 6000 & GTX 680 give the same rendering time for the composition.
    (picture 8)
    Conclusion
    There is nothing you can do (I think) with CS6 to get better performance right now. The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card. Both cards give really similar results (I will probably return my GTX 680 since I don't really get any better performance from it). I haven't used a Tesla card with my Quadro, but currently neither Premiere Pro nor After Effects uses multiple GPUs. I tried to use both cards together (GTX & Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
    Premiere Pro, I'm speechless! Premiere Pro is not able to get the maximum performance out of my computer. Not just 10% or 20%, but 60% on average. I'm a programmer; multi-threaded apps are difficult to manage and I can understand Adobe's programmers. But if anybody has comments about this post, tricks, or any kind of solution, please comment on this post. It seems to be a bug...
    Thank you.

    Patrick,
    I can't explain everything, but let me give you some background as I understand it.
    The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
    The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
    As to your test results, there are some other noteworthy things to mention. 3D Ray tracing in AE is not very good in using all CUDA cores. In fact it is lousy, it only uses very few cores and the threading is pretty bad and does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
    The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
    You see very low GPU usage during the H.264 test, since there are only very few accelerated parts in the timeline, in contrast to the MPEG2-DVD test, where there is rescaling going on and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above and is the reason that my own Disk I/O results are only 33 seconds with the current test, but when I extend the duration of that timeline to 3 hours, the direct export method gives me 22 seconds, although the amount of data to be written, 37,092 MB has increased threefold. An effective write speed of 1,686 MB/s.
    There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
    Just my $ 0.02

  • Performance Issue for BI system

    Hello,
    We are facing performance issues with our BI system. It is a pre-production system and its performance is degrading badly every day. While checking the system I found that the program buffer is swapping heavily, which is hurting its hit ratio, so the parameter abap/buffersize was changed from 300 MB to 500 MB. But still no major improvement is seen in the system.
    There is 16 GB of RAM available; the server is HP-UX, with NetWeaver 2004s and Oracle 10.2.0.4.0 installed.
    The main problem is that running a report or creating a query takes far too long.
    Kindly help me.

    Hello Siva,
    Thanks for your reply, but I have checked ST02 and ST03 and also SM50, and they look normal.
    We have 9 dialog processes, 3 background, 2 update and 1 spool.
    No one is using the system currently, but in ST02 I can see the swaps are in red.
    Buffer                 HitRatio%   Alloc.KB  Freesp.KB  %FreeSp.  Dir.Size  FreeDirEnt  %FreeDir     Swaps     DB Accs
    Nametab (NTAB)
      Table definition         99,60      6.798                                  20.000                           29.532     153.221
      Field definition         99,82     31.562        784      2,61    20.000       6.222     31,11    17.246      41.248
      Short NTAB               99,94      3.625      2.446     81,53     5.000       2.801     56,02         0       2.254
      Initial records          73,95      6.625        998     16,63     5.000         690     13,80    40.069      49.528
    Program *                  97,66    300.000      1.074      0,38    75.000      67.177     89,57   219.665     725.703
    CUA                        99,75      3.000        875     36,29     1.500       1.401     93,40    55.277       2.497
    Screen                     99,80      4.297      1.365     33,35     2.000       1.811     90,55       119       3.214
    Calendar                  100,00        488        361     75,52       200          42     21,00         0         158
    OTR                       100,00      4.096      3.313    100,00     2.000       2.000    100,00         0
    Tables
      Generic Key              99,17     29.297      1.450      5,23     5.000         350      7,00     2.219   3.085.633
      Single record            99,43     10.000      1.907     19,41       500         344     68,80        39     467.978
    Export/import              82,75      4.096         43      1,30     2.000         662     33,10   137.208
    Exp./Imp. SHM              89,83      4.096        438     13,22     2.000       1.482     74,10         0
    * the program buffer row highlighted (bold) in the original post

    SAP Memory          Curr.Use%  CurUse[KB]  MaxUse[KB]   In Mem[KB]  OnDisk[KB]  SAPCurCach  HitRatio%
    Roll area                2,22       5.832      22.856      131.072     131.072  IDs             96,61
    Page area                1,08       2.832      24.144       65.536     196.608  Statement       79,00
    Extended memory         22,90     958.464   1.929.216    4.186.112           0                   0,00
    Heap memory                              0           0    1.473.767           0                   0,00

    Call Stati        HitRatio%   ABAP/4 Req   ABAP Fails   DBTotCalls  AvTime[ms]  DBRowsAff.
    Select single         88,59   63.073.369    5.817.659    4.322.263           0  57.255.710
    Select                72,68  284.080.387            0   13.718.442           0  32.199.124
    Insert                 0,00      151.955        5.458      166.159           0     323.725
    Update                 0,00      378.161       97.884      395.814           0     486.880
    Delete                 0,00      389.398      332.619      415.562           0     244.495
    Edited by: Srikanth Sunkara on May 12, 2011 11:50 AM
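    For reference, abap/buffersize is maintained in the instance profile and is specified in KB, so the change described above would look something like this sketch (the exact value is an assumption):
    abap/buffersize = 512000    # program buffer size in KB (~500 MB)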

Maybe you are looking for

  • Item out of stock - Otterbox Alpha Glass

    Hello, I've been waiting for this item to come back in stock, but there seems to be no sign of that happening. Do you happen to know when bestbuy.com will restock this item?: http://www.bestbuy.com/site/otterbox-alpha-glass-screen-protector-for-app

  • Htmldb_item date_popup populating wrong row

    Hi guys, I've noticed a few similar threads previously posted on the forums about this issue; unfortunately mine is occurring in APEX 2.2. When I try to populate a date pop-up field it populates the wrong field. I want to be able to order my report res

  • Error formatting Cross-Tab table

    Hello! My CR2008 reports include many cross-tab tables and until now, everything went smoothly... until I got the message "Error formatting Cross-Tab table" while viewing the print preview. Technically the error comes up while clicking one of the grou

  • HP Pavilion n028sr, hard disk emits sounds

    Is it normal for this model?

  • Problems of iOS7: can't use email

    After upgrading to iOS 7 on my iPad 2, I can't use some of the apps, such as email. The system is not stable and it doesn't look smooth on my iPad 2. Can I downgrade to iOS 6?