EBS performance prob

Hi everyone,
I am facing a problem of slow response at the client end; EBS pages open very slowly or just hang. My EBS version is 11.5.10, the DB is Oracle 9.2.0.5 and the OS is HP-UX 11.11. I ran Statspack to look at the wait events and got high waits on the following, in particular "SQL*Net message from client":
Event                          Waits      Timeouts   Total Wait Time (s)   Avg Wait (ms)   Waits/txn
enqueue                        55,365     53,751     163,230               2948            2.4
SQL*Net message from client    2,464,948  0          660,740               268             107.4
Any suggestions?

We only patched our EBS to CU2, i.e. 11.5.10.2.
Did you run the Gather Schema Statistics concurrent program after the upgrade?
Is there any way to see which users are connected to EBS at a given time, along with their IP addresses? (See the sketch after the links below.)
Note: 295206.1 - How to Count Total Number of Users Connected to ORACLE Application
https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=295206.1
USERS CONNECTED
Re: USERS CONNECTED
Number of users logged in last two months
Number of users logged in last two months
No of Apps users log-in
Re: No of Apps users log-in.
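As a rough starting point, something like the following lists active self-service sessions per application user; it is only a sketch against ICX_SESSIONS and FND_USER (session-timeout logic and columns can differ by 11i patch level, so verify on your instance), and client machine/program details for database connections come from V$SESSION:

-- Sketch: active ICX (self-service) sessions per application user.
SELECT fu.user_name,
       COUNT(*) AS active_sessions
  FROM icx_sessions ic,
       fnd_user     fu
 WHERE ic.user_id = fu.user_id
   AND ic.disabled_flag = 'N'
   AND ic.last_connect > SYSDATE - 1/24   -- touched within the last hour
 GROUP BY fu.user_name
 ORDER BY active_sessions DESC;

-- Client machine / program / module of current database sessions:
SELECT s.username, s.machine, s.program, s.module
  FROM v$session s
 WHERE s.type = 'USER';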

Similar Messages

  • EBS performance impact using it as a Data Source

    I have a quick question on EBS performance. If I set up the EBS Database as a data source for SSRS (SQL Server Reporting Services), would there be a performance impact on EBS, due to SSRS accessing EBS Data for reports generation? Now, I know there'll always be a hit depending on the volume of data being accessed. But, my question is, will it be significantly higher using an external reporting tool using an ODBC connection rather than native XML Publisher.

    Hi,
    Tough to answer without looking at data; my suggestion would be to set up a test EBS environment, get permission from the vendors to run a performance test without buying a license, compare AWR reports from both scenarios, and then decide.
    Generally speaking, native XML Publisher (BI Publisher) has less of a database performance hit than external reporting tools using ODBC.
    Hope this helps.
    Regards,
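    If it helps when comparing the two scenarios, the AWR report itself is generated from SQL*Plus on the database tier; the script prompts for report type and snapshot IDs:

    -- Run as a DBA-privileged user in SQL*Plus (prompts for snapshots and format).
    @?/rdbms/admin/awrrpt.sql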

  • EBS Performance Check

    Hi,
    The database I am currently working on is about 400 GB. There is a lot of load during the month-end period and even more during the financial year end. On a regular basis, if we confront performance issues, we find the most CPU- and I/O-consuming sessions and accordingly take a call on them. But at times the data uploaded and the number of active sessions are so high that even after identifying and terminating the I/O- and CPU-consuming sessions, performance issues still surface. Kindly advise me what other factors I should check (resources allocated, network tests, etc.) so that action can be taken accordingly. My application version is 12.0.6 and my database version is 10.2.0.4.
    Also, DB and CAR are on one node, and web and forms are on the other.
    Thanks,
    rdxdba

    Hi;
    There can be many reasons (hardware, settings, network, etc.). I suggest you check the threads below, which were posted by Hussein Sawwan; if you have a performance problem you can find notes and advice in them.
    EBS application running very slow
    Application running slow
    Re: Performance is very slow
    Slow performance EBS
    Also check:
    Re: How to improve 11i ebs performence
    Hope it helps
    Regard
    Helios
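    In addition to the notes above, a quick way to spot the top CPU- and I/O-consuming sessions at a point in time is to query the session statistics directly; this is only a sketch (the ordering and any thresholds are arbitrary, adjust to your needs):

    -- Top user sessions by CPU (centiseconds), with physical reads alongside.
    SELECT s.sid, s.serial#, s.username, s.module,
           st.value          AS cpu_centiseconds,
           io.physical_reads
      FROM v$session  s,
           v$sesstat  st,
           v$statname sn,
           v$sess_io  io
     WHERE st.sid        = s.sid
       AND io.sid        = s.sid
       AND st.statistic# = sn.statistic#
       AND sn.name       = 'CPU used by this session'
       AND s.type        = 'USER'
     ORDER BY st.value DESC;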

  • How to judge performance prob

    Dear All,
    I have gone through ST03, ST06, ST02 and ST04 several times but never reach a conclusion about where the problem is.
    What do I have to check?
    Total CPU, avg. CPU time, total response time, avg. response time, total DB time, avg. DB time?
    Which background task is taking too much time?
    Is the system busy with RFC calls?
    Is any transaction taking too much CPU time but not much DB time?
    How do I measure these times against a benchmark, the way a doctor checks blood pressure against a range of 80 to 120?
    What should I check in ST02?
    I think I am looking for threshold values.
    Suppose CPU time is 3,200 ms - is that high or normal?
    I am very confused about performance.
    Please advise.
    Any good docs, links, wikis, tips?

    As long as no user complains and all jobs run OK, you are basically fine.
    But let me give an example: you have a user complaining that transaction X is running slowly. Now you need to find out what is taking the largest part of the response time. I often just monitor the work process in transaction SM50, constantly refreshing while the transaction is running. If you see the work process stuck on one single database table, you will have to look that up. If there are hardly any database tables showing, most of the time is spent in ABAP code. You can also use transaction ST03 to figure out where the main part of the response time is spent.
    Depending on this, you can do an SQL trace with ST05 (if the time is spent on the database), or a runtime analysis in SE30 to find which parts consume the most time. Or you use the debugger to capture a running transaction; often the place where you land is the part taking the most time.
    Causes for high database times are often missing indexes, wrong database access (a wrong CBO decision) or suboptimal coding.
    Causes for high ABAP times are often nested loops, searching unsorted lists, etc.
    Regards, Michael

  • What EBS performance gains can I expect moving non-x86 (sun?) to x86?

    Hi,
    I was hoping some of you would share any general performance gains you encountered by moving your EBS from non-x86 to x86. I'm familiar with the benchmarks from tpc.org and spec.org; the users, however, measure performance by how long it takes for a request to complete. For example, when we moved our EBS from a two-node Sun E3500 (4 x 450 MHz SPARC II, 8 GB memory) to a two-node V440 (4 x 1.28 GHz SPARC IIIi, 8 GB memory), performance doubled across the board with a three-year payback.
    I am trying to 'guesstimate' what performance increase we might encounter, if any, moving from Sun SPARC to x86. We'll be doing our first dev/test migration in the first half of '08, but I thought I'd get a reading from all of you about what to expect.
    Right now we're planning on going with a single node: six dual-core 3 GHz x86 CPUs, 16 GB RAM, and external RAID 10 storage. We process approximately 1,000 payroll checks bi-weekly; our 'Payroll Process' takes 30 minutes to complete and, similarly, 'Deposit Advice' takes about 30 minutes. Our EBS database is a tiny 200 GB, we have a mere 80 concurrent users, and we run HRMS, PAY, PA, GL, FA, AP, AR, PO, OTL and Discoverer.
    Thanks for your feedback. These forums are great.
    L5

    Markus and David,
    First let me thank you for your posts. :-).
    Markus:
    Thank you for the tip. However, I usually do installations with a domain adm user. It does a lot of user switching, yes, but it only switches to users created by SAPINST, that is, most of the time it is switching to <sid>adm, which sounds perfect. At the time of my post I had been setting some environment variables so as to get the procedure to distribute the various pieces and bits (saparch, sapbackup, saptrace, origlogs and mirror logs, datafiles, etc.) exactly where I wanted them and not where the procedure wants them, so I ended up using <sid>adm to perform the DB instance installation and not the domain adm user I had installed the CI with (I forgot to change back). When I noticed, I figured it wouldn't make a difference since it usually switches to <sid>adm anyway. However, for the next attempts I settled on my initially created domain adm user, with no change to the results. OracleService<SID> usually logs on as a system account, so the issue doesn't arise, I think.
    and
    David:
    The brackets did it. Thank you so much. It went further and only crashed later. I don't usually potter around SDN, so I'm not familiar with the workings of this; I don't know how to reply separately to the posts and I don't know how to include a properly formatted post (I've seen the Plain Text help but I hate to bother with sidetrack details), so I apologize to all for the probably-too-compact jumble that will come out when I post this. I am now looking at the following problem (same migration to 64), so I fear I may have to close this post and come back with a new one if I can't solve this next issue.

  • Performance prob

    Hi All
    I have a problem. I wrote a procedure which deals with 30K rows with 25 columns. The procedure runs within 7 minutes here on NT, yet it takes 1 hour 30 minutes plus on HP-UX, both running Oracle 8i. What do I need to check to rectify the problem and gain performance? Everything else is the same, including code, indexes, etc.
    Suggestions welcome, please.
    Ashok

    Ashok,
    I'd say without question one (or more) of your SQL queries is doing a full table scan. I say this because the performance is WAY off, not just by a few seconds. Besides, the HP-UX performance would blow away NT's any day of the week. What you really need to do, and should have done in the first place, is go to every SIUD (select/insert/update/delete) statement in your proc and do an EXPLAIN PLAN on it. Use the free TOAD to do this; it's quick and easy.
    See example below ...
    CREATE OR REPLACE PROCEDURE SP_TEST
    AS
      --[LOCAL VARIABLES]
      LD_TEMP DATE;
    BEGIN
      --[MAIN]
      SELECT D1 INTO LD_TEMP FROM T1 WHERE T1.N1 = 123;
      --[EP] : FIX
      --SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=9)
      --  TABLE ACCESS (FULL) OF T1 (Cost=1 Card=1 Bytes=9)
      SELECT /*+ RULE */ D1 INTO LD_TEMP FROM T1 WHERE T1.N1 = 123;
      --[EP] : OK [RULE]
      --SELECT STATEMENT Optimizer=HINT: RULE
      --  TABLE ACCESS (BY INDEX ROWID) OF T1
      --    INDEX (RANGE SCAN) OF I_NU_T1_N1 (NON-UNIQUE)
    END SP_TEST;
    Notice the 2 SQL queries are the same and should use the index on column N1. In that case I had to tell the optimizer to use RULE so it would get used. So the solution for you is you have to go to every SIUD statement and do the explain plan [EP]. Even if you make up some fake data in the WHERE clause.
    I usually have a debugger with the pl/sql code I write that records the START and END times for each SIUD. This way you can find which queries are taking the longest to run. You should probably have something like this as well. Over all I'd say a table scan is occurring, if you find ones that are try adding the hint /*+ RULE */ or /*+ FIRST_ROWS */ or /*+ ALL_ROWS */. Check to make sure indexes are not missing.
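    For example, a minimal timing harness along those lines (just a sketch; it reuses the T1/N1 example from above, so substitute your own SIUD statements):

    DECLARE
      LN_START NUMBER;
      LN_ROWS  NUMBER;
    BEGIN
      LN_START := DBMS_UTILITY.GET_TIME;   -- hundredths of a second
      SELECT COUNT(*) INTO LN_ROWS FROM T1 WHERE T1.N1 = 123;   -- your statement here
      DBMS_OUTPUT.PUT_LINE('Step 1: ' || (DBMS_UTILITY.GET_TIME - LN_START)
                           || ' cs, rows=' || LN_ROWS);
    END;
    /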
    Also doing sql like the following (in the where clause) will force a full table scan even if the column is indexed. Be careful when using functions in the where clause.
    SELECT T1.N1 FROM T1 WHERE TO_CHAR(T1.N1) = 123;
    SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=2)
    TABLE ACCESS (FULL) OF T1 (Cost=1 Card=1 Bytes=2)
    Overall the time difference is much too great to really be anything else but full table scans occurring. It could be a record-locking problem, as this process might be waiting for other locks to get released, but I would do the EP for all the SIUD statements (and record start/end times for each). A good programmer will have done both anyway.
    Oh one more thing I just thought about, analyze the statistics for all the tables used in your proc. Even if the SQL is correct (and all the indexes) you should do this as without doing an analyze after loading these tables it might end up doing a full table scan. Some of the more common ones are shown below ...
    ANALYZE TABLE T1 COMPUTE STATISTICS;
    ANALYZE TABLE T1 COMPUTE STATISTICS FOR ALL INDEXES;
    ANALYZE TABLE T1 COMPUTE STATISTICS FOR ALL INDEXED COLUMNS;
    If you find full table scans occurring and can't get it to use an index, paste the code in here (and what indexes are present) and we'll have a look at it.
    Tyler D.

  • EBS performance?

    Dear all,
    EBS has suddenly become very slow... how can I see which processes are running?
    Or any hints on why performance is so slow?
    Thanks,
    Amy

    Hi Amy;
    1. Please see if any process is consuming high CPU.
       (Question: how can I see this? I am using an AIX server with a 9.2.0.3 database. Answer: try the topas command to see CPU usage.)
    2. Check whether your database has any locks (in TOAD: Database >> Monitor >> Session Browser >> Locks); see the sketch after this list for a plain SQL check.
    3. Check tablespace usage.
    4. If all of the above is OK and you haven't changed anything before the performance issue appeared, try stopping and restarting the application tier and the database (and, if possible, restart the server).
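    A minimal blocker/waiter check that works on 9i without TOAD (a sketch using V$LOCK; join the SIDs back to V$SESSION for user and program details):

    -- Sessions holding a blocking lock and the sessions waiting on it.
    SELECT h.sid  AS holder_sid,
           w.sid  AS waiter_sid,
           h.type AS lock_type
      FROM v$lock h, v$lock w
     WHERE h.block   = 1
       AND w.request > 0
       AND h.id1 = w.id1
       AND h.id2 = w.id2;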
    If you still have a performance issue, please check:
    Note: 744143.1 - Tuning performance on eBusiness suite
    Note: 864226.1 - How Can I Diagnose Poor E-Business Suite Performance?
    EBS, performence issue
    Re: EBS, performence issue
    Oracle Apps Tuning
    Re: Oracle Apps Tuning
    Note: 69565.1 - A Holistic Approach to Performance Tuning Oracle Applications Systems
    If you still have a performance issue after checking those notes and links, I suggest you log an SR.
    Regard
    Helios

  • EBS performance problem 10g

    Hi,
    This is output from a two-hour AWR report on EBS, Oracle version 10.2.0.3.
    It shows a few ITL waits that should be fixed by increasing INITRANS to 20.
    Currently INITRANS is set to 11.
    FREELISTS is set to 4 and FREELIST GROUPS is also set to 4.
    There are 8 indexes on this table (they all have the same INITRANS + FREELISTS + FREELIST GROUPS).
    The table contains 250 million records.
    The table is not partitioned.
    How can I find which of the 8 indexes I should deal with?
    They all start with the name "RA_CUST_TRX_LINE_GL_".
    Would you consider doing more things besides increasing INITRANS?
    My second question is regarding the last section, which shows the same indexes waiting on buffer busy waits.
    Is there an event I can use in order to find what causes the index to wait so many times?
    Please note that the issue is not whether to partition the table or not; I am already working in a test environment on partitioning the table.
    I would like to get your advice regarding the current situation.
    Thanks
    Segments by ITL Waits                     DB/Inst: xxx/xxx  Snaps: 8311-8313
    -> % of Capture shows % of ITL waits for each top segment compared
    -> with total ITL waits for all segments captured by the Snapshot
               Tablespace                      Subobject  Obj.           ITL    % of
    Owner         Name    Object Name            Name     Type         Waits Capture
    AR         AR_INDEX1  RA_CUST_TRX_LINE_GL_            INDEX           10   18.18
    AR         AR_INDEX1  RA_CUSTOMER_TRX_LINE            INDEX            9   16.36
    AR         AR_INDEX1  RA_CUST_TRX_LINE_GL_            INDEX            9   16.36
    AR         AR_INDEX1  AR_PAYMENT_SCHEDULES            INDEX            5    9.09
    AR         AR_INDEX1  AR_PAYMENT_SCHEDULES            INDEX            5    9.09
    Segments by Buffer Busy Waits             DB/Inst: xxx/xxx  Snaps: 8311-8313
    -> % of Capture shows % of Buffer Busy Waits for each top segment compared
    -> with total Buffer Busy Waits for all segments captured by the Snapshot
                                                                      Buffer
               Tablespace                      Subobject  Obj.          Busy    % of
    Owner         Name    Object Name            Name     Type         Waits Capture
    AR         AR_INDEX1  RA_CUST_TRX_LINE_GL_            INDEX       41,671   20.30
    AR         AR_INDEX1  RA_CUST_TRX_LINE_GL_            INDEX       22,248   10.84
    AR         AR_INDEX1  IL_RA_CUST_TRX_LINE_            INDEX       18,067    8.80
    AR         AR_INDEX1  RA_CUST_TRX_LINE_GL_            INDEX       15,571    7.58
    AR         AR_DATA    RA_CUST_TRX_LINE_GL_            TABLE       15,075    7.34
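    To see which of the truncated index names in the report is actually the hot one, one option is to read the cumulative segment statistics directly; just a sketch (values in V$SEGMENT_STATISTICS are cumulative since instance startup, so compare before/after your busy window):

    SELECT owner, object_name, object_type, statistic_name, value
      FROM v$segment_statistics
     WHERE owner = 'AR'
       AND object_name LIKE 'RA_CUST_TRX_LINE_GL%'
       AND statistic_name IN ('ITL waits', 'buffer busy waits')
     ORDER BY value DESC;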

    Hi Mr Lewis,
    You wrote:
    "Checking buffer busy waits - you need to know whether these are 'read by other session' or 'real' buffer busy waits, as this part of the report doesn't distinguish the classes. So check the wait times again for 'read by other session' and 'buffer busy waits' to see what the spread is."
    I rechecked the AWR report and found "read by other session" in the Top 5 events:
    Top 5 Timed Events                                         Avg %Total
    ~~~~~~~~~~~~~~~~~~                                        wait   Call
    Event                                 Waits    Time (s)   (ms)   Time Wait Class
    CPU time                                         10,998          39.9
    db file sequential read           2,545,797       8,793      3   31.9   User I/O
    read by other session             1,081,643       2,852      3   10.4   User I/O
    library cache pin                    18,450       1,253     68    4.5 Concurrenc
    db file scattered read              115,039       1,226     11    4.5   User I/O
              -------------------------------------------------------------
    And also:
                                                                           Avg
                                                 %Time  Total Wait    wait     Waits
    Event                                 Waits  -outs    Time (s)    (ms)      /txn
    db file sequential read           2,545,797     .0       8,793       3     142.3
    read by other session             1,081,643     .0       2,852       3      60.4
    library cache pin                    18,450     .1       1,253      68       1.0
    db file scattered read              115,039     .0       1,226      11       6.4
    log file parallel write             111,366     .0         803       7       6.2
    SQL*Net more data from clien         28,701     .0         661      23       1.6
    enq: TX - index contention           25,492     .0         303      12       1.4
    log file sync                        14,065     .0         205      15       0.8
    latch: cache buffers chains         670,408     .0         135       0      37.5
    Log archive I/O                       6,751     .0         120      18       0.4
    library cache load lock               1,539     .0          97      63       0.1
    buffer busy waits                   234,058     .0          61       0      13.1
    So there are roughly five times more "read by other session" waits than "buffer busy waits".
    What does that mean? What should I check?
    Thanks Again
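    One way to see which segment is behind the "read by other session" waits while they are occurring (a sketch; for this event P1 is the file# and P2 the block#, and the DBA_EXTENTS lookup can be slow on a large database):

    -- Current waiters on the event, grouped by block.
    SELECT p1 AS file_id, p2 AS block_id, COUNT(*) AS waiters
      FROM v$session_wait
     WHERE event = 'read by other session'
     GROUP BY p1, p2
     ORDER BY waiters DESC;

    -- Map a hot (file_id, block_id) pair back to its segment.
    SELECT owner, segment_name, segment_type
      FROM dba_extents
     WHERE file_id = :file_id
       AND :block_id BETWEEN block_id AND block_id + blocks - 1;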

  • Connect BY Performance Prob- [10.2.0.4 - No Additonal Patch Sets deployed]

    Hi All,
    Having trouble with CONNECT BY performance.
    All relevant columns are indexed, so I cannot see why this is taking so long to retrieve a 135-row result set from a 105,000-row table.
    For some reason the explain plan insists on Full Table Scans:
    SELECT STATEMENT ALL_ROWS
    Cost: 1,121 Bytes: 3,390,384 Cardinality: 70,633 CPU Cost: 203,799,167 IO Cost: 1,085                                         
         16 HASH JOIN Cost: 1,121 Bytes: 3,390,384 Cardinality: 70,633 CPU Cost: 203,799,167 IO Cost: 1,085                                    
              1 TABLE ACCESS FULL TABLE BANKREC.test_table Cost: 140 Bytes: 632,259 Cardinality: 70,251 CPU Cost: 42,653,073 IO Cost: 132                               
              15 VIEW BANKREC. Cost: 665 Bytes: 3,870,321 Cardinality: 99,239 CPU Cost: 124,660,113 IO Cost: 643                               
                   14 CONNECT BY WITH FILTERING                          
                        6 VIEW BANKREC. Cost: 665 Bytes: 7,740,642 Cardinality: 99,239 CPU Cost: 124,660,113 IO Cost: 643                     
                             5 SORT UNIQUE Cost: 665 Bytes: 1,488,570 Cardinality: 99,239 CPU Cost: 119,007,497 IO Cost: 641                
                                  4 UNION-ALL           
                                       2 TABLE ACCESS FULL TABLE BANKREC.test_table Cost: 137 Bytes: 1,488,570 Cardinality: 99,238 CPU Cost: 31,045,760 IO Cost: 132      
                                       3 FAST DUAL Cost: 2 Cardinality: 1 CPU Cost: 7,271 IO Cost: 2      
                        13 HASH JOIN                     
                             7 CONNECT BY PUMP                
                             12 VIEW BANKREC. Cost: 665 Bytes: 2,580,214 Cardinality: 99,239 CPU Cost: 124,660,113 IO Cost: 643                
                                  11 SORT UNIQUE Cost: 665 Bytes: 1,488,570 Cardinality: 99,239 CPU Cost: 119,007,497 IO Cost: 641           
                                       10 UNION-ALL      
                                            8 TABLE ACCESS FULL TABLE BANKREC.test_table Cost: 137 Bytes: 1,488,570 Cardinality: 99,238 CPU Cost: 31,045,760 IO Cost: 132
                                            9 FAST DUAL Cost: 2 Cardinality: 1 CPU Cost: 7,271 IO Cost: 2
    The data being returned is based on a hierarchy up to 20 levels deep, with up to 10 items at any given depth.
    The problem I have is that this takes 2-3 seconds to return any given group's data.
    The query:
    select * from connect_by_testv4
    where acct_group = 1110777 /*Pick anything but 0 - as 0 returns all acct_ids which are in a group.
    Select a acct_group which exists*/
    The view:
    CREATE OR REPLACE FORCE VIEW CONNECT_BY_TESTv4
    (ACCT_GROUP, ACCT_ID)
    AS
    with account_groups
    as (select connect_by_root ba.acct_id orig_id, ba.acct_id, ba.acct_group
    from (select acct_id, acct_group, acct_type /*We add record 0 as this is the top level of the hierarchy*/
    from test_table b1
    union
    select 0, null,4 from dual) ba
    start with (ba.acct_type =4) /*acct_type =4- Group: List only the groups then find the children*/
    connect by ba.acct_group = prior ba.acct_id)
    select ag.orig_id acct_group, ag.acct_id
    from account_groups ag
    join test_table unio
    on ag.acct_id = unio.acct_id
    where unio.acct_type in (0,1,2) /*List all items which are not groups, or deleted - acct_type = 3=deleted*/
    The table structure:
    CREATE TABLE test_table (
      acct_id    INTEGER NOT NULL,
      field1     VARCHAR2(255) NOT NULL,
      field2     VARCHAR2(255) NOT NULL,
      field3     INTEGER NULL,
      acct_group INTEGER NULL,
      field5     CHAR(3) NOT NULL,
      acct_type  INTEGER NOT NULL,
      field6     INTEGER NULL,
      CONSTRAINT test_table_pk PRIMARY KEY (acct_id)
    );
    CREATE INDEX test_table_if3 ON test_table (acct_group ASC);
    CREATE INDEX test_table_if4 ON test_table (acct_type ASC);
    Anyone have any ideas how to make this faster?
    (The equivalent query on a different competitor's RDBMS takes 0.5 sec to return this same data, on a machine with the same I/O power.)
    It does seem a little odd that the CPU resource required is: CPU Cost: 203,799,167?
    I have tested with an IOT base table - no real performance benefit either.
    Test Data can be sent (In a CSV file - 3Mb) - email: [email protected].
    Thanks in advance,
    Dan.

    Hi All,
    Regarding the magic number 4 in:
    select 0, null,4 from dual) ba
    start with (ba.acct_type =4) /*acct_type =4- Group: List only the groups then find the children*/
    -- Can I add this data to the base table data? Unfortunately not, without making the application break.
    -- The union part will always be acct_id = 0, acct_type = 4, group_id = NULL (this will always be the case, never changing).
    1) Yes UNION /UNION ALL is being problematic. Table has 105,000 rows.
    2) There is an index on ACCT_ID/ACCT_GROUP/ACCT_TYPE - all of which are individual indexes.
    3) ACCT_ID - is a PK Index (Apologies for not including this in the post)
    4) ACCOUNT_GROUPS is a Common Table Expression (CTE): http://www.dba-oracle.com/t_with_clause.htm
    5) Indexes are present. Why oh Why cannot oracle join to a CONNECT BY when given input parameters on the view definition instead of to the materialised data-set ?
    A materialised view cannot be considered; this is mostly an Enterprise Edition feature.
    When running the same type of query on a different RDBMS System, the following:
    select * from <view_name> where acct_group = X
    This results in the query inserting acct_group into the middle of the CTE / connect-by expression, so it does a fast lookup inside the CONNECT BY clause, limiting the result set before performing the outer joins. This other RDBMS system only takes about 180 ms to perform this query, whereas the Oracle engine takes 2-3 seconds (as it materialises the CTE fully and does not insert the acct_group into the middle of the connect-by expression).
    This query only works efficiently in Oracle when converted to a user-defined function, e.g. specifying @acct_group as an input variable (a minimal sketch of such a function is at the end of this post):
    with account_groups
    as (select connect_by_root ba.acct_id orig_id, ba.acct_id, ba.acct_group
    from (select acct_id, acct_group, acct_type
    from bs_accts b1
    union
    select 0, null,4 from dual) ba
    start with (ba.acct_type =4 and acct_id = @acct_group)
    connect by ba.acct_group = prior ba.acct_id
    The performance of the view and the UDF should be the same; I have to say that Oracle's RDBMS engine in this case is generating bad plans for the CONNECT BY and failing badly at optimising the query being asked of it.
    If people want test data: email: [email protected]
    So, to be absolutely clear, the following performs well (returns data in about 422 ms):
    with account_groups
    as (select connect_by_root ba.acct_id orig_id, ba.acct_id, ba.acct_group
    from (select acct_id, acct_group, acct_type
    from bs_accts b1
    ) ba
    start with (ba.acct_type =4 and ba.acct_id = 1000001) /*UDF - uses @acct_group variable here*/
    connect by ba.acct_group = prior ba.acct_id)
    select ag.orig_id acct_group, ag.acct_id
    from account_groups ag
    join bs_accts unio
    on ag.acct_id = unio.acct_id
    where unio.acct_type in (0,1,2)
    and ag.orig_id = 1000001;
    Whereas the following returns data in about 2 seconds:
    with account_groups
    as (select connect_by_root ba.acct_id orig_id, ba.acct_id, ba.acct_group
    from (select acct_id, acct_group, acct_type
    from bs_accts b1
    ) ba
    start with (ba.acct_type =4)
    connect by ba.acct_group = prior ba.acct_id)
    select ag.orig_id acct_group, ag.acct_id
    from account_groups ag
    join bs_accts unio
    on ag.acct_id = unio.acct_id
    where unio.acct_type in (0,1,2)
    and ag.orig_id =1000001 ;
    * The above query generates a poor plan, due to what I believe is an Oracle engine optimisation deficiency.
    Edited by: user626167 on 01-Sep-2008 06:08
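    For reference, a minimal sketch of the UDF approach described above (hypothetical function name, written against the test_table definition from the first post; it simply pushes the group id into the START WITH clause and returns a ref cursor):

    CREATE OR REPLACE FUNCTION get_group_accounts (p_acct_group IN NUMBER)
      RETURN SYS_REFCURSOR
    AS
      l_rc SYS_REFCURSOR;
    BEGIN
      OPEN l_rc FOR
        with account_groups as (
          select connect_by_root ba.acct_id orig_id, ba.acct_id, ba.acct_group
            from (select acct_id, acct_group, acct_type from test_table) ba
           start with (ba.acct_type = 4 and ba.acct_id = p_acct_group)
         connect by ba.acct_group = prior ba.acct_id
        )
        select ag.orig_id acct_group, ag.acct_id
          from account_groups ag
          join test_table unio on ag.acct_id = unio.acct_id
         where unio.acct_type in (0, 1, 2);
      RETURN l_rc;
    END get_group_accounts;
    /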

  • EBS performance Question

    EBS: 12.1.1
    Platform: IBM AIX on POWER Systems (64-bit)
    1. The customer's PROD instance has 16 CPUs.
    2. There are more than 2,000 active users.
    3. The customer changed the OACore count to 2 and ran a LoadRunner test with 200 active users; some user logins failed.
    4. The customer wants to know how to configure the system so that EBS can support more than 2,000 active users.
    5. Architecture: 2-node RAC DB + 4 application servers.
    Thanks,
    Jackie
    Edited by: Jackie Han on Nov 22, 2012 12:56 AM

    Hi;
    No one can answer this on the forum without seeing your AWR, ASH, etc. reports.
    I suggest you review the link below first:
    https://blogs.oracle.com/stevenChan/entry/ebs_performance_patches << check Related Articles also
    Regard
    Helios

  • Performance Prob on 5600 TD256

    Two months ago I replaced my GeForce4 Ti4200 with the 5600 TD 256. I had some troubles (Windows auto-detect giving me fits), but I resolved them. I finally went to test my card's speed using 3DMark 2001SE and to my surprise my score dropped from 11000 or so to around 9300. I was shocked, so I did some tweaking and got the latest drivers, but nothing helped. It is not just that test: both UT2003 and Unreal 2, as well as some other games, skip periodically, and my frame rates are low. Even Max Payne, which is going on 2 years old, exhibits low frame rates at certain times. I have the image quality setting in the NVIDIA control panel set to blend, and I am not using any quality enhancers such as anti-aliasing. Is this all I can expect from this card? If not, someone help me please! My specs are as follows.
    Mobo: Asus P4PE
    Processor: Northwood P4 2.4 GHz
    Video card: GeForce FX 5600 TD256
    RAM: 512 MB Kingston ValueRAM DDR @ 333 MHz
    HDD: 20 GB Western Digital (don't know the model)
    OS: Windows 98SE
    Sound: SBLive Value
     

    If you compare 3DMark scores with AA enabled, the FX5600 will have an advantage, and personally I only care about performance with AA.
    But in "brute force" the Ti4200 is faster, that's true.
    The FX5600 has DX9 support, so 3DMark 2003 will run better on it, and it has better picture quality because of 128-bit internal colour.

  • EBS 12.1.3 unsuccessful logins trace

    Hi there
    I would like to trace unsuccessful login attempts (Application User) to an IP address. EBS has a standard report for unsuccessful logins but only has username and time of the unsuccessful login.
    Any suggestions on how to trace this?
    Thank you
    Brindly

    user9050262 wrote:
    Hi there
    I would like to trace unsuccessful login attempts (Application User) to an IP address. EBS has a standard report for unsuccessful logins but only has username and time of the unsuccessful login.
    Any suggestions on how to trace this?
    Thank you
    Brindly
    Please see old threads on this same topic/discussion.
    Re: EBS performance prob
    https://forums.oracle.com/forums/search.jspa?threadID=&q=Trace+AND+IP+AND+Address&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    https://forums.oracle.com/forums/search.jspa?threadID=&q=unsuccessful+AND+logins&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    Thanks,
    Hussein

  • Is custom views effect the performance of EBS?

    We are using EBS R12.0.6, database 10g R2.
    Our developers are writing some views to summarize the data, and those views are used in custom reports.
    Does this affect EBS performance?

    Technically, even doing SELECT 1 FROM DUAL will affect performance. The impact may be a picosecond, but it will certainly be there.
    My point is that it is very difficult to answer this question without knowing the kind of views and the queries that are run on them. It is best to analyze the report request to identify the performance impact (if any).
    "views to summarize the data" - Keep in mind that for some summary-type reports it might be better to write materialized views instead of regular views, or do the summarization on a data warehouse instance or a reporting instance if you have one.
    Sandeep Gandhi
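    For illustration, a summary materialized view refreshed off-peak might look something like this; the MV name and grouping are made up, and the base query would be whatever the report actually needs (RA_CUSTOMER_TRX_ALL is just used as a familiar AR table):

    CREATE MATERIALIZED VIEW xx_ar_invoice_summary_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
    AS
      SELECT trx.bill_to_customer_id,
             TRUNC(trx.trx_date, 'MM') AS invoice_month,
             COUNT(*)                  AS invoice_count
        FROM ra_customer_trx_all trx
       GROUP BY trx.bill_to_customer_id, TRUNC(trx.trx_date, 'MM');

    -- Refresh from a scheduled job during off-peak hours ('C' = complete refresh):
    EXEC DBMS_MVIEW.REFRESH('XX_AR_INVOICE_SUMMARY_MV', 'C');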

  • Oracle EBS Application performance tuning course

    May I know whether an Oracle EBS R12 application performance tuning course is available from Oracle University?

    Hi Azar,
    As far as I know there is no dedicated EBS performance tuning course. As you know, the complete list of Oracle training courses is available here:
    Oracle Certification All Exam | Certifications | Oracle
    As EBS is involved in multi Tier architecture, I recommend you to have a look at:
    Oracle Certification Program: Oracle Technology and Applications Certification
    Hope this helps!! and Best of Luck mate!
    Thanks &
    Best Regards

  • K9A2 Neo + AMD X2 6000+ performance problem

    Hello!
    I have this problem which I wonder whether it has to do with the chipset versus the graphics card.
    I used to have an AMD X2 3800+, ASUS A8N5X nForce4 mobo and 2 GB PC3200 400 MHz RAM, and I exchanged those for an AMD X2 6000+, MSI K9A2 Neo and 2 GB Corsair XMS2 800 MHz. My graphics card is an NVIDIA GF8000GTS running under Win XP 32-bit.
    My problem comes down to the fact that this new gear runs like my old one, if not worse at times. A friend of mine who has an identical system can play on very high settings without any stutter, while I have to play on normal, exactly as on my older system. The difference is that he is using an nForce motherboard. I am now wondering if that has such an impact on performance, having an NVIDIA graphics card on this MSI board with an ATI chipset.
    When I bought it today I asked for nForce, but I guess he forgot after picking it up and handing it to me, and I of course forgot to double-check. I thought it was nForce7 when I saw the "770" marking. I also installed some AMD CPU drivers that came with the disc; I don't know what they are for and have never seen that before. I also skipped installing the "ATI SYSTEM DRIVERS" as I figured I don't use an ATI graphics card. I don't know if I should?
    I will exchange this for an nForce mobo, but I just wanted to check with you whether this really can have such an impact on performance.
    I have everything else updated to the latest drivers, with XP SP3 up to date.
    Hope for some answer. Thank you in advance.
    Regards
    Alex

    My friend and I have the same gear apart from the nForce mobo.
    I have a 500 W PSU and it should do, as another friend of mine has a 600 W PSU running SLI with a faster CPU, more RAM, etc. When I installed this I of course formatted and made a fresh install - unless you mean that I need to install the ATI system drivers first? I don't know if it's 30 amps on the PSU though; I need to check that.
    To me it just sounds saner to use an NVIDIA chipset together with an NVIDIA graphics card, and an ATI chipset with an ATI card. But as you say you have no performance problems with that combination, I might be wrong. However, I am comparing against a friend with the same gear, playing online at the same time: he can go way up in settings where I can't. I even stutter on normal settings, and he doesn't on high settings.
    Gotta check the 30 amps now.
    Thanks for your ideas. You're probably right. Will check.
    Regards
    Alex
