10g performance issue + RAC
Hi,
we are running Oracle E-Business Suite 11i with a 10g database and RAC.
Please let us know what can be tuned; the following statistics are from an AWR report:
Snap Id Snap Time Sessions Cursors/Session
Begin Snap: 2904 20-Jan-09 08:30:17 362 55.3
End Snap: 2905 20-Jan-09 09:30:43 525 79.0
Elapsed: 60.42 (mins)
DB Time: 214.04 (mins)
Report Summary
Cache Sizes
Begin End
Buffer Cache: 2,992M 2,896M Std Block Size: 8K
Shared Pool Size: 1,008M 1,104M Log Buffer: 14,352K
Load Profile
Per Second Per Transaction
Redo size: 95,171.90 7,079.15
Logical reads: 83,587.58 6,217.48
Block changes: 696.50 51.81
Physical reads: 163.22 12.14
Physical writes: 18.76 1.40
User calls: 429.26 31.93
Parses: 156.87 11.67
Hard parses: 5.60 0.42
Sorts: 146.05 10.86
Logons: 0.88 0.07
Executes: 991.77 73.77
Transactions: 13.44
% Blocks changed per Read: 0.83 Recursive Call %: 83.42
Rollback per transaction %: 39.30 Rows per Sort: 10.42
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 99.81 In-memory Sort %: 100.00
Library Hit %: 98.77 Soft Parse %: 96.43
Execute to Parse %: 84.18 Latch Hit %: 99.90
Parse CPU to Parse Elapsd %: 17.28 % Non-Parse CPU: 99.64
Shared Pool Statistics
Begin End
Memory Usage %: 75.51 84.38
% SQL with executions>1: 92.27 86.87
% Memory for SQL w/exec>1: 90.30 88.03
Top 5 Timed Events
Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
CPU time 5,703 44.4
db file sequential read 399,573 1,732 4 13.5 User I/O
gc current block 2-way 380,370 846 2 6.6 Cluster
gc cr grant 2-way 202,084 325 2 2.5 Cluster
gc cr multi block request 273,702 315 1 2.5 Cluster
RAC Statistics
Begin End
Number of Instances: 2 2
Global Cache Load Profile
Per Second Per Transaction
Global Cache blocks received: 207.45 15.43
Global Cache blocks served: 201.99 15.02
GCS/GES messages received: 491.52 36.56
GCS/GES messages sent: 592.97 44.11
DBWR Fusion writes: 3.18 0.24
Estd Interconnect traffic (KB) 3,487.35
Global Cache Efficiency Percentages (Target local+remote 100%)
Buffer access - local cache %: 99.56
Buffer access - remote cache %: 0.25
Buffer access - disk %: 0.19
Global Cache and Enqueue Services - Workload Characteristics
Avg global enqueue get time (ms): 0.6
Avg global cache cr block receive time (ms): 3.1
Avg global cache current block receive time (ms): 3.4
Avg global cache cr block build time (ms): 0.0
Avg global cache cr block send time (ms): 0.0
Global cache log flushes for cr blocks served %: 12.5
Avg global cache cr block flush time (ms): 3.4
Avg global cache current block pin time (ms): 0.0
Avg global cache current block send time (ms): 0.0
Global cache log flushes for current blocks served %: 0.0
Avg global cache current block flush time (ms): 1.0
Global Cache and Enqueue Services - Messaging Statistics
Avg message sent queue time (ms): 0.1
Avg message sent queue time on ksxp (ms): 1.5
Avg message received queue time (ms): 0.0
Avg GCS message process time (ms): 0.0
Avg GES message process time (ms): 0.0
% of direct sent messages: 46.02
% of indirect sent messages: 49.25
% of flow controlled messages: 4.73
Regards
Arora
Achyot,
Apart from the comments by Justin a few other general comments
- RAC is mainly suited for OLTP
- if your OLTP application doesn't scale in a single instance configuration, RAC will make this worse. You may want to tune your application without RAC first (set CLUSTER_DATABASE to FALSE)
- You need to find out whether the delays result from a badly configured interconnect. This will show up as events starting with 'gc', above all 'gc cr request'
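For a quick look at where those delays accumulate, something along these lines may help (a sketch; it assumes SELECT access to the V$ views and ALTER SYSTEM privilege):

```
-- Cluster-class wait events since instance startup (10g exposes WAIT_CLASS):
SELECT event,
       total_waits,
       time_waited_micro / 1e6 AS seconds_waited
FROM   v$system_event
WHERE  wait_class = 'Cluster'
ORDER  BY time_waited_micro DESC;

-- To test the application without RAC overhead (requires an instance restart):
ALTER SYSTEM SET cluster_database = FALSE SCOPE = SPFILE SID = '*';
```

If the 'gc' events dominate even with a modest interconnect traffic figure, look at the interconnect configuration before touching the application.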
Hth
Sybrand Bakker
Senior Oracle DBA
Similar Messages
-
Hi,
We were using Oracle 9i on Solaris 5.8 and it was working fine, with some minor performance issues. We reinstalled the server with the new Solaris 5.10 and installed Oracle 10g.
Now we are experiencing some performance issues in Oracle 10g. The issue arises when accessing the database through WebSphere 5.1.
We have analyzed the schema and rebuilt the indexes; the SGA is 4.5 GB, the PGA is 2.0 GB, and the Solaris box has 16 GB of RAM. We also have some materialized views (possibly these cause performance issues due to refresh - not sure).
I have also changed some parameters in the init.ora file, such as query_rewrite_integrity = STALE_TOLERATED and open_cursors = 1500.
Could it be something to do with the driver through which the data is accessed? I guess it is not utilizing the indexes on the tables.
Can anyone please suggest what the issue could be?<p>There are a lot of changes to the optimizer in the upgrade from 9i to 10g, and you need to be aware of them. There are also a number of changes to the default stats collection mechanism, so after your upgrade your statistics (and hence execution paths) could change dramatically.
</p>
<p>
Greg Rahn has a useful entry on his blog about stats collection, and the blog also points to an Oracle white paper which will give you a lot of ideas about where the optimizer changes are - which may help you spot your critical issues.
</p>
<p>Otherwise, follow triggb's advice about using Statspack to find the SQL that is the most expensive - it's reasonably likely to be this SQL that has changed execution plans in the upgrade.
</p>
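<p>As a concrete example of the stats-mechanism difference: 10g's default METHOD_OPT is 'FOR ALL COLUMNS SIZE AUTO' (9i used SIZE 1), so histograms can appear where there were none. A sketch of reverting the default while you investigate - the APPS schema name here is just an example:</p>

```
-- Revert the 10g default histogram behaviour to the 9i style:
EXEC DBMS_STATS.SET_PARAM('METHOD_OPT', 'FOR ALL COLUMNS SIZE 1');

-- Then regather statistics for the affected schema (example schema name):
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APPS', cascade => TRUE);
```

<p>Treat this as a diagnostic step, not a permanent setting - if plans improve, you know histograms are implicated.</p>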
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk -
Disco Plus 10g Performance Issue When Moving Table Headings to Page Items
Hello,
We are experiencing a performance anomaly in Discoverer Plus (the latest release: 10.1.2.48.18), and are wondering if anyone else out there has noticed similar behavior. We're having trouble identifying the cause, and have a TAR opened with Oracle Support, but they are not able to reproduce our issue and have been slow to offer suggestions or help.
The issue happens when a user drags a table heading up into the page items area in a worksheet. When we move a heading up to the page items it works quickly one time, but then, every time after that, it takes anywhere from a few minutes up to HOURS to move any additional headings up to the page items area.
The problem occurs on approximately 3 out of every 5 of our workbooks. We've tried different sizes of workbooks, with large (several million records) and small (a few thousand records) datasets, and none of this seems to affect the issue at hand. The problem only occurs in Discoverer Plus, not Desktop. We've also spent some time researching caching and memory configuration, and believe that we have set up all of the recommended options for maximum performance on our systems.
I would just like to know if anyone else out there in the community has experienced the same issue, and if anyone has any advice for us.
Thank you,
-Scott
Hi,
I found the following in Best Practices of Oracle Discoverer 10g by
Mike Donohue - Product Management - Oracle Business Intelligence
Performance Parameters and Page Items
Page Items provide very responsive, interactive manipulation of data
At a cost:
Forces retrieval of all Detail values
Incremental increases of memory as indices are built
Parameters reduce result set, improve performance
Use Page Items only when needed: 2 to 3, with < 12 values each
Performance Parameters and Page Items - example
Loc(3), Dept(10), ProdType(50), Prod(1,000), Date(365)
547,500,000 potential rows/combinations (3*10*50*1000*365)
Use parameters for Loc, Dept, ProdType, and 90 day date range
90,000 potential rows/combinations (1000*90)
Reduce data retrieved by ~ 6000 X
Improve performance by several orders of magnitude
I hope this will help you! -
Upgrade 9i to 10g Performance Issue
Hi All,
The DBA team recently upgraded the database from 9i to 10g (Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64-bit).
There is a process as follows
Mainframe --> Flat file --> (Informatica) --> Oracle DB --> (processing through SPs) --> Report
The whole process normally used to take 1.5 - 2 hrs which is now taking 4-5 hrs after 9i to 10g upgrade.
I tried searching on Google but could not find any detailed procedure to root out the cause. Below is the link of such an instance-
http://jonathanlewis.wordpress.com/2007/02/02/10g-upgrade/
Can someone suggest where to start digging to find out the reason behind the slowdown (if possible, please tell the detailed procedure or reference)?
It should be noted that there was no change other than the database upgrade. Also, for about 20 days after the upgrade the whole process took nearly 1 hr (which was a pleasant gain), but afterward it suddenly shot up to 4-5 hrs and has stayed there since.
Thanks.
Without more information (Statspack reports, AWR reports, or at least query plans), it's going to be exceptionally hard to help you.
There are dozens, more likely hundreds, of parameters which change between 9i and 10g, any one of which could cause an individual query to perform worse. Plus, obviously something changed after the process had been running quickly for 20 days, so there is some change on your side that needs to be identified and accounted for. There is no way for anyone to guess which of the hundreds of parameter and behavioral changes might have contributed to the problem, nor for us to guess what might have changed after 20 days in your database.
If you can at least provide Statspack/AWR reports, we may be able to help you out with a general tuning exercise. If you can identify what changed between the time the process was running well and the time it stopped running well, we may be able to help there as well.
Justin -
Hi all,
In sunfire v890 we have installed oracle 10g release 2 on solaris 10.
prstat -a command shows :
NPROC USERNAME SIZE RSS MEMORY TIME CPU
105 root 9268M 6324M 20% 1:21:57 0.4%
59 oracle 24G 22G 71% 0:04:33 0.1%
2 nobody4 84M 69M 0.2% 0:11:32 0.0%
2 esbuser 13M 9000K 0.0% 0:00:46 0.0%
1 smmsp 7560K 1944K 0.0% 0:00:00 0.0%
4 daemon 12M 7976K 0.0% 0:00:00 0.0%
and top utility shows :
last pid: 8639; load avg: 0.09, 0.09, 0.09; up 2+06:05:29 17:07:50
171 processes: 170 sleeping, 1 on cpu
CPU states: 98.7% idle, 0.7% user, 0.7% kernel, 0.0% iowait, 0.0% swap
Memory: 32G phys mem, 22G free mem, 31G swap, 31G free swap
Therefore from prstat we gather that the memory used by oracle is 71%,
whereas top says 31.25% is used.
Which one is true in this scenario?
Shall we go ahead and trust the top utility?
Thanks in advance.
Hi Darren,
The main thing is: prstat -a shows the oracle user occupying 70%, while top shows 22 GB of memory free out of 32 GB. That means 10 GB was occupied by all users, which works out to 31.25% - i.e. top shows all users together occupying only 31.25%.
Right. That's all memory in use, correct? From your first message I thought you meant it said that was the amount used by Oracle.
It's easy to calculate total memory in use.
It's hard to calculate memory in use by a subset of processes (perhaps those owned by a particular user).
but the prstat -t command shows 70% occupied by oracle.
Which one should I believe?
The prstat command showing memory in use by a user will be incorrect because it does not calculate shared pages properly.
As far as I am aware, 'top' has no similar display.
Darren -
AS + Oracle DB 10g performance issue
Hi,
My company is currently exploring Oracle Portal and Discoverer. I have a question and would really appreciate if anyone can help me.
I install the AS infrastructure and the database in the same machine. Currently when I log in to the portal for the first time of the day, it is always very slow. If I install the IAS and database in separate machines, will this give a significant improvement in the response time for my portal? What are the implications of installing this in 2 machines instead of 1?
Thanks :)
Two things -
With Portal and Discoverer, your AS setup probably includes both mid-tier and Infrastructure installations (is this a BI & Forms type mid-tier?). The Infrastructure commonly installs its own database for metadata repositories etc. So this machine has 2 x AS instances and 2 x database servers, correct? An awful lot for a single machine, I'd guess.
Do you restart the machine first thing each day? The first time a web app loads it has startup latency - Monday morning, sort of. There are a lot of components involved and they all need to load some stuff into memory before they can do any work. This takes time (disk -> memory is very, very slow, for one thing).
2 machines means a more complex setup but hopefully a 'quicker' solution.
Depending on requirements, you may want one machine for customer/your own database, one machine for AS database and one for AS mid-tier.... What kind of platform are you planning on (hardware + os)? -
Database migrated from Oracle 10g to 11g Discoverer report performance issu
Hi All,
We now have a Discoverer report performance issue: the report keeps running and never completes since the database was upgraded from 10g to 11g.
Against the 10g database the report worked fine, but the same report does not work in 11g.
I changed the query: I passed the date format TO_CHAR("DD-MON-YYYY" and removed the NVL & TRUNC functions from the existing query.
The report now works fine directly against the 11g database back end, but when I use the same query in Discoverer it does not work and the report keeps running.
Please advise.
Regards,
Pl post exact OS, database and Discoverer versions. After the upgrade, have statistics been updated? Have you traced the Discoverer query to determine where the performance issue is?
How To Find Oracle Discoverer Diagnostic and Tracing Guides [ID 290658.1]
How To Enable SQL Tracing For Discoverer Sessions [ID 133055.1]
Discoverer 11g: Performance degradation after Upgrade to Database 11g [ID 1514929.1]
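To trace a specific Discoverer session, something along these lines works (a sketch; the SID and serial# shown are placeholders you would look up in V$SESSION first):

```
-- Enable extended SQL trace (with wait events) for the Discoverer session:
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);

-- ... reproduce the slow report, then switch tracing off:
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
```

The resulting trace file can then be run through tkprof to see where the time goes.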
HTH
Srini -
DB Performance Issues after 10g Upgrade in EBS Instance
We have upgraded our Database from 9i to 10g as first part of EBS 11.5.9 to 11.5.10.2 upgrade. Currently our production is running on 11.5.9 apps with 10g DB.
We are facing performance problems now. One of them is: a value set query is not using a function-based index when fired from the front end, but the same query, when taken from the tkprof'd SQL trace file and executed from SQL*Plus, uses all the proper indexes. We cannot work out the cause of this.
Has anyone faced the same kind of issue before? Please suggest.
thanks,
Raj.
Make sure you have all of the recommended performance patches for 11.5.9, and gather stats for SYS and SYSTEM in the following manner:
Oracle E-Business Suite Recommended Performance Patches
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=244040.1
Collecting Statistics with Oracle Apps 11i
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=368252.1
execute dbms_stats.unlock_schema_stats('SYS');
execute dbms_stats.unlock_schema_stats('SYSTEM');
exec dbms_stats.gather_schema_stats('SYSTEM',options=>'GATHER', estimate_percent => 100, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
exec dbms_stats.gather_schema_stats('SYS',options=>'GATHER', estimate_percent => 100, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
exec dbms_stats.gather_fixed_objects_stats();
commit;
exec dbms_stats.DELETE_TABLE_STATS('SYS','X$KCCRSR');
exec dbms_stats.LOCK_TABLE_STATS('SYS','X$KCCRSR');
commit;
The last 3 commands resolve problems with RMAN, in case you are using it.
Rman Backup is Very Slow selecting from V$RMAN_STATUS
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=375386.1
Poor performance when accessing V$RMAN_BACKUP_JOB_DETAILS
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=420200.1
Troubleshooting Oracle Applications Performance Issues
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=169935.1
Debugging General Performance Issues with Oracle Apps
http://blogs.oracle.com/schan/newsItems/departments/optimizingPerformance/2007/05/18#a1548
Performance Tuning the Apps Database Layer
http://blogs.oracle.com/schan/newsItems/departments/optimizingPerformance/2007/05/17#a1562
Preventing Apps 11i Performance Issues in Four Steps
http://blogs.oracle.com/schan/newsItems/departments/optimizingPerformance/2007/05/21#a1566 -
Performance issues executing process flows after upgrading db to 10G
We have installed OWF 2.6.2; initially our database was at 9.2. Last week we upgraded the database to 10g, and process flow executions are taking a lot longer - from 1 minute to 15 minutes.
Any ideas anyone what could be the cause of this performance issue?
Thanks,
Yanet
Hi,
The Oracle 10g database behaves differently with respect to table and index statistics. So check these, and check whether the mappings update these statistics at the right moments with respect to the ETL process and at the right interval.
Also, check your generated sources for how statistics are gathered (dbms_stats.gather...). Does an index that might play a vital role in Oracle9i get new statistics, or only the table? Or only a table whose row count was doubled by this mapping?
You can always take matters into your own hands by letting OWB NOT generate the source for gathering statistics, and calling your own procedure in a post-mapping.
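Such a post-mapping procedure might gather table and index statistics in one call; a sketch (the schema and table names here are just examples):

```
-- Gather statistics for one target table and, via CASCADE, its indexes:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'OWB_TGT', tabname => 'SALES_FACT', cascade => TRUE);
```

Calling this per target table after each load keeps the optimizer's view of row counts current.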
Regards,
André -
Performance issue with form after 10g upgrade
Hi Team,
Last week we have upgraded our systems to 10g database.
Ever since the upgrade there have been huge performance issues with the custom forms, and this is causing a major setback to our business. Before the upgrade the forms were running without any performance issues.
Can anyone please help me find the reason behind the performance issue (maybe a TAR or performance tuning)?
Many Thanks in Advance.
Regards
Kumar
Like Jan said,
You must supply more information so we can help you - for example, where does the degradation happen? In processing? In navigation? In forms loading? Where?
You may also do a little test: create a one-button form, just a canvas and a button, and include a message in the WHEN-BUTTON-PRESSED trigger.
run it and see what happens.
Tony -
Creating datafile performance issues
Hi guys,
I'm using Oracle RAC 10g with 3 nodes, and I needed to create a new datafile in the production environment. I then had some performance issues. Do you have any idea what the cause of this is?
CREATE TABLESPACE "TBS_TDE_CMS_DATA"
DATAFILE
'+RDATA' SIZE 1000M REUSE AUTOEXTEND ON NEXT 1000M MAXSIZE 30000M
LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
Any hint would be helpful.
thanks.
BrunoSales wrote:
Hi guys,
I'm using Oracle RAC 10g with 3 nodes, and I needed to create a new datafile in the production environment. I then had some performance issues. Do you have any idea what the cause of this is?
CREATE TABLESPACE "TBS_TDE_CMS_DATA"
DATAFILE
'+RDATA' SIZE 1000M REUSE AUTOEXTEND ON NEXT 1000M MAXSIZE 30000M
LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
Any hint would be helpful.
thanks.why are creating 1GB of initial extent? I think is too big. Remember whenever you add datafile to database, oracle first formats the complete datafile. So doing this can have performance issue as it depends upon the datafile size. check if you face any issue in I/O or check if there are any spikes in I/O.
You can try adding a datafile with default values and check how much time it takes.
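For example, a smaller initial size with autoextend keeps the up-front formatting cost low (a sketch, using the tablespace and disk-group names from the post):

```
-- Smaller initial allocation; the file grows on demand instead of
-- the instance formatting 1000M up front:
ALTER TABLESPACE tbs_tde_cms_data
  ADD DATAFILE '+RDATA' SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 30000M;
```

The trade-off is more frequent extension during loads, so pick the NEXT size to match your growth rate.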
Performance Issue with sql query
Hi,
My db is 10.2.0.5 with RAC on ASM, Cluster ware version 10.2.0.5.
With bsoa table as
SQL> desc bsoa;
Name Null? Type
ID NOT NULL NUMBER
LOGIN_TIME DATE
LOGOUT_TIME DATE
SUCCESSFUL_IND VARCHAR2(1)
WORK_STATION_NAME VARCHAR2(80)
OS_USER VARCHAR2(30)
USER_NAME NOT NULL VARCHAR2(30)
FORM_ID NUMBER
AUDIT_TRAIL_NO NUMBER
CREATED_BY VARCHAR2(30)
CREATION_DATE DATE
LAST_UPDATED_BY VARCHAR2(30)
LAST_UPDATE_DATE DATE
SITE_NO NUMBER
SESSION_ID NUMBER(8)
The query
UPDATE BSOA SET LOGOUT_TIME =SYSDATE WHERE SYS_CONTEXT('USERENV', 'SESSIONID') = SESSION_ID
is taking a lot of time to execute, and in the AWR reports it is also at the top in
1. SQL Order by elapsed time
2. SQL order by reads
3. SQL order by gets
So I am trying to find a way to solve the performance issue, as the application is slow, especially during login and logout.
I understand that the function in the where condition causes a FTS, but I cannot think what other parts to look at.
Also:
SQL> SELECT COUNT(1) FROM BSOA;
COUNT(1)
7800373
The explain plan for "UPDATE BSOA SET LOGOUT_TIME =SYSDATE WHERE SYS_CONTEXT('USERENV', 'SESSIONID') = SESSION_ID" is
{code}
PLAN_TABLE_OUTPUT
Plan hash value: 1184960901
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 1 | 26 | 18748 (3)| 00:03:45 |
| 1 | UPDATE | BSOA | | | | |
|* 2 | TABLE ACCESS FULL| BSOA | 1 | 26 | 18748 (3)| 00:03:45 |
Predicate Information (identified by operation id):
2 - filter("SESSION_ID"=TO_NUMBER(SYS_CONTEXT('USERENV','SESSIONID')))
{code}
Hi,
There are also triggers before update and AUDITS on this table.
CREATE OR REPLACE TRIGGER B2.TRIGGER1
BEFORE UPDATE
ON B2.BSOA REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
:NEW.LAST_UPDATED_BY := USER ;
:NEW.LAST_UPDATE_DATE := SYSDATE ;
END;
CREATE OR REPLACE TRIGGER B2.TRIGGER2
BEFORE INSERT
ON B2.BSOA REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
:NEW.CREATED_BY := USER ;
:NEW.CREATION_DATE := SYSDATE ;
:NEW.LAST_UPDATED_BY := USER ;
:NEW.LAST_UPDATE_DATE := SYSDATE ;
END;
And also there is an audit on this table
AUDIT UPDATE ON B2.BSOA BY ACCESS WHENEVER SUCCESSFUL;
AUDIT UPDATE ON B2.BSOA BY ACCESS WHENEVER NOT SUCCESSFUL;
And the sessionid column in BSOA has height balanced histogram.
When I create an index I get the following error. As I am on 10g I can't use DDL_LOCK_TIMEOUT; I may have to wait for the next downtime.
SQL> CREATE INDEX B2.BSOA_SESSID_I ON B2.BSOA(SESSION_ID) TABLESPACE B2 COMPUTE STATISTICS;
CREATE INDEX B2.BSOA_SESSID_I ON B2.BSOA(SESSION_ID) TABLESPACE B2 COMPUTE STATISTICS
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified
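One hedged option, assuming Enterprise Edition: an ONLINE build avoids the blocking table lock of a normal CREATE INDEX, so it can often succeed on a busy table (it still needs two very brief lock windows at the start and end):

```
-- ONLINE build: concurrent DML on BSOA can continue while the index is built
CREATE INDEX b2.bsoa_sessid_i ON b2.bsoa(session_id)
  TABLESPACE b2 ONLINE COMPUTE STATISTICS;
```

With 7.8M rows the build will take a while, but it removes the need to wait for a downtime window.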
Thanks -
Performance issues with LOV bindings in 3-tier BC4J architecture
We are running BC4J and JClient (Jdeveloper 9.0.3.4/9iAS 9.0.2) in a 3-tier architecture, and have problems with the performance.
One of our problems are comboboxes with LOV bindings. The view objects that provides data for the LOV bindings contains simple queries from tables with only 4-10 rows, and there are no view links or entity objects to these views.
To create the LOV binding and to set the model for the combobox takes about 1 second for each combobox.
We have tried most of tips in http://otn.oracle.com/products/jdev/tips/muench/jclientperf/index.html, but they do not seem to help on our problem.
The performance is OK (if not great) when the same code is running as 2-tier.
Does anyone have any good suggestions?
I can recommend that you look at the following two bugs in Metalink: Bug 2640945 and Bug 3621502.
They are related to the disabling of the TCP socket-level acknowledgement which slows down remote communications for EJB components using ORMI (the protocol used by Oracle OC4J) to communicate between remote EJB client and server.
A BC4J Application Module deployed as an EJB suffers this same network latency penalty due to the TCP acknowledgement.
A customer sent me information (which you'll see there as part of Bug# 3621502) like this on a related issue:
We found our application runs very slow in 3-Tier mode (JClient, BC4J deployed
as EJB Session Bean on 9iAS server 9.0.2 enterprise edition). We spent a lot
of time to tune up our codes but that helped very little. Eventually, we found
the problem seemed to happen on TCP level. There is a 200ms delay in TCP
level. After we read some documents about Nagle Algorithm, we disabled a
registry key (TcpDelAckTicks) in windows2000 on both client and server. This
makes our program a lot faster.
Anyway, we think we should provide our clients a better solution other than
changing windows registry for them, for example, there may be a way to disable
that Nagle's algorithm through java.net.Socket.setTcpNoDelay(true), in BC4J,
or anywhere in our codes. We have not figured out yet.
Bug 2640945 was fixed in Oracle Application Server 10g (v9.0.4) and it now disables this TCP Acknowledgement on the server side in that release. In the BugDB, I see backport patches available for earlier 9.0.3 and 9.0.2 releases of IAS as well.
Bug 3621502 is requesting that that same disabling also be performed on the client side by the ORMI code. I have received a test patch from development to try out, but haven't had the chance yet.
The customer's workaround in the interim was to disable this TCP Acknowledgement at the OS level by modifying a Windows registry setting as noted above.
See Also http://support.microsoft.com/default.aspx?kbid=328890
"New registry entry for controlling the TCP Acknowledgment (ACK) behavior in Windows XP and in Windows Server 2003" which documents that the registry entry to change disable this acknowledgement has a different name in Windows XP and Windows 2003.
Hope this info helps. It would be useful to hear back from you on whether this helps your performance issue. -
Performance issues with apex reports in version 3.1
Hello All,
I am using APEX 3.1 on Oracle 10g.
I am facing performance issues with APEX. I am generating interactive reports, and the number of records is huge - running to 30,000-40,000 records - and the report takes almost 30 minutes.
How can I improve the performance of this kind of report? I am using APEX collections.
How does APEX work in terms of retrieving the records?
Please let me know .
Thanks/kumar
Edited by: kumar73 on Jun 18, 2010 10:21 AM
Hello Tony,
The following are the sequence of steps to run the test case.
Note:- All the schemas , tables and variables are populated from database.
From Schema and Relations tab choose the following:
1) Select P3I2008Q4 as schema.
2) Choose Relation as query path.
3) Select ECLA, ECLB, MTAB as relations.
From Variables choose the following:
4) Choose the variables AGE_SEXA,CLODESCA,ALCNO from ECLA relation.
5) Choose the variables AGE_SEXB, ALCNO, CLODESCB from ECLB relation.
6) Choose the variables EXPNAME, ALCNO, COST_, COST from MTAB relation.
From Conditions: click the Run Report button; this generated a standard report (total no. of records in the report: 30150).
Click on the Interactive Report button to generate an interactive report. (An error occurred.)
We are using a return SQL statement in generating the standard report, and collections for the interactive report.
thanks/kumar -
Using Reference Cursor Performance Issue in Report
Hi,
Are reference cursors supposed to be faster than a normal query? The reason I ask is that I am using a reference cursor query in the data model and it has a performance issue in the report: it takes quite a while longer to run than if I just run the same reference cursor query in SQL*Plus. The difference is significantly big. Any input is very much appreciated!
Thanks,
Marilyn
From Metalink, this is bug 4372868 on 9.0.4.x. It was fixed in 10.1.2.0.2 and does not have a backport for any 9.0.4 version.
Also the 9.0.4 version is already desupported. Please see the note:
Note 307042.1
Topic: Desupport Notices - Oracle Products
Title: Oracle Reports 10g 9.0.4 & 9.0.4.x
Action plan:
If you are still on a 9.0.4.x version of Oracle Reports and have no plan yet to migrate to the 10.1.2.0.2 version, take the same query you are using in your reference cursor and use it as a plain SQL query in your report's data model.