Performance problem in application
Hi,
I have some serious performance problems with my application. When I check the database for wait events I see the following:
# Event Total_Waits Time_Waited
1 Data file init write 1794 9077
2 local write wait 24 63
3 read by other session 4718 3460
4 db file sequential read 1864619 413008
5 db file scattered read 178020 80039
6 db file single write 116 2
7 db file parallel read 52 179
8 direct path read 54476 4824
9 direct path read temp 93568 27171
10 direct path write 18521 960
11 direct path write temp 13999 226
Buckets for sequential read
# Bucket Wait_Count
1 1 1441402
2 2 39998
3 4 78500
4 8 187616
5 16 78683
6 32 26376
7 64 8488
8 128 1683
9 256 1742
10 512 126
11 1024 4
When I check the history details for the session I have the following:
SAMPLE_ID SESSION_STATE BLOCKING_SESSION BLOCKING_SESSION_STATUS EVENT EVENT_ID EVENT# P2 P3TEXT P3 WAIT_CLASS WAIT_CLASS_ID WAIT_TIME TIME_WAITED
26571178 ON CPU NOT IN WAIT 1 0 144 0
26571176 WAITING NO HOLDER db file sequential read 2652584166 116 312822 blocks 1 User I/O 1740759767 0 255583
26571175 WAITING NO HOLDER db file sequential read 2652584166 116 3076763 blocks 1 User I/O 1740759767 0 251864
26571174 ON CPU NOT IN WAIT 1 0 124 0
26571171 WAITING 111 VALID log file sync 1328744198 115 0 0 Commit 3386400367 0 555324
26571169 WAITING NO HOLDER db file sequential read 2652584166 116 3076834 blocks 1 User I/O 1740759767 0 36578
26571166 WAITING UNKNOWN Data file init write 2326919048 9 256 timeout 4294967295 User I/O 1740759767 0 102445
26571165 ON CPU NOT IN WAIT 312 blocks 1 6079 0
26571164 ON CPU NOT IN WAIT 1 0 124 0
26571163 WAITING UNKNOWN read by other session 3056446529 67 198938 class# 1 User I/O 1740759767 0 215
26571161 WAITING UNKNOWN read by other session 3056446529 67 3073940 class# 1 User I/O 1740759767 0 299513
26571160 WAITING NO HOLDER db file sequential read 2652584166 116 3073906 blocks 1 User I/O 1740759767 0 158991
26571158 WAITING 111 VALID log file sync 1328744198 115 0 0 Commit 3386400367 0 229301
26571155 WAITING NO HOLDER db file sequential read 2652584166 116 3073109 blocks 1 User I/O 1740759767 0 124131
26571152 WAITING NO HOLDER db file sequential read 2652584166 116 3075524 blocks 1 User I/O 1740759767 0 213313
26571151 ON CPU NOT IN WAIT 1 0 143 0
26571149 WAITING NO HOLDER db file sequential read 2652584166 116 3075601 blocks 1 User I/O 1740759767 0 37342
26571147 WAITING NO HOLDER db file sequential read 2652584166 116 3074293 blocks 1 User I/O 1740759767 0 150539
26571146 WAITING 111 VALID log file sync 1328744198 115 0 0 Commit 3386400367 0 148213
26571144 WAITING NO HOLDER db file sequential read 2652584166 116 3076250 blocks 1 User I/O 1740759767 0 198960
26571143 WAITING NO HOLDER db file sequential read 2652584166 116 3076821 blocks 1 User I/O 1740759767 0 139530
26571142 WAITING NO HOLDER db file sequential read 2652584166 116 3076273 blocks 1 User I/O 1740759767 0 214391
26571140 ON CPU NOT IN WAIT 1 0 124 0
26571139 ON CPU NOT IN WAIT 1 0 216 0
26571137 WAITING NO HOLDER db file sequential read 2652584166 116 3076411 blocks 1 User I/O 1740759767 0 185643
26571136 ON CPU NOT IN WAIT 1 0 134 0
26571135 WAITING NO HOLDER db file sequential read 2652584166 116 3074201 blocks 1 User I/O 1740759767 0 434296
26571133 WAITING UNKNOWN direct path write 885859547 164 1484961 block cnt 1 User I/O 1740759767 0 170312
How can I improve on db file sequential read? Do you see any other potential problems for my query execution time?
Thank you
Edited by: tcklion on Apr 23, 2009 10:57 AM
tcklion wrote:
I couldn't find the hash value; I am using this query.
Hash values are 0 in the query output:
select sid, sql_text,sql_hash_value
from v$session s, v$sql q
where sid in ('session id')
How can I find the hash value from sys.v_$active_session_history? I don't see a column which shows the hash value.

You can use the SQL_ID available in V$ACTIVE_SESSION_HISTORY. The parameter to DBMS_XPLAN.DISPLAY_CURSOR is actually the SQL_ID and not the hash value (in passing, the SQL_ID is actually an obfuscated hash value, but that's not the point here: http://blog.tanelpoder.com/2009/02/22/sql_id-is-just-a-fancy-representation-of-hash-value/).
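A quick sketch of this approach (the bind :sid and the &sql_id substitution are placeholders you would fill in yourself):

```sql
-- Find the SQL_IDs the session was sampled on, busiest first
SELECT sql_id, event, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  session_id = :sid
GROUP  BY sql_id, event
ORDER  BY samples DESC;

-- Then pull the plan for the top SQL_ID
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id'));
```

DISPLAY_CURSOR called with only a SQL_ID shows child cursor 0; pass an explicit child number or a format string if you need more detail.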
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/
Similar Messages
-
Bad performance problem of Application
Hi guys,
When I open my site with many users it takes a lot of time to load. How can I solve this multi-user problem? I am using Flex and Hibernate.
Thanks,
abhi

When I open my site, a "connection has been lost" problem occurs. When I log in to my site it takes time; after that a "primary factory" error comes and we need to restart Tomcat. If many users access this site it takes a lot of time. -
Performance Problem while signing into Application
Hello
Could someone please throw some light on which area I can look into for my performance problem. It's an E-Business Suite version 11.5.10.2 which was upgraded from 11.5.8.
The problem: when the sign-in page is displayed and the user name/password is entered, it takes forever for the system to actually log the user in. Sometimes I have to click twice on the Sign In button.
I have run Purge Signon Audit / Purge Concurrent Request/Manager logs / Gather Schema Statistics, but it's still slow. Is there any way to check whether the middle tier is the bottleneck?
Thanks
Nini

Can you check the profile option FND%diagnostic% to see whether it is enabled or not?
fadi -
PL/SQL Performance problem
I am facing a performance problem with my current application (PL/SQL packaged procedure)
My application takes data from 4 temporary tables, does a lot of validation and
puts them into permanent tables (updates if present, else inserts).
One of the temporary tables is the parent table; each parent row can have 0 or more rows in
the other tables.
I have analyzed all my tables and indexes and checked all my SQLs
They all seem to be using the indexes correctly.
There are 1.6 million records combined in all 4 tables.
I am using Oracle 8i.
How do I determine what is causing the problem and which part is taking the most time?
Please help.
The skeleton of the code which we have written looks like this
MAIN LOOP ( 255308 records)-- Parent temporary table
-----lots of validation-----
update permanent_table1
if sql%rowcount = 0 then
insert into permanent_table1
Loop2 (0-5 records)-- child temporary table1
-----lots of validation-----
update permanent_table2
if sql%rowcount = 0 then
insert into permanent_table2
end loop2
Loop3 (0-5 records)-- child temporary table2
-----lots of validation-----
update permanent_table3
if sql%rowcount = 0 then
insert into permanent_table3
end loop3
Loop4 (0-5 records)-- child temporary table3
-----lots of validation-----
update permanent_table4
if sql%rowcount = 0 then
insert into permanent_table4
end loop4
-- COMMIT after every 3000 records
END MAIN LOOP
Thanks
Ashwin N.

Do this instead of ditching the PL/SQL.
DECLARE
TYPE NumTab IS TABLE OF NUMBER(4) INDEX BY BINARY_INTEGER;
TYPE NameTab IS TABLE OF CHAR(15) INDEX BY BINARY_INTEGER;
pnums NumTab;
pnames NameTab;
t1 NUMBER(5);
t2 NUMBER(5);
t3 NUMBER(5);
BEGIN
FOR j IN 1..5000 LOOP -- load index-by tables
pnums(j) := j;
pnames(j) := 'Part No. ' || TO_CHAR(j);
END LOOP;
t1 := dbms_utility.get_time;
FOR i IN 1..5000 LOOP -- use FOR loop
INSERT INTO parts VALUES (pnums(i), pnames(i));
END LOOP;
t2 := dbms_utility.get_time;
FORALL i IN 1..5000 -- use FORALL statement
INSERT INTO parts VALUES (pnums(i), pnames(i));
t3 := dbms_utility.get_time;
dbms_output.put_line('Execution Time (secs)');
dbms_output.put_line('---------------------');
dbms_output.put_line('FOR loop: ' || TO_CHAR(t2 - t1));
dbms_output.put_line('FORALL: ' || TO_CHAR(t3 - t2));
END;
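For what it's worth, each update-then-insert pair in the original skeleton can also be collapsed into a single MERGE statement, which avoids the double pass per row. A hedged sketch with made-up table and column names (note that MERGE was introduced in Oracle 9i, so this only applies after upgrading from 8i):

```sql
-- One statement instead of UPDATE followed by a conditional INSERT
MERGE INTO permanent_table1 p
USING (SELECT id, col1, col2 FROM temp_table1) t
ON    (p.id = t.id)
WHEN MATCHED THEN
  UPDATE SET p.col1 = t.col1, p.col2 = t.col2
WHEN NOT MATCHED THEN
  INSERT (id, col1, col2) VALUES (t.id, t.col1, t.col2);
```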
Try this link, http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/05_colls.htm#23723 -
SQL report performance problem
I have a SQL classic report in Apex 4.0.2 and database 11.2.0.2.0 with a performance problem.
The report is based on a PL/SQL function returning a query. The query is based on a view and pl/sql functions. The Apex parsing schema has select grant on the view only, not the underlying objects.
The generated query runs in 1-2 sec in sqlplus (logged in as the Apex parsing schema user), but takes many minutes in Apex. I have found, by monitoring the database sessions via TOAD, that the explain plan in the Apex and sqlplus sessions are very different.
The summary:
In sqlplus SELECT STATEMENT ALL_ROWS Cost: 3,695
In Apex SELECT STATEMENT ALL_ROWS Cost: 3,108,551
What could be the cause of this?
I found a blog and a Metalink note about different explain plans for different users. They suggested setting optimizer_secure_view_merging='FALSE', but that didn't help.

Hmmm, it runs fast again in SQL Workshop. I didn't expect that, because both the application and SQL Workshop use SYS.DBMS_SYS_SQL to parse the query.
Only the explain plan doesn't show anything.
To add: I changed the report source to the query the PL/SQL function would generate, so the selects are the same in SQL Workshop and in the application. Still, in the application it's horribly slow.
So, Apex does do something different in the application compared to SQL Workshop.
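One way to chase a plan difference between two sessions running the same text is to compare their optimizer environments. A sketch, assuming 10g or later (V$SES_OPTIMIZER_ENV); :apex_sid and :sqlplus_sid are placeholders for the two session IDs:

```sql
-- Optimizer parameters that differ between the two sessions
SELECT a.name, a.value AS apex_value, s.value AS sqlplus_value
FROM   v$ses_optimizer_env a
JOIN   v$ses_optimizer_env s ON s.name = a.name
WHERE  a.sid = :apex_sid
AND    s.sid = :sqlplus_sid
AND    a.value <> s.value;
```

Bind peeking is another common suspect: Apex executes with bind variables while a sqlplus test often uses literals, which can yield very different plans for the same query text.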
Edited by: InoL on Aug 5, 2011 4:50 PM -
Performance problem with WPF Viewer CRVS2010
Hi,
We are using Crystal Reports 2010 and the new WPF Viewer. Last week when we set up a test machine to run our integration tests (several hundred) all report tests failed (about 30 tests) with a timeout exception.
The testmachine setup:
HP DL 580 G5
WMWare ESXi 4.0
Guest OS: Windows 7 Enterprise 64-bit
Memory (guest OS): 3GB
CPU: 1
Visual Studio 2010
Crystal Reports for Visual Studio 2010 with 64 bit runtime installed
Visual Studio 2008 installed
Microsoft Office 2010 installed
Macafee antivirus
There are about 10 other virtual machines on the same HW.
I think the performance problem is related to text objects on a report document viewed in a WPF Viewer. I made a simple WPF GUI with 2 buttons; the first button executes a very simple report that only has a text object with a few words in it, and the other button is also a simple report with only 1 text object with approx. 100 words (about 800 characters).
The first report executes and displays almost instantly; the second report executes instantly but displays after approx. 1 min 30 sec.
"Execute" in this context means that all the VB.Net code runs without any exception or performance problem. The performance problem seems to come after viewer.Show() (in the code below) has executed.
I did another test on the second report and replaced the text object with a formula field containing the same text, and this test executed and displayed the report instantly.
So the performance problem seems to have something to do with the rendering of text objects in the WPF Viewer on a virtual machine with the above setup.
I've made several tests on local machines with Windows XP (32-bit) or Windows 7 (64-bit) installed and none of them have this performance problem. It's not a critical issue for us because our users will run this application on their local PCs with Windows 7 64-bit, but it's a bit problematic for our project not being able to run all of our integration tests; I will probably solve this by using a local PC instead.
Here is the VB.Net code I'm using to view the reports:
Private Sub LightWeight_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeight.rpt")
' Initialize Viewer
Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
viewer.Owner = Me
viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
viewer.Show()
End Sub
Private Sub LightWeightSlow_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeightSlow.rpt")
' Initialize Viewer
Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
viewer.Owner = Me
viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
viewer.Show()
End Sub
The reports are 2 empty default reports with only 1 textobject on the details section.
// Thomas

See if the KB [
[1448013 - Connecting to Oracle database. Error; Failed to load database information|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433343338333033313333%7D.do] helps.
Also the following may not hurt to have a look at (if only for ideas):
[1217021 - Err Msg: "Unable to connect invalid log on parameters" using Oracle in VS .NET|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333233313337333033323331%7D.do]
[1471508 - Logon error when connecting to Oracle database in a VS .NET application|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433373331333533303338%7D.do]
[1196712 - Error: "Failed to load the oci.dll" in ASP.NET application against an Oracle database|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333133393336333733313332%7D.do]
Ludek
Follow us on Twitter http://twitter.com/SAPCRNetSup -
JRC 2: Performance Problem
Hi.
Our reporting component used JRC 1.x before we upgraded to JRC 2.x. We got two issues after upgrading.
The first issue I solved already with a workaround which I published on stackoverflow.com. (1) Does anyone know where I can find the issue management system to report this issue?
The second issue is a big performance problem within our project. We opened a report with 6 subreports (each including 1 to 3 tables) in 2-4 seconds using JRC 1. If we open the same report using JRC 2, we wait up to 60 seconds.
These methods require more time with JRC 2 compared to JRC 1:
ReportClientDocument#open(String, int);
SubreportController#setTableLocation(String, ITable, ITable)
DatabaseController#setTableLocation(ITable, ITable)
Each invocation of one of these methods requires 2-4 seconds.
Thank you in advance.
Best regards
Thomas
(1) http://stackoverflow.com/questions/479405/replace-a-database-connection-for-subreports-with-jrc

Hello ....
My report is "Crystal Report 11" => "OLE DB" => "Add Command (select * from table)".
Code (JRC): Eclipse + Crystal Reports for Eclipse version 2 => "cr4e-all-in-one-win_2.0.1.zip"
<%@ page contentType="text/html; charset=UTF-8"
import="
com.crystaldecisions.report.web.viewer.CrystalReportViewer,
com.crystaldecisions.reports.sdk.ReportClientDocument,
com.crystaldecisions.sdk.occa.report.lib.ReportSDKExceptionBase,
java.sql.Connection,
java.sql.DriverManager,
java.sql.ResultSet,
java.sql.SQLException,
java.sql.Statement" %>
<%
try {
String reportName = "report.rpt";
ReportClientDocument clientDoc = new ReportClientDocument();
clientDoc.open(reportName, 0);
String tableAlias = "Command";
clientDoc.getDatabaseController().setDataSource(myResult("SELECT * FROM table"), tableAlias,tableAlias);
CrystalReportViewer crystalReportPageViewer = new CrystalReportViewer();
crystalReportPageViewer.setReportSource(clientDoc.getReportSource());
crystalReportPageViewer.processHttpRequest(request, response, application, null);
} catch (ReportSDKExceptionBase e) {
e.printStackTrace();
out.println(e);
%>
I simplified the code; *myResult("SELECT * FROM table")* is absolutely no problem,
and this code is absolutely no problem in "Crystal Reports for Eclipse" version 1,
but in version 2 it runs into an error:
com.crystaldecisions.sdk.occa.report.lib.ReportSDKException: Unexpected database connector error ---- Error code: -2147467259 Error code name: failed
at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(Unknown Source)
at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(Unknown Source)
at com.businessobjects.reports.sdk.JRCCommunicationAdapter.if(Unknown Source)
at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(Unknown Source)
at com.businessobjects.reports.sdk.JRCCommunicationAdapter$2.a(Unknown Source)
at com.businessobjects.reports.sdk.JRCCommunicationAdapter$2.call(Unknown Source)
at com.crystaldecisions.reports.common.ThreadGuard.syncExecute(Unknown Source)
at com.businessobjects.reports.sdk.JRCCommunicationAdapter.for(Unknown Source)
at com.businessobjects.reports.sdk.JRCCommunicationAdapter.int(Unknown Source)
at com.businessobjects.reports.sdk.JRCCommunicationAdapter.request(Unknown Source)
at com.businessobjects.sdk.erom.jrc.a.a(Unknown Source)
at com.businessobjects.sdk.erom.jrc.a.execute(Unknown Source)
at com.crystaldecisions.proxy.remoteagent.RemoteAgent$a.execute(Unknown Source)
at com.crystaldecisions.proxy.remoteagent.CommunicationChannel.a(Unknown Source)
at com.crystaldecisions.proxy.remoteagent.RemoteAgent.a(Unknown Source)
at com.crystaldecisions.sdk.occa.report.application.ReportClientDocument.if(Unknown Source)
at com.crystaldecisions.sdk.occa.report.application.ReportClientDocument.a(Unknown Source)
at com.crystaldecisions.sdk.occa.report.application.ReportClientDocument.new(Unknown Source)
at com.crystaldecisions.sdk.occa.report.application.b9.onDataSourceChanged(Unknown Source)
at com.crystaldecisions.sdk.occa.report.application.DatabaseController.a(Unknown Source)
at com.crystaldecisions.sdk.occa.report.application.DatabaseController.a(Unknown Source)
at com.crystaldecisions.sdk.occa.report.application.DatabaseController.setDataSource(Unknown Source)
at org.apache.jsp.No_005f1.Eclipse_005fJTDS_005fSQL2005_005fTable_002dviewer_jsp._jspService(Eclipse_005fJTDS_005fSQL2005_005fTable_002dviewer_jsp.java:106)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:374)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Unknown Source)
Caused by: com.crystaldecisions.reports.common.QueryEngineException: Unexpected database connector error
at com.crystaldecisions.reports.queryengine.Connection.bf(Unknown Source)
at com.crystaldecisions.reports.queryengine.Rowset.z3(Unknown Source)
at com.crystaldecisions.reports.queryengine.Rowset.bL(Unknown Source)
at com.crystaldecisions.reports.queryengine.Rowset.zM(Unknown Source)
at com.crystaldecisions.reports.queryengine.Connection.a(Unknown Source)
at com.crystaldecisions.reports.queryengine.Table.a(Unknown Source)
at com.crystaldecisions.reports.queryengine.Table.if(Unknown Source)
at com.crystaldecisions.reports.queryengine.Table.try(Unknown Source)
at com.crystaldecisions.reports.queryengine.Table.a(Unknown Source)
at com.crystaldecisions.reports.queryengine.Table.u7(Unknown Source)
at com.crystaldecisions.reports.datafoundation.DataFoundation.a(Unknown Source)
at com.crystaldecisions.reports.dataengine.dfadapter.DFAdapter.a(Unknown Source)
at com.crystaldecisions.reports.dataengine.dfadapter.CheckDatabaseHelper.a(Unknown Source)
at com.crystaldecisions.reports.dataengine.datafoundation.CheckDatabaseCommand.new(Unknown Source)
at com.crystaldecisions.reports.common.CommandManager.a(Unknown Source)
at com.crystaldecisions.reports.common.Document.a(Unknown Source)
at com.crystaldecisions.reports.dataengine.VerifyDatabaseCommand.new(Unknown Source)
at com.crystaldecisions.reports.common.CommandManager.a(Unknown Source)
at com.crystaldecisions.reports.common.Document.a(Unknown Source)
at com.businessobjects.reports.sdk.requesthandler.f.a(Unknown Source)
at com.businessobjects.reports.sdk.requesthandler.DatabaseRequestHandler.a(Unknown Source)
at com.businessobjects.reports.sdk.requesthandler.DatabaseRequestHandler.if(Unknown Source)
at com.businessobjects.reports.sdk.JRCCommunicationAdapter.do(Unknown Source)
... 39 more
Please help me and tell me why.... -
Performance problems with Leopard 10.5.1
Hello,
I use an iMac 24 Alu 2,8Ghz and upgraded to Leopard. There are some major performance problems and bugs in the recent version of Leopard:
1. While accessing USB devices, the display speed, windows moving, animations etc. slow down
2. Adobe CS3 Photoshop 10.01 and Flash CS3 are sometimes extremly slow. I tried the recent demo packages from Adobe:
2.1. The Photoshop dialogue "save for web" slows down the system completly, and this problem stays when quitting Photoshop. A restart is neccessary then.
2.2. Flash CS3 movie preview is very slow and stuttering. It's so slow you cannot imagine how the real movie flow will be.
2.3. Recent Flash Player 9,0,115,0 with hardware acceleration enabled doesn't really work with QuartzGL enabled Leopard: The movies slow down a lot. Try www.neave.tv for example.
3. Safari, Mail and other bundled software hang sometimes. You have to force quit them then. It doesn't matter whether QuartzGL is enabled or not. This especially happens to my system if it is online for some hours.
4. A lot of Apple applications don't seem to work with 2D Extreme enabled. Why is this? Apple supporters told me there would be much better 2D Extreme support in Leopard. Also, Quartz 2D Extreme in OS X 10.4 worked with all applications, and I guess it's the same feature as "QuartzGL" in Leopard. So Leopard isn't finished here. It would be nice if Apple could make its own software QuartzGL compatible.
5. Very often the desktop slows down or lags. This is the main reason I often still switch to a Windows XP PC to do work in a faster, less annoying way.
6. Safari crashes randomly sometimes. It is unstable still. Also it crashes more often if you resize/move the window a lot, so I guess it is a graphics extension-related problem.
I hope you people from Apple will fix these annoying points and optimize your new system in the next update release.
Best regards

Thanks for your answer. I repaired in the way described above. There were some errors (some file index was wrong, I don't remember the exact phrase); now DU reports the partition was successfully repaired / the volume appears to be OK.
The crashes in Safari are gone, but all other described problems still exist. Adobe CS3 is not really usable for me.
By the way, in iMacSoftwareUpdate 1.3, which was replaced by OSX 10.5.1 update, there is one extension called AppleVADriver.kext that does not exist in the OSX 10.5.1 update. Is it an important extension? -
Performance problems when running PostgreSQL on ZFS and tomcat
Hi all,
I need help with some analysis and problem solution related to the below case.
The long story:
I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
The configuration between the two is pretty much the same and the problem therefore seems generic for the setup.
Within a non-global zone I'm running a Tomcat application (an institutional repository) connecting via localhost to a PostgreSQL database (the OS-provided version). The processor load is typically not very high, as seen below:
NPROC USERNAME SWAP RSS MEMORY TIME CPU
49 postgres 749M 669M 4,7% 7:14:38 13%
1 jboss 2519M 2536M 18% 50:36:40 5,9%

We are not 100% sure why we run into performance problems, but when it happens we experience that the application slows down and swaps out (according to the figures below). When it settles, everything seems to return to normal. When the problem is acute, the application is totally unresponsive.
NPROC USERNAME SWAP RSS MEMORY TIME CPU
1 jboss 3104M 913M 6,4% 0:22:48 0,1%
#sar -g 5 5
SunOS vbn-back 5.10 Generic_142901-03 i86pc 05/28/2010
07:49:08 pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
07:49:13 27.67 316.01 318.58 14854.15 0.00
07:49:18 61.58 664.75 668.51 43377.43 0.00
07:49:23 122.02 1214.09 1222.22 32618.65 0.00
07:49:28 121.19 1052.28 1065.94 5000.59 0.00
07:49:33 54.37 572.82 583.33 2553.77 0.00
Average 77.34 763.71 771.43 19680.67 0.00

Making more memory available to Tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
An unofficial performance evaluation on the database with VACUUM ANALYZE took 19 minutes on the server and only 1 minute on a desktop PC. This is horrific when taking the hardware into consideration.
The short story:
I'm trying different steps but running out of ideas. We've read that the database block size and file system block size should match. PostgreSQL is 8 KB and ZFS is 128 KB. I didn't find much information on the matter, so if anyone can help, please recommend how to make this change.
Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
Any help appreciated and I will try to provide additional information on request if needed
Thanks in advance,
Kasper

raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
You can change the record size by "zfs set recordsize=8k <dataset>"
It will only take effect for newly written data. Not existing data. -
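Since recordsize only affects newly written files, one way to apply the 8 KB setting is to rewrite the data into a dataset created with the new record size. A hedged sketch: the pool/dataset paths and the PostgreSQL service name below are assumptions for illustration, not taken from the thread.

```shell
# Create a new dataset whose record size matches PostgreSQL's 8 KB block size
zfs create -o recordsize=8K tank/pgdata8k

# Stop the database so files are quiescent, copy the cluster so every
# file is rewritten with the new record size, then switch over
svcadm disable svc:/application/database/postgresql
cp -rp /tank/pgdata/. /tank/pgdata8k/
svcadm enable svc:/application/database/postgresql
```

Setting recordsize=8k on the existing dataset alone would only affect data written after the change, as noted above.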
Performance problems with DFSN, ABE and SMB
Hello,
We have identified a problem with DFS-Namespace (DFSN), Access Based Enumeration (ABE) and SMB File Service.
Currently we have two Windows Server 2008 R2 servers providing the domain-based DFSN in functional level Windows Server 2008 R2 with activated ABE.
The DFSN servers have the most current hotfixes for DFSN and SMB installed, according to http://support.microsoft.com/kb/968429/en-us and http://support.microsoft.com/kb/2473205/en-us
We have only one AD-site and don't use DFS-Replication.
Servers have 2 Intel X5550 4 Core CPUs and 32 GB Ram.
Network is a LAN.
Our DFSN looks like this:
\\contoso.com\home
Contains 10.000 Links
Drive mapping on clients to subfolder \\contoso.com\home\username
\\contoso.com\group
Contains 2500 Links
Drive mapping on clients directly to \\contoso.com\group
On \\contoso.com\group we serve different folders for teams, projects and other groups with different access permissions based on AD groups.
We have to use ABE, so that users see only accessible Links (folders)
We encounter sometimes multiple times a day enterprise-wide performance problems for 30 seconds when accessing our Namespaces.
After six weeks of researching and analyzing we were able to identify the exact problem.
Administrators create a new DFS-Link in our Namespace \\contoso.com\group with correct permissions using the following command line:
dfsutil.exe link \\contoso.com\group\project123 \\fileserver1\share\project123
dfsutil.exe property sd grant \\contoso.com\group\project123 CONTOSO\group-project123:RX protect replace
This is done a few times a day.
There is no possibility to create the folder and set the permissions in one step.
DFSN process on our DFSN-servers create the new link and the corresponding folder in C:\DFSRoots.
At this time, we have for example 2000+ clients having an active session to the root of the namespace \\contoso.com\group.
Active session means a Windows Explorer opened to the mapped drive or to any subfolder.
The file server process (Lanmanserver) sends a change notification (SMB-Protocol) to each client with an active session \\contoso.com\group.
All the clients which were getting the notification now start to refresh the folder listing of \\contoso.com\group
This was identified by an network trace on our DFSN-servers and different clients.
Due to ABE the servers have to compute the folder listing for each request.
The DFS service on the servers doesn't respond to any additional requests for probably 30 seconds. CPU usage increases significantly over this period and goes back to normal afterwards, on our hardware from about 5% to 50%.
Users can't access all DFS-Namespaces during this time and applications using data from DFS-Namespace stop responding.
Side effect: Windows reports on clients a slow-link detection for \\contoso.com\home, which can be offline available for users (described here for WAN-connections: http://blogs.technet.com/b/askds/archive/2011/12/14/slow-link-with-windows-7-and-dfs-namespaces.aspx)
The problem doesn't occur when creating a link in \\contoso.com\home, because users only have a mapping to subfolders.
Currently, the problem also doesn't occur for \\contoso.com\app, because users usually don't use Windows Explorer to access this mapping.
Disabling ABE reduces the DFSN freeze time, but doesn't solve the problem.
Problem also occurs with Windows Server 2012 R2 as DFSN-server.
There is a registry key available for clients to avoid responding to the change notification (NoRemoteChangeNotify, see http://support.microsoft.com/kb/812669/en-us).
This might fix the problem with DFSN, but it results in other problems for the users. For example, they would have to press F5 to refresh every remote directory on change.
Is there a possibility to disable the SMB change notification on server side ?
TIA and regards,
Ralf Gaudes

Hi,
Thanks for posting in Microsoft Technet Forums.
I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.
Thank you for your understanding and support.
Regards.
We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time.
Thanks for helping make community forums a great place. -
Performance problems with File Adapter and XI freeze
Hi NetWeaver XI geeks,
We are deploying an XI-based product and have encountered some huge performance problems. Below are the scenario and the issues:
- NetWeaver XI 2004
- SAP 4.6c
- Outbound Channel
- No mapping used and only the iDocs Adapter is involved in the pipeline processing
- File Adapter
- message file size < 2 KB
We have narrowed the problem down to the IDoc adapter's performance.
We are using a file channel: every 15 seconds a file in a valid IDoc format is placed in a folder; the IDoc adapter picks up the file from this folder and sends it to the SAP R/3 instance.
For a few minutes (approx. 5 min) it works (the CPU usage is less than 20%, even if the processing time seems huge: <b>5 sec/msg</b>), but after this the application gets blocked and the CPU is overloaded at 100% (2 disp_worker.exe processes at 50% each).
If we inject several files into the source folder at the same time, or if we decrease the gap between the creation of 2 IDoc files (from 15 seconds to 10 seconds), the process blocks after posting 2-3 documents to SAP R/3.
Could you point us to some reasons that could provoke this behavior?
Basically, we are looking for some help in improving the performance of the IDoc adapter.
Thanks in advance for your help and regards,
Adalbert
Hi Bhavesh,
Thanks for your suggestions. We will test...
We wonder whether the hardware is the cause of this extremely poor performance.
Our XI server is:
Windows 2003 Server
Processors: 2x3GHZ
RAM: 4 GB (memory is not exhausted)
The messages are well-formed iDocs = single-line INVOICES.
Some posts talk about 2000 messages processed in a few seconds... whereas we get 5 sec per message.
Thanks for your help.
Adalbert -
Performance problems with XMLTABLE and XMLQUERY involving relational data
Hello-
Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
* Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
* Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 16 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.
<Note>Long post, sorry.</Note>
First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
• Converting legacy tabular data into XML records; and
• Performing code table lookups for coded values in XML records.
There are three things I want to accomplish with this post:
• Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
• Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
• Highlight remaining performance issues in hopes that we can solve them
What we are trying to accomplish:
• Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
• Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
• Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
• Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
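As a sanity check on the volumes above, a quick back-of-the-envelope calculation (a sketch; the counts and rates are exactly the ones quoted in the requirements) shows why 10 records/second is the practical floor:

```python
# Back-of-the-envelope throughput check for the stated volumes.
# All figures come from the requirements above; nothing is measured.

def processing_hours(record_count: int, records_per_sec: float) -> float:
    """Wall-clock hours to process record_count at a given rate."""
    return record_count / records_per_sec / 3600.0

monthly_feed = 10_000      # XML records per monthly feed
annual_feed = 1_000_000    # possible future annual feed
one_time_feed = 500_000    # legacy relational records to convert

# At 1 record/second the one-time conversion takes almost six days:
print(f"{processing_hours(one_time_feed, 1):.0f} h")   # 139 h
# At 10 records/second the future annual feed is still more than a day:
print(f"{processing_hours(annual_feed, 10):.0f} h")    # 28 h
```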
As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
What we did and why:
• Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
• Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
• Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
• Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
• Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
• Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable but interferes with other database activity.
• Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
What issues remain?
We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
-- The main record table:
create table RECORDS (
SSN varchar2(20),
XMLREC sys.xmltype
)
xmltype column XMLREC store as binary xml;
create index records_ssn on records(ssn);
-- A dozen code tables represented by one like this:
create table CODES (
CODE varchar2(4),
DESCRIPTION varchar2(500)
);
create index codes_code on codes(code);
-- Some XML records with coded values (the real records are much more complex of course):
-- I think this took about a minute or two
DECLARE
ssn varchar2(20);
xmlrec xmltype;
i integer;
BEGIN
xmlrec := xmltype('<?xml version="1.0"?>
<Root>
<Id>123456789</Id>
<Element>
<Subelement1><Code>11</Code></Subelement1>
<Subelement2><Code>21</Code></Subelement2>
<Subelement3><Code>31</Code></Subelement3>
</Element>
<Element>
<Subelement1><Code>11</Code></Subelement1>
<Subelement2><Code>21</Code></Subelement2>
<Subelement3><Code>31</Code></Subelement3>
</Element>
<Element>
<Subelement1><Code>11</Code></Subelement1>
<Subelement2><Code>21</Code></Subelement2>
<Subelement3><Code>31</Code></Subelement3>
</Element>
</Root>');
for i IN 1..100000 loop
insert into records(ssn, xmlrec) values (i, xmlrec);
end loop;
commit;
END;
-- Some code data like this (ignoring date ranges on codes):
DECLARE
description varchar2(100);
i integer;
BEGIN
description := 'This is the code description ';
for i IN 1..3000 loop
insert into codes(code, description) values (to_char(i), description);
end loop;
commit;
end;
-- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
-- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
-- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
-- Note we are accessing a single XML record based on SSN
-- Note also we are reusing the one test code table multiple times for convenience of this test
select xmlquery('
for $r in Root
return
<Root>
<Id>123456789</Id>
{for $e in $r/Element
return
<Element>
<Subelement1>
{$e/Subelement1/Code}
<Description>
{ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
</Description>
</Subelement1>
<Subelement2>
{$e/Subelement2/Code}
<Description>
{ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
</Description>
</Subelement2>
<Subelement3>
{$e/Subelement3/Code}
<Description>
{ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
</Description>
</Subelement3>
</Element>
</Root>
' passing xmlrec returning content)
from records
where ssn = '10000';
The plan shows the nested loop access that slows things down.
By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
Operation Object
|SELECT STATEMENT ()
| SORT (AGGREGATE)
| NESTED LOOPS (SEMI)
| TABLE ACCESS (FULL) CODES
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| NESTED LOOPS (SEMI)
| TABLE ACCESS (FULL) CODES
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| NESTED LOOPS (SEMI)
| TABLE ACCESS (FULL) CODES
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| XPATH EVALUATION ()
| TABLE ACCESS (BY INDEX ROWID) RECORDS
| INDEX (RANGE SCAN) RECORDS_SSN
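The nested-loop vs. hash-join contrast described above can be sketched outside the database. This is plain Python standing in for the two join strategies (the sizes mirror the test data; nothing here is Oracle-specific):

```python
# Sketch: why a hash join beats a nested loop for code lookups.
# 'codes' plays the role of the CODES table, 'record_codes' the coded
# values extracted from one XML record.

codes = [(str(i), f"description {i}") for i in range(3000)]
record_codes = ["11", "21", "31"] * 5   # 5 Elements x 3 Subelements

def nested_loop_lookup(record_codes, codes):
    # One full scan of CODES per coded value -- the plan shown above.
    out = []
    for c in record_codes:
        for code, desc in codes:        # TABLE ACCESS (FULL) CODES
            if code == c:
                out.append((c, desc))
                break
    return out

def hash_join_lookup(record_codes, codes):
    # Build a hash table over CODES once, then probe it per coded value.
    lookup = dict(codes)                # build side: one pass over CODES
    return [(c, lookup[c]) for c in record_codes if c in lookup]

# Same answer, but the hash join touches each input once instead of
# rescanning CODES for every coded value.
assert nested_loop_lookup(record_codes, codes) == hash_join_lookup(record_codes, codes)
```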
With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
-- Add an xmlindex. Takes about 2.5 minutes
create index records_record_xml ON records(xmlrec)
indextype IS xdb.xmlindex;
Operation Object
|SELECT STATEMENT ()
| SORT (GROUP BY)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| TABLE ACCESS (FULL) CODES
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (GROUP BY)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| TABLE ACCESS (FULL) CODES
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (GROUP BY)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| TABLE ACCESS (FULL) CODES
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| TABLE ACCESS (BY INDEX ROWID) RECORDS
| INDEX (RANGE SCAN) RECORDS_SSN
Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity. -
Hello again,
we are experiencing performance problems with the 6321 in our environment - 10 kHz system clock, i.e. the analog input channels should be read every 100 µs. This works fine with one card (differentially cabled), but if we plug in a second card the system just comes to a grinding halt.
It appears that one AI channel register access takes about 4 µs, which would mean 64 µs for 16 channels. We have tried RLP, "pure" DMA and DMA using interrupts.
Am I getting something wrong or was the 6321 just not designed for this kind of application?
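The timing budget implied by these numbers can be written out directly (a sketch using only the figures quoted above; sequential reads across the two cards are assumed):

```python
# Timing budget sketch based on the figures quoted above.
cycle_us = 100          # 10 kHz system clock -> 100 us per cycle
access_us = 4           # observed time per AI channel register access
channels = 16

one_card = channels * access_us    # 64 us: fits inside the 100 us cycle
two_cards = 2 * one_card           # 128 us if the cards are read sequentially

print(one_card, two_cards)  # 64 128
# With two cards, the register reads alone already exceed the cycle time,
# which is consistent with the system grinding to a halt.
assert two_cards > cycle_us
```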
Best regards,
Philip -
Performance problem with Mavericks.
Performance problem with Mavericks. My Mac is extremely slow after upgrading to Mavericks. What can I do to solve that?
If you are still experiencing slowdown issues, it may be because of a few other reasons.
Our experience with OS X upgrades, and Mavericks is no exception, is that users have installed a combination of third party software and/or hardware that is incompatible and/or is outdated that causes many negative performance issues when upgrading to a new OS X version.
Your Mac's hard drive may be getting full.
Do you run any antivirus software on your Mac? Commercial Antivirus software can slow down and negatively impact the normal operation of OS X.
Do you have apps like MacKeeper, or any other maintenance apps like CleanMyMac 1 or 2, TuneUpMyMac, or anything similar installed on your Mac? These types of apps, while they appear to be helpful, can do too good a job of data "cleanup", potentially causing serious data corruption or deletion and rendering a perfectly running OS completely dead and useless, leaving you with a frozen, non-functional Mac.
Your Mac may have way too many applications launching at startup/login.
Your Mac may have old, non-updated or incompatible software installed.
Your Mac could have incompatible or outdated web browser extensions, plugins or add-ons.
Your Mac could have connected third party hardware that needs updated device drivers.
It would help us to help you if we could have some more technical info about your iMac.
If you so choose, please download, install and run Etrecheck.
Etrecheck was developed as a simple Mac diagnostic report tool by a regular Apple Support forum user and technical support contributor named Etresoft. Etrecheck is a small, unobtrusive app that compiles a static snapshot of your entire Mac hardware system and installed software.
This is a free app that was created to provide help in diagnosing issues with Macs running the new OS X 10.9 Mavericks.
It is not malware and can be safely downloaded and installed onto your Mac.
http://www.etresoft.com/etrecheck
Copy/paste and post its report here in a reply so that we have a complete profile of your Mac's hardware and installed software, and we can all continue helping with your Mac's performance issues.
Thank you. -
Hi,
This is similar - yet different - to a few of the old postings about performance
problems with using jdbc drivers against Sql Server 7 & 2000.
Here's the situation:
I am running a standalone java application on a Solaris box using BEA's jdbc driver
to connect to a Sql Server database on another network. The application retrieves
data from the database through joins on several tables for approximately 40,000
unique ids. It then processes all of this data and produces a file. We tuned
the app so that the execution time for a single run through the application was
24 minutes running against Sql Server 6.5 with BEA's jdbc driver. After performing
a DBMS conversion to upgrade it to Sql Server 2000 I switched the jDriver to the
Sql Server 2000 version. I ran the app and got an alarming execution time of
5hrs 32 min. After some research, I found the problem with unicode and nvarchar/varchar
and set the "useVarChars" property to "true" on the driver. The execution time
for a single run through the application is now 56 minutes.
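For background, the unicode/varchar issue you found is the classic implicit-conversion problem: when a driver sends string parameters as nvarchar against a varchar-indexed column, the server can no longer seek the index and falls back to converting and scanning. A toy Python model of the lookup behavior (not JDBC code; the names and sizes are illustrative only):

```python
# Toy model of the nvarchar-vs-varchar parameter problem.
# An "index" on a varchar column is modeled as a dict keyed by str.
index = {f"id{i}": f"row {i}" for i in range(40_000)}

def seek(param: str):
    # Parameter type matches the column: direct index seek, O(1).
    return index.get(param)

def scan(param: bytes):
    # Parameter arrives in a different type: every key must be
    # converted before comparing, so the whole index is scanned.
    for key, row in index.items():
        if key.encode() == param:
            return row
    return None

# Same result, wildly different cost -- one probe vs. up to 40,000
# conversions, repeated for each of the ~40,000 ids in the run.
assert seek("id123") == scan(b"id123") == "row 123"
```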
56 minutes compared to 5 1/2 hrs is an amazing improvement. However, it is still
over twice the execution time that I was seeing against the 6.5 database. Theoretically,
I should be able to switch out my jdbc driver and the DBMS conversion should be
invisible to my application. That would also mean that I should be seeing the
same execution times with both versions of the DBMS. Has anybody else seen a
similar situation? Are there any other settings or fixes that I can put into place
to get my performance back down to what I was seeing with 6.5? I would rather
not have to go through and perform another round of performance tuning after having
already done this when the app was originally built.
thanks,
mike
Mike wrote:
Joe,
This was actually my next step. I replaced the BEA driver with
the MS driver and let it run through with out making any
configuration changes, just to see what happened. I got an
execution time of about 7 1/2 hrs (which was shocking). So,
(comparing apples to apples) while leaving the default unicode
property on, BEA ran faster than MS, 5 1/2 hrs to 7 1/2 hrs.
I then set the 'SendStringParametersAsUnicode' to 'false' on the
MS driver and ran another test. This time the application
executed in just over 24 minutes. The actual runtime was 24 min
16 sec, which is still ever so slightly above the actual runtime
against SS 6.5 which was 23 min 35 sec, but is twice as fast as the
56 minutes that BEA's driver was giving me.
I think that this is very interesting. I checked to make sure that
there were no outside factors that may have been influencing the
runtimes in either case, and there were none. Just to make sure,
I ran each driver again and got the same results. It sounds like
there are no known issues regarding this?
We have people looking into things on the DBMS side and I'm still
looking into things on my end, but so far none of us have found
anything. We'd like to continue using BEA's driver for the
support and the fact that we use Weblogic Server for all of our
online applications, but this new data might mean that I have to
switch drivers for this particular application.
Thanks. No, there is no known issue, and if you put a packet sniffer
between the client and DBMS, you will probably not see any appreciable
difference in the content of the SQL sent by either driver. My suspicion is
that it involves the historical backward compatibility built in to the DBMS.
It must still handle several iterations of older applications, speaking obsolete
versions of the DBMS protocol, and expecting different DBMS behavior!
Our driver presents itself as a SQL7-level application, and may well be treated
differently than a newer one. This may include different query processing.
Because our driver is deprecated, it is unlikely that it will be changed in
future. We will certainly support you using the MS driver, and if you look
in the MS JDBC newsgroup, you'll see more answers from BEA folks than
from MS people!
Joe
Mike
The next test you should do, to isolate the issue, is to try another
JDBC driver.
MS provides a type-4 driver now, for free. If it is significantly faster,
it would be
interesting. However, it would still not isolate the problem, because
we still would
need to know what query plan is created by the DBMS, and why.
Joe Weinstein at BEA
PS: I can only tell you that our driver has not changed in its semantic
function.
It essentially sends SQL to the DBMS. It doesn't alter it.
-
New to garageband and having some issues
Hi. I have a series of files that I am working with and I thought GB would be a good fit to edit them in. I am importing a lot of taped information into GB to edit but I want to be able to break the imported tapes apart into a series of files. Right
-
Firefox was working prior to updating to version 17. Then when I click it, it won't start. I tried to start in Safe Mode, also Reset Mode. I tried to uninstall version 17, and do a clean install. Also, I cannot run an older version of Firefox since t
-
Movie playlist in iTunes is not showing up on iPhone after manually managing syncing and including movies from playlist.
-
Export all user security profiles at once
Is there a way to export all of the e-sourcing user profiles at once? Thanks for your help, Jerry
-
SCORM and Desire 2 Learn (D2L)
Is there a way to create a course in Captivate 5.5 and import it into D2L and have the quizzes linked to the gradebook? I am able to import my course (by saving and publishing the course as a SCORM 1.2, not 2004), but can only view a "report" based o