Baffled by performance
Oracle 11.1.0.6.0 client, 10.2.0.3 database, Visual Studio 2005.
I have a C# program with a pretty involved update in it. There are two bind variables, one used several times. When I ran the update as one fat statement from within the program, it took hours. When I ran it in SQL Developer (with the bind variables in the same places), it took a minute. So I extracted the fat update, put it in a PL/SQL procedure, and called it...
OleDbCommand cmd = CommonSupport._CSIADO.getOleDbForProc("MFIVE.CALC_DOWNTIME.INSERT_PDS");
OleDbParameter resultParm = cmd.Parameters.Add("p_records_affected", OleDbType.Integer);
resultParm.Direction = ParameterDirection.Output;
OleDbParameter downtimeCalcDtParm =
    cmd.Parameters.AddWithValue("p_downtime_calc_dt", un_downtime_calc_dt);
downtimeCalcDtParm.Direction = ParameterDirection.Input;
cmd.ExecuteNonQuery();
...and it still takes hours. I ran the PL/SQL procedure from within SQL Developer (again, passing in the parameters) and it still takes a minute.
If I look at Enterprise Manager, it says the statement is using the same plan that SQL Developer says it will use. A tkprof of a trace is rather different, though: it leaves out the work being done in the subselects and just... looks different.
I'll post the tkprof output here for the bored, but a general question: does OLEDB make Oracle stupid under some circumstances?
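One way to verify whether the two sessions really got the same plan (rather than trusting Enterprise Manager's summary) is to pull the actual child cursors from the shared pool. A sketch, assuming a 10gR2 dictionary and that both cursors are still cached:

```sql
-- Find the child cursors for the insert: the OLEDB session and the
-- SQL Developer session will typically parse separate children.
select sql_id, child_number, plan_hash_value, executions, elapsed_time
from   v$sql
where  upper(sql_text) like 'INSERT INTO PD_DWN_STAT%';

-- Show the real runtime plan for each child; differing plan_hash_values
-- would confirm the OLEDB session is getting a different plan.
select * from table(
  dbms_xplan.display_cursor('&sql_id', &child_number));
```

Substitute the sql_id and child_number values reported by the first query.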
From SQL Developer, calling the PL/SQL procedure (about 100 seconds):
insert into pd_dwn_stat
(company, unit_id, period, oper_dt, maint_dt)
with d as
(select c.company, c.unit_id, min(c.start_dt) as start_dt,
min(c.shift_code) as shift_code, c.end_dt
from
(select a.company, a.unit_id, a.start_dt, g.shift_code as shift_code,
max(coalesce(b.end_dt, sysdate)) as end_dt
from unit_downtime a
join unit_downtime b
on a.company = b.company
and a.unit_id = b.unit_id
left outer join o_wo o
on o.company = a.company
and o.wo_no = a.wo_no
left outer join loc_maint g
on g.company = o.company
and g.location = o.location
where a.start_dt >= b.start_dt
and a.start_dt < coalesce(a.end_dt, sysdate)
and b.start_dt < coalesce(b.end_dt, sysdate)
and b.start_dt >=any
(select z.start_dt
from unit_downtime z
where z.downtime_calc_dt = :p_downtime_calc_dt
and z.unit_id = b.unit_id)
group by a.company, a.unit_id, a.start_dt, g.shift_code) c
group by c.company, c.unit_id, c.end_dt),
f as
(select company, fisc_pd, start_date,
lead(start_date) over (partition by company order by fisc_pd) as end_date
from fiscal_cal)
select u.company, u.unit_id, f.fisc_pd,
(select sum((least(f.end_date, :p_downtime_calc_dt, d.end_dt, s.end_dt) -
greatest(f.start_date, d.start_dt, s.start_dt))) * 24 * 60 * 60 *1000
from shift s
join d
on s.company = d.company
where d.unit_id = u.unit_id
and s.start_dt >= f.start_date
and s.end_dt < f.end_date
and d.start_dt <= s.end_dt
and d.end_dt > s.start_dt
and s.company = u.company
and s.shift_code = u.shift_code) as calc_oper_dt,
(select sum((least(f.end_date, :p_downtime_calc_dt, d.end_dt, s.end_dt) -
greatest(f.start_date, d.start_dt, s.start_dt))) * 24 * 60 * 60 *1000
from shift s
join d
on s.company = d.company
where d.unit_id = u.unit_id
and f.fisc_pd = f.fisc_pd
and s.start_dt >= f.start_date
and s.end_dt < f.end_date
and d.start_dt <= s.end_dt
and d.end_dt > s.start_dt
and d.company = u.company
and s.shift_code = d.shift_code) as calc_maint_dt
from unit_dept_comp_main u
join f
on u.company = f.company
where u.company = :g_company
and f.start_date < sysdate
and f.end_date >=any
(select e.start_dt
from unit_downtime e
where e.company = u.company
and e.unit_id = u.unit_id
and e.downtime_calc_dt = :p_downtime_calc_dt)
call count cpu elapsed disk query current rows
Parse 1 0.01 0.01 0 0 0 0
Execute 1 110.31 114.34 1358 2033031 74796 35662
Fetch 0 0.00 0.00 0 0 0 0
total 2 110.32 114.35 1358 2033031 74796 35662
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 32
Rows Row Source Operation
35662 TEMP TABLE TRANSFORMATION (cr=15420 pr=909 pw=4 time=1416590 us)
1 LOAD AS SELECT (cr=14243 pr=737 pw=4 time=897903 us)
797 HASH GROUP BY (cr=14243 pr=737 pw=0 time=863230 us)
834 VIEW (cr=14243 pr=737 pw=0 time=860507 us)
834 SORT GROUP BY (cr=14243 pr=737 pw=0 time=858836 us)
1006 NESTED LOOPS OUTER (cr=14243 pr=737 pw=0 time=2222986 us)
1006 NESTED LOOPS OUTER (cr=13235 pr=733 pw=0 time=2195945 us)
1006 NESTED LOOPS (cr=11221 pr=626 pw=0 time=879284 us)
839 HASH JOIN (cr=1312 pr=411 pw=0 time=30725 us)
801 TABLE ACCESS BY INDEX ROWID UNIT_DOWNTIME (cr=745 pr=0 pw=0 time=4057 us)
801 INDEX RANGE SCAN UNIT_DOWNTIME_DMWIX3 (cr=6 pr=0 pw=0 time=844 us)(object id 98246)
34925 TABLE ACCESS FULL UNIT_DOWNTIME (cr=567 pr=411 pw=0 time=503836 us)
1006 TABLE ACCESS BY INDEX ROWID UNIT_DOWNTIME (cr=9909 pr=215 pw=0 time=316547 us)
9992 INDEX RANGE SCAN UNIT_DOWNTIME_IX1 (cr=904 pr=177 pw=0 time=244588 us)(object id 13200)
1006 TABLE ACCESS BY INDEX ROWID O_WO (cr=2014 pr=107 pw=0 time=332243 us)
1006 INDEX UNIQUE SCAN O_WO_IX0 (cr=1008 pr=59 pw=0 time=170868 us)(object id 12787)
1006 TABLE ACCESS BY INDEX ROWID LOC_MAINT (cr=1008 pr=4 pw=0 time=38706 us)
1006 INDEX UNIQUE SCAN LOC_MAINT_IX0 (cr=2 pr=1 pw=0 time=8583 us)(object id 12574)
35662 HASH JOIN RIGHT SEMI (cr=1177 pr=172 pw=0 time=411534 us)
801 TABLE ACCESS BY INDEX ROWID UNIT_DOWNTIME (cr=745 pr=0 pw=0 time=4055 us)
801 INDEX RANGE SCAN UNIT_DOWNTIME_DMWIX3 (cr=6 pr=0 pw=0 time=843 us)(object id 98246)
879375 HASH JOIN (cr=432 pr=172 pw=0 time=896062 us)
125 VIEW (cr=1 pr=0 pw=0 time=1419 us)
156 WINDOW SORT (cr=1 pr=0 pw=0 time=947 us)
156 INDEX RANGE SCAN FISCAL_CAL_IX1 (cr=1 pr=0 pw=0 time=170 us)(object id 12304)
7035 TABLE ACCESS FULL UNIT_DEPT_COMP_MAIN (cr=431 pr=172 pw=0 time=458753 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
control file sequential read 5 0.00 0.02
db file sequential read 780 0.04 2.11
db file scattered read 67 0.01 0.16
direct path write temp 1 0.00 0.00
log file switch completion 3 0.08 0.15
SQL*Net message to client 1 0.00 0.00
virtual circuit status 17 29.99 496.46
SQL*Net message from client 1 496.46 496.46
-----
From the C# program calling the PL/SQL procedure (about three hours):
INSERT INTO PD_DWN_STAT (COMPANY, UNIT_ID, PERIOD, OPER_DT, MAINT_DT) WITH D
AS (SELECT C.COMPANY, C.UNIT_ID, MIN(C.START_DT) AS START_DT,
MIN(C.SHIFT_CODE) AS SHIFT_CODE, C.END_DT FROM (SELECT A.COMPANY, A.UNIT_ID,
A.START_DT, G.SHIFT_CODE AS SHIFT_CODE, MAX(COALESCE(B.END_DT, SYSDATE))
AS END_DT FROM UNIT_DOWNTIME A JOIN UNIT_DOWNTIME B ON A.COMPANY =
B.COMPANY AND A.UNIT_ID = B.UNIT_ID LEFT OUTER JOIN O_WO O ON O.COMPANY =
A.COMPANY AND O.WO_NO = A.WO_NO LEFT OUTER JOIN LOC_MAINT G ON G.COMPANY =
O.COMPANY AND G.LOCATION = O.LOCATION WHERE A.START_DT >= B.START_DT AND
A.START_DT < COALESCE(A.END_DT, SYSDATE) AND B.START_DT < COALESCE(B.END_DT,
SYSDATE) AND B.START_DT >=ANY (SELECT Z.START_DT FROM UNIT_DOWNTIME Z
WHERE Z.DOWNTIME_CALC_DT = :B1 AND Z.UNIT_ID = B.UNIT_ID) GROUP BY
A.COMPANY, A.UNIT_ID, A.START_DT, G.SHIFT_CODE) C GROUP BY C.COMPANY,
C.UNIT_ID, C.END_DT), F AS (SELECT COMPANY, FISC_PD, START_DATE,
LEAD(START_DATE) OVER (PARTITION BY COMPANY ORDER BY FISC_PD) AS END_DATE
FROM FISCAL_CAL) SELECT U.COMPANY, U.UNIT_ID, F.FISC_PD, (SELECT
SUM((LEAST(F.END_DATE, :B1 , D.END_DT, S.END_DT) - GREATEST(F.START_DATE,
D.START_DT, S.START_DT))) * 24 * 60 * 60 *1000 FROM SHIFT S JOIN D ON
S.COMPANY = D.COMPANY WHERE D.UNIT_ID = U.UNIT_ID AND S.START_DT >=
F.START_DATE AND S.END_DT < F.END_DATE AND D.START_DT <= S.END_DT AND
D.END_DT > S.START_DT AND S.COMPANY = U.COMPANY AND S.SHIFT_CODE =
U.SHIFT_CODE) AS CALC_OPER_DT, (SELECT SUM((LEAST(F.END_DATE, :B1 ,
D.END_DT, S.END_DT) - GREATEST(F.START_DATE, D.START_DT, S.START_DT))) * 24
* 60 * 60 *1000 FROM SHIFT S JOIN D ON S.COMPANY = D.COMPANY WHERE
D.UNIT_ID = U.UNIT_ID AND F.FISC_PD = F.FISC_PD AND S.START_DT >=
F.START_DATE AND S.END_DT < F.END_DATE AND D.START_DT <= S.END_DT AND
D.END_DT > S.START_DT AND D.COMPANY = U.COMPANY AND S.SHIFT_CODE =
D.SHIFT_CODE) AS CALC_MAINT_DT FROM UNIT_DEPT_COMP_MAIN U JOIN F ON
U.COMPANY = F.COMPANY WHERE U.COMPANY = :B2 AND F.START_DATE < SYSDATE AND
F.END_DATE >=ANY (SELECT E.START_DT FROM UNIT_DOWNTIME E WHERE E.COMPANY =
U.COMPANY AND E.UNIT_ID = U.UNIT_ID AND E.DOWNTIME_CALC_DT = :B1 )
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 7566.98 1870.53 15834828 137765256 75235 35662
Fetch 0 0.00 0.00 0 0 0 0
total 2 7566.98 1870.54 15834828 137765256 75235 35662
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 32 (recursive depth: 1)
Rows Row Source Operation
35662 HASH JOIN RIGHT SEMI (cr=1177 pr=391 pw=0 time=521648 us)
801 TABLE ACCESS BY INDEX ROWID UNIT_DOWNTIME (cr=745 pr=0 pw=0 time=4842 us)
801 INDEX RANGE SCAN UNIT_DOWNTIME_DMWIX3 (cr=6 pr=0 pw=0 time=829 us)(object id 98246)
879375 HASH JOIN (cr=432 pr=391 pw=0 time=1778117 us)
125 VIEW (cr=1 pr=0 pw=0 time=2041 us)
156 WINDOW SORT (cr=1 pr=0 pw=0 time=1812 us)
156 FILTER (cr=1 pr=0 pw=0 time=493 us)
156 INDEX RANGE SCAN FISCAL_CAL_IX1 (cr=1 pr=0 pw=0 time=332 us)(object id 12304)
7035 TABLE ACCESS FULL UNIT_DEPT_COMP_MAIN (cr=431 pr=391 pw=0 time=734863 us)
-----
I tried the DistribTx=0 option, which had the strange effect of causing the update to blow up with an "ORA-28115: policy with check option violation". The table does indeed have a DBMS_RLS policy on it, but why DistribTx=0 started making it fail I don't know. Anyway, that led me to use an Oracle user that wasn't subject to the policy rules, and the query was fast again. In other words, trying DistribTx=0 was helpful in that it led me to discover that something in the DBMS_RLS setup was making the optimizer go mad. It wasn't OLEDB per se that was the problem; using the right (wrong) Oracle user, I was able to make the performance stink in SQL Developer too.
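For anyone chasing a similar problem: since the culprit turned out to be a DBMS_RLS (VPD) policy, the data dictionary can show which policies are attached to the table and what predicate was actually appended to each cached cursor. A sketch, assuming DBA privileges; the table name is taken from this thread:

```sql
-- Which RLS policies are attached to the table?
select * from dba_policies
where  object_name = 'PD_DWN_STAT';

-- What predicate did each policy actually add to the cached cursors?
-- Comparing the predicate across sessions/users shows why the same
-- statement can get wildly different plans.
select sql_id, child_number, object_name, policy, predicate
from   v$vpd_policy;
```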
Similar Messages
-
This is a newly built system and I'm trying to play LOTR: Return of the King by EA; the game is about a year old. During play, the game starts to stutter, then the monitor shows "No Signal, going to sleep." I can't get an image back and have to reboot. I don't have a clue what's wrong. I should be able to play this game with no issues, as I built a good system and I'm playing an older game. I believe I should be able to play today's games with no issues. Please tell me what you think.
I noticed that first the game's sound and video go out of sync. Then the video starts to skip. Then the screen goes off. Thanks for your help.
My specs are:
AMD Athlon 64 3000 Venice, 939 socket
MSI K8N Neo4-f mobo
Gigabyte ATI Radeon X700 Pro, 128mb vid card
Corsair 1024MB memory on 1 DIMM
Maxtor 100gb SATA Hd
Ultra 500W power supply
Pioneer DVDRW+/-
You can find your +12V rail's power easily: open up your case and have a look at the sticker with a table on it. Find the number of amps (A) under the +12V column in the table (mine is 28A on +12V).
Since people usually start with replying to your posts with questions about the Amps on your +12V rail, you better set the characteristics of your PSU in your sig (like I did).
What type of Corsair memory do you have? I had to set my Vdimm (the voltage on the memory modules) to 2.75V to get a stable system, as recommended by Corsair, though it depends on your module type. If you know your module type, you can also set your memory timings to Corsair's recommended settings (like mine: 2.5-3-6-3 1T @ 2.75V for my 2x CMX512-3200C2PT).
Try ATitool to find out your GPU temp (under Overclocking - Hardware Monitoring Graphs, I think). My Asus AX800Pro 256MB can get up to 70°C, and I have no problems.
This might be stating the obvious, but are you using the latest ATi or Omega drivers? (Because you should.) -
Performance degradation encountered while running BOE in a clustered setup
Problem Statement:
We have a clustered BOE setup in Production with 2 CMS servers (named boe01 and boe02). The Mantenix application (a standard J2EE application in a clustered setup) points to these BOE services, hosted on virtual machines, to generate reports. As soon as the BOE services on both boe01 and boe02 are up and running, performance degradation is observed, i.e. response times vary from 7 seconds to 30 seconds.
The same setup works fine when the BOE services on boe02 are turned off, i.e. only boe01 is up and running. No drastic variation is noticed.
BOE Details : SAP BusinessObjects environment XIR2 SP3 running on Windows 2003 Servers.(Virtual machines)
Possible Problem Areas as per our analysis
1) Node 2 Virtual Machine Issue:
Since this machine is part of the Production infrastructure, problem-assessment testing on it is not possible.
2) BOE Configuration Issue
A comparison report was run to check the build between BOE 01 and BOE 02 - the support team has confirmed there are no major installation differences apart from a minor operating system setting. The question being: is there some configuration/setting that we are missing?
3) Possible BOE Cluster Issue:
Tests in staging environment ( with a similar clustered BOE setup ) have proved inconclusive.
We require your help in
- Root cause Analysis for this problem.
- Any troubleshooting action henceforth.
Another observation from our Weblogic support engineers for the above set up which may or may not be related to the problem is mentioned below.
When the services on BOE_2 are shutdown and we try to fetch a particular report from BOE_1 (Which is running), the following WARNING/ERROR comes up:-
07/09/2011 10:22:26 AM EST> <WARN> <com.crystaldecisions.celib.trace.d.if(Unknown Source)> - getUnmanagedService(): svc=BlockingReportSourceRepository,spec=aps<BOE_1> ,cluster:@BOE_OLTP, kind:cacheserver, name:<BOE_2>.cacheserver.cacheserver, queryString:null, m_replaceable:true,uri=osca:iiop://<BOE_1>;SI_SESSIONID=299466JqxiPSPUTef8huXO
com.crystaldecisions.thirdparty.org.omg.CORBA.TRANSIENT: attempt to establish connection failed: java.net.ConnectException: Connection timed out: connect minor code: 0x4f4f0001 completed: No
at com.crystaldecisions.thirdparty.com.ooc.OCI.IIOP.Connector_impl.connect(Connector_impl.java:150)
at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClient.createTransport(GIOPClient.java:233)
at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClientWorkersPool.next(GIOPClientWorkersPool.java:122)
at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClient.getWorker(GIOPClient.java:105)
at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClient.startDowncall(GIOPClient.java:409)
at com.crystaldecisions.thirdparty.com.ooc.OB.Downcall.preMarshalBase(Downcall.java:181)
at com.crystaldecisions.thirdparty.com.ooc.OB.Downcall.preMarshal(Downcall.java:298)
at com.crystaldecisions.thirdparty.com.ooc.OB.DowncallStub.preMarshal(DowncallStub.java:250)
at com.crystaldecisions.thirdparty.com.ooc.OB.DowncallStub.setupRequest(DowncallStub.java:530)
at com.crystaldecisions.thirdparty.com.ooc.CORBA.Delegate.request(Delegate.java:556)
at com.crystaldecisions.thirdparty.org.omg.CORBA.portable.ObjectImpl._request(ObjectImpl.java:118)
at com.crystaldecisions.enterprise.ocaframework.idl.ImplServ._OSCAFactoryStub.getServices(_OSCAFactoryStub.java:806)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.do(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.getUnmanagedService(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.AbstractStubHelper.getService(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.e.do(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.try(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.p.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.getManagedService(Unknown Source)
at com.crystaldecisions.sdk.occa.managedreports.ps.internal.a$a.getService(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.e.do(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.try(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.p.a(Unknown Source)
We see the above warning coming 2 or 3 times before the request is processed and then we see the report. We have checked our config's for the cluster but didn't find anything concrete.
Is this a normal behavior of the software or can we optimize it?
Any assistance that you can provide would be great.
Rahul,
I have exactly the same problem running BO 3.1 SP3 in a 2 machine cluster on AIX. Exact same full install on both machines. When I take down one of the machines the performance is much better.
An example of the problem: when I run the command ./ccm.sh -display -username administrator -password xxx on either box while both are up and running, I sometimes receive a timeout error (over 15 minutes).
If I run SQL*Plus directly on the boxes against the CMS DB, the response is instant. tnsping, of course, shows no problems.
When I bring down one of the machines and run ./ccm.sh -display again, it brings back results in less than a minute...
I am baffled as to the problem so was wondering if you found anything from your end
Cheers
Chris -
Performance issue on Date filter
In my where condition I wanted a date filter on sale date. I want all sales after 9/1/2014.
CASE1
If I explicitly use date filter like
SaleDate > '2014-09-01 00:00:00:000' I am getting the result in 2 seconds.
CASE2
Since I may need to use this data value again, I created a date table with single column "date" and loaded the value '2014-09-01 00:00:00:000' .
So now my where is like
SaleDate > (Select date from dateTable)
When I run this, the result does not show up even after 10 minutes. Both date types are datetime. I am baffled. Why is this query not coming up with the result?
As mentioned by Erland, these two situations are very different to the optimizer. With a literal, the optimizer can properly estimate the number of qualifying rows and adapt the query plan appropriately. With a scalar subquery, the value is unknown at compile time, and the optimizer will use heuristics to accommodate any value. In this case, the selection of all rows more recent than September 1st 2014 is probably a small percentage of the table.
I can't explain why the optimizer or engine goes awry, because the subquery's result is a scalar and shouldn't cause such a long runtime. If you are unlucky, the optimizer expanded the query and actually joins the two tables. That would make the indexes
on table dateTable relevant, as well as distribution and cardinality of dateTable's row values. If you want to know, you would have to inspect the (actual) query plan.
In general, I don't think your approach is a smart thing to do. I don't know why you want to have your selection date in a table (as opposed to a parameter of a stored procedure), but if you want to stick to it, maybe you should break the query up into something
like this. The optimizer would still have to use heuristics (instead of more reliable estimates), but some unintended consequences could disappear.
Declare @min_date datetime
Set @min_date = (SELECT date FROM dateTable)
SELECT ...
FROM ...
WHERE SaleDate > @min_date
If you use a parameter (or an appropriate query hint), you will probably get performance close to your first case.
Gert-Jan -
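If the local-variable workaround above still picks a poor plan, one commonly suggested variant (my suggestion, not something Gert-Jan tested here) is to add OPTION (RECOMPILE), which lets SQL Server sniff the variable's actual value when it compiles the statement:

```sql
DECLARE @min_date datetime;
SET @min_date = (SELECT date FROM dateTable);

-- With RECOMPILE, the statement is compiled after @min_date is known,
-- so the optimizer can estimate the qualifying rows as in CASE1.
SELECT *
FROM   Sales          -- hypothetical table name; the thread only shows SaleDate
WHERE  SaleDate > @min_date
OPTION (RECOMPILE);
```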
Performance problems with XMLTABLE and XMLQUERY involving relational data
Hello-
Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems tyring to do basic things such as:
* Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
* Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) passing a batch of records , or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.
<Note>Long post, sorry.</Note>
First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
• Converting legacy tabular data into XML records; and
• Performing code table lookups for coded values in XML records.
There are three things I want to accomplish with this post:
• Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
• Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
• Highlight remaining performance issues in hopes that we can solve them
What we are trying to accomplish:
• Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
• Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
• Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
• Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
What we did and why:
• Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
• Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
• Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
• Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
• Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
• Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
• Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
What issues remain?
We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
-- The main record table:
create table RECORDS (
SSN varchar2(20),
XMLREC sys.xmltype
)
xmltype column XMLREC store as binary xml;
create index records_ssn on records(ssn);
-- A dozen code tables represented by one like this:
create table CODES (
CODE varchar2(4),
DESCRIPTION varchar2(500)
);
create index codes_code on codes(code);
-- Some XML records with coded values (the real records are much more complex of course):
-- I think this took about a minute or two
DECLARE
ssn varchar2(20);
xmlrec xmltype;
i integer;
BEGIN
xmlrec := xmltype('<?xml version="1.0"?>
<Root>
<Id>123456789</Id>
<Element>
<Subelement1><Code>11</Code></Subelement1>
<Subelement2><Code>21</Code></Subelement2>
<Subelement3><Code>31</Code></Subelement3>
</Element>
<Element>
<Subelement1><Code>11</Code></Subelement1>
<Subelement2><Code>21</Code></Subelement2>
<Subelement3><Code>31</Code></Subelement3>
</Element>
<Element>
<Subelement1><Code>11</Code></Subelement1>
<Subelement2><Code>21</Code></Subelement2>
<Subelement3><Code>31</Code></Subelement3>
</Element>
</Root>');
for i IN 1..100000 loop
insert into records(ssn, xmlrec) values (i, xmlrec);
end loop;
commit;
END;
-- Some code data like this (ignoring date ranges on codes):
DECLARE
description varchar2(100);
i integer;
BEGIN
description := 'This is the code description ';
for i IN 1..3000 loop
insert into codes(code, description) values (to_char(i), description);
end loop;
commit;
end;
-- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
-- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
-- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
-- Note we are accessing a single XML record based on SSN
-- Note also we are reusing the one test code table multiple times for convenience of this test
select xmlquery('
for $r in Root
return
<Root>
<Id>123456789</Id>
{for $e in $r/Element
return
<Element>
<Subelement1>
{$e/Subelement1/Code}
<Description>
{ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
</Description>
</Subelement1>
<Subelement2>
{$e/Subelement2/Code}
<Description>
{ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
</Description>
</Subelement2>
<Subelement3>
{$e/Subelement3/Code}
<Description>
{ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
</Description>
</Subelement3>
</Element>
</Root>
' passing xmlrec returning content)
from records
where ssn = '10000';
The plan shows the nested loop access that slows things down.
By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
Operation Object
|SELECT STATEMENT ()
| SORT (AGGREGATE)
| NESTED LOOPS (SEMI)
| TABLE ACCESS (FULL) CODES
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| NESTED LOOPS (SEMI)
| TABLE ACCESS (FULL) CODES
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| NESTED LOOPS (SEMI)
| TABLE ACCESS (FULL) CODES
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| XPATH EVALUATION ()
| TABLE ACCESS (BY INDEX ROWID) RECORDS
| INDEX (RANGE SCAN) RECORDS_SSN
With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
-- Add an xmlindex. Takes about 2.5 minutes
create index records_record_xml ON records(xmlrec)
indextype IS xdb.xmlindex;
Operation Object
|SELECT STATEMENT ()
| SORT (GROUP BY)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| TABLE ACCESS (FULL) CODES
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (GROUP BY)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| TABLE ACCESS (FULL) CODES
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (GROUP BY)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| TABLE ACCESS (FULL) CODES
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| TABLE ACCESS (BY INDEX ROWID) RECORDS
| INDEX (RANGE SCAN) RECORDS_SSN
Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
I've done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark's post (thanks!), should I be joining and constraining the code tables in the SQL WHERE clause too? That's going to make the query much more complicated, but right now we're more concerned about performance than complexity. -
Keyboard producing extra letters when typing and impacting system performance
I’m having some issues with my Mid-2010 MacBook Pro 15. The Apple tech guy was baffled at the symptoms I was experiencing; he was unable to help (the MacBook Pro itself has not been inspected) so I thought I would see if anyone else had a view as to what the issue could be. I’m happy to fix it myself if possible (I’m quite comfortable with doing repairs/upgrades/general maintenance on Macs and PCs at large) and would rather avoid a hefty fee from the Apple store or other repair services if it can be avoided (unless the error is due to faulty hardware where Apple will fix free of charge).
The issue lies with using the internal keyboard. Most of the keys on the keyboard work just fine, a couple don't; however, when I use the internal keyboard it significantly slows down animations (i.e. clicking on the Applications icon on the dock will take a good 5-7 seconds for it to finish the pop-up). It hasn't had any liquid spillage or suffered any falls, so I'm not quite sure how this problem came about. These are some of the issues/symptoms I'm having which may help in a diagnosis:
-Shift key is stuck and the Mac will boot into Safe Mode on its own. Holding down the Option key upon pressing the power button until the spinning gear appears will allow the Mac to boot normally. I can use an external keyboard and the Mac runs just fine. However, if I use the internal keyboard....
-Animations run slowly and it cannot load graphics programs such as iMovie. (An error message pops up, but I do not have this laptop with me at time of posting to find what it is. If you need to know what the error message is, let me know and I will post it here when I get home.) Even though I'm in 'normal mode', for lack of a better term, using the internal keyboard will result in the Mac performing like it is in 'safe mode' until…
-Pressing any key on an external keyboard. The slow animation issue is resolved (i.e. clicking on the Application icon on dock pops up in less than a second as per normal) and the Mac will return to running as normal and everything is very responsive. iMovie will now load correctly (without the error message).
-Some letters when pressed on the internal keyboard will type another letter directly after it (e.g. pressing 'q' will type 'QT' (in caps due to the Shift key being stuck); likewise pressing 't' will display 'TQ'. This also happens to 'a', which will type 'AB', and 'b', which will display 'BA'). In total about 6 or 7 keys around the keyboard are affected, where pressing one key will result in it being typed in caps AND a second corresponding letter. Pressing that second corresponding letter will also type in caps and its "linked" key.
-Selecting individual files with the track pad (after pressing a key on the internal keyboard) will pick multiple items within a folder as if you were clicking and dragging to select multiple files. If I click on one file and move the cursor (without holding the click or 'touch to drag') to a file below, it will select everything between the two selections. Pressing a key on the external keyboard returns everything, including the track pad, back to normal.
I've performed a clean install of Snow Leopard, which took a little longer than usual. The install disc took some time to actually load into the installation screen, roughly 5-7 minutes longer. The external keyboard doesn't appear to work on boot up until I get to the log in screen. I've cleaned inside the Mac from what minimal dust was there. Finally, I've been unable to reset the PRAM or SMC as the laptop boots into Safe Mode due to the Shift key being stuck and the external keyboard not being recognised until I reach the log in screen to enter my password. Other keys don't appear to be working (i.e. holding down C to boot from disc and other key commands on start-up do not work); the only key on the internal keyboard that appears to work on boot is Option, which allows me to bypass the 'auto-safe mode' boot.
I'm quite sure this is hardware related and if feasible I will fix it myself; if not, I'm happy to use an external keyboard (except when needing to perform boot commands, which is where I'll run into some issues). System specs are as follows:
MacBook Pro 15 - Mid 2010 (April)
2.4GHz Intel Core i5
4GB RAM
160GB hard drive
NVIDIA GeForce GT330M 256MB
I’d appreciate any possible answers you have. Apologies if this has been posted elsewhere, I’m quite sure this is a unique issue so I’d be surprised to learn someone else has had this problem too after all the hours of research I’ve done!
Thanks.
lol, I was going to add "as long as it's not doing that all the time" ...but was hoping for the quick fix. You might want to check your Activity Monitor and see which program or background process is causing the activity ...have you installed anything lately that coincided with this problem? You also might want to try booting into Safe Mode to see if things still persist.
-
Extremely slow performance with Radeon HD 7870
Hi,
I am using a number of Adobe programs on my new Windows 8 64 bit system, with 16 gigs of ram, an Intel Core i5 2.67ghz, and AMD Radeon HD 7870 2 gig. All the programs (including After Effects and Illustrator) work very well with the exception of Photoshop CS6 64 bit, which has extremely slow performance with Use Graphics Processor enabled in my preferences (which Photoshop selects by default). If I turn off Use Graphics Processor, the slow performance vanishes. If I change Drawing Mode to Basic, there might be a slightly detectable improvement over Normal and Advanced, but it's still horribly slow. The refresh rate seems to be just a few frames per second. Everything is slow: brushes, zooming, panning, everything.
I've tried changing the settings I've seen suggested elsewhere: I switched Cache levels to 2, history states to 10, and tile size to 128k. No effect. Photoshop is currently at the default of using 60% of available ram. Efficiency has remained at 100% throughout all my tests. Also, this slow performance has affected me from the moment I installed Photoshop; it didn't crop up after previous good performance. The problem exists regardless of the size or number of documents I have open. Performance is still terrible even when I create a 500 x 500 pixel blank new canvas and try a simple task like drawing with the brush.
Photoshop is fully up to date, and so are my graphics drivers (Catalyst version 13.1). Any help would be greatly appreciated; at the moment performance is so bad in Photoshop that it's unusable with graphics acceleration enabled. Thanks in advance for replying.
Photoshop System Info:
Adobe Photoshop Version: 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00) x64
Operating System: Windows 8 64-bit
Version: 6.2
System architecture: Intel CPU Family:6, Model:14, Stepping:5 with MMX, SSE Integer, SSE FP, SSE2, SSE3, SSE4.1, SSE4.2
Physical processor count: 4
Processor speed: 2665 MHz
Built-in memory: 16379 MB
Free memory: 13443 MB
Memory available to Photoshop: 14697 MB
Memory used by Photoshop: 60 %
Image tile size: 128K
Image cache levels: 2
OpenGL Drawing: Enabled.
OpenGL Drawing Mode: Advanced
OpenGL Allow Normal Mode: True.
OpenGL Allow Advanced Mode: True.
OpenGL Allow Old GPUs: Not Detected.
OpenCL Version: 1.2 AMD-APP (1084.4)
OpenGL Version: 3.0
Video Rect Texture Size: 16384
OpenGL Memory: 2048 MB
Video Card Vendor: ATI Technologies Inc.
Video Card Renderer: AMD Radeon HD 7800 Series
Display: 1
Display Bounds: top=0, left=0, bottom=1080, right=1920
Video Card Number: 2
Video Card: AMD Radeon HD 7800 Series
Driver Version:
Driver Date:
Video Card Driver: aticfx64.dll,aticfx64.dll,aticfx64.dll,aticfx32,aticfx32,aticfx32,atiumd64.dll,atidxx64.dll,atidxx64.dll,atiumdag,atidxx32,atidxx32,atiumdva,atiumd6a.cap,atitmm64.dll
Video Mode: 1920 x 1080 x 4294967296 colors
Video Card Caption: AMD Radeon HD 7800 Series
Video Card Memory: 2048 MB
Video Card Number: 1
Video Card: Microsoft Basic Render Driver
Driver Version: 9.12.0.0
Driver Date: 20121219000000.000000-000
Video Card Driver:
Video Mode:
Video Card Caption: Microsoft Basic Render Driver
Video Card Memory: 0 MB
Serial number: 90970078453021833509
Application folder: C:\Program Files\Adobe\Adobe Photoshop CS6 (64 Bit)\
Temporary file path: C:\Users\RAFFAE~1\AppData\Local\Temp\
Photoshop scratch has async I/O enabled
Scratch volume(s):
C:\, 931.5G, 534.8G free
Required Plug-ins folder: C:\Program Files\Adobe\Adobe Photoshop CS6 (64 Bit)\Required\
Primary Plug-ins folder: C:\Program Files\Adobe\Adobe Photoshop CS6 (64 Bit)\Plug-ins\
Additional Plug-ins folder: not set
Installed components:
ACE.dll ACE 2012/06/05-15:16:32 66.507768 66.507768
adbeape.dll Adobe APE 2012/01/25-10:04:55 66.1025012 66.1025012
AdobeLinguistic.dll Adobe Linguisitc Library 6.0.0
AdobeOwl.dll Adobe Owl 2012/09/10-12:31:21 5.0.4 79.517869
AdobePDFL.dll PDFL 2011/12/12-16:12:37 66.419471 66.419471
AdobePIP.dll Adobe Product Improvement Program 7.0.0.1686
AdobeXMP.dll Adobe XMP Core 2012/02/06-14:56:27 66.145661 66.145661
AdobeXMPFiles.dll Adobe XMP Files 2012/02/06-14:56:27 66.145661 66.145661
AdobeXMPScript.dll Adobe XMP Script 2012/02/06-14:56:27 66.145661 66.145661
adobe_caps.dll Adobe CAPS 6,0,29,0
AGM.dll AGM 2012/06/05-15:16:32 66.507768 66.507768
ahclient.dll AdobeHelp Dynamic Link Library 1,7,0,56
aif_core.dll AIF 3.0 62.490293
aif_ocl.dll AIF 3.0 62.490293
aif_ogl.dll AIF 3.0 62.490293
amtlib.dll AMTLib (64 Bit) 6.0.0.75 (BuildVersion: 6.0; BuildDate: Mon Jan 16 2012 18:00:00) 1.000000
ARE.dll ARE 2012/06/05-15:16:32 66.507768 66.507768
AXE8SharedExpat.dll AXE8SharedExpat 2011/12/16-15:10:49 66.26830 66.26830
AXEDOMCore.dll AXEDOMCore 2011/12/16-15:10:49 66.26830 66.26830
Bib.dll BIB 2012/06/05-15:16:32 66.507768 66.507768
BIBUtils.dll BIBUtils 2012/06/05-15:16:32 66.507768 66.507768
boost_date_time.dll DVA Product 6.0.0
boost_signals.dll DVA Product 6.0.0
boost_system.dll DVA Product 6.0.0
boost_threads.dll DVA Product 6.0.0
cg.dll NVIDIA Cg Runtime 3.0.00007
cgGL.dll NVIDIA Cg Runtime 3.0.00007
CIT.dll Adobe CIT 2.1.0.20577 2.1.0.20577
CoolType.dll CoolType 2012/06/05-15:16:32 66.507768 66.507768
data_flow.dll AIF 3.0 62.490293
dvaaudiodevice.dll DVA Product 6.0.0
dvacore.dll DVA Product 6.0.0
dvamarshal.dll DVA Product 6.0.0
dvamediatypes.dll DVA Product 6.0.0
dvaplayer.dll DVA Product 6.0.0
dvatransport.dll DVA Product 6.0.0
dvaunittesting.dll DVA Product 6.0.0
dynamiclink.dll DVA Product 6.0.0
ExtendScript.dll ExtendScript 2011/12/14-15:08:46 66.490082 66.490082
FileInfo.dll Adobe XMP FileInfo 2012/01/17-15:11:19 66.145433 66.145433
filter_graph.dll AIF 3.0 62.490293
hydra_filters.dll AIF 3.0 62.490293
icucnv40.dll International Components for Unicode 2011/11/15-16:30:22 Build gtlib_3.0.16615
icudt40.dll International Components for Unicode 2011/11/15-16:30:22 Build gtlib_3.0.16615
image_compiler.dll AIF 3.0 62.490293
image_flow.dll AIF 3.0 62.490293
image_runtime.dll AIF 3.0 62.490293
JP2KLib.dll JP2KLib 2011/12/12-16:12:37 66.236923 66.236923
libifcoremd.dll Intel(r) Visual Fortran Compiler 10.0 (Update A)
libmmd.dll Intel(r) C Compiler, Intel(r) C++ Compiler, Intel(r) Fortran Compiler 12.0
LogSession.dll LogSession 2.1.2.1681
mediacoreif.dll DVA Product 6.0.0
MPS.dll MPS 2012/02/03-10:33:13 66.495174 66.495174
msvcm80.dll Microsoft® Visual Studio® 2005 8.00.50727.6910
msvcm90.dll Microsoft® Visual Studio® 2008 9.00.30729.1
msvcp100.dll Microsoft® Visual Studio® 2010 10.00.40219.1
msvcp80.dll Microsoft® Visual Studio® 2005 8.00.50727.6910
msvcp90.dll Microsoft® Visual Studio® 2008 9.00.30729.1
msvcr100.dll Microsoft® Visual Studio® 2010 10.00.40219.1
msvcr80.dll Microsoft® Visual Studio® 2005 8.00.50727.6910
msvcr90.dll Microsoft® Visual Studio® 2008 9.00.30729.1
pdfsettings.dll Adobe PDFSettings 1.04
Photoshop.dll Adobe Photoshop CS6 CS6
Plugin.dll Adobe Photoshop CS6 CS6
PlugPlug.dll Adobe(R) CSXS PlugPlug Standard Dll (64 bit) 3.0.0.383
PSArt.dll Adobe Photoshop CS6 CS6
PSViews.dll Adobe Photoshop CS6 CS6
SCCore.dll ScCore 2011/12/14-15:08:46 66.490082 66.490082
ScriptUIFlex.dll ScriptUIFlex 2011/12/14-15:08:46 66.490082 66.490082
svml_dispmd.dll Intel(r) C Compiler, Intel(r) C++ Compiler, Intel(r) Fortran Compiler 12.0
tbb.dll Intel(R) Threading Building Blocks for Windows 3, 0, 2010, 0406
tbbmalloc.dll Intel(R) Threading Building Blocks for Windows 3, 0, 2010, 0406
updaternotifications.dll Adobe Updater Notifications Library 6.0.0.24 (BuildVersion: 1.0; BuildDate: BUILDDATETIME) 6.0.0.24
WRServices.dll WRServices Friday January 27 2012 13:22:12 Build 0.17112 0.17112
Required plug-ins:
3D Studio 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Accented Edges 13.0
Adaptive Wide Angle 13.0
Angled Strokes 13.0
Average 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Bas Relief 13.0
BMP 13.0
Camera Raw 7.3
Chalk & Charcoal 13.0
Charcoal 13.0
Chrome 13.0
Cineon 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Clouds 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Collada 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Color Halftone 13.0
Colored Pencil 13.0
CompuServe GIF 13.0
Conté Crayon 13.0
Craquelure 13.0
Crop and Straighten Photos 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Crop and Straighten Photos Filter 13.0
Crosshatch 13.0
Crystallize 13.0
Cutout 13.0
Dark Strokes 13.0
De-Interlace 13.0
Dicom 13.0
Difference Clouds 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Diffuse Glow 13.0
Displace 13.0
Dry Brush 13.0
Eazel Acquire 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Embed Watermark 4.0
Entropy 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Extrude 13.0
FastCore Routines 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Fibers 13.0
Film Grain 13.0
Filter Gallery 13.0
Flash 3D 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Fresco 13.0
Glass 13.0
Glowing Edges 13.0
Google Earth 4 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Grain 13.0
Graphic Pen 13.0
Halftone Pattern 13.0
HDRMergeUI 13.0
IFF Format 13.0
Ink Outlines 13.0
JPEG 2000 13.0
Kurtosis 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Lens Blur 13.0
Lens Correction 13.0
Lens Flare 13.0
Liquify 13.0
Matlab Operation 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Maximum 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Mean 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Measurement Core 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Median 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Mezzotint 13.0
Minimum 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
MMXCore Routines 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Mosaic Tiles 13.0
Multiprocessor Support 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Neon Glow 13.0
Note Paper 13.0
NTSC Colors 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Ocean Ripple 13.0
Oil Paint 13.0
OpenEXR 13.0
Paint Daubs 13.0
Palette Knife 13.0
Patchwork 13.0
Paths to Illustrator 13.0
PCX 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Photocopy 13.0
Photoshop 3D Engine 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Picture Package Filter 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Pinch 13.0
Pixar 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Plaster 13.0
Plastic Wrap 13.0
PNG 13.0
Pointillize 13.0
Polar Coordinates 13.0
Portable Bit Map 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Poster Edges 13.0
Radial Blur 13.0
Radiance 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Range 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Read Watermark 4.0
Reticulation 13.0
Ripple 13.0
Rough Pastels 13.0
Save for Web 13.0
ScriptingSupport 13.1.2
Shear 13.0
Skewness 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Smart Blur 13.0
Smudge Stick 13.0
Solarize 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Spatter 13.0
Spherize 13.0
Sponge 13.0
Sprayed Strokes 13.0
Stained Glass 13.0
Stamp 13.0
Standard Deviation 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
STL 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Sumi-e 13.0
Summation 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Targa 13.0
Texturizer 13.0
Tiles 13.0
Torn Edges 13.0
Twirl 13.0
Underpainting 13.0
Vanishing Point 13.0
Variance 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Variations 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Water Paper 13.0
Watercolor 13.0
Wave 13.0
Wavefront|OBJ 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
WIA Support 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Wind 13.0
Wireless Bitmap 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
ZigZag 13.0
Optional and third party plug-ins:
DAZ Studio 3D Bridge 12.0
DazUpdateScene 12.0
Plug-ins that failed to load: NONE
Flash:
Mini Bridge
Kuler
Installed TWAIN devices: NONE
Great news!
I followed your suggestion, Noel, and uninstalled my drivers with the Catalyst Uninstall Utility (which I hadn't used before), rebooted, reinstalled Catalyst 13.2 Beta 7, rebooted, deleted the PS Prefs, started PS, and now the performance is radically improved!
It baffles me why I would need to do a clean driver uninstall in a new system environment that has only ever had this video card, and is only a few months old, but that action seems to be what's improved the situation. Note that because I deleted the PS preferences, the good performance now occurs with the default Use Graphics Processor enabled, and Drawing Mode set to Advanced (with every feature in the Advanced dialog checked except 30bit display).
Having no reference point, I can't tell for sure if performance is as good as it theoretically ought to be for my system specs, but using the standard 13px brush is only slightly laggy behind the cursor, even if I become reckless and squiggle on the canvas violently. It wasn't just sluggish response before: Photoshop seemed to be in pain trying to draw the line segments as it laboured to catch up to the cursor. That is pretty much gone now.
Much more tellingly, zooming and panning the canvas is like it ought to be, responsive and clean, more or less like it was for me in CS4 with my old Nvidia 9600GT. The poor performance with zooming and panning was the most worrying aspect of the issue. It's a great relief to see these working more properly now.
I have to confess I'm a little embarrassed to have dragged you through all this hassle only to discover that something as rudimentary as properly cleaning out the drivers would be the apparent solution. I never would have imagined that doing that, with such a new and stable system, would make any difference, since I don't have a legacy of older drivers dotting my hard drive. In any event, I'm really grateful that you guys took the time to try to help me.
In the interest of completeness, here's the relevant portion of my system info after doing the reinstall steps I mentioned above:
Adobe Photoshop Version: 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00) x64
Operating System: Windows 8 64-bit
Version: 6.2
System architecture: Intel CPU Family:6, Model:14, Stepping:5 with MMX, SSE Integer, SSE FP, SSE2, SSE3, SSE4.1, SSE4.2
Physical processor count: 4
Processor speed: 2665 MHz
Built-in memory: 16379 MB
Free memory: 13680 MB
Memory available to Photoshop: 14695 MB
Memory used by Photoshop: 60 %
Image tile size: 1024K
Image cache levels: 4
OpenGL Drawing: Enabled.
OpenGL Drawing Mode: Advanced
OpenGL Allow Normal Mode: True.
OpenGL Allow Advanced Mode: True.
OpenGL Allow Old GPUs: Not Detected.
OpenCL Version: 3.0
OpenGL Version: 3.0
Video Rect Texture Size: 16384
OpenGL Memory: 2048 MB
Video Card Vendor: ATI Technologies Inc.
Video Card Renderer: AMD Radeon HD 7800 Series
Display: 1
Display Bounds: top=0, left=0, bottom=1080, right=1920
Video Card Number: 2
Video Card: AMD Radeon HD 7800 Series
Driver Version:
Driver Date:
Video Card Driver: aticfx64.dll,aticfx64.dll,aticfx64.dll,aticfx32,aticfx32,aticfx32,atiumd64.dll,atidxx64.dll,atidxx64.dll,atiumdag,atidxx32,atidxx32,atiumdva,atiumd6a.cap,atitmm64.dll
Video Mode: 1920 x 1080 x 4294967296 colors
Video Card Caption: AMD Radeon HD 7800 Series
Video Card Memory: 2048 MB
Video Card Number: 1
Video Card: Microsoft Basic Render Driver
Driver Version: 12.100.17.0
Driver Date: 20130226000000.000000-000
Video Card Driver:
Video Mode:
Video Card Caption: Microsoft Basic Render Driver
Video Card Memory: 0 MB
As you can see, the Microsoft Basic Render Driver still appears as a Video Card in the list. This presumably has something to do with Windows 8, and I really don't know if its presence here is still a sign that my card is not being used to its optimal capability. I have a hunch that the slight lag that I am still experiencing with the brush when Use Graphics Processor is on -- totally absent with the acceleration turned off -- implies that Photoshop is still unable to take maximum advantage of my card. I would assume that the brush tool should be more responsive with acceleration on than off, in a rig without any issues, but that's just assumption on my part. If you turn off Use Graphics Processor on your systems, is the brush tool more or less responsive?
But again, panning the canvas with the hand tool is now extremely responsive, without any screen tearing or visible lag, and that's a massive improvement over the total meltdown of performance I was suffering from before.
Thanks again very much for your help. I'm grateful to all of you who took the time to chime in. -
Inconsistent datastore performance
Hi all
Hope someone can help, ran into a bit of a wall here. We're running ESXi 5.1 on an HP Proliant DL120 G7 and Intel Cougar Point 6 SATA controller, with 4 identical WD 2TB drives.
We've got 4 datastores configured, one on each of the 4 WD drives.
The problem is that uploading say a 2GB file via vSphere's datastore browser takes incredibly long (at least 20 minutes) on 3 of the 4 datastores, with the odd one out completing in under 2 minutes. I cannot for the life of me figure out what exactly is different between the datastores, especially considering that they're all using what should be identical backend hardware.
Just as a test, I'm seeing the same behaviour when I SSH to the ESXi host and copy something from one datastore to another - 3 of 4 datastores return write speeds of just over 9MB/s.
Reading up on the Cougar Point 6 controller though, apparently it has two 6 Gb/s ports and four 3 Gb/s ports, but there doesn't seem to be a lot of easily accessible info on it. I do not have the output immediately accessible, but all four disks are set at 3 Gb/s, so I doubt that could have anything to do with it.
Apparently there was/ is also a hardware bug with the B2 stepping version of this particular controller, though I have no idea how to identify whether this controller comes from the faulty batch.
What I'm more interested in at this stage is whether there's a recommended way of identifying where the bottleneck is via ESXi. I'm baffled as to why one datastore's performance would be great while the others' is not so much.
Any help appreciated.
This is probably more a question for Intel or an Intel forum instead of here, but here's the relevant output from esxcli hardware pci list:
000:000:1f.2
Address: 000:000:1f.2
Segment: 0x0000
Bus: 0x00
Slot: 0x1f
Function: 0x02
VMkernel Name: vmhba0
Vendor Name: Intel Corporation
Device Name: Cougar Point 6 port SATA AHCI Controller
Configured Owner: Unknown
Current Owner: VMkernel
Vendor ID: 0x8086
Device ID: 0x1c02
SubVendor ID: 0x103c
SubDevice ID: 0x330d
Device Class: 0x0106
Device Class Name: SATA controller
Programming Interface: 0x01
Revision ID: 0x05
Interrupt Line: 0x0a
IRQ: 10
Interrupt Vector: 0x98
PCI Pin: 0x33
Spawned Bus: 0x00
Flags: 0x0201
Module ID: 65
Module Name: ahci
Chassis: 0
Physical Slot: 255
Slot Description:
Passthru Capable: false
Parent Device:
Dependent Device: PCI 0:0:31:2
Reset Method: Function reset
FPT Sharable: false
Apparently the B2 stepping model of this type of controller was the faulty version. According to Intel, the B2 stepping model has a revision ID of 04h, with 05h being the B3 and therefore the working model. What I couldn't figure out at first is whether Revision ID: 0x05 == 05h. It does: 0x05 and 05h are just two notations for the same hexadecimal value, so this appears to be the fixed B3 stepping.
If it is, then I'm probably barking up the wrong tree in suspecting the AHCI controller of being the root cause. -
New X800XT, poor performance
I just upgraded from a Radeon 9600 (only 64MB) to a Radeon X800XT (256MB) expecting to see a big improvement in performance, but I can see no improvement at all. I get the same hesitations when viewing high-resolution video, and Maya hardware renders (which use the video card to render) run no faster. I also get exactly the same performance in Maya's workspace.
I've downloaded and installed the latest drivers from ATI, is there something else I can do? Am I missing something?
G5 Dual 2GHz Mac OS X (10.3.9) Dual Monitors (1-21" cinema display, 1-CRT)
You've done all of this, right?
Have you tried the usual items:
Repair Permissions: Finder>Applications>Utilities>Disk Utility>Select Drive>Repair Permissions.
Reset the PRAM: Shut down computer>Restart and hold down CMD-OPT-P-R during startup until the second POST chime is heard, then release.
Run CRON scripts: Finder>Applications>Utilities>Terminal>Open Terminal>Type in "sudo periodic daily weekly monthly" (sans quotes) and quit when finished.
Reset the SMU: Underneath the ram to the right is small SMU reset button. Power computer down, remove side cover, air baffle and fan assembly and press button once. Put machine back together and restart.
Reset the Firmware: Shut down the computer, and on restart, hold down CMD-OPT-O-F and type in "reset-nvram" and then "reset-all" (sans quotes) when the prompt line appears. -
Audio performance deteriorating over time until Mavericks reinstall
I'm on the latest Mavericks, maxed-out 12-core Mac Pro. I use Steinberg Nuendo as my main audio DAW. It works like a dream... for a while... But after a few weeks it starts slowing down until Nuendo is almost unusable. Larger projects which normally run just fine become unworkable or very sluggish because they just overload the CPU, as if I was working with a 5 year old computer. This issue specifically targets audio performance, whether I use internal audio or my audio interface (UAD Apollo).
I've tried removing/uninstalling all non-essential drivers, extensions, etc., and reinstalled/updated everything I can't work without to the latest versions. I've tried unplugging everything except 1 monitor, keyboard, mouse and USB authorization sticks. Nothing makes a difference. It's like there's a virus that progressively slows down my system, but I have found no virus on my system, at least via MacKeeper.
Here is my temporary solution: Every few weeks I reinstall Mavericks (right now 10.9.3) via the Apple online recovery system (Command-R on startup). The problem goes away completely and my Mac Pro is blazing fast again... For a few weeks, until audio performance deteriorates and I have to reinstall Mavericks again. Apple tech support originally advised me to reinstall Mavericks on top of itself, and I was really happy the problem was fixed, but I didn't know it would keep sneaking in over and over again.
This is the third time I have to reinstall Mavericks via online Recovery, and I am still baffled as to what could possibly get worse over time in this manner, and also specifically target audio performance! It's like some sort of cache is getting filled-up somewhere and overloading some of the computer's functions over time. Whatever this is, it gets reset during a reinstall.
If anyone has any ideas or hunches, they would be really appreciated!
PS: I've posted the same issue in the "Mac Pro" section to make sure I get the most coverage. I don't expect too many answers since not too many people have the Mac Pro yet and this is a targeted inquiry.
Make sure you got all the parts and pieces.
MacKeeper Removal
Activity Monitor - Mavericks
Activity Monitor in Mavericks has significant changes
Performance Guide
Why is my computer slow
Why your Mac runs slower than it should
Slow Mac After Mavericks
Things you can do to resolve slowdowns see post by Kappy -
Cannot perform Mac OSX recovery need help
Morning forum
I am hoping that I can find the help for my issue here as I need this machine up for business. I'm having trouble reinstalling the OS as it is stating that it cannot find the system folder (folder with ?). I have tried to perform the recovery installation holding down Command-R and also the other option of holding X down whilst booting up, but it seems the keyboard shortcuts are not working, or there is a more serious problem. I tested the RAID sets yesterday and all drives came back OK. I am baffled as to what I can do. Any help is always appreciated.
Regards
You have no recovery partition. This is a normal condition if your boot volume is a RAID.
You have several options for reinstalling.
1. If you have access to a local, unencrypted Time Machine backup volume, and if that volume has a backup of a Mac (not necessarily this one) that was running the same major version of OS X and did have a Recovery partition, then you can boot from the Time Machine volume into Recovery by holding down the option key at the startup chime. Encrypted Time Machine volumes are not bootable, nor are network backups.
2. If your Mac shipped with OS X 10.7 or later preinstalled, or if it's one of the computers that can be upgraded to use OS X Internet Recovery, you may be able to netboot from an Apple server by holding down the key combination option-command-R at the startup chime. Release the keys when you see a spinning globe.
Note: You need an always-on Ethernet or Wi-Fi connection to the Internet to use Recovery. It won’t work with USB or PPPoE modems, or with proxy servers, or with networks that require a certificate for authentication.
3. Use Recovery Disk Assistant (RDA) on another Mac running the same major version of OS X as yours to create a bootable USB device. Boot your Mac from the device by holding down the option key at startup. Warning: All existing data on the USB device will be erased when you use RDA.
Once you've booted into Recovery, the OS X Utilities screen will appear. Follow the prompts to reinstall OS X. You don't need to erase the boot volume, and you won't need your backup unless something goes wrong. If your Mac was upgraded from an older version of OS X, you’ll need the Apple ID and password you used to upgrade, so make a note of those before you begin.
If none of the above choices is open to you, then you'll have to start over from an OS X 10.6.8 installation. There's no need to overwrite your existing boot volume; you can use an external drive. Install 10.6 from the DVD you originally used to upgrade, or that came with the machine. Run Software Update and install all available updates. Log into the App Store with the Apple ID you used to buy 10.7 or later, and download the installer. When you run it, be sure to choose the right drive to install on. -
Multiple threads destroying performance
I have an application which performs its own graphical rendering, i.e. I completely bypass Swing/AWT, go fullscreen, and render straight to a BufferStrategy's graphics. This works fine: the render loop is in its own thread and merrily churns away, giving quite acceptable performance. However, what baffles me is that if I create just one more thread, performance chokes. The new thread has a run() that consists of just a while(true) loop that does nothing. My question is: why would this second thread cause such a hit in performance?
I'll grant the render thread has to load images and whatnot, but loading was instantaneous before the second thread and now takes 30 seconds. Both threads have the same priority, but even if I tank the priority of the second thread to low I still get the same 30-second delay. Any ideas? I can post the code if it'll help, but my main render thread needs quite a lot of support code, so I'm not sure it would help that much, and the second thread is, as I said, just an empty while(true) loop.
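For what it's worth, the symptom is easy to reproduce: an empty while(true) loop never yields the CPU, so on a single-core machine (or a JVM with a primitive scheduler) it can starve every other thread. A minimal sketch of the problem and the usual fix, with hypothetical thread names, might look like this:

```java
// Hypothetical repro: a busy-waiting thread pegs one core at 100% and can
// starve other threads; making the idle thread actually sleep fixes it.
public class BusyWaitDemo {
    public static void main(String[] args) throws InterruptedException {
        // The problematic version: an empty loop that never yields the CPU.
        Thread spinner = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                // empty body: pure busy-wait, burns a whole core doing nothing
            }
        });

        // The fixed version: sleeps between checks, using almost no CPU.
        Thread sleeper = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(10); // give other threads a chance to run
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // restore flag and exit
                }
            }
        });

        spinner.start();
        sleeper.start();
        Thread.sleep(100);
        spinner.interrupt();
        sleeper.interrupt();
        spinner.join();
        sleeper.join();
        System.out.println("both threads exited cleanly");
    }
}
```

Even sleep(1) in the second thread's loop is usually enough to restore the render thread's performance, because the scheduler can then run other threads during the sleep.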
Thanks in advance.

OK, I've decided I will try the sleep(x) where x != 0 and see the results. But I think I may also just restructure my code to see if I can do an on-demand thread instead. Not sure if it'll work for my purposes, but I'll see. The only thing is that this thread is used for playing a sound file, so I'll have to somehow work it so that sounds don't start piling up on each other.

I've seen Swing apps do pretty impressive multimedia stunts, very smoothly -- well, after a long startup anyway. So, unless you want to go through it as an exercise, I'd think using the event dispatcher should work fine.
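One way to get the on-demand sound thread mentioned above, without sounds piling up, is to have the worker block on a queue rather than spin. This is only a sketch under the assumption that playing one clip at a time is acceptable; SoundWorker and playClip() are hypothetical names standing in for real audio code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of an "on-demand" sound thread: instead of spinning in while(true),
// the worker blocks on a queue and wakes only when a sound is requested.
// Clips are played one at a time, so they can't pile up on each other.
public class SoundWorker implements Runnable {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Called from the render thread; never blocks the caller.
    public void play(String clipName) {
        queue.offer(clipName);
    }

    @Override
    public void run() {
        try {
            while (true) {
                String clip = queue.take(); // blocks, uses no CPU while idle
                playClip(clip);             // finishes before the next clip starts
            }
        } catch (InterruptedException e) {
            // thread asked to shut down; fall out of the loop
        }
    }

    private void playClip(String clip) {
        System.out.println("playing " + clip); // placeholder for real audio code
    }
}
```

The render thread just calls play("explosion") and keeps going; the worker thread costs nothing while it sits blocked in take().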
> The only thing I'm really curious about is why changing the thread priority to LOW doesn't make a difference. Is it not possible to change time-slice allocation that way?

I've never had much success fiddling with thread priorities, but I've never cared enough to look very closely. As I recall, JVM 1.4.1 (since 1.3?) uses native threads to improve performance -- look in the docs to confirm. And I don't think Windoze lets you adjust thread priority beyond the two broad categories: foreground/background.
But the above passage is mostly based on my guesses and memories, and I've been willfully oblivious to Windoze. So don't sue me. ;>
Cheers,
Bo -
Do I have to use Crystal for Eclipse? Using JRC - Performance is SLOW.
I have several applications using the JRC to run Crystal reports from within the application, and like many others here, I find the performance unacceptable: 20 minutes to an hour for a 500-page report that runs in under 5 minutes in Crystal itself. Most of the recommendations I see are to use Crystal for Eclipse version 2, which I'm assuming fixes some of these bugs. However, our applications are developed not in Eclipse but in IntelliJ. If I use C4E, aren't I committing to Eclipse as a software development platform? I was using the JRC because it was a set of JAR files I could plug into my existing application. Any suggestions would be greatly appreciated!
Mike Brubaker

Are there examples of code for using the JSF viewer tag for the JRC in your application? I downloaded the new runtime libraries and I'm not noticing much performance improvement. For example, when the report runs and displays in the viewer, it takes 40-50 seconds after clicking the pagination button to go from page 1 to 2, then 2 to 3, and so on. My users are screaming for a replacement for this technology, but I'm telling them I must have something configured wrong somewhere. The Crystal report running in plain old Crystal is fast, so it's not the database query or anything like that. In addition, when I click the export-to-PDF button, it takes as long to export as it did to run the report initially and bring it up in the viewer. HELP! Thanks for any feedback you can give me; I'm baffled.
-
Performance deteriorating even after reformat/reinstall + cleaning inside
Hey all,
I've been noticing a decline in performance on my 2007 MBP. Previously working CPU-intensive projects in Reason/Logic now use much more processing power than before, causing the programs to halt playback.
I tried resetting the PRAM and NVRAM with no luck, so I reformatted and did a fresh reinstall of 10.5 (via FireWire from another MBP in TDM, as my SuperDrive has become somewhat DVD-challenged ^_^), but I've only noticed a slight improvement, and the projects are still overloading my CPU.
I opened up the mac and found a lot of dust buildup on the heatsinks, with the right fan vent being totally blocked. My computer no longer gets anywhere near as hot as it should, but I'm wondering if it could have caused some irreparable damage through overheating.
Is there anything I can do to regain lost CPU performance?

Malyan wrote:
I just took it to an AASP who ran the test, and it didn't fail =(
I was hoping. Oh well.
I took it back home and thought I'd try it with my old HD. I forgot to copy over any of the test files from my new HD, so I reconnected it without rescrewing the case, and everything started magically working; my projects weren't stopping mid-song anymore!
Interesting.
Doing a little dance of joy, I then rescrewed the case, turned it on, and it's back to being a glorified paperweight again.
This is with the old HD that was working fine with the case apart?
The case is in quite bad nick; it's been everywhere with me for the last 4 years, so maybe it's become warped and the fixing screws are applying pressure on some plugs/sockets/devices, causing the misbehaviour. That's the only idea I have right now.
I'm gonna try the old HD again with the test files that should run fine, just to rule out the HD.
I'm truly baffled!
Far from being baffled, it sounds like you're on the right track. The case putting pressure on a cable/connection/component was not uncommon on the old TiBooks. I haven't heard much about that with these machines (beyond the battery affecting the keyboard/trackpad cable), but it's definitely a possibility. As is the HD. As is the HD cable/connector.
-
Does making final field static improve performance or save memory?
Are final fields given their value at compile time by the compiler?
private final String FINAL_FIELD = "31";
Does making a final field static improve performance or save memory?

Actually, it's final static fields of primitive or String type that are treated specially. For other final fields, only extra compile-time checking is produced; they are the same at run time.
Final static primitives (and Strings) initialised with constant expressions are treated as compile-time constants: the literal value is substituted at each use site, so no field access happens at run time.

Myself, I consider an awareness of Java internals valuable. For example, if you were unaware of the above, the occasional misbehaviour of final static primitives referenced from other classes would be baffling. If your code references a final static primitive from another class, it's compiled as a literal, not a field reference; so if the value changes in the other class, the referring class will retain the old value until it is recompiled.
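The stale-constant behaviour described above can be demonstrated with two toy classes (hypothetical names). LIMIT is a constant expression and gets inlined into callers at compile time; COMPUTED is initialised by a method call, so it remains a real field read:

```java
// Hypothetical demo of compile-time constant inlining.
class Constants {
    static final int LIMIT = 31;            // constant expression: inlined into callers
    static final int COMPUTED = compute();  // not a constant expression: real field read
    private static int compute() { return 31; }
}

class Referrer {
    public static void main(String[] args) {
        // javac bakes the literal 31 into this class for LIMIT, so if
        // Constants is later recompiled with LIMIT = 32 and Referrer is not,
        // this line still prints 31.
        System.out.println(Constants.LIMIT);
        // COMPUTED compiles to a getstatic instruction, so it always
        // reflects whatever value Constants.class currently holds.
        System.out.println(Constants.COMPUTED);
    }
}
```

Recompiling only Constants with a new LIMIT leaves Referrer printing the old literal until Referrer itself is rebuilt, which is exactly the "misbehaviour" described above.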