90 watt PSU or 65 watt PSU: better performance with USB devices?
The 20 V / 65 W PSU that came with my 3000 N200 0769aku laptop can be replaced with
a 20 V / 90 W Lenovo AC adapter.
I am connecting devices to all usb ports (two hard drives, wireless mouse, ILOK software dongle) and a two port firewire express card.
Would a "beefier" PSU help when plugging all of this in at once?
Thanks
Do you need it? I don't believe so. Would a 90 W adapter hurt? No way.
Andy
Similar Messages
-
Get a better performance with audio interface?
Hi all.
I have a silly question. I run Logic 7 on a 12" PowerBook G4 at 1.33 GHz with (only) 768 MB of RAM and a (hard-to-build-in) 160 GB hard drive.
After frequently getting the "Disk is too slow..." message, I was thinking about buying an external audio interface (PreSonus FireBox).
My simple question is: does an external audio interface give Logic (or Core Audio) more free DSP power, so that I might get better performance? Or do I need something like a TC PowerCore FireWire?
An answer would be great... otherwise I'll have to sell my nice and small 12-incher and buy an Intel MacBook...
Thanks. Frank.
PowerBook G4 12" 1.33 GHz, Mac OS X (10.4.8)
A new audio interface won't help.
You should start by getting an external 7200 RPM Firewire drive to put your audio on. That right there will help, especially with higher track counts.
Secondly, get more RAM. 768 MB of RAM is borderline for Logic. I always recommend at least 1 GB. Go ahead and max out that PowerBook to 2 GB of RAM, if you can.
All that said, an Intel Mac would be a huge improvement over the PowerBook. And if you do go that route, still get an external FW drive for audio, and as much RAM as you can afford. -
Better Performance WITH RECOMPILE option
Is there a performance gain when executing a stored procedure with the WITH RECOMPILE option?
Why ?
dmpcal
Is there a performance gain when executing a stored procedure with the WITH RECOMPILE option?
Why?
As with many performance questions, the answer is: it depends.
If the procedure includes statements whose best plan depends heavily on the input parameters, WITH RECOMPILE can help. That said, these days it is better to use the query hint OPTION (RECOMPILE) on that particular query.
But if the procedure is complex, the compilation time may overshadow the execution time, and WITH RECOMPILE will cost performance.
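A minimal T-SQL sketch of the statement-level alternative mentioned above (the procedure, table, and parameter names are made up for illustration):

```sql
-- Hypothetical procedure: only the parameter-sensitive statement
-- is recompiled on each execution, not the whole procedure.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);  -- fresh plan for this statement only
END;
```

Compare this with CREATE PROCEDURE ... WITH RECOMPILE, which recompiles every statement in the procedure on every call.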
Erland Sommarskog, SQL Server MVP, [email protected] -
OS X Mountain Lion Messages Beta - Integration with iOS Devices
Finding it slightly disappointing that I can't link my phone number with Messages on the Mac.
I currently receive all my iMessages on my iPhone through my phone number, so if I start using Messages on my Mac with my email address I would have none of those conversations to continue. And if I did start conversations, they firstly wouldn't actually go through to my phone (as my Apple ID is not linked with iMessage on my iPhone), and even if they did, they would come up as separate conversations from those already on my phone.
Kind of a confusing post... really just asking whether anyone else has downloaded the Messages Beta and realised the same thing?
Having played around with it, I can get Messages to continue the conversation in the same place on my end, which is good. But on the other end the person receives it from my email, so it appears in a different place from the rest of the actual conversation.
Still would like some form of integration between email and number, but that article certainly shows why the email address works between different types of Apple Device! -
I want to buy one of these; they are refurbished and I cannot choose. Please help me, thanks :)
Refurbished 13.3-inch MacBook Pro 2.4GHz Dual-core Intel i5 with Retina Display
Originally released October 2013
13.3-inch (diagonal) Retina display; 2560-by-1600 resolution at 227 pixels per inch
8GB of 1600MHz DDR3L SDRAM
256GB Flash Storage
720p FaceTime HD camera
Intel Iris Graphics
Apple Certified Refurbished
OR
Refurbished 13.3-inch MacBook Pro 2.6GHz Dual-core Intel i5 with Retina Display
Originally released October 2013
13.3-inch (diagonal) Retina display; 2560-by-1600 resolution at 227 pixels per inch
8GB of 1600MHz DDR3L SDRAM
128GB Flash Storage
720p FaceTime HD camera
Intel Iris Graphics
Apple Certified Refurbished
Which one is the better one? They are the same price. Which one will run faster, and which one would you buy?
Many Thanks :) -
Is there any way to get a better performance with this Xquery ?
I created two tables.
One holds XBRL documents in a CLOB-based XMLType column; I'm going to select several columns from it and insert them into the other table.
I used the following SQL with XMLTable.
INSERT INTO SV_XBRL_ELEMENT
SELECT r.finance_cd,r.base_month,xt.context_id,xt.ns,xt.name,nvl(xt.lang,'na')
as lang,xt.unit,xt.decimals,xt.value
FROM SV_XBRL_DOC r,
XMLTABLE(
XMLNAMESPACES(
'http://www.w3.org/1999/xlink' AS "xlink",
'http://www.xbrl.org/2003/linkbase' AS "link",
'http://www.xbrl.org/2003/instance' AS "xbrli",
'http://www.xbrl.org/2003/iso4217' AS "iso4217",
'http://www.xbrlkorea.com/kr/kisinfo/fr/gaap/ci/2007-02-09' AS "kisinfo-ci",
'http://www.xbrlkorea.com/kr/kisinfo/fr/gcd/2007-02-09' AS "kisinfo-gcd",
'http://www.xbrlkorea.com/kr/kisinfo/fr/profile/2007-02-09' AS "kisinfo-profile",
'http://www.xbrlkorea.com/kr/kisinfo/fr/ratio/2007-02-09' AS "kisinfo-ratio",
'http://www.xbrlkorea.com/kr/kisinfo/fr/common/scenario' AS "kisinfo-scenario",
'http://www.xbrl.or.kr/kr/fr/gaap/ci/2006-05-31' AS "kr-gaap-ci",
'http://www.xbrl.or.kr/kr/fr/common/pte/2006-05-31' AS "krfr-pte",
'http://www.xbrl.or.kr/kr/fr/common/ptr/2006-05-31' AS "krfr-ptr",
'http://www.xbrl.or.kr/2006/role/subitem-notes' AS "p0",
'http://xmlns.oracle.com/xdb' AS "ora"),
'for $item in
$doc/xbrli:xbrl/*[not(starts-with(name(),"xbrli:")) and
not(starts-with(name(),"link:"))]
where $item/@contextRef
return <item contextRef="{$item/@contextRef}" xml:lang="{$item/@xml:lang}"
unitRef="{$item/@unitRef}" name="{local-name($item)}" ns="{namespace-uri($item)}">{$item}</item>'
PASSING r.xbrl as "doc"
COLUMNS context_id varchar2(128) PATH '@contextRef',
ns varchar2(128) PATH '@ns',
name varchar2(128) PATH '@name',
lang varchar2(2) PATH '@xml:lang',
unit varchar2(16) PATH '@unitRef',
decimals varchar2(64) PATH '@decimals',
value varchar(256) PATH '.'
) xt
SV_XBRL_DOC has 1,450 records (1,450 documents).
SV_XBRL_ELEMENT ends up with more than 110,280 records (110,280 elements).
The SQL above takes more than 6,000 seconds on my machine (10g, Sun 5.8, 4 CPUs). I admit it's a big number of records; I'm looking for a more efficient way to do this.
Is there any way to speed up this SQL?
Here is part of a sample XBRL document.
<?xml version="1.0"?>
<xbrli:xbrl xmlns:xbrli="http://www.xbrl.org/2003/instance" xmlns:link="http://www.xbrl.org/2003/linkbase" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:fines-b-ot="http://fss.xbrl.or.kr/kr/br/b/ot/2007-06-30" xmlns:fines-aa001="http://fss.xbrl.or.kr/kr/br/gaap/aa001/2007-06-30" xmlns:xbrldt="http://xbrl.org/2005/xbrldt" xmlns:ref="http://www.xbrl.org/2004/ref" xmlns:xbrldi="http://xbrl.org/2006/xbrldi" xmlns:iso4217="http://www.xbrl.org/2003/iso4217">
<link:schemaRef xlink:type="simple" xlink:href="http://fss.xbrl.or.kr/kr/br/fines/aa001/2007-06-30/fines-aa001-2007-06-30.xsd"/>
<xbrli:context id="ctx_AAA001C">
<xbrli:entity>
<xbrli:identifier scheme="http://fss.xbrl.or.kr">0010002</xbrli:identifier>
<xbrli:segment>
<xbrldi:explicitMember dimension="fines-b-ot:newItemDimension">fines-b-ot:AAA001C</xbrldi:explicitMember>
</xbrli:segment>
</xbrli:entity>
<xbrli:period>
<xbrli:instant>1999-03-31</xbrli:instant>
</xbrli:period>
</xbrli:context>
<xbrli:context id="ctx_BAA001C">
<xbrli:entity>
<xbrli:identifier scheme="http://fss.xbrl.or.kr">0010002</xbrli:identifier>
<xbrli:segment>
<xbrldi:explicitMember dimension="fines-b-ot:newItemDimension">fines-b-ot:BAA001C</xbrldi:explicitMember>
</xbrli:segment>
</xbrli:entity>
<xbrli:period>
<xbrli:instant>1999-03-31</xbrli:instant>
</xbrli:period>
</xbrli:context>
<xbrli:context id="ctx_CAA001C">
<xbrli:entity>
<xbrli:identifier scheme="http://fss.xbrl.or.kr">0010002</xbrli:identifier>
<xbrli:segment>
<xbrldi:explicitMember dimension="fines-b-ot:newItemDimension">fines-b-ot:CAA001C</xbrldi:explicitMember>
</xbrli:segment>
</xbrli:entity>
<xbrli:period>
<xbrli:instant>1999-03-31</xbrli:instant>
</xbrli:period>
</xbrli:context>
<xbrli:context id="ctx_DAA001C">
<xbrli:entity>
<xbrli:identifier scheme="http://fss.xbrl.or.kr">0010002</xbrli:identifier>
<xbrli:segment>
<xbrldi:explicitMember dimension="fines-b-ot:newItemDimension">fines-b-ot:DAA001C</xbrldi:explicitMember>
</xbrli:segment>
</xbrli:entity>
<xbrli:period>
<xbrli:instant>1999-03-31</xbrli:instant>
</xbrli:period>
</xbrli:context>
<xbrli:unit id="KRW">
<xbrli:measure>iso4217:KRW</xbrli:measure>
</xbrli:unit>
<fines-b-ot:AAA001R decimals="0" contextRef="ctx_AAA001C" unitRef="KRW">14</fines-b-ot:AAA001R>
<fines-b-ot:AAA001R decimals="0" contextRef="ctx_BAA001C" unitRef="KRW">0</fines-b-ot:AAA001R>
<fines-b-ot:AAA001R decimals="0" contextRef="ctx_CAA001C" unitRef="KRW">14</fines-b-ot:AAA001R>
<fines-b-ot:AAA001R decimals="0" contextRef="ctx_DAA001C" unitRef="KRW">0</fines-b-ot:AAA001R>
<fines-b-ot:A1AA001R decimals="0" contextRef="ctx_AAA001C" unitRef="KRW">7</fines-b-ot:A1AA001R>
<fines-b-ot:A1AA001R decimals="0" contextRef="ctx_BAA001C" unitRef="KRW">0</fines-b-ot:A1AA001R>
<fines-b-ot:A1AA001R decimals="0" contextRef="ctx_CAA001C" unitRef="KRW">7</fines-b-ot:A1AA001R>
<fines-b-ot:A1AA001R decimals="0" contextRef="ctx_DAA001C" unitRef="KRW">0</fines-b-ot:A1AA001R>
<fines-b-ot:A2AA001R decimals="0" contextRef="ctx_AAA001C" unitRef="KRW">7</fines-b-ot:A2AA001R>
<fines-b-ot:A2AA001R decimals="0" contextRef="ctx_BAA001C" unitRef="KRW">0</fines-b-ot:A2AA001R>
<fines-b-ot:A2AA001R decimals="0" contextRef="ctx_CAA001C" unitRef="KRW">7</fines-b-ot:A2AA001R>
<fines-b-ot:A2AA001R decimals="0" contextRef="ctx_DAA001C" unitRef="KRW">0</fines-b-ot:A2AA001R>
<fines-b-ot:BAA001R decimals="0" contextRef="ctx_AAA001C" unitRef="KRW">4788</fines-b-ot:BAA001R>
<fines-b-ot:BAA001R decimals="0" contextRef="ctx_BAA001C" unitRef="KRW">49</fines-b-ot:BAA001R>
<fines-b-ot:BAA001R decimals="0" contextRef="ctx_CAA001C" unitRef="KRW">4837</fines-b-ot:BAA001R>
<fines-b-ot:BAA001R decimals="0" contextRef="ctx_DAA001C" unitRef="KRW">30</fines-b-ot:BAA001R>
<fines-b-ot:B1AA001R decimals="0" contextRef="ctx_AAA001C" unitRef="KRW">4788</fines-b-ot:B1AA001R>
<fines-b-ot:B1AA001R decimals="0" contextRef="ctx_BAA001C" unitRef="KRW">48</fines-b-ot:B1AA001R>
<fines-b-ot:B1AA001R decimals="0" contextRef="ctx_CAA001C" unitRef="KRW">4836</fines-b-ot:B1AA001R>
<fines-b-ot:B1AA001R decimals="0" contextRef="ctx_DAA001C" unitRef="KRW">29</fines-b-ot:B1AA001R>
<fines-b-ot:B11AA001R decimals="0" contextRef="ctx_AAA001C" unitRef="KRW">2317</fines-b-ot:B11AA001R>
<fines-b-ot:B11AA001R decimals="0" contextRef="ctx_BAA001C" unitRef="KRW">21</fines-b-ot:B11AA001R>
<fines-b-ot:B11AA001R decimals="0" contextRef="ctx_CAA001C" unitRef="KRW">2338</fines-b-ot:B11AA001R>
<fines-b-ot:B11AA001R decimals="0" contextRef="ctx_DAA001C" unitRef="KRW">2</fines-b-ot:B11AA001R>
<fines-b-ot:B111AA001R decimals="0" contextRef="ctx_AAA001C" unitRef="KRW">0</fines-b-ot:B111AA001R>
<fines-b-ot:B111AA001R decimals="0" contextRef="ctx_BAA001C" unitRef="KRW">0</fines-b-ot:B111AA001R>
<fines-b-ot:B111AA001R decimals="0" contextRef="ctx_CAA001C" unitRef="KRW">0</fines-b-ot:B111AA001R>
<fines-b-ot:B111AA001R decimals="0" contextRef="ctx_DAA001C" unitRef="KRW">0</fines-b-ot:B111AA001R>
<fines-b-ot:B112AA001R decimals="0" contextRef="ctx_AAA001C" unitRef="KRW">60</fines-b-ot:B112AA001R>
<fines-b-ot:B112AA001R decimals="0" contextRef="ctx_BAA001C" unitRef="KRW">2</fines-b-ot:B112AA001R>
<fines-b-ot:B112AA001R decimals="0" contextRef="ctx_CAA001C" unitRef="KRW">62</fines-b-ot:B112AA001R>
<fines-b-ot:B112AA001R decimals="0" contextRef="ctx_DAA001C" unitRef="KRW">0</fines-b-ot:B112AA001R>
<fines-b-ot:B113AA001R decimals="0" contextRef="ctx_AAA001C" unitRef="KRW">185</fines-b-ot:B113AA001R>
<fines-b-ot:B113AA001R decimals="0" contextRef="ctx_BAA001C" unitRef="KRW">1</fines-b-ot:B113AA001R>
<fines-b-ot:B113AA001R decimals="0" contextRef="ctx_CAA001C" unitRef="KRW">186</fines-b-ot:B113AA001R>
<fines-b-ot:B113AA001R decimals="0" contextRef="ctx_DAA001C" unitRef="KRW">0</fines-b-ot:B113AA001R>
<fines-b-ot:B114AA001R decimals="0" contextRef="ctx_AAA001C" unitRef="KRW">408</fines-b-ot:B114AA001R>
<fines-b-ot:B114AA001R decimals="0" contextRef="ctx_BAA001C" unitRef="KRW">5</fines-b-ot:B114AA001R>
<fines-b-ot:B114AA001R decimals="0" contextRef="ctx_CAA001C" unitRef="KRW">413</fines-b-ot:B114AA001R>
<fines-b-ot:B114AA001R decimals="0" contextRef="ctx_DAA001C" unitRef="KRW">0</fines-b-ot:B114AA001R>
<fines-b-ot:B115AA001R decimals="0" contextRef="ctx_AAA001C" unitRef="KRW">1664</fines-b-ot:B115AA001R>
<fines-b-ot:B115AA001R decimals="0" contextRef="ctx_BAA001C" unitRef="KRW">13</fines-b-ot:B115AA001R>
<fines-b-ot:B115AA001R decimals="0" contextRef="ctx_CAA001C" unitRef="KRW">1677</fines-b-ot:B115AA001R>
<fines-b-ot:B115AA001R decimals="0" contextRef="ctx_DAA001C" unitRef="KRW">2</fines-b-ot:B115AA001R>
<fines-b-ot:B12AA001R decimals="0" contextRef="ctx_AAA001C" unitRef="KRW">2471</fines-b-ot:B12AA001R>
<fines-b-ot:B12AA001R decimals="0" contextRef="ctx_BAA001C" unitRef="KRW">27</fines-b-ot:B12AA001R>
<fines-b-ot:B12AA001R decimals="0" contextRef="ctx_CAA001C" unitRef="KRW">2498</fines-b-ot:B12AA001R>
<fines-b-ot:B12AA001R decimals="0" contextRef="ctx_DAA001C" unitRef="KRW">27</fines-b-ot:B12AA001R>
<fines-b-ot:B2AA001R decimals="0" contextRef="ctx_AAA001C" unitRef="KRW">0</fines-b-ot:B2AA001R>
<fines-b-ot:B2AA001R decimals="0" contextRef="ctx_BAA001C" unitRef="KRW">0</fines-b-ot:B2AA001R>
<fines-b-ot:B2AA001R decimals="0" contextRef="ctx_CAA001C" unitRef="KRW">0</fines-b-ot:B2AA001R>
<fines-b-ot:B2AA001R decimals="0" contextRef="ctx_DAA001C" unitRef="KRW">0</fines-b-ot:B2AA001R>
<fines-b-ot:B3AA001R decimals="0" contextRef="ctx_AAA001C" unitRef="KRW">0</fines-b-ot:B3AA001R>
<fines-b-ot:B3AA001R decimals="0" contextRef="ctx_BAA001C" unitRef="KRW">1</fines-b-ot:B3AA001R>
</xbrli:xbrl>
Using 11g along with XML indexes can greatly improve this XQuery. We can discuss further offline.
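For reference, a hedged sketch of the kind of XMLIndex the reply is pointing at (the index name and the PATHS subset are assumptions; the exact syntax depends on the 11g release and on how the XMLType column is stored):

```sql
-- Hypothetical: index only the fact elements the XMLTABLE query
-- extracts, so row/column projection can be answered from the
-- index instead of re-parsing each CLOB document.
CREATE INDEX sv_xbrl_doc_xidx ON SV_XBRL_DOC (xbrl)
  INDEXTYPE IS XDB.XMLINDEX
  PARAMETERS ('PATHS (INCLUDE (/xbrli:xbrl/*)
               NAMESPACE MAPPING (xmlns:xbrli="http://www.xbrl.org/2003/instance"))');
```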
Regards,
Geoff -
Better performance with higher resolution with ultra..why ?
How is this possible? I've got a Q6600, 8 GB RAM and an MSI 8800 Ultra. Running Crysis with everything maxed out, I get 32 fps at 1024x768, and 40 at 1280x1024?
weird, any ideas as to why thats happening ?
Cheers
Me again... sorry for bothering you.
I've got another question concerning my Ultra, and might as well post it here... Is this H2O cooling compatible with my MSI 8800 Ultra?
http://www.virtual-hideout.net/reviews/Mega_Watercooling_Roundup/Zalman_Reserator_2/index.shtml
Sorry if it's a stupid question, but I'm really, really, REALLY a noob here.
cheers for any info -
Better performance with lower CPU_COUNT ?
Hi
We had big performance problems on our production database; after lots of tuning and configuration we still didn't get anywhere (an OLTP job takes 9 hours in production when it takes 20 minutes on the test DB).
Production server: dual Xeon 2.8, 2 GB RAM, RAID 5, 10k RPM disks, Win 2k3.
Test DB server: Intel 2.8, 1 GB RAM, one IDE disk, Win 2000 Pro.
I noticed the difference between the test DB and the production DB was the gets/s: the production DB was NOT using any CPU and I/O was almost nil, unlike the test DB, where the gauges showed 100% on everything.
So, stupidly, we set CPU_COUNT to 1 and PARALLEL_THREADS_PER_CPU to 1, when it had been 2 x 4 (test DB unchanged at 1 x 2).
Surprisingly, the response time dropped from 9 hours to just 7 minutes.
We checked load balancing on the CPUs and the HT option; everything's fine.
Now we are confused!! ANY suggestion is appreciated. Has anyone else encountered such a problem???
Oracle 9i Rel2 9.2.0.1.0 on Win 2003 Standard.
Thanks in Advance
Regards
Tony G.
Hi
The article on asktom.com was very useful.
We will first patch Oracle to the version certified on Win 2K3 (9.2.0.1.3.0 or higher), and then set the necessary parameters (PARALLEL_ADAPTIVE_MULTI_USER, PARALLEL_AUTOMATIC_TUNING, etc.).
My question: do I have to alter the underlying tables to PARALLEL and gather stats on those tables for the cost-based optimizer to work correctly? And will this alteration affect other queries?
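For concreteness, a minimal sketch of the change being asked about (the table name, owner, and degree are hypothetical):

```sql
-- Hypothetical: give the table a default degree of parallelism,
-- then re-gather statistics so the cost-based optimizer costs
-- parallel access paths correctly.
ALTER TABLE app.orders PARALLEL (DEGREE 4);

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP',
    tabname => 'ORDERS',
    cascade => TRUE);  -- also gather index statistics
END;
/
```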
Any suggestion or comment is more than welcomed, Thanks in Advance
Regards
Tony G. -
How can I get better performance with OSX on my iMac G3?
I just recently got a G3 iMac running at 400 MHz with 512 MB of RAM, and it has OS 10.4.10. I am relatively new to Macs; it's been a few years since I've used one.
So, it actually runs surprisingly well, but I'm wondering if there's a way to tone down the visuals to make it run a little faster. I loaded Windows XP Pro onto an older Dell I had sitting around and was able to tone down the visuals enough to make it run very well on a 500 MHz machine with 256 MB of RAM, so I'm sure that if I can tone down XP enough for an old computer, I can do the same with OS X.
Welcome to Apple Discussions.
You are somewhat limited by the 400MHz processor speed. Increasing your RAM will be helpful. Look at these links
52 Ways to Speed Up OS X
http://www.imafish.co.uk/articles/post/articles/130/52-ways-to-speed-up os-x/
Tuning Mac OS X Performance
http://www.thexlab.com/faqs/performance.html
11 Ways to Optimize Your Mac's Performance
http://lowendmac.com/eubanks/07/0312.html
The Top 7 Free Utilities To Maintain A Mac.
http://mac360.com/index.php/mac360/comments/thetop_7_free_utilities_to_maintain_amac/
Mac OS X: System maintenance
http://discussions.apple.com/thread.jspa?messageID=607640
Cheers, Tom -
Bottom line, does 8800 ultra sli setup ensure better performance ?
Greetings all, first-time poster here!
Anyway, just got my new rig the other day: quad-core Q6600, 4 GB RAM, MSI 8800 Ultra OC, Vista 32-bit... thing is mad. Have a question, though: putting the already-insane performance bar aside, if money isn't an object, would another Ultra result in an obvious performance boost, or are we talking only marginal advantages here? Like the early SLI 6600GT, for example, where an SLI setup was only 25-30 percent faster than a single card.
thanks for every answer in advance
Cheers
Quote
so you say, the bigger the resolution the better performance with ultra ?
Well, the performance is not better at higher resolutions. The ADVANTAGE over a slower card, or of SLI over a single card, is higher.
Quote
How far can you reckon can this be pushed
Very doubtful it will do 700 MHz on the GPU. Nobody can tell you what one of those cards will do; it's a matter of the individual sample, and you will only find out by testing. But the chance that you kill it that way is very small. If you keep an eye on the temps and they don't get past 100°C, you're fine.
If you set the GPU clock too high, it will show with artifacts or bluescreens in 3D. So if you lower the clock when experiencing such behavior, you won't damage your card. Of course you shouldn't set way-too-high clocks; I'd recommend not going any higher than 700 MHz unless that proves to be rock-stable. The mem clock shouldn't be set higher than 2300, but it's doubtful it will make it.
DON'T TRUST THE MSI UTILITY! Set clocks manually and check after each step. Many of those GeForce 8 cards, especially OC versions or high-end models, don't even do stock settings. So it's best to check the card first at stock settings and then start to OC. -
Difference between Temp table and Variable table and which one is better performance wise?
Hello,
Could anyone explain the difference between a temp table (#, ##) and a table variable (DECLARE @V TABLE (EMP_ID INT))?
Which one is recommended for better performance?
Also, is it possible to create CLUSTERED and NONCLUSTERED indexes on a table variable?
In my case, 1-2 days of transactional data is more than 3-4 million rows. I tried both a # table and a table variable and found the table variable to be faster.
Does a table variable use memory or disk space?
Thanks, Shiven :)
Check the following link for the differences between temp tables and table variables: http://sqlwithmanoj.com/2010/05/15/temporary-tables-vs-table-variables/
Temp tables and table variables both use memory and tempdb in a similar manner; check this blog post: http://sqlwithmanoj.com/2010/07/20/table-variables-are-not-stored-in-memory-but-in-tempdb/
Performance-wise, if you are dealing with millions of records then a temp table is ideal, as you can create explicit indexes on top of it. But if there are fewer records, table variables are well suited.
On table variables, explicit indexes are not allowed; if you define a PK column, a clustered index will be created automatically.
But it also depends on the specific scenario you are dealing with; can you share it?
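A minimal T-SQL sketch of the contrast described above (table and column names are invented):

```sql
-- Temp table: lives in tempdb and supports explicit indexes
-- and statistics, which helps at multi-million-row volumes.
CREATE TABLE #emp_stage (emp_id INT, dept_id INT);
CREATE NONCLUSTERED INDEX ix_emp_stage_dept ON #emp_stage (dept_id);

-- Table variable: also backed by tempdb, but CREATE INDEX is not
-- allowed; declaring a PRIMARY KEY implicitly builds its
-- clustered index.
DECLARE @emp TABLE (emp_id INT PRIMARY KEY, dept_id INT);
```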
~manoj | email: http://scr.im/m22g
http://sqlwithmanoj.wordpress.com
-
How to setup airport time capsule to get better performance?
I need to set up my wireless network with my new AirPort Time Capsule 3TB as the primary base station to get better performance. I have a cable modem as the primary device getting the signal (5 Mb) from the ISP, and my network has one MacBook Pro, a MacBook Air, a Mac mini, 2 iPads and 2 iPhones, though none of them is connected all the time.
What is the best way to do that?
Which Wi-Fi channel do I need to choose?
What is the best way to do that?
Use ethernet.. performance of wireless is never as good as ethernet.
Which Wi-Fi channel do I need to choose?
There is no such thing as the best channel..
Leave everything auto.. and see if it gives you full download speed.
Use 5ghz.. and keep everything up close to the TC for the best wireless speed.
If you are far away it will drop back to 2.4ghz which is slower.
Once you reach the internet speed nothing is going to help it go faster so you are worrying about nothing. -
My application was designed based on the MVC architecture, but I made some changes to it (HMV) based on my requirements. The servlet invokes helper classes, the helper classes use EJBs to communicate with the database, and JSPs also use EJBs to retrieve results.
I have two (stateless) EJBs, one servlet, nearly 70 helper classes, and nearly 800 JSPs. The servlet acts as the controller, and all database transactions are done through the EJBs only. The helper classes hold the business logic; based on the request, the relevant helper class is invoked by the servlet. Session scope is 'page' only.
Now I am planning to use EJBs (for the business logic) instead of the helper classes. But before doing that I need some clarification regarding network traffic and better usage of container resources.
Please suggest which approach (helper classes or EJBs) is preferable:
1) for better performance,
2) for less network traffic,
3) for better container resource utilization.
I thought that if I use EJBs, the network traffic will increase, because every call to an EJB is a remote call.
Please give detailed explanation.
thank you,
sudheer
<i>Please suggest which approach (helper classes or EJBs) is preferable:
1) for better performance</i>
EJB's have quite a lot of overhead associated with them to support transactions and remoteability. A non-EJB helper class will almost always outperform an EJB. Often considerably. If you plan on making your 70 helper classes EJB's you should expect to see a dramatic decrease in maximum throughput.
<i>2) for less network traffic</i>
There should be no difference. Both architectures will probably make the exact same JDBC calls from the RDBMS's perspective. And since the EJB's and JSP's are co-located there won't be any other additional overhead there either. (You are co-locating your JSP's and EJB's, aren't you?)
<i>3) for better container resource utilization</i>
Again, the EJB version will consume a lot more container resources. -
Any general tips on getting better performance out of multi table insert?
I have been struggling with coding a multi-table insert, which is the first time I've ever used one, and my Oracle skills are pretty poor in general. So now that the query is built and works fine, I am sad to see it's quite slow.
I have checked numerous articles on optimizing, but the things I try don't seem to get me much better performance.
First let me describe my scenario to see if you agree that my performance is slow...
It's an INSERT ALL command, which inserts into 5 separate tables conditionally (at least 4 inserts, sometimes 5, but the fifth is the smallest table). Some stats on these tables follow:
Source table: 5.3M rows, ~150 columns wide. Parallel degree 4. everything else default.
Target table 1: 0 rows, 27 columns wide. Parallel 4. everything else default.
Target table 2: 0 rows, 63 columns wide. Parallel 4. default.
Target table 3: 0 rows, 33 columns wide. Parallel 4. default.
Target table 4: 0 rows, 9 columns wide. Parallel 4. default.
Target table 5: 0 rows, 13 columns wide. Parallel 4. default.
The parallelism is just about the only customization I myself have done. Why 4? I don't know; it's pretty arbitrary, to be honest.
Indexes?
Table 1 has 3 index + PK.
Table 2 has 0 index + FK + PK.
Table 3 has 4 index + FK + PK
Table 4 has 3 index + FK + PK
Table 5 has 4 index + FK + PK
None of the indexes are anything crazy, maybe 3 or 4 of all of them are on multiple columns, 2-3 max. The rest are on single columns.
The query itself looks something like this:
insert /*+ append */ all
when 1=1 then
into table1 (...) values (...)
into table2 (...) values (...)
when a=b then
into table3 (...) values (...)
when a=c then
into table3 (...) values (...)
when p=q then
into table4(...) values (...)
when x=y then
into table5(...) values (...)
select .... from source_table
Hints I tried are with APPEND, without APPEND, and PARALLEL (though adding PARALLEL seemed to make the query run serially, according to my session browser).
Now for the performance:
It does about 8,000 rows per minute on table1. So that means it should also have that much in table2, table3 and table4, and then a subset of that in table5.
Does that seem normal, or am I expecting too much?
I find articles talking about millions of rows per minute... Obviously I don't think I can achieve that much, but maybe 30k or so on each table is a reasonable goal?
If it seems my performance is slow, what else do you think I should try? Is there any information I may try to get to see if maybe its a poorly configured database for this?
P.S. Is it possible to run this so that it commits every x rows or something? I had the heartbreaking experience of a network issue giving me a sudden "ORA-25402: transaction must roll back" after it had been running for 3.5 hours, so I lost all the progress it made and have to start over. Plus, I wonder if the sheer amount of data being queued for commit/rollback is causing some of the problem?
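One common pattern for the periodic-commit idea in the P.S. (a hedged sketch: the batch_no driving column is invented, the column lists are elided as in the original query, and note that batching changes failure semantics, since a crash leaves earlier batches already committed):

```sql
-- Hypothetical: run the same INSERT ALL in slices of the source
-- table and commit after each slice, so a failure costs at most
-- one batch of work instead of hours of it.
BEGIN
  FOR b IN (SELECT DISTINCT batch_no FROM source_table ORDER BY batch_no) LOOP
    INSERT ALL
      WHEN 1=1 THEN INTO table1 (...) VALUES (...)
      -- ... remaining WHEN/INTO clauses as in the original ...
    SELECT ... FROM source_table WHERE batch_no = b.batch_no;
    COMMIT;  -- release undo and make this slice durable
  END LOOP;
END;
/
```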
Edited by: trant on Jun 27, 2011 9:29 PM
Looks like there are about 54 sessions on my database; 7 of them belong to me (2 taken by TOAD, 4 by my parallel slave sessions, and 1 by the master of those 4).
In v$session_event there are 546 rows; if I filter to the SIDs of my current sessions and order by micro_wait_time desc:
510 events in waitclass Other 30670 9161 329759 10.75 196 3297590639 1736664284 1893977003 0 Other
512 events in waitclass Other 32428 10920 329728 10.17 196 3297276553 1736664284 1893977003 0 Other
243 events in waitclass Other 21513 5 329594 15.32 196 3295935977 1736664284 1893977003 0 Other
223 events in waitclass Other 21570 52 329590 15.28 196 3295898897 1736664284 1893977003 0 Other
241 row cache lock 1273669 0 42137 0.03 267 421374408 1714089451 3875070507 4 Concurrency
241 events in waitclass Other 614793 0 34266 0.06 12 342660764 1736664284 1893977003 0 Other
241 db file sequential read 13323 0 3948 0.3 13 39475015 2652584166 1740759767 8 User I/O
241 SQL*Net message from client 7 0 1608 229.65 1566 16075283 1421975091 2723168908 6 Idle
241 log file switch completion 83 0 459 5.54 73 4594763 3834950329 3290255840 2 Configuration
241 gc current grant 2-way 5023 0 159 0.03 0 1591377 2685450749 3871361733 11 Cluster
241 os thread startup 4 0 55 13.82 26 552895 86156091 3875070507 4 Concurrency
241 enq: HW - contention 574 0 38 0.07 0 378395 1645217925 3290255840 2 Configuration
512 PX Deq: Execution Msg 3 0 28 9.45 28 283374 98582416 2723168908 6 Idle
243 PX Deq: Execution Msg 3 0 27 9.1 27 272983 98582416 2723168908 6 Idle
223 PX Deq: Execution Msg 3 0 25 8.26 24 247673 98582416 2723168908 6 Idle
510 PX Deq: Execution Msg 3 0 24 7.86 23 235777 98582416 2723168908 6 Idle
243 PX Deq Credit: need buffer 1 0 17 17.2 17 171964 2267953574 2723168908 6 Idle
223 PX Deq Credit: need buffer 1 0 16 15.92 16 159230 2267953574 2723168908 6 Idle
512 PX Deq Credit: need buffer 1 0 16 15.84 16 158420 2267953574 2723168908 6 Idle
510 direct path read 360 0 15 0.04 4 153411 3926164927 1740759767 8 User I/O
243 direct path read 352 0 13 0.04 6 134188 3926164927 1740759767 8 User I/O
223 direct path read 359 0 13 0.04 5 129859 3926164927 1740759767 8 User I/O
241 PX Deq: Execute Reply 6 0 13 2.12 10 127246 2599037852 2723168908 6 Idle
510 PX Deq Credit: need buffer 1 0 12 12.28 12 122777 2267953574 2723168908 6 Idle
512 direct path read 351 0 12 0.03 5 121579 3926164927 1740759767 8 User I/O
241 PX Deq: Parse Reply 7 0 9 1.28 6 89348 4255662421 2723168908 6 Idle
241 SQL*Net break/reset to client 2 0 6 2.91 6 58253 1963888671 4217450380 1 Application
241 log file sync 1 0 5 5.14 5 51417 1328744198 3386400367 5 Commit
510 cursor: pin S wait on X 3 2 2 0.83 1 24922 1729366244 3875070507 4 Concurrency
512 cursor: pin S wait on X 2 2 2 1.07 1 21407 1729366244 3875070507 4 Concurrency
243 cursor: pin S wait on X 2 2 2 1.06 1 21251 1729366244 3875070507 4 Concurrency
241 library cache lock 29 0 1 0.05 0 13228 916468430 3875070507 4 Concurrency
241 PX Deq: Join ACK 4 0 0 0.07 0 2789 4205438796 2723168908 6 Idle
241 SQL*Net more data from client 6 0 0 0.04 0 2474 3530226808 2000153315 7 Network
241 gc current block 2-way 5 0 0 0.04 0 2090 111015833 3871361733 11 Cluster
241 enq: KO - fast object checkpoint 4 0 0 0.04 0 1735 4205197519 4217450380 1 Application
241 gc current grant busy 4 0 0 0.03 0 1337 2277737081 3871361733 11 Cluster
241 gc cr block 2-way 1 0 0 0.06 0 586 737661873 3871361733 11 Cluster
223 db file sequential read 1 0 0 0.05 0 461 2652584166 1740759767 8 User I/O
223 gc current block 2-way 1 0 0 0.05 0 452 111015833 3871361733 11 Cluster
241 latch: row cache objects 2 0 0 0.02 0 434 1117386924 3875070507 4 Concurrency
241 enq: TM - contention 1 0 0 0.04 0 379 668627480 4217450380 1 Application
512 PX Deq: Msg Fragment 4 0 0 0.01 0 269 77145095 2723168908 6 Idle
241 latch: library cache 3 0 0 0.01 0 243 589947255 3875070507 4 Concurrency
510 PX Deq: Msg Fragment 3 0 0 0.01 0 215 77145095 2723168908 6 Idle
223 PX Deq: Msg Fragment 4 0 0 0 0 145 77145095 2723168908 6 Idle
241 buffer busy waits 1 0 0 0.01 0 142 2161531084 3875070507 4 Concurrency
243 PX Deq: Msg Fragment 2 0 0 0 0 84 77145095 2723168908 6 Idle
241 latch: cache buffers chains 4 0 0 0 0 73 2779959231 3875070507 4 Concurrency
241 SQL*Net message to client 7 0 0 0 0 51 2067390145 2000153315 7 Network
(yikes, is there a way to wrap that in equivalent of other forums' tag?)
v$session_wait (SID, SEQ#, EVENT, P1TEXT, P1, P1RAW, P2TEXT, P2, P2RAW, P3TEXT, P3, P3RAW, WAIT_CLASS_ID, WAIT_CLASS#, WAIT_CLASS, WAIT_TIME, SECONDS_IN_WAIT, STATE):
223 835 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 10 WAITING
241 22819 row cache lock cache id 13 000000000000000D mode 0 00 request 5 0000000000000005 3875070507 4 Concurrency -1 0 WAITED SHORT TIME
243 747 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 7 WAITING
510 10729 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 2 WAITING
512 12718 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 4 WAITING
v$sess_io (SID, BLOCK_GETS, CONSISTENT_GETS, PHYSICAL_READS, BLOCK_CHANGES, CONSISTENT_CHANGES):
223 0 5779 5741 0 0
241 38773810 2544298 15107 27274891 0
243 0 5702 5688 0 0
510 0 5729 5724 0 0
512 0 5682 5678 0 0
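For anyone trying to reproduce these listings: they look like unformatted dumps of three dynamic performance views (the first listing's columns match v$session_event). A sketch of equivalent queries, assuming the SIDs shown above; exact column availability varies by Oracle version:

```sql
-- Cumulative wait-event totals per session (first listing)
SELECT sid, event, total_waits, total_timeouts, time_waited,
       average_wait, max_wait, time_waited_micro,
       event_id, wait_class_id, wait_class#, wait_class
  FROM v$session_event
 WHERE sid IN (223, 241, 243, 510, 512)
 ORDER BY time_waited_micro DESC;

-- Current or last wait per session (second listing)
SELECT sid, seq#, event, p1text, p1, p1raw, p2text, p2, p2raw,
       p3text, p3, p3raw, wait_class, wait_time, seconds_in_wait, state
  FROM v$session_wait
 WHERE sid IN (223, 241, 243, 510, 512);

-- Session I/O counters (third listing)
SELECT sid, block_gets, consistent_gets, physical_reads,
       block_changes, consistent_changes
  FROM v$sess_io
 WHERE sid IN (223, 241, 243, 510, 512);
```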
I do a lot of video editing for work. I am currently using the Creative Cloud, and the programs I use most frequently are Premiere Pro CS6, Photoshop CS6, and Encore. My issue is that when I am rendering video in Premiere Pro and, most importantly, transcoding in Encore for Blu-ray discs, I get severe lag from my computer. It uses the majority of my computer's resources and doesn't allow me to do much else, which means I can't do other work while stuff is rendering or transcoding. I had this computer built specifically for video editing and need to know which direction to go in for an upgrade to get better performance and allow me to do other work.
For the record, I do have MPE: GPU Acceleration turned ON, and I have 12GBs of RAM allotted for Adobe in Premiere Pro's settings, and 4GBs left for "other".
Here is my computer:
- Dell Precision T7600
- Windows 7 Professional, 64-bit
- Dual Intel Xeon E5-2620 2.0GHz 6-core processors
- 16GBs of RAM
- 256GB SSD as my primary drive. This is where the majority of my work is performed.
- Three 2TB secondary drives in a RAID5 configuration. This is solely for backing up data after I have worked on it. I don't really use this to work off of.
- nVidia Quadro 4000 2GB video card
When I am rendering or transcoding, my processor(s) performance fluctuates between 50%-70%, with all 12 cores active and being used. My physical memory is basically ALL used up while this is happening.
Here is where I am at on the issue. I put in a request for more RAM (32GBs) so that I can allot around 25GBs of RAM to the Adobe suite, leaving more than enough to do other things. I was told that this was not the right direction to go in: since my CPUs are working at around 50-70%, my video card isn't pulling enough weight. I was told that the first step in upgrading this machine should be to replace my 2GB video card with a 4GB video card, and that this, not RAM, will fix the performance issues I am having.
This is the first machine that has been built over here for this purpose, so it is a learning process for us. I was hoping someone here could give a little insight to my situation.
Thanks for any help.
You have a couple of issues with this system:
Slow E5-2620's. You would be much better off with E5-2687W's
Limited memory. 32 GB is around bare minimum for a dual processor system.
Outdated Quadro 4000 card, which is very slow in comparison to newer cards and is generally not used when transcoding.
Far insufficient disk setup. You need way more disks.
A software RAID5 carries a lot of overhead.
The crippled BIOS of Dell does not allow overclocking.
The SSD may suffer from severe 'steady state' performance degradation, reducing performance even more.
You would not be the first to leave a Dell in the configuration it came in. If that is the case, you need a lot of tuning to get it to run decently.
The second thing to consider is what source material you are transcoding to what destination format. If you start with AVCHD material and your destination is MPEG2-DVD, the internal workings of PR may look like this:
Convert AVCHD material to an internal intermediate, which is solely CPU bound. No GPU involvement.
Rescale the internal intermediate to DVD dimensions, which is MPE accelerated, so heavy GPU involvement.
Adjust the frame rate from 29.97 to 23.976, which again is MPE accelerated, so GPU bound.
Recode the rescaled and frame-blended internal intermediate to MPEG2-DVD codec, which is again solely CPU bound.
Apply effects to the MPEG2-DVD encoded material, which can be CPU bound for non-accelerated effects and GPU bound for accelerated effects.
Write the end result to disk, which is disk performance related.
If you export AVCHD to H.264-BR the GPU is out of the game altogether, since all transcoding is purely CPU based, assuming there is no frame blending going on. Then all the limitations of the Dell show up, as you noticed.