Selecting "better performance" vs "better battery" via desktop icon
Several months ago, I recall being able to click the battery icon in the top right while running on battery power to select the type of battery consumption I wanted, something to the effect of "better performance" and "better battery life." Now, when I click the battery icon, I cannot switch my power/performance preferences there. Did I do something, or did a software update remove this feature? How do I get it back?
Is this a unibody MacBook Pro? It appears so. If so, this setting is now controlled ONLY in the Energy Saver preference pane, because switching graphics processors requires a logout.
Similar Messages
-
Is there a better way to get rid of desktop icons?
My project has two things left: 1. get rid of desktop icons during the script 2. Ensure that windows don't open during the script.
so for part one, I have the following:
do shell script "defaults write com.apple.finder CreateDesktop -bool false"
do shell script "killall Finder" -- Finder must relaunch for the change to take effect
The problem is I can't change the desktop background picture when I use the above script. Without it, I can, but I have icons.
The second issue is how I set up a handler in the event that windows open during the script, blocking the desktop-image message. I want to close all windows and keep them closed during the script. Here is what I have so far, but it doesn't work:
on closeAllWindows()
	tell application "System Events"
		-- hiding the other visible apps approximates "close their windows";
		-- closing arbitrary windows reliably is application-specific
		set visible of (every process whose visible is true and name is not "Finder") to false
	end tell
	tell application "Finder" to close every window
end closeAllWindows
PowerBook G4 1.67 GHz, Mac OS X (10.4.6)
Even if you manage to change the static desktop picture, you're going to have to find a way to disable the dynamic desktop picture that many users will have activated. This "Change picture" feature is controlled by Dock.app, not Finder. If you quit or kill the Dock, the system automatically respawns it ad infinitum. The only way around that is to temporarily move or rename Dock.app, which requires administrator privileges. I'm not particularly inclined to offer my password to prank apps, are you?
If I were you I'd take a look at Salling Clicker. I know that sounds crazy, but hear me out: In the Salling distro there is a neat freeware FBA called "SEC Helper" that lets you post Growl-like notices to the user, without the hassle of relying on a Growl install. It has a full-screen option that you can set up just like a slideshow.
tell application "SEC Helper"
	"/Library/Desktop Pictures/Plants/Petals.jpg"
	show screen picture data (read (POSIX file result as alias) as picture) with full screen
end tell
-
Better Performance: ecryptfs or luks via loopback
Hi all,
I'm looking to encrypt a partition on my very old (circa 2000) desktop box cum server. It has an old PATA disk drive that holds the / partition (ext4). The partition, however, is really big for a / partition, so I was hoping to reclaim some of that space for an encrypted home directory. My two possible methods:
1) Make a pre-sized file on the / filesystem that I then access via loop-back and encrypt with Luks as detailed here: https://wiki.archlinux.org/index.php/Dm … oop_device
- Which filesystem would work best on top of this whole stack, e.g. btrfs (with lzo compression?), ext4, or something else?
2) Use ecryptfs on top of the / ext4 filesystem, as detailed here: https://wiki.archlinux.org/index.php/ECryptfs
My main question: which of these two options would likely have the better performance? I know LUKS tends to be faster on the whole (at least when encrypting a block device rather than a loop-back device), but does this advantage disappear when stacked on a loop-back device?
Hello! The SATA card should outperform the IDE drives, but it all depends on what you're doing, such as large or small file transfers. For some real improvement, hook a 10,000 rpm Raptor drive to the SATA card and put the OS and apps on it. Tom
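As a side note, step one of option 1 is just creating the pre-sized container file. Here is a minimal sketch (path and size are placeholders; the `losetup`/`cryptsetup luksFormat` steps that follow are not shown). Seeking to the last byte creates a sparse file, which is quick but means the container is not pre-filled:

```python
import os
import tempfile

def make_container(path: str, size_bytes: int) -> int:
    """Create a pre-sized (sparse) container file for a loop-back device."""
    with open(path, "wb") as f:
        f.seek(size_bytes - 1)  # jump to the last byte...
        f.write(b"\0")          # ...and write it, fixing the file's size
    return os.path.getsize(path)

# 64 MiB stand-in; a real encrypted home container would be much larger.
path = os.path.join(tempfile.gettempdir(), "crypthome.img")
size = make_container(path, 64 * 1024 * 1024)
```

For dm-crypt you would normally fill the container from /dev/urandom instead, so that old data and free space are indistinguishable from ciphertext.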
-
What's the Difference Between Better Performance and Better Quality
I am making a DVD of a wedding from a tape, and I usually create an image and then transfer it to my PC to burn, because my Mac Mini doesn't have a DVD burner. It is telling me that my project has exceeded 4.2 GB, which I figured would happen but dreaded. I was wondering what kind of quality loss would happen if I used Best Quality? Or is there a way to make a disc image that I don't know of? Any suggestions would be helpful.
http://docs.info.apple.com/article.html?artnum=164975
If you don't have a DL SuperDrive, I don't think you can create a disk image over two hours.
Well i was wondering what type of quality loss would happen if I used Best Quality?
Best Quality is just that, best quality encoding. I use it for every project. -
After updating from beta 9, to 10, to 11, my desktop icon still says beta 9. The Start menu has beta 10 and will not allow me to select "open file location", which, when accessed through Program Files (x86), is listed as beta 9. When checking uninstall programs, beta 11 is listed.
How do I remove references to beta 9 and 10?
Mr. Willener,
Thank you for your reply.
I did un-install and then re-install per the Adobe recommendations which are what you listed in your reply. The problem remained until I did a MS Windows restore to a restore point a month or so ago. The IE problems vanished. However, now I'm using an outdated and unsupported version that is lower than most of the web pages require for their sites, at least the ones I want to view (they all say I need to upgrade my Flash Player which, when I did, put me in this fix).
As to:
Pat Willener wrote:
Usuallyexasperated wrote:
it requires a...... 2.33 GHz processor!
You can safely ignore that requirement; it is simply not true.
Why would Adobe post a system requirement of 2.33 GHz for 11.2 if it simply were not true?
Message was edited by: Usuallyexasperated -
Which provides better performance?
I have an ATI HD3200 video chip built into my motherboard. I also have an old Nvidia 6600 GT 128 MB video card. Which will give me better performance?
I have desktop effects enabled in KDE 4.2 and I also like to play the occasional 3D game.
For desktop use with compositing I'd agree with Draje. For games, you can easily benchmark each card and see which is best. If you go this route, I strongly recommend using a lighter-weight WM just when you play games. My own computer went up 40 fps or so playing Nexuiz under Fluxbox as opposed to KDE 3.5.x.
-
Any general tips on getting better performance out of multi table insert?
I have been struggling with coding a multi-table insert, which is the first time I have ever used one, and my Oracle skills are pretty poor in general. Now that the query is built and works fine, I am sad to see it's quite slow.
I have checked numerous articles on optimizing, but the things I try don't seem to get me much better performance.
First let me describe my scenario to see if you agree that my performance is slow...
It's an INSERT ALL command, which ends up inserting into 5 separate tables, conditionally (at least 4 inserts, sometimes 5, but the fifth is the smallest table). Some stats on these tables follow:
Source table: 5.3M rows, ~150 columns wide. Parallel degree 4. everything else default.
Target table 1: 0 rows, 27 columns wide. Parallel 4. everything else default.
Target table 2: 0 rows, 63 columns wide. Parallel 4. default.
Target table 3: 0 rows, 33 columns wide. Parallel 4. default.
Target table 4: 0 rows, 9 columns wide. Parallel 4. default.
Target table 5: 0 rows, 13 columns wide. Parallel 4. default.
The parallelism is just about the only customization I have done myself. Why 4? I don't know; it's pretty arbitrary, to be honest.
Indexes?
Table 1 has 3 index + PK.
Table 2 has 0 index + FK + PK.
Table 3 has 4 index + FK + PK
Table 4 has 3 index + FK + PK
Table 5 has 4 index + FK + PK
None of the indexes are anything crazy, maybe 3 or 4 of all of them are on multiple columns, 2-3 max. The rest are on single columns.
The query itself looks something like this:
insert /*+ append */ all
when 1=1 then
into table1 (...) values (...)
into table2 (...) values (...)
when a=b then
into table3 (...) values (...)
when a=c then
into table3 (...) values (...)
when p=q then
into table4(...) values (...)
when x=y then
into table5(...) values (...)
select .... from source_table
Hints I tried are with append, without append, and parallel (though adding parallel seemed to make the query behave in serial, according to my session browser).
Now for the performance:
It does about 8,000 rows per minute on table1. So that means it should also have that much in table2, table3 and table4, and then a subset of that in table5.
Does that seem normal or am I expecting too much?
I find articles talking about millions of rows per minute... Obviously I don't think I can achieve that much, but maybe 30k or so on each table is a reasonable goal?
If it seems my performance is slow, what else do you think I should try? Is there any information I may try to get to see if maybe its a poorly configured database for this?
P.S. Is it possible to run this so that it commits every x rows or something? I had the heartbreaking experience of a network issue giving me a sudden "ORA-25402: transaction must roll back" after it had been running for 3.5 hours. So I lost all the progress it made and have to start over. Plus, I wonder if the sheer amount of data being queued for commit/rollback is causing some of the problem?
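On the commit-every-x-rows question: a single INSERT ALL is one statement and cannot commit partway through, but you can drive the load from client code in batches and commit per batch, so a dropped connection only costs you the current batch. Here is a sketch of the pattern against Python's DB-API (shown with sqlite3 so it is self-contained; the table and column names are invented):

```python
import sqlite3

def batched_insert(conn, rows, batch_size=10_000):
    """Insert rows, committing every batch_size rows so a failure
    mid-load only rolls back the current (uncommitted) batch."""
    cur = conn.cursor()
    pending = 0
    for row in rows:
        cur.execute("INSERT INTO target(id, val) VALUES (?, ?)", row)
        pending += 1
        if pending >= batch_size:
            conn.commit()
            pending = 0
    conn.commit()  # flush the final, possibly partial batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target(id INTEGER, val TEXT)")
batched_insert(conn, ((i, "x") for i in range(25)), batch_size=10)
count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
```

In Oracle you would get the same effect by slicing the source table (for example by key or ROWID ranges) and running one INSERT ALL plus COMMIT per slice; note that restarting then needs a way to tell which slices are already done.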
Edited by: trant on Jun 27, 2011 9:29 PM
Looks like there are about 54 sessions on my database; 7 of the sessions belong to me (2 taken by TOAD, 4 by my parallel slave sessions, and 1 by the master of those 4).
In v$session_event there are 546 rows; filtered to the SIDs of my current sessions and ordered by time_waited_micro desc (columns: SID, EVENT, TOTAL_WAITS, TOTAL_TIMEOUTS, TIME_WAITED, AVERAGE_WAIT, MAX_WAIT, TIME_WAITED_MICRO, EVENT_ID, WAIT_CLASS_ID, WAIT_CLASS#, WAIT_CLASS):
510 events in waitclass Other 30670 9161 329759 10.75 196 3297590639 1736664284 1893977003 0 Other
512 events in waitclass Other 32428 10920 329728 10.17 196 3297276553 1736664284 1893977003 0 Other
243 events in waitclass Other 21513 5 329594 15.32 196 3295935977 1736664284 1893977003 0 Other
223 events in waitclass Other 21570 52 329590 15.28 196 3295898897 1736664284 1893977003 0 Other
241 row cache lock 1273669 0 42137 0.03 267 421374408 1714089451 3875070507 4 Concurrency
241 events in waitclass Other 614793 0 34266 0.06 12 342660764 1736664284 1893977003 0 Other
241 db file sequential read 13323 0 3948 0.3 13 39475015 2652584166 1740759767 8 User I/O
241 SQL*Net message from client 7 0 1608 229.65 1566 16075283 1421975091 2723168908 6 Idle
241 log file switch completion 83 0 459 5.54 73 4594763 3834950329 3290255840 2 Configuration
241 gc current grant 2-way 5023 0 159 0.03 0 1591377 2685450749 3871361733 11 Cluster
241 os thread startup 4 0 55 13.82 26 552895 86156091 3875070507 4 Concurrency
241 enq: HW - contention 574 0 38 0.07 0 378395 1645217925 3290255840 2 Configuration
512 PX Deq: Execution Msg 3 0 28 9.45 28 283374 98582416 2723168908 6 Idle
243 PX Deq: Execution Msg 3 0 27 9.1 27 272983 98582416 2723168908 6 Idle
223 PX Deq: Execution Msg 3 0 25 8.26 24 247673 98582416 2723168908 6 Idle
510 PX Deq: Execution Msg 3 0 24 7.86 23 235777 98582416 2723168908 6 Idle
243 PX Deq Credit: need buffer 1 0 17 17.2 17 171964 2267953574 2723168908 6 Idle
223 PX Deq Credit: need buffer 1 0 16 15.92 16 159230 2267953574 2723168908 6 Idle
512 PX Deq Credit: need buffer 1 0 16 15.84 16 158420 2267953574 2723168908 6 Idle
510 direct path read 360 0 15 0.04 4 153411 3926164927 1740759767 8 User I/O
243 direct path read 352 0 13 0.04 6 134188 3926164927 1740759767 8 User I/O
223 direct path read 359 0 13 0.04 5 129859 3926164927 1740759767 8 User I/O
241 PX Deq: Execute Reply 6 0 13 2.12 10 127246 2599037852 2723168908 6 Idle
510 PX Deq Credit: need buffer 1 0 12 12.28 12 122777 2267953574 2723168908 6 Idle
512 direct path read 351 0 12 0.03 5 121579 3926164927 1740759767 8 User I/O
241 PX Deq: Parse Reply 7 0 9 1.28 6 89348 4255662421 2723168908 6 Idle
241 SQL*Net break/reset to client 2 0 6 2.91 6 58253 1963888671 4217450380 1 Application
241 log file sync 1 0 5 5.14 5 51417 1328744198 3386400367 5 Commit
510 cursor: pin S wait on X 3 2 2 0.83 1 24922 1729366244 3875070507 4 Concurrency
512 cursor: pin S wait on X 2 2 2 1.07 1 21407 1729366244 3875070507 4 Concurrency
243 cursor: pin S wait on X 2 2 2 1.06 1 21251 1729366244 3875070507 4 Concurrency
241 library cache lock 29 0 1 0.05 0 13228 916468430 3875070507 4 Concurrency
241 PX Deq: Join ACK 4 0 0 0.07 0 2789 4205438796 2723168908 6 Idle
241 SQL*Net more data from client 6 0 0 0.04 0 2474 3530226808 2000153315 7 Network
241 gc current block 2-way 5 0 0 0.04 0 2090 111015833 3871361733 11 Cluster
241 enq: KO - fast object checkpoint 4 0 0 0.04 0 1735 4205197519 4217450380 1 Application
241 gc current grant busy 4 0 0 0.03 0 1337 2277737081 3871361733 11 Cluster
241 gc cr block 2-way 1 0 0 0.06 0 586 737661873 3871361733 11 Cluster
223 db file sequential read 1 0 0 0.05 0 461 2652584166 1740759767 8 User I/O
223 gc current block 2-way 1 0 0 0.05 0 452 111015833 3871361733 11 Cluster
241 latch: row cache objects 2 0 0 0.02 0 434 1117386924 3875070507 4 Concurrency
241 enq: TM - contention 1 0 0 0.04 0 379 668627480 4217450380 1 Application
512 PX Deq: Msg Fragment 4 0 0 0.01 0 269 77145095 2723168908 6 Idle
241 latch: library cache 3 0 0 0.01 0 243 589947255 3875070507 4 Concurrency
510 PX Deq: Msg Fragment 3 0 0 0.01 0 215 77145095 2723168908 6 Idle
223 PX Deq: Msg Fragment 4 0 0 0 0 145 77145095 2723168908 6 Idle
241 buffer busy waits 1 0 0 0.01 0 142 2161531084 3875070507 4 Concurrency
243 PX Deq: Msg Fragment 2 0 0 0 0 84 77145095 2723168908 6 Idle
241 latch: cache buffers chains 4 0 0 0 0 73 2779959231 3875070507 4 Concurrency
241 SQL*Net message to client 7 0 0 0 0 51 2067390145 2000153315 7 Network
(yikes, is there a way to wrap that in equivalent of other forums' tag?)
v$session_wait;
223 835 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 10 WAITING
241 22819 row cache lock cache id 13 000000000000000D mode 0 00 request 5 0000000000000005 3875070507 4 Concurrency -1 0 WAITED SHORT TIME
243 747 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 7 WAITING
510 10729 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 2 WAITING
512 12718 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 4 WAITING
v$sess_io:
223 0 5779 5741 0 0
241 38773810 2544298 15107 27274891 0
243 0 5702 5688 0 0
510 0 5729 5724 0 0
512 0 5682 5678 0 0 -
How to get better performance here
Hi there,
The code below is used within a loop. How can I modify it to get better performance?
SELECT knumv kposn kwert FROM konv
INTO CORRESPONDING FIELDS OF lt_konv
WHERE knumv EQ lt_output-knumv
AND kposn EQ lt_output-posnr
AND kschl EQ 'VPRS'.
COLLECT lt_konv.
ENDSELECT.
Thanks in advance.
The better solution for the SELECT statement would be to use the aggregate function SUM on the field kwert:
SELECT knumv kposn SUM( kwert )
  FROM konv
  INTO CORRESPONDING FIELDS OF TABLE lt_konv
  WHERE knumv EQ lt_output-knumv
    AND kposn EQ lt_output-posnr
    AND kschl EQ 'VPRS'
  GROUP BY knumv kposn.
The SELECT is inside the loop over lt_output.
Aggregate functions and FOR ALL ENTRIES cannot be combined; FOR ALL ENTRIES is a SELECT DISTINCT!
So you must keep the loop around the SELECT and you can't use FOR ALL ENTRIES, but this is OK.
Siegfried -
Which approach is having better performance in terms of time
For large amounts of data from more than two related tables, which of the following two approaches performs better in Oracle in terms of time (i.e., which takes less time)?
1. A single complex query
2. A bunch of simple queries
Because there is a relationship between the tables in the simple queries, if you adopt this approach you will have to JOIN in some way, probably via a FOR LOOP in PL/SQL.
In my experience, a single complex SQL statement is the best way to go, join in the database and return the set of data required.
SQL rules! -
Which method has a better performance ?
Hello !
I'm using entity framework , and I have several cases where I should run a query than return some parent items , and after I display these parents and the related children in one report.
I want to know which of these methods have the better performance : ( or is there any other better method ??? )
Method1: (the child collections are loaded later, using lazy loading)
Dim lista as IQueryable(Of MyObj) = (From t In context.MyObjs Where(..condition..) select t).Tolist
Method2:
Dim lista as IQueryable(Of MyObj) = (From t In context.MyObjs Where(..condition..) _
.Include(Function(t2) t2.Childs1) _
.Include(Function(t2) t2.Childs2) _
.Include(Function(t2) t2.Childs2.Child22) _
.Include(Function(t2) t2.Childs1.Childs11) _
.Include(Function(t2) t2.Childs1.Childs12) _
Select t).ToList
Method3:
Dim lista as IQueryable(Of MyObj)
Dim lst = (From t2 In context.MyObjs Where (..condition..) Select New With _
    {.Parent = t2, _
     .ch1 = t2.Childs1, _
     .ch2 = t2.Childs2, _
     .ch21 = t2.Childs2.Child21, _
     .ch11 = t2.Childs1.Childs11, _
     .ch12 = t2.Childs1.Childs12}).ToList
lista = lst.Select(Function(t2) t2.Parent)
I noticed that the first method causes the report to open very slowly. Also, I read somewhere that Include() causes repeats of parent items?
But anyway, I want a professional opinion in general on the three methods.
Thank you!
Hello,
As far as I know, the Entity Framework offers two ways to load related data after the fact. The first is called lazy loading and, with the appropriate settings, it happens automatically. In your case, your first method uses lazy loading, while the second and third are actually the same: both of them are eager loading. (In VB, if you set "DbContext.Database.Log = Sub(val) Diagnostics.Trace.WriteLine(val)" to see the actual generated SQL statements, you will see that the second and third queries generate a join.) Since you mention that the lazy loading way performs poorly, you could use either the second or the third one.
>>Also I read somewhere that Include() causes repeats of parent items?
I'm not sure if you are worried it would first use lazy loading and then eager loading; in my test I do not see this behavior. The Entity Framework seems smart enough to use one mode to load data at a time. Or you could disable lazy loading when using eager loading:
context.ContextOptions.LazyLoadingEnabled = false
Regards.
-
Better performance setting...
Hi everyone!
I keep my PowerBook on pretty much all the time because I need it for emails. It's now 3 years old and although it runs well it seems a bit faster with the Better Performance setting than the Normal. Is there any harm in leaving it with this setting all the time?
Thanks,
Reg
Thanks for the reply!
OK, that makes me feel better... Yes, sometimes if I need better battery life I will lower it to Normal when unplugged, but most of the time I use it plugged in anyway...
Now I can feel good about always using Better Performance... I don't know if others can notice the difference, but I can, with all the latest software upgrades...
Thanks again,
Reg -
Please help to modify this query for better performance
Please help me rewrite this query for better performance. It is taking a long time to execute.
Table t_t_bil_bil_cycle_change contains 1,200,000 rows and table t_acctnumberTab contains 200,000 rows.
I have created an index on ACCOUNT_ID.
Query is shown below
update rbabu.t_t_bil_bil_cycle_change a
set account_number =
  ( select distinct b.account_number
    from rbabu.t_acctnumberTab b
    where a.account_id = b.account_id )
Table structure is shown below
SQL> DESC t_acctnumberTab;
Name Type Nullable Default Comments
ACCOUNT_ID NUMBER(10)
ACCOUNT_NUMBER VARCHAR2(24)
SQL> DESC t_t_bil_bil_cycle_change;
Name Type Nullable Default Comments
ACCOUNT_ID NUMBER(10)
ACCOUNT_NUMBER VARCHAR2(24) Y
Ishan's solution is good. I would avoid updating rows which already have the right value - it's a waste of time.
You should have a UNIQUE or PRIMARY KEY constraint on t_acctnumberTab.account_id
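The DECODE comparison in the MERGE below exists purely to skip rows that already hold the correct value. The effect can be sketched in miniature: sqlite3 has no MERGE, so a null-safe IS NOT filter on a correlated UPDATE plays the same role here (table and column names mirror the thread, data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cycle_change(account_id INTEGER PRIMARY KEY, account_number TEXT);
    CREATE TABLE acct(account_id INTEGER PRIMARY KEY, account_number TEXT);
    INSERT INTO cycle_change VALUES (1, 'A-100'), (2, 'WRONG'), (3, NULL);
    INSERT INTO acct         VALUES (1, 'A-100'), (2, 'A-200'), (3, 'A-300');
""")

# Update only rows whose stored number actually differs. IS NOT is
# sqlite's null-safe inequality, standing in for the DECODE(...) = 1 trick.
cur = conn.execute("""
    UPDATE cycle_change
    SET account_number = (SELECT a.account_number FROM acct a
                          WHERE a.account_id = cycle_change.account_id)
    WHERE account_number IS NOT (SELECT a.account_number FROM acct a
                                 WHERE a.account_id = cycle_change.account_id)
""")
touched = cur.rowcount  # rows 2 and 3 change; row 1 is left alone
```

Skipping already-correct rows matters because every touched row costs redo/undo even when the new value equals the old one.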
merge into rbabu.t_t_bil_bil_cycle_change a
using
( select distinct account_number, account_id
  from rbabu.t_acctnumberTab
) b
on (     a.account_id = b.account_id
     and decode(a.account_number, b.account_number, 0, 1) = 1 )
when matched then
update set a.account_number = b.account_number -
What is the best way to replace the Inline Views for better performance ?
Hi,
I am using Oracle 9i ,
What is the best way to replace the Inline Views for better performance. I see there are lot of performance lacking with Inline views in my queries.
Please suggest.
Raj
The WITH clause plus the /*+ MATERIALIZE */ hint can do you good.
See the test case below.
SQL> create table hx_my_tbl as select level id, 'karthick' name from dual connect by level <= 5
2 /
Table created.
SQL> insert into hx_my_tbl select level id, 'vimal' name from dual connect by level <= 5
2 /
5 rows created.
SQL> create index hx_my_tbl_idx on hx_my_tbl(id)
2 /
Index created.
SQL> commit;
Commit complete.
SQL> exec dbms_stats.gather_table_stats(user,'hx_my_tbl',cascade=>true)
PL/SQL procedure successfully completed.
Now this is a normal inline view:
SQL> select a.id, b.id, a.name, b.name
2 from (select id, name from hx_my_tbl where id = 1) a,
3 (select id, name from hx_my_tbl where id = 1) b
4 where a.id = b.id
5 and a.name <> b.name
6 /
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=7 Card=2 Bytes=48)
1 0 HASH JOIN (Cost=7 Card=2 Bytes=48)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'HX_MY_TBL' (TABLE) (Cost=3 Card=2 Bytes=24)
3 2 INDEX (RANGE SCAN) OF 'HX_MY_TBL_IDX' (INDEX) (Cost=1 Card=2)
4 1 TABLE ACCESS (BY INDEX ROWID) OF 'HX_MY_TBL' (TABLE) (Cost=3 Card=2 Bytes=24)
5 4 INDEX (RANGE SCAN) OF 'HX_MY_TBL_IDX' (INDEX) (Cost=1 Card=2)
Now I use the WITH clause with the MATERIALIZE hint:
SQL> with my_view as (select /*+ MATERIALIZE */ id, name from hx_my_tbl where id = 1)
2 select a.id, b.id, a.name, b.name
3 from my_view a,
4 my_view b
5 where a.id = b.id
6 and a.name <> b.name
7 /
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=8 Card=1 Bytes=46)
1 0 TEMP TABLE TRANSFORMATION
2 1 LOAD AS SELECT
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'HX_MY_TBL' (TABLE) (Cost=3 Card=2 Bytes=24)
4 3 INDEX (RANGE SCAN) OF 'HX_MY_TBL_IDX' (INDEX) (Cost=1 Card=2)
5 1 HASH JOIN (Cost=5 Card=1 Bytes=46)
6 5 VIEW (Cost=2 Card=2 Bytes=46)
7 6 TABLE ACCESS (FULL) OF 'SYS_TEMP_0FD9D6967_3C610F9' (TABLE (TEMP)) (Cost=2 Card=2 Bytes=24)
8 5 VIEW (Cost=2 Card=2 Bytes=46)
9 8 TABLE ACCESS (FULL) OF 'SYS_TEMP_0FD9D6967_3C610F9' (TABLE (TEMP)) (Cost=2 Card=2 Bytes=24)
Here you can see the table is accessed only once; after that, only the result set generated by the WITH clause is accessed.
Thanks,
Karthick. -
In the below queries which gives better performance
Hi All,
Of the two queries below, which gives better performance?
The requirement is: if all 3 score columns are null, I need to assign a negative value (-9999); otherwise some positive value.
1)
select case when count(CUST_score1)+count(CUST_score2)+count(CUST_score3)=0 then '-111111'
else 11 end
from
customer
where subscriber_id=1050
and cust_system_code='1882484'
2)
select case when CUST_score1 is null and CUST_score2 is null and CUST_score3 is null then '-9999'
else '11' end
from
customer
where subscriber_id=1050
and cust_system_code='1882484'
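One practical note on the two queries above: they are not interchangeable when the WHERE clause matches more than one row, because the aggregate version collapses everything to a single output row while the per-row version returns one row per match. A sqlite3 sketch (data invented) makes this visible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer(subscriber_id INTEGER, cust_system_code TEXT,
                          cust_score1 REAL, cust_score2 REAL, cust_score3 REAL);
    -- two rows for the same key: one all-NULL, one partially scored
    INSERT INTO customer VALUES (1050, '1882484', NULL, NULL, NULL);
    INSERT INTO customer VALUES (1050, '1882484', 0.7,  NULL, NULL);
""")

where = "WHERE subscriber_id = 1050 AND cust_system_code = '1882484'"

# Query 1: COUNT aggregates over ALL matching rows -> exactly one row out.
q1 = conn.execute(f"""
    SELECT CASE WHEN COUNT(cust_score1) + COUNT(cust_score2)
                     + COUNT(cust_score3) = 0 THEN '-9999' ELSE '11' END
    FROM customer {where}
""").fetchall()

# Query 2: CASE is evaluated per row -> one output row PER matching row.
q2 = conn.execute(f"""
    SELECT CASE WHEN cust_score1 IS NULL AND cust_score2 IS NULL
                     AND cust_score3 IS NULL THEN '-9999' ELSE '11' END
    FROM customer {where}
""").fetchall()
```

With these two rows, query 1 yields a single '11' (one of the scores is non-null somewhere), while query 2 yields both a '-9999' and an '11'.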
Please help; we have a lot of data in the customer table, so I need to confirm which is better.
Regards,
Chanda
user546757 wrote:
Hi All,
In the below two queries which gives better performance.
Requirement is I need to find if all the 3 score columns are null then I need to assign -ve value -9999 else some +ve value 2
1)
select case when count(CUST_score1)+count(CUST_score2)+count(CUST_score3)=0 then '-111111'
else 11 end
from
customer
where subscriber_id=1050
and cust_system_code='1882484'
2)
select case when CUST_score1 is null and CUST_score2 is null and CUST_score3 is null then '-9999'
else '11' end
from
customer
where subscriber_id=1050
and cust_system_code='1882484'
Please help, because we have more data in table customer so I need to confirm which is better.
Regards,
Chanda
The two statements aren't equivalent. If you know that your WHERE condition restricts the result to a single row, then there is no point in doing a COUNT, as that introduces an additional aggregate function that isn't required for a single row. If the WHERE condition matches multiple rows, then the second query will return multiple rows whereas the first query returns one row, so they don't do the same thing anyway. -
P45 Platinum [MS-7512 PCB 1.x]: BETA & Performance BIOS Versions
I will upload the latest Performance BIOS I could get my hands on. Unfortunately, I cannot provide changelogs at this point.
The BIOS is labeled A7512IMS.P0D and will appear as v25.0b13 on the POST Screen and in BIOS Setup. The BIOS was released on June 13th 2008.
I have just flashed it on a P45 Platinum Board that I have here for testing and found that memory compatibility has somewhat improved.
For those that are considering to give this BIOS a try: remember, the risk is yours and yours alone!
The archive contains a flashing utility and can be used with the MSIHQ Forum USB Flashing tool (Method I):
>>Use the MSI HQ Forum USB flasher<<
/EDIT: I added the latest BETA BIOS of the official BIOS series: A7512IMS.115. It was released on June 13th as well. Again, use this BIOS at your own risk and don't complain if anything goes wrong.
Quote
If the only versions that work with certain hardware combinations that are claimed to be compatible are Beta or Performance Versions then they should be available for public use
Every official BIOS-Version is based on the preceding BETA-BIOS-Versions. The Final A7512IMS.110-Release will be based on the preceding A7512IMS.11x-BETAs. Sometimes, BETA-BIOS-Versions that have proven themselves are renamed and released as an official version. So the changes will be made public at some point. I uploaded the non-official versions precisely because they may be of help to others even though they are not available on official MSI Download Sites.
Quote
I feel like a child who has been chastised for daring to question the standard "because I said so"
I opened this thread to share (not necessarily stable) BETA BIOS releases with users who are willing to test BIOS versions that have not been officially released and who are aware of the risks involved. Feedback (positive & negative) on system behaviour is welcome and in the best interest of everyone who has this board. The intention is to offer alternatives to help users that have problems with official versions and to collect experience-based information.
I did not open this thread to discuss MSI's BIOS Release Policies (nobody here has any true influence on it and I am aware that it does not meet the wishes and expectations of every user) and I did not start this topic to talk about the experience different users have had with MSI Technical Support. If you want to discuss these questions, open your own topic and if you have concrete suggestions that concern MSI policies and procedures, direct them to MSI directly (nobody here works for MSI).
This said, I invite you to share your own experiences with specific BIOS Versions in this thread to provide information that may be helpful to other users.
Quote
except in this case it is the child doing the chastising!
If you want to insult me because you feel personally offended by the way I am trying to point out that I do not want to let this thread about specific BIOS Versions for the P45 Platinum mainboard be turned into a general discussion of what MSI should do to enhance individual customer satisfaction, please do that via PM and I will gladly respond.
Now, back to topic: I will post future Performance & BETA BIOS releases as soon as I get my hands on them. So far, vP0D & v1.1b5 are the latest versions I have had access to, and I have no idea when there will be further updates or when a final, official version can be expected.