Better performance setting...

Hi everyone!
I keep my PowerBook on pretty much all the time because I need it for email. It's now 3 years old and although it runs well, it seems a bit faster with the Better Performance setting than with Normal. Is there any harm in leaving it on this setting all the time?
Thanks,
Reg

Thanks for the reply!
Ok, that makes me feel better. Yes, sometimes if I need better battery life I will lower it to Normal when unplugged, but most of the time I use it plugged in anyway.
Now I can feel good about always using Better Performance. I don't know if others can notice the difference, but I can, especially with all the latest software upgrades.
Thanks again,
Reg

Similar Messages

  • How to setup airport time capsule to get better performance?

    I need to set up my wireless network with my new AirPort Time Capsule 3TB as the primary base station to get better performance. I have a cable modem as the primary device getting the signal (5 Mbps) from the ISP. My network has one MacBook Pro, a MacBook Air, a Mac mini, 2 iPads, and 2 iPhones, but they are not all connected at the same time.
    What is the best way to do that?
    Which Wi-Fi channel should I choose?

    What is the best way to do that?
    Use Ethernet; wireless performance is never as good as Ethernet.
    Which Wi-Fi channel should I choose?
    There is no such thing as a best channel.
    Leave everything on auto and see if it gives you your full download speed.
    Use 5 GHz and keep everything close to the TC for the best wireless speed.
    If you are far away it will drop back to 2.4 GHz, which is slower.
    Once you reach your Internet speed nothing is going to make it faster, so you are worrying about nothing.

  • In PI 7.1 better performance is reached using RFC or Proxy?

    Hello Experts,
    As of PI 7.1, which would be the better option for performance?
    1) Proxy, which goes through the Integration Engine, omitting the Advanced Adapter Engine
    2) RFC, which goes through the AAE, omitting the Integration Engine
    As we know, there are a lot of advantages of proxies over RFC:
    1. Proxy communication always bypasses the Adapter Engine and interacts directly with the application system and Integration Engine, so it gives us better performance.
    2. Proxies communicate with the XI server by means of native SOAP calls over HTTP.
    3. It is easy to handle messages with ABAP programming if it is an ABAP proxy.
    4. Proxies are good for large volumes of data; we can catch and persist the errors (both system and application faults) generated by the proxy setting.
    Thanks in Advance
    Rajeev

    Hey
    More than performance, it's a question of requirements.
    There are several restrictions you need to consider before using the AAE. To name a few:
    The IDoc and HTTP adapters won't be available
    No support for ABAP mapping
    No support for BPM
    No support for proxies
    No support for multi-mapping or content-based routing (in the first release)
    So if you want to use any of the above, you can't use the AAE in the first place. But performance is significantly improved, up to 4 times better than the simple AE-IE path.
    /people/william.li/blog/2008/01/10/advanced-adapter-engine-configuration-in-pi-71
    Check the above blog and the article mentioned in it.
    Now coming to proxies: they support all of the above, and performance is not bad either.
    So it all boils down to what your requirements are :)
    Thanks
    Aamir

  • Which approach is having better performance in terms of time

    For a large amount of data from more than two different tables that are related, which of the following two approaches performs better in Oracle in terms of time (i.e., which takes less time)?
    1. A single complex query
    2. A bunch of simple queries

    Because there is a relationship between the tables, if you adopt the simple-queries approach you will have to JOIN in some way anyway, probably via a FOR LOOP in PL/SQL.
    In my experience, a single complex SQL statement is the best way to go: join in the database and return just the set of data required.
    SQL rules!
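    A quick way to see the difference is to compare one JOIN against a loop of per-row queries. This is a minimal sketch using Python's sqlite3; the table and column names are invented for illustration:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
        CREATE TABLE emp  (emp_id INTEGER PRIMARY KEY, emp_name TEXT,
                           dept_id INTEGER REFERENCES dept(dept_id));
        INSERT INTO dept VALUES (1, 'SALES'), (2, 'IT');
        INSERT INTO emp  VALUES (10, 'ANN', 1), (11, 'BOB', 2), (12, 'CAT', 1);
    """)

    # Approach 1: a single query that joins in the database -- one round trip.
    joined = conn.execute("""
        SELECT e.emp_name, d.dept_name
        FROM emp e JOIN dept d ON d.dept_id = e.dept_id
        ORDER BY e.emp_id
    """).fetchall()

    # Approach 2: a bunch of simple queries -- one extra query per row,
    # which is what a PL/SQL FOR LOOP over the driving table amounts to.
    looped = []
    for emp_name, dept_id in conn.execute(
            "SELECT emp_name, dept_id FROM emp ORDER BY emp_id"):
        dept_name = conn.execute(
            "SELECT dept_name FROM dept WHERE dept_id = ?", (dept_id,)
        ).fetchone()[0]
        looped.append((emp_name, dept_name))

    assert joined == looped  # same answer; the join gets it in one statement
    ```

    Both approaches return identical rows; against a real remote Oracle instance the per-row version pays the network latency once per row, which is why the single statement usually wins.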

  • Scale out SSAS server for better performance

    Hi,
    I have a SharePoint farm running the PerformancePoint service on a server where Analysis Services and Reporting Services are installed, and we have Analysis Services databases and cubes, plus a WFE server where the Secure Store service runs.
    We have:
    1) an application server + domain controller
    2) two WFEs
    3) a SQL Server for SharePoint
    4) an SSAS server (Analysis Services DBs + Reporting Services)
    How do I scale out my SSAS server for better performance?
    adil

    Just trying to get a definitive answer to the question: can we use a shared VHDX in a SOFS cluster which will be used to store VHDX files?
    We have a 2012 R2 RDS solution and store the User Profile Disks (UPDs) on a SOFS cluster that uses "traditional" storage from a SAN. We are planning on creating a new SOFS cluster and wondered if we can use a shared VHDX instead of a CSV as the storage that will then be used to store the UPDs (one VHDX file per user).
    Cheers for now
    Russell
    Sure you can do it. See:
    Deploy a Guest Cluster Using a Shared Virtual Hard Disk
    http://technet.microsoft.com/en-us/library/dn265980.aspx
    Scenario 2: Hyper-V failover cluster using file-based storage in a separate Scale-Out File Server
    This scenario uses Server Message Block (SMB) file-based storage as the location of the shared .vhdx files. You must deploy a Scale-Out File Server and create an SMB file share as the storage location. You also need a separate Hyper-V failover cluster.
    The following table describes the physical host prerequisites.
    Cluster type: Scale-Out File Server
    Requirements:
    -At least two servers that are running Windows Server 2012 R2.
    -The servers must be members of the same Active Directory domain.
    -The servers must meet the requirements for failover clustering. For more information, see Failover Clustering Hardware Requirements and Storage Options and Validate Hardware for a Failover Cluster.
    -The servers must have access to block-level storage, which you can add as shared storage to the physical cluster. This storage can be iSCSI, Fibre Channel, SAS, or clustered storage spaces that use a set of shared SAS JBOD enclosures.

  • ECC 6.0 SPRO has several notes related to BI performance setting

    On my ECC 6.0 instance there are several nodes related to BI performance. After reading the notes attached to each node and the referenced SAP Notes, I am perplexed by the settings and what is considered required versus merely good to have.
    Is there a note or some kind of document that could help provide a technical basis for these parameter settings?
    Under Performance Setting-->
    -Quantity Conversion: Set Buffer Size
    -Maintain Runtime Parameters of DataStore Objects
    -Settings for InfoSets
    -Settings for Database Interfaces
    -Settings for Database Interfaces (Oracle)
    -Parameters for Aggregates
    -Global Cache Settings
    -Settings for Analysis Processes
    Does anyone have any thoughts on changing these parameters to get better performance between our ECC and BW instances?
    Thanks
    Weyland Yutani

    Hi Weyland,
    a good starting point for BI Performance is the following SDN site:
    Performance Reporting in BW [original link is broken]
    See also these useful SAP Notes:
    892513 - Consulting: Performance: Loading data, no of pkg, r
    409641 - Examples of packet size dependency on ROIDOCPRMS
    417307 - Extractor package size: Collective note for applications
    857998 - Number range buffering for DIM IDs and SIDs
    Rgds,
    Colum

  • Please help to modify this query for better performance

    Please help me rewrite this query for better performance; it is taking a long time to execute.
    Table t_t_bil_bil_cycle_change contains 1,200,000 rows and table t_acctnumberTab contains 200,000 rows.
    I have created an index on ACCOUNT_ID.
    Query is shown below
    update rbabu.t_t_bil_bil_cycle_change a
       set account_number =
           ( select distinct b.account_number
             from rbabu.t_acctnumberTab b
             where a.account_id = b.account_id );
    The table structure is shown below:
    SQL> DESC t_acctnumberTab;
    Name           Type         Nullable Default Comments
    ACCOUNT_ID     NUMBER(10)                            
    ACCOUNT_NUMBER VARCHAR2(24)
    SQL> DESC t_t_bil_bil_cycle_change;
    Name                    Type         Nullable Default Comments
    ACCOUNT_ID              NUMBER(10)                            
    ACCOUNT_NUMBER          VARCHAR2(24) Y    

    Ishan's solution is good. I would also avoid updating rows which already have the right value; that's a waste of time.
    You should have a UNIQUE or PRIMARY KEY constraint on t_acctnumberTab.account_id. Something like this (note that Oracle does not allow the MERGE ON clause to reference a column that the UPDATE branch modifies, so the "only changed rows" filter belongs in the WHERE of the update):
    merge into rbabu.t_t_bil_bil_cycle_change a
    using ( select distinct account_number, account_id
            from   rbabu.t_acctnumberTab
          ) b
    on    ( a.account_id = b.account_id )
    when matched then
      update set a.account_number = b.account_number
      where  decode(a.account_number, b.account_number, 0, 1) = 1;
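    The "skip rows that already hold the right value" idea can also be illustrated outside Oracle. Here is a minimal sketch with Python's sqlite3 and invented data; SQLite has no MERGE, so a filtered correlated UPDATE plays the same role (the null-safe IS NOT stands in for the DECODE trick):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE t_acct  (account_id INTEGER PRIMARY KEY, account_number TEXT);
        CREATE TABLE t_cycle (account_id INTEGER, account_number TEXT);
        INSERT INTO t_acct  VALUES (1, 'A-100'), (2, 'A-200');
        INSERT INTO t_cycle VALUES (1, NULL), (2, 'A-200'), (1, 'OLD');
    """)

    # Update only rows whose account_number is missing or different;
    # rows that already hold the right value are left untouched.
    cur = conn.execute("""
        UPDATE t_cycle
        SET account_number = (SELECT a.account_number
                              FROM t_acct a
                              WHERE a.account_id = t_cycle.account_id)
        WHERE account_number IS NOT (SELECT a.account_number
                                     FROM t_acct a
                                     WHERE a.account_id = t_cycle.account_id)
    """)
    print(cur.rowcount)  # 2 -- the row that already matched was skipped
    ```

    At 1.2 million target rows, skipping the already-correct rows avoids generating undo and redo for updates that change nothing.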

  • What is the best way to replace the Inline Views for better performance ?

    Hi,
    I am using Oracle 9i ,
    What is the best way to replace inline views for better performance? I am seeing a lot of performance problems with inline views in my queries.
    Please suggest.
    Raj

    The WITH clause plus the /*+ MATERIALIZE */ hint can do you good.
    See the test case below.
    SQL> create table hx_my_tbl as select level id, 'karthick' name from dual connect by level <= 5
    2 /
    Table created.
    SQL> insert into hx_my_tbl select level id, 'vimal' name from dual connect by level <= 5
    2 /
    5 rows created.
    SQL> create index hx_my_tbl_idx on hx_my_tbl(id)
    2 /
    Index created.
    SQL> commit;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user,'hx_my_tbl',cascade=>true)
    PL/SQL procedure successfully completed.
    Now this is a normal inline view:
    SQL> select a.id, b.id, a.name, b.name
    2 from (select id, name from hx_my_tbl where id = 1) a,
    3 (select id, name from hx_my_tbl where id = 1) b
    4 where a.id = b.id
    5 and a.name <> b.name
    6 /
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=7 Card=2 Bytes=48)
    1 0 HASH JOIN (Cost=7 Card=2 Bytes=48)
    2 1 TABLE ACCESS (BY INDEX ROWID) OF 'HX_MY_TBL' (TABLE) (Cost=3 Card=2 Bytes=24)
    3 2 INDEX (RANGE SCAN) OF 'HX_MY_TBL_IDX' (INDEX) (Cost=1 Card=2)
    4 1 TABLE ACCESS (BY INDEX ROWID) OF 'HX_MY_TBL' (TABLE) (Cost=3 Card=2 Bytes=24)
    5 4 INDEX (RANGE SCAN) OF 'HX_MY_TBL_IDX' (INDEX) (Cost=1 Card=2)
    Now I use WITH together with the MATERIALIZE hint:
    SQL> with my_view as (select /*+ MATERIALIZE */ id, name from hx_my_tbl where id = 1)
    2 select a.id, b.id, a.name, b.name
    3 from my_view a,
    4 my_view b
    5 where a.id = b.id
    6 and a.name <> b.name
    7 /
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=8 Card=1 Bytes=46)
    1 0 TEMP TABLE TRANSFORMATION
    2 1 LOAD AS SELECT
    3 2 TABLE ACCESS (BY INDEX ROWID) OF 'HX_MY_TBL' (TABLE) (Cost=3 Card=2 Bytes=24)
    4 3 INDEX (RANGE SCAN) OF 'HX_MY_TBL_IDX' (INDEX) (Cost=1 Card=2)
    5 1 HASH JOIN (Cost=5 Card=1 Bytes=46)
    6 5 VIEW (Cost=2 Card=2 Bytes=46)
    7 6 TABLE ACCESS (FULL) OF 'SYS_TEMP_0FD9D6967_3C610F9' (TABLE (TEMP)) (Cost=2 Card=2 Bytes=24)
    8 5 VIEW (Cost=2 Card=2 Bytes=46)
    9 8 TABLE ACCESS (FULL) OF 'SYS_TEMP_0FD9D6967_3C610F9' (TABLE (TEMP)) (Cost=2 Card=2 Bytes=24)
    Here you can see the table is accessed only once; after that, only the result set generated by the WITH clause is accessed.
    Thanks,
    Karthick.

  • Agents: Method or Rule - WF more robust vs. better performance

    Hi all,
    we are in ECC 6.0 and building several workflows to cater for HR processes. These workflows will be implemented globally.
    Earlier this year, this thread talked a bit about this and I am using some of the statements from it in this post:
    Responsable agents: What's better? Role or expression (variable)
    We are writing a function module to Find Manager. What I am trying to determine is the best way to use this function module. I can either create a method to call it, or I can create a rule (called 'role' up to 4.6) to call it or I can create a virtual attribute to call it.
    This function module will be called a lot as most of the workflows will involve the employee's Manager.
    If implemented as a method, an RFC is used and I will need 2 steps in the WF - but I will be able to 'trap' any errors returned by the function module, e.g. manager not found, and use the returned exceptions within the workflow. The method can be implemented in a generic WF_UTILITY class/BOR, it doesn't need to be linked to a particular class/BOR.
    If implemented as a rule, it is 1 step instead of 2 - less logs, better performance. But if the rule fails, the workflow goes into error. I do not think there is a way to avoid the workflow going into error.
    I might be able to create a virtual attribute for it, but one of the parameters of the function module is the workflow that is calling it, and it would also mean making sure that every workflow has an instance of the object on which I implement the virtual attribute.
    Is it worthwhile to 'trap' the errors and deal with them within the workflow? Or is it better to let the workflow go into error?
    Please let me know your thoughts on this one.
    Much thanks and regards,
    Cristiana

    I agree with Raja that you should choose the approach with rules. In your version you can also use tools to re-evaluate rules for active workflows to redetermine the agents, an option you lose if you implement it as a virtual attribute.
    Let the rule fail (flag HRS1203-ENFORCE set, the checkbox at the bottom of the rule definition screen) if no agent is found. Don't hardcode sending it to anyone if no agent is found; that just gives you less flexibility. Whether the workflow administrator receives a work item in the inbox or sees it in the administrator transactions shouldn't make much difference to an administrator.
    If you want to avoid the workflow going into error (sending it to an administrator is not better than letting it go into error; it is just an error handling strategy), you must, as in all programming, define rules for handling the known problems. This could, e.g., be a table which specifies who will receive the workflow (with as many parameters as you like for finding the most relevant person) if the proper agent cannot be found. I have implemented solutions along those lines, but it always boils down to finding someone who will accept responsibility for handling errors.

  • Bottom line, does 8800 ultra sli setup ensure better performance ?

    Greetings all, first-time poster here!
    Anyway, just got my new rig the other day: quad core Q6600, 4 GB RAM, MSI 8800 Ultra OC, Vista 32-bit. The thing is mad. I have a question though: putting the already insane performance aside, if money isn't an object, would another Ultra result in an obvious performance boost, or are we talking only marginal advantages here? Like the early SLI 6600GT setups, for example, which were only 25-30 percent faster than a single card.
     thanks for every answer in advance
    cheers

    Quote
    So you say, the bigger the resolution, the better the performance with the Ultra?
    Well, the performance is not better at higher resolutions. The ADVANTAGE compared to a slower card, or of SLI compared to a single card, is higher.
    Quote
    How far do you reckon this can be pushed?
    Very doubtful it will do a 700 MHz GPU clock. Nobody can tell you what one of those cards will do; it's a matter of the individual sample. You will only find out by testing, but the chance of killing it that way is very small. If you keep an eye on the temps and they don't get past 100°C you're fine.
    If you set the GPU clock too high it will show up as artifacts or bluescreens in 3D, so if you lower the clock when you experience such behavior you won't damage your card. Of course you shouldn't set way too high clocks; I'd recommend not going higher than 700 MHz unless that proves to be rock stable. The memory clock shouldn't be set higher than 2300, but it's doubtful it will make it.
    DON'T TRUST THE MSI UTILITY! Set clocks manually and check after every step. Many of those GeForce 8 cards, especially OC versions and high-end models, don't even manage stock settings. So the best approach is to check the card first at stock settings and then start overclocking.

  • Fact or Fiction: Caching PreparedStatements implies better performance?

    Hi,
    I've read a number of sources that claim the caching of PreparedStatements (on the client) leads to better performance because it eliminates the overhead of parsing the statement each time the Connection.prepareStatement() method is called. I tend to think that this claim is largely database-server dependent and, to a lesser degree, JDBC-driver dependent, and I wonder whether any noticeable benefit would be gained on databases such as Oracle, SQL Server, or Sybase.
    Oracle already caches SQL statements and the associated query plans on the server to minimise the amount of parsing. It does this on a server-wide basis so that ALL connections/sessions can benefit from previously parsed SQL, it does it independently of the client technology being used (i.e. Java, PL/SQL, C++, Perl, etc.), and it has done so since long before the introduction of JDBC. Why the need to cache the PreparedStatements on the client if they are already cached more efficiently on the server?
    I would be surprised if any parsed representation of the SQL (i.e. the query plan) existed on the client side at all, as query plans tend to depend on how the data is stored and indexed, and hence are closely related to the schema (not the client).
    I would be very interested in reading other people's comments on this, and I am especially interested in seeing any performance data proving that caching PreparedStatement objects actually does lead to better performance and is not just a myth.
    Regards,
    Craig.

    Hi. You are correct that the benefit of a prepared statement cache will vary, depending on how the driver and DBMS interact with prepared statements, as well as how the application uses pooled connections. In a best-case scenario, where the application has a small set of frequently used statements, we have seen (and customers have reported) 10-40% performance improvements against Oracle, in spite of the fact that Oracle does some query caching anyway. Oracle/OCI cursor retention is a possible reason.
    A worst-case scenario might be an application that makes lots of unique prepared statements from user input in such a way that alters the query plan. Using our driver to MS SQLServer, a cache would be useless and might even hurt a little: the cache might never contain a statement for the current SQL, so searching it would just slow things down, and our MS driver doesn't implement PreparedStatements in a way that has the DBMS caching anything, so everything is sent for parsing like fresh SQL on every execution. Other, more recent type-4 JDBC drivers for MS SQLServer do much better in this regard and would play well with a cache.
    Joe Weinstein
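    The client-side cache being discussed is conceptually just a map from SQL text to an already-prepared statement handle. A minimal sketch in Python with sqlite3 (the StatementCache class is hypothetical, not any vendor's driver; sqlite3 already keeps its own internal per-connection statement cache, so this only illustrates what a JDBC driver's PreparedStatement cache does):

    ```python
    import sqlite3

    class StatementCache:
        """Reuse cursors keyed by SQL text instead of re-preparing each time."""
        def __init__(self, conn):
            self.conn = conn
            self.cache = {}          # sql text -> cursor
            self.hits = self.misses = 0

        def execute(self, sql, params=()):
            cur = self.cache.get(sql)
            if cur is None:
                self.misses += 1
                cur = self.conn.cursor()   # stand-in for prepareStatement()
                self.cache[sql] = cur
            else:
                self.hits += 1
            return cur.execute(sql, params)

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER)")
    sc = StatementCache(conn)
    for i in range(5):
        sc.execute("INSERT INTO t (id) VALUES (?)", (i,))

    print(sc.misses, sc.hits)  # 1 4 -- prepared once, reused four times
    ```

    As the answer notes, whether reuse actually saves time depends on whether the driver/DBMS pair avoids a re-parse on the server side; the cache itself is cheap either way.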

  • Selecting "better performance" vs "better battery" via desktop icon

    Several months ago, I recall being able to click on the battery icon on the top right when I'm running on battery power to allow me to select the type of battery consumption I wanted to use. I believe it was something to the effect of "better performance" and "better battery life." Now, when I click on the battery, I can not switch my power/performance preferences via the battery. Did I do something or did a software update remove this feature? How do I get it back?

    Is this a Unibody MacBook Pro? It appears so. If so, this setting is now controlled ONLY in the Energy Saver preference pane, because of the logout required to switch graphics processors.

  • Difference between Temp table and Variable table and which one is better performance wise?

    Hello,
    Could anyone explain what the difference is between a temp table (#, ##) and a table variable (DECLARE @V TABLE (EMP_ID INT))?
    Which one is recommended for better performance?
    Also, is it possible to create CLUSTERED and NONCLUSTERED indexes on a table variable?
    In my case, 1-2 days of transactional data is more than 3-4 million rows. I tried both a # temp table and a table variable and found the table variable faster.
    Does a table variable use memory or disk space?
    Thanks Shiven:) If Answer is Helpful, Please Vote

    Check the following link to see the differences between temp tables and table variables: http://sqlwithmanoj.com/2010/05/15/temporary-tables-vs-table-variables/
    Temp tables and table variables both use memory and tempdb in a similar manner; check this blog post: http://sqlwithmanoj.com/2010/07/20/table-variables-are-not-stored-in-memory-but-in-tempdb/
    Performance-wise, if you are dealing with millions of records then a temp table is ideal, as you can create explicit indexes on top of it. But if there are fewer records then table variables are well suited.
    On table variables explicit indexes are not allowed; if you define a PK column, a clustered index will be created automatically.
    But it also depends upon the specific scenario you are dealing with. Can you share it?
    ~manoj | email: http://scr.im/m22g
    http://sqlwithmanoj.wordpress.com
    MCCA 2011 | My FB Page
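    The "explicit index on a temp table" point can be illustrated generically. A minimal sketch with Python's sqlite3, where a SQLite TEMP table stands in for a SQL Server #temp table (table variables have no direct SQLite equivalent; the data is invented):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")

    # A #temp-table-like object: session-scoped, and you can add explicit
    # indexes to it, which is the main advantage at millions of rows.
    conn.execute("CREATE TEMP TABLE stage (emp_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO stage VALUES (?, ?)",
                     [(i % 100, float(i)) for i in range(10_000)])
    # The index must live in the same schema (temp) as the table.
    conn.execute("CREATE INDEX temp.ix_stage_emp ON stage (emp_id)")

    # Confirm the explicit index is actually used for the lookup:
    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM stage WHERE emp_id = 42"
    ).fetchall()
    print(plan)
    ```

    The query plan output names ix_stage_emp, showing the equality lookup is an index search rather than a full scan; on SQL Server, being able to add exactly this kind of index is why #temp tables tend to win at high row counts.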

  • Issues with "Higher performance" setting on Macbook Pro with external monitor

    Hi all,
    I rarely used the "Higher performance" setting in the Energy Saver pane of System Preferences.
    I use my MB connected to an external monitor, with its own screen closed. I tried to switch that setting on, and the external monitor seems to repeatedly turn off and on. This strange behaviour vanishes if I open the MB's screen and use it in a dual-monitor configuration.
    Has anyone heard of a similar problem?
    p.s.: I don't know if this is the right place for this thread; any suggestions are welcome

    It was set to 1080p; 1920x1080 did not show up as an option (even when holding the Option key), but 1080p should be equivalent. As an experiment I grabbed another monitor that was not being used. It is a 22-inch LG with a maximum resolution of 1920x1080, and it's currently set to 1920x1080. The issue is a little different: it was not detected as a TV this time, but the screen still looks blurry. There may be some improvement, but not much.

  • I need a clarification : Can I use EJBs instead of helper classes for better performance and less network traffic?

    My application was designed based on the MVC architecture, but I made some changes based on my requirements. The servlet invokes helper classes, the helper classes use EJBs to communicate with the database, and the JSPs also use EJBs to retrieve the results.
    I have two (stateless) EJBs, one servlet, nearly 70 helper classes, and nearly 800 JSPs. The servlet acts as the controller and all database transactions are done through the EJBs only. The helper classes contain the business logic. Based on the request, the relevant helper class is invoked by the servlet. Session scope is 'page' only.
    Now I am planning to use EJBs (for the business logic) instead of helper classes. But before doing that I need some clarification regarding network traffic and better usage of container resources.
    Please suggest which method (helper classes or EJBs) is preferable
    1) to get better performance,
    2) for less network traffic,
    3) for better container resource utilization.
    I thought that if I use EJBs the network traffic will increase, because every call to an EJB is a remote call.
    Please give a detailed explanation.
    thank you,
    sudheer

    <i>Please suggest which method (helper classes or EJBs) is preferable:
    1) to get better performance</i>
    EJBs have quite a lot of overhead associated with them to support transactions and remoteability. A non-EJB helper class will almost always outperform an EJB, often considerably. If you plan on making your 70 helper classes EJBs you should expect to see a dramatic decrease in maximum throughput.
    <i>2) for less network traffic</i>
    There should be no difference. Both architectures will probably make the exact same JDBC calls from the RDBMS's perspective. And since the EJBs and JSPs are co-located there won't be any additional overhead there either. (You are co-locating your JSPs and EJBs, aren't you?)
    <i>3) for better container resource utilization</i>
    Again, the EJB version will consume a lot more container resources.
