Apples vs. Oranges: which is faster?

A new 20" iMac with a 2.8 GHz Intel Core 2 Duo and 3 GB of RAM vs. a dual 2.0 GHz PowerPC G5 with 3 GB of RAM.

If you are trying to decide whether to move forward as technology suggests, then moving to the new iMac is in order, Dave.
I have done so numerous times...
I guess I don't see the need for further discussion to move forward.
Farewell

Similar Messages

  • Which is faster -  Member formula or Calculation script?

    Hi,
    I have a very basic question, though I am not sure if there is a definite right or wrong answer.
    To keep the calculation scripts to a minimum, I have put all the calculations in member formula.
Which is faster - member formulas or calculation scripts? Because, if I am not mistaken, FIX cannot be used in member formulas, so I need to resort to IF, which is not index-driven!
Though in the calculation script, while aggregating members that have member formulas, I have tried to FIX as many members as I can.
    What is the best way to optimize member formulas?
    I am using Hyperion Planning and Essbase 11.1.2.1.
    Thanks.

Re the mostly "free" comment -- if the block is in memory (qualification #1), and the formula is within the block (qualification #2), then the expensive bit was reading the block off of disk and expanding it into memory. Once that is done, I typically think of the dynamic calcs as free, as the amount of data being moved about is very, very, very small. That goes out the window if the formula pulls lots of blocks to value and they get cycled in and out of the cache. Then they are not free and are potentially slower. And yes, I have personally shot myself in the foot with this -- I wrote a calc that did @PRIORs against a bunch of years. It was a dream when I pulled 10 cells. And then I found out that the client had reports that pulled 5,000. Performance went right down the drain at that point. That one was 100% my fault for not forcing the client to show me what they were reporting.
I think your reference to stored formulas being 10-15% faster than calc script formulas applies when the formulas are executed from within the default calc. When the default calc is used, it precompiles the formulas and handles many two-pass calculations in a single pass. Perhaps that is what you are thinking of.
I guess that must be it. I think I remember you talking about this technique at one of your Kscope sessions and realizing that I had never tried that approach. Isn't there something funky about not being able to turn off the default calc if a user has calc access? I sort of think so. I typically assign a ; to the default calc so it can't do anything.
    Regards,
    Cameron Lackpour

  • Which is faster - Member formula or Calculation scripts?

    Hi,
    I have a very basic question, though I am not sure if there is a definite right or wrong answer.
    To keep the calculation scripts to a minimum, I have put all the calculations in member formula.
Which is faster - member formulas or calculation scripts? Because, if I am not mistaken, FIX cannot be used in member formulas, so I need to resort to IF, which is not index-driven!
Though in the calculation script, while aggregating members that have member formulas, I have tried to FIX as many members as I can.
    What is the best way to optimize member formulas?
    I am using Hyperion Planning and Essbase 11.1.2.1.
    Thanks.

    The idea that you can't reference a member formula in a FIX is false. Here's an example:
    - Assume you have an account that has a data storage of Stored or Never Share.
    - This account is called Account_A and it has a member formula of Account_B * Account_C;.
    - You would calculate this account within a FIX (inside of a business rule) something like this:
FIX(whatever . . . )
    "Account_A";
ENDFIX
If you simply place the member name followed by a semicolon within a business rule, the business rule will execute the code in that member's member formula.
Why would you want to do this instead of just putting ALL of the logic inside the business rule? Perhaps that logic gets referenced in a LOT of different business rules, and you want to centralize the code in the outline. This way, if the logic changes, you only need to update it in one location. The downside is that it can make debugging a bit harder: when something doesn't work, you can find yourself searching for the code a bit.
    Most of my applications end up with a mix of member formulas and business rules. I find that performance isn't the main driving force behind where I put my code. (The performance difference is usually not that significant when you're talking about stored members.) What typically drives my decision is the organization of code and future maintenance. It's more art than science.
    Hope this helps,
    - Jake

  • Java io and Java nio, which is faster to binary io?

Can anybody advise me about java io and java nio?
I want to write the fastest code to read and write binary files.
    I'm going to read/write
    - individual elements (int, double, etc)
    - arraylists
    - objects
    Also I'm going (or I'd want) to use seek functions.
    Thanks

Can anybody advise me about java io and java nio? I want to write the fastest code to read and write binary files.
Which is "faster" depends on exactly how you're using them. For example, a MappedByteBuffer is usually faster for random access than a RandomAccessFile, unless your files are so large (and your accesses so random) that you're constantly loading from disk. And it's not at all faster than a simple FileInputStream for linear, uni-directional access.
So, rather than expecting some random stranger to tell you that one is faster than the other without knowing your project, perhaps you should tell us exactly how you plan to use IO, and why you think one approach may be faster than the other.
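To make that concrete, here is a minimal, self-contained sketch (element counts and file names are made up for illustration) that writes a small binary file of ints and reads it back two ways: buffered java.io streams for linear access, and a java.nio memory-mapped buffer whose index-based get() acts as an in-memory seek for random access:

```java
import java.io.*;
import java.nio.IntBuffer;
import java.nio.channels.FileChannel;

public class BinaryReadDemo {

    // Write n ints (0..n-1) to a temp file using classic buffered java.io.
    static File writeInts(int n) throws IOException {
        File f = File.createTempFile("demo", ".bin");
        f.deleteOnExit();
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(f)))) {
            for (int i = 0; i < n; i++) out.writeInt(i);
        }
        return f;
    }

    // Linear, uni-directional read: a buffered stream is hard to beat here.
    static long sumViaStream(File f, int n) throws IOException {
        long sum = 0;
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(f)))) {
            for (int i = 0; i < n; i++) sum += in.readInt();
        }
        return sum;
    }

    // Random access: map the file once; get(index) is then an in-memory "seek".
    static long sumViaMappedBuffer(File f, int n) throws IOException {
        try (FileChannel ch = new RandomAccessFile(f, "r").getChannel()) {
            IntBuffer ints = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size())
                               .asIntBuffer();
            long sum = 0;
            for (int i = 0; i < n; i++) sum += ints.get(i);
            return sum;
        }
    }

    public static void main(String[] args) throws IOException {
        int n = 1_000_000;
        File f = writeInts(n);
        System.out.println("stream sum: " + sumViaStream(f, n));
        System.out.println("mapped sum: " + sumViaMappedBuffer(f, n));
    }
}
```

Timing both read paths on your own access pattern (after JIT warm-up) is the only reliable way to decide; for purely sequential reads the buffered stream is typically competitive, while the mapped buffer tends to win when access jumps around the file.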

  • Which is faster while executing statment having Case or Decode.

Please tell me which executes faster: CASE or DECODE.

    ajallen wrote:
If you are really concerned about this, then you have been taken over by the tuning virus. You are out of control. You are tuning beyond reason. DECODE() is deprecated - not being enhanced. CASE is the new preferred approach. CASE is simpler and easier to code and follow.
I can't find a link saying that the DECODE() function is already deprecated. Can you give us a valid link?
    Regards.

  • WHICH IS FASTER AND WHY

    Hi Guys,
Just want to know which is faster:
    java.lang.String.toLowerCase()
    or
    java.lang.String.toUpperCase()
    Thanks,
    Tuniki

A look into the source code tells me that toUpperCase() may be slightly slower in rare cases, when a single lower-case letter needs to be converted to an array of upper-case double-byte characters.
However, this depends somewhat on the frequency of conversions.
If your strings are frequently all upper-case, use toUpperCase(), and -- vice versa -- if the strings are frequently all lower-case, then prefer toLowerCase(); both methods first test whether a conversion has to be made at all.
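A small sketch of both behaviors described above (the object-identity check reflects OpenJDK's implementation detail of returning the receiver when no change is needed, so treat it as illustrative rather than guaranteed by the spec):

```java
import java.util.Locale;

public class CaseDemo {
    public static void main(String[] args) {
        // Fast path: both methods scan the string first; when no character
        // needs converting, OpenJDK returns the very same String object.
        String s = "NO CHANGE NEEDED";
        System.out.println(s.toUpperCase(Locale.ROOT) == s); // typically true on OpenJDK

        // Rare slow path: one lower-case letter can expand to several
        // upper-case characters, forcing allocation of a longer array.
        String sharp = "straße";                        // 6 chars
        String upper = sharp.toUpperCase(Locale.ROOT);  // "STRASSE", 7 chars
        System.out.println(upper + " has " + upper.length() + " chars");
    }
}
```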

  • Which is fast

Which is faster: a stored procedure or a stored function in Oracle 10g?
    Thanks in advance

I would say it depends on what you are trying to do in the body of the procedure or function. I don't think it's a matter of which is faster; it's more a question of whether you want to return something (in which case use a function) or nothing (in which case you can use a procedure).

  • Which is fast / Smooth performing?

I have 10 circles on the stage, animated on the timeline, with simple animation like scale, rotation, and size changes.
Which will give me the best performance in terms of browser load and smoothness of animation?
    Ellipse or Svg file
    What if it is 100 objects, or even 1000 objects?

Which is faster: BETWEEN or (<= and >=)?

    Hi,
Can anyone explain to me which is faster:
BETWEEN
or
the <= and >= operators?
    Thanks in advance.

Hi, you can easily test that and find out that they're in fact the same:
    MHO%xe> create table t as select level col from dual connect by level <= 1000000;
Table created.
    MHO%xe> create index t_i on t(col);
Index created.
    MHO%xe> set autotrace traceonly explain
    MHO%xe> select * from t where col between 100000 and 300000;
Elapsed: 00:00:01.78
Execution Plan
    Plan hash value: 1601196873
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |   211K|  2679K|   460   (9)| 00:00:06 |
    |*  1 |  TABLE ACCESS FULL| T    |   211K|  2679K|   460   (9)| 00:00:06 |
    Predicate Information (identified by operation id):
       1 - filter("COL">=100000 AND "COL"<=300000)  -- See that BETWEEN gets rewritten to >= and <=
    Note
       - dynamic sampling used for this statement
    MHO%xe> select * from t where col >= 200000 and col <= 400000;
Elapsed: 00:00:00.14
Execution Plan
    Plan hash value: 1601196873
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |   227K|  2885K|   460   (9)| 00:00:06 |
    |*  1 |  TABLE ACCESS FULL| T    |   227K|  2885K|   460   (9)| 00:00:06 |
    Predicate Information (identified by operation id):
       1 - filter("COL">=200000 AND "COL"<=400000)
    Note
       - dynamic sampling used for this statement
    MHO%xe> select * from t where col >= 500000 and col <= 600000;
Elapsed: 00:00:00.34
Execution Plan
    Plan hash value: 4021086813
    | Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT |      | 48712 |   618K|   109   (2)| 00:00:02 |
    |*  1 |  INDEX RANGE SCAN| T_I  | 48712 |   618K|   109   (2)| 00:00:02 |
    Predicate Information (identified by operation id):
       1 - access("COL">=500000 AND "COL"<=600000)
    Note
       - dynamic sampling used for this statement
    MHO%xe> select * from t where col between 500000 and 600000;
Elapsed: 00:00:00.11
Execution Plan
    Plan hash value: 4021086813
    | Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT |      | 48712 |   618K|   109   (2)| 00:00:02 |
    |*  1 |  INDEX RANGE SCAN| T_I  | 48712 |   618K|   109   (2)| 00:00:02 |
    Predicate Information (identified by operation id):
       1 - access("COL">=500000 AND "COL"<=600000)
    Note
       - dynamic sampling used for this statement

Which is faster: SELECT * FROM tableName or SELECT Column1, Column2 ... FROM tableName? And why?

Which is faster: SELECT * FROM tableName or SELECT Column1, Column2 ... FROM tableName? And why?
    select * from Sales.[SalesOrderHeader]
    select SalesOrderNumber,RevisionNumber,rowguid from Sales.[SalesOrderHeader]
As you can see, both the query execution plan and the subtree cost are the same. So how does selecting particular columns optimize the query?

Yes, selecting specific columns is always better than SELECT *.
If you always need only a few columns in the result, then just use SELECT col1, col2 FROM YourTable. If you SELECT * FROM YourTable, that is useless extra overhead.
If in the future someone adds Image/BLOB/Text-type columns to your table, using SELECT * will worsen the performance for sure.
Let's say you have an SP and you use INSERT INTO DestTable SELECT * FROM TABLE, which runs fine; but again, if someone adds a few more columns, then your SP will fail, saying the provided columns don't match.
    -Vaibhav Chaudhari

  • Which is faster ESB or BPEL and why?

    Hi,
Can anybody please tell me which is faster, ESB or BPEL? I believe ESB is faster than BPEL because of the message payload handling in BPEL.
    It would be great if anybody can put some info on this plz.
    Cheers!
    user623695
    Edited by: user623695 on Dec 24, 2010 12:31 AM

You are considering only performance as the criterion for choosing between two products which have different purposes, and that is not the right way to go about it. Let's assume (just an assumption, it might not be true) that BPEL is faster for a flow which requires simple routing/mediation to a back-end service; even then you should ideally use OSB for that flow/service rather than doing it in BPEL.
If you have to consider only performance, then it is my understanding that OSB would be faster than BPEL for simple routing.
The architectures of OSB and SOA Suite/BPEL are different: OSB is optimized for short-running, fast, stateless services, while BPEL is optimized for long-running, stateful processes.

  • Which is faster for music production? internal hard drive or external drive

Hey there, simple question. I'm looking to run music production software on an Apple Mac - would sound files be retrieved faster from a new MacBook Pro internal hard drive or an external drive? (I am thinking of one of those LaCie orange rugged drives that spin at 7200 rpm.)
And if the sound files would play better coming from the internal hard drive, would it compromise processor speed for running soft synths / effects units etc.?
Please note I am not talking about my current MacBook, but a new MacBook Pro.
Thanks in advance - horton ;o)

    For what it is worth here is my tuppence worth on the subject, horton. (and a warm welcome to the forums , by the way!)
    One of the major reasons why many sound professionals (and others worried about speed) use fast external FW800 or (when possible) eSata drives for their work, rather than the internal drive of an MBP or the like, is that the internal boot drive is already carrying the OS and a bunch of apps on its fast, outer sectors. It is usually doing a fair bit of work keeping things running, too. An external FW 800 drive can be kept unencumbered of such things.
    On top of this the 2.5" drives in notebooks are slower at the best of times than 3.5" drives. If you want real speed, my suggestion would be that you keep the internal drive for your OS and Apps as much as possible, and get a fast 7200 rpm 3.5" FW800 external with its own power supply for your sound files (assuming you will have access to mains power).
If you use the internal drive for this sort of purpose too, you are going to find that it fills up fast, that it soon develops free-space fragmentation issues, and that it gradually slows to the speed of molasses!
    Cheers
    Rod

  • Sun C++ Vs Sun JAVA - Future - Which is faster

    Hi,
Sometimes I am perplexed by Sun's approach. They ship Sun C++ 12 and market JAVA. I work on both C++ and JAVA. I started my career as a JAVA developer and then shifted to being a C++ developer for the SUN platform. Now I work on both.
Many of Sun's articles praise JAVA and say it is faster than C++. Does that list include the Sun C++ compiler too?
Can't Sun C++ developers develop an app better and faster than a Sun JAVA app?
If for Sun everything is JAVA, why do they invest in C++ compilers?
Now C++ is changing, and after a couple of years you will get C++0x.
Will Sun upgrade the compiler to support the same?
    Jayaram

    Dear Friend,
This exercise is absolutely pointless. It can prove nothing.
It proved C++ is powerful. You can close your eyes to the truth and live in your fantasy world.
By changing int -> double you went into the muddy ground of floating-point calculations.
By moving your "toggle" objects from the heap onto the stack you stepped away from JAVA's semantic possibilities. No wonder C++ can win here.
Here the constructor is called only one time. Stack and heap are irrelevant. By the way, the margin was a couple of seconds. No C++ guy will think of JAVA semantics when programming in C++. Do you think of pointers when you program in JAVA?
Both are primitive types. The reason for the move I have specified.
Btw, it's likely that the compiler managed to figure out those virtual calls and inline them altogether.
I used babies of the same mother, SUN: the SUN CC compiler and SUN JAVA 1.6.
It's hard to say what happened in your case, as you did not tell us which compiler flags you used.
Nothing hard or special in my case. It is the truth. C++ WON. You can try it yourself.
It's funny that you used register/inline stuff, because it can hardly make any real difference in this test.
For a particular C++ compiler on a particular platform and compilation flags it might have an effect, but that's a third-order effect of compiler bugs and inefficiencies.
This being a battle started by the benchmark person, I have the freedom to use all weapons in C++. Use whatever you have. Maybe you can ask SUN to put a bug in their CC compiler to prove your point. Then I will test with the MSVC++ compiler.
Please note I didn't change the end output or logic. Pointers were not required, so I removed them. I was at office work when I did this. I am not a full-time evangelist.
Compiler options: if you understand the JVM, it uses on-the-fly profiling, compiler optimization, etc. So my use of a compiler switch is very much reasonable.
Anyway, the bottom line is that different languages are suitable for different purposes.
By creatively tweaking your programming environment you can make any given combination of a particular language/compiler/program win a race.
The first point is right. C++ is a general-purpose language; for each specific purpose, use only those parts. JAVA is a simple language suitable for normal brains.
Whatever you tweak, JAVA is an interpreted language; the additional layer of the JVM is there. So native languages, maybe even Pascal and Basic, will beat JAVA in many benchmarks.
Using that car analogy - a sports car on a muddy track will hardly compete with an unloaded truck. Though that doesn't mean the sports car is slower.
Again you are risking it. The track is not muddy. Only an increment operation is done: no floating-point multiplication or division. Even if there were, you don't have any grounds to complain; it is a primitive type. If you still crib, I can't help.
Java has reached the point where performance is not the biggest concern. For the majority of applications you win a great deal more by choosing proper algorithms than by rewriting your application in some "faster" language.
Again you are mistaken. Our max CPU clock speeds are getting saturated. We are going to multicore architectures. Not all apps are parallel, so single-core performance is important. So if you can lose the fat in your language, it is always better.
Algorithms don't guarantee that they won't use floating-point operations. And the choice of an algorithm is not the proprietary right of JAVA. Any language can use it.
So the point is: don't compare the performance of JAVA with C++ and get patents for the same. It won't take much time for a person like me to break the patents and prove you wrong.
If you see the URL I have given, go three levels up. The test case is the one in the benchmark which had the highest margin of JAVA win. So your best result was ruined in a couple of hours!
OK, let me come back to the point. Why is the C++ team of Sun silent?
    I feel they are not funded properly for development or they were told to keep their mouth shut.
    I would like to hear from the C++ team of Sun if any one reads this belong to that team.
Regards,
    Jayaram Ganapathy
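One methodological point underlying the whole exchange: naive Java timings measure the JIT compiler as much as the code under test. A hypothetical sketch (loop sizes are arbitrary) of why a fair Java-vs-C++ comparison must discard the warm-up phase, or use a harness such as JMH that does this for you:

```java
public class WarmupDemo {
    // The work being timed: a tight loop the JIT can optimize heavily.
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) {
        // First measurement: includes interpretation and JIT compilation cost.
        long t0 = System.nanoTime();
        long r1 = work(10_000_000);
        long cold = System.nanoTime() - t0;

        // Warm up: give the JIT a chance to compile and optimize work().
        for (int i = 0; i < 50; i++) work(10_000_000);

        // Second measurement: mostly optimized machine code.
        long t1 = System.nanoTime();
        long r2 = work(10_000_000);
        long warm = System.nanoTime() - t1;

        System.out.printf("cold=%dus warm=%dus (result %d)%n",
                cold / 1_000, warm / 1_000, r1);
        // Typically warm is much smaller than cold; only the warmed-up
        // figure is comparable to an ahead-of-time-compiled C++ binary.
        if (r1 != r2) throw new AssertionError("results differ");
    }
}
```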

Which is faster, IN or EXISTS, and how?

Could anyone clarify for me
which will select data faster,
IN or EXISTS, and how it works?
    Thanks

Well, read this thread completely to get the answer.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:953229842074
    Aman....

Which is faster: a bulk delete, or ids in a table and a WHERE EXISTS ...?

I have some parent objects, and I use bulk collect with a fetch limit; I currently store the primary keys of these parent objects to identify their child objects later, using WHERE EXISTS with a correlated subquery.
I'm essentially moving object graphs that span partitions from table to table.
When I've done my INSERT INTO ... SELECT, I eventually do a delete.
Currently the delete uses the parent objects in this working table to identify the children to delete later.
Q. What is likely to be faster:
using a "temporary" table to re-query for child objects based on the parents that I have for each batch, or
using a RETURNING clause from my INSERT INTO ... SELECT so that I have rowids or primary keys to work with later on
when I want to perform my delete operation?
I essentially have A's that have child B's, which in turn have child C's.
I store a batch of A pks in a table and use those to identify the B's.
Currently I don't store the B's pks but use the A pks again to identify the B's, which in turn are used to identify the C's later.
I'm thinking that if I remember the pks I'm using at each level, I can then use those later when it comes to the deletes.
Typically that's done with a RETURNING clause and a bulk delete from that collection later.
    thoughts?

Parallel DML is one option. Another is to create a procedure (or package) that does a discrete unit of work (e.g. processes a parent and its children as a single business transaction), and then write a "thread manager" that runs x copies of these at the same time (via DBMS_JOB, for example).
    Let's say the procedure's signature is as follows:
create or replace procedure ProcessFamily( parentID number ) is ..
--// processes a family (parent and children)
..
Using DBMS_JOB is pretty easy - when you start a job you get a job number for it. Looking at USER_JOBS will tell you whether that job is still in the job queue or has completed (once-off jobs are removed from the queue). The core of this code will be a loop that checks how many jobs (threads) are running and, if fewer than the ceiling (e.g. it may only use 20 threads), starts more ProcessFamily jobs.
    If the total number of threads/jobs to execute are known up front, then this ThreadManager can manually create a long operation entry. Such an entry contains the number of unit of works to do and then is updated with the number of units done thus far. Oracle provides time estimates for completion and percentage progress. This long operation can be tracked by most Oracle-based monitoring software and provide visibility as to what the progress is of the processing.
The ProcessFamily procedure can also use parallel DML (if that makes sense), or bulk processing (if needed). This approach also scales as h/w increases (server upgrades, new server h/w); so too does your ability to run more threads (aka jobs) at the same time.
Now I'm not suggesting that you write a ProcessFamily() proc - I do not know the actual data and problem you're trying to solve. What I'm trying to convey is the basic principle for writing multi-threaded/parallel processing software using PL/SQL. And it is not that complex. The critical thing is simply that the parallel procedure or thread be entirely thread-safe - meaning that multiple copies of the same code can be started and will not cause serialization, deadlocking, and other (application design) problems.
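The same "thread manager with a concurrency ceiling" principle can be sketched in Java terms rather than DBMS_JOB (the worker body is a hypothetical stand-in for a ProcessFamily-style unit of work, and all counts are made up): a fixed-size pool caps the number of concurrent workers while the remaining units of work queue up.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadManagerDemo {

    // Submit `tasks` units of work to a pool capped at `threads` workers,
    // mirroring a job queue that never runs more than N jobs at once.
    static int runBatch(int tasks, int threads) throws InterruptedException {
        AtomicInteger completed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int parentId = 1; parentId <= tasks; parentId++) {
            pool.submit(() -> {
                // Hypothetical ProcessFamily(parentId) body: it must be
                // thread-safe, i.e. independent of every other running copy.
                completed.incrementAndGet();
            });
        }
        pool.shutdown();                            // accept no new jobs
        pool.awaitTermination(1, TimeUnit.MINUTES); // wait for the queue to drain
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runBatch(1000, 20) + " units of work completed");
    }
}
```

The pool plays the role DBMS_JOB plays in the reply above, and tracking `completed` against `tasks` corresponds to the long-operation progress entry it describes.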
