Which is faster: BETWEEN or (>= and <=)?

Hi,
can anyone explain which is faster:
BETWEEN
or
the >= and <= operators?
Thanks in advance.

Hi, you can easily test that and find out that they're in fact the same:
MHO%xe> create table t as select level col from dual connect by level <= 1000000;
Table created.
MHO%xe> create index t_i on t(col);
Index created.
MHO%xe> set autotrace traceonly explain
MHO%xe> select * from t where col between 100000 and 300000;
Elapsed: 00:00:01.78
Execution Plan
Plan hash value: 1601196873
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      |   211K|  2679K|   460   (9)| 00:00:06 |
|*  1 |  TABLE ACCESS FULL| T    |   211K|  2679K|   460   (9)| 00:00:06 |
Predicate Information (identified by operation id):
   1 - filter("COL">=100000 AND "COL"<=300000)  -- See that BETWEEN gets rewritten to >= and <=
Note
   - dynamic sampling used for this statement
MHO%xe> select * from t where col >= 200000 and col <= 400000;
Elapsed: 00:00:00.14
Execution Plan
Plan hash value: 1601196873
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      |   227K|  2885K|   460   (9)| 00:00:06 |
|*  1 |  TABLE ACCESS FULL| T    |   227K|  2885K|   460   (9)| 00:00:06 |
Predicate Information (identified by operation id):
   1 - filter("COL">=200000 AND "COL"<=400000)
Note
   - dynamic sampling used for this statement
MHO%xe> select * from t where col >= 500000 and col <= 600000;
Elapsed: 00:00:00.34
Execution Plan
Plan hash value: 4021086813
| Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT |      | 48712 |   618K|   109   (2)| 00:00:02 |
|*  1 |  INDEX RANGE SCAN| T_I  | 48712 |   618K|   109   (2)| 00:00:02 |
Predicate Information (identified by operation id):
   1 - access("COL">=500000 AND "COL"<=600000)
Note
   - dynamic sampling used for this statement
MHO%xe> select * from t where col between 500000 and 600000;
Elapsed: 00:00:00.11
Execution Plan
Plan hash value: 4021086813
| Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT |      | 48712 |   618K|   109   (2)| 00:00:02 |
|*  1 |  INDEX RANGE SCAN| T_I  | 48712 |   618K|   109   (2)| 00:00:02 |
Predicate Information (identified by operation id):
   1 - access("COL">=500000 AND "COL"<=600000)
Note
   - dynamic sampling used for this statement
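As the predicate information shows, the optimizer rewrites BETWEEN into >= and <= before execution, so the two forms are costed and executed identically; the plan only changes with the selectivity of the range (full table scan vs. index range scan). One detail worth remembering is that BETWEEN is inclusive on both endpoints. A minimal sketch against the same table t created above:
select count(*) from t where col between 100 and 200;    -- 101 rows, both endpoints included
select count(*) from t where col >= 100 and col <= 200;  -- identical result and identical plan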

Similar Messages

  • Which is faster ESB or BPEL and why?

    Hi,
    Can anybody please tell me which is faster, ESB or BPEL? I believe ESB is faster than BPEL due to the message payload handling in BPEL, not in ESB.
    It would be great if anybody can put some info on this plz.
    Cheers!
    user623695
    Edited by: user623695 on Dec 24, 2010 12:31 AM

    You are considering only performance as the criterion for choosing between two products that have different purposes, which is not the right way to go about it. Let's assume (just an assumption, might not be true) that BPEL is faster for a flow that requires simple routing/mediation to a back-end service; even then you should ideally use OSB for that flow/service rather than doing it in BPEL.
    If you have to consider only performance, then it is my understanding that OSB would be faster than BPEL for simple routing.
    The architectures of OSB and SOA Suite/BPEL are different: OSB is optimized for short-running, fast, stateless services, while BPEL is optimized for long-running, stateful processes.

  • Which is faster -  Member formula or Calculation script?

    Hi,
    I have a very basic question, though I am not sure if there is a definite right or wrong answer.
    To keep the calculation scripts to a minimum, I have put all the calculations in member formula.
    Which is faster - member formulas or calculation scripts? Because, if I am not mistaken, FIX cannot be used in member formulas, so I need to resort to the use of IF, which is not index driven!
    Though in the calculation script, while aggregating members which have member formulas, I have tried to FIX as many members as I can.
    What is the best way to optimize member formulas?
    I am using Hyperion Planning and Essbase 11.1.2.1.
    Thanks.

    Re the mostly "free" comment -- if the block is in memory (qualification #1), and the formula is within the block (qualification #2), then the expensive bit was reading the block off of the disk and expanding it into memory. Once that is done, I typically think of the dynamic calcs as free, because the amount of data being moved about is very, very, very small. That goes out the window if the formula pulls lots of blocks to value and they get cycled in and out of the cache; then they are not free and are potentially slower. And yes, I have personally shot myself in the foot with this -- I wrote a calc that did @PRIORs against a bunch of years. It was a dream when I pulled 10 cells. And then I found out that the client had reports that pulled 5,000. Performance went right down the drain at that point. That one was 100% my fault for not forcing the client to show me what they were reporting.
    "I think your reference to stored formulas being 10-15% faster than calc script formulas deals with whether the formulas are executed from within the default calc. When the default calc is used, it precompiles the formulas and handles many two-pass calculations in a single pass. Perhaps that is what you are thinking of."
    I guess that must be it. I think I remember you talking about this technique at one of your Kscope sessions and realizing that I had never tried that approach. Isn't there something funky about not being able to turn off the default calc if a user has calc access? I sort of think so. I typically assign a ; to the default calc so it can't do anything.
    Regards,
    Cameron Lackpour

  • Which is faster - Member formula or Calculation scripts?

    Hi,
    I have a very basic question, though I am not sure if there is a definite right or wrong answer.
    To keep the calculation scripts to a minimum, I have put all the calculations in member formula.
    Which is faster - member formulas or calculation scripts? Because, if I am not mistaken, FIX cannot be used in member formulas, so I need to resort to the use of IF, which is not index driven!
    Though in the calculation script, while aggregating members which have member formulas, I have tried to FIX as many members as I can.
    What is the best way to optimize member formulas?
    I am using Hyperion Planning and Essbase 11.1.2.1.
    Thanks.

    The idea that you can't reference a member formula in a FIX is false. Here's an example:
    - Assume you have an account that has a data storage of Stored or Never Share.
    - This account is called Account_A and it has a member formula of Account_B * Account_C;.
    - You would calculate this account within a FIX (inside of a business rule) something like this:
    FIX(whatever . . . )
    "Account_A";
    ENDFIX
    If you simply place the member name followed by a semicolon within a business rule, the business rule will execute the code in that member's member formula.
    Why would you want to do this instead of just putting ALL of the logic inside the business rule? Perhaps that logic gets referenced in a LOT of different business rules, and you want to centralize the code in the outline? This way, if the logic changes, you only need to update it in one location. The downside to this is that it can make debugging a bit harder. When something doesn't work, you can find yourself searching for the code a bit.
    Most of my applications end up with a mix of member formulas and business rules. I find that performance isn't the main driving force behind where I put my code. (The performance difference is usually not that significant when you're talking about stored members.) What typically drives my decision is the organization of code and future maintenance. It's more art than science.
    Hope this helps,
    - Jake

  • Need help to join two tables using three joins, one of which is a (between) date range.

    I am trying to develop a query in MS Access 2010 to join two tables using three joins, one of which is a (between) date range. The tables are contained in Access. The reason
    the tables are contained in Access is that they are imported from different ODBC warehouses and the data is formatted for uniformity. I believe this cannot be developed using the MS visual query designer; I think writing the query in SQL would suit this project.
    ABCPART links to XYZPART. ABCSERIAL links to XYZSERIAL. ABCDATE links to (between) XYZDATE1 and XYZDATE2.
    [ABCTABLE]
    ABCORDER
    ABCPART
    ABCSERIAL
    ABCDATE
    [ZYXTABLE]
    XYZORDER
    XYZPART
    XYZSERIAL
    XYZDATE1
    XYZDATE2

    Thank you for looking at the post. The actual table names are rather ambiguous, so I renamed them to make more sense. I will explain more and give the actual names. What I do not have is the actual data in the tables; that is something I don't have
    on this computer. There are no "Null" fields in either of the tables.
    This table has many orders (MSORDER) that need to match one order (GLORDER) in GLORDR. The match is based on MSPART joined to GLPART, MSSERIAL joined to GLSERIAL, and MSOPNDATE joined if it falls between GLSTARTDATE and GLENDDATE.
    [MSORDR]
    | MSORDER  | MSPART  | MSSERIAL | MSOPNDATE |
    | 11111111 | 4444444 | 55555    | 2/4/2015  |
    | 22222222 | 6666666 | 11111    | 1/6/2015  |
    | 33333333 | 6666666 | 11111    | 3/5/2015  |
    This table has one order for every part number and every serial number.
    [GLORDR]
    | GLORDER  | GLPART | GLSERIAL | GLSTARTDATE | GLENDDATE |
    | ABC11111 | 444444 | 55555    | 1/2/2015    | 4/4/2015  |
    | ABC22222 | 666666 | 11111    | 1/5/2015    | 4/10/2015 |
    | AAA11111 | 555555 | 22222    | 3/2/2015    | 4/10/2015 |
    Post Query table
    | GLORDER  | MSORDER  | GLSTARTDATE | GLENDDATE | MSOPNDATE |
    | ABC11111 | 11111111 | 1/2/2015    | 4/4/2015  | 2/4/2015  |
    | ABC22222 | 22222222 | 1/5/2015    | 4/10/2015 | 1/6/2015  |
    | ABC22222 | 33333333 | 1/5/2015    | 4/10/2015 | 3/5/2015  |
    This is the SQL minus the between date join.
    SELECT GLORDR.GLORDER, MSORDR.MSORDER, GLORDR.GLSTARTDATE, GLORDR.GLENDDATE, MSORDR.MSOPNDATE
    FROM GLORDR INNER JOIN MSORDR ON (GLORDR.GLSERIAL = MSORDR.MSSERIAL) AND (GLORDR.GLPART = MSORDR.MSPART);
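    A minimal sketch of the completed query with the date-range condition added (this assumes Access/Jet SQL and the table and column names above, and is untested against the real data). Access's query designer cannot display a BETWEEN join, but the SQL view accepts it, and for an inner join putting the range test in the WHERE clause is equivalent:
    SELECT GLORDR.GLORDER, MSORDR.MSORDER, GLORDR.GLSTARTDATE, GLORDR.GLENDDATE, MSORDR.MSOPNDATE
    FROM GLORDR INNER JOIN MSORDR ON (GLORDR.GLSERIAL = MSORDR.MSSERIAL) AND (GLORDR.GLPART = MSORDR.MSPART)
    WHERE MSORDR.MSOPNDATE BETWEEN GLORDR.GLSTARTDATE AND GLORDR.GLENDDATE;
    Note that in the sample data shown, MSPART 4444444 has one more digit than GLPART 444444, so that pair would only match if the real data agrees.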

  • I kept a dual boot of Windows 7 and Mac OS X Lion on a MacBook Pro. So, should I keep an antivirus for Windows 7? Which is preferable between Bit Defender (BD) and Microsoft Security Essentials (MSE)? Do BD and MSE uninstall easily?

    I kept a dual boot of Windows 7 and Mac OS X Lion on a MacBook Pro. So, should I keep an antivirus for Windows 7? Which is preferable between Bit Defender (BD) and Microsoft Security Essentials (MSE)? Do BD and MSE uninstall easily?

    Lower your font size unless you have difficulty reading it.
    MS Security Essentials is excellent.
    Then again, maybe it's time to investigate the Windows 8 RP (which uses Defender).

  • Java io and Java nio, which is faster to binary io?

    Can anybody advise me about Java IO and Java NIO?
    I want to write the fastest code to read and write binary files.
    I'm going to read/write
    - individual elements (int, double, etc)
    - arraylists
    - objects
    Also I'm going (or I'd want) to use seek functions.
    Thanks

    "Can anybody advise me about Java IO and Java NIO? I want to write the fastest code to read and write binary files."
    Which is "faster" depends on exactly how you're using them. For example, a MappedByteBuffer is usually faster for random access than a RandomAccessFile, unless your files are so large (and your accesses so random) that you're constantly loading from disk. And it's not at all faster than a simple FileInputStream for linear, uni-directional access.
    So, rather than expecting some random stranger to tell you that one is faster than the other without knowing your project, perhaps you should tell us exactly how you plan to use IO, and why you think that one approach may be faster than the other.

  • Which is faster while executing a statement having CASE or DECODE?

    Please tell me which executes faster: CASE or DECODE.

    ajallen wrote:
    If you are really concerned about this, then you are being taken over by the tuning virus. You are out of control. You are tuning beyond reason. DECODE() is deprecated - not being enhanced. CASE is the new preferred approach. CASE is simpler and easier to code/follow.
    I can't find a link saying that the DECODE() function is already deprecated. Can you give us a valid link?
    Regards.
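    For reference, the two forms are interchangeable for simple lookups and any performance difference is negligible; CASE is ANSI SQL and generally preferred for readability. A minimal sketch (the table orders and column status are hypothetical names used only for illustration):
    -- DECODE form (Oracle-specific); orders/status are made-up names
    select decode(status, 'A', 'Active', 'I', 'Inactive', 'Unknown') from orders;
    -- equivalent CASE form (ANSI SQL)
    select case status when 'A' then 'Active' when 'I' then 'Inactive' else 'Unknown' end from orders;
    One real functional difference to keep in mind: DECODE treats two NULLs as equal, while a simple CASE comparison against NULL is never true.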

  • WHICH IS FASTER AND WHY

    Hi Guys,
    Just want to know which is faster:
    java.lang.String.toLowerCase()
    or
    java.lang.String.toUpperCase()
    Thanks,
    Tuniki

    A look into the source code tells me that toUpperCase() may be slightly slower in rare cases, when a single lower-case
    letter needs to be converted to an array of upper-case double-byte characters.
    However, this depends somewhat on the frequency of conversions.
    If your strings are frequently all upper-case, use toUpperCase(), and -- vice versa -- if the strings are frequently all lower-case, then prefer toLowerCase();
    both methods first test whether a conversion has to be made at all.

  • Which is fast

    Which is faster: a stored procedure or a stored function in Oracle 10g?
    Thanks in advance

    I would say it depends on what you are trying to do in the body of the procedure or function. I don't think it's a matter of which is faster; it's more a question of whether you want to return something (in which case use a function) or nothing (in which case you can use a procedure).
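    To illustrate the distinction, a minimal PL/SQL sketch (the names get_tax and calc_tax are made up for the example):
    -- a function returns a value, so it can also be called from SQL
    create or replace function get_tax(p_amount number) return number is
    begin
      return p_amount * 0.2;
    end;
    /
    -- a procedure returns nothing directly; results come back through OUT parameters
    create or replace procedure calc_tax(p_amount in number, p_tax out number) is
    begin
      p_tax := p_amount * 0.2;
    end;
    /
    Performance-wise they compile the same way; choose based on whether you need a return value, not on speed.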

  • Which is fast / Smooth performing?

    I have 10 circles on the stage, animated on the time line, simple animation like scale, rotation and size changes.
    Which will give me best performance in terms of browser load and smoothness of animation?
    Ellipse or Svg file
    What if it is 100 objects, or even 1000 objects?


  • Which is fast ? Select * from tableName or Select Column1,Column2 .... From tableName ? and Why ?

    Which is fast ? Select * from tableName or Select Column1,Column2 .... From tableName ? and Why ?
    select * from Sales.[SalesOrderHeader]
    select SalesOrderNumber,RevisionNumber,rowguid from Sales.[SalesOrderHeader]
    As you can see, both the query execution plan and the subtree cost are the same. So how does selecting particular columns optimize the query?

    Which is fast ? Select * from tableName or Select Column1,Column2 .... From tableName ? and Why ?
    select * from Sales.[SalesOrderHeader]
    select SalesOrderNumber,RevisionNumber,rowguid from Sales.[SalesOrderHeader]
    As you can see, both the query execution plan and the subtree cost are the same. So how does selecting particular columns optimize the query?
    Yes, selecting specific columns is always better than SELECT *.
    If you only need a few columns in the result, then just use SELECT col1, col2 FROM YourTable. If you use SELECT * FROM YourTable, that is extra, useless overhead.
    If in the future someone adds Image/BLOB/Text type columns to your table, using SELECT * will worsen the performance for sure.
    Let's say you have an SP that uses INSERT INTO DestTable SELECT * FROM TABLE, which runs fine; but again, if someone adds a few more columns, your SP will fail, saying the provided columns don't match.
    -Vaibhav Chaudhari
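    One concrete way an explicit column list can help, beyond moving less data: if the listed columns are all covered by an index, the engine can answer the query from the index alone, while SELECT * forces it to fetch the full rows. A sketch with hypothetical names (orders_demo, ix_orders_cust):
    -- hypothetical table and covering index for illustration
    create table orders_demo (order_id int, customer_id int, big_note varchar(4000));
    create index ix_orders_cust on orders_demo (customer_id, order_id);
    -- can be satisfied from the index alone
    select customer_id, order_id from orders_demo where customer_id = 42;
    -- must also fetch big_note from the table rows
    select * from orders_demo where customer_id = 42;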

  • Screen vibrates when booting up. It fluctuates very fast between Safari and Microsoft Messenger. Back and forth .... flicking away .... back and forth, very, very, quickly. And then when its completely finished booting, everything is OK.

    I sign in and when Safari opens, whilst Microsoft Messenger is downloading, the monitor screen vibrates between Safari and Messenger at a very, very, fast pace. Back and forth, back and forth, alternating between the two. When the booting process is completed the computer is fine. Everything is OK. Anyone know what is causing this, and what to do about it?

    YES .... but they have always been set to open at login. I must admit that if Messenger isn't set to open at login this doesn't happen. But even though Safari opens at login, the opening page in Safari doesn't open; I have to click on Safari again, (when I am reminded after the computer has booted-up), because the monitor screen is just blank! Then after a click on the Safari icon the screen fills up in a second. Gees, what have I unintentionally done?

  • What is the difference between Pipeline and Consignment?

    Hello  All...
    I need your help... Can you help me to understand the difference between Pipeline and Consignment?
    Many thanks and I appreciate your Help and comments!

    A pipeline material is a material that flows directly into the production process from a pipeline (for example, oil), from a pipe (for example, tap water), or from another similar source (for example, electricity).
    A material from the pipeline is always available; i.e. it can be withdrawn from the pipeline at any time and in any quantity.
    Depending on the system configuration, a material can be withdrawn only from the pipeline or, in addition to the pipeline, normal stocks of the material can also be managed.
    STEPS TO MAINTAIN PIPELINE MATERIAL
    1. You can create a material with the PIPE material type, or else use any material type that allows the pipeline process.
    2. You should have an info record for the material with valid conditions; the price is picked only from the info record.
    3. If required, you can maintain a source list, or else you can select the vendor during goods issue.
    4. From the Inventory Management menu, choose Goods movement -> Goods issue.
    Maintain the data on the initial screen. Choose Movement type -> Consumption -> To cost center (or To order, To network, All account assignments) -> From pipeline (movement types 201 P, 261 P, 281 P, or 291 P).
    5. On the collective entry screen, enter the account assignment. Enter the items.
    You do not have to enter the vendor as this will be found automatically by the system.
    If more than one vendor is possible, a pop-up window appears with a list of pipeline vendors, from which you can select the vendor you require.
    Post the goods movement.
    We cannot stock pipeline material; it is readily available. It is consumed directly at the cost center, and you need to pay for that consumption.

  • Sun C++ Vs Sun JAVA - Future - Which is faster

    Hi,
    Sometimes I am perplexed by Sun's approach. They ship Sun C++ 12 and market Java. I work on both C++ and Java. I started my career as a Java developer and then shifted to being a C++ developer on the Sun platform. Now I work on both.
    Many of Sun's articles praise Java and say it is faster than C++. Does that list include the Sun C++ compiler too?
    Can't Sun C++ developers develop an app better and faster than a Sun Java app?
    If for Sun everything is Java, why do they invest in C++ compilers?
    Now C++ is changing, and in a couple of years you get C++0x.
    Will Sun upgrade the compiler to support the same?
    Jayaram

    Dear Friend,
    This exercise is absolutely pointless. It can prove nothing.
    It proved C++ is powerful. You can close your eyes against the truth and live in your fantasy world.
    By changing int -> double you went into the muddy ground of floating-point calculations.
    By moving your "toggle" objects from the heap onto the stack you stepped away from what Java semantics allow. No wonder C++ can win here.
    Here the constructor is called only one time. Stack and heap are irrelevant. By the way, the margin was a couple of seconds. No C++ guy will think of Java semantics when programming in C++. Do you think of pointers when you program in Java?
    Both are primitive types. The reason for moving them I have specified.
    Btw, it's likely that the compiler managed to figure out those virtual calls and inline them altogether.
    I used babies of the same mother, Sun: the Sun CC compiler and Sun Java 1.6.
    It's hard to say what happened in your case, as you did not tell us which compiler flags you used.
    Nothing hard or special in my case. It is the truth. C++ won. You can try it yourself.
    It's funny that you used register/inline stuff, because it can hardly make any real difference in this test.
    For a particular C++ compiler on a particular platform and compilation flags it might have an effect, but that's a third-order effect of compiler bugs and inefficiencies.
    Being in a battle started by the benchmark person, I have the freedom to use all weapons in C++. Use whatever you have. Maybe you can ask Sun to put a bug in their CC compiler to prove your point. Then I will test with the MSVC++ compiler.
    Please note I didn't change the end output or logic. Pointers were not required, so I removed them. I was at office work when I did this. I am not a full-time evangelist.
    Compiler options: if you understand the JVM, it uses on-the-fly profiling, compiler optimization, etc. So using a compiler switch is very much reasonable.
    Anyway, the bottom line is that different languages are suitable for different purposes.
    By creatively tweaking your programming environment you can make any given combination of language/compiler/program win a race.
    The first point is right. C++ is a general-purpose language; for each specific purpose use only those parts. Java is a simple language suitable for normal brains.
    Whatever you tweak for Java, it runs through the additional layer of the JVM. So native languages, maybe even Pascal and Basic, will beat Java in many benchmarks.
    Using that car analogy - a sports car on a muddy track will hardly compete with an unloaded truck. Though it doesn't mean that the sports car is slower.
    Again you are taking a risk. The track is not muddy: only an increment operation is done, no floating-point multiplication or division. Even if it were there, you would have no grounds to complain; it is a primitive type. If you still crib, I can't help.
    Java reached the point where performance is not the biggest concern. For the majority of applications you win a great deal more by choosing proper algorithms than by rewriting your application in some "faster" language.
    Again you are mistaken. Maximum CPU clock speeds are getting saturated, and we are going for multicore architectures. Not all apps are parallel, so single-core performance is important. So if you can lose the fat in your language, it is always better.
    Algorithms don't assure that they won't use floating-point operations. And the choice of an algorithm is not a proprietary right of Java; any language can use it.
    So the point is: don't compare the performance of Java with C++ and get patents for the same. It won't take much time for a person like me to break the patents and prove you are wrong.
    If you look at the URL I have given and go three levels up, the test case is the one in the benchmark which had the highest margin of Java win. So your best result was ruined in a couple of hours!
    OK, let me come back to the point. Why is the C++ team of Sun silent?
    I feel they are not funded properly for development, or they were told to keep their mouths shut.
    I would like to hear from the C++ team of Sun, if anyone reading this belongs to that team.
    Regards,
    Jayaram Ganapathy
