Which sorting technique is most efficient?

Which of these three sorting techniques is most efficient, and why?
Bubble sort, insertion sort, or merge sort?
Just getting some opinions.

georgemc wrote:
punter wrote:
ejp wrote:
No need to actively mislead him, thanks.
I have given him my opinion, which he asked for, and not a hard and fast fact. If he is dumb enough to believe it, maybe he deserves that.
But what about the innocent noob who, in the course of doing the right thing and googling, comes across this thread and takes your opinion to be canon? Then they are dumb enough to be pushed out of the profession as well.
Punter said this:
Bubble sort. Why? Because I like that name "Bubble Sort".
It's not only obvious he's not being serious, it's equally obvious that he wasn't even trying to pretend to be serious.
His answer was like somebody asking "how fast is Java" and you answering "7". It's just too obviously non-applicable to the question to cause any harm.
Plus, he went on to encourage the OP to do some research to test the validity of the statement. Which is anything but misleading.
Edited by: endasil on 22-Dec-2009 12:21 PM
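One way to test the validity yourself, as suggested above: time the algorithms on random data. Here is a minimal benchmark sketch in Java; the array size, the seed, and both sort implementations are illustrative assumptions of mine, not anything from this thread:

    import java.util.Random;

    public class SortBench {
       static void bubbleSort(int[] a) {
          for (int i = a.length - 1; i > 0; i--)
             for (int j = 0; j < i; j++)
                if (a[j] > a[j + 1]) { int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
       }

       // classic top-down merge sort on a[lo..hi)
       static void mergeSort(int[] a, int lo, int hi) {
          if (hi - lo < 2) return;
          int mid = (lo + hi) / 2;
          mergeSort(a, lo, mid);
          mergeSort(a, mid, hi);
          int[] merged = new int[hi - lo];
          for (int i = lo, j = mid, k = 0; k < merged.length; k++)
             merged[k] = (j >= hi || (i < mid && a[i] <= a[j])) ? a[i++] : a[j++];
          System.arraycopy(merged, 0, a, lo, merged.length);
       }

       public static void main(String[] args) {
          int[] data = new Random(42).ints(20_000).toArray();
          int[] copy = data.clone();

          long t0 = System.nanoTime();
          bubbleSort(data);
          long t1 = System.nanoTime();
          mergeSort(copy, 0, copy.length);
          long t2 = System.nanoTime();

          System.out.printf("bubble: %d ms, merge: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
       }
    }

On random input the gap should grow roughly like O(n^2) vs O(n log n) as the array size increases, which is the usual reason merge sort is called the most efficient of the three. (Insertion sort wins on small or nearly sorted inputs, which is why library sorts often use it as a base case.)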

Similar Messages

  • Which transport protocol is most efficient in the JMS adapter, and why?

    Hi all,
    Which transport protocol is most efficient in the JMS adapter, and why?
    Also, can anyone tell me how to check queues in the Integration Server and on the receiver side?
    If anyone can explain it rather than just providing a link, I will be delighted.
    Thanks,
    Biplab

    "Which transport protocol is most efficient in the JMS adapter, and why?"
    You have to select the JMS provider for the JMS adapter under Transport Protocol.
    The selection of a JMS provider could be made according to your cost estimation.
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/739c4186c2a409e10000000a155106/frameset.htm
    SonicMQ and IBM MQSeries are widely used.
    "Also, can anyone tell me how to check queues in the Integration Server and on the receiver side?"
    SMQ1 - outbound queues
    SMQ2 - inbound queues
    Regards,
    Prateek
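
    For the queue-checking part, SMQ1/SMQ2 cover the Integration Server side. On the JMS-provider side, a queue can also be inspected programmatically with a QueueBrowser. This is a generic, hypothetical JMS sketch, not SAP-specific; the JNDI names "ConnectionFactory" and "queue/ORDERS" are placeholders:

    import java.util.Enumeration;
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class QueuePeek {
       public static void main(String[] args) throws Exception {
          InitialContext ctx = new InitialContext(); // JNDI settings come from jndi.properties
          QueueConnectionFactory factory = (QueueConnectionFactory) ctx.lookup("ConnectionFactory");
          Queue queue = (Queue) ctx.lookup("queue/ORDERS"); // placeholder queue name

          QueueConnection conn = factory.createQueueConnection();
          QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
          conn.start();

          // browse without consuming, to see what is sitting on the queue
          QueueBrowser browser = session.createBrowser(queue);
          Enumeration<?> e = browser.getEnumeration();
          int count = 0;
          while (e.hasMoreElements()) { e.nextElement(); count++; }
          System.out.println(count + " message(s) pending on " + queue.getQueueName());

          browser.close();
          session.close();
          conn.close();
       }
    }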

  • Which compiler generates the most efficient code?

    Does anybody know which compiler generates the most efficient code? Jikes or javac? Or are there others more powerful? If anybody has any information, please submit a link to some benchmark test. I have tried both compilers, and I think that Jikes clearly compiles faster, but does that mean it is sloppy, or does it not mean anything at all?

    It's not so much the compiler but rather the JVM that is responsible for speed issues.
    Of course, if the compiler generates crappy bytecode, it doesn't help. But ultimately the JVM does the optimizing and running...
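
    One quick way to see that the JVM, not the compiler, dominates performance: time the same method cold and again after the JIT has warmed up. A rough sketch; the method and iteration counts are arbitrary choices for illustration:

    public class WarmupDemo {
       static long sumOfSquares(int n) {
          long s = 0;
          for (int i = 0; i < n; i++) s += (long) i * i;
          return s;
       }

       public static void main(String[] args) {
          long sink = 0; // consume results so the JIT can't discard the work

          long t0 = System.nanoTime();
          sink += sumOfSquares(5_000_000);
          long cold = System.nanoTime() - t0;

          for (int i = 0; i < 200; i++) sink += sumOfSquares(5_000_000); // warm-up

          long t1 = System.nanoTime();
          sink += sumOfSquares(5_000_000);
          long warm = System.nanoTime() - t1;

          System.out.println("cold: " + cold + " ns, warm: " + warm + " ns (sink=" + sink + ")");
       }
    }

    The warm timing is typically much lower than the cold one on a HotSpot JVM, regardless of which compiler produced the bytecode.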

  • Which one is the most efficient - ODI or DIM?

    Hi,
    Which is more efficient to use as middleware between a relational DB and Hyperion Planning: ODI or DIM? Please let me know the benefits of each.
    Thanks,
    Luv


  • How to go from FireWire 800 or Thunderbolt to HDMI? Which one is the most efficient?

    I have no HDMI port on my MacBook, only Thunderbolt and FireWire 800, and I need to connect to my digital TV, which requires an HDMI connection.

    Hello, thanks for your rapid answer.
    I have a MacBook Pro (15"), with an Intel 2.3 GHz Core i7, 4 GB of 1700 MHz DDR3, an NVIDIA GeForce GT 650M with 512 MB, and OS X 10.8.4.
    I have the following ports: FireWire 800, Thunderbolt, USB 3 (x2), an SDXC card slot, audio in, and audio out (headphones).
    I bought a Belkin Mini DisplayPort to HDTV cable (https://discussions.apple.com/message/22742775?ac_cid=op123456#22742775).
    I have an LG 32LH3000 HD TV with 3 HDMI inputs. I regularly use one of these HDMI inputs to watch movies stored on an HDD.
    Thanks again for your help.

  • All of a sudden my Outlook inbox messages are all scrambled by date instead of being sorted with the most recent at the top. How do I sort them with the most recent at the top (which is supposed to be the default)?

    All of a sudden my Outlook inbox messages are all scrambled by date instead of being sorted with the most recent at the top. How do I sort them with the most recent at the top (which is supposed to be the default)?

    As far as I can tell, there is no way to reverse the sort.

  • Most efficient way to make an ordered list?

    Hi all,
    I have a simulation where I have agents running around in an environment. These agents need to get the closest agent near them, and they have to do this a whole bunch of times each cycle -- closest predator to them, closest prey, closest mate, next-closest mate if that one isn't interested, etc. etc.
    I realized that it would be faster if I just got all the agents in the vicinity ONCE, kept them in a sorted list, and then every time you want the closest agent of a certain type, just search from the bottom of the list up until you find the first one of that type.
    So here's the question: to make that list, I ask the environment for all the agents in the vicinity, and then go through them one-by-one. I find out the distance between myself and that agent. I then.... what?
    I could put them all into an ArrayList, and then apply some sorting algorithm to that ArrayList. Or I can try to insert them in the right order WHILE I'm making the ArrayList. Or maybe there's some better Collection object that would be even more efficient -- somehow pushing them in and out of stacks, or whatever else smart programmers think up.
    Can anyone suggest the most efficient way to do this? This is something that every agent has to do every step, so efficiency is key.
    Thanks!
    Edit: As a note, calculating the distance between two agents isn't free, and if I either sort or insert as I'm making it, the naive implementation (i.e. the way I would do it...) would require re-checking this distance for every agent in the list every single time a new agent was added. So maybe I could make some use of a HashMap, so that I can store these distances?
    Edited by: TimQuinn on Oct 15, 2009 9:35 AM

    TimQuinn wrote:
    Ok, thanks for all the great suggestions. I think that caching the distance in a wrapper object is a big plus, and then I can run some tests and see if using a built-in collections sorting algorithm or using a treeset is faster in my specific case. Thanks!
    Any thoughts as to the idea of creating one giant map of the distance from each agent to every other agent just once, rather than having each agent work out their own distances to each other agent? I feel like this would be faster (at the expense of memory), but don't know how I'd start approaching it.
    Well, your idea of the Map would probably work. You would have to make some object that pairs up two agents, something like this:
    public class AgentPair {
       private final Agent a1, a2;
       public AgentPair(Agent a1, Agent a2) { this.a1 = a1; this.a2 = a2; }
       public boolean equals(Object other) {
          if (!(other instanceof AgentPair)) return false;
          AgentPair p = (AgentPair) other;
          // distance is symmetric (a.distanceTo(b) == b.distanceTo(a)), so the reversed pair must be equal too
          return (a1.equals(p.a1) && a2.equals(p.a2)) || (a1.equals(p.a2) && a2.equals(p.a1));
       }
       public int hashCode() {
          // symmetric as well, so that equal (reversed) pairs hash alike
          return a1.hashCode() + a2.hashCode();
       }
    }
    This assumes either that you don't override equals or hashCode in Agent, or that you properly override both.
    Then you would have a map that you can populate, given an array of all agents:
    Map<AgentPair, Double> distMap = new HashMap<AgentPair, Double>();
    Agent[] agents; // assume this holds all agents
    for ( int i = 0; i < agents.length - 1; i++ ) {
       for ( int j = i + 1; j < agents.length; j++ ) {
          distMap.put(new AgentPair(agents[i], agents[j]),
                      agents[i].distanceTo(agents[j]));
       }
    }
    Alternatively, if you're not sure you'll definitely use all of the computations, you could test whether a result is already in the map and compute it only if it isn't; otherwise, use the cached result.
    Any thoughts? Should I make a new post? Abandon the idea?
    As always, try it and see. Essentially, it will be faster assuming these two conditions:
    1) You would have to do more than n^2 (well, actually n choose 2) distance calculations otherwise, and
    2) computing the distance costs more than retrieving from distMap (this isn't a given!). If each Agent had a sequential ID, you could use a two-dimensional array of doubles, which would speed up lookups.
    Edited by: endasil on 15-Oct-2009 3:39 PM
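
    To illustrate that last point about sequential IDs: if each Agent carried a unique id in [0, n), the map could be replaced by a plain 2D array, avoiding the AgentPair hashing entirely. A rough sketch; the getId() accessor is an assumption of mine and is not part of the code above:

    int n = agents.length;              // agents[] as in the snippet above
    double[][] dist = new double[n][n];
    for (int i = 0; i < n - 1; i++) {
       for (int j = i + 1; j < n; j++) {
          double d = agents[i].distanceTo(agents[j]);
          dist[agents[i].getId()][agents[j].getId()] = d;
          dist[agents[j].getId()][agents[i].getId()] = d; // distance is symmetric
       }
    }
    // a lookup is then just dist[a.getId()][b.getId()]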

  • PowerShell - what is the most efficient/fastest way to find an object in an arraylist

    Hi
    I work with a lot of ArrayLists in PowerShell when working as a SharePoint administrator. I first used arrays but found them slow and jumped over to ArrayLists.
    Often I want to find a specific object in the ArrayList, but the response time varies a lot. Does anyone have code for doing this the most efficient way?
    Hoping for some answers :-)
    brgs
    Bjorn

    Often I want to find a specific object in the ArrayList, but the response time varies a lot. Does anyone have code for doing this the most efficient way?
    As you decided to use an ArrayList, you must keep your collection sorted and then use the BinarySearch() method to find the objects you're looking for.
    Consider using a dictionary instead, and if your objects are strings, a StringDictionary.
    You still fail to understand that the slowness is not in the ArrayList. It is in the creation of the ArrayList, which is completely unnecessary. Set up a SharePoint server, create a very large list, and test. You will see: an ArrayList with 10000 items takes forever to create, while a simple SharePoint search can be done in a few milliseconds.
    Once created, the lookup in any collection depends on the structure of the key and the type of collection. A string key can be slow if it is a long key.
    The same rules apply to general database searches against an index.
    The main point here is that SharePoint IS a database, and searching it as a database is the fastest method.
    Prove me wrong, devil! Submit! Back to your cage! Fie on thee!
    ¯\_(ツ)_/¯
    You seem to be making a lot of assumptions about what he's doing with those ArrayLists that don't seem justified, based on no more information than there is in the posted question.
    [string](0..33|%{[char][int](46+("686552495351636652556262185355647068516270555358646562655775 0645570").substring(($_*2),2))})-replace " "
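
    For what it's worth, the two approaches suggested above (keep the collection sorted and binary-search it, or use a dictionary) look like this in Java terms. This is a hypothetical sketch of the principle only, since the thread itself is about PowerShell, where ArrayList.Sort()/BinarySearch() and a hashtable play the same roles:

    import java.util.*;

    public class LookupDemo {
       public static void main(String[] args) {
          List<String> sorted = new ArrayList<String>(
                Arrays.asList("alpha", "beta", "delta", "gamma"));
          Collections.sort(sorted); // binary search requires sorted input

          // O(log n) lookup on the sorted list
          int idx = Collections.binarySearch(sorted, "delta");
          System.out.println(idx >= 0 ? sorted.get(idx) : "not found");

          // O(1) average lookup with the dictionary approach
          Map<String, String> dict = new HashMap<String, String>();
          for (String s : sorted) dict.put(s, s.toUpperCase());
          System.out.println(dict.get("delta"));
       }
    }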

  • What is the most efficient way of passing large amounts of data through several subVIs?

    I am acquiring data at a rate of once every 30 ms. This data is sorted into clusters, with relevant information grouped together. These clusters are then added to a queue. I have a cluster of queue references to keep track of all the queues. I pass this cluster around to the various subVIs, where I dequeue the data. Is this the most efficient way of moving the data around? I could also use "Obtain Queue" and the queue name to create the reference whenever I need it.
    Or would it be more efficient to create one large cluster which I pass around? Then I can use Unbundle by Index to pick off the values I need. This large cluster could have all the values individually, or it could be composed of the previously mentioned clusters (i.e., a large cluster of clusters).

    > I am acquiring data at a rate of once every 30mS. This data is sorted
    > into clusters with relevant information being grouped together. These
    > clusters are then added to a queue. I have a cluster of queue
    > references to keep track of all the queues. I pass this cluster
    > around to the various sub VIs where I dequeue the data. Is this the
    > most efficient way of moving the data around? I could also use
    > "Obtain Queue" and the queue name to create the reference whenever I
    > need it.
    > Or would it be more efficient to create one large cluster which I pass
    > around? Then I can use unbundle by index to pick off the values I
    > need. This large cluster can have all the values individually or it
    > could be composed of the previously mentioned clusters (i.e. a large
    > cluster of clusters).
    It sounds pretty good the way you have it. In general, you want to sort
    these into groups that make sense to you. Then if there is a
    performance problem, you can arrange them so that it is a bit better for
    the computer, but let's face it, our performance counts too. Anyway,
    this generally means a smallish number of groups with a reasonable
    number of references or objects in them. If you need to group them into
    one to pass somewhere, bundle the clusters together and unbundle them on
    the other side to minimize the connectors needed. Since the references
    are four bytes, you don't need to worry about the performance of moving
    these around anyway.
    Greg McKaskle

  • How to determine which drive has the most recent backup?

    I have a few drives that contain backups of files. What would be the easiest way to figure out which of the drives has the most recent backup? When I sort by date, the folder shows one date, though the files inside show a different date. Is there a way to quickly check the drives to find which is the most recent?
    Thanks.

    The folder modification date isn't reliable, as it only tells of changes in the folder itself, not in subfolders. There is no efficient way. I do, however, highly suggest Time Capsule; it is WAY easier, and you can tell it not to back up the system data and/or applications, which would prevent this hassle.
    You could try this: go to the first drive, type different letters, like "a" for instance, and see which file is the most recent document/folder.
    Do the same thing on the other drives and compare.

  • Most efficient way to delete "removed" photos from hard disk?

    Hello everyone! Glad to have this great community to come to for help. I searched for this question but came up with no hits. If it's already been discussed, I apologize and would love to be directed to the link.
    My wife and I have been using LR for a long time. We're currently on version 4. Unfortunately, she's not as tech-savvy or meticulous as I am, and she has been unknowingly "Removing" photos from the LR catalogues when she really meant to delete them from the hard disk. That means we have hundreds of unwanted raw photo files floating around on our computer and no way to pick them out from the ones we want! As a very organized and space-conscious person, I can't stand the thought. So my question is: what is the most efficient way to permanently delete these unwanted photos from the hard disk?
    I did find one suggestion that said to synchronize the parent folder with its respective catalogue, select all the photos in "Previous Import," and delete those, since they will be all of the photos that were previously removed from the catalogue.
    This is a great suggestion, but it probably wouldn't work for all of my catalogues, since my file structure is organized by date (the default setting for LR). So two catalogues will share the same "parent folder" in the sense that they both have photos from May 2013, but if I synchronize May 2013 with one, then it will get all the duds PLUS the photos that belong in the other catalogue.
    Does anyone have any suggestions? I know there's probably not an easy fix, and I'm willing to put in some time. I just want to know if there is a solution and make sure I'm working as efficiently as possible.
    Thank you!
    Kenneth

    I have to agree with the comment about multiple catalogs referring to images that are mixed in together... and the added difficulty that may have brought here.
    My suggestions (assuming you are prepared to combine the current catalogs into one)
    in each catalog, put a distinctive keyword onto all the images so that you can later discriminate these images as to which particular catalog they were formerly in (just in case this is useful information later)
    as John suggests, use File / "Import from Catalog" to bring all LR images together into one catalog.
    then, in order to separate out the image files that ARE imported to LR from those which either never were or have since been removed, I would duplicate just the imported ones to an entirely separate and dedicated disk location. This may require the temporary use of an external drive, with enough space for everything.
    to do this, highlight all the images in the whole catalog, then use File / "Export as Catalog" selecting the option "include negatives". Provide a filename and location for the catalog inside your chosen new saving location. All the image files that are imported to the catalog will be selectively copied into this same location alongside the new catalog. The same relative arrangement of subfolders will be created there, for them all to live inside, as is seen currently. But image files that do not feature in LR currently, will be left behind by this operation.
    your new catalog is now functional, referring to the copied image files. Making sure you have a full backup first, you can start deleting image files from the original location, that you believe to be unwanted. You can do this safe in the knowledge that anything LR is actively relying on, has already been duplicated elsewhere. So you can be quite aggressive at this, only watching out for image files that are required for other purposes (than as master data for Lightroom) - e.g., the exported JPG files you may have made.
    IMO it is a good idea to practice a full separation of image files used in your LR image library, from all other image files. This separation means you know where it is safe to manage images freely using the OS, vs where (what I think of as the LR-managed storage area) you need to bear LR's requirements constantly in mind. Better for discrete backup, too.
    In due course, as required, the copied image files plus catalog can be moved bodily to another drive (for example, if they have been temporarily put on an external drive, and you want to store them on your main internal one again). This then just requires a single re-browsing of their parent folder's location, in order to correct LR's records inside this catalog, as to the image files' changed addresses.
    If you don't want to combine the catalogs into one, a similar set of operations as above, can be carried out for each separate catalog you have now. This will create a separate folder structure in each case, containing just those duplicated image files. Once this has been done for all catalogs, you can start to clean up the present image files location. IMO this is very much the laborious and inflexible option, so far as future management of the total body of images is concerned... though there may still be some overriding reason for working that way.
    RP

  • OK, as always I waited before downloading a new OS and I'm sure glad I did. On the App Store I sorted comments by most critical...and WHOA...what is Apple doing? No support for Logic 9, MS Office? Is Apple only trying to reach the iPhone/iTouch crowd? HELP!

    OK, as always I waited before downloading a new OS and I'm sure glad I did. On the App Store I sorted comments by most critical...and WHOA...what is Apple doing? No support for Logic 9, MS Office? Is Apple only trying to reach the iPhone/iTouch crowd? HELP! I was going to buy a new Mac Pro & two 27" monitors, but until I see some real problem-addressing by Apple...I'll keep what I have and see how everything pans out. If anyone has any comments to allay my fears, I welcome them. I've been a devoted Mac user since 1993: 7500; G4; G5; and my latest Mac Pro...Where do I go? Again...HELP!

    Hi there,
    If you look through ALL the reviews, they are mainly good. I feel that Lion is an excellent upgrade, although not essential.
    There have been some issues with MS Office, but right now, it is up to Microsoft to issue a Lion compatible update, which will come in time. Saying this, MS Office has been working fine on my mac, it seems to be an isolated issue.
    Logic 9 seems like a strange issue. Again, an update looks to be coming soon, with Lion support.
    I do not feel that Apple is only focusing on the iPhone and iPad user base. There are many features carried along, but the machine can still be used for pro tools and pro work just as well. It is still a fantastic, reliable, fast, easy-to-use OS, which I have had very few problems with. Some additions you may not use, but they don't get in the way. You will love the new Exposé, Mission Control, as it is great for pro users who have many windows open at once, and the new Spaces. You may, however, never use Launchpad, but you don't have to; just drag it away from the Dock!
    I really recommend buying a Mac with Lion, although if you are worried about bugs, wait a few months for the issues to be ironed out and updates to be released. Because the update is so very cheap, I really think you can hardly go wrong. Try it out with your current Mac, and if you like it, go ahead and buy your new ones.
    Lion is fantastic, albeit maybe rushed.
    Any other queries, just ask,
    Nathan

  • Most efficient way to globally shift page content

    Hi folks,
    I decided to start my InDesign career with something nice and simple: a 326-pp book divided into 57 sections.  I just got my first POD proof back, and it all looks good – except that I would like to shift everything down the page by a pica or a quarter of an inch.
    The master page setup in my book is not the most efficient.  I have an A-master in each story document (= chapter = section) with nothing on it but a single column with margin settings. Then I have two chapter title page masters (recto & verso) and two body text masters (recto & verso).  I can modify the masters on my sync source story document, but when I try to sync the changes through the other story documents, I get erroneous placement of headers, and I lose the story- (i.e., chapter-) specific title text (which is entered in the title master pages).
    I have been looking for a global fix. I tried adjusting the top margin in the A-master page, but the margin doesn't seem to push page elements down.  Another possibility is to set up a global crop routine in Acrobat (my final output is PDF), and then add the material chopped from the bottom back to the top (also in Acrobat).  I'd like to find some way of pulling off the necessary shift in InDesign, however.  I have a gut feeling it's possible, but a search of the InDesign Help material hasn't turned up anything yet.
    Any thoughts?
    TIA
    Richard Hurley
    Grass Valley MultiMedia

    "Use the AdjustLayout.jsx, a sample script that ships with InDesign. Go the the Scripts Panel and find it there and double-click it,"
    Neat. 
    The Acrobat crop is the way to make this happen in a hurry, but I had a good time playing with the Script Panel. Thanks for letting me know about it.

  • Most efficient way to load XML file data into tables

    I have a complex XML file running into MBs. I want to load its data into 7-8 tables.
    Which way will be better:
    1) Use SQL*Loader to load directly into the 7-8 tables by modifying the control file.
    Is this really possible and feasible? I am not even sure it is.
    2) Load data as XML Type in a table and register it. Then extract from there to load into various tables.
    Please help. I have to find the most efficient way of doing it.
    Regards,
    Sudhir

    Yes it is possible to use SQL*Loader to parse and load XML, but that is not what it was designed for and so is not recommended. You also don't need to register a schema, just to load/store/parse XML in the DB either.
    So where does that leave you?
    Some options
    {thread:id=410714} (see page 2)
    {thread:id=1090681}
    {thread:id=1070213}
    Those talk some about storage options and reading in XML from disk and parsing XML. They should also give you options to consider. Without knowing more about your requirements for the effort, it is difficult to give specific advice. Maybe your 7-8 tables don't exist and so using Object Relational Storage for the XML would be the best solution as you can query/update tables that Oracle creates based off the schema associated to the XML. Maybe an External Table definition works better for reading the XML into the system because this process will happen just once. Maybe using WebDAV makes more sense for loading XML to be parsed (I don't have much experience with this, just know it is possible from what I've read on the forums). Also, your version makes a difference as you have different options available depending upon the version of Oracle.
    Hope all that helps as a starter.
    Edited by: A_Non on Jul 8, 2010 4:31 PM
    A great example, see the answers by mdrake in {thread:id=1096784}

  • Most Efficient Way to Populate My Column?

    I have several very large tables, some of them are partitioned tables.
    I want to populate every row of one column in each of these tables with the same value.
    1.] What's the most efficient way to do this given that I cannot make the table unavailable to the users during this operation?
    I mean, if I were to simply do:
    update <table> set <column>=<value>;
    then I think I'll lock every row for writing by others until the commit. I figured there might be another way that makes better sense in my case.
    2.] Are there any optimizer hints I might be able to take advantage of here? I don't use hints much but with such a long running operation I'll take any help I can get.
    Thank you

    1. Maybe a better solution exists.
    Since you do not want to lock the table...
    Save the ROWIDs of all the rows in that table in a temporary table.
    Write a routine which will loop through this temporary table and use the ROWID to update the main table and issue commit at regular intervals.
    However, this does not take into account the rows that would be added to main table after the temporary table has been created.
    2. Not that I am aware of.
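
    For illustration, here is a rough Java/JDBC rendering of that routine; the connection details, table, column, and value are placeholders, it reads the ROWIDs from a cursor rather than a temporary table, and in practice this would more likely be done in PL/SQL:

    import java.sql.*;

    public class BatchedUpdate {
       public static void main(String[] args) throws SQLException {
          try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//host:1521/SERVICE", "user", "pass")) {
             conn.setAutoCommit(false);
             try (Statement sel = conn.createStatement();
                  PreparedStatement upd = conn.prepareStatement(
                        "UPDATE mytable SET mycolumn = ? WHERE rowid = CHARTOROWID(?)");
                  ResultSet rs = sel.executeQuery("SELECT rowid FROM mytable")) {
                int pending = 0;
                while (rs.next()) {
                   upd.setString(1, "the-value");
                   upd.setString(2, rs.getString(1));
                   upd.executeUpdate();
                   if (++pending % 10_000 == 0) conn.commit(); // commit at regular intervals
                }
                conn.commit();
             }
          }
          // note: committing across the open cursor risks ORA-01555 on a busy table,
          // which is one reason to snapshot the ROWIDs into a temporary table first
       }
    }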
