Pros And Cons Of Using An External USB 2.0/FW DVD Burner?

This is not entirely an iDVD query, though iDVD is the program that will be using the DVD burner most. (I have also posted it on the iMac forum but have received no response yet, and iDVD users are likely to be more knowledgeable about DVD burning.)
I intend to get an iMac when the next Leopard version is introduced.
However, I am not keen on slot-loading CD/DVD burners, as I once had a thicker-than-normal CD jam temporarily. I also feel there is more potential for scratching, and a few months ago I read that the mechanism used was comparatively flimsy and only good for a couple of hundred burns. I must stress that I don't know how accurate that last statement is.
So I have been toying with the idea of getting a cheap (LaCie) USB 2.0 or FW DVD burner to save wear and tear on the internal iMac mechanism.
If anyone could answer these questions I would be grateful:-
1. The USB 2.0 and FW versions have identical performance specs, yet the FW model costs nearly twice as much. So does the USB 2.0 burner have any disadvantages? (I already have Toast 8).
2. Would they work as quickly and efficiently as the internal iMac burner? (I realise I would have to convert my projects to disk images first).
3. Could I use them to install/reinstall the OS, should that ever be required?
I am keen to know of any other advantages or disadvantages that I may have overlooked.
Ian.

I am posting this info, provided by Karsten, for the benefit of anyone else who may be contemplating making their own FireWire DVD burner:-
"Although I have done car maintenance and general household DIY I have never attempted anything with computers," was my comment.
Karsten's reply was:-
........then, this is really a no-brainer for YOU.. me too, just a Tim Allen guy/Home Improvement ..
first, open the FW case, which is usually 4-6 small Phillips screws on the bottom (back?) ... the innards then 'slide out', at the back or front, depending on the case; some cases split into top/bottom halves...
your DVD burner has two connectors - a smaller one for power, a very large (60? pins) one for 'data' .. be careful with this one..
ok: connect the power cable within the case (mostly needs a strong push).. connect the data-/ATA cable to the drive.. the plug has a 'nose', so no wrong connection is possible, unless you push real hard = kaputt ...
that large one has to be placed 'symmetrically'.. don't bend the many pins.. just work slowly and carefully.. easy...
ahh, forgot to mention: INSIDE are two 'slides'.. use 2-4 screws to fix the drive within the case (you usually find the screws in the package.. of the case AND of the drive..)
close case, ...
... = 10min job, in case you have to brew an Espresso meanwhile..

Similar Messages

  • Pros and cons of using DB LINK

    Hi
    I am planning to use a dblink in my current project to access an external database. But is there any other alternative to this? What are the pros and cons of using Oracle DB Links? Can anybody help me on this?
    Thanks in advance .
    Pradipta

    Well, it depends on where you want to access the other database from. If you want to access tables in database B from a stored procedure in database A, then you have to use a dblink to do it. If your front-end application sometimes needs data from database A and sometimes needs data from database B, then you could establish two connections, one to each database, in your front-end, and use the appropriate connection for the different queries.
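    If it helps, the two-connection approach looks roughly like this in plain JDBC - a minimal sketch only, where the hosts, SIDs, credentials and table names are made up for illustration (the Oracle JDBC driver jar must be on the classpath):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class TwoDatabases {
        public static void main(String[] args) throws Exception {
            // One connection per database - no db link involved.
            try (Connection conA = DriverManager.getConnection(
                     "jdbc:oracle:thin:@hostA:1521:ORCLA", "usera", "pwda");
                 Connection conB = DriverManager.getConnection(
                     "jdbc:oracle:thin:@hostB:1521:ORCLB", "userb", "pwdb")) {
                // Rows that live in database A come over connection A...
                try (PreparedStatement ps = conA.prepareStatement(
                         "SELECT ename FROM emp WHERE deptno = ?")) {
                    ps.setInt(1, 10);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) System.out.println(rs.getString(1));
                    }
                }
                // ...and rows that live in database B come over connection B.
                try (PreparedStatement ps = conB.prepareStatement(
                         "SELECT dname FROM dept");
                     ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) System.out.println(rs.getString(1));
                }
            }
        }
    }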
    HTH
    John

  • Pros and Cons of using JDBC Adapter with XI

    Hi Experts,
    Can somebody help me understand the pros and cons of using different JDBC adapters with XI?
    I would be really grateful to you.
    Thanks & Regards
    Gopal

    Hi,
    Go through the links:
    /people/varadharajan.krishnasamy/blog/2007/02/27/configuring-jdbc-connector-service-to-perform-database-lookups
    http://help.sap.com/saphelp_nw04/helpdata/en/44/f565854b7341e6e10000000a1553f6/frameset.htm
    Regarding JDBC Adapter in SAP XI
    Configuring File and JDBC adapters
    Thanks,
    Rajani.

  • What are the pros and cons of using people keywords, given that my catalogue is already up to date with regular keywording of all subjects? E.g., will the people keyword transfer to other programs? Can I use the same name for a people keyword and a regular keyword?

    What are the pros and cons of using people keywords, given that my catalog is already up to date with regular keywording of all subjects?  e.g., will the people keyword transfer to other programs?, can I use the same name for a people keyword and regular keyword in the same photo?

  • Pros and cons of using disksets for SUNW.HAStoragePlus

    Hi,
    until now we have been using disksets for SUNW.HAStoragePlus when installing HA Oracle on both Sun Cluster 3.1 and 3.2.
    In the article "Installing and Configuring Sun Cluster 3.1 Software for Oracle Database 10g HA" (http://www.sun.com/bigadmin/features/articles/cluster3.1_oracle10g.pdf), the author prefers using just the disk devices for SUNW.HAStoragePlus.
    What are the pros and cons of using disksets for disks you'll use for SUNW.HAStoragePlus?
    I think that if you do not need to add more than one disk to a diskset, there's no need to use disksets. So you can eliminate a problematic layer such as SDS. What do you think, and which option do you prefer?
    Murat BALKAS

    some more details:
    In "Chapter 4 - Configuring Solaris Volume Manager Software" of the "Sun Cluster Data Service for Oracle Guide for Solaris OS" (http://docs.sun.com/app/docs/doc/819-2980?l=en&q=Sun+CLuster+3.2), the SDS disksets we use for HA-Oracle installations are explained in detail.
    Our procedure to create the diskset that holds the Oracle files is attached.
    After reading the article, I realized that using disksets is not a must for filesystems to be mounted via SUNW.HAStoragePlus. If so, I would prefer using the disk devices directly instead of putting them in disksets, so that I can eliminate one more layer, namely the problematic SDS layer.
    Our disk devices (below, /dev/did/rdsk/d9 and /dev/did/rdsk/d12) are disk partitions from a RAID-5 logical drive on the storage.
    * Create the disksets required for HA-ORACLE files
    root@vasdb1 # metaset -s oraset10fs -a -h vasdb1 vasdb2
    root@vasdb1 # metaset -s oraset10fs -a -m vasdb1 vasdb2
    root@vasdb1 # metaset -s oraset10fs -a /dev/did/rdsk/d9 /dev/did/rdsk/d12
    root@vasdb1 # metainit -s oraset10fs d325 1 1 /dev/did/rdsk/d12s0
    oraset10fs/d325: Concat/Stripe is setup
    root@vasdb1 # metainit -s oraset10fs d305 1 1 /dev/did/rdsk/d9s0
    oraset10fs/d305: Concat/Stripe is setup
    root@vasdb2 # newfs /dev/md/oraset10fs/rdsk/d305
    * Change the ownership of the newly created devices for use by the Oracle installation owner.
    root@vasdb1 # chown oracle10:dba /dev/md/oraset10/rdsk/d*
    root@vasdb1 # chown oracle10:dba /dev/md/oraset10/dsk/d*
    root@vasdb2 # chown oracle10:dba /dev/md/oraset10/rdsk/d*
    root@vasdb2 # chown oracle10:dba /dev/md/oraset10/dsk/d*
    * Make required directories on both nodes for mountpoints
    root@vasdb2 # cd /global
    root@vasdb2 # mkdir oracle10
    root@vasdb1 # cd /global/
    root@vasdb1 # mkdir oracle10
    * Add directories to vfstab on both nodes
    root@vasdb2 # more /etc/vfstab
    /dev/md/oraset10/dsk/d325 /dev/md/oraset10/rdsk/d325 /global/oracle10 ufs 2 no logging
    * Create raw devices for use as redo log files.
    root@lbsdb1 # metainit -s oraset d312 -p d305 1025M
    d312: Soft Partition is setup
    root@lbsdb1 # metainit -s oraset d313 -p d305 1025M
    d313: Soft Partition is setup
    root@lbsdb1 # metainit -s oraset d314 -p d305 1025M
    d314: Soft Partition is setup
    root@lbsdb1 # metainit -s oraset d315 -p d305 1025M
    d315: Soft Partition is setup
    root@lbsdb1 # metainit -s oraset d316 -p d305 1025M
    d316: Soft Partition is setup
    root@lbsdb1 # metainit -s oraset d317 -p d305 1025M
    d317: Soft Partition is setup
    root@lbsdb1 # metainit -s oraset d318 -p d305 1025M
    d318: Soft Partition is setup
    root@lbsdb1 # metainit -s oraset d319 -p d305 1025M
    d319: Soft Partition is setup
    * Register
    root@vasdb1 # scrgadm -a -L -j oraset10fs-lh -g oraset10fs-rg -l oraset10fs-IP -n net1@vasdb1,net2@vasdb2
    root@vasdb1 # scrgadm -a -t SUNW.HAStoragePlus:2
    root@vasdb2 # scrgadm -a -j oracle10-hastp-rs-raw -g oraset10-rg -t SUNW.HAStoragePlus:2 -x GlobalDevicePaths=oraset10,/dev/md/oraset10/rdsk/d305 -x AffinityOn=TRUE
    root@vasdb2 # scrgadm -a -j oracle10-hastp-rs-fs -g oraset10-rg -t SUNW.HAStoragePlus:2 -x FilesystemMountPoints=/global/oracle10 -x AffinityOn=TRUE
    root@vasdb2 # scswitch -e -j oracle10-hastp-rs-fs
    root@vasdb2 # scswitch -e -j oracle10-hastp-rs-raw

  • Are there pros and cons to using TFS as compared to Sharepoint?

    are there pros and cons to using TFS as compared to Sharepoint?

    TFS and SharePoint are two different products with overlapping functionality.
    If you are planning to use TFS for Application Lifecycle Management, then I would not suggest replacing TFS with SharePoint.
    Hope this helps!
    Ram - SharePoint Architect
    Blog - SharePointDeveloper.in
    Please vote or mark your question as answered if my reply helps you.

  • What are the pros and cons re using an intel iMac vs MacPro with LP8?

    I'm considering getting a new Intel Mac in the near future (presuming a new MacPro will be released in the near future - maybe at the same time as Leopard). I compose largely for film and television, mostly using a combination of loops, software instruments and some recorded live performance. I usually sync to a low-res QuickTime movie.
    I do like the idea of a simple and uncluttered work environment with an iMac, added to which there's also a degree of portability with the iMac; however, the MacPro is obviously more powerful. I'm not sure how much the difference in power between the two computers would affect me.
    Would I be compromising myself much if I went for the iMac over the MacPro? What are the pros and cons of the iMac vs the MacPro in relation to Logic Pro 8?

    In the world of large-scale music composition using samples, loops, etc. (especially for film), your two biggest needs are RAM and HD speed/access. The iMac loses big time in this department (as a single computer at least), as it can only be upgraded to 4 GB of RAM, only holds one internal HD, and also lacks any PCI expansion for DSP cards, audio interfaces, etc.
    In general you want to separate your data vs applications as much as possible, to ensure both can be accessed quickly and easily by the computer. So it's best to have your system software / applications on one internal HD, and your Logic data (samples, audio recordings, loops) on another drive(s). You can do this with an iMac via USB, FW800, & FW400, but depending on how big your projects get, how many USB and FW drives / interfaces you use, and how much data you need to stream, you could overload the bus on the iMac (not sure if it has multiple busses or not). Plus, speed-wise, internal SATA drives are much faster than external drives, and the MacPro with its 4 internal slots seems like a great choice for speed and flexibility.
    However, if you don't find yourself doing large-scale projects, then you might be better off going with an iMac. We just purchased one for my father and it runs great! It blows the socks off my Dual 2.5 G5 tower in terms of CPU speed! If you run into problems running things from the iMac, you could always add a Mac mini down the road to stream sample libs, as a Logic node, etc., which seems to be a much more cost-effective solution.
    I hope this was helpful, but I probably just made the decision harder. In fact, if I were to start over today, I don't know if I'd go with a MacPro or an iMac / Mac mini combination. They both seem to have their advantages / disadvantages.
    Best of luck!

  • What are the Pros and Cons of Using Batch Numbers over Serialization

    Dear SAP Gurus:
    Will someone please give me the benefits of using Batch Management over serial numbers in this scenario, or vice versa:
    Client wants to trace all the components of an assembly in a BOM, even the raw material. The client sends the material out today and has the vendor assign serial numbers to the individual pieces; the client gives the range of serial numbers to use. We are looking at using batch numbers to accomplish this and issuing one material and batch number to a production order, then using the MB56 batch where-used functionality to view history. I want to understand the benefits of this. Please advise, and points will be awarded as always.
    Also, in this scenario, can you issue multiple material/batch numbers to one vendor operation that has its own production order?

    Yes, it is a subcontract.
    Example: This is the solution, but I need the pros and cons of doing this scenario.
    A sheet of metal is sent to the vendor to make, let's say, 1800 pieces of material number nas5703-01. All 1800 pieces come back and are issued to a production work order using one batch number for one material item, so that the batch number can be traced in history in MB56 and a fit-up report.

  • Pros and cons of using the email sending package in Oracle 8.1.6

    Hi,
    I would like to know the advantages/disadvantages of using the email sending package from Oracle 8.1.6 compared to sending the same using, say, Perl or PHP.
    I am developing a site in PHP/Oracle 8.1.6, in which I am supposed to create a payment module. Whenever a user registers (for a free trial or a site subscription) I'll have to send him a welcoming mail. In addition to this I am also supposed to find out whether subscribers are paying at the right time and, if not, send them reminder mails, and so on for related scenarios. I can do the same in Perl or PHP, but if I am not gaining much (say, based on server performance or load) then I think I can go ahead with the Oracle package. When I tested it I found that it's slow. What about the load that it may cause for the server (ours is Linux)?
    Please do give inputs on this.

    Hi Ravi,
    Thanks for your reply.
    But I am specifically looking at pros and cons for web services. So the thread which you passed to me won't help.
    Regards
    Nitin.

  • Pros and cons of using iFS

    Hi all,
    assuming that I have stored all of my data (both RDBMS data and non-relational data - html files, xml files) in iFS, the first thing I want to ask you is:
    1) if I want to read the iFS and display only the RDBMS data from the iFS, how fast can it be done compared to traditionally storing the RDBMS data in a table in Oracle8i?
    2) if I want to use indexing, how fast is iFS compared to doing indexing in Oracle8i?
    3) if I want to search for a particular record, how fast is it compared to doing the same in an RDBMS table in Oracle8i?
    4) say I want to mirror the contents of the iFS from one Oracle8i server to another Oracle8i server - what are the pros and cons?
    Please throw some light on the above points, which is crucial for us in deciding whether or not to go for iFS.
    awaiting reply

    Ravi, I'm sorry, but I don't understand your questions. My comments are preceded with >>.
    Assuming that I have stored all of my data - both RDBMS and non-relational data, namely html files and xml files - in iFS, the first thing I want to ask you is:
    >> I don't know what you mean by "store your RDBMS data in iFS". You store RDBMS data in the Oracle database directly.
    1) if I want to read the iFS and display only the RDBMS data from the iFS, how fast can it be done compared to traditionally storing the RDBMS data in a table in Oracle8i?
    >> You don't store RDBMS data in iFS.
    2) if I want to use indexing, how fast is iFS compared to doing indexing in Oracle8i?
    >> It's the same. Under the covers, iFS uses both the normal RDBMS indexing for its metadata, and the normal interMedia Text indexing for the non-relational data.
    3) if I want to search for a particular record, how fast is it compared to doing the same in an RDBMS table in Oracle8i?
    >> It's the same. iFS uses the normal RDBMS searching capabilities under the covers.
    4) say I want to mirror the contents of the iFS from one Oracle8i server to another Oracle8i server - what are the pros and cons?
    >> I'm not sure what you mean by "mirror". Are you referring to replicating?

  • Pros and Cons of using different uploads in LSMW

    Hi,
    I need to prepare a document, which specifies the pros and cons of migration using LSMW by different methods for a specific upload, where all the below options are available.
    1) Batch Input
    2) Direct Input
    3) Idocs
    4) BAPI
    If anyone has prepared any document, can you please send it to me at [email protected] or give me some pointers.
    Thanks in advance,
    Ananth

    Hi Kathirvel,
    I need to upload data into SAP using LSMW. I also need the document you sent to Ananth. Please send it to me either at [email protected] or [email protected] - it will be a great help.
    One more thing, Kathirvel: can you help me find out how to map flat legacy files to hierarchical structures of SAP objects? (The flat file has a header record and line items.)
         If you need more information please ask.
    Thanks a lot,
    Navdeep

  • New Firewire 800 External Hard Drive and how to use old External USB Drive

    Hi All,
    I have a new LaCie d2 Quadra Hard Disk hard drive 500 GB - FireWire / FireWire 800 / Hi-Speed USB / eSATA-300 on the way, when it arrives I plan to use it as my Time Machine Backup drive with the FireWire 800 connection. I am currently using a Seagate Free Agent USB 2.0 320 GB external drive as my Time Machine disk. The reason for upgrading is the USB drive is slow slow slow! I am not concerned about the data on my old drive and will just wipe it clean using Disk Utility. What I would like to do is use the old drive to archive iTunes 7.7.1 and Aperture 2.0 music and photographs. Can someone give me an idea how to go about doing this?
    Thanks!

    Hi, yes that will copy the whole iTunes.. including everything in iTunes.. BUT, as you add to your iTunes on the main HD, with the ext HD I think you will have to manually update anything added, or drag'n'drop the whole thing again.. You could also locate your iTunes Music library & just copy that across.. With images, once again, find your iPhoto library & just copy that across.. L

  • What are the pros and cons of using a port system?

    Hello All,
    I'm a new explorer in the OS X UNIX world, and have installed macports, and, for the most part, succeeded in building and using a number of scientific applications.
    I have noted a somewhat negative attitude by some on the use of port systems, while others seem quite content with them.
    When making my decision to use macports, these "selling points" seemed desirable to me:
    ¤ Confines ported software to a private “sandbox” that keeps it from intermingling with your operating system and its vendor-supplied software to prevent them from becoming corrupted.
    ¤ Allows you to create pre-compiled binary installers of ported applications to quickly install software on remote computers without compiling from source code.
    Especially the first point seems valuable, but am I deluding myself? Or am I losing functionality/flexibility? Or am I just missing out on manually installing lots of dependencies?
    _I'm not trying to start a feud, here._
    I'm just looking for some pointers (preferably well-substantiated) from those more knowledgeable than me, before I am any further committed to a choice I might later regret.
    Thanks,
    PWK

    The biggest drawback/complaint I have is that you're bound by the implementation/installation policy of whoever built the port.
    For example, take the installation issue - all software gets installed into some specific directory, which is great on one hand - fewer compatibility issues with conflicting versions of what Apple provides. The downside, though, is that nothing on your machine will use these versions unless/until you tweak them.
    For example, maybe you want to install the MacPorts version of PHP, great, but the standard Apache won't use that, so you either need to install the MacPorts version of Apache, or tweak your Apache installation to use the non-standard PHP version.
    Well, what about PATHs, I hear you ask? Well, sure, you could prepend the MacPorts/fink/whatever directory to your $PATH, but then you always use the MacPorts/fink/whatever version of any installed software, which might not be what you want.
    This becomes more of an issue in a multi-server environment where you have multiple systems that all need tweaking/maintaining - nothing worse than setting up a new server by copying an existing installation, only to find that it depends on MacPorts/fink/whatever being installed.
    The corollary to this is that these package managers often install ancillary software that you do not need, nor want. It might have improved since I last looked, but installing either MacPorts or fink, for example, installs whole new versions of perl, GNU tools (gzip/gunzip, etc.), curl, and more - they even install new copies of openssl/ssh.
    I don't want these. These already exist on my system so what are they needed for? Why can't they use the standard copies? Are they 'tweaked' in some way? How? why?
    The secondary issue is that you are limited to the port's implementation - especially compile options - which may not be ideal for your machine.
    Unlike most GUI-based software, much open-source software uses compile-time options to configure the executable. Now the port installer might do a reasonable job of tweaking the installation, but it's not psychic so there will be cases where you end up with sub-optimal installations. Sure, they might work well enough, but that doesn't beat knowing what those options are up-front and building your own.
    Now there have been cases where I've tried to install software and almost given up when faced with a daunting list of dependencies (try RRD, or PHP w/ GD, for example), but when you succeed, the satisfaction of getting it working, plus the fact that you now know how to do it, counts for a lot.
    Now, do I wish that Apple would do a better job of keeping up with the latest versions of the open source software they include in Mac OS X? Absolutely - isn't that what Software Update is all about? But I also wish the port maintainers would spend more of their time updating the original source configure and make scripts to incorporate whatever changes they're making to the code, so that Mac users can easily use the same source distribution as other UNIX flavors.
    And right there is the final rub, IMHO - all the while the port managers create their distributions of common tools, Mac OS X is treated like a poor step-child that's kept in the cellar. OK, maybe not that bad, but there's no reason why anyone who wants to install open source software on a Mac should need much more than:
    (download the source)
    ./configure
    make
    sudo make install
    it really isn't all that hard. Too often the port managers perpetuate the myth that Mac OS X is too different from other UNIX systems to work with the standard tools that everyone else knows.
    Now, maybe I'm also too old for this game since you always downloaded and built tools yourself when I started, and maybe package managers on Linux (which may have the same issues I've complained about) have helped elevate Linux in the mindset of a younger generation who are looking for a quick fix. All I can say to that is…
    GET OFF MY LAWN! :-D

  • Photoshop CS6: Pros and Cons of Using Smart Objects

    I haven't had Photoshop CS6 for that long, and have only just got past feeling uncomfortable with using Curves, now that I've learnt how to use them properly.
    My concern is - I am currently learning about Smart Objects. The concept, at first, seemed like 'the best thing since sliced bread': being able to non-destructively use filters, the Shadows/Highlights command, Unsharp Mask, endlessly scale using Free Transform, etc., without harming pixels at all.
    However, the more articles I read about their use in Photoshop, the more I am afraid to start using them in my workflow.
    I understand that when you convert to a Smart Object, the process is non-destructive, i.e. I can make as many readjustments to a filter as I like, and Photoshop will always work from the embedded container file (which has had no filter adjustment made to it) to apply the filter at my most recently adjusted settings. If I later decide I don't want to use a filter at all, and rasterize the Smart Object back into a regular layer, is this process non-destructive as well?
    Then there is this article, which I struggle to understand properly:
    http://bjango.com/articles/smartobjects/
    Please see the part 'Smart Objects Created in Photoshop'. It seems to say I can't scale a Smart Object without causing interpolation and blurry edges. Can somebody please clarify what the writer of this article is trying to get across, because it is well documented that Smart Objects can be endlessly rescaled non-destructively.
    Please understand I use Photoshop primarily for editing photographs.

    There is much modern focus on "non-destructive" editing, but keep in mind if you don't overwrite or destroy the original file there is no destruction at the highest level.  Put in layman's terms, you could always start over with the raw file.
    That thought segues into my next one: non-destructive editing makes sense if you need to use the same information for a variety of somewhat related purposes, or if the work product may need to change (e.g., to suit the whims of a fickle client).
    But at another extreme, if you're editing for a particular purpose - say creating the best possible print from an exposure - sprinting right for the finish line by changing pixel values directly and being done with it can be an extremely effective approach.  This requires that you get things right the first time, and that takes practice.
    Some folks do their Photoshop work by building up layer after layer and using smart objects, smart filters, etc., and this can be effective but no computer has yet been built that can composite all that stuff in real time with a big image.  So there IS a cost to doing it.  What you might gain by being able to re-do things, you might not have needed to gain if your control responses were instantaneous and you could tweak the intermediate result at every step very easily.  Note the number of posts about how slow Photoshop CS6 is/was at editing deep documents, some by people using 2012 computers.
    As with most things, it's horses for courses.  It's good that Photoshop gives us rich tools and choices for how to work.
    Regarding your specific question, bear in mind that what's communicated to the parent document from each of its embedded Smart Objects is a flat, rasterized image.  Think of the embedded smart object kind of like going off and opening another document, making the changes you want, saving the document, then flattening it and pasting the pixels into your parent document.
    In the very first example in the linked article, they show how the smart-object-rasterized image of a vector circle, subsequently scaled by resampling the parent document in which the Smart Object is used, becomes fuzzy as it is scaled up.  Once you understand this you realize that of course you could scale up the smart object itself, e.g., to a size equal to or larger than what's ultimately needed by the parent document, and then it could be crisp in the parent document where it's used.
    Of course, having all your smart objects at a size larger than you need takes up even more resources.
    -Noel

  • Pros and Cons of using REST over JMS (and other technologies)

    Hey all,
    I am working on a project where we were using JMS initially to send messages between servers. Our front end servers have a RESTful API and use JEE6, with EJB 3.1 entity beans connected to a MySQL database and so forth. The back end servers are more like "agents", so to speak: we send some work for them to do, and they do it. They are deployed in GlassFish 3.1 as well, but initially I was using JMS to listen to messages. I learned that JMS onMessage() is not threaded, so in order to facilitate handling of potentially hundreds of messages at once, I had to implement my own threading framework; basically I used the Executor class, roughly as in the sketch below. I could have used MDBs, but they are a lot more heavyweight than I needed, as the code within onMessage was not using any of the container services.
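    For anyone curious, the hand-rolled threading looks roughly like this - a minimal sketch using a plain JMS MessageListener and java.util.concurrent (the class name and pool size are just illustrative):
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    public class DispatchingListener implements MessageListener {
        // Fixed pool so hundreds of queued messages don't each get a thread.
        private final ExecutorService pool = Executors.newFixedThreadPool(16);

        public void onMessage(final Message message) {
            // onMessage is delivered serially per session, so hand the real
            // work to the pool and return to the session immediately.
            // (Caveat: with AUTO_ACKNOWLEDGE the message is acked when
            // onMessage returns, so a failure in the pooled task won't
            // trigger redelivery.)
            pool.execute(new Runnable() {
                public void run() {
                    handle(message);
                }
            });
        }

        private void handle(Message message) {
            // ... unpack the MapMessage and do the actual work here ...
        }
    }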
    We ran into other issues, such as deploying our app in a distributed architecture in a cloud like EC2 being painful at best. Currently the cloud services we found don't support multicast, so the nice "discover" feature for clustering JMS and other applications wasn't going to work. For some odd reason there seems to be little info on building out a scalable JEE application in the cloud. Even the EC2 techs, and RackSpace and two others, had nobody that understood how to do it.
    So in light of this, plus the fact that the data we were sending via JMS was a number of different types that all had to be together in a group to be processed, I started looking at using REST. Java/Jersey (JAX-RS) is so easy to implement and has thus far had wide industry adoption. The fact that our API is already using it on the front end meant I could re-use some of the representations on the back end servers, while a few had to be modified, as our public API was not quite needed in full on the back end. Replacing JMS took about a day or so to put the "onMessage" handler into a REST form on the back end servers (see the sketch below). Being able to submit an object (via JAXB) from the front servers to the back servers was much nicer to work with than building up a MapMessage object full of Map objects to contain the variety of data elements we needed to send as a group to our back end servers. Since it goes as XML, I am looking at using gzip as well, which should compress it by about 90% or so, making it use much less bandwidth and thus be faster. I don't know how JMS handles large messages. We were using the HornetQ server and client.
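    To give a feel for it, the REST replacement for the onMessage handler is basically one JAX-RS resource method - again only a rough sketch, with an invented payload class standing in for what used to ride in the MapMessage:
    import javax.ws.rs.Consumes;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;
    import javax.xml.bind.annotation.XmlRootElement;

    @XmlRootElement
    class WorkUnit {
        // The data elements that used to be packed into the MapMessage.
        public String jobId;
        public String payload;
    }

    @Path("/work")
    public class WorkResource {

        @POST
        @Consumes(MediaType.APPLICATION_XML)   // JAXB unmarshals the XML body
        public Response submit(WorkUnit unit) {
            process(unit);   // same worker logic the JMS listener used to call
            return Response.status(Response.Status.ACCEPTED).build();   // 202
        }

        private void process(WorkUnit unit) {
            // ... do the actual work ...
        }
    }
    The front end then just POSTs the JAXB-annotated object over HTTP, and gzip can be layered on top with standard HTTP content encoding.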
    So I am curious what anyone thinks, especially anyone that is knowledgeable with JMS and may understand REST as well. What benefits of JMS do we lose out on? Mind you, we were using a single queue and not broadcasting messages; we wanted to make sure that one and only one end server got the message and handled it.
    Thanks..look forward to anyone's thoughts on this.

    851827 wrote:
    Thank you for the reply. One of the main reasons to switch to REST was JMS is strongly tied to Java. While I believe it can work with other message brokers that other platforms/languages can also use, we didn't want to spend more time researching all those paths. REST is very simple, works very well and is easy to implement in almost any language and platform. Our architecture is basically a front end REST API consumed by clients, and the back end servers are more like worker threads. We apply a set of rules, validations, and such on the front end, then send the work to be done to the back end. We could do it all in one server tier, but we also want to allow other 3rd parties to implement the "worker" server pieces in their own domains with their own language/platform of choice. Now, with this model, they simply provide a URL to send some REST calls to, and send some REST calls back to our servers.

    Well, this sounds like one of those requirements which might make JMS not a good fit. As ejp mentioned, message brokers usually have bindings in multiple languages, so JMS does not necessarily restrict you from using other languages/platforms as the worker nodes. Using a REST-based API certainly makes that more simple, though.

    851827 wrote:
    As for load balancing, I am not entirely sure how GlassFish or JBoss does it. Last time I did anything with scaling, it involved load balancers in front of servers that were session/cookie aware for stateful needs, and could send requests to appropriate servers in a cluster, round robin or based on some load factor on each server. If you're saying that JBoss and/or GlassFish no longer need that, then how is it done? I read up on HornetQ, where a request sent to one IP/HornetQ server could "discover" other servers in a cluster and balance the load by sending requests to other HornetQ servers. I assume this is how the JEE containers are now doing it? The problem with that to me is you have one server that is loaded with all incoming traffic and then has to resend it on to other servers in the cluster. With enough load, it seems that the GlassFish or JBoss server becomes a load balancer and stops doing what it was designed to do - be a JEE container. I don't recall now if load balancing is in the spec or not. I would think it would not be required to be part of a container, though, including session replication and such? Is that part of the spec now?

    You are confusing many different types of scaling; different layers of the JEE stack scale in different ways. You usually scale/load balance at the web layer by putting a load balancer in front of your servers. At the EJB layer, however, you don't necessarily need that: in JBoss, the client-side stub for invoking remote EJBs in a cluster will actually include the addresses of all the boxes and do some sort of work distribution itself, so no given EJB server would be receiving all the incoming load. For JMS, again, there are various points of work to consider. You have the message broker itself, which is scaled/load balanced in whatever fashion it supports (I don't know many details on actual message broker impls). But for the MDBs themselves, each JEE server is pretty independent: each JEE server in the cluster will start a pool of MDBs and set up a connection to the relevant queue. Then the incoming messages will be distributed to the various servers and MDBs accordingly. Again, no single box will be more loaded than any other.
    Load balancing/clustering is not part of the JEE "spec", but it is one of the many features that a decent JEE server will handle for you. The point of JEE was to specify patterns for doing work which, if followed, allow the app server to do all the "hard" parts. Some of those features are required (transactions, authentication, etc.), and some are not (clustering, load balancing, other robustness features).

    851827 wrote:
    I still would think dedicated load balancers, whether physical hardware or virtual software running in a cloud/VM setup, would be a better solution for handling load to different tiers?

    Like I said, that depends on the tier. It makes sense in some situations, not others. (For one thing, load balancers tend to be HTTP-based, so they don't work so well for non-HTTP protocols.)
