A suggestion for handling optdepends.

[UPDATE]
I've rewritten this post to present the idea more clearly.
[/UPDATE]
I've submitted a feature request: http://bugs.archlinux.org/task/12708
If you like this idea, please express your support there too.
The Current Situation
The pacman database contains a file named "depends" in each package's directory which specifies the package's depends in the following format:
%DEPENDS%
foo
bar
this
that
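To make the format concrete, here is a minimal sketch of a parser for such a file (illustrative Python only; pacman itself does this in C, and the example path is just an assumption about the local database layout):

def parse_depends(path):
    """Parse a pacman-style depends file into {section: [entries]}."""
    sections = {}
    current = None
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            if line.startswith("%") and line.endswith("%"):
                current = line.strip("%")        # e.g. "DEPENDS"
                sections[current] = []
            elif current is not None:
                sections[current].append(line)
    return sections

# e.g. parse_depends("/var/lib/pacman/local/gimp-2.6.4-1/depends")
# -> {"DEPENDS": ["gtk2>=2.14.4", "lcms>=1.17", ...], ...}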
Pacman reads this file and creates an internal representation of the list for the package, which it uses during the sync operation to handle dependencies. Each package may also list optional dependencies, which provide further functionality but are not required to use the package. Let's take gimp as an example:
pacman -Si gimp
Depends On : gtk2>=2.14.4 lcms>=1.17 libxpm>=3.5.7 libwmf>=0.2.8.4 libxmu>=1.0.4 librsvg>=2.22.3 libmng>=1.0.10 dbus-glib>=0.76 libexif>=0.6.16 pygtk>=2.13.0 desktop-file-utils gegl>=0.0.22 curl
Optional Deps : gutenprint: for sophisticated printing only as gimp has built-in cups print support
libwebkit: for the help browser
poppler-glib: for pdf support
hal: for Linux input event controller module
alsa-lib: for MIDI event controller module
If you want to install libwebkit to use gimp's help browser, you have 2 choices:
pacman -S libwebkit
pacman -S --asdeps libwebkit
With the first choice, libwebkit will clutter the list of explicitly installed packages ("pacman -Qet"). With the second choice, libwebkit will be considered an orphan and will be listed in "pacman -Qdt", which not only means it clutters that list but it also means that you can no longer purge orphans with "pacman -Rs $(pacman -Qqdt)".
In both cases, when you uninstall gimp, you must remember to uninstall libwebkit too, because pacman doesn't know that you installed it as a dependency for gimp.
This may not be a problem for one package, but it becomes one as the number of optdepends you have installed increases.
My Suggestion
Create an optdepends database in /var/lib/pacman/optdepends/ that follows the same format as the current depends files:
%OPTDEPENDS%
foo
bar
this
that
Add a function to pacman to check if a package has an entry in the optdepends database.
During a sync operation, treat any optdepends specified in the optdepends database as if they had been specified in the depends file.
Add a "--getoptdeps" flag to pacman to enable interactive installation of optdepends for a given package that follows the same pattern as the current group installation dialogue.
Store the results of this dialogue in the optdepends database.
Let's take gimp as an example again. You know that gimp has an optdepend that you want, so you do this:
pacman -S --getoptdeps gimp
gimp package found, checking optdepends
:: gimp has the following optdepends:
gutenprint: for sophisticated printing only as gimp has built-in cups print support
libwebkit: for the help browser
poppler-glib: for pdf support
hal: for Linux input event controller module
alsa-lib: for MIDI event controller module
:: Install whole content? [y/N] n
:: Install gutenprint as optdepend for gimp? [y/N] n
:: Install libwebkit as optdepend for gimp? [y/N] y
:: Install poppler-glib as optdepend for gimp? [y/N] n
:: Install hal as optdepend for gimp? [y/N] n
:: Install alsa-lib as optdepend for gimp? [y/N] n
Retrieving libwebkit...
Libwebkit will now be handled exactly as if it were a true dependency of gimp. It is neither explicitly installed nor an orphan. It will get removed with gimp unless it's a depend or optdepend for another package.
/var/lib/pacman/optdepends/gimp/optdepends now looks like this:
%OPTDEPENDS%
libwebkit
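A rough sketch of what the lookup during a sync operation might look like (hedged Python pseudocode of the idea, not pacman's actual C implementation; the database path is simply the proposed one):

import os

OPTDEPENDS_DB = "/var/lib/pacman/optdepends"    # proposed location

def registered_optdepends(pkgname):
    """Return the optdepends recorded for a package, or [] if none."""
    path = os.path.join(OPTDEPENDS_DB, pkgname, "optdepends")
    if not os.path.isfile(path):
        return []
    with open(path) as f:
        lines = [l.strip() for l in f if l.strip()]
    return [l for l in lines if not l.startswith("%")]    # skip %OPTDEPENDS% header

def effective_depends(pkgname, depends):
    """During a sync, treat recorded optdepends as ordinary depends."""
    return list(depends) + registered_optdepends(pkgname)

# effective_depends("gimp", ["gtk2>=2.14.4", "lcms>=1.17"]) would now also pull
# in libwebkit once the dialogue above has written it to the database.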
The Benefits of This Method
Default pacman behavior remains unchanged.
Most of the code is already in place (depends file parser, package selection dialogue, dependency handling during the sync operation).
The existing databases (local, sync) would not require any changes.
The only extra overhead would be checking if a package has an entry in the optdepends database.
Users can define their own optional dependencies by adding them to the optdepends database (manually or with provided tools; see the sketch after this list).
This opens the doors for metapackages to replace groups.
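Such a tool could be as small as this (a hypothetical helper written for this post, not an existing utility; it reuses the proposed database layout from above):

import os

OPTDEPENDS_DB = "/var/lib/pacman/optdepends"    # proposed location

def add_optdepend(pkgname, dep):
    """Record dep as a user-chosen optdepend of pkgname."""
    pkgdir = os.path.join(OPTDEPENDS_DB, pkgname)
    os.makedirs(pkgdir, exist_ok=True)
    path = os.path.join(pkgdir, "optdepends")
    entries = []
    if os.path.isfile(path):
        with open(path) as f:
            entries = [l.strip() for l in f
                       if l.strip() and not l.startswith("%")]
    if dep not in entries:
        entries.append(dep)
    with open(path, "w") as f:
        f.write("%OPTDEPENDS%\n" + "\n".join(entries) + "\n")

# add_optdepend("gimp", "gutenprint")   # pacman would then treat gutenprint
#                                       # as a dependency of gimp on the next sync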
About Metapackages
A metapackage is a package that contains nothing itself but organizes other packages. For an example of how these work on Arch, take a look at metapax.
Every package group could be converted to a metapackage if this suggestion were implemented. To understand the benefits of using metapackages instead of groups, we need to consider how groups currently work.
When you install gnome, this is what happens:
pacman -S gnome
gnome package not found, searching for group...
:: group gnome (including ignored packages):
epiphany gnome-applets gnome-backgrounds gnome-control-center gnome-desktop gnome-icon-theme gnome-media gnome-mime-data gnome-mount gnome-panel gnome-python gnome-screensaver gnome-session gnome-settings-daemon
gnome-themes gnome2-user-docs libgail-gnome metacity nautilus notification-daemon yelp
:: Install whole content? [Y/n] n
:: Install epiphany from group gnome? [Y/n] y
:: Install gnome-applets from group gnome? [Y/n] y
:: Install gnome-backgrounds from group gnome? [Y/n] y
:: Install gnome-control-center from group gnome? [Y/n] y
:: Install gnome-desktop from group gnome? [Y/n] y
Most users will install all of the packages, others won't. In either case, once the packages are on your system, pacman has no concept of the gnome "group". Each package is effectively independent of the gnome group. If a new package is added to the gnome group, for example "gnome-somenewpackage", pacman will not install it during your next update. It won't even ask you about it or tell you that there is a new package. There have been questions on this forum from users wondering why new gnome packages weren't installed automatically. This applies to all groups... kde, xorg, xfce, etc.
If we instead replaced groups with metapackages, each package in the group would become an optdepend of the metapackage. With my suggestion, this would lead to exactly the same dialogue as above. Each package in a metapackage would remain optional just as packages in groups currently are. The advantage would be that if "gnome-somenewpackage" is added to the gnome metapackage, it would be possible to inform the user during an update and prompt for installation.
Here's the discussion on flyspray about groups vs metapackages: http://bugs.archlinux.org/task/8242
Notes on Metapackages
The only complicated parts of handling metapackages are the following:
If a package is a metapackage, pacman should detect this during installation and jump straight to the optdepends dialogue so that it behaves exactly as groups do.
During a metapackage update, there should be a way to inform the user of new optdepends, but this might be as simple as including an upgrade message in the package install file.
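For the second point, the check itself is trivial; a minimal sketch (illustrative Python, not pacman code):

def new_optdepends(old_entries, new_entries):
    """Optdepends present in the new metapackage version but not the old one."""
    return [e for e in new_entries if e not in old_entries]

# old = ["epiphany", "gnome-applets", "gnome-backgrounds"]
# new = old + ["gnome-somenewpackage"]
# new_optdepends(old, new) -> ["gnome-somenewpackage"]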
Last edited by Xyne (2009-01-13 16:20:52)

No, this wouldn't affect a package's "true" dependencies in any way.
Packages now have 2 types of dependencies, "depends" and "optdepends". "depends" are installed with the package and are required for the package to run. "optdepends" just display a message during installation to the effect of "optional dependencies for this package: foo - for foo support, bar - for bar support, baz - for web access and printing". "gimp" is an example of a package with optional dependencies.
As it is right now, optional dependencies are nothing more than installation messages. If you decide to install optional dependencies for a given package, they are completely independent of the target package. Let me give a concrete example:
pacman -Si gimp
Depends On : gtk2>=2.14.4 lcms>=1.17 libxpm>=3.5.7 libwmf>=0.2.8.4 libxmu>=1.0.4 librsvg>=2.22.3 libmng>=1.0.10 dbus-glib>=0.76 libexif>=0.6.16 pygtk>=2.13.0 desktop-file-utils gegl>=0.0.22 curl
Optional Deps : gutenprint: for sophisticated printing only as gimp has built-in cups print support
libwebkit: for the help browser
poppler-glib: for pdf support
hal: for Linux input event controller module
OK, I want to install gimp, and I want libwebkit so that I can use gimp's help browser. I have 2 options right now:
Option 1:
pacman -S gimp libwebkit
libwebkit is now installed as an explicit package.
Option 2:
pacman -S gimp
pacman -S --asdeps libwebkit
libwebkit is now installed as a dependency.
With option 1, libwebkit clutters my list of explicitly installed packages (pacman -Qet). With option 2, it is considered an orphan by pacman and would be removed with an orphan purge ("pacman -Rsn $(pacman -Qqdt)"). In both cases, if I remove gimp, libwebkit stays on my system even though I only want it for gimp. It will not be removed with "pacman -Rs gimp" because pacman has no idea that it has anything to do with gimp.
My suggestion therefore is to create a way for pacman to treat selected optdepends as depends. Given the gimp example, what this would mean is that when the user runs "pacman -S gimp", pacman would present a dialogue as follows:
gimp has the following optional dependencies:
gutenprint: for sophisticated printing only as gimp has built-in cups print support
libwebkit: for the help browser
poppler-glib: for pdf support
hal: for Linux input event controller module
Would you like to install these optional dependencies? [y/N] y
Install all optional dependencies? [y/N] n
Install gutenprint? [y/N] n
Install libwebkit? [y/N] y
Install poppler-glib? [y/N] n
Install hal? [y/N] n
retrieving libwebkit...
libwebkit would now be treated as if it had been specified in gimp's depends array. When you uninstall gimp, it would be removed along with gimp, just like gimp's other dependencies.
There would also be tools to add optional dependencies to a package later (either with pacman or something else... I'll gladly contribute something to do this), so if you want to add gutenprint to gimp later, you could do so and then let pacman grab it as a dependency of gimp.
Again, this has nothing to do with "true" dependencies of packages. This is just a fix for the kludge now known as "optdepends".
First, let's look at what happens when you install the gnome group:
pacman -S gnome
gnome package not found, searching for group...
:: group gnome (including ignored packages):
epiphany gnome-applets gnome-backgrounds gnome-control-center gnome-desktop gnome-icon-theme gnome-media gnome-mime-data gnome-mount gnome-panel gnome-python gnome-screensaver gnome-session gnome-settings-daemon
gnome-themes gnome2-user-docs libgail-gnome metacity nautilus notification-daemon yelp
:: Install whole content? [Y/n] n
:: Install epiphany from group gnome? [Y/n] n
:: Install gnome-applets from group gnome? [Y/n] y
:: Install gnome-backgrounds from group gnome? [Y/n] y
:: Install gnome-control-center from group gnome? [Y/n] y
:: Install gnome-desktop from group gnome? [Y/n] y
:: Install gnome-icon-theme from group gnome? [Y/n] y
:: Install gnome-media from group gnome? [Y/n] n
:: Install gnome-mime-data from group gnome? [Y/n]
After the installation, each of those packages is treated as an independently installed package. The gnome "group" only exists while you select packages during the initial installation. There have been threads on this forum posted by users who didn't understand why "pacman -Syu" failed to retrieve packages that had been added to the gnome "group". That's because pacman simply updates the existing packages and doesn't know about groups once they're on the system. If "gnome-some-new-package" is added, you either have to run "pacman -S gnome" again and re-install all the packages (or step through the dialogue until you reach the new package), or you have to explicitly install any new packages directly. You also need to find out for yourself when a new package has been added to gnome, because there is no way for pacman to tell you (I posted a script somewhere to check if you have all packages in a group, forgot where though).
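The idea of that script is simple enough to sketch (assuming pacman's -Sqg and -Qq output; this is a stand-in written for this post, not the original script):

import subprocess

def missing_from_group(group):
    """List packages in a sync group that are not currently installed."""
    members = subprocess.run(["pacman", "-Sqg", group],
                             capture_output=True, text=True).stdout.split()
    installed = set(subprocess.run(["pacman", "-Qq"],
                                   capture_output=True, text=True).stdout.split())
    return [p for p in members if p not in installed]

# print(missing_from_group("gnome"))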
The idea of a metapackage is that it is an empty package that simply specifies other packages as dependencies (i.e. it contains no files, just package information). That's what metapax creates (http://bbs.archlinux.org/viewtopic.php?id=53788). If a gnome metapackage is created with metapax, the user can install it and get all of the packages in gnome. If a new package is added to the gnome metapackage, it will be retrieved on the next sync update. The user doesn't need to regularly check that he has everything in gnome because the metapackage handles all the packages in gnome.
The problem with this approach is that everything it specifies is a "depends", so you have to include everything. With "optdepends" though, you would get a similar dialogue to the one when installing a group (as my example above for gimp), but the installed metapackage would have all of the advantages of a package when syncing and uninstalling.
Users could also create their own metapackages. Let's say you would like to create a custom DE from existing packages so that you can quickly install a similar desktop on different systems. You could create a metapackage with your window manager, text editor, image viewer, video player, etc. You could then simply install that package on different machines and be presented with the choice of which components you'd like to install. You could distribute this over your network with a local user repository. If you later want to add another package to it, that package could be optionally included on the different machines during the next update.
Last edited by Xyne (2009-01-11 18:03:10)

Similar Messages

  • Suggestions For Handling Bulk Updates Without Blocking Local User Updates

    Hi,
    This is a request for general implementation suggestions.
We have a CRM database that is used by a call center application to allow reps to update customer info during business hours. Outside of business hours we receive data feeds from another source that are bulk uploaded into the database to refresh the data. This has been working fine until now, but we are expanding the use of the app to offices in other countries and are beginning to encounter more blocking during the bulk upload, because the app is now being used outside our local business hours due to the time difference.
    It seems this would be a common problem, but I haven't been able to identify a good source of information on methods to overcome this. 
    What suggestions do people have to complete bulk loads while still allowing updates by local users?
    Ideas I have been considering include duplicating the database and performing merge replication, using service broker to queue updates during the bulk load, using snapshot isolation or isolation levels with row versioning....
    Any ideas would be greatly appreciated.
    Thanks,
    Reinis

    I have considered trying to break the update into chunks, but my fear as you said is it will take a lot longer.
    Quite a few years ago, I rewrote a process in our system to make it a set-based update for better performance. But I heard as late as today from our customer with the biggest volumes that they are still running the old process, which updates rows one by one, because doing it all at once blocks other operations. (Which admittedly is due to other shortcomings in the system.)
    Anyway, I would recommend you to look into that, and in particular make the chunk size configurable. Maybe you can find a sweet spot where blocking is not a problem but the chunks are still big enough to be efficient.
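    To illustrate the chunking pattern described above, here is a hedged sketch (Python with sqlite3 purely to keep the example self-contained; the customers table and refreshed flag are invented, and on SQL Server you would use TOP (n) batches instead of LIMIT):

    import sqlite3

    def chunked_update(conn, chunk_size=1000):
        """Apply a bulk update in small batches so each transaction stays
        short and blocks concurrent readers/writers only briefly."""
        while True:
            cur = conn.execute(
                "UPDATE customers SET refreshed = 1 "
                "WHERE id IN (SELECT id FROM customers WHERE refreshed = 0 LIMIT ?)",
                (chunk_size,))
            conn.commit()                 # release locks between chunks
            if cur.rowcount == 0:         # nothing left to update
                break

    # conn = sqlite3.connect("crm.db")
    # chunked_update(conn, chunk_size=5000)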
    Both of those resolutions are feasible and I certainly know there will be significant changes we may have to undertake.  I was just thinking that this must be a really common issue now with the global reach of data and there must be people who have
    handled it in different ways.
    It's not really a simple problem, and the solution is likely to depend on the current architecure you have. What fits in one shop, may not fit in another. And most of all, one solution may be a lot less costly to implement than another.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Drive setup suggestion for multiple users editing simultaneously?

    At work here, a city college, not a professional company or broadcast studio, so resources are limited, we often have three people editing HDV content simultaneously in Final Cut Pro.
    Keeping the content on our multiple backup servers, there's simply too much network traffic to do this smoothly.
    Instead of keeping projects locally spread across multiple machines, I would like one centralized place for everything, for the Macs to access directly over gigabit or something else.
    So, what kind of setup do you guys suggest for this?
    The machines here are two quad-core G5s (no RAID or fiber-channel right now), and a Core2Duo iMac, F400 only.
    Again, it'd need to be able to handle three HDV projects going on simultaneously without skipping due to having to seek back and forth all over the drive.
    Thanks.

    Yes, an XSan system would perfectly fit the bill for what you want to do, but an XSAN is not a cheap solution. When it is all said and done, it will cost you tens of thousands of dollars.
    The best, cheap solution would be to use Firewire drives. I would not duplicate a project onto three drives, because you will then always be trying to figure out which version is the most current. Instead, keep all of your project, capture scratch and render files on the firewire drives. Then move the drive to whichever computer you want to do the editing on.
    Properly log & capture all your footage, then archive all your project files, because Firewire hard drives will fail over time, losing all the info on the discs. I did say this was the cheap solution. "Cheap" does have its costs…

  • Suggestions for setting up external storage for video editing please?

    I am just starting up as a one-man video-editing business, using a 24 inch iMac running Snow Leopard, with Final Cut Studio. I have realised I'll need an external hard drive for HD footage, and I also need to get some back-up solution in place. Looking for speedy i/o, I would like to connect via the ethernet port (if only eSata was included in the iMac, eh!)
    I've been planning to get a Drobo, but looking around the forums I see that people say it's too slow to use as a working drive to keep all my source footage, so I've been looking at the G-Tech 4 TB, as it says it is designed for media-content production. Does anyone know if I could use two of the drives for working from and two as backup? Or would it be better to keep backup entirely separate, and get a Drobo for the G-Tech to back up to?
    But I am also wondering whether a Mac Mini could be a worthwhile addition to this set up? I find myself sitting around waiting for rendering to complete on clips in my timeline (not to mention exporting to Quicktime conversion!), and I wondered if I put a Mac mini with Xserve installed (Apple store offers this with two 500gb hard drives inside), maybe I could farm the rendering out to the mini while I get on with editing on my iMac? That would require two installations of FCP, which I thought was allowed, but just today in a forum I saw that one would have to be a laptop... anyone have any suggestions for getting rendering done without stopping FCP from doing other things simultaneously?
    Also I don't know if that arrangement is even feasible... I see all these things like Xsan and Artbox... as a one workstation editing suite, does FCP handle all the dataflows for external working drive and external back ups okay without having to introduce more controllers?
    And can anyone explain to me how I could set up an ethernet connection to an external hard drive, or does that require the extra controllers mentioned above? I've seen it said that you can do it via ethernet, but haven't seen how you can actually go about doing it.
    Thanks for overlooking my newbie quality, any answers received with humble gratitude!
    Cheers, Syd

    Hi there,
    as NLEdit said, there will be loads of answers to this.
    IMO I'd avoid Drobo like the plague. G-Tech drives have served me incredibly well working on a huge variety of broadcast projects (just over the water from you in Bristol). I've had no problems with FW800 when using DVCPRO HD; ProRes is OK, sometimes a little slow with multiple layers, and of course it eats up storage space. So I'd go for two 4 TB drives and keep the backup one in a different location.
    one tip that has saved me countless times is to format them as follows:-
    mac os extended (not journalled)
    create 2 partitions
    partition 1 - make this small (1gig) and call it "drive a - do not use"
    partition 2 - the rest of available storage and call "drive a"
    This is because the boot sector of the drive is within the first partition; with this method, if it goes down, it can be re-erased without losing all your footage.
    If you call your backup drive the exact same name and have the exact same folder structure, you will not have to relink if you get a problem.
    Ignore getting a Mac mini for rendering; it won't help at all in FCP. Instead I would make every attempt to buy a Mac Pro rather than an iMac: much more expansion/speed possibility and a more robust solution.
    best of luck
    Andy

  • Need a Suggestion For implementing the Digital Signature For the Documents

    Hi,
    Currently I am working in a Document Management System. I need a Good Suggestion for how to implement a Digital Signature For the Documents.
    Thanks in Advance
    Sabarish V

    Hmm, if you are not using Oracle Payroll, what are you using for payroll? I am wondering why you could not use your payroll system, whatever it is, to handle this reimbursement program.
    Well, you may want to talk to Oracle support about how to handle this in Oracle iExpense. You can certainly handle advances for Expense Reports. You would then apply the advance to the expense report items. The catch is I don't think you can stop expense item entry after the advance is satisfied. You would have to set up a workflow process of some kind to have the expense reports reviewed and only approve expenses that are applied to the advance, is what I am thinking. Not your ideal solution, but something to think about. It could be that the Oracle folks know of a sneaky way to handle this. What you are trying to do is unusual. Employee advances are common, but the idea of not being able to exceed the advance amount is what is unusual here. Normally you will accept any expenses over the advance amount and reimburse the employee for those extra amounts not advanced.
    Good luck.
    John Dickey

  • External Number Ranges for Handling Units in COWBPACK

    Dear Friends,
    I need to activate external number range for handling units in COWBPACK transaction.
    When I run COWBPACK, I don't get any field to enter Handling Unit Number.
    Can anybody suggest the way out.
    Regards,
    Harsh

    OK

  • Suggestion to handle one-time-deal customer x1000

    We have thousands of old items that we would like to sell on ebay. All of these items are identical. By selling on ebay, we expect that there will be hundreds-thousands of new "one-time-deal customers".
    We don't want to create a new BP for one-time deal customers (as it will waste time), but at the same time, we'd like to know which item is sold to whom and record them. So just in case a customer called and inquired about the shipment, we can still provide the accurate information.
    Is there any efficient way or suggestion to handle this kind of transaction?
    Edited by: Darius Heydarian on Mar 28, 2008 11:27 AM

    First you define a customer under the Business Partner menu with a code such as EbayCust and name XXX.
    When you enter the invoice into SBO and choose EbayCust as the customer, the system fills in the customer name as XXX. You can overwrite it with the actual name, and if you leave the field with the Ctrl+Tab combination, the system will keep the changed name in the field.

  • Feedback on Captivate as alternative to Authorware, or suggestions for better alternative?

    Hi -
    I've been using Authorware for a number of years to create questionnaires for research studies...and I'm glad it still works on Windows 7 and we're going to test our applications on a Windows-based tablet soon.
    But I know I will need to find a new solution for a development tool as technology moves forward.
    I've seen suggestions to head towards Captivate.  We purchased a copy and I've been testing it.
    I'm learning that Captivate does not support arrays, and will only support a limited number of user-defined variables (although I haven't received a clear answer as to the exact number....but the estimates I was given were really small).  In Authorware, we've created systems that use 3000+ variables using arrays and have had no problems.
    From the Captivate forum, I'm learning that it appears that there is not a simple way to export data to a text file on the computer (like Authorware's WriteExtFile and AppendExtFile).  I'm told I need to have an internet connection, and set up a LMS (learning management system).
    Our projects are research studies, so we're not grading or scoring...we just need to capture the answers that people provide and write that data to a text file.
    Two different people suggested that SurveyMonkey would be the better option.
    We would prefer a solution that allows for the creation of a standalone executable file with a nice graphical user interface, audio, easy data handling, and not be required to have an internet connection, etc.
    As Authorware users, would any of you have a suggestion for an Adobe (or other) product?  Or should I just learn Captivate and grow with it and hope some better features are added in the future?
    Thanks!
    Scott

    Yeah, it'll be a while, I think, before CP is anywhere close to the capabilities of Authorware. If you want the 'ease' of Captivate, you likely have to rethink how your product will work. Overall, filesystem access is going to be a challenge - very little software seems to allow that these days, and understandably so considering security issues.
    Lectora may be the best move for you, all that said. I'm not positive they allow filesystem access (write text files) but I think so. It's also not Flash-based, so if you do want to move to mobile delivery without Flash plugin support issues, well, there ya go.
    On the other hand, I believe Adobe's AIR product (method of bundling a Flash application for local playback) may now allow filesystem access, so if you can pick up Flash or Flex, publishing to an AIR application may be an option.
    OR you could try to get a Flash Widget made that you insert into CP to handle the text file writing, though that doesn't get you around the custom variable limit...
    ...BUT do note that CP can call JS functions in the parent HTML window, so you might be able to do what you want, as far as variables go, by creating the functions, values, etc. in JS then have CP call those functions...?
    All that said, Xerte may be a good option for you. It's got quite an underground (i.e. quiet) following:
    http://www.nottingham.ac.uk/xerte/
    It's designed to be an Authorware replacement in design and form, though I've not touched it in years so have no idea how far it's progressed (the support listserve is pretty active though).
    And it's free...!
    HTH
    Erik

  • Changing web-services.xml for handler

    I have been using the servicegen task to generate the .ear file for my webservice.
    Among other things, it took care of generating the web-services.xml file for me.
    I need to write a handler, and that requires changes in the web-services.xml.
    I generated the web-services.xml file the first time and made the handler-related changes, and it's working fine.
    But now, I lose the auto-generation facility (since I have hand-edited it for the handler changes). Any time I change the interface of the webservice, I would have to manually change the web-services.xml file to reflect the new interface.
    Is there a better way of doing this so that I can auto-generate the web-services.xml file and still make my handler changes? Is there anything in Ant to do it smartly, rather than doing it manually? I would imagine that anyone who writes a handler would run into a similar situation.
    thanks for help.
    John

    Here is an example of ejb and source2wsdd.
    "manoj cheenath" <[email protected]> wrote in message
    news:[email protected]...
    It works with EJBs too. You should use the ejbLink attribute
    to point to the EJB link. Something like:
    <source2wsdd
    javaSource="${sourcecode.for.the.ejb.interface}"
    ddFile="${webss.output.dir}/WEB-INF/web-services.xml"
    typesInfo="${webss.output.dir}/WEB-INF/classes/types.xml"
    serviceURI="${webss.service.url}"
    ejbLink="${webss.ejb.link}" >
    "John" <[email protected]> wrote in message
    news:[email protected]...
    Thanks for the response. But the source2wsdd task only works with java components, whereas we have EJBs. Any other clues/suggestions?
    Thks,
    - John.
    "manoj cheenath" <[email protected]> wrote:
    Here you go.
    regards,
    -manoj
    "John" <[email protected]> wrote in message
    news:[email protected]...
    Thanks for the response. I'd appreciate it if you could share the workaround in 7.0.
    I'm okay with it even if it's unofficial and might go away.
    thks,
    - John.
    "manoj cheenath" <[email protected]> wrote:
    Hi John,
    This is a known problem with WLS 7.0 and WLS 8.1. The DD file (web-services.xml) is much more expressive than the servicegen ant task, so one needs to modify the DD to use some features. But if one modifies the DD, then it is difficult to use the ant tasks again for iterative development.
    JSR 181 [1] and JSR 175 [2] try to address this problem by providing metadata (markup) in source code. Unfortunately these JSRs are in the early stages and will be completed around the JDK 1.5 timeframe. There is an internal implementation of a similar beast in WLS since 7.0 SP2, but it is not officially supported or documented, mainly because the above JSRs are supposed to address the iterative development problem in a standard way.
    So, if you cannot wait for the JSRs and don't mind using a non-standard implementation that may not be supported or may change in the next major release (~WLS 9.0), let me know. I can send you details.
    Regards,
    -manoj
    [1] http://www.jcp.org/en/jsr/detail?id=181
    [2] http://www.jcp.org/en/jsr/detail?id=175
    "John" <[email protected]> wrote in message
    news:[email protected]...
    I have been using the servicegen task to generate the .ear file
    for
    my
    webservice.
    Among other things, it took care of generating the
    web-services.xml
    file
    for me.
    I have a need to write a handler and thus it requires changes in
    the
    web-services.xml.
    I generated the web-services.xml file the first time and made the
    handler
    related
    changes and it's working fine.
    But now, I loose the auto-generation facility (since I have
    hand-edited
    it
    for
    handler changes). Any time, I change the interface of the
    webservice,
    I
    would
    have to manually change the web-services.xml file to reflect the
    new
    interface.
    Is there a better way of doing this so that I can auto-generate
    the
    webservices.xml
    file plus make my handler changes. Anything in the ant to do it
    smartly,
    rather
    than doing it manually. I would imagine that anyone who writes a
    handler
    would
    run into similar situation.
    thanks for help.
    John
    begin 666 handler.zip
    M4$L#!!0`" `(`#V&D2X````````````````)``0`345402U)3D8O_LH```,`
    M4$L'" `````"`````````%!+`P04``@`" `]AI$N````````````````% ``
    M`$U%5$$M24Y&+TU!3DE&15-4+DU&\TW,RTQ++2[1#4LM*L[,S[-2,-0SX.5R
    M+DI-+$E-T76J! D8ZQG&&Y@I: 27YBGX9B87Y1=7%I>DYA8K>.8EZVGR<O%R
    M`0!02P<(<UT-_$<```!'````4$L#!!0`" `(`/V"D2X````````````````4
    M````=VQS-S O<')O<&5R=&EE<RYT>'2%4$UKPS ,O0?R'P3==5-W&@1Z&AD4
    M!BL[]1;<1&T]',O(RMHR]M]G)RNL'6PZV4_O0]+L]L\JBQF,%80#B5J*"?I/
    MDRA*?8#."K7*<@)EX$'#H+ C3V*4.MA:E]TRLTG,!6H?<!2;$)QMC5KV,9L`
    M;T'W!)'DG>3NDC%JNPHW9'!(C":-^I9B(^J0LJUQ^-.O+*ZU8^[!'2*V[+=V
    MA_VIX]Y8?Z5+L1L3"09QOP::'DUJ+?:JH4)TW!JWYZC5PWQ^/^V5L6QVEF^$
    M#TD)=*2R^/XL'BM<">_$]/"4+X1+KR2>%.IC<"PDN*S7J^>7U_JN7M?9>#KM
    MN,S-Q_F>GSC!9=$Z2UZ;UID8*5Y0+EME\0502P<(3D:RZ!$!```/`@``4$L#
    M! H```````V&D2X````````````````.````=VQS-S O<V%M<&QE-"]02P,$
    M% `(``@`^X21+@```````````````!T```!W;',W,"]S86UP;&4T+V%P<&QI
    M8V%T:6]N+GAM;%6/36_", R&[TC\!Z^77M9X5#M,4P!MT&E,^T "A'9"(;6V
    MH#2IDI2/?[^*`BLW^_7[O+;Y<%]HV)+SRII^U&-W$0`9:7-E?OK18OZ2/$3#
    M0;?3[?";\==H_CW-0)2E5E*$&H'IXOE],H(X09Q5!CZ4=-8??*#"W\+$2(8X
    MGH_A+<TR>&IQ/98B9I\QQ+\AE(^(&[$5S%>&25O@)B7"/.0>6[M6O57*:C%N
    MSFE-:@& Y\J76AP2(PH:O)+6=FF=SF?DMDH2QZMY0Y"73I7'B",`2V=U#DM:
    MPS_6,AVIPN:5;A+J;D?K4]DT2>74P(NBU'3/=L)Q/(L7E[0FT#XDSMIPMG*\
    M4D_A>$[G>-G*\?KS/U!+!PBT=FQ0$0$``,,!``!02P,$% `(``@`RX61+@``
    M`````````````!<```!W;',W,"]S86UP;&4T+V)U:6QD+GAM;*5726_;.!2^
    M%^A_8(6>"DL"!CW:';33%I-!$Q1-!SD&M/1L,Z5%@:3B&(7_>Q\WB5J\-/4A
    M",FWO^\MFM=2/$"A246WL$@4W=8<WB:DA!5MN%XDRX;Q,GGW\L7+%X3,7Z4I
    M*<1V*RJ"C#5(S4"1-'UG7_W5GJP81V%9EG=$F7[2"<EC09$$54/!5JP@6A"]
    M87@!\I$5,!;M[73/"7FDO,'SO\"YN!.2E[?A)3_%>%_3X@===P)THX5DE&=M
    M!$[S[ZAL>7LLQYA WA><*@6JY7O]4S2Z;O1]R>0AO_OT(;VZ^9RW5)TTJC>$
    ME8NDX PJG5F*S-PFUD9/`ARV^$S,_T:XHPY:#\&C(]2Q*:]_>C</J5?Y8-P]
    MR?] 'VED6:MNGINC=<4>-95K"'!SX$*PU5"5RC@(M)HIT$T]0YC5"*/,!6^V
    M@Z4W*K-<,V?+J1^*Y6+?"G*N1''MF6)5&U-4(5FMF:@628E.:B!KJ$!2#251
    MNEFM$A+BX-\Q9H,(1K'JTQS)R3QWMDQ:9L,QL*R0@ 81I]((QQ)&_!K8!,7;
    M'W@=]/81&-O7(WN.>?T\)<3)[5OK2(B72RJ $J.)301Q1!PC4:S$<B1>!VHQ
    MB"J(DH4U+DL(JPK>H.!%\B8SCPF!IW#C+,_?)"TL\%H?\SXXGUL=YWT<@B^9
    M<M*E!$G;YA7>A9R1G2JY[8NS'DD`9><VQ4ZD]S60X(FQ\1^,H*B04@5_HB9V
    MR+J";?TW[G\<`7/<90*]EW5C_1VK")T&#;2LOJA#J?5:`A<%=3&YH,,]2P)G
    MRQ/<YUJ22_W0C7D>(A]R,5>BD07\A:DK0YPN3SFA5>FS+L66.%D6`2%'QKY;
    M>QU%_."A'?25G^TLG8X#:DX]G\J>MCQ(-EZHJVHESJ0@MX0QIY?V_[>K19*/
    M8;5!ISA()WICAFZZ,U,W]0\IPQ<K+]38)%[^&"Y_A);?!<L(*_,\0D8WVB[I
    MDJ'>HS*]M$\Z5M\G6XRZ6QQ1IQ!*)WJ.@Z41;$ :$NS>_J.CQC&]$W@N</2T
    MKCESN1@PF7WID$''@>>;>./,=MW;F5848AAA_>MO<30*O@JI#?5[=0UZ(TK'
    MJ64#W7YP>@2%@>/+-9HVHR$ZT3J)A-7T1M=-YG8XG0676W1&RXNY;).^W./B
    M7N]9M3:Q-WOV^07*_J*DMHO&/NG-:8FM3(8^U<NX79.75,%X%/4"?<DFU=M4
    M8FA>?_K^WA1Z3"L>P7^%3%FDQ3%[7)HP4)X[\MZUM2G6@073<;FD.+J<G(O9
    M23PLI=@I&.#!71*-0,4*6V-X<$;%WUIQ3N$)"F+^-)HNG?V.7_;PC$H)9Q5$
    M2U8C^="U>(Y$\#;BSZ]>LAENY7@SL3:98G&-LSK=!;)KRJKG%N5S?/[[[O;C
    MEV%=CQV?Y_Y3'$^_`%!+!PCTFH+3( 0``)8/``!02P,$"@``````$8:1+@``
    M`````````````!4```!W;',W,"]S86UP;&4T+V-L:65N="]02P,$% `(``@`
    M[X*1+@```````````````"<```!W;',W,"]S86UP;&4T+V-L:65N="]#;&EE
    M;G1(86YD;&5R+FIA=F'M55%/VS 0?D?B/YSZE*C,8M+>NCY A0;3QAA%TZ1I
    M#ZY[:;TY=F8[4(3ZWV?'3NN4%/JPQZ$JQ+[OOCO?=^=4E/VF"P1;6Z4Y%<30
    MLA+XCC#!4=K1\='Q$2\KI2W\HO>4U)8+<F514X??M:[(JA1$TA)-11F2K]?N
    MM1^D*T8^GGV_O9E<K!A6EBLYV@-;4CD7J,D'E*@YNPS+U] 1=B4+]1KT,QKC
    M:C!1TN+*OH8VBE9D^N7LYIE;CV,#CE7H-WJF<S5_?!%P(; ,:KR >:F.NTGO
    MQ=SMJ4 XB)H'.:MZ)C@#)J@Q,&DZ)=8;G#_*N8&N6D_>"Z#2_)Y:A$0<8$H6
    M?#&*@$!\K_@<N.0VZX%"_N2A`';)#8E[XRT/P+I#UG3ACY^P0'N)=([:9"V#
    M1EMK&5U)"NCPQ*R;K(1:-(I+:S)(M $,_T]<WA8$WJ/8).H3:.W-^QAD+<0H
    M6+N&N/+)7&P-,2& Z:.Q6!)56^+2DE;(S!_LDX^7-5%S&,)@.'#/E#B/!09H
    MAQ?8DHNY1MD-.O&[,7*H0W![</L(6>ODQL%<.ZVS/(=X2G=.UQ\@_6,,F5_D
    M&[ALL!LV`%YD`<JEL50R5$6GG/F&%3K[#:/G3S9S&1JSQ:<:-?B3*,@0WN8;
    MW!J%P22*;_TMO5_M\AY8_#>^^ V1+^@W*FJG'^3]1(6HS3)+L@HOZY[^<W6R
    MKI^GUL5>; /W-%R G-=%X2;2S'R_X4-G-]&B4#IP\/'I"/C[ALJ]#(>)"&9&
    M:%6YR<Y@`.[7)KQN6>(H.9Q5(=#.$(5AG"DED$H(M^DM_JG1V*Q[E4+)-@?I
    MJ?B@XPL3*@3.@:1_@Z3?K7[<G.+YM0W,KMINZEKRDHUZ_-Q!0]RQ=_4"1T.B
    M87NGN].ZQ[AU\6!ONJ':S4(SX=*56E485MXE8>G<-)[I!$ZW=6?4LF6\@=J;
    M'S!1#$/%IM9]X>^T^QPGU':IU4/3$SO?X SW"6MUC0?H:2HE#6;P3%$X2-+@
    M?HBF\%_4?RNJ^_T%4$L'"&S_=^_[`@``#0H``%!+`P04``@`" #O@I$N````
    M````````````'@```'=L<S<P+W-A;7!L930O8VQI96YT+TUA:6XN:F%V8;54
    M2VO<,!"^!_(?!I]LNFA;Z"G+'D)(R4+21Y;00PA%D6=MM6/)2/(^*/GOE>7W
    M9DM[B3&6F/DT\VGF&Y=<_.(9@JN<-I(3L[PH"3\R01*56YR?G9_)HM3&P4^^
    MY:QRDMBMM+7GE?W2&'YHG5.WU&SUY7HOL'12JV/WGNT+8HH7:$LND'W[[+>G
    M0:84;(UF*P7^(UR-S+E*"0V[:=:5VNC%_T'O,?/W,(<0NZR>20H0Q*V%.R[5
    M[]H*T-JMX\XO6RU3*+PWAK4S4F6/3\!-9B%IX0!R$P<3(U29RV&YA _!#>%1
    MN OA&]#C^R=(%HWO!<EBCW.YT;N 7A%AQNG29%7AV]67)(;HX?X6],9C$6Q3
    M,%#:@2U1R(W$-!J"U\O+Y$IWHVO SJ;T8*@GZLO24[E!(OU=&TK;KO3)EH'@
    M*_^/E==7/,1<_#72U[I)H5/++BC+T)W$Q<FBJS% D$\X&38-D6#T9<F=*R_F
    M\T[O\U;O\V@&W7F Z&22:)SD2"=@NLV4[!0UH5D/"K2R"_N&:#]$\5";$8KQ
    M-(V;T@ZBCN$JC&MK8D&I,U 54?.%!,:I.[+,]A2O\M#RKFJS";.!R?I@'19,
    M5XZ57AR.VC,L[RL6)X.R!'<BCV$T^X CP9^*%GWBDC %IT$8Y YAA\^]JIK?
    MTD4$[^I 1VF.?PQODJL=%O_^`5!+!PC/*:Y>! (``#P%``!02P,$% `(``@`
    M[X*1+@```````````````"H```!W;',W,"]S86UP;&4T+VAE;&QO+7=O<FQD
    M+6AA;F1L97(M:6YF;RYX;6QMD+$.@C 0AG<3W^'2O; X5A<7=@?GDUZDR;60
    M'I#X]H(%4HC=^OWW];_4-!@L4]1U@R[([7P",#L&`3U=547,[;.-;*N4WN=0
    M_83YK!+4C"(Z2?W0M]$A%X*^8[H4#XHCQ>4%!9L]^2ZX7G<8T4N&=\&RRKJ>
    M)Q%\DX)\'&!$'J8IT!K06K+P^L"AM\R+RW_-IEQ:$MFNZ4\F>""S^0502P<(
    M&/JK'I\```!,`0``4$L#!!0`" `(`.^"D2X````````````````D````=VQS
    M-S O<V%M<&QE-"](96QL;U=O<FQD4V5R=FEC92YJ879A18TQRP(Q#(;W0O]#
    M[*0'VN6;%$%P<7)Q<(XUW!5C6]*>-XC_7>OY*00">=Z\3T)WP9:@]"6*1UYD
    MO":FOY566MFFT0H:V P\Y.5 ITQR\^Z51FFI[/%*.:&CM>E*24MK_UOLI\6:
    M^F^U2OV)O0/'F#/LB#D>H_#Y,/;=JPQ@U,%7&!,)%A\#=!C.3#)W'?JP-K^"
    MW0BV]6[&9UO7QW<HXD,+W3<_G=W?*1 JO00P;S2!H4*SJNRAU6N>4$L'"/O]
    MA9:Z````& $``%!+`P04``@`" `,AI$N````````````````&0```'=L<S<P
    M+W-A;7!L930O<F5A9&UE+FAT;6R%5DMOVS ,O@_H?R!\7F)LQ2Z#:Z"/%=VP
    M=L4RH+TJ-E.KE25/DI,8V(\?];!C+VYW262:Y/?Q)3JK;"WRDW<G[P"R"EF9
    MNQ.=+;<"\V_GCXN?]Y=PPV0I4)LL#?*H)+A\`8WB+#&V$V@J1)N [1H\2RSN
    M;5H8DT"-)6>D4FA$F013@$KCYBQ9+M-2%:EMK=*<B:4W\.ZS--+Q#VM5=CUJ
    M]6&&%PF#*L"OBAO /:L;@6 JM3- /V 5M 8'$]AQ6\$.UV!0;WF!R\.K@LG@
    M2A.%E.A;L!7"ZL?Y/8E^MVALJM$T2I+#&HUA3V0^@[M6M@J>"L%16C"\1" 8
    M#XHZ/%<1=QE48YC-$-#*8F. RZT26RSI`#O-+9=/Q.H`^;E7SY3(^RQ3B?('
    M4D98L^(%"?B9;1FQ8<; 1FD?UR@)L(#!E,4:W: 0ZD%I4:YBIIR/))^79RG+
    MLY1@>SIC$JOCJ.<0@UHL1T0[EKV)%!4I61L%C[??8</%;'25BV*Q<V$L(J6%
    M,UKN:Y'D;[U]$Y]1L=<M%V4`IN8K5-VXHZM^>.-R/S3?$3&O$U@,1P<Y*([*
    M') 'B/G*>.3C-,+!";%4K6U:FSY\N5A\O;M.?:.@@1CG!$VW$A@-KIMW9_J$
    M$C6CV%U<A2JQ.+285#(&3=WK#,QK'HUJ=8$?=Z8L)TZI21<Q6<:E`DILA.IJ
    M-U4NQ7/^*!*/?G45JD#8_\0')==8T.73O<8G#"[1``K!.4.FAYI.0PXC_LST
    MG"\20]M$_ ,L, -L\#EGV!=U!.'GQV5X5+MI\P3%])9Q&>=G.+H>\JWP']M+
    M_S<=PF/9N"-AMDU"G3Q]UC2"%XQ;UPT=1=!TX18;995+RJN3'-Q$,\OIOAUE
    M3FV&"<)IRK-T? 7V(SH>TFM%$[+S]EP:J]LB>!^2$!<3W>*$U<JEVY/N.M#H
    M\]?[(JZ^K=^'::"L6EH.D\D^XC%B&-;N:;_91N#&W^F?4K>!:HSP=[13/?R?
    MUPQ.IP87=.D'`SBV</%Q6>(^*M^H&OMRTD(][?FE<?F2+'PL_ 502P<(9EEH
    M_0\#```U" ``4$L#!!0`" `(`.^"D2X````````````````@````=VQS-S O
    M<V%M<&QE-"]397)V97)(86YD;&5R+FIA=F&-5;%NVS 0W0/D'PZ9*!35U*E&
    MAM1(VQ1HFL9!4:#HP%!GFRU%JN3)<6#DWTN*I"W9<A(-LD6^]^[XCD<V7/SE
    M"P1JR5C)5>EXW2A\-SD].3V1=6,LP1^^XF5+4I57A)9[X/[LNES7JM2\1M=P
    M@>7W:_]W'&0;47ZY^'E[,[U<"VQ(&CTY`EMR72FTY2?4:*7X'#]?0B?8E9Z;
    MEZ!?T3F_^*G1A&MZ">T,;\K9MXN;`]H(L0,G%\8G@]('4ST^"[A46*,>36V'
    M><['_:2/8NZ..! 78JI8SJ:]5U* 4-PYF*%=H4U^@^>CKAP,J[4)+(#&RA4G
    MA%YQ0!@]EXM)?WI&5NH%),=W*7>8&'IE9 522V(C8E!L`A2 EM*5:>R\'PD@
    MOH<A/(;%V$7$E@ND)#_M!E@1AE@DA^<L*;RMH\09%%V IT&V72/\^@U!#GF%
    MUK&<HD5JK89>O P8T;DW1B'7*>U;_->B(S;<B%"+K#U[=(1U:5HJO;6:E&8Y
    MX<2%*5<**RC[3U[#-CFR+;XB&]<8[9#!03[PJH0B_9F,4DWM(VQR`0[[$ 2M
    MNSH>S!2UF(SP_"*C%>>!&BJ0)E@Q@(<F]4OVK_-,"> P=<,MQ9UQJ5>H3(/Q
    M*U#V5%(G@\?X(]1WJU?SR(_2.IHNI:I8C#'.LNA:18>4G5J/-^;U;120#MZ?
    MP9ND%U+]P57KL^[QTQROJG FA-9G^^V2P4^"DU@R&!Q#@-NR`V#,84;^GKFS
    M_F[HV4)+:QY XP/L70@,MP%R\8]MR'QP]+S:LZ@_A>FWV&3=?*.!"&B+H2P)
    M%=SI-!([=F:D/?AQ!)9)_HYPU]XL5A2[+1J<`QU>?E>&CV(+UQUVJP8@YRQ"
    MI7;$M4 S'R2^,W2X+SK%O.O38*'C:9WQ^: )T.WPT[B_NE5JYZ]__0=02P<(
    M)86WEK("```@" ``4$L!`A0`% `(``@`/8:1+@`````"``````````D`! ``
    M`````````````````$U%5$$M24Y&+_[*``!02P$"% `4``@`" `]AI$N<UT-
    M_$<```!'````% `````````````````]````345402U)3D8O34%.249%4U0N
    M34902P$"% `4``@`" #]@I$N3D:RZ!$!```/`@``% ````````````````#&
    M````=VQS-S O<')O<&5R=&EE<RYT>'102P$""@`*```````-AI$N````````
    M````````#@`````````````````9`@``=VQS-S O<V%M<&QE-"]02P$"% `4
    M``@`" #[A)$NM'9L4!$!``##`0``'0````````````````!%`@``=VQS-S O
    M<V%M<&QE-"]A<'!L:6-A=&EO;BYX;6Q02P$"% `4``@`" #+A9$N])J"TR $
    M``"6#P``%P````````````````"A`P``=VQS-S O<V%M<&QE-"]B=6EL9"YX
    M;6Q02P$""@`*```````1AI$N````````````````%0`````````````````&
    M" ``=VQS-S O<V%M<&QE-"]C;&EE;G0O4$L!`A0`% `(``@`[X*1+FS_=^_[
    M`@``#0H``"<`````````````````.0@``'=L<S<P+W-A;7!L930O8VQI96YT
    M+T-L:65N=$AA;F1L97(N:F%V85!+`0(4`!0`" `(`.^"D2[/*:Y>! (``#P%
    M```>`````````````````(D+``!W;',W,"]S86UP;&4T+V-L:65N="]-86EN
    M+FIA=F%02P$"% `4``@`" #O@I$N&/JK'I\```!,`0``*@``````````````
    M``#9#0``=VQS-S O<V%M<&QE-"]H96QL;RUW;W)L9"UH86YD;&5R+6EN9F\N
    M>&UL4$L!`A0`% `(``@`[X*1+OO]A9:Z````& $``"0`````````````````
    MT X``'=L<S<P+W-A;7!L930O2&5L;&]7;W)L9%-E<G9I8V4N:F%V85!+`0(4
    M`!0`" `(``R&D2YF66C]#P,``#4(```9`````````````````-P/``!W;',W
    M,"]S86UP;&4T+W)E861M92YH=&UL4$L!`A0`% `(``@`[X*1+B6%MY:R`@``
    M( @``" `````````````````,A,``'=L<S<P+W-A;7!L930O4V5R=F5R2&%N
    ?9&QE<BYJ879A4$L%!@`````-``T`K@,``#(6````````
    `
    end
    [sample10.zip]

  • Any suggestions for maintenance on my 2008 MBP?

    So I bought my 2.4GHz MBP last October. It works pretty well, and I run all media software programs (Photoshop, Illustrator, Logic, Ableton Live). But over time it seems like my machine is running sluggish. I get the rainbow windmill a lot now (mainly when I'm on the internet) and it just seems to drag more....
    Can anyone lend me some SOLID methods of maintenance??? I'm not familiar with any kind of maintenance on Macs.... Your help would be appreciated Thanks!

    Kappy's Personal Suggestions for OS X Maintenance
    For disk repairs use Disk Utility. For situations DU cannot handle, the best third-party utilities are:
    1. Disk Warrior; DW only fixes problems with the disk directory, but most disk problems are caused by directory corruption. Disk Warrior 4.x is now Intel Mac compatible.
    2. TechTool Pro provides additional repair options including file repair and recovery, system diagnostics, and disk defragmentation. TechTool Pro 4.5.1 or higher is Intel Mac compatible.
    3. Drive Genius is similar to TechTool Pro in terms of the various repair services provided. Versions 1.5.1 or later are Intel Mac compatible.
    OS X performs certain maintenance functions that are scheduled to occur on a daily, weekly, or monthly period. The maintenance scripts run in the early AM only if the computer is turned on 24/7 (no sleep). If this isn't the case, then an excellent solution is to download and install a shareware utility such as Macaroni, JAW PseudoAnacron, or Anacron that will automate the maintenance activity regardless of whether the computer is turned off or asleep. Dependence upon third-party utilities to run the periodic maintenance scripts has been significantly reduced in Tiger and Leopard.
    OS X automatically defrags files less than 20 MBs in size, so unless you have a disk full of very large files there's little need for defragmenting the hard drive. As for virus protection there are few if any such animals affecting OS X. You can protect the computer easily using the freeware Open Source virus protection software ClamXAV. Personally I would avoid most commercial anti-virus software because of their potential for causing problems.
    I would also recommend downloading the shareware utility TinkerTool System that you can use for periodic maintenance such as removing old logfiles and archives, clearing caches, etc.
    For emergency repairs install the freeware utility Applejack. If you cannot start up in OS X, you may be able to start in single-user mode from which you can run Applejack to do a whole set of repair and maintenance routines from the commandline. Note that AppleJack 1.5 is required for Leopard.
    When you install any new system software or updates be sure to repair the hard drive and permissions beforehand. I also recommend booting into safe mode before doing system software updates.
    Get an external Firewire drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility. You can also make and maintain clones with good backup software. My personal recommendations are (order is not significant):
    1. Retrospect Desktop (Commercial - not yet universal binary)
    2. Synchronize! Pro X (Commercial)
    3. Synk (Backup, Standard, or Pro)
    4. Deja Vu (Shareware)
    5. Carbon Copy Cloner (Donationware)
    6. SuperDuper! (Commercial)
    7. Intego Personal Backup (Commercial)
    8. Data Backup (Commercial)
    9. SilverKeeper 2.0 (Freeware)
    10. MimMac (Commercial)
    11. CloneTool Hatchery (Shareware)
    12. Clone X (Commercial)
    The following utilities can also be used for backup, but cannot create bootable clones:
    1. Backup (requires a .Mac account with Apple both to get the software and to use it.)
    2. Toast
    3. Impression
    4. arRSync
    Apple's Backup is a full backup tool capable of also backing up across multiple media such as CD/DVD. However, it cannot create bootable backups. It is primarily an "archiving" utility as are the other two.
    Impression and Toast are disk image based backups, only. Particularly useful if you need to backup to CD/DVD across multiple media.
    Visit The XLab FAQs and read the FAQs on maintenance, optimization, virus protection, and backup and restore.
    Additional suggestions will be found in Mac Maintenance Quick Assist.
    Referenced software can be found at www.versiontracker.com and www.macupdate.com.

  • Is there any suggestion for the "SafeNamedCache was explicitly released"

    I deployed coherence-java-3.6.0 on Red Hat with kernel 2.6.18, 64-bit.
    Then I deployed coherence-3.6.0-cpp on the same machine and compiled the example "hellogrid", and it works OK. No errors.
    My requirement is to integrate Coherence into Nginx 0.7.62 for my web server. I wrote a thread pool for the NamedCache::handle and get a new handle after 5 seconds because of the "was explicitly released" error, but that has not solved the problem.
    Could anybody give me a suggestion for the root cause of this problem?
    Thanks.
    The configuration is as follows; it is all copied from the configuration files for the example "hellogrid" (coherence-cpp/examples/config/extend-server-config.xml and coherence-cpp/examples/config/extend-cache-config.xml):
    extend-server-config.xml:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
    <defaults>
    <serializer>pof</serializer>
    </defaults>
    <caching-scheme-mapping>
    <cache-mapping>
    <cache-name>dist-*</cache-name>
    <scheme-name>example-distributed</scheme-name>
    </cache-mapping>
    <cache-mapping>
    <cache-name>repl-*</cache-name>
    <scheme-name>example-replicated</scheme-name>
    </cache-mapping>
    <cache-mapping>
    <cache-name>local-*</cache-name>
    <scheme-name>example-local</scheme-name>
    </cache-mapping>
    <cache-mapping>
    <cache-name>*</cache-name>
    <scheme-name>example-distributed</scheme-name>
    </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
    <distributed-scheme>
    <scheme-name>example-distributed</scheme-name>
    <service-name>DistributedCache</service-name>
    <lease-granularity>member</lease-granularity>
    <backing-map-scheme>
    <local-scheme/>
    </backing-map-scheme>
    <autostart>true</autostart>
    </distributed-scheme>
    <replicated-scheme>
    <scheme-name>example-replicated</scheme-name>
    <service-name>ReplicatedCache</service-name>
    <lease-granularity>member</lease-granularity>
    <backing-map-scheme>
    <local-scheme/>
    </backing-map-scheme>
    <autostart>true</autostart>
    </replicated-scheme>
    <local-scheme>
    <scheme-name>example-local</scheme-name>
    </local-scheme>
    <invocation-scheme>
    <scheme-name>example-invocation</scheme-name>
    <service-name>InvocationService</service-name>
    <autostart>true</autostart>
    </invocation-scheme>
    <proxy-scheme>
    <scheme-name>example-proxy</scheme-name>
    <service-name>ProxyService</service-name>
    <thread-count system-property="tangosol.coherence.extend.threads">2</thread-count>
    <acceptor-config>
    <tcp-acceptor>
    <local-address>
    <address system-property="tangosol.coherence.extend.address">localhost</address>
    <port system-property="tangosol.coherence.extend.port">9099</port>
    </local-address>
    </tcp-acceptor>
    <outgoing-message-handler>
    <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    </acceptor-config>
    <autostart>true</autostart>
    </proxy-scheme>
    </caching-schemes>
    </cache-config>
    extend-cache-config.xml:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
    <defaults>
    <serializer>pof</serializer>
    </defaults>
    <caching-scheme-mapping>
    <cache-mapping>
    <cache-name>local-*</cache-name>
    <scheme-name>local-example</scheme-name>
    </cache-mapping>
    <cache-mapping>
    <cache-name>near-*</cache-name>
    <scheme-name>near-example</scheme-name>
    </cache-mapping>
    <cache-mapping>
    <cache-name>dist-*</cache-name>
    <scheme-name>extend-dist</scheme-name>
    </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
    <local-scheme>
    <scheme-name>local-example</scheme-name>
    </local-scheme>
    <near-scheme>
    <scheme-name>near-example</scheme-name>
    <front-scheme>
    <local-scheme>
    <high-units>100</high-units>
    <expiry-delay>1m</expiry-delay>
    </local-scheme>
    </front-scheme>
    <back-scheme>
    <remote-cache-scheme>
    <scheme-ref>extend-dist</scheme-ref>
    </remote-cache-scheme>
    </back-scheme>
    <invalidation-strategy>auto</invalidation-strategy>
    </near-scheme>
    <remote-cache-scheme>
    <scheme-name>extend-dist</scheme-name>
    <service-name>ExtendTcpCacheService</service-name>
    <initiator-config>
    <tcp-initiator>
    <remote-addresses>
    <socket-address>
    <address system-property="tangosol.coherence.proxy.address">localhost</address>
    <port system-property="tangosol.coherence.proxy.port">9099</port>
    </socket-address>
    <socket-address>
    <address system-property="tangosol.coherence.proxy.address">127.0.0.1</address>
    <port system-property="tangosol.coherence.proxy.port">9099</port>
    </socket-address>
    <socket-address>
    <address system-property="tangosol.coherence.proxy.address">172.16.1.23</address>
    <port system-property="tangosol.coherence.proxy.port">9099</port>
    </socket-address>
    </remote-addresses>
    <connect-timeout>10s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
    <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    </initiator-config>
    </remote-cache-scheme>
    </caching-schemes>
    </cache-config>
    Anyone who knows the root cause, please tell me. Thanks.

    The original error shown in the console of the Coherence server is as follows:
    2010-11-23 22:27:33.742/1566.313 Oracle Coherence GE 3.6.0.0 <D5> (thread=Proxy:ProxyService:TcpAcceptorWorker:0, member=1): An exception occurred while processing a PutRequest for Service=Proxy:ProxyService:TcpAcceptor: java.lang.IllegalStateException: SafeNamedCache was explicitly released
    at com.tangosol.coherence.component.util.SafeNamedCache.ensureRunningNamedCache(SafeNamedCache.CDB:23)
    at com.tangosol.coherence.component.util.SafeNamedCache.getRunningNamedCache(SafeNamedCache.CDB:1)
    at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
    at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.put$Router(NamedCacheProxy.CDB:1)
    at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.put(NamedCacheProxy.CDB:2)
    at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
    at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
    at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
    at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:39)
    at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
    at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
    at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:619)
    It is shown again and again for any operation until the Coherence server is restarted.

  • Suggestions for good framework

    Hi all,
    We are looking for a framework for a dot-com ecommerce site (using EJBs, JSP, Servlets, HTML, XML...). We are using WebLogic 6.
    Does anybody have any suggestions for good frameworks? The framework should be simple to handle and the learning curve should be minimal.
    Have a great day.
    thanks in advance.
    -ramu

    Ramu,
    You could look at WebGain tools - http://www.webgain.com. They are the only vendors who currently support development on WLS 6.0.
    Nirav.
    Nirav Chanchani
    BEA Systems, Inc.

  • Moving to Japan - suggestions for iMac

    I will soon be moving to Japan (for about two years, though I may come back to visit) and would kind of like to bring my iMac with me. I only recently got it as a gift (less than a year old) so I'd really rather not sell it and buy another one in Japan, as I'd not get nearly the value it's worth most likely.
    Does anyone have any advice for transporting it with me? My main options seem to be finding a way to pack it in one of my suitcases (thus losing a LOT of other packing room, and risking the integrity of the computer) or having it shipped over, which I believe would be very expensive.
    I do have a laptop, and I could just leave my iMac behind, but it seems like such a waste to not use a brand new wonderful desktop for the majority of two years. Does anyone have any suggestions for how to handle this, and what would be the most efficient option? Thank you in advance

    I have carried my iMac on buses, trains and airplanes as carry-on baggage with an iLugger. Fortunately mine is an Early '06 17" iMac. It fits in an overhead bin. The same iLugger will carry a 20", but if yours is 24" they make an iLugger for that one but I have no idea if it would work for you as carry-on. If not, then I would just take the MacBook or ship it. Do you still have the original packing cartons? Perhaps an Apple Store or another reseller has an extra one. Apple ships them out from the factory in China in these cartons.
    Dah•veed

  • Need suggestions for office setup

    My company has moved completely to Macs for our client machines. We have roughly 30 employees and all have either MacBook Pros, or MacBook Airs, so all laptops.  We have legacy PC servers, many of which can go away and a couple that are running software we need that can be P2V'd.
    I'd like to get us off of Active Directory and moved over to a Mac server. I have a Mac Pro running 10.8 server, and I'm familiar with setting up, configuring and administrating OS X Server.
    My question is about how I should structure our setup.  Things to consider are as follows:
    We are an advertising agency, quite a bit of data, about a terabyte in our main share
    We currently backup to tape
    Users use their laptops for personal things as well, this means iTunes libraries on laptops (I know, but we're a pretty lax / cool company)
    We are doing in-house video work. Our video guy needs to keep the raw footage archived indefinitely. This has led to 6 terabytes in 6 months, and will not be stopping.
    We host our email with Google Business Apps
    Currently users have a local account and password on computer, Active Directory user / pass for shares, Google user / pass for email
    We can currently have 4 x 3 TB hard drives in the Mac Pro and we have a 16 TB Drobo
    What I would LIKE to do:
    Move all data to OS X Server
    Backup data to external drive that gets swapped weekly and stored offsite
    Allow users to use their laptops for personal / iTunes things, but protect important data on laptops. Ideally I'd like to either set up home sync on the server and only sync the Documents folder or something similar, or do Time Machine backups to the server from each laptop but exclude lots of folders. I'd like to keep important data on the server, not locally on laptops, and not be responsible for their music etc., but not be a jerk and bring down the heavy.
    I would like to have a separate solution for handling the video raw footage data, outside of day to day office data
    I would like to consolidate as much user / pass info as possible, even if that means I can only consolidate the local user with a network account.  What would be the best way to migrate local laptop users to network accounts?
    ANY suggestions will be greatly appreciated. How would you go about setting this up? It would be easy if I were starting from scratch, but I'm not. I need to implement the new way while preserving data from how things are set up now.
    THANKS THANKS THANKS!!!

    I can certainly see why you don't want to copy that amount of data. In that case you need to do the following.
    Make sure an account for the user exists in Open Directory. This could be with the same or a different shortname.
    You can set mobile accounts to be created (for laptops) and you can configure options to either sync or not sync the user's home directory. If they are as big as you say, you will want not to sync the contents.
    Then, on the laptop:
    1. Make sure you have a separate local admin account, not the user's current account.
    2. Log in as the admin.
    3. Run Terminal.
    4. cd /Users
    5. mv (move) the existing home folder for the user to a new temporary name.
    6. Open System Preferences.
    7. Click on Accounts.
    8. Delete the user's current account.
    9. Click on Login Options.
    10. Join the laptop to your Open Directory system (assuming it is correctly configured and running).
    11. Reboot (just to be safe).
    12. Log in with the user's new network login; assuming you have configured things on the server appropriately, you will be asked if you want to create a mobile account.
    13. Once it has created the account it will finish logging in to an empty account. Now log out.
    14. Log in as the admin account again.
    15. Run Terminal.
    16. mv (move/rename) the old home directory to the correct name for the new account.
    17. Run sudo chown -R usersnewshortname usersnewshortname. This makes sure the new account owns all the old files.
    All the above will move the user's home directory from the old standalone account to a network account.

  • Need Suggestion for Archival of a Table Data

    Hi guys,
    I want to archive one of my large tables. The structure of the table is below.
    Around 40,000 rows are inserted into the table daily.
    Need suggestions for the same. Will partitioning help, and on what basis?
    CREATE TABLE IM_JMS_MESSAGES_CLOB_IN
    (
      LOAN_NUMBER VARCHAR2(10 BYTE),
      LOAN_XML CLOB,
      LOAN_UPDATE_DT TIMESTAMP(6),
      JMS_TIMESTAMP TIMESTAMP(6),
      INSERT_DT TIMESTAMP(6)
    )
    TABLESPACE DATA
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
      INITIAL 1M
      NEXT 1M
      MINEXTENTS 1
      MAXEXTENTS 2147483645
      PCTINCREASE 0
      BUFFER_POOL DEFAULT
    )
    LOGGING
    LOB (LOAN_XML) STORE AS (
      TABLESPACE DATA
      ENABLE STORAGE IN ROW
      CHUNK 8192
      PCTVERSION 10
      NOCACHE
      STORAGE (
        INITIAL 1M
        NEXT 1M
        MINEXTENTS 1
        MAXEXTENTS 2147483645
        PCTINCREASE 0
        BUFFER_POOL DEFAULT
      )
    )
    NOCACHE
    NOPARALLEL;
    do the needful.
    regards,
    Sandeep

    There will not be any updates/deletes on the table.
    I have created a partitioned table with the same structure, and I am inserting the records from my original table into this partitioned table, where I will maintain data for 6 months.
    After loading the data from the original table into the archived table, I will truncate the original table.
    If my original table is partitioned, then what about restoring the data? How will I restore last month's data?

    Hello! I had VPN connection (L2TP over IpSec) setup on Lion OS - worked perfectly. But after upgraging to Mountain Lion when I try to connect - I get an error "The L2TP-VPN server did not respond. Try reconnecting. If the problem continues, verify yo