What is the best strategy for RMAN backup configuration?

Hi all,
my database is 50 GB in size. I want to take a weekly full backup and incremental backups, without a recovery catalog, using the following commands.
Weekly full database backup:
run {
  backup as compressed backupset
    incremental level 0
    device type disk
    tag 'weekly_database'
    format '/sw/weekly_database_%d_t%t_c%c_s%s_p%p'
    database;
}
I want to configure RMAN with the following strategy:
CONFIGURE RETENTION POLICY TO REDUNDANCY window of 14 days.
CONFIGURE BACKUP OPTIMIZATION OFF;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/sw/autocfile_%F';
and leave the rest at defaults, plus:
SQL> alter system set control_file_record_keep_time=15 days;
OS is AIX 6, for two databases (10gR2 and 11gR2).
What is the best configuration strategy for RMAN backup, and should I back up with or without a recovery catalog?
Please advise.

For just two databases, there really wouldn't be a need for a recovery catalog. You can still restore/recover without a controlfile and without a recovery catalog.
From this:
afzal wrote:
CONFIGURE RETENTION POLICY TO REDUNDANCY window of 14 days.
I am assuming you want to keep two weeks' worth of backups, therefore these:
alter system set control_file_record_keep_time=15 days;
CONFIGURE RETENTION POLICY TO REDUNDANCY window of 14 days
should be:
RMAN> sql 'alter system set control_file_record_keep_time=22';
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
22 would give you that extra layer of protection for instances when a problem occurs with a backup and you want to ensure that data doesn't get aged out.
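Putting it together, a minimal sketch of the configuration plus a weekly level 0 and a daily level 1 script (the /sw paths, the tags, the cumulative daily level 1 and the PLUS ARCHIVELOG clauses are illustrative assumptions, not a prescribed setup):
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
RMAN> CONFIGURE BACKUP OPTIMIZATION OFF;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/sw/autocfile_%F';
RMAN> sql 'alter system set control_file_record_keep_time=22';
# weekly level 0, e.g. every Sunday
run {
  backup as compressed backupset
    incremental level 0
    device type disk
    tag 'weekly_database'
    format '/sw/weekly_database_%d_t%t_c%c_s%s_p%p'
    database plus archivelog;
}
# daily cumulative level 1 on the other days
run {
  backup as compressed backupset
    incremental level 1 cumulative
    device type disk
    tag 'daily_incr'
    format '/sw/daily_incr_%d_t%t_c%c_s%s_p%p'
    database plus archivelog;
}
With a recovery window (rather than REDUNDANCY), RMAN keeps every backup needed to recover to any point within the last 14 days, and control_file_record_keep_time=22 keeps those backup records in the controlfile a little longer than the window, which is what matters when you run without a recovery catalog.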

Similar Messages

  • Best practices for RMAN backup management for many databases

    Dear all,
    We have many 10g databases (>40) hosted on multiple Windows servers which are backed up using RMAN.
    A year ago, all backups were implemented through Windows Scheduled Tasks using some batch files.
    We have been busy (re)implementing / migrating such backups into Grid Control.
    I personally prefer to maintain the backup management in Grid Control, but a colleague now wants to go back to the batch files.
    What I am looking for here is advice on the management of RMAN backups for multiple databases: do you use Grid Control, any third-party backup management tool, or your own home-made solution?
    One of the discussion topics is the work involved if the central backup location changes.
    Well... any real-life advice on best practices / strategies for RMAN backup management for many databases will be appreciated!
    Thanks,
    Thierry

    Hi Thierry,
    Thierry H. wrote:
    Thanks for your reaction.
    So, I understand that you do not use Grid Control to manage the backups, and as a consequence you also have no 'direct' overview of the job schedules.
    One of my concerns is also to avoid too many backups starting at the same time, to avoid network / storage overload. Such an overview is available in Grid Control's Jobs screen.
    And, based on your strategy, do you recreate a 'one-time' Oracle scheduled job for every backup, or do your scripts create an Oracle job with multiple schedules?
    You're very welcome!
    Well, Grid Control is not an option for us, since each customer is in a separate infrastructure, and with their own licensing. I have no real way (unlike your situation) to have a centralized point of control, but that on the other hand means that I don't have to consider network/storage congestion, like you have to.
    The script is run from a "permanent" job within the database scheduler, created like this:
    begin
      dbms_scheduler.create_job(
            job_name        => 'BACKUP',
            job_type        => 'EXECUTABLE',
            job_action      => '/home/oracle/scripts/rman_backup.sh',
            start_date      => trunc(sysdate)+1+7/48,
            repeat_interval => 'trunc(sysdate)+1+7/48',
            enabled         => true,
            auto_drop       => false,
            comments        => 'execute backup script at 03:30');
    end;
    /
    The "master script" then determines which incremental level to use, based on the weekday from the OS. The actual job schedule (start date, run interval etc.) is set together with the customer IT/IS dept, to avoid congestion on the backup resources.
    I have no overview of the backup status, run times etc., but have made monitoring scripts that alert me if/when a backup either fails or runs for too long. This, in addition to scheduled disaster/recovery tests, makes me sleep rather well at night.. ;-)
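    A rough sketch of such a check (not the actual script; the 6-hour threshold and the one-day lookback are just assumed examples) could query V$RMAN_BACKUP_JOB_DETAILS:
    -- flag recent RMAN jobs that did not complete cleanly or ran longer than 6 hours
    select session_key, input_type, status, start_time, end_time,
           round(elapsed_seconds/3600, 1) as hours
    from   v$rman_backup_job_details
    where  start_time > sysdate - 1
    and    (status <> 'COMPLETED' or elapsed_seconds > 6*3600)
    order by start_time;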
    I realize that there (might be) better ways of doing backup scheduling in your environment, since my requirements are so completely different than yours, but I guess that we all face the same challenges in unifying the environments as much as possible, to minimize the amount of actual work we have to do. :-)
    Good luck!
    //Johan

  • What's best strategy for dealing with 40+ hours of footage

    We have been editing a documentary with 45+ hours of footage and presently have captured roughly 230 GB. Needless to say, it's a lot of files. What's the best strategy for dealing with so much captured footage? It's almost impossible to remember it all, and labeling it while logging seems inadequate, as it is difficult to actually read comments in dozens and dozens of folders.
    Just looking for suggestions on how to deal with this problem for this and future projects.
    G5 Dual Core 2.3   Mac OS X (10.4.6)   2.5 g ram, 2 internal sata 2 250gb

    Ditto, ditto, ditto on all of the previous posts. I've done four long form documentaries.
    First I listen to all the sound bites and digitize only the ones that I think I will need. I will take in much more than I use, but I like to transcribe bites from the non-linear timeline. It's easier for me.
    I had so many interviews in the last doc that I gave each interviewee a bin. You must decide how you want to organize the sound bites. Do you want a bin for each interviewee, or do you want to do it by subject? That will depend on your documentary and subject matter.
    I then have b-roll bins. Sometimes I base them on location and sometimes I base them on subject matter. This last time I based them on location because I would have a good idea of what was in each bin by remembering where and when it was shot.
    Perhaps you weren't at the shoot and do not have this advantage. It's crucial that you organize your b-roll bins in a way that makes sense to you.
    I then have music bins and bins for my voice over.
    Many folks recommend that you work in small sequences and nest. This is a good idea for long form stuff. That way you don't get lost in the timeline.
    I also make a "used" bin. Once I've used a shot I pull it out of the bin and put it "away." That keeps me from repeatedly looking at footage that I've already used.
    The previous posts are right. If you've digitized 45 hours of footage you've put in too much. It's time to start deleting some media. Remember that when you hit the edit suite, you should be on the downhill slide. You should have a script and a clear idea of where you're going.
    I don't have enough fingers to count the number of times that I've had producers walk into my edit suite with a bunch of raw tape and tell me that they "want to make something cool." They generally have no idea where they're going and end up wondering why the process is so hard.
    Refine your story and base your clip selections on that story.
    Good luck
    Dual 2 GHz Power Mac G5   Mac OS X (10.4.8)  

  • Best strategy for offsite backups

    I am using TM on an external hard drive (call it #1) with my iMac, however I want to store a backup offsite, in the event of a fire, etc.
    Can you use TM on one machine to make back ups on multiple different external hard drives (not necessarily at the same time)? Is this the best way to back up offsite, by using TM to make an extra external hard drive backup that I would store elsewhere?
    If I use TM, would I get another external hard drive (#2), then turn off TM, disconnect external hard drive #1, plug in #2, then turn TM on, then have it do a full back up, then turn TM off, then disconnect #2, reconnect #1, and turn TM back on?
    Let's say #2 has been off site for a month (while #1 has been running with TM on the computer) and I want to update #2. Is the best way to erase it, and do a full back up? Or, after a month, can I just plug it in and let TM do an incremental backup? Does it matter that TM has been doing backups with #1?
    Sorry for the confusing questions.
    Thanks for any advice.
    Lee

    Agreeing with Kappy that your "secondary" backups should be made with a different app. You can use Disk Utility as he details, or a "cloning" app such as CarbonCopyCloner or SuperDuper that can update the clone periodically, rather than erasing and re-copying from scratch.
    CarbonCopyCloner (http://www.bombich.com) is donationware; SuperDuper (http://www.shirt-pocket.com/SuperDuper/SuperDuperDescription.html) has a free version, but you need the paid one (about $30) to do updates instead of full replacements, or scheduling.
    If you do decide to make duplicate TM backups, you can do that. Just tell TM when you want to change disks (thus it's a good idea to give them different names). But there's no reason to erase and do full backups; after the first full backup to each drive, Time Machine will back up only the changes made since the last backup to that disk. Each is completely independent.

  • Best strategy for variable aggregate custom component in dataTable

    Hey group, I've got a question.
    I'd like to write a custom component to display a series of editable Things in a datatable, but the structure of each Thing will vary depending on what type of Thing it is. So, some Things will display radio button groups (with each radio button selecting a small set of additional input elements, so we have a vertical array radio buttons and beside each radio button, a small number of additional input elements), some will display text-entry fields, and so on.
    I'm wondering what the best strategy for tackling this sort of thing is. I'm sort of thinking I'll need to do something like dynamically add to the component tree in my custom component's encodeBegin(), and purge the extra (sub-) components in encodeEnd().
    Decoding will be a bit of a challenge, maybe.
    Or do I simply instantiate (via constructor calls, not createComponent()) the components I want and explicitly call their encode*() and decode() methods, without adding them to the view tree?
    To add to the fun of all this, I'm only just learning Faces (having gone through the Dudney book, Mastering JSF, and writing some simpler custom components) and I don't have experience with anything other than plain vanilla JSP. (No EJB, no Struts, no Tapestry, no spiffy VisualDevStudioWysiwyg++ [bah humbug, I'm an emacs user]). I'm using JSP 2.0, JSF 1.1_01, JBoss 4.0.1 and JDK 1.4.2. No, I won't upgrade to 1.5 (yet).
    Any hints, pointers to good sample code? I've looked at some of the sample code that came with the RI and I've tried to navigate the JSF Blueprints stuff, but I haven't really found anything on aggregating components into a custom component. Did I miss something obvious?
    If this isn't a good question, please let me know how I can sharpen it up a bit.
    Thanks.
    John.

    Hi,
    We're doing something very similar. I had a look at the Tomahawk Date component, and it seems to dynamically create InputText components in encodeEnd(). However, it doesn't decode these directly (it only expects a single textual value). I expect you may have to check the request yourself in decode().
    Other ideas would be appreciated, though - I'm still new to JSF.

  • What is the best solution for backup?

    Our environment consists of RAC (2 nodes). We are trying to back up our database using RMAN with NetBackup.
    we also have other different databases need to be backed up.
    In RAC case, what is best solution for backing up the database ?
    As I mentioned above, we are using RMAN for backup our RAC with NetBackup.
    Actually, RMAN is not a simple utility for me.
    Is there any other way to back up the database in our situation without RMAN?
    I need some advice from all of you.
    Thanks in advance.

    Hi Justin
    There are many possible ways to backup your database. You must decide the one that suits your environment.
    Following is the list of options that you have.
    1. Take Online Backups
    Issue this command to freeze tablespaces
    SQL> ALTER TABLESPACE tblspcname BEGIN BACKUP;
    Copy all the files belonging to this tablespace to your backup location using OS commands.
    Release the tablespace by using this command.
    SQL> ALTER TABLESPACE tblspcname END BACKUP;
    To find the data files belonging to a particular tablespace you can issue this statement
    SQL>SELECT file_name, tablespace_name FROM dba_data_files ORDER BY tablespace_name;
    2. If your db size is not BIG then you can take logical backups. Logical backups can be FULL or incremental. In 10g you can use file sets to split your logical backups into more than one file with specified sizes. (By logical backups I mean EXPORT.)
    To take export for example issue this command.
    ORACLE_HOME\bin\exp file=fullpath+filename.dmp log=fullpath+logfilename.log FULL=Y userid=system/pwd@dbconnectstring
    To get full list of export parameters type
    ORACLE_HOME\bin\exp help=y
    3. RMAN (Strongly recommended) but you ruled out its possibility so I won't elaborate on that.
    4. COLD BACKUP
    To perform this type of backup you will need to shutdown your database by issuing this command.
    SQL>SHUTDOWN IMMEDIATE;
    (On RAC you will need to shutdown all the instances before copying files to the backup location).
    Use OS copy command to copy files to backup location.
    (This method is not recommended as it will flush your SGA and your client will complain about performance for the first few hours).
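    A rough sketch of how to list everything that needs to be copied for a cold backup (run it while the database is still open, before the shutdown; temp files are omitted because they can simply be recreated afterwards):
    SQL> SELECT name FROM v$datafile
         UNION ALL
         SELECT member FROM v$logfile
         UNION ALL
         SELECT name FROM v$controlfile;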
    Let me know if you need more details.
    Hopefully this helps.
    Rgds
    Adnan

  • What are the best practices for CQ5.5 configuration?

    Hello,
    What are the best practices for a CQ5.5 configuration that handles high availability?
    Last time I had an issue on the server: after I uploaded 2 GB of DAM assets, the server was not able to start and kept giving an error regarding Tar persistence.
    So kindly let me know what the best Apache Felix configuration is.
    Thanks in advance...
    Regards,
    Satish

    Hi,
    A DAM upload, regardless of the size of the assets, never should result in TarPM problems, unless you run into an OOM, which left the repository in an unclean state. So if you regularly do DAM uploads of that size, you should check the Garbage Collection logs and probably adjust the heapsize if necessary. You might want to limit the number of concurrent running workflows to keep the memory consumption a bit lower.
    To your question: HA in a traditional sense you cannot achieve with a single box, even with optimized settings. In an author usecase you would need clustering.
    Jörg

  • Best Practices for Accessing the Configuration data Modelled as XML File in

    Hi,
    I referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is this:
    Re: OSB: What is best practice for reading configuration information
    Another could be:
    Uploading the XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML).
    I need expert answers for following.
    1] I have an .xsd file which represents the configuration data. The structure of the XSD is:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property value will change according to the environment...
    For Dev:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue_Stage</Config>
    </FrameworkConfig>
    3] Let say I create the following Folder structure to store the Configuration file specific for dev/stage/prod instance
    OSB Project Folder
    |
    |---Dev
    |
    |--Dev_Config_file.xml
    |
    |---Stage
    |
    |--Stage_Config_file.xml
    |
    |---Prod
    |
    |-Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model the value which specifies the current server type (Dev/Stage/Prod) on which the OSB MF is running. Let's say some construct which acts as a global configuration and is accessible inside the OSB message flow. If the value of the global variable is Dev, I will load the XML config file under the Dev directory at runtime, containing the key/value pairs for the Dev environment.
    6] This Re: OSB: What is best practice for reading configuration information
    suggests designing a web application which serves the XML file over the HTTP protocol and getting the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependencies? I read about the configuration file approach too, but the sample configuration file doesn't show an entry of an .xml file as a resource.
    Hope I am clear...I really appreciate your comments and suggestion..
    Sushil

    If you can enforce some sort of naming convention for the transport endpoint for this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    eg. http://osb_host/service/prod/service1 ==> Prod and http://osb_host/service/stage/service1 ==> Stage; then I think $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and applying appropriate XPath functions you will be able to extract the environment name.
    Chk this link for details on $inbound/ctx:transport : http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822

  • What's best practice for logging messages in pageflow?

    What's best practice for logging messages in pageflow?
    Workshop complains when I try to use a Log4J logger by saying it's not serializable. Is there a context similar to JWSContext that you can get a logger from?
    There seems to be a big hole in the documentation on debug logging in workflows and JSP pages.
    thanks,
    Rodger...

    Make the configuration change in setDomainEnv.cmd. Find where the following variable is set:
    LOG4J_CONFIG_FILE
    and change it to your desired path.
    In your Global.app class, instantiate a static Logger like this:
    transient static Logger logger = Logger.getLogger(Global.class);
    You should be logging now as long as you have the categories and appenders configured properly in your log4j.xml file.

  • What's the best option for passing parameters between taskflows?

    Dear All,
    I have three Task Flows:
    1. TF1
         -  Main Taskflow that calls a web service to gather its data
    2. TF2
         -  Secondary taskflow which receives a parameter and, depending on the value of the parameter received, displays its data accordingly. Generally any data
         is fed from TF1
    3. TF3
         -  Same as TF2
    Use Case:
    All three TF will be dropped to the page as Regions in a Webcenter Portal Application. Changes in TF1 should propagate into TaskFlow 2.
    Question:
    1. How do I configure that changes in TF1 are propagated back into taskflows 2 and 3, and what's the best option for this?
    2. At runtime, user can choose to edit the page and TF2 and TF3 can be deleted but TF 1 should remain as the source of information.
    Given the scenario above:
    - shall I wire the taskflows via page parameters?
    - contextual events?
    What are the considerations that need to be thought of? I haven't done such a requirement before.
    Please help.
    Webcenter 11.1.1.6

    Contextual events seem to be the best option here.
    This way you can trigger whenever you want. Web services can be slow so you can trigger the event when the gathering of the data has been finished and then pass some value on the event.
    An event also has a payload so it's an ideal scenario to add the data from the service on it so you can use it in the other TF's.
    In order to manage the deletion of the TF1, you can use the UI events on the composer: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_page_editor_adv.htm#CHDHHFDJ

  • What is the best practice for installing Yosemite?

    I am currently on OS X Mavericks version 10.9.5, MacBook Pro 13", 2.6 GHz Intel Core i5, 8 GB 1600 MHz DDR3.
    I am now downloading Yosemite 10.10.1, but since I've been reading all this negative feedback so far, I am having second thoughts about whether I should continue to install the upgrade or not.
    Any suggestions on the best practice for installing Yosemite? Or is it not yet time to upgrade since the platform is still premature?
    Thanks in advance.

    Check your apps are compatible with 10.10 - roaringapps.com
    http://www.etresoft.com/etrecheck can show what is running & installed - look for updates on the developer own sites.
    If you have many kernel extensions or startup items look for updates to them too
    Take a full bootable backup to another disk via Carbon Copy Cloner, Super Duper! or Disk Utility
    Disconnect the backup before you begin any install (ideally set it aside & leave it untouched in case you need to go back to 10.9)
    Personally I prefer a clean install when there are signs of multiple migrations (if you have upgraded several OS for a period of years). Setup Assistant/ Migration Assistant can import user data from a backup, but consider that Apps & 'other data' should be manually reinstalled from the latest versions.
    If you clean install (erase the HD before installation) then make sure you deauthorise iTunes & any other apps that are associated online (like find my Mac).
    Basically the steps you would take before selling a Mac…
    What to do before selling or giving away your Mac - Apple Support

  • What is the best recommendation for DNS LB for Lync 2013 Edge servers

    What is the best recommendation for DNS LB for Lync 2013 Edge servers? We have an F5 LB for the Edge and want to decide if we can go with DNS-based LB for the Edge servers.
    Anil MCC 2011,ITIL V3,MCSA 2003,MCTS 2010, My Blog : http://messagingschool.wordpress.com

    It would be better to use hardware load balancing (F5).
    If you choose to use DNS load balancing for a pool but still need to implement hardware load balancers for traffic such as HTTP traffic, the administration of the hardware load balancers is greatly simplified. For example, configuring the hardware load balancer will be simpler, as it will only manage the HTTP and HTTPS traffic, while all other protocols will be managed by DNS load balancing.
    Also for more info., you can check below links
    http://technet.microsoft.com/en-us/library/gg615011.aspx
    http://technet.microsoft.com/en-us/library/gg398634.aspx
    Please remember, if you see a post that helped you please click "Vote As Helpful" and if it answered your question, please click "Mark As Answer"
    Mai Ali | My blog: Technical | Twitter: Mai Ali

  • Best strategy for migrating GL 6 to DW CS4?

    First I gotta say that -- as a decade-long user of several Adobe products -- Adobe has really screwed over long-time users of Adobe GoLive. After a week of messing around with something that should be a slam dunk (considering the substantial amount of $ I've paid to Adobe over the years for multiple copies of GoLive 5, 6 and CS2), I can tell you Adobe's GL > DW migration utility ONLY works for sites created FROM SCRATCH in GL CS2 (they must have the web-content, web-data folder structure). This means that Adobe's migration utility only works for maybe 10% - 15% of the GoLive sites out there, and about 10% - 15% of Adobe's loyal GoLive customers, and it particularly screws over Adobe's longtime GoLive customers. Sweet! (Just like Adobe screwed over users of PhotoStyler -- which was a better image editor than Photoshop, BTW.) Obviously, I would walk away from Adobe and make sure I never ever paid them another cent, but the Republican-Democrat "free market economy" has made sure I don't have that option.
    So I've gotta make Dreamweaver work, which means I've gotta migrate several large sites (the biggest has 5,000 static pages and is about 2 gigs in size) from GoLive 6 (or GoLive CS2 with the older folder structure) to Dreamweaver CS4.
    Which brings me to my question -- what's the best strategy for doing this? Adobe's migration utility is a bad joke, so here's the alternative, real-world plan I've come up with. I'd appreciate any suggestions or comments...
    1) create copies of all my GL components in the content folders for my GL site (not in the GoLive components folder)
    2) replace the content of all GoLive components (in the GoLive components folder) with unique character strings, so that instead of a header with images and text, my old header would look something like xxxyyyzzz9
    3) create a new folder called astoni in the root of my hard drive. Copy all my GoLive web site content (HTML, images, SWF, etc.) into astoni in exactly the structure it was in with GL
    4) create a new Dreamweaver site by defining astoni as the local location for my site, astoni\images as the location for images, etc.
    5) use Dreamweaver and open the newly defined astoni site. Then open each of the GoLive components I copied into a content-level folder in astoni, and drag each into the Dreamweaver Assets/Library pane, in order to create library items just like my old GoLive components
    6) use Dreamweaver to Search & Replace the unique text strings like xxxyyyzzz9 with the content of my new DW library items
    7) refresh site view to make all the links hook up...
    Thanks for your help. Hope this discussion helps others too...

    Instead of ragging on people who are familiar with DW and Site Development, you should be gracious and accept the practical advice you've been given.  A "best strategy" would be to read some tutorials and learn how to work with HTML, CSS and server-side technologies. Without some basic code skills, you're going to find DW an uphill, if not impossible battle.
    Frankly, I don't have free time to hand-hold someone through the excruciating process of migrating a 5,000 page static site from GoLive  to DW. And I doubt anyone else in this forum has either.  We're not Adobe employees.  We don't get paid to participate here.  We are all product users JUST LIKE YOU.
    I'm sorry you're frustrated.  I'm also sorry for your clients. But the problem you have now isn't Adobe's fault. It's yours for not keeping up with server-side technologies or handing-off these huge static sites to more capable web developers.  I'm not saying you need to buy anyone's books, but they are good resources for people willing to learn new things.
    That said, maybe you should stick with GoLive.  The software doesn't have an expiration date on it and will continue working long into the future.  If you're happy using GL, keep using it to maintain your legacy sites. At the same time learn X/HTML, CSS & PHP or ASP.  Use DW CS4 for the creation of new projects.
    FREE Tutorial Links:
    HTML & CSS Tutorials - http://w3schools.com/
    From Tables to CSS Web Design Part 1 -
    http://www.adobe.com/devnet/dreamweaver/articles/table_to_css_pt1.html
    From Tables to CSS Web Design Part 2 -
    http://www.adobe.com/devnet/dreamweaver/articles/table_to_css_pt2.html
    Taking a Fireworks (or Photoshop) comp to a CSS based layout in DW -
    http://www.adobe.com/devnet/fireworks/articles/web_standards_layouts_pt1.html
    Creating your first website in DW CS4 -
    http://www.adobe.com/devnet/dreamweaver/articles/first_cs4_website_pt1.html
    Guidance on when to use DW Templates, Library Items and SSIs -
    http://www.adobe.com/devnet/dreamweaver/articles/ssi_lbi_template.html
    Best of luck,
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists
    www.alt-web.com/
    www.twitter.com/altweb
    www.alt-web.blogspot.com

  • Best strategy for burning dual layer DVD

    I've got a project that won't fit on a single layer DVD, so I've got it set up for dual layer and "best quality", which results in a 7.38 GB disk image. However, iDVD 8 warned me when burning the disk image that some utilities may not be able to burn a reliable dual layer copy and to use iDVD instead; does this include Disk Utility?
    I always use Disk Utility to make copies, iDVD took almost 7 hours to burn the disk image, I can't afford to wait that long for each copy.  So what's the best strategy for burning dual layer DVDs?

    Subsequent copies burn in far less time immediately after the first, provided you don't quit iDVD.
    If you know ahead of time you'll need more than a few copies, I'd recommend burning from a disc image to the desktop and reducing the burn speed (4x or lower). I prefer to use Roxio Toast myself, but others have had success with Apple's Disk Utility as well.

  • What are best practices for managing my iphone from both work and home computers?

    What are best practices for managing my iphone from both work and home computers?

    Sync iPod/iPad/iPhone with two computers
    Although it isn't possible to sync an Apple device with two different libraries it is possible to sync with the same logical library from multiple computers. Each library has an internal ID and when iTunes connects to your iPod/iPad/iPhone it compares the local ID with the one the device normally syncs with. If they are the same you can go ahead and sync...
    I have my library cloned to a small 1Tb USB drive which I can take between home & work. At either location I use SyncToy 2.1 to update the local copy with the external drive. Mac users should be able to find similar tools. I can open either of the local libraries or the one on the external drive and update the media content of my iPhone. The slight exception is Photos, which normally connects to a specific folder on a specific machine, although that can easily be remapped to the current library if you create a "Photos" folder inside the iTunes Media folder so that syncing the iTunes folders keeps this up to date as well. I periodically sweep my library for new files & orphans with iTunes Folder Watch just in case I make changes at one location but then overwrite the library with a newer copy from the other. Again, Mac users should be able to find similar tools.
    As long as your media is organised within an iTunes Music or iTunes Media folder, in turn held inside the main iTunes folder that has your library files (whether or not you let iTunes keep the media folder organised), each library can access items at the same relative path from the library folder, so the library can be at different drives/paths on different machines. This solution ensures I always have adequate backups of my library and I can update my devices whenever I can connect to the same build of iTunes.
    When working with an iPhone, earlier builds of iTunes would remove any file not physically present in the local library, even if there was an entry for it, making manual management practically redundant on the iPhone. This behaviour has been changed, but it will still only permit manual management with a library that has the correct internal ID. If you don't want to sync your library between machines on a regular basis, just copy the iTunes Library.itl file from the current "home" machine to any other you want to use, then clean out the library entries and import the local content you have on that box.
    tt2
