CarbonCopyClone strategy help

To start: I have two backups of the 500 GB drive in my MBP:
A Time Machine backup on a 500 GB partition of a bulky GPT-formatted 1 TB WD disk
2 gray install CDs
I just bought a 750 GB GoFlex Pro pocket drive (whether it will work or not ... the brand is not the question).
I downloaded CCC, but have little experience with it.
I propose this idea, and invite comments:
Create 2 partitions, 500 GB and 250 GB, both Apple-formatted.  Set up the 500 GB partition as a CCC target, and let CCC *only* manage that as a *third* backup.  Use the 250 GB partition as general external-disk storage.
Please advise.

steve359 wrote:
I did not like the one-backup approach, but the initial purchase price of the MBP required some careful consideration of additional purchases and the 1 TB WD drive already was in the house.
You can use this, and since I understand you want to maximize your massive storage, leave the CCC settings at the defaults on the 1 TB drive and it will archive your deleted files between clone updates.
See how that works for you. I prefer having only a straight clone image; that way, if I delete something it's really gone, off the clone/backup as well.
Return this new unit for a self-powered drive.  That makes sense, especially given the "USB device has been disabled due to power draw" issue.
Exactly. If your USB port doesn't supply enough power and you suddenly need the clone....
Keep doing TM on a regular basis for "mostly-files" level recovery.  It can stay on its own existing place.
Right, use another drive for this than the clone.
On the new self-powered unit, set up CCC and leave it at the default behavior (a full clone first, then incremental updates), which I assume is still bootable in case of disaster.  Let it back up perhaps less often than TM, as a bootable "mostly current" backup.
Yes, you can update the clone less often than TM, but be forewarned: TM traps your files; restoring from it requires a working internal drive and/or boot disks.
A clone is self-contained; it has fewer things to rely upon in order to gain access to your files.
With a clone, if push comes to shove, you can always install MacDrive on any PC (it reads HFS+) and access your files.
How you set up your backups is up to you; everyone's needs are different. Just maintain two copies at all times.
My admin password does not change much (perhaps another error).
You don't need to change your admin password.
When you clone the first time, CCC requires your admin password because it's cloning root-level files. When a scheduled clone update runs, it doesn't need it as often, since it's mostly updating changes to user files.
So I think it's "smart" in the sense that it tries not to bother you with entering the admin password every day for clone updates.
Do not lose my gray install CDs.
1: First get them out of the cheap plastic film sleeves Apple provides; they destroy CDs. Put your disks in a new jewel case and keep them in a very safe place away from heat.
2: Keep the bottoms absolutely clean, especially since you have the Snow Leopard gray disks; those are in very hot demand at the moment.
3: Make copies of your Snow Leopard disks here.
http://www.walterjessen.com/make-a-bootable-backup-snow-leopard-install-disc/
A "correct" or "helpful" answer will come, but I would like confirmation of my interpretation to assign only once.
thank you

Similar Messages

  • Purchase Requisition Release Strategy ---- help required urgently.

    Hello all,
I have one severe problem in the PR release strategy. Whenever we create a PR, the release is triggered, but the name of the processor who will release this PR is not determined. I have done PO13, where the HR positions are maintained as per the release levels, and PA30, where the SAP user IDs of these processors are given. But still the processor name does not appear. Is there any other transaction where we have to maintain certain data? Kindly help me, as this is an urgent issue.
I will be very glad and give points to all of you.
    Thanks and Regards,
    Umakanth.

    hi,
Check your release codes and their assignments in the release strategy.
Also check whether the characteristic values of the PR you are creating actually match those defined for the release strategy, so that it is liable to trigger.
    Regards
    Priyanka.P

  • Partitioning Strategy Help

I am specifically looking for recommendations on why/when to partition both my data and indexes. We are trying to eke out more performance from our 350 GB database. We have the data and indexes partitioned using the same partitioning ranges (6 partitions) and are experimenting with manually placing the files on different drives. Is there a direct benefit to partitioning the indexes? The tables are on name, address, and entity.

    You are asking for strategy. In my opinion, there are two basic strategies you can pursue. Sometimes you will need to use one strategy for some tables and the other for other tables.
    1) Partition pruning strategy: You try to partition on a key that will eliminate whole partitions when the data is selected or updated. For example, your data spans an entire year, but almost all processing is done on the current month. If you partition on the date column, the optimizer will determine that it need only process the partition for the current month.
    2) Parallel query strategy: Sometimes you have tables that are quite large and almost always have to be processed in their entirety. These can be partitioned on an evenly distributed key so that the partitions can efficiently be scanned by parallel query processors and the overall result can be returned much quicker, even though the whole table still must be queried. Of course, this should only be configured on a host machine with enough processors and memory to handle the PQ processing.
    I hope this helps,
    Tim
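Tim's partition-pruning strategy can be illustrated with a toy sketch. This is plain Python, not Oracle DDL, and all names are invented for illustration: rows are routed into per-month "partitions" by their date key, so a current-month query scans one partition out of twelve instead of the whole table.

```python
from datetime import date

# Toy model of range partitioning: one bucket ("partition") per month.
partitions = {m: [] for m in range(1, 13)}

def insert(row):
    # Route each row to a partition by the partition key (the date column).
    partitions[row["order_date"].month].append(row)

def query_current_month(today):
    # Partition pruning: only the current month's bucket is scanned;
    # the other eleven partitions are never touched.
    return partitions[today.month]

insert({"id": 1, "order_date": date(2024, 5, 2)})
insert({"id": 2, "order_date": date(2024, 6, 9)})
rows = query_current_month(date(2024, 6, 15))  # scans only partition 6
```

A real optimizer performs the same elimination automatically whenever the WHERE clause constrains the partition key.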

  • Purchase Req release strategy help

PRs generated from MRP are being blocked; however, when I convert a planned order from MRP into a PR, it is automatically released. I want MRP-generated reqs to also be released. I tried creating a new class, characteristic, and strategy group but ran into problems when assigning the class to the group. Basically we just want all reqs created from MRP to be released and not go through workflow. Any ideas? My initial thought was to use the creation indicator EBAN-ESTKZ with the values B (MRP) and U (converted planned orders).

hi Ryan
You can use only one release class in a PR release strategy.
The error occurs because you are trying to use another class. Add the characteristics to the existing class and configure the release strategy there.
Vishal...

  • Purchase Requisition Release Strategy help

    SAP Gurus,
    I have set up PR release strategies for item level and it is not working. Our setup is as follows:
    Document type NB with "Overall release of purchase requisitions" unchecked
3 characteristics defined: one each for line item total, cost center, and account assignment category.
1 class with all three characteristics assigned
1 release group with "Overall release of purchase requisitions" unchecked
104 different release codes
127 different release strategies with varying combinations of cost centers and dollar amounts.
Executed CL24N with all release strategies.
Now when I enter a PR with amounts and cost centers that should trigger the release strategy, it is not triggering.
I executed CL30N with different parameters and none of them register.
The only clue I have as to what could be wrong is that when running OMGQCK, it reports that the release group does not exist, but it does. Message 620401/2006 referencing note 365604 lists the problem I am experiencing, but none of the suggested steps have corrected it.
I would appreciate any suggestions on what I can try to get this release strategy working.
    Thanks,
    Rachel

    hi,
Check your release codes and their assignments in the release strategy.
Also check whether the characteristic values of the PR you are creating actually match those defined for the release strategy, so that it is liable to trigger.
    Regards
    Priyanka.P

  • Sequence edit strategy - help finding articles, resources discussing approaches...

    Hi All...
    I've just finished editing my first video documentary...
    Just a 5min. entry for a local contest.
    Adobe Premiere - worked Great !
BUT - I'm confused by the differences and resulting consequences (for future management, ripple, roll, etc.) between:
a) Nesting,  b) Grouping,  c) Linking artifacts on the Premiere timeline....
For my documentary I:
1) Grouped together video and related audio.... though "Linking" seemed to achieve the same goal - what's the difference?
2) Did NOT "Nest" any of my content - instead I would build transitions, audio, titles, videos, etc. for a given scene in one "Sequence" container timeline.
    Once I was happy with this, I would save and then just drag/drop that sequence onto another "Root"-level, or main, sequence that I would use
    to gather all my scenes together... It's on this "Main" top-level sequence that I would add my final narration and background music.
   What's the point of "Nesting" if you can just drag/drop your content to other sequences and it appears as a single video/audio channel?
Can anyone here point me to a good article or resource discussing some higher-level strategies for organizing sequences etc. for larger projects...
Pros/Cons..
    Thanks very much...

    Thanks very much my friend...
* Is there any danger to the approach I took?
* What advantage/difference is there between grouping video/audio... instead of "linking"?
Re: Nesting... the way you describe it, it sounds like the ends are the same...
You say most editors work with a single sequence, but when that sequence gets busy,
they grab logical sections/"scenes" and "Nest" them....
** The act of doing that would replace all that busy complexity with a single video/audio channel, linked to a newly created sequence container holding everything that was selected for the "nest" operation.
You still end up with two sequence containers... a top-level one... and linked "children"?
Sorry if this is too newbie of a question...
But I don't think timeline/sequence nesting/management strategies are really discussed in depth anywhere in the online help....
I'd hate to get far into a more complex project based on the poor workflow I developed for my first work.... only to learn that there are problems or
issues with my approach.   Perhaps crashes or limits w.r.t. editing?
    Thanks

  • Disk crash, need recovery strategy help.

We had a two-drive failure on our AS/400 and had to reinstall the operating system. Can we install A7.3 from scratch, then copy in our libraries from our latest backup, and will JDE function properly? Any ideas would help!

    Well, it may be that I am not correctly understanding your disaster situation. I am assuming that you have the same IBM CPU and same IBM hardware serial number. It sounds like you had to replace a disk drive and a tape drive, but not the actual CPU itself. Now it does sound like your operating system backup was eaten up by the tape drive and lost, and thus you are saying you had to reinstall the IBM operating system? Would you not have a slightly older backup tape that you could recover from? Even if you did not and had to reinstall the operating system, JDE World is fairly independent of the OS/400 itself (other than needing to run on it). So I am thinking that you do NOT need to reinstall JD Edwards. You should be able to restore the JD Edwards libraries, though you would need to have the QGPL library as part of that restore process, and run okay.
If my assumptions are wrong, then what I said may not apply. Certainly if you restore onto a new CPU, with a different serial number, you will have to contact JDE support to get a new license key. When doing a hot site test, I restore the operating system, then restore all the user libraries, get new license keys, and I am up and running. Note that in a hot site test I did not have to do a reinstall of JDE. But certainly the difference may be that in a hot site test, I am restoring the operating system from backup tape (so not reinstalling OS/400). So I have not personally been involved in the exact situation that you may be facing (I doubt many folks have such experience).
    You probably want to be in touch with JDE support and look to them for guidance on what to do, giving them your exact restore situation that you are facing.
Hope this helps a little bit. I'm a bit surprised that these days a disk failure would cause a system crash (I have not seen that since the 1980s) - most systems have disk mirroring or RAID of some kind now, so the system can still run while waiting for the disk drive to be replaced (performance may suffer, but better that than losing the whole system).
    Good luck.
    John Dickey

  • Need Workflow Strategy Help

    I'm hoping you all can give a noob some workflow tips on a developing project, so I don't go up blind alleys.
    I am recording a performance--a play including 12 songs.  I am planning to give a CD of the songs to the cast and crew as a gift.  I am recording from a PreSonus 24.4.2 board via Firewire.
I have used PreSonus Capture to record one performance, using about 14 channels.  I plan to record two or three more performances, then audition each song for the best version among the three or four tries and put all the best choices on the CD.
I will record each show continuously, with lots of dialog and other material not to be included on the final CD.  (I'm not the sound guy, so I don't want to interrupt his work.)  Each take, therefore, is about 1.5 hours of uninterrupted audio: start at the beginning, stop at the end.  Last night's performance resulted in a 25 GB folder.
    Here's my plan:
    1.  Import each performance into Logic
    2.  Audition all four performances for the best of each song.
    3.  For each song, import the best take to a new Logic file, excluding all dialog and other unwanted material
    4.  Work on the new Logic file for mixing.
    I'm battling my way through the manual and the Exploring Logic Pro/Demo to bring me up to speed.  In the meantime, am I on the right track?  Is there a simpler way?
I also plan to search these boards for helpful secondary reading, but if you guys have suggestions, I'd be grateful.
    Finally, my work in FCP suggests I should use a backup disk for my source files, leaving my main computer (recent thunderbolt iMac) for processing.  Is this advisable, or does it not matter?
    Thanks.

    Richard
If you have 14 tracks (14 audio files) per show and you record three shows, then I would just stick with one Logic Project. It saves you a lot of opening and closing, and it is easier to compare takes.
Create one Logic Project, create the 14 audio tracks, and import the 14 audio files from show #1 onto the 14 tracks; then import the 14 audio files from show #2 onto the same 14 tracks after the first batch of audio files, and so on. I assume the track assignment is the same, so this method of dealing with one master Project requires setting the balance between the 14 tracks only once. Set the levels for one show and you're done for the other shows too. The Project gets fairly long (1.5 h x 3 = 4.5 h), but that doesn't matter. Use markers and the zoom in/out tool to navigate quickly.
Now cut and truncate the audio regions to isolate the songs. Create a track for the Master Channel and use Track Automation to create some nice fade-ins and fade-outs for the songs. The automation is similar to FCP; instead of keyframes, Logic calls them nodes. You can also add plugins on the Master Channel to make some quick mastering adjustments (EQ, compressor, etc.).
Once you've chosen the versions of the songs and done the cleanup and leveling, just export (bounce) each song as an AIFF audio file. Put the AIFF files into iTunes and burn the CDs from there.
One thing to consider: for audio CDs as the final destination, it would be good to have the Logic Project already at 44.1 kHz. However, if your output from the show is 48 kHz, then set the Logic Project to 48 kHz (before importing any files) and do the conversion when you export from Logic, or let iTunes do the transcoding when you burn the CD.
If you're looking for Logic material to read up on this fine application, I wrote several manuals over the years that I made available for free on my website http://DingDingMusic.com/Manuals/. Scroll down on the page; there is a link for the "Free Manuals". As an FCP user you might find some other useful material on the website.
Putting the audio files on an external drive is only necessary if you don't have enough space on your internal drive. You are dealing with straight audio streaming and not complex sampler instruments, so I guess your machine can handle that. Look at the navigation bar; there is an indicator of how much you burden the CPU and the drive.
    Hope that helps 
    Edgar Rothermich
    http://DingDingMusic.com/Manuals/ 'I may receive some form of compensation, financial or otherwise, from my recommendation or link.'

  • Workset Strategy Help

    Hi, I have a workset with 5 reports.  This workset is assigned to 4 Roles.  Along comes a new requirement to add an additional report, but only 2 of the 4 assigned Roles should get the new report.
1. Should I copy the original workset, add the new report, and assign the copy to the roles that require the new report?
(Then the 2 similar worksets would probably have the same name, with 1 report as the only difference.)
2. Or just add the new report into the Roles that require it?
(If I go into the Roles and view the assigned worksets, I can individually add the new report without touching the worksets. You can do so and make it look like the new report visually belongs to the workset, even though it doesn't. The new report shows up in the Role as "added".)
    How would you do it?  Any opinions are greatly appreciated.

    Hi Kenneth,
    I think this depends heavily on the way you have done it up until now. If you have strictly been using the delta-link concept (by assigning Worksets to Roles) then I think option 2 will be a little confusing (since all of a sudden more than what is assigned to the workset is displayed in the role).
    If you are working more with roles (e.g. PCD admins look at roles first) I think option 2 would be the fastest.
    Regards,
    Holger.

  • How DB adapter works when polling strategy is "remove row selected"?

How does the DB adapter work when the polling strategy is "delete the rows that were read"?
I want to understand how the database adapter identifies which records to pick up and delete. This polling strategy polls the table for changes and removes records after new records are inserted or existing ones are changed.
For example:
There is a table EMP with 100 records. I deploy a BPEL process with a DB adapter polling table EMP for changes, so that if any change happens to records in the table, those records will be picked up. I use the polling strategy "delete the rows that were read".
Now I insert 9 new records and update one existing record. These 10 records should be picked up and then deleted by the DB adapter. Among the 109 records, how does the DB adapter identify these 10 when it polls? How does it differentiate old records from new ones when there is no flag field or sequence ID to identify the new or modified records?
    Please let me know.
    Why i want to know this?
Sometimes a customer may not allow the BPEL process to make any modifications to the source table or source database. In this case the options provided by the database adapter wizard are not useful.
If there were a mechanism to identify new or modified records without a FLAG field or sequence table, it would be possible to have an option that only reads the changes from the table rather than deleting the records after reading, which would help meet the above requirement.
    Please let me know if there is any way to do this.
    thanks
    Arureddy

Once a record has been read, it is deleted. Therefore, you can update rows in this table as many times as you like before they are read. Once a row is read, there will be nothing left to update, as it will have been deleted.
If you don't want to use a sequence table, you can use a sequence file. You can only use this functionality if the key you are using increments sequentially, e.g. 1, 2, 3, 4. If your key is random, e.g. 2, 1, 3, 5, 7, 4, then your options are delete or a processing flag.
The other option is to create a trigger that inserts a key into a polling table whenever inserts or updates occur. You can then use the delete strategy on that polling table, which is the most desirable as it uses database locking.
    cheers
    James
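The delete-after-read behavior James describes can be sketched in a few lines. This is an illustrative Python/SQLite model (table and column names are invented, not the adapter's actual implementation): because every row handed out is deleted in the same transaction in which it was read, whatever remains in the polling table is by definition unprocessed, so no flag field or sequence is needed.

```python
import sqlite3

# In-memory stand-in for a trigger-fed polling table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp_poll (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO emp_poll VALUES (?, ?)",
                 [(1, "new hire"), (2, "salary update")])
conn.commit()

def poll_once(conn):
    # Read everything currently in the table, then delete it within the
    # same transaction, so each row is handed out exactly once.
    cur = conn.cursor()
    rows = cur.execute("SELECT id, payload FROM emp_poll ORDER BY id").fetchall()
    cur.execute("DELETE FROM emp_poll")
    conn.commit()
    return rows

first = poll_once(conn)   # picks up both pending rows
second = poll_once(conn)  # table is now empty: nothing is reprocessed
```

The same shape applies whether the rows land in the polling table directly or via the trigger James suggests.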

  • Batch command help ?

Hi guys, I have a batch file called via an Ant build script. The batch file has 10 different functions. Now I have a requirement to break it into smaller parts.
For example, the functionality should now be broken into three parts:
Part one:
1) employee data processing
2) employee working days
3) employee salary
4) employee promotion history
Part two:
1) employee data processing
2) employee working days
3) employee salary
Part three:
1) employee data processing
2) employee working days
4) employee promotion history
The problem is that the absence of an OR statement in batch files will force me to write these
commands in separate batch files. (I was trying to put everything into one so that I don't have to repeat statements.) Basically I was trying to have the Ant target execute the batch file with a parameter, which would then decide which functionality to execute, but the absence of OR is causing me some headache. I'm not able to decide the best strategy.

Cut it out already.
We've been nice considering this is a Java forum and not a
Windows batch file forum.
Your problem may be that you just don't know enough about
batch files.
    http://www.robvanderwoude.com/index.html
    http://www.google.com/search?hl=en&q=batch+files+if

  • Digital Cinema Desktop Preview?

    Hi there
    Just wondering if it's possible to run Color with 2 screens using one screen as the UI+Scopes and the second screen as full screen Digi Cinema Desktop Preview?
    (I see the standard Video out options as FCP has, but they are all ghosted and can't be selected...)
    Would an extra PCIex Graphics card allow it, or would I need to get an AJA Kona or similar ?
    Thnx!

The configuration you are suggesting is "the way it used to be" for non-Kona/Blackmagic Final Touch users. The scopes were embedded in the primary UI screen where the "Curves" now are, for example in the Primary Room. The second screen (DVI #2 output on your GPU) was full-screen Preview/Program out. Why it was switched around...??? I understand that the 'curves' adjustment strategy helps those who don't have control panels, in that a 'knack' for them can evolve, usually for tablet/stylus drag artists. Never use them, myself.
It's lovely having three screens, the third being an approved grade-level, calibrated monitor hanging off the end of a real SDI video card, with a full-raster, full-time waveform/vector/gamut-alarm scope. Not much guesswork after that. Takes some investment, though. If you do have all the toys (outboard scopes, panels, and multi-screens), the amount of wasted space in the UI becomes obvious.
    As a side-note, I would only use the curves if trying to correct, or emulate, badly damaged film ( eg. severe dye-layer fading). Otherwise you are messing with gamma-point, more than correction. I'm open to debate on that one, though. Seems like most video, especially and particularly the pro-sumer highly compressed formats, don't have the jam to withstand much serious value reassignment.
    COLOR will not boot with more than one PCIe card installed/in use. I have heard of users working around that by unplugging the monitor from the second card. Kinda defeats the purpose, though...
    jPo

  • SAP WM -Stock Placement

    Hi
Two queries.
The scenario: on receipt of stock, we need to first fill the fixed bin in the fixed storage type, and once capacity in the fixed bin reaches its maximum, the reserve bins in the reserve storage types are to be filled. Only one fixed bin is maintained per material.
(1) What stock placement strategies need to be maintained for the fixed and reserve storage types?
(2) I am trying stock placement strategy "F" (fixed bin) for the fixed storage type and "K" (near picking bin) for the reserve storage types, with a storage type search sequence of first fixed, then reserve. For the near-picking-bin strategy activation, all configuration is done except "Search width per level" and "Assign shelves to the aisles".
Please tell me in detail the impact of not using these two steps: (a) Search width per level, (b) Assign shelves to the aisles. Is it mandatory to use these two steps for the near picking bin strategy?
Help me solve these queries.

    Hi Kumar,
I suppose you have maintained the capacity check for storage type 231.
Based on the SUTs and your strategy P, the system will allocate the numbers.
Usually, if there are two SUTs in a bin belonging to storage type 231, then the bin will have 001-02-A/2;
the special characters get assigned after the placement.
Hope this helps!

  • Planned order scheduling in mrp

    hi
I am trying to run MRP at my client's site and am
facing a problem with planned order dates.
I chose requirement type BSF and strategy group 11.
In MD61 the PIR was given in the month of June 2008 (monthly).
After the MRP run, in MD04 I am getting a planned order with
exception message 64, "order date in past".
That means I am getting the planned order START DATE AND END DATE IN THE MONTH OF JUNE,
whereas I need my planned order start date in MAY and
the end date must be the 1st of June,
since the PIR I gave is at the start of my June month.
I checked the scheduling parameters, which say backward scheduling, and I chose planning mode 3 and lead time scheduling.
Please suggest which settings affect the planned order start and end dates.
    regards
    sasikanth

    hi
I selected a scheduling margin key with 0 floats (key 000)
and chose LEAD TIME SCHEDULING (in the MRP control parameters: scheduling = 2 at MD02).
How does the reduction strategy help in this case? Since I am doing lead time scheduling, the system has to consider only the routing times, right?
Please suggest how to eliminate exception message 64.
    regards
    sasikanth 
    message 64 diagnosis enclosed
    64: Production finish after order finish                                                                  (P4)
    Message no. MD427
    Diagnosis
    The production finish date calculated using the routing lies after the order finish date that was calculated using the in-house processing time in the material master record.
    As the system uses the order finish date in the planning run, this situation may lead to material shortages if production is not finished until the production finish date calculated by the system.
    System Response
    This exception is displayed in the MRP list and in the current stock/requirements list.
    Procedure
    Check the times in the material master record and in the routing.
    Check whether the order has to be rescheduled in.

  • Best location for application files

In our current WebLogic environment (WLS 4.5.1), we have separate document roots
for each of the 12 separate applications being served up in separate WebLogic instances.
We are migrating to WLS 6.1, and I am not sure if I should continue this practice.
All applications will be under the same domain, and some common utility .jar and
.war files may be targeted to several servers in the domain (email servlets, search
servlets, etc.). However, I am not sure I want all these directories or .jar/.war
files to be under the same "applications" folder. If I put them under /apps/Java/$app_name/
I can deploy them fine from the console. If I do this, will I lose the auto_deploy
feature? Are there any inherent dangers to doing this? Mostly, I want to keep app1's
files in a different location than app2's.

Please see my inline replies:
Mark wrote:
If I put them under /apps/Java/$app_name/ I can deploy them fine from the console. If I do this, will I lose the auto_deploy feature?
Yes. If you keep your applications outside of the applications directory, they will not
be auto-deployed. It is necessary to keep the applications in the
/config/domain_name/applications directory if you want the auto-deploy feature to
work. Also, we recommend using auto-deployment only during development and turning the
feature off in the production environment.
Is there any inherent danger to doing this? Mostly, I want to keep app1's files in a different location than app2's.
Maybe this strategy helps you:
    You can have all the applications in the applications directory in the exploded dir format. In
    this case, they will be auto-deployed on the Admin server. Using the console, you can target
    them to different servers in the domain (analogous to different WLS 451 server instances).
    Now, if you change a particular file (jsp) in the webApp, then you will need to just touch the
    REDEPLOY file. The Admin server will check that the timestamp for this file has changed and
    redeploy the application to the concerned target servers.
    So, by maintaining an exploded directory structure for each webApp in the applications
    directory, you will be able to keep the files separate and still achieve redeployment.
    More details can be found at:
    http://e-docs.bea.com/wls/docs61////adminguide/appman.html#1029683
    Also, as everything is a webApp now, each webApp has its own documentRoot. So if you have 12
    webApps, each has a different docRoot automatically.
    hope this helps.
    Mihir
