Snapshot and backup

Hello BW Experts,
Current Scenario:
A snapshot of our production BW system is taken every night. While it runs, any jobs/loads active at the time have to be stopped.
Secondly, a backup is taken every weekend, and jobs/loads are stopped during that window as well.
I would appreciate it if you could share your experiences:
1) Is a snapshot necessary every day?
2) What is the alternative to a daily snapshot?
3) What is the best procedure for a daily snapshot?
4) Is there a procedure that avoids stopping the loads/jobs during the snapshot?
5) Is a backup necessary every weekend?
6) What is the alternative to a weekly backup?
7) What is the best procedure for a weekly backup?
8) Is there a procedure that avoids stopping the loads/jobs during the backup?
9) Are there any docs/papers/links/notes on snapshots and backups for a BW production system?
10) What is the typical time taken for a snapshot?
11) What is the typical time taken for a backup?
If you have any docs, please mail them to [email protected]
Thanks,
BWer

Thanks for the valuable info, Mimosa.
In our company, the 'snapshot' is a copy of the production box to another box on the SAN, made disk to disk; the snapshot disk is on RAID 0 and the production disk is on RAID 5. This is done every night.
The 'backup' is a copy to tape, done every weekend.
It's interesting that you run the 'split mirror on-line backup' daily while the system stays available to all users and jobs. Do you have any details of this backup? What is the procedure called, and at what level is the copy made: the database level, the network/storage level, or the SAP level?
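For context, a split-mirror online backup is typically driven at the database level: the database is put into backup mode, the storage mirror is split, and the split copy is then written to tape while production keeps running. Below is a minimal sketch of that sequence, assuming an Oracle-based BW system (10g or later) running in ARCHIVELOG mode; the split_mirror command is a hypothetical placeholder for the storage vendor's actual mirror-split tool.

#!/bin/sh
# Sketch of a database-level split-mirror online backup (Oracle assumed).
# 'split_mirror' is a hypothetical stand-in for the vendor's mirror-split command.

sqlplus -s "/ as sysdba" <<EOF
ALTER DATABASE BEGIN BACKUP;        -- put datafiles into online-backup mode
EOF

split_mirror --volume /oracle/PRD   # hypothetical: split the SAN mirror here

sqlplus -s "/ as sysdba" <<EOF
ALTER DATABASE END BACKUP;          -- resume normal operation
ALTER SYSTEM ARCHIVE LOG CURRENT;   -- archive the current redo log for recovery
EOF

# The split copy can now be mounted on another host and written to tape
# without stopping jobs/loads on production.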
Suggestions appreciated.
Thanks,
BWer

Similar Messages

  • Snapshots and Backups

    Hi
    In the Beta version there was the ability to create photo binders as a way of archiving. Is there any current way to do something similar to this in v1? If not, are there plans?
    As for snapshots, are these recorded in the DNG files or the XMP files? I.e., if I back up an image and remove it from the library, are the snapshot settings all saved, or are they gone forever?
    thanks
    dp

    Part one: no, there isn't, and yes, binder-type systems may be brought back.
    Part two: I don't know, but you could test it and let us know; it would be useful.

  • Hyper-V (2012 R2) Backup with Checkpoints (Snapshots) and Export - What to Consider?

    Good evening,
    Today I noticed that a VM snapshot (excuse me, a checkpoint) can be exported with Windows Server 2012 R2. The subsequent import resulted in a functional VM in saved state.
    Question: what should be considered when backing up to a deduplicated drive with the following PowerShell commands?
    # Back up running VMs on a Hyper-V 2012 R2 host
    # (p) by steini'14
    $rootPath = "I:\VMBackup"
    # Create the backup path of the day
    $path = New-Item -ItemType Directory -Path "$rootPath\$(Get-Date -f yy_MM_dd_HH-mm)"
    # Snapshot, export, and "merge" the snapshot/checkpoint for each VM on this host.
    foreach ($vm in Get-VM) {
        Write-Host "Start: $($vm.Name)"
        $snap = $vm | Checkpoint-VM -SnapshotName "BackupJob $(Get-Date -f yy_MM_dd_HH-mm)" -Passthru
        $snap | Export-VMSnapshot -Path $path.FullName
        $snap | Remove-VMSnapshot   # removing the checkpoint merges it back into the parent VHD
    }
    # Deduplicate the backup volume once all VMs are exported
    Start-DedupJob -Volume $path.Root -Type Optimization -Wait
    # PrepareDisk (one-time setup of the backup volume):
    # Add-WindowsFeature -Name FS-Data-Deduplication
    # Enable-DedupVolume I:
    # Set-DedupVolume I: -MinimumFileAgeDays 0
    # Start-DedupJob -Volume I: -Type Optimization -Wait

    Hi,
    I am not familiar with PowerShell.
    #create Backup Path of the Day
    $path = new-item -ItemType Directory -Path "$rootPath\$(Get-Date -f yy_MM_dd_HH-mm)"
    I think this will create a new folder to store the exported image when the script starts to run.
    The disk space needed to store these images is also a factor you should think about.
    Best Regards
    Elton Ji

  • Backing up SQL + Exchange with both snapshots and regular backups

    Hi all,
    I have a few SQL and Exchange (2010 DAG) servers, all of them physically connected to shared storage.
    Until now they were backed up with NetApp application-aware snapshots, and those snapshots were backed up off-site with NetBackup.
    Now we have replaced NetApp with HP 3PAR and take application-aware snapshots for Exchange and SQL, BUT we cannot back up those HP snapshots the way we did with NetApp.
    Is it OK to back up the databases (Exchange and SQL) with two separate backup methods? Will there be an issue with log truncation if both the application-aware snapshot and the NetBackup job truncate the logs?

    Hi,
    Do you mean that you want to back up the databases (Exchange and SQL) with an HP 3PAR snapshot that was created by a NetApp-aware snapshot? If so, the issue is related to the third-party software, and I suggest you ask the vendor of that software for a better and more accurate answer to the question.
    Regards,
    Mandy

  • How do you stop Mountain Lion from making endless copies and backups of everything?!

    I already have a similar topic on this from when I needed to figure out how to stop Mountain Lion from making endless copies of my TextEdit files, and I was going to try the disabling method someone in that thread suggested, but I can't find where the Versions backups are being stored.
    Anyway, tonight I came across a new problem: when I went into "About This Mac" and looked at my Storage, I had almost 1 GB of data in the "Backups" section. I happened to have Finder open when I entered Time Machine from the status bar, and I had dozens of copies of my desktop. Thing is, I don't have my Time Machine hard drive connected and I don't have a Time Capsule router/hard drive. And don't give me this garbage about how "it's not real backups, it's just a snapshot" or whatever, because the ONLY things stored on my computer that I have altered since I got Mountain Lion are two TextEdit files and the Finder desktop (and yes, I have checked almost all the programs I use, and of the things that Time Machine will open in, those are the only things that have backups listed that I can access), and almost 1 GB is being taken up completely needlessly by the computer backing up things onto itself. Obviously these are more than just snapshots and are obviously the real files, because after throwing EVERYTHING on my desktop into a folder, that folder is only 3.8 MB in size, so that almost 1 GB in backups is coming from SOMEWHERE.
    The solution I alluded to in my first sentence also depends on going into Terminal to make each individual app stop auto-saving Versions; however, since it's my desktop being backed up, there's no app I can tell Mountain Lion to stop doing that for.
    Can anyone help me figure out how to make it stop backing up? Seriously, almost 1 GB is being totally wasted on a set of desktop items that is 3.8 MB in size, one TextEdit file that is 250 KB, and another TextEdit file that is 41 KB. Who at Apple would possibly have thought this endless backing up of files onto the main hard drive was a good idea? So if, in a year, I did nothing but go on the internet and save one picture to one folder on my desktop, would I have 200 GB of desktop backups that the computer saved onto itself? Are you kidding me? Or, failing that, can anyone tell me how to find this damned backup file and delete it? I'm not an idiot; I know how to plug in a hard drive or flash drive and back up my information. I don't need my computer making hundreds of copies of every single file I change slightly onto itself, which not only needlessly eats up its own space, but if the hard drive died, all those "backups" would be gone too, because it wrote the backups to itself.
    I am so frustrated by this totally ridiculous and wasteful "feature"; it does nothing but waste hard drive space with its hundreds of copies of everything you alter slightly. And I am absolutely not starting any video editing projects until I figure out how to turn this off. If around 4 MB of total text and desktop backups adds up to almost 1 GB in a few weeks, what on earth is a video editing project going to end up at in size? Am I the only one completely flustered by this ridiculousness? Is there any point in writing to Apple to complain, or are they just going to say "No, we're brilliant, we think this is an amazing feature, so you just have to deal with it, and when your hard drive fills up, too bad, jerk!"? Help! Thanks!

    Well, as I pointed out in my post, "Sadly, my TextEdit documents are still all there; every single time I opened them and made a small change, there is another copy, stretching all the way back to the day I installed Mountain Lion." So if you can explain to me how July 26 - Aug 16 is a week, I will concede that it is a good idea. Short of that, no: I have been capable of making my own backups since I was 16 years old (16 years ago). I don't need my computer endlessly doing it for me in a way I am completely unable to shut off. That's what I have external hard drives, flash drives, a DVD burner, Dropbox, e-mail (as in emailing copies to myself) and iCloud for. If I am doing something and I don't have access to a single one of those options, I don't know why I brought my computer with me in the first place. And I don't see how you can say it's an illusory gain in disk space: hundreds of copies of text documents, GarageBand files, Final Cut files, Motion files and files from basically any program I can make incremental changes to all need to be stored somehow. Again, if you can explain to me how July 26 - Aug 16 is a week, I will agree with you that this is a good thing, because IF it only saved for a week, it wouldn't be an issue. Sadly, no matter how many links you post, I still have TextEdit backups stretching from today back to July 26. They're not going away, and I have no reason to believe any other file I edit is going to go away either, since they haven't been. They're all there eating up HD space, and this imaginary "week" limit is simply not coming into effect. I hate it, and there needs to be a simple way to shut it off. I didn't spend hundreds of dollars on external drives just for my main hard drive to be filled up with hundreds of useless, undeletable backups of stuff I already have backed up.
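    For anyone who wants the actual switch: on OS X 10.7 and 10.8, Time Machine's local snapshots (the "Backups" category in About This Mac) can be turned off from Terminal with tmutil. A sketch of the standard commands follows; note that per-app Versions auto-saving is a separate mechanism and is not affected by this.

    # Disable Time Machine local snapshots (OS X 10.7/10.8; admin rights required).
    # Disabling also deletes the existing local snapshots, freeing the space
    # counted under "Backups" in About This Mac.
    sudo tmutil disablelocal

    # Re-enable local snapshots later, if desired:
    sudo tmutil enablelocal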

  • What is 'Other' and 'Backups' on my HD?  How can I delete them to create more space?

    Other and Backups take up the majority (235 GB) of my 250 GB HD and leave me with little room for anything else. I have moved iTunes, iMovie and iPhoto to an external hard drive. What are they, and can I delete them to create more space? Please help.

    "Backups" are local snapshots created by Time Machine. You don't need to delete them; that happens automatically when necessary. Consider them as equivalent to free space, which is in fact how the Finder reports them.
    As for the "Other" category, use a tool such as OmniDiskSweeper to explore your volume and find out what's taking up the space.

  • PX Feature clarification (Snapshots) and Crypto Ransomware

    Hello,
    I am trying to better understand the "Snapshots" feature of the Lenovo PX2 series NAS devices.  I currently have a px2-300d (with latest firmware).  I have inquired over the phone, but the person I spoke with did not provide a definitive answer.
    I would like my PC workstation backups to be protected from Cryptolocker-like malware; however, we do not want to use a cloud-based backup system (which does protect against crypto ransomware). As you may know, Cryptolocker and similar malware can encrypt, or corrupt, all files accessible from the PC/workstation, including network backup locations containing backups.
    I would like to know if the "Snapshots" feature of the PX2 NAS devices would provide adequate protection against this.  Additionally, I would like instructions for properly setting up that feature to provide the necessary protection.  If the "Snapshots" feature does not provide protection, I'm looking for other suggestions or solutions to the problem that can be used with these NAS devices.
    Thanks, in advance, for your time and help.
    Regards,
    Bob

    Hello ohbobva
    With px devices you can manage how much of the storage pool is used for specific tasks.
    With a default configuration on a px2, it would be RAID 1 across the disks, creating a single storage pool. This is the storage pool listed under All Features > Drive Management.
    The portion of the storage pool that is blue is allocated; the white portion is unallocated.
    Under the storage pool name there will be a "Volumes" section; this lists the different types of volumes that are using the allocated space.
    The default is a share volume, and usually a portion of the storage pool is allocated (reserved) for that volume's use. iSCSI LUNs (drives) and snapshots are other types of volumes that can allocate (reserve) the storage pool's unallocated space.
    If you have a share volume that you want to use snapshots with and it is 2 TB in size, you would want 2 TB of unallocated space to be available when setting up the snapshot.
    For what you are trying to do, a 45/45/10 allocation would be best: 45% of the storage pool allocated to the share volume, 45% to the snapshots, and 10% left unallocated so the firmware can maintain the storage pool and volumes efficiently.
    Again, this should not be considered a backup, but more a form of volume-level fault tolerance.

  • Backpac: A package state snapshot and restore tool for Arch Linux

    backpac:
    A package state snapshot and restore tool for Arch Linux with config file save/restore support.
    https://aur.archlinux.org/packages.php?ID=52957
    https://github.com/altercation/backpac (see readme on the github repository for more information)
    Summary & Features
    It's a common method of setting up a single system: take some notes about what packages you've installed, what files you've modified.
    Backpac creates those notes for you and helps back up important configuration files. Specifically, backpac does the following:
    maintains a list of installed groups (based on 80% of group packages being installed)
    maintains a list of packages (including official and aur packages, listed separately)
    maintains a list of files (manually created)
    backs up key config files as detailed in the files list you create
    The package, group and files lists, along with the snapshot config files, allow system state to be easily committed to version control such as git.
    Backpac can also use these lists to install packages and files. Essentially, then, backpac takes a snapshot of your system and can recreate that state from the files and lists it archives.
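    As an illustration of the kind of queries a tool like this presumably builds on (these are standard pacman commands, not backpac's actual code):

    #!/bin/sh
    # Standard pacman queries that produce backpac-style lists (illustrative only)
    pacman -Qqe  > packages.all        # all explicitly installed packages
    pacman -Qqen > packages.official   # explicitly installed, from sync repos
    pacman -Qqm  > packages.aur        # foreign packages (typically AUR)

    # Group check behind the "80% installed" heuristic: compare a group's
    # installed members against all of its members in the repos.
    pacman -Sg xfce4 | awk '{print $2}' | sort > xfce4.all
    pacman -Qg xfce4 | awk '{print $2}' | sort > xfce4.installed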
    Use Cases
    Ongoing system state backup to github
    Quick install of new system from existing backpac config
    Conform current system to given state in backpac config
    Backpac is a very, very lightweight way of saving and restoring system state.
    It's not intended for rolling out and maintaining multiple similar systems; it's designed to assist individual users in the maintenance of their own Arch Linux box.
    Status
    Alpha, release for testing among those interested. Passing all tests right now but will continue to rework and refine. Bug reports needed.
    Why?
    There are a lot of 'big-iron' solutions to maintaining, backing up and restoring system state. Setting these up for a single system or a handful of personal systems has always seemed like overkill.
    There are also some existing pacman list making utilities around, but most of them seem to list either all packages or don't separate the official and aur packages the way I wanted. Some detect group install state, some don't. I wanted all these features in backpac.
    Finally, whatever tool I use, I'd like it to be simple (cf. the Arch Way). The lists it produces should be human readable, human maintainable and no different from what I'm using in non-automated form. Backpac fulfills these requirements.
    Regarding files, I wanted to be able to back up arbitrary system files to a git repository. Tools like etckeeper are interesting, but non-/etc files in that case aren't backed up (without some link trickery) and there isn't any automatic integration with pacman, so there is no current advantage to using a tool like that. I also like making an explicit list of files to snapshot.
    Sample Output
    This is the command line report. Additionally, backpac saves this information to the backpac groups, packages and files lists and the files snapshot directory.
    $ backpac -Qf
    backpac
    (-b) Backups ON; Files will be saved in place with backup suffix.
    -f Force mode ON; No prompts presented (CAUTION).
    (-F) Full Force mode OFF; Prompt displayed before script runs.
    (-g) Suppress group check OFF; Groups will be checked for currency.
    (-h) Display option and usage summary.
    (-p) Default backpac: /home/es/.config/backpac/tau.
    -Q Simple Query ON; Report shown; no changes made to system.
    (-R) Auto-Remove OFF; Remove/Uninstall action default to NO.
    (-S) System update OFF; No system files will be updated.
    (-U) backpac config update OFF; backpac files will not be updated.
    Sourcing from backpac config directory: /home/es/.config/backpac/tau
    Initializing.................Done
    GROUPS
    ============================================================================
    /home/es/.config/backpac/tau/groups
    GROUPS UP TO DATE: group listed in backpac and >80% local install:
    base base-devel xfce4 xorg xorg-apps xorg-drivers xorg-fonts
    GROUP PACKAGES; MISSING?: group member packages not installed:
    (base: nano)
    (xfce4: thunar xfdesktop)
    PACKAGES
    ============================================================================
    /home/es/.config/backpac/tau/packages
    PACKAGES UP TO DATE: packages listed in backpac also installed on system:
    acpi acpid acpitool aif alsa-utils augeas cowsay cpufrequtils curl dialog
    firefox gamin git ifplugd iw mesa mesa-demos mutt netcfg openssh rfkill
    rsync rxvt-unicode sudo terminus-font vim wpa_actiond wpa_supplicant_gui
    xmobar xorg-server-utils xorg-twm xorg-utils xorg-xclock xorg-xinit xterm
    yacpi yajl youtube-dl zsh
    AUR UP TO DATE: aur packages listed in backpac also installed on system:
    flashplugin-beta freetype2-git-infinality git-annex haskell-json
    package-query-git packer wpa_auto xmonad-contrib-darcs xmonad-darcs
    AUR NOT IN backpac: installed aur packages not listed in backpac config:
    yaourt-git
    FILES
    ============================================================================
    /home/es/.config/backpac/tau/files
    MATCHES ON SYSTEM/CONFIG:
    /boot/grub/menu.lst
    /etc/acpi/handler.sh
    /etc/rc.conf
    /etc/rc.local

    firecat53 wrote:I think your plan for handling an AUR_HELPER is good. If AUR_HELPER is defined by the user, then either you might need a list of major AUR helpers and their command line switches so you can pick the correct switch for what needs to be done (most use some variation of -S for installing, but not all), or have the user define the correct switch(es) somehow for their chosen AUR helper.
    That's a good idea. I'll add that to my AUR refactoring todo.
    I also found directory tracking to be a weakness in other dotfile managers that I tried. I think you would definitely have to recursively list out the contents of a tracked directory and deal with each file individually. Wildcard support would be nice...I just haven't personally found a use case for it yet.
    I've been thinking that I could just add the directory and scan through it for any non-default attribute files. If those are found then they get automatically added to the files list. That's pretty close to what etckeeper does.
    Edit: I just compiled the dev version and removed my comments for already fixed things...sorry!
    The master branch should have those fixes as well, but I didn't update the version number in the package build. I'll have to do that.
    1. Still apparently didn't handle the escaped space for this item: (the file does exist on my system)
    Ok, good to know. This wildcard directory business will require some new code and refactoring so I'll also rework my filenames handling.
    2. Suggestion: you should make that awesome README into a man page!
    I was working on one (the pkgbuild has a commented out line for the man page) but I had to leave it for later. Definitely want a man page. Once this stabilizes and I'm sure there aren't any big structural changes, I'll convert it to man format.
    3. Suggestion: add the word 'dotfile' into your description somewhere on this page, the github page, and in the package description so people looking for dotfile managers will find it. You could also consider modularizing the script into a dotfile manager and the package manager, so people on other distros could take advantage of your dotfile management scheme.
    I actually have a different script for dotfile management that doesn't touch packages, but there is definitely overlap with this one. That script isn't released yet, though, and if people find this useful for dotfile management that's great. I'll add that in.
    4. Suggestion: since -Q is a read-only operation, why not just make it run with -f automatically to avoid the prompt?
    Originally, running backpac without any command line options produced the Query output. I was concerned that since it is a utility that can potentially overwrite system files, it is important to give users a clear statement prior to execution about what will be done. Since the Query output is essentially the same as the Update and System reports in format and content, I wanted to be explicit about the Query being a passive no-change operation. The current command line options aren't set in stone though. If you feel strongly about it being different, let me know.
    Long answer to a short question
    5. Another suggestion: any thought to providing some sort of 'scrub' function to remove private information from the stored files if desired? This would be cool for publishing public dotfiles to github. Perhaps a credentials file (I did this with python for my own configs). Probably detecting email addresses and passwords without a scrub file would be rather difficult because dotfiles come in so many flavors.
    Yes, absolutely. In fact, if you look at the lib/local file (pretty sure it's in both master and dev branches in this state) you'll see some references to a sanitize function. The idea there is that the user will list out bash associative arrays like this:
    # bash associative array; declare -A is required for the string keys
    declare -A SANITIZE_WPA_=(
        [FILE]='/etc/wpa_supplicant.conf'
        [CMD]='sed s/expungepattern/sanitizedoutput/g'
    )
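    Presumably the snapshot step then applies [CMD] to [FILE] before storing the copy, along these lines (hypothetical usage, not backpac's actual code):

    # Hypothetical: run the sanitize command over the file before snapshotting it.
    # The unquoted expansion of [CMD] is intentional so the sed command word-splits.
    ${SANITIZE_WPA_[CMD]} < "${SANITIZE_WPA_[FILE]}" > files.d/wpa_supplicant.conf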
    Question: am I missing an obvious option to remove a file from the files.d directory if I delete it from the files list? Or do I have to delete it manually? It might be helpful to add a section to the README on how to update and delete dotfiles from being tracked, and also a more detailed description of what the -b option does (and what is actually created when it's not used).
    You are only missing the function I didn't finish. There should be either dummy code or a TODO in the backpac main script referencing garbage collection, which isn't difficult but I just haven't finished it. The idea being another loop of "hey I found these old files in your files.d, mind if I delete them?" It's on my list and I'll try to get it in asap.
    And finally, just out of curiosity, why did you choose to actually copy the files instead of symlink like so many other dotfile managers do?
    Git doesn't follow symlinks, and hard links are also out due to permissions issues (git wouldn't be able to read the files, change them, etc.).
    I definitely would prefer to not make an entire copy of the file, but I haven't come up with a better option. Shout with ideas, though. Also, if there is a way around the link issues I noted above, let me know. I don't see one but that doesn't mean it's not there.
    edit: I think a Seattle area Arch meetup would be cool! Perhaps coffee someplace? Bellevue? U-district? Anyone else? BYOPOL (bring your own pimped out laptop)
    A general meetup sounds good. I was also thinking it would be fun to do a mini archcon with some demos.

  • The future of Snapshot, and similar tools.

    My apologies for posting this twice. I posted it first as a follow up
    to an old thread in ...
    novell.support.zenworks.desktop-management.6x.install-setup
    I am quoting some comments made in that forum.
    Then after surfing the forums for awhile, I thought the issue might get
    more attention, if I posted it here. I am looking for ideas and advice
    on the future of snap shot, and any tools that Novell might provide to
    replace it.
    If I could address some of these comments, in hopes of better
    understanding ...
    RE: >>>
    > Most companies invest in packaging tools such as the full version of
    > AdminStudio to create deployment packages. They then use their desktop
    > management suites to deploy those packages.
    Admin Studio is not cheap. After a client has spent thousands of
    dollars to implement ZENworks, now I have to tell them to drop another
    couple thousand down for an application packager. I also don't fully
    understand Novell's relationship with the product; they don't seem to be
    partnering well together. I'd appreciate anyone else's read on this
    relationship.
    RE: >>
    > Novell has for over 5 years now been trying to steer people from
    > Snapshot
    I've heard that statement made verbally by many engineers, but I don't
    always see practice following it.
    Has Novell ever gone on record to state that they will no longer support
    snapshot?
    RE: >>
    > If your snapshots are failing with the latest versions of snapshot, then
    > most likely your software package falls outside the scope of what
    > snapshot should be trying to handle.
    Maybe I've just gotten lucky? I have not had many issues using
    snapshot, and I have been involved with a wide variety of applications.
    I have seen the statement made on the forum that ZENworks is not really
    a "packaging" suite. While that may be a true statement, because
    Snapshot has been packaged with the product for so long, the mindset of
    customers is that it is expected to work, and to be supported.
    I look forward to additional comments on this matter.

    Tom,
    I'm not sure exactly what you are searching for. ZdM is already providing
    you with a snapshot replacement - AdminStudio ZfD Edition, included in the
    ZfD price.
    The AdminStudio part for preparing a snapshot MSI is identical in the ZfD
    and Professional editions.
    All the comments you quoted (including this one) are personal opinions - I
    suggest you try AdminStudio ZfD Edition and form your own.
    Denis
    "tom" <[email protected]> wrote in message
    news:[email protected]...
    > My apologies for posting this twice. I posted it first as a follow up to
    > an old thread in ...
    >
    > novell.support.zenworks.desktop-management.6x.install-setup
    >
    > I am quoting some comments made in that forum.
    >
    > Then after surfing the forums for awhile, I thought the issue might get
    > more attention, if I posted it here. I am looking for ideas and advice on
    > the future of snap shot, and any tools that Novell might provide to
    > replace it.
    >
    >
    > If I could address some of these comments, in hopes of better
    > understanding ...
    >
    > RE: >>>
    > > Most companies invest in packaging tools such the full version of
    > > AdminStudio to create deployment packages. They then use their desktop
    > > management suites to deploy those packages.
    >
    > Admin Studio is not cheap. After a client has spend thousands of dollars
    > to implement ZENWorks, now I have to tell them to drop another couple
    > thousand down for an application packager. I also don't fully understand
    > Novell's relationship with the product, they don't seem to be partnering
    > well together, I'd appreciate anyone else's read on this relationship.
    >
    > RE: >>
    > > Novell has for over 5 years now been trying to steer people from
    > > Snapshot
    >
    > I've heard that statement made verbally by many engineers, but I don't
    > always see the practice following that.
    >
    > Has Novell ever gone on record to state they will no longer support snap
    > shot???
    >
    >
    > RE: >> > If your snapshots are failing with the latest versions of
    > snapshot, then
    > > most likely your software package falls outside the scope of what
    > > snapshot should be trying to handle.
    >
    > Maybe I've just gotten lucky? I've have not had many issues with using
    > snapshot, and I have been involved in a wide variety of applications.
    >
    > I have seen the statement made on the forum that ZEN Works is not really a
    > "Packaging" suite. While that may be a true statement, because Snapshot
    > has been packaged with the product for so long, the mind set of customers
    > is that it is expected to work, and be supported.
    >
    > I look forward to additional comments on this matter.

  • I'm using Firefox with Ubuntu 11.04 and the Import and Backup button cannot be found in the Library window as instructed; how can I enable this particular feature on Ubuntu 11.04?

    The Import and Backup button is, however, where it's supposed to be on Ubuntu 10.10.

    Ubuntu may have moved those three buttons to that control bar as well.

  • Can't find the Storage and Backup link to start an iCloud backup

    Why is it that when you create an iCloud account, the Storage and Backup link isn't always there? I need to create an iCloud account and then create an iCloud backup right away so that I can set up multiple iPads using the iCloud backup. How do I force an iCloud backup?
    Thanks

    You can check your Junk/Spam folders; sometimes the e-mail gets sent there. Also, double-check that the primary e-mail address in the Apple ID account is hers. If none of this works, I suggest waiting it out; sometimes it can take up to 24 hours!

  • What is the best way to create additional storage AND backup a laptop, as well as the storage?

    Beyond being seriously annoyed that an Apple employee recommended we go the NAS route (at an Apple Store, no less), only to find it doesn't work for storage OR Time Machine backups of iPhoto (after spending weeks organizing/transferring data, on top of a good deal of coin), we are now perplexed as to how to set up storage and backups based on our needs.
    Goals
    Create additional storage (iPhoto, iTunes, documents) for the family laptop
    Accessible by anyone/device on the home network (iPad, mini iPad, iPhones, etc..)
    Would prefer the ability to access this offsite as well
    Backup of the laptop
    Backup of the storage HD
    Ensure we do not lose iPhoto libraries data/files
    Here is what I am thinking; I need to know if it will work.
    Purchase two HDs
    Plug them into my base station
    Allocate one HD @2T for storage (additional iPhoto libraries, home movies, iTunes libraries, etc...)
    Partition the second HD @3T to handle two Time Machine backups
    (a) Time Machine of the laptop (up to 1T of backups)
    (b) Time Machine of the first HD (up to 2T of backups) - Is this possible?
    What I need to know is:
    Is it possible to partition a HD so that Time Machine keeps two backups from different sources, one of them being a HD?
    If this is not a good solution, what is? I am pulling my hair out trying to figure out how to deal with a lot of data related to iPhoto due to the formatting limitations there.
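    For what it's worth, the two-partition layout sketched above can be created from Terminal with diskutil. A sketch only: partitioning erases the disk, and disk2 below is a hypothetical device identifier, so verify the right device with diskutil list first.

    # Identify the external disk first - partitioning erases it!
    diskutil list

    # Split the hypothetical disk2 into two HFS+ volumes: 1 TB for the laptop's
    # Time Machine backups, the remainder (R) for backups of the storage HD.
    diskutil partitionDisk disk2 2 GPT \
        JHFS+ TM_Laptop 1T \
        JHFS+ TM_Storage R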

    While most do not advocate mixing Time Machine backups with other files, I have 3 Time Capsules with files and backups on the same partition and have never had a problem, despite doing it for many years. All 3 Time Capsules are set up as NAS devices.
    One Time Capsule holds the Snow Leopard partition backups of two computers. The other two each hold a backup of the Mountain Lion partition of two computers; Time Machine is set up to alternate between the devices, which it can now do. On these 2 Time Capsules are a copy of the iTunes library and a copy of the iPhoto library, which provides a backup for the two. Every so often I manually copy the new files from the primary iTunes/iPhoto library to the backup on the other Time Capsule.
    If the computer is on, I can access the computer from my iPad and then access the hard drives and the files.
    Hopefully, I have been able to make this clear.

  • How can I sync my iPod to a new computer without losing all my content, purchased content and my old copied albums? My PC and backup died in the same week and now I am told I could lose all my content if I sync?

    How can I sync my iPod to a new computer without losing all my content, purchased and copied albums alike?
    My PC and backup died in the same week, and now I am told I could lose all my content if I sync.
    I have new purchases on my new laptop but am too paranoid to sync the iPod in case I lose the thousands of tunes I copied over from my old collections.

    See these 2 Links...
    Syncing to a New Computer...
    https://discussions.apple.com/docs/DOC-3141
    Recovering your iTunes library from your iPod or iOS device
    https://discussions.apple.com/docs/DOC-3991

  • How to cancel email notification from snapshot and simulation

    Hi,
    We've activated workflow template WS28700001 so that an email notification is triggered when a task is released. Meanwhile, we are using snapshots and simulations at the same time. But when saving a snapshot or simulation, tasks that have the status 'released' also trigger the email notification. How can we cancel this notification for snapshots and simulations?
    Regards.

    Hi Ravi,
    I added a container element (Version) in workflow template WS28700001 and set a workflow start condition as follows:
    &Task.Version& = ' '
    With this condition the workflow starts only for the operative version (blank version); snapshots and simulations are saved under a version value, so they no longer trigger the notification.
    Regards,
    Yemi

  • SQL Server 2008 R2 Replication - not applying snapshots and not updating all replicated columns

    We are using transactional replication on SQL Server 2008 R2 (SP1) with a remote distributor. We replicate from Baan LN, an ERP application, to up to 5 subscribers, all using push publications.
    Tables can range from a couple million rows to 12 million rows and hundreds of GBs in size, and it's due to the size of the tables that the system was designed with a one-table-per-publication architecture.
    Until recently it has been working very smoothly (the last four years), but we have come across two issues I have never encountered.
    While the first has happened a half dozen times before, it last occurred a couple of weeks ago when I was adding three new publications, again one table per publication.
    We use standard SQL Server replication proc calls to create the publications, which have been successful for years. On this occasion replication created the three publications, assigned the subscribers and even generated the new snapshot for all three new publications. However, while it appeared that replication had created all the publications correctly from end to end, it actually applied only one of the three snapshots and created the new table on both of that publication's subscribers (there are two on each publication). For the second publication it applied the snapshot to only one of the two subscribers, and for the third it applied it to none.
    I let it run for three hours to see if it was a backlog issue. Replication showed commands coming across in the sync verification at the publisher, and it would even successfully pass a tracer token through each of the three new publications, despite the table not existing on either subscriber for one of the publications and missing on one of the subscribers for another.
    I ended up attempting to reinitialize roughly a dozen times, spanning a day, and one of the two remaining publications was correctly reinitialized and the snapshot applied, but the second of the two again failed with the same mysterious result, and again looked successful based on all the monitoring. So I kept reinitializing the last one, and after multiple attempts spanning a day, it too was finally built correctly.
    Now the story gets a little stranger. We just found out yesterday that on Friday the 17th at 7:45, the approximate time the aforementioned deployment of the three new publications started, we also had three transactions from a stable and vetted publication send over all changes except for a single status column. This publication has 12 million rows and is very active, with thousands of changes daily. The three rows did not replicate a status change from 5 to 6. We verified that the status was in fact 6 on the publisher and 5 on both subscribers, yet there were no messages or errors. All the other rows updated successfully. We fixed it by updating the rows on the publisher from 6 back to 5 and then back to 6 again on those specific rows, and that worked.
    The CPU is low and overall latency is minimal on the distributor. From all accounts the replication is stable and smooth, but very busy. The issues above have only recently started. I am not sure where to look for a problem and, to that end, a solution.

    I suspect the problem with the new publications/subscriptions not initializing may have been a result of timeouts, but it is hard to say for sure. The fact that it eventually succeeded after multiple attempts leads me to believe this. If this happens again, enable verbose agent logging for the Distribution Agent to see if you are getting query timeouts: add the parameters -OutputVerboseLevel 2 -Output C:\TEMP\DistributionAgent.log to the Distribution Agent's Run Agent job step, rerun the agent, and collect the log.
    If you are getting query timeouts, try increasing the Distribution Agent -QueryTimeOut parameter. The default is 1800 seconds; try bumping this up to 3600 seconds.
    Regarding the three transactions not replicating, inspect MSrepl_errors in the distribution database for the time these transactions occurred and see if any errors occurred.
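    For that last check, a query along these lines against the distribution database is the usual approach (a sketch: the server name and the time window are placeholders for your distributor and the actual incident window):

    # Inspect MSrepl_errors around the incident window (placeholders throughout)
    sqlcmd -S MyDistributor -d distribution -Q "
        SELECT time, error_code, error_text
        FROM dbo.MSrepl_errors
        WHERE time >= '20140117 07:00' AND time < '20140117 09:00'
        ORDER BY time;"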
    Brandon Williams (blog | linkedin)
