Snapshots Directory

Can you guys please create a better structure in the snapshots directory at http://releng.archlinux.org/isos/?
Something like this would be good:
/testing/ (where all the testing iso folders go)
/snapshots/ or /snaps/ (for the other iso folders)
Changelog (please leave it outside the directories)
The reason I'm requesting this is to avoid complicating the automated scripting I have done for ArchSnaps. Thank you so much.
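If it helps, the requested split could be scripted roughly like this (a sketch against a scratch directory; the folder names below are made-up examples, not the real releng listing):

```shell
#!/bin/sh
# Sketch of the requested isos/ layout, run against a scratch directory.
# Folder names (testing-*, 2012.05.01, Changelog) are hypothetical examples.
set -e
isos=$(mktemp -d)
mkdir "$isos/testing-2012.05.04" "$isos/2012.05.01"
touch "$isos/Changelog"

mkdir "$isos/testing" "$isos/snapshots"
for d in "$isos"/*/; do
    name=$(basename "$d")
    case "$name" in
        testing|snapshots) ;;                       # skip the new top-level dirs
        testing-*) mv "$d" "$isos/testing/" ;;      # testing ISO folders
        *)         mv "$d" "$isos/snapshots/" ;;    # all other ISO folders
    esac
done
ls "$isos"    # testing/, snapshots/ and the Changelog stay at the top level
```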

Bugs and Feature Requests belong in the Bug Tracker
The relevant people will NOT see it here.
Last edited by fukawi2 (2012-05-10 00:20:27)

Similar Messages

  • Trying to understand BtrFS snapshot feature

    I'm trying to understand how copy-on-write and Btrfs snapshots work.
    Here is a simple test:
    <pre>
    # cd /
    # touch testfile
    # ls --full-time testfile
    -rw-r--r-- 1 root root 0 2012-10-15 12:04:43.629620401 +0200 testfile
    Test 1:
    # btrfs subvol snapshot / /snap1
    # touch testfile
    # ls --full-time testfile /snap1/testfile
    -rw-r--r-- 1 root root 0 2012-10-15 12:04:43.629620401 +0200 /snap1/testfile
    -rw-r--r-- 1 root root 0 2012-10-15 12:07:38.348932127 +0200 testfile
    Test 2:
    # btrfs subvol snapshot / /snap2
    # touch testfile
    # ls --full-time testfile /snap1/testfile /snap2/testfile
    -rw-r--r-- 1 root root 0 2012-10-15 12:04:43.629620401 +0200 /snap1/testfile
    -rw-r--r-- 1 root root 0 2012-10-15 12:07:38.348932127 +0200 /snap2/testfile
    -rw-r--r-- 1 root root 0 2012-10-15 12:09:21.769606369 +0200 testfile
    </pre>
    According to the above tests I'm concluding/questioning the following:
    1) Btrfs determines which snapshot maintains a logical copy and physically copies the file to the appropriate snapshot before it is modified.
    a) Does it copy the complete file or work on the block level?
    b) What happens if the file is very large, e.g. 100 GB and there is not enough space on disk to copy the file to the snapshot directory?
    c) Doesn't it have a huge negative impact on performance when a file needs to be copied before it can be altered?

    Hi, thanks for the answers!
    I guess calling it a "logical copy" was a bad choice. Would calling the initial snapshot a "hard link of a file system" be more appropriate?
    OK, so btrfs works at the block level. I've done some tests and can confirm what you said (see below).
    I find it interesting that although the snapshot maintains the "hard link" to the original copy - I guess a "before block image" (?) - there really is no negative performance impact.
    How does this work? Perhaps it is not overwriting the existing file, but rather creating a new file? So the snapshot still has the "hard link" to the original file, hence nothing changed for the snapshot? Simply a new file was created, and that's what shows in the current file system?
    It actually reminds me of the old VMS ODS file system, which used file versioning by appending a semicolon, e.g. text.txt;1. When modifying the file the result would be text.txt;2 and so on. When listing or using the file without a version, it would simply show and use the latest version. You could purge old versions if necessary. The file system was actually structured by records (RMS), similar to a database.
    <pre>
    [root@vm004 /]# df -h /
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda3 16G 2.3G 12G 17% /
    # time dd if=/dev/zero of=/testfile bs=8k count=1M
    1048576+0 records in
    1048576+0 records out
    8589934592 bytes (8.6 GB) copied, 45.3253 s, 190 MB/s
    Let's create a snapshot and overwrite the testfile
    # btrfs subvolume snapshot / /snap1
    # time dd if=/dev/zero of=/testfile bs=8k count=1M
    dd: writing `/testfile': No space left on device
    491105+0 records in
    491104+0 records out
    4023123968 bytes (4.0 GB) copied, 21.2399 s, 189 MB/s
    real     0m21.613s
    user     0m0.021s
    sys     0m3.325s
    </pre>
    So obviously there is not enough space to maintain both the original file and the snapshot file.
    Since I'm creating a completely new file, I guess that's to be expected.
    Let's try with a smaller file, and also check performance:
    <pre>
    # btrfs subvol delete /snap1
    Delete subvolume '//snap1'
    # time dd if=/dev/zero of=/testfile bs=8k count=500k
    512000+0 records in
    512000+0 records out
    4194304000 bytes (4.2 GB) copied, 21.7176 s, 193 MB/s
    real     0m21.726s
    user     0m0.024s
    sys     0m2.977s
    # time echo "This is a test to test the test" >> /testfile
    real     0m0.000s
    user     0m0.000s
    sys     0m0.000s
    # btrfs subvol snapshot / /snap1
    Create a snapshot of '/' in '//snap1'
    # df -k /
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/sda3 16611328 6505736 8221432 45% /
    # time echo "This is a test to test the test" >> /testfile
    real     0m0.000s
    user     0m0.000s
    sys     0m0.000s
    # df -k /
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/sda3 16611328 6505780 8221428 45% /
    # btrfs subvol delete /snap1
    Delete subvolume '//snap1'
    # df -k /
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/sda3 16611328 6505740 8221428 45% /
    The snapshot occupied 40k
    # btrfs subvol snapshot / /snap1
    Create a snapshot of '/' in '//snap1'
    # time dd if=/dev/zero of=/testfile bs=8k count=500k
    512000+0 records in
    512000+0 records out
    4194304000 bytes (4.2 GB) copied, 21.3818 s, 196 MB/s
    real     0m21.754s
    user     0m0.019s
    sys     0m3.322s
    # df -k /
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/sda3 16611328 10612756 4125428 73% /
    There was no performance impact, although the space occupied doubled.
    </pre>
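    The space behaviour in these tests matches a simple block-accounting model: a snapshot shares all existing blocks for free, and only blocks written afterwards get new allocations. A toy sketch of that model (plain arithmetic, not a btrfs tool; block counts borrowed from the dd test above):

```shell
#!/bin/sh
# Toy block-level CoW accounting model (illustration only, not btrfs itself).
FILE_BLOCKS=512000          # ~4.2 GB at 8k blocks, as in the test above
used=$FILE_BLOCKS           # blocks allocated before the snapshot

# Taking a snapshot allocates (almost) nothing: all blocks are shared.
after_snapshot=$used

# Rewriting the whole file allocates new blocks for every written block,
# while the snapshot keeps the old ones -> space roughly doubles.
after_rewrite=$((after_snapshot + FILE_BLOCKS))

# Appending one block only allocates that one new block.
after_append=$((after_snapshot + 1))

echo "before rewrite: $after_snapshot blocks"
echo "after rewrite:  $after_rewrite blocks"
echo "after append:   $after_append blocks"
```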

  • Manage btrfs snapshots

    One of the features I like most about btrfs is snapshots.
    What I want to do is create snapshots of /home every 5/15 minutes (I still have to test the impact on performance) and retain:
    02 - Yearly snapshots
    12 - Monthly snapshots
    16 - Weekly snapshots
    28 - Daily snapshots
    48 - Hourly snapshots
    60 - Minutely snapshots
    The above scheme is indicative and I'm looking for suggestions to improve it.
    After long Google searching I finally found a very simple but powerful bash script to manage btrfs snapshots: https://github.com/mmehnert/btrfs-snapshot-rotation
    I've changed the script a bit to use my naming scheme:
    #!/bin/bash
    # Parse arguments:
    SOURCE=$1
    TARGET=$2
    SNAP=$3
    COUNT=$4
    QUIET=$5
    # Function to display usage:
    usage() {
        scriptname=`/usr/bin/basename $0`
        cat <<EOF
    $scriptname: Take and rotate snapshots on a btrfs file system
    Usage:
    $scriptname source target snap_name count [-q]
    source: path to make snapshot of
    target: snapshot directory
    snap_name: base name for snapshots; the timestamp date "+%F_%H-%M" is appended to it
    count: number of snapshots in the snap_name-timestamp format to keep at one time for a given snap_name
    [-q]: be quiet
    Example for crontab:
    15,30,45 * * * * root /usr/local/bin/btrfs-snapshot /home /home/__snapshots quarterly 4 -q
    0 * * * * root /usr/local/bin/btrfs-snapshot /home /home/__snapshots hourly 8 -q
    Example for anacrontab:
    1 10 daily_snap /usr/local/bin/btrfs-snapshot /home /home/__snapshots daily 8
    7 30 weekly_snap /usr/local/bin/btrfs-snapshot /home /home/__snapshots weekly 5
    @monthly 90 monthly_snap /usr/local/bin/btrfs-snapshot /home /home/__snapshots monthly 3
    EOF
        exit
    }
    # Basic argument checks:
    if [ -z "$COUNT" ] ; then
        echo "COUNT is not provided."
        usage
    fi
    if [ -n "$6" ] ; then
        echo "Too many options."
        usage
    fi
    if [ -n "$QUIET" ] && [ "x$QUIET" != "x-q" ] ; then
        echo "Option 5 is either -q or empty. Given: \"$QUIET\""
        usage
    fi
    # $max_snap is the highest number of snapshots that will be kept for $SNAP.
    max_snap=$((COUNT - 1))
    # $time_stamp is the timestamp used in snapshot names.
    time_stamp=`date "+%F_%H-%M"`
    # Clean up older snapshots:
    for i in `ls $TARGET | sort | grep ${SNAP} | head -n -${max_snap}`; do
        cmd="btrfs subvolume delete $TARGET/$i"
        if [ -z "$QUIET" ]; then
            echo $cmd
        fi
        $cmd >/dev/null
    done
    # Create new snapshot:
    cmd="btrfs subvolume snapshot $SOURCE $TARGET/${SNAP}-$time_stamp"
    if [ -z "$QUIET" ]; then
        echo $cmd
    fi
    $cmd >/dev/null
    I use fcrontab:
    [root@kabuky ~]# fcrontab -l
    17:32:58 listing root's fcrontab
    @ 5 /usr/local/bin/btrfs-snapshot /home /home/__snapshots minutely 60 -q
    @ 1h /usr/local/bin/btrfs-snapshot /home /home/__snapshots hourly 48 -q
    @ 1d /usr/local/bin/btrfs-snapshot /home /home/__snapshots daily 28 -q
    @ 1w /usr/local/bin/btrfs-snapshot /home /home/__snapshots weekly 16 -q
    @ 1m /usr/local/bin/btrfs-snapshot /home /home/__snapshots monthly 12 -q
    And this is what I get:
    [root@kabuky ~]# ls -l /home/__snapshots
    total 0
    drwxr-xr-x 1 root root 30 Jul 15 19:00 hourly-2011-10-31_17-00
    drwxr-xr-x 1 root root 30 Jul 15 19:00 hourly-2011-10-31_17-44
    drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-18
    drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-20
    drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-22
    drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-24
    drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-26
    drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-32
    drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-37
    drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-42
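    The script's rotation logic (list, sort, keep only the newest max_snap entries, delete the rest) can be dry-run against plain directories instead of real subvolumes. A sketch with made-up snapshot names, where rmdir stands in for btrfs subvolume delete:

```shell
#!/bin/sh
# Dry-run of the pruning logic using plain directories (no btrfs needed).
set -e
TARGET=$(mktemp -d)
SNAP=minutely
COUNT=3

# Create five fake snapshots; the names sort chronologically.
for t in 17-18 17-20 17-22 17-24 17-26; do
    mkdir "$TARGET/${SNAP}-2011-10-31_${t}"
done

# Same selection as the script: everything except the newest COUNT-1 entries.
max_snap=$((COUNT - 1))
for i in $(ls "$TARGET" | sort | grep "$SNAP" | head -n -"$max_snap"); do
    rmdir "$TARGET/$i"       # the real script runs: btrfs subvolume delete
done

ls "$TARGET"   # only the two newest snapshots remain
```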

    tavianator wrote:
    Cool, I just wasn't sure if the "And this is what i get" was what you were expecting.
    Looks good, I may take a look since I'm using btrfs. Don't confuse this for a backup, though.
    Sorry for the confusion, my English is not very good.
    Yes, this is not a backup.
    It's a sort of parachute: if you do something stupid with your files, you can come back at any time.

  • Removing Snapshot Files

    Hi All,
    One of my VM's was written off by the Panda AV issue. I have restored the affected VM and all is well.
    However, I am looking to clean things up a little. And have a load of snapshot files from before the restore which were taken as part of the resolution process.
    These snapshots don't show under the VM itself, so I thought I could delete all the snapshots through Hyper-V Manager, and manually delete what is left in the snapshot directory.
    When I try this I get a file-in-use error, so I left it as is. I restarted the Hyper-V service and tried again with the same result.
    Does anybody know the best way to do this, as there is a fair amount of GB to remove?
    Kind Regards
    Rich
    Richard Tarlton

    Considering that you ran into some issue with your AV software and you performed some restoration, these files could very well be in use and you just don't realize it yet.
    Each AVHD file is a differencing disk, so they are all linked together.
    You need to 'inspect' each virtual disk beginning with the one that is configured to make sure that you are not using any differencing disks. 
    Only after you verify that the disks are indeed not being used by a VM can you safely delete them manually - once you determine the process that is locking them.
    If it is the VMMS that is locking the virtual disk snapshots, then Hyper-V thinks it needs to do something with these disks - use the Hyper-V event logs to figure out what it thinks it is trying to do (Applications and Services Logs - Microsoft - Windows - Hyper-V).
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.

  • Backpac: A package state snapshot and restore tool for Arch Linux

    backpac:
    A package state snapshot and restore tool for Arch Linux with config file save/restore support.
    https://aur.archlinux.org/packages.php?ID=52957
    https://github.com/altercation/backpac (see readme on the github repository for more information)
    Summary & Features
    It's a common method of setting up a single system: take some notes about what packages you've installed, what files you've modified.
    Backpac creates those notes for you and helps back up important configuration files. Specifically, backpac does the following:
    maintains a list of installed groups (based on 80% of group packages being installed)
    maintains a list of packages (including official and aur packages, listed separately)
    maintains a list of files (manually created)
    backs up key config files as detailed in the files list you create
    The package, group and files lists, along with the snapshot config files, allow system state to be easily committed to version control such as git.
    Backpac can also use these lists to install packages and files. Essentially, then, backpac takes a snapshot of your system and can recreate that state from the files and lists it archives.
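    The "80% of group packages installed" heuristic above can be sketched as follows (the package lists here are hypothetical stand-ins for pacman -Sg and pacman -Qq output, so this is an illustration of the idea, not backpac's actual code):

```shell
#!/bin/sh
# Sketch of backpac's "group is installed" heuristic: a group counts as
# installed when at least 80% of its member packages are present locally.
# Both lists are hypothetical stand-ins for pacman -Sg / pacman -Qq output.
group_members="thunar xfdesktop xfce4-panel xfce4-session xfce4-terminal"
installed="xfce4-panel xfce4-session xfce4-terminal thunar vim git"

total=0
present=0
for pkg in $group_members; do
    total=$((total + 1))
    case " $installed " in
        *" $pkg "*) present=$((present + 1)) ;;
    esac
done

# Integer arithmetic: is present/total at least 80%?
if [ $((present * 100 / total)) -ge 80 ]; then
    echo "group UP TO DATE ($present/$total members installed)"
else
    echo "group incomplete ($present/$total members installed)"
fi
```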
    Use Cases
    Ongoing system state backup to github
    Quick install of new system from existing backpac config
    Conform current system to given state in backpac config
    Backpac is a very, very lightweight way of saving and restoring system state.
    It's not intended for rolling out and maintaining multiple similar systems; it's designed to assist individual users in the maintenance of their own Arch Linux box.
    Status
    Alpha, released for testing among those interested. Passing all tests right now, but I will continue to rework and refine. Bug reports needed.
    Why?
    There are a lot of 'big-iron' solutions to maintaining, backing up and restoring system state. Setting these up for a single system or a handful of personal systems has always seemed like overkill.
    There are also some existing pacman list making utilities around, but most of them seem to list either all packages or don't separate the official and aur packages the way I wanted. Some detect group install state, some don't. I wanted all these features in backpac.
    Finally, whatever tool I use, I'd like it to be simple (c.f. the Arch Way). Lists that are produced should be human readable, human maintainable and not different from what I'm using in non-automated form. Backpac fulfills these requirements.
    Regarding files, I wanted to be able to backup arbitrary system files to a git repository. Tools like etckeeper are interesting but non /etc files in that case aren't backed up (without some link trickery) and there isn't any automatic integration with pacman, so there is no current advantage to using a tool like that. I also like making an explicit list of files to snapshot.
    Sample Output
    This is the command line report. Additionally, backpac saves this information to the backpac groups, packages and files lists and the files snapshot directory.
    $ backpac -Qf
    backpac
    (-b) Backups ON; Files will be saved in place with backup suffix.
    -f Force mode ON; No prompts presented (CAUTION).
    (-F) Full Force mode OFF; Prompt displayed before script runs.
    (-g) Suppress group check OFF; Groups will be checked for currency.
    (-h) Display option and usage summary.
    (-p) Default backpac: /home/es/.config/backpac/tau.
    -Q Simple Query ON; Report shown; no changes made to system.
    (-R) Auto-Remove OFF; Remove/Uninstall action default to NO.
    (-S) System update OFF; No system files will be updated.
    (-U) backpac config update OFF; backpac files will not be updated.
    Sourcing from backpac config directory: /home/es/.config/backpac/tau
    Initializing.................Done
    GROUPS
    ============================================================================
    /home/es/.config/backpac/tau/groups
    GROUPS UP TO DATE: group listed in backpac and >80% local install:
    base base-devel xfce4 xorg xorg-apps xorg-drivers xorg-fonts
    GROUP PACKAGES; MISSING?: group member packages not installed:
    (base: nano)
    (xfce4: thunar xfdesktop)
    PACKAGES
    ============================================================================
    /home/es/.config/backpac/tau/packages
    PACKAGES UP TO DATE: packages listed in backpac also installed on system:
    acpi acpid acpitool aif alsa-utils augeas cowsay cpufrequtils curl dialog
    firefox gamin git ifplugd iw mesa mesa-demos mutt netcfg openssh rfkill
    rsync rxvt-unicode sudo terminus-font vim wpa_actiond wpa_supplicant_gui
    xmobar xorg-server-utils xorg-twm xorg-utils xorg-xclock xorg-xinit xterm
    yacpi yajl youtube-dl zsh
    AUR UP TO DATE: aur packages listed in backpac also installed on system:
    flashplugin-beta freetype2-git-infinality git-annex haskell-json
    package-query-git packer wpa_auto xmonad-contrib-darcs xmonad-darcs
    AUR NOT IN backpac: installed aur packages not listed in backpac config:
    yaourt-git
    FILES
    ============================================================================
    /home/es/.config/backpac/tau/files
    MATCHES ON SYSTEM/CONFIG:
    /boot/grub/menu.lst
    /etc/acpi/handler.sh
    /etc/rc.conf
    /etc/rc.local

    firecat53 wrote:I think your plan for handling an AUR_HELPER is good. If AUR_HELPER is defined by the user, then either you might need a list of major AUR helpers and their command line switches so you can pick the correct switch for what needs to be done (most use some variation of -S for installing, but not all), or have the user define the correct switch(es) somehow for their chosen AUR helper.
    That's a good idea. I'll add that to my AUR refactoring todo.
    I also found directory tracking to be a weakness in other dotfile managers that I tried. I think you would definitely have to recursively list out the contents of a tracked directory and deal with each file individually. Wildcard support would be nice...I just haven't personally found a use case for it yet.
    I've been thinking that I could just add the directory and scan through it for any non-default attribute files. If those are found then they get automatically added to the files list. That's pretty close to what etckeeper does.
    Edit: I just compiled the dev version and removed my comments for already fixed things...sorry!
    The master branch should have those fixes as well, but I didn't update the version number in the package build. I'll have to do that.
    1. Still apparently didn't handle the escaped space for this item: (the file does exist on my system)
    Ok, good to know. This wildcard directory business will require some new code and refactoring so I'll also rework my filenames handling.
    2. Suggestion: you should make that awesome README into a man page!
    I was working on one (the pkgbuild has a commented out line for the man page) but I had to leave it for later. Definitely want a man page. Once this stabilizes and I'm sure there aren't any big structural changes, I'll convert it to man format.
    3. Suggestion: add the word 'dotfile' into your description somewhere on this page, the github page, and in the package description so people looking for dotfile managers will find it. You could also consider modularizing the script into a dotfile manager and the package manager, so people on other distros could take advantage of your dotfile management scheme.
    I actually have a different script for dotfile management that doesn't touch packages, but there is definitely overlap with this one. That script isn't released yet, though, and if people find this useful for dotfile management that's great. I'll add that in.
    4. Suggestion: since -Q is a read-only operation, why not just make it run with -f automatically to avoid the prompt?
    Originally, running backpac without any command line options produced the Query output. I was concerned that since it is a utility that can potentially overwrite system files, it is important to give users a clear statement prior to execution about what will be done. Since the Query output is essentially the same as the Update and System reports in format and content, I wanted to be explicit about the Query being a passive no-change operation. The current command line options aren't set in stone though. If you feel strongly about it being different, let me know.
    Long answer to a short question
    5. Another suggestion: any thought to providing some sort of 'scrub' function to remove private information from the stored files if desired? This would be cool for publishing public dotfiles to github. Perhaps a credentials file (I did this with python for my own configs). Probably detecting email addresses and passwords without a scrub file would be rather difficult because dotfiles come in so many flavors.
    Yes, absolutely. In fact, if you look at the lib/local file (pretty sure it's in both master and dev branches in this state) you'll see some references to a sanitize function. The idea there is that the user will list out bash associative arrays like this:
    SANITIZE_WPA_=(
    [FILE]='/etc/wpa_supplicant.conf'
    [CMD]='sed s/expungepattern/sanitizedoutput/g'
    )
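    A sketch of how such an entry might be applied (the sanitize function isn't in the released script, so this is an assumption about its behaviour; the file contents and pattern are stand-ins):

```shell
#!/bin/sh
# Sketch: apply a sanitize entry's CMD to its FILE before snapshotting.
# FILE contents and the sed pattern are stand-ins, not backpac's real config.
FILE="$(mktemp)"
CMD='sed s/secretpassword/REDACTED/g'

echo 'psk=secretpassword' > "$FILE"

# Run the configured command over the file; a snapshot step would store
# this scrubbed output instead of the raw file.
scrubbed=$($CMD "$FILE")
echo "$scrubbed"
rm -f "$FILE"
```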
    Question: am I missing an obvious option to remove a file from the files.d directory if I delete it from the files list? Or do I have to delete it manually? It might be helpful to add a section to the README on how to update and delete dotfiles from being tracked, and also a more detailed description of what the -b option does (and what is actually created when it's not used).
    You are only missing the function I didn't finish. There should be either dummy code or a TODO in the backpac main script referencing garbage collection, which isn't difficult but I just haven't finished it. The idea being another loop of "hey I found these old files in your files.d, mind if I delete them?" It's on my list and I'll try to get it in asap.
    And finally, just out of curiosity, why did you choose to actually copy the files instead of symlink like so many other dotfile managers do?
    git doesn't follow symlinks, and hard links are also out due to permissions issues (git wouldn't be able to read the files, change them, etc.).
    I definitely would prefer to not make an entire copy of the file, but I haven't come up with a better option. Shout with ideas, though. Also, if there is a way around the link issues I noted above, let me know. I don't see one but that doesn't mean it's not there.
    edit: I think a Seattle area Arch meetup would be cool! Perhaps coffee someplace? Bellevue? U-district? Anyone else? BYOPOL (bring your own pimped out laptop)
    A general meetup sounds good. I was also thinking it would be fun to do a mini archcon with some demos.

  • Can't delete files

    Hello,
    I'm having an issue deleting a file from my home directory.
    The file I want to delete is .snapshot. It looks like it is snapshotting the files in my home directory. Anyway, how do I stop these snapshots?
    This is the command I used.
    rm -rf /home/cyberninja/.snapshot
    ls -la /home/cyberninja/
    drwxrwxrwx 10 root root  4096 mar 8 12:00 .snapshot
    ls -la  /home/cyberninja/.snapshot
    drwxr-x--- 26 cyberninja users 8192 Mar 6 14:17 hourly.0
    drwxr-x--- 26 cyberninja users 8192 Mar 6 14:17 hourly.1
    drwxr-x--- 26 cyberninja users 8192 Mar 6 14:17 hourly.2
    drwxr-x--- 26 cyberninja users 8192 Mar 6 14:17 hourly.3
    drwxr-x--- 26 cyberninja users 8192 Mar 6 14:17 hourly.4
    drwxr-x--- 26 cyberninja users 8192 Mar 6 14:17 hourly.5
    drwxr-x--- 26 cyberninja users 8192 Mar 6 14:17 nightly.0
    drwxr-x--- 26 cyberninja users 8192 Mar 6 14:17 nightly.1
    If I remove the file as root it just comes back. Can someone point me in the right direction?
    Edited by: CyberNinja on Mar 8, 2012 9:40 AM

    Hi Ninja,
    .snapshot isn't a file, it is a directory. Is this a Solaris 10 release?
    The .snapshot directory is generally used for ongoing snapshots or automatic
    snapshots. The contents of your .snapshot directory look like automatic snapshots
    to me, where you have hourly.* and nightly.* directories for those hourly and nightly
    snapshots.
    If you want to remove the contents of the .snapshot directory, you can, but you
    need to disable whatever is creating these snapshots if you don't want them.
    Thanks,
    Cindy

  • No logon Servers Win 8.1

    Hi all, 
    I am a part of the IT support team at a school and we have just purchased 5 Surface Pro 3s. We have loaded Windows 8.1 Enterprise on them and domain-joined them, but we can't log in on them with anything other than local accounts. Whenever another account is tried it goes to log on and then brings up the error "no logon servers available". We think the problem relates to our Group Policy, but we have been unable to find the cause of it.
    We are running Windows Server 2008 R2 hosted on a virtual server. Group Policy currently handles pushing our wireless profile; we have also tried blocking Group Policies and adding this manually.
    Regards,
    Your friendly neighborhood Trainee
    This topic first appeared in the Spiceworks Community

    I've tried talking to Carbonite support about this, but they don't seem to be able to provide in-depth support about the exact process Carbonite goes through to back up Hyper-V VMs. According to their support page here - http://support.carbonite.com/articles/Server-Windows-Hyper-V-Backup - it looks like Carbonite will use one of two methods for VM backup: the child VM snapshot method and the saved state method. My question is, for the child VM snapshot method, does it actually take a snapshot in the VM snapshot directory and then create the Carbonite compressed backup in the designated directory? I tried running a test backup and watched the snapshot directory, but it never seemed to put any files there or change in size during the whole process. The main reason for the question is that I'm trying to figure out how much free space we'd need to do a...

  • SLD Landscape Question

    Hi
    We are about to implement the SLD landscape for our PI systems.
    From the documentation (SDN and HELP.SAP.COM), it is recommended to have all SLD Data Suppliers send to the PI Production SLD Bridge.  The SLD data is then automatically forwarded to the DEV/QA SLD Bridge (Design Time SLD).
    This is straightforward and I can see how to get the ABAP stack to look at the correct SLD server.
    However, when it comes to the JAVA stack, I can't see how to configure the SLD CLIENT of an instance in the design-time landscape to use the design-time SLD Server.
    The Netweaver version is 7.0.
    Thanks
    Doug

    SAP Web AS (Java) 6.30/6.40 Start-up: JProbe's Direct Support
    This feature is available starting with JProbe 5.2.2. For details of application server integration, refer to the JProbe documentation. Here, the single steps for the SAP application server are briefly described.
    Step 1: Create an Integration for the instance to be profiled.
          Click Tools -> Application Server Integration.
          Choose SAP Application Server 6.30.
          Click Create.
          To fill out the columns, follow the help you get via tooltip.
          Specify whether server or dispatcher should be profiled.
    Step 2: Create a Configuration.
          Click Tools -> Manage J2EE Configurations.
          Click Add.
          Fill in Configuration Name and Integration.
          Optionally, you could specify an application to be profiled; this is just used to help to set the correct filters for profiling. If you know which packages or classes should be profiled, you can leave these fields empty; otherwise, specify the directory where the application is deployed and the .ear or .war file in this directory.
    Step 3: Create the J2EE Settings.
          Click Session -> New J2EE Settings.
          Specify the configuration.
          Specify the other options, especially set filters (optionally, you could use the application specified in the configuration for filtering).
    Step 4: Use the Connection Manager.
    Unlike "start-up via JLaunch" below, the Connection Manager has to be used.
          Click Tools -> Options -> Advanced Session; mark "Use Connection Manager."
    Step 5: Check the property 'IDxxxxxx.JavaPath' in instance.properties in the directory /usr/sap/<SID>/JCxx/j2ee/cluster
          JProbe relies on the property to find Java home, but there are Web AS 6.30/6.40 SPs that do not have this property.
          If it is not included, add it for dispatcher or server (dependent on which one should be profiled)
          For example, add "ID8156650.JavaPath=C:/java/jdk1.4.2_06."
          Be aware that using the standard startup procedure instance.properties is overwritten, so you might repeat this step.
    Step 6: Start the J2EE Engine with JProbe
          Start the database and the central instance (enqueue and message server).
          In JProbe click "Run" for the J2EE settings defined before.
          Dispatcher and server of the J2EE Engine will be started; the one specified in the Integration will be profiled.
    SAP Web AS (Java) 6.30/6.40 Start-up: Using JProbe 5.2.x via JLaunch
    Starting the Server via JLaunch
    Detailed instructions how to start JLaunch can be found in the documentation of SAP Web AS (Java). Here we give only the minimal list of steps:
       1. Include the location of JLaunch in the PATH.
       2. Go to the base directory of dispatcher or server (depending on which one you want to start) and run:
          jlaunch pf=c:\usr\sap\<SID>\SYS\profile\<SID>_JCxx_<Host> -nodename=IDxxxxxxx
          where "IDxxxxxxx" can be figured out from the "instance.properties" in directory "cluster."
    Creating a JProbe Startup File
    In order to start up JProbe via JLaunch, you need to create a start-up file (.jpl) with the JProbe launchpad as follows:
       1. Click Session -> New J2SE Settings.
       2. Enter filters, triggers, ...
       3. In the Configuration, the main class and classpath fields are mandatory; a dummy entry for the main class is sufficient, and for the classpath you could use %CLASSPATH%.
       4. Save the .jpl file.
       5. Switch off the Connection Manager; in JProbe 5.2.x a Connection Manager is introduced, which does not work if a Java application is started from a C framework; therefore use Tools -> Options -> Session -> Advanced to switch off the Connection Manager.
       6. Change the .jpl file with a text editor:
          Comment out the line starting with -jp_java.
          Add the connection information and the snapshot directory information; due to the introduction of the Connection Manager this is not written when the .jpl file is saved in the launchpad:
              o -jp_socket=<host for analysis>:<port> (e.g. localhost:4444)
              o -jp_snapshot_dir=<snapshot directory> (e.g. c:\Jprobe\snapshots)
    Starting JProbe with the Server
    Start JLaunch as above, but attach the following parameters to the JLaunch command line:
    jlaunch pf=... -nodename=... -Xbootclasspath/a:JProbe-Base-Dir\lib\jpagent.jar
                -Xrunjprobeagent:-jp_input=.jpl-file
    For the .jpl-file please, specify the complete path. If there are blanks in the file or directory names, use double quotes or the DOS notation (e.g. PROGRA~1). After starting JLaunch you can now attach the viewer to it (JProbe -> Program -> Attach to Remote/Running Session).
    regards
    chandra

  • Reconfigure Open Directory in Yosemite Server

    Is it possible to delete and reconfigure Open Directory in Yosemite server?
    The host name and configuration were modified after Open Directory was activated and I get the message "Unable to load replica list" in the Settings Tab of Open Directory on the Server App (Server 4.0.3 (Build 14S350)). I think the best way would be to start over the automatic configuration.

    Many Open Directory problems can be resolved by taking the following steps. Test after each one, and back up all data before making any changes.
    1. The OD master must have a static IP address on the local network, not a dynamic address. It must not be connected to the same network with more than one interface; e.g., Ethernet and Wi-Fi.
    2. You must have a working DNS service, and the server's hostname must match its fully-qualified domain name. To confirm, select the server by name in the sidebar of the Server application window, then select the Overview tab. Click the Edit button on the Host Name line. On the Accessing your Server sheet, Domain Name should be selected. Change the Host Name, if necessary. The server must have at least a three-level name (e.g. "server.yourdomain.com"), and the name must not be in the ".local" top-level domain, which is reserved for Bonjour.
    3. The primary DNS server used by the server must be itself, unless you're using another server for internal DNS. The only DNS server set on the clients should be the internal one, which they should get from DHCP if applicable.
    4. Only if you're still running Mavericks server, follow these instructions to rebuild the Kerberos configuration on the server.
    5. If you use authenticated binding, check the validity of the master's certificate. The common name must match the hostname and domain name. Deselecting and then reselecting the certificate in Server.app has been reported to have an effect in some cases. Otherwise delete all certificates and create new ones.
    6. Unbind and then rebind the clients in the Users & Groups preference pane. Use the fully-qualified domain name of the master.
    7. Reboot the master and the clients.
    8. Don't log in to the server with a network user's account.
    9. Disable any internal firewalls in use, including third-party "security" software.
    10. If you've created any replica servers, delete them.
    11. If OD has only recently stopped working when it was working before, you may be able to restore it from the automatic backup in /var/db/backups, or from a Time Machine snapshot of that backup.
    12. As a last resort, export all OD users. In the Open Directory pane of Server, delete the OD server. Then recreate it and import the users. Ensure that the UID's are in the 1001+ range.
    If you get this far without solving the problem, then you'll need to examine the logs in the Open Directory section of the log list in the Server app, and also the system log on the clients.
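     The hostname/DNS consistency in steps 2-3 can also be checked from Terminal on the server; a quick sketch using OS X Server commands (`server.yourdomain.com` and the IP are placeholders for your own values):

     ```
     # The hostname as the system sees it
     scutil --get HostName

     # Forward and reverse lookups should agree with it
     host server.yourdomain.com
     host 192.168.1.10        # the address returned above

     # OS X Server's own hostname/DNS consistency check
     sudo changeip -checkhostname
     ```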

  • Failed to take snapshot of one or more contents in package

    We have two main SCCM site system servers, and all of a sudden (everything was working before, no change) the applications would not distribute to the DPs.
    distmgr.log:
    Snapshot processing content with ID 16781461 ...
    The source directory \\sccm02\Packages\ doesn't exist or the SMS service cannot access it, Win32 last error = 5
    Failed to take snapshot of one or more contents in package 00239
    I tried granting Everyone full control for both Share and NTFS, and granting the site server computer account full control, still the same issue.  I could access it manually w/ the UNC path and read/write/delete all of its contents.
    If I copy the folder to another server and point it there in the Application, it distributes the content to DPs just fine.

     I tried replacing NTFS permissions on all the folders; still getting Win32 last error = 5.
     I was also seeing some errors in the event log:
    The shadow copies of volume \\?...ac2-11e4-9f6d-005056a7533c} were aborted because of an IO failure on volume \\?...ac2-11e4-9f6d-005056a7533c}.
    The system failed to flush data to the transaction log. Corruption may occur.
    Reset to device, \Device\RaidPort0, was issued.
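     For reference, the Win32 error codes that distmgr.log reports can be decoded on any Windows machine with `net helpmsg`; error 5 is plain access denied, which is why permissions are the first suspect even when the share looks fine interactively:

     ```
     C:\> net helpmsg 5

     Access is denied.
     ```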

  • Flash recovery area: archivelog file directory modification time

     Using Oracle Database 10g Release 10.2.0.3.0 - 64bit (Standard Edition) on Solaris 10, SunOS 5.10 Generic_118833-33, archivelog files in the Flash Recovery Area are still present after the time at which they should have become redundant and therefore been deleted:
    bvsmdb01-oracle10g $ date
    Thu Jan 24 16:04:46 GMT 2008
    bvsmdb01-oracle10g $ ls -lt archivelog
    total 20
    drwxr-x--- 2 oracle10g oinstall 1024 Jan 24 16:00 2008_01_24
    drwxr-x--- 2 oracle10g oinstall 512 Jan 23 23:30 2008_01_19
    drwxr-x--- 2 oracle10g oinstall 1536 Jan 23 23:04 2008_01_23
    drwxr-x--- 2 oracle10g oinstall 1536 Jan 22 23:30 2008_01_18
    drwxr-x--- 2 oracle10g oinstall 1536 Jan 22 22:53 2008_01_22
    drwxr-x--- 2 oracle10g oinstall 1024 Jan 21 23:07 2008_01_21
    drwxr-x--- 2 oracle10g oinstall 512 Jan 20 22:20 2008_01_20
    bvsmdb01-oracle10g $
    The archivelog directory for 2008_01_19 has a modification time of Jan 23 23:30 - this is almost 4 days after it was last written to.
    The current redundancy setting in RMAN is shown in the output of show all:
    RMAN> show all;
    using target database control file instead of recovery catalog
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/10.2.0/dbs/snapcf_kierli.f'; # default
     So, the current retention policy is a recovery window of three days. Today is 24/1, so the archivelog directory from 19/1 should have been deleted by now, but it has not been; its modification time is Jan 23 23:30 2008.
    Why is this? How can this be investigated? Does anyone have any suggestions?
    Thanks
    Jerry Mander

    From 2 Day DBA:
    Even after files in the flash recovery area are obsolete, they are generally not deleted from the flash recovery area until space is needed to store new files. As long as space permits, files recently moved to tape will remain on disk as well, so that they will not have to be retrieved from tape in the event of a recovery.
    What is the current space used in the FRA and what is the FRA disk limit ?
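     Both numbers are available from the data dictionary; a query along these lines (standard 10g views) shows the configured limit, the current usage and how much is reclaimable:

     ```sql
     SELECT name,
            space_limit / 1024 / 1024       AS limit_mb,
            space_used / 1024 / 1024        AS used_mb,
            space_reclaimable / 1024 / 1024 AS reclaimable_mb,
            number_of_files
     FROM   v$recovery_file_dest;
     ```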

  • Problem Converting standby database from snapshot to physical

     Any help will be greatly appreciated...
     I am trying to convert a standby database that is in "snapshot" mode back to a "physical" standby, and I am encountering problems in the process from the "DGMGRL" command line.
     Both instances are on the same physical machine. Everything was working fine until I tried to change the db from snapshot to physical. DGMGRL starts the conversion process and is able to shut the instance down, but when trying to restart the instance it fails and reports that the service is not defined.
    Here is the issue I am facing:
    C:\app\MMJ\product\11.1.0\db_1\BIN>
    C:\app\MMJ\product\11.1.0\db_1\BIN>set ORACLE_SID=I11G1 <======= the primary database
    C:\app\MMJ\product\11.1.0\db_1\BIN>dgmgrl
    DGMGRL for 32-bit Windows: Version 11.1.0.6.0 - Production
    Copyright (c) 2000, 2005, Oracle. All rights reserved.
    Welcome to DGMGRL, type "help" for information.
    DGMGRL> connect sys/password@i11g1sb <===== the standby database currently in snapshot mode
    Connected.
    DGMGRL> connect sys/password@i11g1 <==== the primary database
    Connected.
    DGMGRL> convert database 'i11g1sb' to physical standby;
    Converting database "i11g1sb" to a Physical Standby database, please wait...
    Operation requires shutdown of instance "i11g1sb" on database "i11g1sb"
    Shutting down instance "i11g1sb"...
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    Operation requires startup of instance "i11g1sb" on database "i11g1sb"
    Starting instance "i11g1sb"...
    Unable to connect to database
    ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
    Failed.
    You are no longer connected to ORACLE
    Please connect again.
    Unable to start instance "i11g1sb"
    You must start instance "i11g1sb" manually
    Failed to convert database "i11g1sb"
    DGMGRL> show configuration
    Configuration
    Name: DGConfig1
    Enabled: YES
    Protection Mode: MaxPerformance
    Databases:
    i11g1 - Primary database
    i11g1sb - Snapshot standby database (disabled)
    Fast-Start Failover: DISABLED
    Current status for "DGConfig1":
    SUCCESS
    DGMGRL> exit
    C:\app\MMJ\product\11.1.0\db_1\BIN>set ORACLE_SID=I11G1SB
    C:\app\MMJ\product\11.1.0\db_1\BIN>sqlplus /nolog
    SQL*Plus: Release 11.1.0.6.0 - Production on Wed Mar 25 11:40:16 2009
    Copyright (c) 1982, 2007, Oracle. All rights reserved.
    SQL> connect / as sysdba
    Connected to an idle instance.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 426852352 bytes
    Fixed Size 1333648 bytes
    Variable Size 369100400 bytes
    Database Buffers 50331648 bytes
    Redo Buffers 6086656 bytes
    Database mounted.
    Database opened.
     ==============>>>> as you can see, I can start the standby database without any problems and even query the table in which I made some changes (I had added the record with REGION_ID=30):
    SQL> select * from hr.regions;
    REGION_ID REGION_NAME
    30 JAPAC
    1 Europe
    2 Americas
    3 Asia
    4 Middle East and Africa
    SQL>
    The same table on the primary database has the following records in the same table:
    Microsoft Windows XP [Version 5.1.2600]
    (C) Copyright 1985-2001 Microsoft Corp.
    C:\Documents and Settings\MMJ>set ORACLE_HOME=c:\app\mmj\product\11.1.0\db_1
    C:\Documents and Settings\MMJ>set ORACLE_SID=i11g1
    C:\Documents and Settings\MMJ>
    C:\Documents and Settings\MMJ>cd %ORACLE_HOME%
    C:\app\MMJ\product\11.1.0\db_1>cd bin
    C:\app\MMJ\product\11.1.0\db_1\BIN>
    C:\app\MMJ\product\11.1.0\db_1\BIN>
    C:\app\MMJ\product\11.1.0\db_1\BIN>sqlplus /nolog
    SQL*Plus: Release 11.1.0.6.0 - Production on Wed Mar 25 11:43:10 2009
    Copyright (c) 1982, 2007, Oracle. All rights reserved.
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> select * from hr.regions;
    REGION_ID REGION_NAME
    1 Europe
    2 Americas
    3 Asia
    4 Middle East and Africa
    20 JAPAC
    40 JAPAC
    6 rows selected.
    SQL>
    =======> The TNSPING works fine against both the databases.
    C:\app\MMJ\product\11.1.0\db_1\BIN>set O
    ORACLE_HOME=c:\app\mmj\product\11.1.0\db_1
    ORACLE_SID=I11G1SB
    OS=Windows_NT
    C:\app\MMJ\product\11.1.0\db_1\BIN>
    C:\app\MMJ\product\11.1.0\db_1\BIN>tnsping i11g1sb
    TNS Ping Utility for 32-bit Windows: Version 11.1.0.6.0 - Production on 25-MAR-2009 16:56:42
    Copyright (c) 1997, 2007, Oracle. All rights reserved.
    Used parameter files:
    c:\app\mmj\product\11.1.0\db_1\network\admin\sqlnet.ora
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = MHost)(PORT = 1523))) (CONNECT_DATA = (SERVICE_NAM
    E = I11G1SB)))
    OK (230 msec)
    C:\app\MMJ\product\11.1.0\db_1\BIN>tnsping i11g1
    TNS Ping Utility for 32-bit Windows: Version 11.1.0.6.0 - Production on 25-MAR-2009 16:56:47
    Copyright (c) 1997, 2007, Oracle. All rights reserved.
    Used parameter files:
    c:\app\mmj\product\11.1.0\db_1\network\admin\sqlnet.ora
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = MHost)(PORT = 1523))) (CONNECT_DATA = (SERVICE_NAM
    E = I11G1)))
    OK (30 msec)
    C:\app\MMJ\product\11.1.0\db_1\BIN>lsnrctl
    LSNRCTL for 32-bit Windows: Version 11.1.0.6.0 - Production on 25-MAR-2009 16:57:01
    Copyright (c) 1991, 2007, Oracle. All rights reserved.
    Welcome to LSNRCTL, type "help" for information.
    LSNRCTL> set current_listener i11g1
    Current Listener is i11g1
    LSNRCTL> services
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=MHost)(PORT=1523)))
    Services Summary...
    Service "I11G1" has 1 instance(s).
    Instance "I11G1", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:1 refused:0
    LOCAL SERVER
    Service "I11G1SB" has 1 instance(s).
    Instance "I11G1SB", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:55 refused:1
    LOCAL SERVER
    Service "I11G1SB_DGMGRL" has 1 instance(s).
    Instance "I11G1SB", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:0 refused:0
    LOCAL SERVER
    Service "i11g1.mhost" has 1 instance(s).
    Instance "i11g1", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:6 refused:0 state:ready
    LOCAL SERVER
    Service "i11g1XDB.mhost" has 1 instance(s).
    Instance "i11g1", status READY, has 1 handler(s) for this service...
    Handler(s):
    "D000" established:0 refused:0 current:0 max:1022 state:ready
    DISPATCHER <machine: MHost, pid: 3944>
    (ADDRESS=(PROTOCOL=tcp)(HOST=MHost)(PORT=1430))
    Service "i11g1_DGB.mhost" has 1 instance(s).
    Instance "i11g1", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:6 refused:0 state:ready
    LOCAL SERVER
    Service "i11g1_XPT.mhost" has 1 instance(s).
    Instance "i11g1", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:6 refused:0 state:ready
    LOCAL SERVER
    Service "i11g1sb.mhost" has 1 instance(s).
    Instance "i11g1sb", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:0 refused:0 state:ready
    LOCAL SERVER
    Service "i11g1sbXDB.mhost" has 1 instance(s).
    Instance "i11g1sb", status READY, has 1 handler(s) for this service...
    Handler(s):
    "D000" established:0 refused:0 current:0 max:1022 state:ready
    DISPATCHER <machine: MHost, pid: 7336>
    (ADDRESS=(PROTOCOL=tcp)(HOST=MHost)(PORT=1931))
    Service "i11g1sb_DGB.MHost" has 1 instance(s).
    Instance "i11g1sb", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:0 refused:0 state:ready
    LOCAL SERVER
    Service "i11g1sb_XPT.mhost" has 1 instance(s).
    Instance "i11g1sb", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:0 refused:0 state:ready
    LOCAL SERVER
    The command completed successfully
    LSNRCTL>

    Thanks for the response.
    So, here is the status now....with a little background...
     After my original post, I started to read the manuals and found the SQL command to convert the database back from snapshot to physical standby (sb).
    That worked fine and I had my snapshot sb back to physical sb.
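     For anyone finding this thread later, the manual sequence referred to above is roughly the following (11g syntax as documented, run on the standby; treat it as a sketch, not a transcript from this session):

     ```
     SQL> shutdown immediate
     SQL> startup mount
     SQL> alter database convert to physical standby;
     SQL> shutdown immediate
     SQL> startup mount
     SQL> alter database recover managed standby database using current logfile disconnect;
     ```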
     So when you posted the suggestion, I already had my db in physical sb mode. I said no problem, I will convert it back to snapshot and then back again using DGMGRL instead of SQL*Plus.
    Well here is how my listener is configured now.
     SID_LIST_I11G1 =
       (SID_LIST =
         (SID_DESC =
           (GLOBAL_DBNAME = I11G1.MHOST)
           (ORACLE_HOME = c:\app\mmj\product\11.1.0\db_1)
           (SID_NAME = I11G1)
         )
         (SID_DESC =
           (GLOBAL_DBNAME = I11G1SB.MHOST)
           (ORACLE_HOME = c:\app\mmj\product\11.1.0\db_1)
           (SID_NAME = I11G1SB)
         )
         (SID_DESC =
           (GLOBAL_DBNAME = I11G1SB_DGMGRL)
           (ORACLE_HOME = c:\app\mmj\product\11.1.0\db_1)
           (SID_NAME = I11G1SB)
         )
       )
    Then using dgmgrl I tried to change the db from p-sb to s-sb and the results are not good....
    "i11g1 >"dgmgrl
    DGMGRL for 32-bit Windows: Version 11.1.0.6.0 - Production
    Copyright (c) 2000, 2005, Oracle. All rights reserved.
    Welcome to DGMGRL, type "help" for information.
    DGMGRL>
    DGMGRL> connect sys/[email protected]
    Connected.
    DGMGRL>
    DGMGRL>
    DGMGRL> convert database 'i11g1sb' to snapshot standby;
    Converting database "i11g1sb" to a Snapshot Standby database, please wait...
    Database "i11g1sb" converted successfully
    DGMGRL> show configuration
    Configuration
    Name: DGConfig1
    Enabled: YES
    Protection Mode: MaxPerformance
    Databases:
    i11g1 - Primary database
    i11g1sb - Snapshot standby database
    Fast-Start Failover: DISABLED
    Current status for "DGConfig1":
    Warning: ORA-16607: one or more databases have failed
    DGMGRL> show configuration
    Configuration
    Name: DGConfig1
    Enabled: YES
    Protection Mode: MaxPerformance
    Databases:
    i11g1 - Primary database
    i11g1sb - Snapshot standby database
    Fast-Start Failover: DISABLED
    Current status for "DGConfig1":
    Warning: ORA-16607: one or more databases have failed
    DGMGRL> show database 'i11g1sb';
    Database
    Name: i11g1sb
    Role: SNAPSHOT STANDBY
    Enabled: YES
    Intended State: APPLY-OFF
    Instance(s):
    i11g1sb
    Current status for "i11g1sb":
    SUCCESS
    DGMGRL> show database 'i11g1';
    Database
    Name: i11g1
    Role: PRIMARY
    Enabled: YES
    Intended State: TRANSPORT-ON
    Instance(s):
    i11g1
    Current status for "i11g1":
    Error: ORA-16778: redo transport error for one or more databases
    DGMGRL> exit
     Not sure if the following (notice the typo in the service name) in the log_archive_dest_2 parameter definition on the standby db has anything to do with this. I did not get this error when I initially converted to s-sb.
     Also, I checked all my session notes: I did not type the command to set this parameter on the standby db, so it was not a typo on my part. It seems to have come from the RMAN script supplied with the OBE. That script is supposed to clone the primary db to a standby db and, in the process, replace the string /i11g1/ with /i11g1sb/.
    SQL> show parameter log_archive_dest_2
    NAME TYPE VALUE
     log_archive_dest_2 string service=i11g1sbsb async valid_for=(online_logfile,primary_role) db_unique_name=i11g1sb
    SQL>
    SQL>
    SQL> select instance_name from v$instance;
    INSTANCE_NAME
    i11g1sb
    SQL>
    Given all this, the archive logs seem to be shipping correctly to the sby d/b.
    "i11g1sb >"cd C:\app\MMJ\flash_recovery_area\i11g1sb\ARCHIVELOG\2009_03_30
    "i11g1sb >"dir
    Volume in drive C is Local Disk
    Volume Serial Number is 3189-6472
    Directory of C:\app\MMJ\flash_recovery_area\i11g1sb\ARCHIVELOG\2009_03_30
    30/03/2009 09:05 PM <DIR> .
    30/03/2009 09:05 PM <DIR> ..
    30/03/2009 05:41 PM 35,627,008 O1_MF_1_137_4X2H4JJM_.ARC
    30/03/2009 05:41 PM 1,910,784 O1_MF_1_138_4X2H4LVC_.ARC
    30/03/2009 09:04 PM 10,447,360 O1_MF_1_139_4X2V03RW_.ARC
    30/03/2009 09:05 PM 8,654,848 O1_MF_1_140_4X2V3BWB_.ARC
    4 File(s) 56,640,000 bytes
    2 Dir(s) 39,716,225,024 bytes free
    "i11g1sb >"
    "i11g1 >"dir
    Volume in drive C is Local Disk
    Volume Serial Number is 3189-6472
    Directory of C:\app\MMJ\flash_recovery_area\I11G1\ARCHIVELOG\2009_03_30
    30/03/2009 09:05 PM <DIR> .
    30/03/2009 09:05 PM <DIR> ..
    30/03/2009 04:09 PM 35,627,008 O1_MF_1_137_4X29QHTV_.ARC
    30/03/2009 04:24 PM 1,910,784 O1_MF_1_138_4X2BMOC7_.ARC
    30/03/2009 06:32 PM 10,447,360 O1_MF_1_139_4X2L4J3X_.ARC
    30/03/2009 09:05 PM 8,654,848 O1_MF_1_140_4X2V37KL_.ARC
    4 File(s) 56,640,000 bytes
    2 Dir(s) 39,716,229,120 bytes free
     I am tempted to start all over again, but I'd rather use this opportunity to debug the issue (as a learning exercise); I can always start from scratch. That brings up another Q: what do I need to do to blow away all traces of the standby database (including all the archive logs etc.) while keeping my primary intact? I'd also like to blow away all snapshot and archive logs for the primary as well.
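     If it does come to starting over, a hedged sketch of the tear-down (names taken from this thread; double-check every step against the Data Guard broker documentation before deleting anything):

     ```
     DGMGRL> connect sys/password@i11g1
     DGMGRL> remove database 'i11g1sb';

     -- on the standby instance:
     SQL> shutdown abort
     -- then delete its datafiles, control files, spfile/password file and
     -- C:\app\MMJ\flash_recovery_area\i11g1sb\ at the OS level

     -- on the primary, stop shipping and purge the old archived logs:
     SQL> alter system set log_archive_dest_state_2 = defer;
     RMAN> delete archivelog all;
     ```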

  • Time Machine, Disk Utility, and clearing out corrupted snapshots

    Hi, all. I've been having some troubling issues with Time Machine and could use your collective insight.
    I have a 500GiB external USB2 disk that I've been using for Time Machine since December (the oldest snapshot in Backups.backupdb is named 2007-12-13-222250). Everything worked fine until sometime in late January, when Time Machine decided to ax 2/3 of my Applications directory. Time Machine's logging is terse, so I wasn't able to determine what caused the corruption. The fact that this error wasn't caught and flagged by Time Machine bothers me.
    Especially because I discovered this in the process of needing to recover from a crash on my laptop's hard drive -- I had to roll back to a previous, complete snapshot to get all my applications back.
    After that I had to engage in some jiggery-pokery to get Time Machine to re-snapshot everything, but things proceeded normally until I received my new copy of DiskWarrior 4.1 last night (although DiskWarrior is an innocent bystander here). I tried to use DiskWarrior to rebuild the directory on the Time Machine disk, but it failed before trying to write anything.
    (For those who care: after some back and forth with Alsoft tech support, we determined this was likely because my Time Machine volume has 6.2GB of catalog data, and DiskWarrior works by rebuilding the directory in memory, so it has to be able to fit the whole catalog into virtual memory.)
    I was a little paranoid that something might be wrong with the disk after that, so I fired up Disk Utility and told it to repair the disk. It churned for about 6 hours and repaired an extraordinarily large number of hardlink reference count errors, but completed by claiming that the disk had been repaired.
    When I next told Time Machine to run, instead of copying over a couple GB of data like it usually does, it started copying 38GB! ***? I took a look at the previous Time Machine snapshots, and discovered that Disk Utility had totally corrupted every snapshot I'd made since last month's fandango. Most of the root-level directories in the snapshots now look like this:
    $ ls -al /Volumes/Time\ Machine\ Backups/Backups.backupdb/xxx.net/2008-03-13-025640/Euphrosyne/
    total 128
    drwxrwxr-t@ 16 root admin 1020 Feb 15 00:17 ./
    drwxr-xr-x@ 3 root ogd 204 Mar 13 02:56 ../
    -r--r--r--@ 9151 root 20051003 0 Feb 11 23:13 .DRM_Data
    -r--r--r--@ 50574 root 20051004 0 Feb 11 23:13 .DS_Store
    -r--r--r--@ 9154 root 20051006 0 Feb 11 23:13 .SymAVQSFile
    -r--r--r--@ 9155 root 20051007 0 Feb 11 23:13 .VolumeIcon.icns.candybarbackup
    drwxr-xr-x@ 2 ogd admin 68 Feb 29 2004 .buildinstaller1.tmp/
    -r--r--r--@ 9149 root 20051001 0 Feb 11 23:13 .com.apple.timemachine.supported
    -r--r--r--@ 9150 root 20051002 0 Feb 11 23:13 .com.apple.timemachine.supported (from old Mac)
    -r--r--r--@ 9153 root 20051005 0 Feb 11 23:13 .redethist
    -r--r--r--@ 58730 root 20051009 0 Feb 11 23:13 Applications
    -r--r--r--@ 37526 root 20051012 0 Feb 11 23:13 Desktop Folder
    -r--r--r--@ 13328 root 20051013 0 Feb 11 23:13 Developer
    -r--r--r--@ 64211 root wheel 0 Feb 11 23:13 Library
    -r--r--r--@ 49886 root 20051080 0 Feb 11 23:13 Network
    -r--r--r--@ 5643 root 20051120 0 Feb 11 23:13 System
    lrwxr-xr-x 1 root ogd 60 Mar 13 02:56 User Guides And Information@ -> /Library/Documentation/User Guides and Information.localized
    drwxr-xr-x@ 5 root admin 204 Dec 6 02:07 Users/
    -r--r--r--@ 2554 root wheel 0 Feb 11 23:13 Volumes
    -r--r--r--@ 37448 root 20051010 0 Feb 11 23:13 bin
    -r--r--r--@ 37525 root 20051011 0 Feb 11 23:13 cores
    lrwxr-xr-x@ 1 root ogd 11 Mar 13 02:56 etc@ -> private/etc
    -r--r--r--@ 26309 root 20051078 0 Feb 11 23:13 mach_kernel
    -r--r--r--@ 26310 root 20051079 0 Feb 11 23:13 mach_kernel.ctfsys
    lrwxr-xr-x 1 root ogd 12 Mar 13 02:56 opt@ -> /private/opt
    drwxr-xr-x@ 8 root wheel 272 Dec 6 02:31 private/
    -r--r--r--@ 52664 root 20051119 0 Feb 11 23:13 sbin
    lrwxr-xr-x@ 1 root ogd 11 Mar 13 02:56 tmp@ -> private/tmp
    -r--r--r--@ 41668 root 20052660 0 Feb 11 23:13 usr
    lrwxr-xr-x@ 1 root ogd 11 Mar 13 02:56 var@ -> private/var
    That is clearly, completely wrong, and it alarms me that Disk Utility could cause so much damage to a Time Machine volume.
    Since I also use SuperDuper! to regularly clone this disk, it's not as big a nightmare as it could be, but this means that I'm going to have to clear out every snapshot for the last month by hand, both because I obviously can't trust them anymore, and because I keep getting messages like this in the logs:
    2008/03/13 7:25:35 PM /System/Library/CoreServices/backupd[1229] Error: (-36) copying /.com.apple.timemachine.supported to /Volumes/Time Machine Backups/Backups.backupdb/xxx.net/2008-03-13-192246.inProgress/70254508-90E3-4FF A-AE46-9E8210095DAE/Euphrosyne
    2008/03/13 7:25:35 PM /System/Library/CoreServices/backupd[1229] Error: (-36) copying /.com.apple.timemachine.supported (from old Mac) to /Volumes/Time Machine Backups/Backups.backupdb/xxx.net/2008-03-13-192246.inProgress/70254508-90E3-4FF A-AE46-9E8210095DAE/Euphrosyne
    2008/03/13 7:25:35 PM /System/Library/CoreServices/backupd[1229] Error: (-36) copying /.DRM_Data to /Volumes/Time Machine Backups/Backups.backupdb/xxx.net/2008-03-13-192246.inProgress/70254508-90E3-4FF A-AE46-9E8210095DAE/Euphrosyne
    2008/03/13 7:25:35 PM /System/Library/CoreServices/backupd[1229] Error: (-36) copying /.DS_Store to /Volumes/Time Machine Backups/Backups.backupdb/xxx.net/2008-03-13-192246.inProgress/70254508-90E3-4FF A-AE46-9E8210095DAE/Euphrosyne
    2008/03/13 7:25:35 PM /System/Library/CoreServices/backupd[1229] Error: (-36) copying /.redethist to /Volumes/Time Machine Backups/Backups.backupdb/xxx.net/2008-03-13-192246.inProgress/70254508-90E3-4FF A-AE46-9E8210095DAE/Euphrosyne
    2008/03/13 7:25:35 PM /System/Library/CoreServices/backupd[1229] Error: (-36) copying /.SymAVQSFile to /Volumes/Time Machine Backups/Backups.backupdb/xxx.net/2008-03-13-192246.inProgress/70254508-90E3-4FF A-AE46-9E8210095DAE/Euphrosyne
    2008/03/13 7:25:35 PM /System/Library/CoreServices/backupd[1229] Error: (-36) copying /.VolumeIcon.icns.candybarbackup to /Volumes/Time Machine Backups/Backups.backupdb/xxx.net/2008-03-13-192246.inProgress/70254508-90E3-4FF A-AE46-9E8210095DAE/Euphrosyne
    I'd like to keep the old snapshots if I can. What's likely to happen if I just go and clear out the b0rked backup sessions by hand?
     (Hmm... it appears the freed space won't come back, because I just deleted them all and the disk didn't recover the freed space.)
    Here's the important question: has anyone else had issues using Disk Utility on a Time Machine volume, and does anyone at Apple have any plans to add integrity checking to Time Machine? As it stands, I don't really feel like I can trust it in its present form, as convenient as it is...
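     Some context on the "hardlink reference count errors" mentioned above: Time Machine stores each unchanged file (and, on HFS+, whole unchanged directories) in a new snapshot as an additional hard link to the same data, so one file can carry a very large link count, and deleting a snapshot's link frees no space until the last link is gone. A portable illustration with ordinary files (GNU `stat` shown; on a Mac use `stat -f %l`):

     ```shell
     # Simulate two snapshots sharing one unchanged file via a hard link
     mkdir -p snap1 snap2
     echo "payload" > snap1/file
     ln snap1/file snap2/file      # "snapshot 2" reuses the same inode

     stat -c %h snap1/file          # link count is now 2; the data exists once

     # Deleting the file from one "snapshot" only drops the link count
     rm snap2/file
     stat -c %h snap1/file          # back to 1; space frees only at count 0
     ```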

     I have trouble with Disk Utility on my new (1 TB LaCie) as well as on my old (500 GB WD) drive. After doing a backup, Disk Utility gives me a never-ending list of errors like
     HasFolderCount flag needs to be set (id = 599567)
     (it should be 0x10 instead of 0)
     Everything was O.K. before doing the backup, and so is the disk being backed up, according to Disk Utility.
     Up to now, no other problem has occurred, but I'd like to clear this issue up before my trust in Time Machine is restored.

  • AD snapshot - how to mount/recover when DB is installed to drive other than C?

    When I installed AD, I put everything on the D: (DB, logs, sysvol)
     I am trying to utilize the AD snapshot feature. I can take snapshots and delete them all day long. I cannot, however, mount and access them. I started off with trying
     Ashley McGlone's function set, but after it failed to mount I contacted him and he said it was not written to handle AD that was not installed on C:.
     So I went back to the NTDSUTIL commands that his functions were written to drive in the background. I cleared all snapshots and manually took one.
    Here you can see even though I took ONE snapshot, two show up: one for C and one for D
    I went ahead and mounted both at this point. Then tried running the following command and this is the error I get:
    I also tried running that command on both snapshots that were mounted.
    Has anyone else successfully used the AD snapshot utility when AD is not installed using the default locations? 
    Is this action even supported?

     Yes, this works and is supported. Try running it in a standard cmd shell, not in PowerShell; PowerShell requires paths to be prefixed with .\
    Enfo Zipper
    Christoffer Andersson – Principal Advisor
    http://blogs.chrisse.se - Directory Services Blog
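     For reference, the underlying sequence those functions wrap looks roughly like this from an elevated cmd prompt (the snapshot index, timestamp in the mounted path, and LDAP port below are illustrative, not taken from this thread):

     ```
     C:\> ntdsutil
     ntdsutil: activate instance ntds
     ntdsutil: snapshot
     snapshot: list all
     snapshot: mount 2
     snapshot: quit
     ntdsutil: quit

     REM expose the mounted copy of the DIT over LDAP; note the D-volume path
     C:\> dsamain -dbpath "C:\$SNAP_201501010000_VOLUMED$\NTDS\ntds.dit" -ldapport 51389
     ```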

  • Backup failed with Error: (-50) Creating Directory

     Repeats ad nauseam.
    This is backing up to a second internal drive that is an exact duplicate (in terms of HD model and size).
    Does anyone know what Error -50 means? Lack of permissions? Invalid name (doesn't seem like it)?
    8/30/08 9:06:37 AM /System/Library/CoreServices/backupd Starting standard backup
    8/30/08 9:06:37 AM /System/Library/CoreServices/backupd Backing up to: /Volumes/Sliffy Time/Backups.backupdb
    8/30/08 9:06:58 AM /System/Library/CoreServices/backupd Error: (-50) Creating directory 2008-08-30-090658.inProgress
    8/30/08 9:06:58 AM /System/Library/CoreServices/backupd Failed to make snapshot container.
    8/30/08 9:07:03 AM /System/Library/CoreServices/backupd Backup failed with error: 2

    Hi Glenn,
    Thanks for the suggestion. Nope, it's not listed. The only thing listed is my Time Machine volume.
    After a reboot, Time Machine seems to be working. It's making backups on schedule and the logs look good, not reporting any strangeness.
    A bit bummed about these phantom errors that go away on reboot. I'll keep on eye on the error/reboot frequency.
    Rob
