[Solved] diff vs rsync

I have two folders, on different machines, with the same content. One of the machines has access to both folders through an sshfs mount. The filesystem is the same on both machines: ext4. Running diff -ur a/ b/ yields no output, i.e. the folders' content is the same. However, running "rsync --progress -avz a/ b/" causes ALL the content to start being copied (in this example, b/ is the sshfs mount point). Moreover, the folders are about 1 GB in size, and both contain a lot of files. If I interrupt rsync's copying and resume afterwards, the copying resumes ***at the point where it had been interrupted***. So I guess my question is... what is causing rsync to behave in this way? (Not the way in which it resumes where it left off; rather, why does it think that files that are in fact equal, according to diff at least, are different?)
Last edited by gauthma (2013-06-18 00:39:11)

From the rsync man page, under OPTIONS:
-c, --checksum
        This changes the way rsync checks if the files have been changed and are
        in need of a transfer. Without this option, rsync uses a "quick check"
        that (by default) checks if each file's size and time of last modification
        match between the sender and receiver. This option changes this to
        compare a 128-bit checksum for each file that has a matching size.
Edit: Seems the -c option could mean lots of disk thrashing because both copies have to be read in full to compute the checksums.
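A quick way to see which check is driving the transfer, without copying anything, is a dry run (all standard rsync flags):
rsync -avzn a/ b/               # quick check: size + mtime; lists files whose metadata differs
rsync -avzn -c a/ b/            # checksum check: lists only files whose content differs
rsync -avz --size-only a/ b/    # ignore mtimes entirely (cheaper than -c, but less careful)
If the first dry run lists everything and the second lists nothing, the sshfs mount may be reporting different (or lower-precision) modification times while the contents are identical; rsync's --modify-window option can paper over small timestamp rounding.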
Last edited by thisoldman (2013-06-17 23:25:00)

Similar Messages

  • [SOLVED] bash help: rsync only if device mounted

    Hi,
    First bash-script here - the tldp.org-guide has gotten me somewhere already, but the final condition 'if rsync...' seems to fail and I don't know why. The script checks if the location to backup the files to is a mountpoint. If not, the script should mount it or die. If it is a mountpoint, rsync should be run.
#!/bin/bash
## VARIABLES
# Set location to backup from
BACKUP_FROM="/srv/media/"
# Set location to backup to
BACKUP_TO="/media/backup/media/"
BACKUP_DEV="e3434573-ad6f-4c44-8168-391292ba5ec5"
BACKUP_MNT="/media/backup"
# Log file
LOG_FILE="/var/log/script_sync_media.log"
## SCRIPT
# Check if the log file exists
if [ ! -e $LOG_FILE ]; then
    touch $LOG_FILE
fi
# Check if the drive is mounted
if [[ `! mountpoint -q $BACKUP_MNT` ]]; then
    echo "$(date "+%Y-%m-%d %k:%M:%S") - Backup device needed mounting!" >> $LOG_FILE
    # If not, mount the drive
    if [[ `mount -U $BACKUP_DEV $BACKUP_MNT` ]]; then
        echo "$(date "+%Y-%m-%d %k:%M:%S") - Backup device mounted." >> $LOG_FILE
    else
        echo "$(date "+%Y-%m-%d %k:%M:%S") - Unable to mount backup device." >> $LOG_FILE
        exit
    fi
fi
# Start entry in the log
echo "$(date "+%Y-%m-%d %k:%M:%S") - Sync started." >> $LOG_FILE
# Start sync
if [[ `rsync -a --delete $BACKUP_FROM $BACKUP_TO` ]]; then
    echo "$(date "+%Y-%m-%d %k:%M:%S") - Sync completed successfully." >> $LOG_FILE
else
    echo "$(date "+%Y-%m-%d %k:%M:%S") - Sync failed." >> $LOG_FILE
fi
# End entry in the log
echo "" >> $LOG_FILE
exit
It is probably a trivial problem for a bash professional? The log states 'Sync started.' and immediately 'Sync failed.'...
    Thx.
    Vincent
    Last edited by zenlord (2011-03-05 15:29:41)
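A note on the root problem in the script above: a condition of the form if [[ `command` ]] tests whether the command printed anything to stdout, not whether it succeeded. rsync -a --delete prints nothing on success (there is no -v), so the test is always false and the script logs 'Sync failed.' even when the sync worked; the mountpoint and mount checks misfire the same way. A minimal sketch of the difference:
# Wrong: [[ ... ]] tests the command's (empty) output, not its exit status
if [[ `rsync -a --delete "$BACKUP_FROM" "$BACKUP_TO"` ]]; then
    echo "never reached when rsync prints nothing"
fi
# Right: test the exit status directly
if rsync -a --delete "$BACKUP_FROM" "$BACKUP_TO"; then
    echo "sync succeeded"
fi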

    OK, a little bit later than I should have replied, but I have incorporated the tips provided above, and a few other changes:
    * added an extra check for the target dir
    * applied general exit codes
* added unmount after running the script (which makes the earlier warning if the device needs to be mounted a little superfluous)
    Here goes:
#!/bin/bash
## VARIABLES
# Set source location
BACKUP_FROM="/srv/media/"
# Set target location
BACKUP_TO="/media/backup/media/"
BACKUP_DEV="xxxxxxx-xxxxx-xxxxxxxxxxxxxxx" # UUID of the disk
BACKUP_MNT="/media/backup"
# Log file
LOG_FILE="/var/log/script_sync_media.log"
## SCRIPT
# Check that the log file exists
if [ ! -e "$LOG_FILE" ]; then
    touch "$LOG_FILE"
fi
# Check that source dir exists and is readable.
if [ ! -r "$BACKUP_FROM" ]; then
    echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to read source dir." >> "$LOG_FILE"
    echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to sync." >> "$LOG_FILE"
    echo "" >> "$LOG_FILE"
    exit 1
fi
# Check that target dir exists and is writable.
if [ ! -w "$BACKUP_TO" ]; then
    echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to write to target dir." >> "$LOG_FILE"
    echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to sync." >> "$LOG_FILE"
    echo "" >> "$LOG_FILE"
    exit 1
fi
# Check if the drive is mounted (-q keeps mountpoint's output out of the terminal)
if ! mountpoint -q "$BACKUP_MNT"; then
    echo "$(date "+%Y-%m-%d %k:%M:%S") - WARNING: Backup device needs mounting!" >> "$LOG_FILE"
    # If not, mount the drive; test the exit status directly, not with [ ]
    if mount -U "$BACKUP_DEV" "$BACKUP_MNT"; then
        echo "$(date "+%Y-%m-%d %k:%M:%S") - Backup device mounted." >> "$LOG_FILE"
    else
        echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to mount backup device." >> "$LOG_FILE"
        echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to sync." >> "$LOG_FILE"
        echo "" >> "$LOG_FILE"
        exit 1
    fi
fi
# Start entry in the log
echo "$(date "+%Y-%m-%d %k:%M:%S") - Sync started." >> "$LOG_FILE"
# Start sync
if rsync -a -v --delete "$BACKUP_FROM" "$BACKUP_TO" &>> "$LOG_FILE"; then
    echo "$(date "+%Y-%m-%d %k:%M:%S") - Sync completed successfully." >> "$LOG_FILE"
else
    echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: rsync-command failed." >> "$LOG_FILE"
    echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to sync." >> "$LOG_FILE"
    echo "" >> "$LOG_FILE"
    exit 1
fi
# Unmount the drive so it does not accidentally get damaged or wiped
if umount "$BACKUP_MNT"; then
    echo "$(date "+%Y-%m-%d %k:%M:%S") - Backup device unmounted." >> "$LOG_FILE"
else
    echo "$(date "+%Y-%m-%d %k:%M:%S") - WARNING: Backup device could not be unmounted." >> "$LOG_FILE"
fi
# End entry in the log
echo "" >> "$LOG_FILE"
exit 0
    Usage:
1. Copy/paste to a .sh file
2. Make the file executable (chmod +x <file>.sh)
3. Set an entry in crontab to run this script daily/weekly/monthly
    Maybe this little script helps other bash newbies...
    Last edited by zenlord (2012-09-23 21:08:32)

  • [SOLVED] How to rsync with sudo needed to a destination in LAN

I need to do a full backup as demonstrated here, but the destination is another machine on my LAN. I have already done many rsync operations over the LAN without needing a password, but I never needed sudo rsync before. This full backup operation clearly requires root, so I added my source machine's root key to the destination's ~/.ssh/authorized_keys, as this method worked well when I needed to rsync without a password from user1@src to user2@dest. Now this is what appears when I use sudo rsync....
    @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
    Someone could be eavesdropping on you right now (man-in-the-middle attack)!
    It is also possible that a host key has just been changed.
    The fingerprint for the ECDSA key sent by the remote host is
    c1:bc:80:c2:1e:40:61:f0:4f:b2:70:68:6b:c5:de:04.
    Please contact your system administrator.
    Add correct host key in /root/.ssh/known_hosts to get rid of this message.
    Offending ECDSA key in /root/.ssh/known_hosts:1
    ECDSA host key for 192.168.1.2 has changed and you have requested strict checking.
    Host key verification failed.
    rsync: connection unexpectedly closed (0 bytes received so far) [sender]
    rsync error: unexplained error (code 255) at io.c(226) [sender=3.1.1]
    What should I do?
    Last edited by miktore (2014-08-16 15:51:43)

    miktore wrote:What should I do?
    Well, I would assume that since you have a fingerprint listed (c1:bc:80:c2:1e:40:61:f0:4f:b2:70:68:6b:c5:de:04), you should check your key.
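If the fingerprint matches what the destination machine itself reports (for example via ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub run on 192.168.1.2), the key change is legitimate (e.g. a reinstall regenerated the host keys) and the stale entry can simply be dropped; a sketch:
# Remove the stale entry from root's known_hosts (the "Offending ... :1" line above)
ssh-keygen -f /root/.ssh/known_hosts -R 192.168.1.2
# Reconnect once to review and accept the new host key
sudo ssh 192.168.1.2 true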

  • [solved] diff /etc/pam.d/login /etc/pam.d/login.pacnew

    ... results in
    2,21c2,7
    < auth required pam_securetty.so
    < auth requisite pam_nologin.so
    < auth required pam_unix.so nullok
    < auth required pam_tally.so onerr=succeed file=/var/log/faillog
    < # use this to lockout accounts for 10 minutes after 3 failed attempts
    < #auth required pam_tally.so deny=2 unlock_time=600 onerr=succeed file=/var/log/faillog
    < account required pam_access.so
    < account required pam_time.so
    < account required pam_unix.so
    < #password required pam_cracklib.so difok=2 minlen=8 dcredit=2 ocredit=2 retry=3
    < #password required pam_unix.so sha512 shadow use_authtok
    < session required pam_unix.so
    < session required pam_env.so
    < session required pam_motd.so
    < session required pam_limits.so
    < session optional pam_mail.so dir=/var/spool/mail standard
    < session optional pam_lastlog.so
    < session optional pam_loginuid.so
    < -session optional pam_ck_connector.so nox11
    < -session optional pam_systemd.so
    >
    > auth required pam_securetty.so
    > auth requisite pam_nologin.so
    > auth include system-local-login
    > account include system-local-login
    > session include system-local-login
Is it safe to use the new /etc/pam.d/login?
    Last edited by student975 (2012-07-05 11:54:37)

    I assume that tomegun meant using the new one rather than booting with the old one. (The latter might also be fine - I've no idea.)
    I'm a bit confused about the role which /etc/pam.d/passwd is playing now. Should options I've added here be duplicated for the password lines in e.g. system-auth? Currently, I have this in passwd:
    password required pam_unix.so sha512 shadow nullok rounds=65536
    but since system-auth etc. seems to have its own password lines, I'm wondering if having this in passwd is now either pointless or at least insufficient.
The default setup, if I understand it correctly, is not actually that different from the old one. The diff above is missing the additions:
    > auth include system-local-login
    > account include system-local-login
    > session include system-local-login
    I think this is invoking the stuff in /etc/pam.d/system-local-login which in turn calls system-login and system-auth, for example. If you compare the cumulative effect, I believe there are only minor differences which don't impact security e.g. to do with announcing the last login time or displaying message of the day.
    EDIT: So adding that stuff all back into login just duplicates stuff with maybe some very minor differences such as requiring message of the day etc.
    Last edited by cfr (2012-07-04 23:25:35)

  • From-scratch minimalist backup implementation?

    I have done backups since I first used a linux distro, and I've been using a self-designed python script that leverages rsync and LUKS to perform serial encrypted backups to several external drives. I derived the python code from a BASH script I wrote several years previous, and it seemed to work pretty well after initial testing. However, upon recent inspection, it appears it isn't doing its job properly; also, looking back at the code I've realized that it's sloppy, inefficient, and fails to meet my standards of minimalism.
    That said, my problem is this: I need to design an effective backup solution within the following parameters:
    - it must be encrypted
    - it must support multiple disks
    - it must be rsync-based
- it should preferably use file compression, both for transfer (which I already do using the --compress option for rsync) and on the actual storage medium; if so:
        - the compression algorithm should preferably be space efficient over CPU-efficient
    - it must be minimalist, ideally using separate, simple, standard-issue and widely available Linux utilities united by a simple script (ideally BASH)
    I have some idea as to where to begin, but I'd like the input of the smart masses on this. I'm by no means a newbie to arch linux or linux in general, but I'd rather not overlook and/or bungle anything.
    Also, if this is the wrong category for this thread, I apologize and request that someone more educated in forum topography move it to a more appropriate location. It seemed a good fit, but the various fine distinctions between some forum categories continue to elude me.
    UPDATE:
    This is what I have so far, though it does seem inefficient:
    compression algorithm:
- tar.xz (are there any methods of compressing a tar archive with a better compression ratio? this was the best my research gave me.)
    rsync to transfer files
    GPG to encrypt compressed archives
    my question now is how to make these work together. I thought of compressing the folders to be backed up using tar + xz, then rsyncing them to a backup drive connected over USB.
I have four external drives I plan to use: one 1TB (to host a full system mirror), one 500GB (also for a full system mirror), one 64GB (home folder mirror), and one 2GB (documents mirror). This is the general disk scheme my old setup used.
The main problem I foresee is that my complete system mirror and my home folder mirror would both be very, very large (~13GB and ~10GB, respectively), and I don't see how rsync would be a practical solution for copying such large archives to an external disk. Correct me if I'm wrong, but doesn't rsync copy files that have changed, instead of the delta of each file? Wouldn't that mean that rsync would just copy the whole archive over again if it changed even in the slightest? Even if that weren't the case, where would I fit the archived files before they are transferred? I have an SSD as my main drive, so space is limited and so are R/W cycles, so holding the archives temporarily on disk before they are copied makes no sense.
    If my interpretation of rsync's functioning is correct, is there any way to make the archive once, transfer it to the external disk, and then simply sync the changes to the files on my main drive to the archive on the external ones? Or is there a way to tar-xz-encrypt the folders being copied (/, /home, and /home/me/docs) during the transfer without storing the archives on disk, and while maintaining rsync's ability to copy only altered files?
    if that scheme seems inefficient (as it does to me), does anyone have a better idea? My old system rsync'd the files on my main drive to encrypted partitions on my external drives, and while this solved the problem I mention above it also forwent the portability and space-saving nature of compressed tar archives.
    Last edited by ParanoidAndroid (2014-01-23 01:58:55)

    @firecat53:
    Woah! I didn't see that post for some reason. I haven't checked out the script yet, but I've heard of duplicity and even used it previously (in the form of deja-dup). However, I remember having issues with it and it never worked quite the way I wanted.
    I've been thinking about this, and as you all have suggested I have concluded that with the paradigm I proposed rsync-style snapshots aren't a good fit. So, here is my modified proposal:
an rdiff-type backup to a compressed tar archive. For a whole-system backup, for example:
    = initial backup =
    / is tarred, encrypted, compressed, then transferred to the backup disk all in one piece with something like
    tar - / | <GPG> | lrzip -b > /mnt/my_disk/orig.bak
    = future backups =
    a diff file is generated against /, which is then appended to the original tar archive. To restore a point in time, one would simply reverse the first step and apply the diff for that point.
    The problem with this setup is that I don't know if you can append data to an encrypted tar archive and have that data encrypted in the process. If not, I suppose one could encrypt the diffs with the same key as the original tar and append them that way. Also, I don't know if compression works that way, either. That is, I'm not sure if one can append to a compressed tar archive and have that data compressed. If not, I suppose one could individually compress each diff; or one could leave the original tarball uncompressed and only compress the diffs. Encryption still poses a problem as postulated above.
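One ordering detail for the pipeline sketched above: compression has to happen before encryption, because GPG output looks like random data and no longer compresses (gpg does also apply its own compression by default). A hedged sketch of the initial full backup, with a hypothetical key ID:
# Compress first, then encrypt; exclude pseudo-filesystems and the target itself
tar -cf - --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/mnt / \
    | xz -6 \
    | gpg --encrypt --recipient MYKEYID \
    > /mnt/my_disk/orig.tar.xz.gpg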
    EDIT:
Looking back, this is something like duplicity, the only difference being that this setup would create a single archive instead of a folder with regular files and a bunch of diffs. Also, my idea would store the file's future as diffs, instead of its past (which, as I understand it, is what duplicity does).
    Since I don't really need true diff-style backup behavior (the "back in time" functionality), would it be possible to overwrite the last diff appended to the original archive file, so that only the most recent diff would be added?
    EDIT2:
    After a little more research on the thrice-blessed ArchWiki, I found System Tar & Restore, btar, and rdup. The latter two look great, but ideally I'd like to implement this myself. These programs still don't do quite what I'm asking for.
The biggest problem I can foresee (even if the abovementioned issues are solved) is how to do this while the system is running. As I understand it, tarring a whole system while it runs is a recipe for data corruption. Instead of using diffs, I notice that rsync can create delta files, and rsync can run safely on a live system. With both diffs and rsync delta files, though, the question remains: they work by calculating the changes between two data sets, so how does one calculate the difference between a system's / and the same / that is compressed and tarred?
    Last edited by ParanoidAndroid (2014-01-23 23:31:31)
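On the delta question in the last paragraph: over a network rsync's delta-transfer algorithm really does send only the changed blocks of a file (for local copies it defaults to --whole-file), and its batch mode can capture those deltas in a file to replay later against an identical copy elsewhere. A sketch with hypothetical paths:
# Record the deltas needed to bring a reference mirror up to date with /home/
rsync -a --only-write-batch=/tmp/home.batch /home/ /path/to/reference/mirror/
# Later, replay just those deltas against the copy on the external disk
rsync -a --read-batch=/tmp/home.batch /mnt/external/mirror/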

  • [SOLVED] rsync doesnt work on nfs shares

OK, I have an NFSv4 setup and I can copy, delete and move files around just fine from my desktop to my NAS.
But as soon as I want to use rsync to sync directories, it fails with this message:
    rsync: chown "/mnt/nas/Audio/Rips/flac/Anthrax" failed: Invalid argument (22)
    the user i am copying with is the very same on both systems:
    [carnager@archnas ~]$ id
    uid=1000(carnager) gid=100(users)
    [carnager@freebox ~]$ id
    uid=1000(carnager) gid=100(users)
One thing that I noticed is this:
When I do an "ls -l" on the mounted NAS directory I get output like this:
    [carnager@freebox ~]$ ls -l /mnt/nas/ | tail -1
    drwxr-xr-x 8 4294967294 4294967294 4096 Dec 31 18:03 Video
    where it normally should show my username and group.
    Any ideas?
    Last edited by Rasi (2012-01-02 16:10:26)

    tomk wrote:The wiki has the answer - at least for those funky numbers. Hope it solves your rsync issue.
OK, then I probably know my problem, but not how to solve it:
    [carnager@freebox ~]$ sudo /etc/rc.d/nfs-common start
    :: Mounting pipefs filesystem [DONE]
    :: Starting rpc.idmapd daemon [FAIL]
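For anyone else who lands here: the 4294967294 owner in the ls output is the fallback "nobody" ID that NFSv4 uses when ID mapping fails, and rpc.idmapd refusing to start is the same problem. The usual fix, assuming both machines should share a mapping domain, is a matching Domain line in /etc/idmapd.conf on client and server, then restarting the NFS daemons:
# /etc/idmapd.conf, identical on both ends
[General]
Domain = yourdomain.lan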

What are the different errors that can be solved using the RSRV tcode?

Hi,
What are the different errors that can be solved using the RSRV tcode?
I want to know all the errors that can be solved using the RSRV tcode.
If anybody has a good document regarding RSRV, please send it to me at
[email protected]
Thanks in advance,
ravi.

    Hi,
Refer to the links below for more details about the RSRV tcode.
    /community [original link is broken]
    http://help.sap.com/saphelp_nw04/helpdata/en/92/1d733b73a8f706e10000000a11402f/frameset.htm
It's for BW object consistency analysis and repair.
From the transaction RSRV documentation itself:
    Transaction RSRV: BW Data and Metadata Test and Repair Environment.
    Transaction RSRV checks the consistency of data stored in BW. It mostly examines the foreign key relationships between individual tables in the enhanced star schema of the BW system.
    The transaction interface was re-designed for SAP Portals release 3.0A. A brief guide about how to use the transaction follows.
    Starting the Transaction
You can reach the test and repair environment:
- by entering the transaction code RSRV, or
- from InfoObject maintenance (transaction RSD1), either by clicking on the "Analyze" button in the initial screen, or after selecting a characteristic in the maintenance screen via the "Processing -> Analyze InfoObject" menu option.
    The Initial Screen
    When using the test and repair environment for the first time, the message "Values were not found for all setting parameters" draws your attention to the fact that there aren't any saved settings for your user.
    After confirming the dialog box, you reach the initial screen of the transaction. This is divided into two parts:
    1. On the left-hand side, you can see a tree structure with the pool of available tests.
    2. The right-hand side is empty at first. The tests you have selected will be put here. A selection of tests is called a Test Package here.
    Combined and Elementary Tests
    An Elementary Test is a test that cannot be divided into smaller tests and that can therefore only be executed as a whole.
    In this regard, a Combined Test determines which elementary tests are to be executed after entering the parameters. You can remove individual elementary tests from the test package before carrying out the actual test run, in order to reduce run time, for example.
    Combining a Test Package and Executing it.
Firstly select one or more tests with drag & drop or by double-clicking. Each selected test appears as a closed folder in the view of your test package. (An exception is elementary tests without parameters: these do not appear as a folder.) You can also drag a whole folder of tests from the test pool across to the right-hand screen area; all tests that are located in the hierarchical structure under this folder are then added to the test package. You can also display a short description of a test, if required: do this by right-clicking on the test and choosing "Description" from the context menu.
Afterwards, you must supply the tests with parameters. Tests without parameters cannot be given any; you are notified of this when selecting them. You can enter parameters by double-clicking on a test (or a test package) or by expanding the folder of the test.
    A dialog box appears in which you must enter the required parameters. Input help is often available. After entering the parameters, a folder with the name "Parameter" is added under the test. This contains the parameter values. The test name can change in some circumstances, enabling you to see at first sight for which parameter values the test is to be executed. It is possible to select the same test several times and give it different parameters, which may even be preferable in some situations. When you have supplied the combined test with parameters, the folder with the name "Elementary Tests" is added under this one. It contains the elementary tests, from which the combined test is built. You can delete individual elementary tests in the test pool using drag & drop.
    After supplying all tests with parameters, you can start the test run by clicking on the "Execution" button. After execution, the test icons change from a gray rhombus to a red, yellow or green one, depending on whether the test had errors, warnings or was error-free.
    Test Results
    The test results are written to the application log. Depending on the settings, the system jumps automatically to this display, or you can reach it by clicking on the "Display" button. The results are saved in the database, and can therefore be compared later with additional test runs.
    In the left-hand side of the window, you can see an overview of the most recent test runs. Double-clicking on a folder displays all messages under these nodes as a flat (non-hierarchical) list in the right-hand screen area. Long texts or detail data may be available for individual messages, which can be displayed with a mouse click.
    Repairs
Some tests can repair inconsistencies and errors. Automatic correction is generally not possible: if entries are missing from the SID table for a characteristic while the lost SIDs are still being used in a dimension table (and the corresponding dimension keys are still being used in the fact table) of an InfoCube, you can only remove the inconsistency by reloading the transaction data of the InfoCube. Also note that you must make repairs in the correct sequence. You must always read the documentation for the test and have a good idea of how the error occurred before making the repairs.
    After executing the test run, go from the application log back to the initial screen to make these repairs. Click on the "Fix Errors" button to start an error run. Since the dataset could have changed between the test and the repair run, the required tests are executed again before the actual repair. The results can be found in the application log once again.
    Test Packages
    The test package is deleted if you do not save the test package in the display before leaving the test environment. Choose "Test Packages -> Save Test Package" in the option menu. You can do the following via options in the "Test Package" menu:
    Load packages
    Load for processing - the package is then locked against changes by others.
    Delete and
    Schedule execution at a later date or at regular intervals in background processing
    Settings
    In the "Settings" menu option, you can make settings (adjust the size of the screen areas, for example) and save them. The settings are automatically read when starting the test environment. Support packages are being delivered with additional settings options since the test environment is under development at the moment. A message notifies the user at the start if there aren't any values for the setting options.
    Jobs Menu Option
    You can access the job overview via the Jobs -> Job Overview menu. Use this when you want to check the status of a test package you have scheduled.
    Application Log Menu Option
    You can display old logs from previous test runs in the dialog box, as well as scheduled ones. The option of deleting test logs can also be found here.
    New Selection Button
    The currently selected test package is deleted when you press this button.
    Filter Button
    After a test run, click on this button to remove all elementary tests without errors or warnings from the test package.
    Executing Test Packages in Process Chains
You can add the ABAP program RSRV_JOB_RUNNER to a process chain in the process chain maintenance transaction, RSPC. To do this, use drag & drop to select the process type "ABAP Program" under "General Services" in the process type view. When maintaining process variants you will be asked for the program name and a program variant. Enter RSRV_JOB_RUNNER for the program name. Choose a program variant name and click on "Change". In the next screen you are able to either change or display an already existing variant, or create a new variant. When creating a new variant you will be asked for the following: the package name (input help is available), the detail level down to which the RSRV log is to be integrated into the process chain log, and a message severity at which process chain processing should be terminated.
    The RSRV process log in the process chain is built as follows:
    First is a summary specifying whether errors, warnings, or no errors occurred for each elementary test.
    A log view of the RSRV test package at the specified detail level follows.
Example: if you choose the value '3' for the detail level, only messages up to and including detail level 3 will be included in the process chain's log. Messages occurring at a deeper detail level of the test package are not displayed in this log. Please note that, unlike the application log, the process log does not propagate errors from deep detail levels to low detail levels. For example, if a single detail-level-4 error occurs, the summary will show that the relevant test delivered an error; however, this error will not be listed in the second part of the log.
A complete log is always written independently of the RSRV process log in the process chain. You can view it via the menu option "Application Log->Display Log->From Batch".
    Please note that there is currently no transport object for test packages and that consequently these cannot be transported. Process chains that execute RSRV test packages must therefore be manually postprocessed after a transport to a different system: The relevant test packages must be created.
    Hope This Helps,
    This is already there in SDN.
    Regards,
    rik

[SOLVED] I need help updating the ABS tree by some means other than rsync

Hi Arch family. I have the following problem: I have internet access only through a firewall, so I cannot update the ABS tree with rsync. Is there any alternative to rsync for updating it? I'd appreciate any assistance.
Thanks
    Last edited by jccl1706 (2010-03-12 15:50:27)

OK friend, thanks Snowman, that solved my problem. Thanks to this great community that we all are here. Greetings from Cuba, where I also use Arch.
Thanks

[SOLVED] Send diff. Parameters to diff. kernels w/ GRUB2 & survive updates

    1. Installed LMDE.
    2. Loaded the core ARCH and all went well - loaded necessary drivers for my soundcard and video and all is well
    3. Discovered early with the first distro installs that the linux kernel does not handle synaptic touchpad devices well during the PM suspend and resume process.
    4. The cure is to add atkbd.reset to a command line in the grub file  - then update grub and end result - touchpad works great after resume.
5. Now, since grub was already installed I bypassed that during ARCH setup. I went into LMDE, mounted the root for ARCH, installed os-prober and ran update-grub; now when my computer starts up I have a grub menu that allows me to choose which distro to boot.
SO THE QUESTION IS: how do I get grub to send parameters to the ARCH kernel that are different from the parameters being sent to LMDE??
The temporary solution is to edit the grub.cfg file directly; I'm just worried about updates to grub erasing my manual adjustments!!!
    Permanent Solution you will find at the bottom of this topic post #22.
    Last edited by iamk2 (2013-03-27 00:50:07)

[RESOLVED] - How to send different parameters to diff. kernels with grub, AND retain these settings automatically even after using update-grub. NOTE: this will not prevent overwrites to your grub.d scripts via pacman; I wonder how often that actually occurs???
For anyone out there who has two or more Linux distros installed on one PC and uses os-prober to locate the 2nd, 3rd, etc. distros for the grub2 menu, here is a risk-free solution that allows you to send parameters to the other kernels as well as to your first installed one (or the one you have installed grub2 with).
(I say risk-free because if you're into ARCH, I would believe you have some skill and competence.)
    1. Edit /etc/grub.d/30_os-prober with root permission
        find the code line containing
    linux ${LKERNEL} ${LPARAMS}
        append ${GRUB_CMDLINE_LINUX_DEFAULT} to the end of this line
    It should now look like this:
    linux ${LKERNEL} ${LPARAMS} ${GRUB_CMDLINE_LINUX_DEFAULT}
    Save the file
    2. Open terminal if need be and
    sudo update-grub
This will attach the same parameters from your existing GRUB_CMDLINE_LINUX_DEFAULT in the /etc/default/grub file to the other distros found by os-prober.
    3. Now if you wanted to send different parameters to the distro's found by OS-Prober, then instead of
    linux ${LKERNEL} ${LPARAMS} ${GRUB_CMDLINE_LINUX_DEFAULT}
make it something like this
    linux ${LKERNEL} ${LPARAMS} ${GRUB_CMDLINE_OS_PROBER_FINDS_DEFAULT}
Then go to your /etc/default/grub file and insert
GRUB_CMDLINE_OS_PROBER_FINDS_DEFAULT
so that your grub file now looks like this:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash atkbd.reset acpi_osi=\"Linux\""
GRUB_CMDLINE_LINUX="atkbd.reset"
GRUB_CMDLINE_OS_PROBER_FINDS_DEFAULT="your parameters"
Sending completely different parameters to each and every kernel is a game I wonder about, but I am not interested in solving it at this time.
    Last edited by iamk2 (2013-03-27 01:55:56)

  • SOLVED sorta-Rsync cron/crontab 2 Processes?

I recently tried to set up cron to rsync my /home to my old hard drive. I made a script in /etc/cron.daily as follows:
    18 0 * * * rsync -ar --delete /home/comhack/Videos /home/comhack/Music /mnt/backup/Backup &> /dev/null
Well, after restarting cron it would not work. So I used crontab to add the cron job:
    bash-3.2# crontab -l
    #!/bin/bash
    18 0 * * * rsync -ar --delete /home/comhack/Videos /home/comhack/Music /mnt/backup/Backup &> /dev/null
    Well when I do that it shows 2 processes in top:
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    8343 root 20 0 32920 1716 400 R 33 0.0 0:20.49 rsync
    8341 root 20 0 30032 2172 748 S 27 0.1 0:15.89 rsync
    Also, /var/log/crond shows only one process started:
    08-Apr-2009 00:18 FILE /var/spool/cron/root USER root pid 8340 cmd rsync -ar --delete /home/comhack/Videos /home/comhack/Music /mnt/backup/Backup &> /dev/null
Any ideas?
    Last edited by securitybreach (2009-06-14 10:42:10)
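Two separate things seem to be going on here. Files in /etc/cron.daily are ordinary scripts run once a day by run-parts, so they must not contain crontab time fields (the original file was never valid as a script), while a crontab entry does take the time fields but no shebang line. And one rsync invocation normally shows up as two processes, since rsync forks a sender/receiver pair even for local copies, so the two PIDs in top are a single job. A sketch of the cron.daily form (filename hypothetical):
#!/bin/bash
# /etc/cron.daily/backup-home -- run-parts executes this daily; no time fields here
rsync -a --delete /home/comhack/Videos /home/comhack/Music /mnt/backup/Backup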

I take it that worked? If so, can you please edit your original post and mark it as solved, to help others searching for similar problems in the future?
    Last edited by fukawi2 (2009-04-08 22:38:46)

  • [SOLVED] More rsync problems after restore... Read only filesystem?

    Yesterday I backed up all of my data with rsync to a USB thumb drive. Surprisingly it all fit, but my intentions were to wipe my disk and create a new partition scheme for a dualboot while still being able to continue where I left off with arch. So after backing up with the script from the arch wiki titled full system restore with rsync, I followed the rest of that page, and modified the fstab on the backup on the USB and reconfigured grub and restarted my computer and I was able to boot into the USB. Then, I decided that it would be easier to just use the live cd to repartition and pacstrap the base system and generated an fstab. Next, I chrooted from the livecd environment into the newly built base and installed rsync. I then mounted my USB to /mnt in the chrooted environment and used
    rsync -aAXv /mnt/ARCH_BACKUP/* / --exclude={/mnt/ARCH_BACKUP/etc/fstab,/mnt/ARCH_BACKUP/boot/grub/}
I just did this since I figured that all of the directories excluded in the script wouldn't really have to be excluded here, since they were already empty. Everything seemed to run smoothly except for one file (or maybe directory?) called /etc/resolv.conf.t.jbnW2, which kept giving an error. I figured the file was unique to the USB or something, and continued to reinstall grub and then reboot.
The weird thing is that everything was back on my system, but it didn't automatically boot into a display manager, which it's set up to do. Then manually starting X wouldn't work either. Then I tried to connect to wifi: it recognized my network but wouldn't connect. So I tried sudo su, which worked but returned an error that the var filesystem was read only. I then tried to use rsync again, thinking maybe that directory would transfer now that I wasn't in the chrooted environment. But instead it tried to restore every single file, and every one returned the error "filesystem is read only".
It seems like when I restored, all of the files in my entire system were made read only.
Can anyone tell me what may have caused this and how to fix it?
I think it may either be that when I mounted the device in the chrooted environment I didn't specify 'rw' capabilities, or that when I backed up I didn't specify 'rw' capabilities; I just opened the file manager and clicked on it and it automagically mounted to /media/.
    Last edited by eroge008 (2013-01-25 20:29:22)

OK, well I fixed it. I had edited the fstab on the USB to be able to boot into the USB, and, while I edited it again on the hdd each time I restored from the backup, I didn't realize that the top entry /dev/sda1 (which is my root partition) was on a line that was commented out. I guess I accidentally hit the backspace key or something while editing it to be able to boot into the USB.
Also, I think there was a second problem: my USB controller is starting to fail. The last time I backed up, if I moved my computer at all I would get an I/O error, caused by the failing USB controller. The computer would remount the drive as read-only because of the failure. The solution to this was to use
    hdparm -r0 /dev/sdxy
    Note: The device must be unmounted.
    Where x is the device and y is the partition number, for me this was sdb1.
    Then mount specifying read-write option.
    mount -o rw /dev/sdxy /media/<BACKUP>
    I also tried
    mount -o remount,rw /dev/sdxy /media/<BACKUP>
    as was recommended on all of the sites that I could find, but this command kept returning an error.
Note for newer users: Arch does not automatically create the media folder, as it doesn't provide automatic mounting of external drives. If that is the case and you haven't installed a desktop environment with a file manager that does this for you when the device is recognized, you must manually create the media directory and the directory you intend to mount to.
    # mkdir /media
    # mkdir /media/<BACKUP>
    Where <BACKUP> can be whatever you want it to be.
    Last edited by eroge008 (2013-01-25 20:27:52)

  • Solved: pacman - permissions differ on tmp/

Not sure if anyone else is seeing this, because this may have been part of something I did a while back. I'm getting:
    ( 35/400) upgrading filesystem [######################] 100%
    warning: directory permissions differ on tmp/
    filesystem: 777 package: 1777
I know a bit about the filesystem package. If I remember correctly it contains files that are Arch-specific, like the boot scripts... Now, if I'm reading this correctly, "filesystem: 777" is the permissions of the existing directory and "package: 1777" is the package's permissions for the tmp directory. Here's my /tmp directory's permissions after the update:
    drwxrwxrwt 11 root root 4.0K Sep 13 11:41 tmp
    Can anyone help me clarify this?  Is the problem now fixed, or is there something I need to do?
    Last edited by Gen2ly (2010-09-13 19:49:14)

    I haven't seen pacman changing permission on directories yet, but according to the message, your /tmp didn't have the sticky bit set before the upgrade. Though things will work fine with just 777, not having the sticky bit on /tmp will mean that someone could delete and replace files owned by someone else, which is a security risk.
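The drwxrwxrwt listing above shows the sticky bit is in place now, so nothing more should be needed; if /tmp ever does show up as plain 777 (drwxrwxrwx), it can be restored with:
# chmod 1777 /tmp
(the leading 1 is the sticky bit).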

  • [SOLVED] warning: directory permissions differ on var/log/wicd/

    Hi,
I've seen several posts about this but I couldn't really figure out what the appropriate action is. Well, anyway, I get the following warning when doing a pacman -Syu:
    warning: directory permissions differ on var/log/wicd/
    filesystem: 1363 package: 755
Is it a bug? Should I change the file permissions of the directory, and if so, to what?
    Last edited by OMGitsUGOD (2009-09-18 10:38:32)

This is sort of related,
http://bbs.archlinux.org/viewtopic.php?pid=432588
or at least the post at the end has the same file permissions as I have in /var/log/wicd.
    $ ls -la /var/log/ | grep wicd
    d-wxrw--wt 2 root root 4096 2009-08-27 07:58 wicd
I'm pretty bad at this stuff, but isn't this rather 1361 than 1363, or am I totally wrong? And why not allow the owner to read the file?
    Last edited by OMGitsUGOD (2009-09-17 08:43:32)
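As for what to change it to: the warning itself names the package's expectation, 755, so resetting the directory should satisfy both pacman and wicd:
# chmod 755 /var/log/wicd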

  • Pacman on rsync-ed disk - puzzling [SOLVED]

I usually keep a system backup on an old box just in case. I recently migrated to Arch and decided to redo the backup, so I made a basic install of Arch on the old box, then
    rsync -av / old_box:/ --exclude-from=my_exclude_list* --delete
as I usually do. Everything went fine, except that running pacman -Syu on the old box after the sync produced a huge list of pending upgrades. That was puzzling, because the pkg cache (isn't that where pacman looks to compare installed vs available?) must have been copied along with everything else, and the main box is up to date. I did pacman -Syu anyway and it downloaded everything, but refused to install, saying the packages already existed on the system, which is hardly surprising as it had just been synced. So if the packages already existed, why on earth did pacman download them all over again?
    */music
    /etc/fstab
    /boot/grub
    /home/michael/.VirtualBox
    /home/michael/.mldonkey
    /home/michael/deluge*
    /home/michael/qBT_dir
    /mnt
    /media
    /etc/rc.conf
    /etc/X11/xorg.conf
Edit: I must have overlooked something. I reinstalled Arch on the old box and rsync-ed everything into it from the main box (with some additions to the exclude file I had forgotten, namely /proc, /dev, /etc/hosts.* and /etc/rc.local), and now pacman -Syu works just fine, fetching only what's new and without complaining. Rsync is a much better backup tool, by the way, because you can select what you want transferred. If I had been using partimage, the system would have a lot of wrong confs for the old machine.
    Last edited by michaelks (2009-02-15 22:35:48)


  • [Solved] Using diff for directory compare

    I'm interested in using diff specifically to verify data integrity; that is, to make sure each file in dirA is data-identical to the corresponding file in dirB. I'm pretty sure it does that, I just want to verify it's not 'assuming identical' if size and timestamps are the same. Thanks.
    Last edited by alphaniner (2014-04-09 13:17:16)

    Wait, no, that wasn't a sufficient test. Again with only one invocation of 'touch' so the change times are the same:
    % touch -amt 201301011234.20 foo/foo bar/foo
    % stat bar/foo foo/foo
    File: 'bar/foo'
    Size: 6 Blocks: 8 IO Block: 4096 regular file
    Device: fe01h/65025d Inode: 341815 Links: 1
    Access: (0644/-rw-r--r--) Uid: ( 1000/ trent) Gid: ( 100/ users)
    Access: 2013-01-01 12:34:20.000000000 -0500
    Modify: 2013-01-01 12:34:20.000000000 -0500
    Change: 2014-04-09 08:18:44.000000000 -0400
    Birth: -
    File: 'foo/foo'
    Size: 6 Blocks: 8 IO Block: 4096 regular file
    Device: fe01h/65025d Inode: 341873 Links: 1
    Access: (0644/-rw-r--r--) Uid: ( 1000/ trent) Gid: ( 100/ users)
    Access: 2013-01-01 12:34:20.000000000 -0500
    Modify: 2013-01-01 12:34:20.000000000 -0500
    Change: 2014-04-09 08:18:44.000000000 -0400
    Birth: -
    % diff -qr foo bar
    Files foo/foo and bar/foo differ
    Last edited by Trent (2014-04-09 12:16:40)
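That settles the original question: diff never assumes files are identical based on size and timestamps; it always reads the contents, which is how it catches the difference above. For repeated integrity checks of a large tree, a checksum manifest lets you verify copies later without re-reading the original each time; a sketch:
% (cd foo && find . -type f -exec md5sum {} +) | sort -k2 > /tmp/foo.md5
% (cd bar && md5sum -c --quiet /tmp/foo.md5)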
