Found 2 Mistakes in "Local Mirror" on Wiki

I've just found two mistakes in the article "Local Mirror" on the wiki; you can see what they are in the history tab. I have also updated my script to the newest one and got a few errors.
Can somebody please confirm that the newest one on the wiki works fine?

Yes, they are fine. Thank you.
I added the -k option because otherwise ftp/current does not get synced (it's a symlink).

Similar Messages

  • Local Mirror - Wiki - help please

    Hello peoples,
    I have read and followed the Wiki on having a local mirror  http://wiki2.archlinux.org/index.php/Local%20Mirror
    My updates do not occur automatically, and I do not yet have the knowledge to fix this, so I was wondering if there is a better way of doing it. I have read the man pages for crond and crontab but am left a little confused as to how they can be incorporated into the original script. I would also like the sync to run at 0200 hrs. Can somebody help me out, please?

    Thank you for your reply, i3839.
    From your answer, I can define the issue more clearly. The updating I was referring to covers only the "Current" and "Extra" databases that I sync. The script given in the wiki didn't work for me, so here is the version that works for me:
    #!/bin/sh
    rsync -avz --delete rsync.archlinux.org::ftp/current/os/i686/ /home/mirror/current/os/i686
    rsync -avz --delete rsync.archlinux.org::ftp/extra/os/i686/ /home/mirror/extra/os/i686
    # rsync -avz --delete rsync.archlinux.org::ftp/testing /mirror/
    # rsync -avz --delete rsync.archlinux.org::ftp/unstable /mirror/
    The two issues I have are:
    1. The script does not run automatically via the cron.daily/sync file, yet when I launch sync.sh by hand it works.
    2. The updates occur at 0002 hrs, which I would like to change to 0200 hrs instead.
    cron.daily/sync:
    #!/bin/sh
    SYNCLOGFILE="/var/log/sync.log"
    SYNCLOCKFILE="/var/lock/sync.lock"
    if [ -f $SYNCLOCKFILE ]; then
    # lock file already present, bail
    exit 1
    fi
    echo -n ">>> Sync log for " > $SYNCLOGFILE
    date >> $SYNCLOGFILE
    cd /home/mirror
    touch $SYNCLOCKFILE
    su - evan -c "/home/mirror/sync.sh" >> $SYNCLOGFILE
    rm -f $SYNCLOCKFILE
    Also, the sync.log and sync.lock files have not been created. Is this a permissions issue and if so how do I fix it?
    Cheers
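
    For the record, scripts in cron.daily run at whatever time the system-wide crontab schedules that directory, not at a time of the script's own choosing. To pin the job to 0200 hrs, the usual route is a dedicated crontab entry instead (a sketch, assuming the script lives at /home/mirror/sync.sh; run crontab -e and add the line):

    ```
    # minute  hour  day-of-month  month  day-of-week  command
      0       2     *             *      *            /home/mirror/sync.sh
    ```

    The fields are minute and hour first, which is why a mis-ordered entry runs at 0002 instead of 0200.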

  • Best approach to implement feature "I found a mistake" in SAP EP

    Hi All,
    We are planning to implement functionality where any end user can log a mistake he/she finds while using the Portal, i.e. "I found a mistake".
    This is very critical to the business since:
    1) Every day we post a lot of content
    2) There will always be some areas for improvement that we do not notice
    Regards,
    Ganga

    Hi,
    Thanks for your reply. But I need to know the following details:
    1) "Send a message to your local SAP Support" - where do we need to configure this?
    2) Can we change the configuration so that an email is sent once the user logs the error?
    3) Component - can we take this out, and since we are reporting for a specific iView, can we make it point to the name of the iView?
    4) "Attach any relevant files (optional)" - where are attachments stored?
    5) What database tables are involved in this?
    Regards,
    Ganga
    Edited by: Hiremath Gangadharayya on Jul 25, 2008 7:39 PM

  • [SOLVED] vsftpd on Local Mirror, running but not working

    I'm building a local mirror in a VM (VirtualBox, bridged adapter, fixed IP) by following this wiki:
    http://wiki.archlinux.org/index.php/Loc … cal_mirror
    After the painful rsync and the setup, I tried pacman -Syu from another Arch VM (no firewall) and received the following error:
    :: Synchronizing package databases...
    error: failed retrieving file 'core.db.tar.gz' from 192.168.100.100 : Service not available, closing control connection
    I've scanned the hosting PC with nmap, and it looks like vsftpd is running:
    Starting Nmap 4.62 ( http://nmap.org ) at 2010-08-27 01:03 HKT
    Interesting ports on 192.168.100.100:
    Not shown: 1714 closed ports
    PORT   STATE SERVICE
    21/tcp open  ftp
    MAC Address: 08:00:27:76:33:1C (Cadmus Computer Systems)
    Nmap done: 1 IP address (1 host up) scanned in 1.318 seconds
    In the wiki, it suggests using "ftp" in place of "mirror" for ftp_username and nopriv_user; I tried both.
    I also find that there is no "archlinux" directory under my /home/mirror/files, as "suggested" by the following statement in vsftpd.conf:
    # Chroot directory for anonymous user
    anon_root=/home/mirror/files/archlinux
    I tried both (1) amending vsftpd.conf to remove the "archlinux" part, and (2) manually adding that directory with owner/group mirror.
    Meanwhile, I find only 6 items under /home/mirror/files: community, core, extra, community.lastsync, core.lastsync and extra.lastsync. Have I completed the rsync successfully, or is something missing? Is the directory structure correct?
    Is the sample vsftpd.conf in the Local Mirror wiki up to date? I've cross-referenced it with the vsftpd wiki, but I'm not knowledgeable enough to spot what matters.
    What else should I check?
    I love ArchLinux so much that I really hope that it can work.
    Please help.
    Thanks.
    Last edited by dboat (2010-08-27 15:38:14)

    I have tried a couple of Linux distros to learn Linux/networking. I like Arch Linux's "simple" concept, light weight, updated packages, nice documentation and fast bootup/shutdown. I have installed Arch Linux over ten times in different virtual machines and on a netbook in the past week. I will keep some, delete some and create more. I don't have a fast internet connection, and that's why I would like to set up my local mirror. I am a newbie here, so please feel free to let me know if I am taking too much (bandwidth) from the community and this is not encouraged in my case. And sorry if I have already caused any trouble.
    Well, back to my problem.
    1. After the rsync, including everything, / now occupies 14 GB of disk space. Is that a normal size for a local mirror?
    2. I have inserted "Server = file:///home/mirror/files/$repo/os/i686" as the first line in /etc/pacman.d/mirrorlist.
        pacman -Syy  looks fine.
        pacman -Syu  gives a list of warnings (xxx: local is newer than core), ending with "there is nothing to do".
        pacman -S mplayer  starts the installation normally, but needs internet mirrors because a large portion of the software is missing/inaccessible on my local mirror.
    3. I have tried to log in with FileZilla from an Ubuntu VM, and I receive this error message (in FileZilla):
    Status:    Connecting to 192.168.100.100:21...
    Status:    Connection established, waiting for welcome message...
    Response:    421 Service not available.
    Error:    Could not connect to server
    It seems I have issues with both the mirror and vsftpd. I would prefer to resolve the vsftpd problem first, but all suggestions/comments are very welcome.
    Lastly, did I post my question in the wrong place? If so, please let me know.
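
    For comparison, the anonymous-FTP part of a vsftpd.conf for the layout described in this thread might look like the sketch below. Treat every value as an assumption to check against the vsftpd documentation; the key point is that anon_root must name a directory that actually exists, matching the /home/mirror/files tree above:

    ```
    # /etc/vsftpd.conf - anonymous read-only mirror (sketch)
    listen=YES
    anonymous_enable=YES
    ftp_username=ftp
    nopriv_user=ftp
    # Chroot directory for the anonymous user; must exist on disk:
    anon_root=/home/mirror/files
    ```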

  • Does time machine backup locally mirrored idisk?

    I was wondering if anyone knew whether Time Machine backs up a locally mirrored iDisk? The main advantage would be that you could go back and see the files that were on the locally mirrored iDisk a month ago, rather than just what is there today.
    Thanks,
    David

    David-
    I have been wrestling with this issue too. I put my working files on iDisk so that I can access them from my laptop, other office, etc. Works great, but, as I recently learned, Time Machine doesn't back up those files.
    And the sparse bundle (or whatever it's called) has to be excluded from the Time Machine backup (or else it will fill up your TM in about two weeks).
    I THINK I may have found a solution (at least for me). I have a program that backs up the iDisk to my DOCUMENTS directory on my hard drive. So every day at 9 PM it backs up/mirrors my iDisk to my hard drive, and then TM copies the hard-disk DOCUMENTS directory into TM.
    So far it APPEARS to be working as I'd like. I can go into TM and pull out a single file. The only drawback is that the program copies the files over every night at 9 PM, so my TM backup is only that single snapshot, but that's better than discovering that the files I thought were being backed up to TM were not.
    Would this work? Do you see any holes in this?
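
    The nightly mirror step described above could also be done without a third-party program, e.g. with a launchd job running rsync at 9 PM. This is only a sketch: the label, the iDisk mount point and the destination path are assumptions and will differ on a real machine:

    ```
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <!-- ~/Library/LaunchAgents/com.example.idisk-mirror.plist (hypothetical) -->
    <plist version="1.0">
    <dict>
      <key>Label</key><string>com.example.idisk-mirror</string>
      <key>ProgramArguments</key>
      <array>
        <string>/usr/bin/rsync</string>
        <string>-a</string>
        <string>--delete</string>
        <string>/Volumes/iDisk/Documents/</string>
        <string>/Users/me/Documents/iDisk-mirror/</string>
      </array>
      <key>StartCalendarInterval</key>
      <dict>
        <key>Hour</key><integer>21</integer>
        <key>Minute</key><integer>0</integer>
      </dict>
    </dict>
    </plist>
    ```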

  • Using rsync to keep a local mirror - way to not download 64?

    I keep a local mirror since I keep multiple computers synced and want to save bandwidth.
    But it now includes an x86_64 mirror for which I have no use. Is there some way to *not* download those files?

    It is very simple! I use:
    rsync -rptvz --delete rsync.archlinux.org::current/os/i686 /mnt/storage1/mirror/current/os/
    Note that I use -rptvz instead of -avz. IMHO this is better.
    But why do you mirror whole repos when there is an easier way?
    I'm going to write a howto on the wiki. I don't like this way - it wastes bandwidth, time and money.
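
    To answer the original question directly: rsync can skip the 64-bit tree with an --exclude pattern, exactly as the comments in the sync script later in this thread suggest. A minimal local demonstration of the flag (the directory names mimic the repo layout; the /tmp paths are made up for the demo - against the real server you would exclude os/x86_64 from rsync.archlinux.org::current the same way):

    ```shell
    # Build a fake repo tree containing both architectures.
    rm -rf /tmp/mirror-demo
    mkdir -p /tmp/mirror-demo/src/os/i686 /tmp/mirror-demo/src/os/x86_64
    touch /tmp/mirror-demo/src/os/i686/pkg.tar /tmp/mirror-demo/src/os/x86_64/pkg.tar

    # Sync while excluding the x86_64 subtree.
    rsync -a --exclude=os/x86_64 /tmp/mirror-demo/src/ /tmp/mirror-demo/dst/

    ls /tmp/mirror-demo/dst/os    # only i686 was copied
    ```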

  • Making archlinux (local) mirror

    Before I post the issue, PLEASE don't send me to the wiki page; I have been there and it needs to be updated, big time!
    Okay, I followed the steps on the wiki and I cannot get the mirror script to function; I just get "access denied". I believe I get access denied because I am not an official mirror. Nowhere on the wiki are any other servers listed that work with the script. I tried the ibiblio mirror on the page but it does not work. If anyone has an unofficial mirror, can you please paste your whole sync.sh? Or can someone give me an alternative mirror to rsync.archlinux.org? Thanks
    Last edited by 3nd3r (2007-07-01 05:54:50)

    I got the same with the script you mention; the problem was directory-hierarchy errors. I made some changes to it and it works:
    #!/bin/bash
    # This script is used to sync a local mirror of the Arch Linux repositories.
    # Copyright (c)2007 Woody Gilk <[email protected]>
    # Licensed under the GNU GPL (version 2)
    # Modified from the original!
    # Filesystem locations for the sync operations
    SYNC_HOME="$HOME/archmirror"
    SYNC_LOGS="$SYNC_HOME/logs"
    SYNC_PKGS="$SYNC_HOME/packages"
    EXCLUDE_FROM=rsync_arch.excludes
    # This allows you to set which repositories you would like to sync
    # Valid options are: current, extra, community, unstable, testing, release
    SYNC_REPO=(current extra community )
    # official server: SYNC_SERVER="rsync.archlinux.org::"
    # France: rsync://distrib-coffee.ipsl.jussieu.fr/pub/linux/archlinux/ rsync
    # France: rsync://mir1.archlinuxfr.org/archlinux rsync
    SYNC_SERVER="distro.ibiblio.org::distros/archlinux/"
    # directory for temporary partial downloads (rsync)
    PARTIAL_DIR="$SYNC_HOME/rsync_partial"
    # delete files that no longer exist on the server
    DELETE=true
    # The following line is the format of the log file name, this example will
    # output something like this: pkgsync_20070201-8.log
    LOG_FILE="pkgsync_$(date +%Y%m%d-%H).log"
    # Do not edit the following lines, they protect the sync from running more than
    # one instance at a time.
    if [ ! -d $SYNC_HOME ]; then
    echo "$SYNC_HOME does not exist, please create it, then run this script again."
    exit 1
    fi
    SYNC_LOCK="$SYNC_HOME/sync_running.lock"
    if [ -f $SYNC_LOCK ] ; then
    echo "another instance of this script appears to be running... continue? (y):"
    read continua
    if test "$continua" != "y"; then
    exit 1
    fi
    fi
    touch "$SYNC_LOCK"
    # End of non-editable lines
    # Create the log file and insert a timestamp
    #[c]touch "$SYNC_LOGS/$LOG_FILE"
    #[c] echo "==========================================" >> "$SYNC_LOGS/$LOG_FILE"
    #[c] echo ">> Starting sync on $(date)" >> "$SYNC_LOGS/$LOG_FILE"
    #[c] echo ">> ---" >> "$SYNC_LOGS/$LOG_FILE"
    # Rsync each of the repos set in $SYNC_REPO
    for repo in ${SYNC_REPO[@]}; do
    repo=$(echo $repo | tr '[:upper:]' '[:lower:]')
    echo "Syncing $repo to $SYNC_PKGS/$repo"
    repourl=$repo
    # If you only want to mirror 32bit packages, you can add
    # " --exclude=os/x86_64" after "--delete"
    # If you only want to mirror 64bit packages, use "--exclude=os/i686"
    # If you want both 32bit and 64bit, leave the following line as it is
    #rsync -rptv --delete --exclude=os/x86_64 rsync.archlinux.org::$repourl "$SYNC_PKGS/$repo" >> "$SYNC_LOGS/$LOG_FILE"
    parametros="-av --progress -hh -k --stats --partial --partial-dir=$PARTIAL_DIR"
    # options to delete files that no longer exist on the server
    if $DELETE; then
    parametros=$parametros" --delete --delete-excluded"
    fi
    if test -e $EXCLUDE_FROM ; then
    parametros="$parametros --exclude-from=$EXCLUDE_FROM"
    fi
    echo $parametros
    rsync $parametros --ipv4 --exclude=os/x86_64 $SYNC_SERVER$repourl "$SYNC_PKGS/"
    # If you know of a rsync mirror that is geographically closer to you than archlinux.org
    # simply replace ".archlinux.org" with the domain of that mirror in the above line
    done
    # Remove the lock file and exit
    rm -f "$SYNC_LOCK"
    exit 0
    and if there is a file rsync_arch.excludes in the directory where the script runs, it must look like the one below:
    every line that matches a file in the repo will be excluded from the download; otherwise, if prefixed with a +, it will be
    included (man rsync)
    # only downloading packages;
    # if you also want the installation images, comment out the following line
    current/iso/
    wesnoth*
    nexuiz*
    flightgear*
    tremulous*
    sauerbraten*
    scorched3d*
    #torcs-*
    + koffice-l10n-es*
    koffice-l10n-*
    + kde-i18n-es*
    kde-i18n-*
    festival-*
    eclipse-*
    rfc-*
    skype-*
    + openoffice-base*
    + openoffice-es*
    + openoffice-spell-en*
    + openoffice-spell-es*
    openoffice-*

  • Any ideas on how to do a local mirror for this situation?

    I'm starting a project to allow Arch Linux to be used in a cluster environment (autoinstallation of nodes and such). I'm going to implement this where I'm working right now (~25-node cluster). Currently they're using Rocks Clusters.
    The problem is that the internet connection from work is generally really bad during the day. There's an HTTP proxy in the middle. The other day I tried installing Arch Linux using the FTP image, and it took more than 5 hours just to do an upgrade plus installing subversion and a few other packages, right after an FTP installation (which wasn't fast either).
    The idea is that the frontend (the main node of the cluster) would hold a local mirror of packages, so that the nodes use that mirror when they install (the frontend would use it too, because of the bad speed).
    Since I think it is better to update the mirror and perform upgrades only rarely (if something breaks I would leave users stranded until I fix it), I thought I should download a snapshot of extra/ and current/ only once. But the best speed I get from rsync (even at night, when an HTTP transfer from kernel.org goes at 200 KB/s) is ~13 KB/s; this would take days (and when it's done I would have to resync because of any newer packages released in the meantime).
    I could download extra/ and current/ at home (I have 250 KB/s downstream but get ~100 KB/s from rsync) and record several CDs (6!... ~(3 GB + 700 MB)/700 MB), but that's not very nice. Maybe this would be just for the first time; afterwards an rsync would take a lot less, though I don't know how much less.
    Obviously I could speed things up a little if I downloaded the full ISO and rsynced current using that as a base. But for extra/ I don't have ISOs.
    I think it's a little impractical to download everything, as I wouldn't need the whole of extra/ anyway. But it's hard to know all the packages needed and their dependencies in order to download only those.
    So... I would like to know if anyone has any ideas on how to make this practical. I wouldn't want my whole project to crumble because of this detail.
    It's annoying because pacman at home always works at max speed.
    BTW, I've read the HOWTO that explains how to mount pacman's cache on the nodes to have a shared cache, but I'm not sure that's a good option. Anyway, it would imply downloading everything at work, which would take ages.

    V01D wrote: After installation, the packages in the cache are the ones from current. All the stuff from extra/ won't be there until I install something from it.
    Anyway, if I install from a full CD I get old packages, which I have to pacman -Syu after installation (and that takes a long time).
    Oh, so that's how it is.
    V01D wrote:
    I think I'm going to try this:
    * rsync at home (already got current last night)
    * burn a DVD
    * go to work and then update the packages on the DVD using rsync again (this should be fast if I don't wait too long after burning it)
    And to optimize further rsyncs:
    * Do a first install on all nodes and try it out for a few days (so that I install all the packages needed)
    * Construct a list of the packages used by all nodes and the frontend
    * Remove them from my mirror
    * Do further rsync updates, only updating the files I already have
    This would be the manual approach to the shared-cache idea, I think.
    Hmm... but why do you want to use rsync? You'd need to download the whole repo, which is quite large (current + extra + testing + community > 5.1 GB; extra is the largest). I suggest you download only those packages, and their dependencies, that you actually use.
    I am in a similar situation. At work I have unlimited traffic (48 kbps by day and 128 kbps at night); at home, a fast connection (up to 256 kbps) but I pay for every megabyte (a little, but after 100-500 megabytes it becomes very noticeable). So I run
    yes | pacman -Syuw
    or
    yes | pacman -Syw pkg1 pkg2 ... pkgN
    at work (especially when packages are big), then put the newly downloaded files on my flash drive, then put them into /var/cache/pacman/pkg/ at home, and then I only need to do pacman -Sy before installing, which takes less than a minute.
    I have a 1 GB flash drive, so I can always keep the whole cache on it. Synchronizing work cache <-> flash drive <-> home cache is very easy.
    P.S.: Recently I decided to make a complete mirror of all i686 packages from archlinux.org with rsync - not for myself, but for friends who wanted to install Linux. I don't pay per megabyte at work, but it still took almost a week to download 5.1 GB of packages.
    IMHO, for most local-mirror needs rsync is overkill. How many users use more than 30% of the packages in the repos? So why make a full mirror with rsync when you can cache only the installed packages?
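
    The cache-shuttle workflow above, as a sketch (the flash-drive mount point /media/flash is an assumption; -w makes pacman download without installing):

    ```
    # At work: download the updates into pacman's cache only.
    yes | pacman -Syuw
    # Copy the cache onto the flash drive.
    rsync -a /var/cache/pacman/pkg/ /media/flash/pkg/
    # At home: drop the packages into the local cache, then install offline.
    rsync -a /media/flash/pkg/ /var/cache/pacman/pkg/
    pacman -Sy
    pacman -Su
    ```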

  • How to download a local mirror of public yum, public-yum-downloader.sh

    Hello there
    I wrote a script to create a local mirror of public-yum.oracle.com; it now includes the errata and security bug-fix information.
    First of all, thanks for giving the script a try.
    The script can be located at:
    https://github.com/kikitux/public-yum-downloader
    Direct RAW access:
    https://raw.github.com/kikitux/public-yum-downloader/master/public-yum-downloader.sh
    Download it with:
    # wget https://raw.github.com/kikitux/public-yum-downloader/master/public-yum-downloader.sh
    The hierarchy is 100% the same as what is on public-yum.
    The script can take several arguments, like -P for the OS directory, and --url for where the same path will be published, so you can put the mirror in a different path.
    For example, I have my own repo in /u02/stage/ and it is shared as http://mirandaa00/stage.
    In my Apache config I have:
    Alias /stage "/u02/stage/"
    <Directory "/u02/stage/">
    Options Indexes MultiViews FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
    </Directory>
    That way, I have everything I want under my own path.
    When you use the --url option, the script will create a local-yum-ol6.repo file with the URL you gave, with GPG enabled, so you can be sure nothing goes wrong in the middle.
    I use this script this way:
    As root, I have /root/bin/dl.sh with this content:
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 6.latest --url http://mirandaa00/stage -l /u02/stage/repo/OracleLinux/OL6/
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 5.latest --url http://mirandaa00/stage -l /u02/stage/repo/OracleLinux/OL5/
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 4.latest --url http://mirandaa00/stage -l /u02/stage/repo/EnterpriseLinux/EL4/
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 6.4 --url http://mirandaa00/stage -l /u02/stage/repo/OracleLinux/OL6/
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 5.9 --url http://mirandaa00/stage -l /u02/stage/repo/OracleLinux/OL5/
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 4.9 --url http://mirandaa00/stage -l /u02/stage/repo/EnterpriseLinux/EL4/
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 4.8 --url http://mirandaa00/stage -l /u02/stage/repo/EnterpriseLinux/EL4/
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 6.UEK --url http://mirandaa00/stage
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 5.UEK --url http://mirandaa00/stage
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -r ol6_addons --url http://mirandaa00/stage
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -r el5_addons --url http://mirandaa00/stage
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -r el5_oracle_addons --url http://mirandaa00/stage
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -r ol6_playground_latest
    The -l option will look in that path to find the rpm; useful, for example, if you have a DVD that you want to use as an initial cache.
    I run my commands that way because when 5.9 came out, I already had a lot of those rpms in 5.8 or 5.latest, right?
    The worst that can happen is that the rpm is not there and has to be downloaded; but if it is there, it will be copied.
    For UEK and the addons, those are unique rpms, so I don't use -l.
    For the playground (the new kernels based directly on 3.x), I don't use --url, as I don't want the script to enable that repo, but I do want to download what that channel has.
    So, for known versions 6.0 to 6.4 you can use -R 6.n, or even -R 6.UEK.
    For other repos you can pass the name as -r repo.
    Regarding OVM3: it is not in the repo, so I don't use my script for it; however, you can use the tools yourself:
    mkdir -p /u02/stage/repo/OracleVM/OVM3/latest/x86_64/repodata/.cache
    and create a repo file
    cat /u02/stage/public-yum-ovm3.repo
    [ovm3_latest]
    name=Oracle Linux $releasever Latest (x86_64)
    baseurl=http://public-yum.oracle.com/repo/OracleVM/OVM3/latest/x86_64/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-el5
    gpgcheck=1
    enabled=1
    Then, you can download what is there as:
    http_proxy=http://proxy:3128 yumdownloader -c /u02/stage/public-yum-ovm3.repo --destdir=/u02/stage/repo/OracleVM/OVM3/latest/x86_64/ '*'
    createrepo -v -c /u02/stage/repo/OracleVM/OVM3/latest/x86_64/repodata/.cache /u02/stage/repo/OracleVM/OVM3/latest/x86_64
    Please note that yumdownloader takes --destdir=/path, then a SPACE, then what you want to download; as we want a mirror, '*'.
    Any questions: ask here, or feel free to mail me at [email protected]
    If you have time, check http://kikitux.net
    Alvaro.
    Edited by: Alvaro Miranda on May 6, 2013 9:13 PM

    Keep it running; from time to time it takes FOREVER.
    The good thing is that the script already downloaded the repo file and the GPG key, so the internet connection is working.
    Where you tell me it is waiting forever, it is downloading the metadata of the files and building the list of packages and dependencies to download.
    Then, with that list, it will use wget to download the rpm files.
    Under /var/tmp/public-yum-downloader/ it leaves some files that you can use to check what it is doing: a folder for x86_64 and one for i386, holding a local copy of the metadata.
    /var/tmp/public-yum-downloader/list.log shows the output of yumdownloader and then the output of wget.
    I don't think your download will be slower than mine... I am in New Zealand. :D
    Alvaro.

  • Problem with local mirror syncing

    Hi all! And sorry, my English is bad.
    I rewrote the script from here to sync a mirror for my local network.
    Here is the result: http://ix.io/1aK or http://paste.pocoo.org/raw/263591/ or, permanently, http://ix.io/user/atommixz
    It supports http (maybe ftp?) mirroring of other repos, using lftp for that.
    #!/bin/bash
    # The script to sync a local mirror of the Arch Linux repositories and ISOs
    # Copyright (C) 2007 Woody Gilk <[email protected]>
    # Modifications by Dale Blount <[email protected]>
    # and Roman Kyrylych <[email protected]>
    # and Vadim Gamov <[email protected]>
    # and Aleksey Frolov <[email protected]>
    # Licensed under the GNU GPL (version 2)
    USECOLOR=yes
    . /etc/rc.d/functions
    # Filesystem locations for the sync operations
    SYNC_HOME="/home/mirror"
    SYNC_LOGS="$SYNC_HOME/logs"
    SYNC_FILES="$SYNC_HOME/files"
    SYNC_LOCK="$SYNC_HOME/mirrorsync.lck"
    SYNC_REPO=(core extra community multilib iso archlinuxfr catalyst)
    #SYNC_REPO=(archlinuxfr catalyst)
    typeset -A REPO_URL
    REPO_URL=(
    [archlinuxfr]='http://repo.archlinux.fr/x86_64'
    [catalyst]='http://catalyst.apocalypsus.net/repo/catalyst/x86_64'
    )
    #SYNC_SERVER=distro.ibiblio.org::distros/archlinux
    SYNC_SERVER=mirror.yandex.ru::archlinux
    RATE_LIMIT=50 # in kb/s
    TIME_OUT=5 # in sec
    # Set the format of the log file name
    # This example will output something like this: sync_20070201-8.log
    #LOG_FILE="pkgsync_$(date +%Y%m%d-%H).log"
    LOG_FILE="pkgsync_$(date +%Y%m%d).log"
    # Watchdog part (maximum time in seconds the script may run uninterrupted).
    # Needed for slow and/or unstable links, to prevent rsync from hanging.
    # A new instance of the script checks for the timeout; if it has expired,
    # it kills the previous instance, otherwise it exits without doing
    # any work.
    WD_TIMEOUT=10800
    # Do not edit the following lines, they protect the sync from running more than
    # one instance at a time
    if [ ! -d $SYNC_HOME ]; then
    printhl "$SYNC_HOME does not exist, please create it, then run this script again."
    exit 1
    fi
    if [ -f $SYNC_LOCK ];then
    OPID=`head -n1 $SYNC_LOCK`;
    TIMEOUT=`head -n2 $SYNC_LOCK|tail -n1`;
    NOW=`date +%s`;
    if [ "$NOW" -ge "$TIMEOUT" ];then
    kill -9 $OPID;
    fi
    MYNAME=`basename $0`;
    TESTPID=`ps -p $OPID|grep $OPID|grep $MYNAME`;
    if [ "$TESTPID" != "" ];then
    printhl "exit";
    exit 1;
    else
    rm $SYNC_LOCK;
    fi
    fi
    echo $$ > "$SYNC_LOCK"
    echo `expr \`date +%s\` + $WD_TIMEOUT` >> "$SYNC_LOCK"
    # End of non-editable lines
    # Create the log file and insert a timestamp
    touch "$SYNC_LOGS/$LOG_FILE"
    printhl "Starting sync on $(date --rfc-3339=seconds)" | tee -a "$SYNC_LOGS/$LOG_FILE"
    for repo in ${SYNC_REPO[@]}; do
    repo=$(echo $repo | tr '[:upper:]' '[:lower:]')
    printhl "Syncing $repo" | tee -a "$SYNC_LOGS/$LOG_FILE"
    NEXT=false
    for i in ${!REPO_URL[*]}; do
    if [ $i = $repo ]; then
    mkdir -p "$SYNC_FILES/$repo"
    cd "$SYNC_FILES/$repo"
    lftp -c "\
    set xfer:log no; \
    set net:limit-rate $[RATE_LIMIT * 1000]; \
    mirror \
    --delete \
    --only-newer \
    --verbose=3 \
    ${REPO_URL[$repo]}" | tee -a "$SYNC_LOGS/$LOG_FILE"
    date --rfc-3339=seconds > "$SYNC_FILES/$repo.lastsync"
    NEXT=true
    #sleep $TIME_OUT
    fi
    done
    if $NEXT; then continue; fi
    rsync -rtvHh \
    --bwlimit=$RATE_LIMIT \
    --no-motd \
    --delete-after \
    --delete-excluded \
    --prune-empty-dirs \
    --delay-updates \
    --copy-links \
    --perms \
    --include="*/" \
    --include="latest/*x86_64.iso" \
    --include="latest/*sum*.txt" \
    --include="archboot/latest/*.iso" \
    --include="os/x86_64/*" \
    --exclude="*" \
    $SYNC_SERVER/$repo "$SYNC_FILES" | tee -a "$SYNC_LOGS/$LOG_FILE"
    # Create $repo.lastsync file with timestamp like "2007-05-02 03:41:08+03:00"
    # which may be useful for users to know when the repository was last updated
    date --rfc-3339=seconds > "$SYNC_FILES/$repo.lastsync"
    # Sleep 5 seconds after each repository to avoid too many concurrent connections
    # to rsync server if the TCP connection does not close in a timely manner
    sleep $TIME_OUT
    done
    # Insert another timestamp and close the log file
    printhl "Finished sync on $(date --rfc-3339=seconds)" | tee -a "$SYNC_LOGS/$LOG_FILE"
    printsep >> "$SYNC_LOGS/$LOG_FILE"
    # Remove the lock file and exit
    rm -f "$SYNC_LOCK"
    unset REPO_URL
    exit 0
    But I have a problem. If I run
    sudo pacman -Syu
    on my server, it works fine:
    [atommixz@fileserver ~]$ date; sudo pacman -Syu; echo "------"; date; sudo pacman -Syu
    Sat Sep 18 19:55:47 MSD 2010
    :: Synchronizing package databases...
    core 35,7K 14,2M/s 00:00:00 [#############################################################] 100%
    extra 465,9K 189,3M/s 00:00:00 [#############################################################] 100%
    community 383,1K 198,6M/s 00:00:00 [#############################################################] 100%
    archlinuxfr is up to date
    :: Starting full system upgrade...
    there is nothing to do
    [atommixz@fileserver ~]$ date; sudo pacman -Syu
    Sat Sep 18 19:55:48 MSD 2010
    :: Synchronizing package databases...
    core is up to date
    extra is up to date
    community is up to date
    archlinuxfr is up to date
    :: Starting full system upgrade...
    there is nothing to do
    But if I try the same on my desktop, it works wrong: it always re-downloads the databases, although they do end up updated properly.
    [atommixz@relentless ~]$ date; sudo pacman -Syu; echo "------"; date; sudo pacman -Syu
    Sat Sep 18 19:58:42 MSD 2010
    :: Synchronizing package databases...
    core 35,7K 34,0M/s 00:00:00 [#############################################################] 100%
    extra 465,9K 58,7M/s 00:00:00 [#############################################################] 100%
    community 383,1K 57,9M/s 00:00:00 [#############################################################] 100%
    multilib 19,3K 34,7M/s 00:00:00 [#############################################################] 100%
    archlinuxfr 18,6K 42,6M/s 00:00:00 [#############################################################] 100%
    catalyst 2,2K 67,7M/s 00:00:00 [#############################################################] 100%
    :: Starting full system upgrade...
    there is nothing to do
    Sat Sep 18 19:58:43 MSD 2010
    :: Synchronizing package databases...
    core 35,7K 34,0M/s 00:00:00 [#############################################################] 100%
    extra 465,9K 58,7M/s 00:00:00 [#############################################################] 100%
    community 383,1K 64,8M/s 00:00:00 [#############################################################] 100%
    multilib 19,3K 48,9M/s 00:00:00 [#############################################################] 100%
    archlinuxfr 18,6K 38,9M/s 00:00:00 [#############################################################] 100%
    catalyst 2,2K 55,5M/s 00:00:00 [#############################################################] 100%
    :: Starting full system upgrade...
    there is nothing to do
    What am I doing wrong?

    I'm not sure, but your script may simply be out of date and no longer work.
    I believe that creating a full local mirror is discouraged because of the bandwidth it consumes.
    Perhaps you could try this instead.

  • iCloud found an error while attempting to connect to the server

    I cannot log in!
    A pop-up window on my PC says (iPhone and iPad work fine):
    "iCloud found an error while attempting to connect to the server."
    What should I do?

    Jan Odegaard wrote:
    I cannot log in!
    A pop-up window on my PC says (iPhone and iPad work fine):
    "iCloud found an error while attempting to connect to the server."
    What should I do?
    Tell us what the pop-up actually said; please post a screenshot.

  • Create a local mirror of public yum

    I want to create a private local mirror of the public-yum repository. This would allow me to run updates, or the magical "yum install oracle-validated", on machines that don't have internet access.
    Any guides or ideas on how to do this?
    Thanks.
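    A common approach (a sketch only; the repo id and paths below are hypothetical, so substitute the real ones from your /etc/yum.repos.d/public-yum-*.repo file) is to pull the channel down with reposync from yum-utils, rebuild the metadata with createrepo, and then point the offline machines at the result through a file:// or http:// baseurl:

```shell
#!/bin/sh
# On a host that does have internet access (hypothetical repo id):
#   reposync --repoid=el5_latest --download_path=/var/mirror/yum
#   createrepo /var/mirror/yum/el5_latest
# Then drop a .repo file like this one onto the offline machines:
cat > /tmp/local-mirror.repo <<'EOF'
[local_el5_latest]
name=Local mirror of public-yum el5_latest
baseurl=file:///var/mirror/yum/el5_latest
enabled=1
gpgcheck=0
EOF
```

    With that in place, "yum install oracle-validated" on the offline machines resolves against the local copy instead of the internet.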

    Re: local YUM repository Channel registration fail
    Some problems that I faced while creating my local yum:

  • [solved] local copy of wiki

    I _know_ this has been answered before, but I forgot to bookmark it and now I can't find it.
    Can anyone please give me the command to download the entire wiki (English only) for local use?
    Again, I apologize for asking something that has already been answered, but I've just spent over two hours trying to find it (both on Google and in this forum).
    Last edited by perbh (2009-10-15 16:59:13)

    *lol* didn't realize it was as easy as that - thanks a ton!! (running off to mark it as solved)
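    For anyone finding this thread later, the usual answer is a recursive wget; against the real wiki it would look something like `wget -r -np -k -E https://wiki.archlinux.org/` (-r recursive, -np never ascend to the parent directory, -k rewrite links for offline viewing, -E add .html extensions). A self-contained sketch of the same flags against a throwaway local page:

```shell
#!/bin/sh
# Serve a one-page site locally, then mirror it recursively with wget.
mkdir -p /tmp/wikidemo
echo '<html><body>offline copy</body></html>' > /tmp/wikidemo/index.html
python3 -m http.server 8037 --directory /tmp/wikidemo >/dev/null 2>&1 &
SRV=$!
sleep 2
# -r recursive, -np never ascend, -k fix links for local viewing,
# -nH drop the hostname directory, -P choose the output directory.
wget -q -r -np -k -nH -P /tmp/wikiout http://127.0.0.1:8037/
kill "$SRV"
```

    The mirrored page then lives under /tmp/wikiout/ and can be browsed without a network connection.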

  • Creating a local mirror of repository for home use

    Hi,
    I am thinking of syncing the whole Arch mirror (at least current and extra). I could just download the whole directory from the net, including the db file, but I'm a bit worried that, since the download takes a while, somebody could update something in the middle of it and the whole thing would stop working because the db file no longer matches the packages.
    How could I do that? I want to avoid creating my own repo with gensync; I want the original one :-)
    Thanks for any help provided!!!

    I'm not quite sure what you're asking.  But I synced my machine with the Arch repos using "abs".  I make my own packages in `/var/abs/local` and keep my custom package repo in `/home/pkgs/` (using gensync).  Whenever I want to customize an Arch package, I just copy the entire directory from `/var/abs/<package>` into `/var/abs/local`.
    The actual "abs" sync only takes a matter of seconds, since it downloads just the PKGBUILDs and any associated patches and scripts, not the actual packages.
    There was only one occasion I can remember when the Arch repos were out of sync with mine, right after I had just run "abs": it was while rebuilding the "bash" package.  That was easy enough to fix; I posted the problem, and the current "bash" package was back in sync within about an hour.  It's a very rare case indeed to sync before the developers have updated the repo with a change.  Either way, just keep resyncing your local "abs" tree and everything will stay up to date.

  • Found a mistake in Russian translation - how to re...

    I found a somewhat incorrect translation; in other words, I have a better one.
    How/where can I report this?

    Best to report it here and I'll forward it to the translation team. Please also mention which platform you discovered the bug on (Windows, Mac, Android).
