Create a local mirror of public yum

I want to create a private local mirror of the public-yum repository. This allows me to run updates or the magical "yum install oracle-validated" on machines that don't have internet access.
Any guide or ideas about how to do this?
Thanks.

Re: local YUM repository Channel registration fail
Some problems that I faced while creating my local yum.

Similar Messages

  • How to download a local mirror of public yum, public-yum-downloader.sh

    Hello there
    I wrote a script to create a local mirror of public-yum.oracle.com; it now includes the errata and security bug fix information.
    First of all, thanks for giving a try to the script.
    The script can be located at:
    https://github.com/kikitux/public-yum-downloader
    Direct RAW access:
    https://raw.github.com/kikitux/public-yum-downloader/master/public-yum-downloader.sh
    Download as
    # wget https://raw.github.com/kikitux/public-yum-downloader/master/public-yum-downloader.sh
    The hierarchy is 100% the same as what is on public-yum
    The script can take several arguments, like -P for the OS directory and --url for the URL where the same path will be published, so you can put the mirror in a different path.
    For example, I have my own repo in /u02/stage/ and it is shared as http://mirandaa00/stage
    On my Apache server I have:
    Alias /stage "/u02/stage/"
    <Directory "/u02/stage/">
    Options Indexes MultiViews FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
    </Directory>
    In that way, I have everything I want in my own path.
    When you use the --url option, the script will create a local-yum-ol6.repo file with the URL you gave, with GPG checking enabled, so you can be sure nothing is tampered with along the way.
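    For reference, the generated local-yum-ol6.repo should look roughly like this; the section name and the gpgkey path below are my assumptions, since the exact contents depend on the script version:
    [local_ol6_latest]
    name=Oracle Linux 6 Latest (x86_64) - local mirror
    baseurl=http://mirandaa00/stage/repo/OracleLinux/OL6/latest/x86_64/
    gpgkey=http://mirandaa00/stage/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=1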
    I use this script in the following way.
    As root, I have /root/bin/dl.sh with this content:
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 6.latest --url http://mirandaa00/stage -l /u02/stage/repo/OracleLinux/OL6/
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 5.latest --url http://mirandaa00/stage -l /u02/stage/repo/OracleLinux/OL5/
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 4.latest --url http://mirandaa00/stage -l /u02/stage/repo/EnterpriseLinux/EL4/
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 6.4 --url http://mirandaa00/stage -l /u02/stage/repo/OracleLinux/OL6/
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 5.9 --url http://mirandaa00/stage -l /u02/stage/repo/OracleLinux/OL5/
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 4.9 --url http://mirandaa00/stage -l /u02/stage/repo/EnterpriseLinux/EL4/
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 4.8 --url http://mirandaa00/stage -l /u02/stage/repo/EnterpriseLinux/EL4/
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 6.UEK --url http://mirandaa00/stage
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -R 5.UEK --url http://mirandaa00/stage
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -r ol6_addons --url http://mirandaa00/stage
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -r el5_addons --url http://mirandaa00/stage
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -r el5_oracle_addons --url http://mirandaa00/stage
    ~/bin/public-yum-downloader.sh -P /u02/stage/ -p http://proxy:3128 -r ol6_playground_latest
    The -l option will look in that path to find the rpm, which is useful, for example, if you have a DVD that you want to use as an initial cache.
    I run my commands that way because when 5.9 came out, I already had a lot of those rpms in 5.8 or 5.latest, right?
    The worst thing that could happen is that the rpm is not there and has to be downloaded, but if it is there it will just be copied.
    For UEK and addons those are unique rpms, so I don't use -l.
    For the playground, which is the new kernel based on 3.x directly, I don't use --url, as I don't want the script to enable that repo, but I do want to download what that channel has.
    So, for known versions 6.0 to 6.4 you can use -R 6.n or even -R 6.UEK.
    For other repos you can pass the name as -r repo.
    Regarding OVM3: OVM3 is not in the repo, so I don't use my script for that; however, you can use the tools yourself:
    mkdir -p /u02/stage/repo/OracleVM/OVM3/latest/x86_64/repodata/.cache
    and create a repo file
    cat /u02/stage/public-yum-ovm3.repo
    [ovm3_latest]
    name=Oracle Linux $releasever Latest (x86_64)
    baseurl=http://public-yum.oracle.com/repo/OracleVM/OVM3/latest/x86_64/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-el5
    gpgcheck=1
    enabled=1
    Then, you can download what is there as:
    http_proxy=http://proxy:3128 yumdownloader -c /u02/stage/public-yum-ovm3.repo --destdir=/u02/stage/repo/OracleVM/OVM3/latest/x86_64/ '*'
    createrepo -v -c /u02/stage/repo/OracleVM/OVM3/latest/x86_64/repodata/.cache /u02/stage/repo/OracleVM/OVM3/latest/x86_64
    Please take note that yumdownloader uses --destdir=/path, then a SPACE, then what you want to download; as we want a mirror, that is '*'.
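    On the machines that will consume the mirror, a repo file pointing at the local copy would be along these lines (hostname and path follow the Apache example above; copying RPM-GPG-KEY-oracle-el5 into the mirror root is my assumption):
    [local_ovm3_latest]
    name=Oracle VM 3 Latest (x86_64) - local mirror
    baseurl=http://mirandaa00/stage/repo/OracleVM/OVM3/latest/x86_64/
    gpgkey=http://mirandaa00/stage/RPM-GPG-KEY-oracle-el5
    gpgcheck=1
    enabled=1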
    Any questions, ask here, or feel free to mail me at [email protected]
    If you have time, check http://kikitux.net
    Alvaro.
    Edited by: Alvaro Miranda on May 6, 2013 9:13 PM

    Keep it running; sometimes it takes forever.
    The good thing is the script has already downloaded the repo file and the GPG key, so internet access is working.
    Where you tell me it is waiting forever, it is downloading the metadata of the files and building the list of packages and dependencies to download.
    Then, with that list, it will use wget to download the rpm files.
    In /var/tmp/public-yum-downloader/ it will leave some files that you can use to check what it's doing, with a folder for x86_64 and i386 holding a local copy of the metadata.
    /var/tmp/public-yum-downloader/list.log will show the output of yumdownloader and then the output of wget.
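    To follow progress while a download is running, you can simply tail that log:
    tail -f /var/tmp/public-yum-downloader/list.log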
    I don't think your download will be slower than mine... I am in New Zealand. :D
    Alvaro.

  • Creating a local mirror of repository for home use

    Hi,
    I am thinking of syncing the whole Arch mirror (at least current and extra). I could just download the whole directory from the net, including the db file, though I am a bit scared that, since it takes a while, somebody could update something during the download and the whole thing would no longer work, due to a db file that is out of date.
    How could I do that? I am trying to avoid creating my own repo with gensync; I want the original one :-)
    thanks for any help provided!!!

    I'm not quite sure what you're asking.  But I synced my machine with the Arch repos using "abs".  I make my own packages in `/var/abs/local` and have my custom package repo in `/home/pkgs/` (using gensync).  Whenever I want to customize an Arch package (in `/var/abs/local`), I just copy the entire directory from `/var/abs/<package>` to the local path.
    The actual "abs" sync only takes a matter of seconds, since it just downloads the PKGBUILDs and any associated patches, scripts, etc., not the actual packages.
    There was only 1 occasion that I can remember when the Arch repos were not in sync with mine, right after I had just used "abs" in fact.  It was while rebuilding the "bash" package.  That was an easy enough fix though.  I posted the problem, and before I knew it, the current "bash" package was in sync again within an hour I think.  I think that's a very rare case indeed when I synced and the developers hadn't yet updated the repo with the change.  Either way, just keep resyncing your local "abs" tree and everything will be kept up to date.
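    For that layout, the gensync call is short; something along these lines (the database file name "custom.db.tar.gz" is my assumption, and it must match the repository tag used in pacman.conf):
    gensync /var/abs/local /home/pkgs/custom.db.tar.gz /home/pkgs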

  • Oracle Public Yum Slow Today

    The Oracle Public Yum seems slow today. Does anyone notice the same? Does anyone know of a mirror?
    I keep retrying the yum operation and occasionally I can download a package. But most of the time, it looks like:
    Package(s) data still to download: 4.7 M
    (1/6): libcom_err-1.41.12-14.el6.i686.rpm | 36 kB 00:00
    (2/6): libcom_err-1.41.12-14.el6.x86_64.rpm | 36 kB 00:00
    http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/getPackage/libss-1.41.12-14.el6.x86_64.rpm: [Errno 12] Timeout on http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/getPackage/libss-1.41.12-14.el6.x86_64.rpm: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
    Trying other mirror.
    http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/getPackage/selinux-policy-3.7.19-195.0.1.el6_4.3.noarch.rpm: [Errno 14] PYCURL ERROR 56 - "Failure when receiving data from the peer"
    Trying other mirror.

    Give the script I wrote to create a local mirror a try.
    It has a -m option for minimal packages for lxc hosts.
    https://github.com/kikitux/public-yum-downloader
    examples of usage
    http://kikitux.net/ol/public-yum-downloader.html
    Alvaro

  • Problem with local mirror syncing

    Hi all! And sorry, my English is bad.
    I rewrote a script from here to sync a mirror for my local network.
    Here is the result: http://ix.io/1aK or http://paste.pocoo.org/raw/263591/ or permanently at http://ix.io/user/atommixz
    It supports http (maybe ftp?) mirroring of other repos, using lftp for it.
    #!/bin/bash
    # The script to sync a local mirror of the Arch Linux repositories and ISOs
    # Copyright (C) 2007 Woody Gilk <[email protected]>
    # Modifications by Dale Blount <[email protected]>
    # and Roman Kyrylych <[email protected]>
    # and Vadim Gamov <[email protected]>
    # and Aleksey Frolov <[email protected]>
    # Licensed under the GNU GPL (version 2)
    USECOLOR=yes
    . /etc/rc.d/functions
    # Filesystem locations for the sync operations
    SYNC_HOME="/home/mirror"
    SYNC_LOGS="$SYNC_HOME/logs"
    SYNC_FILES="$SYNC_HOME/files"
    SYNC_LOCK="$SYNC_HOME/mirrorsync.lck"
    SYNC_REPO=(core extra community multilib iso archlinuxfr catalyst)
    #SYNC_REPO=(archlinuxfr catalyst)
    typeset -A REPO_URL
    REPO_URL=(
    [archlinuxfr]='http://repo.archlinux.fr/x86_64'
    [catalyst]='http://catalyst.apocalypsus.net/repo/catalyst/x86_64'
    )
    #SYNC_SERVER=distro.ibiblio.org::distros/archlinux
    SYNC_SERVER=mirror.yandex.ru::archlinux
    RATE_LIMIT=50 # in kb/s
    TIME_OUT=5 # in sec
    # Set the format of the log file name
    # This example will output something like this: sync_20070201-8.log
    #LOG_FILE="pkgsync_$(date +%Y%m%d-%H).log"
    LOG_FILE="pkgsync_$(date +%Y%m%d).log"
    # Watchdog part (time in seconds of uninterruptible work of the script).
    # Needed for low-speed and/or unstable links to prevent
    # rsync from hanging up.
    # A new instance of the script checks for the timeout; if it has expired
    # it will kill the previous instance, otherwise it will exit without
    # doing any work.
    WD_TIMEOUT=10800
    # Do not edit the following lines, they protect the sync from running more than
    # one instance at a time
    if [ ! -d $SYNC_HOME ]; then
    printhl "$SYNC_HOME does not exist, please create it, then run this script again."
    exit 1
    fi
    if [ -f $SYNC_LOCK ];then
    OPID=`head -n1 $SYNC_LOCK`;
    TIMEOUT=`head -n2 $SYNC_LOCK|tail -n1`;
    NOW=`date +%s`;
    if [ "$NOW" -ge "$TIMEOUT" ];then
    kill -9 $OPID;
    fi
    MYNAME=`basename $0`;
    TESTPID=`ps -p $OPID|grep $OPID|grep $MYNAME`;
    if [ "$TESTPID" != "" ];then
    printhl "exit";
    exit 1;
    else
    rm $SYNC_LOCK;
    fi
    fi
    echo $$ > "$SYNC_LOCK"
    echo `expr \`date +%s\` + $WD_TIMEOUT` >> "$SYNC_LOCK"
    # End of non-editable lines
    # Create the log file and insert a timestamp
    touch "$SYNC_LOGS/$LOG_FILE"
    printhl "Starting sync on $(date --rfc-3339=seconds)" | tee -a "$SYNC_LOGS/$LOG_FILE"
    for repo in ${SYNC_REPO[@]}; do
    repo=$(echo "$repo" | tr '[:upper:]' '[:lower:]')
    printhl "Syncing $repo" | tee -a "$SYNC_LOGS/$LOG_FILE"
    NEXT=false
    for i in ${!REPO_URL[*]}; do
    if [ $i = $repo ]; then
    mkdir -p "$SYNC_FILES/$repo"
    cd "$SYNC_FILES/$repo"
    lftp -c "\
    set xfer:log no; \
    set net:limit-rate $[RATE_LIMIT * 1000]; \
    mirror \
    --delete \
    --only-newer \
    --verbose=3 \
    ${REPO_URL[$repo]}" | tee -a "$SYNC_LOGS/$LOG_FILE"
    date --rfc-3339=seconds > "$SYNC_FILES/$repo.lastsync"
    NEXT=true
    #sleep $TIME_OUT
    fi
    done
    if $NEXT; then continue; fi
    rsync -rtvHh \
    --bwlimit=$RATE_LIMIT \
    --no-motd \
    --delete-after \
    --delete-excluded \
    --prune-empty-dirs \
    --delay-updates \
    --copy-links \
    --perms \
    --include="*/" \
    --include="latest/*x86_64.iso" \
    --include="latest/*sum*.txt" \
    --include="archboot/latest/*.iso" \
    --include="os/x86_64/*" \
    --exclude="*" \
    $SYNC_SERVER/$repo "$SYNC_FILES" | tee -a "$SYNC_LOGS/$LOG_FILE"
    # Create $repo.lastsync file with timestamp like "2007-05-02 03:41:08+03:00"
    # which may be useful for users to know when the repository was last updated
    date --rfc-3339=seconds > "$SYNC_FILES/$repo.lastsync"
    # Sleep 5 seconds after each repository to avoid too many concurrent connections
    # to rsync server if the TCP connection does not close in a timely manner
    sleep $TIME_OUT
    done
    # Insert another timestamp and close the log file
    printhl "Finished sync on $(date --rfc-3339=seconds)" | tee -a "$SYNC_LOGS/$LOG_FILE"
    printsep >> "$SYNC_LOGS/$LOG_FILE"
    # Remove the lock file and exit
    rm -f "$SYNC_LOCK"
    unset REPO_URL
    exit 0
    But I have a problem. If I run
    sudo pacman -Syu
    on my server, it works fine.
    [atommixz@fileserver ~]$ date; sudo pacman -Syu; echo "------"; date; sudo pacman -Syu
    Sat Sep 18 19:55:47 MSD 2010
    :: Synchronizing package databases...
    core 35,7K 14,2M/s 00:00:00 [#############################################################] 100%
    extra 465,9K 189,3M/s 00:00:00 [#############################################################] 100%
    community 383,1K 198,6M/s 00:00:00 [#############################################################] 100%
    archlinuxfr is up to date
    :: Starting full system upgrade...
    there is nothing to do
    [atommixz@fileserver ~]$ date; sudo pacman -Syu
    Sat Sep 18 19:55:48 MSD 2010
    :: Synchronizing package databases...
    core is up to date
    extra is up to date
    community is up to date
    archlinuxfr is up to date
    :: Starting full system upgrade...
    there is nothing to do
    But if I try it on my desktop, it works wrong: it always re-downloads the databases, even though they are updated properly.
    [atommixz@relentless ~]$ date; sudo pacman -Syu; echo "------"; date; sudo pacman -Syu
    Sat Sep 18 19:58:42 MSD 2010
    :: Synchronizing package databases...
    core 35,7K 34,0M/s 00:00:00 [#############################################################] 100%
    extra 465,9K 58,7M/s 00:00:00 [#############################################################] 100%
    community 383,1K 57,9M/s 00:00:00 [#############################################################] 100%
    multilib 19,3K 34,7M/s 00:00:00 [#############################################################] 100%
    archlinuxfr 18,6K 42,6M/s 00:00:00 [#############################################################] 100%
    catalyst 2,2K 67,7M/s 00:00:00 [#############################################################] 100%
    :: Starting full system upgrade...
    there is nothing to do
    Sat Sep 18 19:58:43 MSD 2010
    :: Synchronizing package databases...
    core 35,7K 34,0M/s 00:00:00 [#############################################################] 100%
    extra 465,9K 58,7M/s 00:00:00 [#############################################################] 100%
    community 383,1K 64,8M/s 00:00:00 [#############################################################] 100%
    multilib 19,3K 48,9M/s 00:00:00 [#############################################################] 100%
    archlinuxfr 18,6K 38,9M/s 00:00:00 [#############################################################] 100%
    catalyst 2,2K 55,5M/s 00:00:00 [#############################################################] 100%
    :: Starting full system upgrade...
    there is nothing to do
    What am I doing wrong?

    I'm not sure, but your script may be too old.  That is, it may no longer work.
    I believe that creating a local mirror is discouraged due to the high bandwidth needed to do that.
    Perhaps you could try this instead.

  • Oracle public yum down?

    Hi, I'm trying to install a new oracle database server and am doing the prereqs. I'm following this guide http://www.oracle-base.com/articles/11g/oracle-db-11gr2-installation-on-oracle-linux-5.php and when I go to http://public-yum.oracle.com/ I get Error 324 (net::ERR_EMPTY_RESPONSE) in my browser. Is it working for anyone else?

    Noise can severely limit an Intrusion detection system's effectiveness. Bad packets generated from software bugs, corrupt DNS data, and local packets that escaped can create a significantly high false-alarm rate.
    I think that's what happened to you.
    Kirill Babeyev

  • Oracle Public Yum problems today (23/01/13)

    Is anyone else having issue with the Oracle Linux public yum (http://public-yum.oracle.com/) today?
    Seems to be extremely flaky over here in the UK. I have tried from various places / connections, getting timeouts & errors such as:
    [Errno 14] PYCURL ERROR 56 - "Failure when receiving data from the peer"
    Just thought I would post this up if anyone else was experiencing issues today...
    Jeff

    I re-ran yum last night and everything updated OK.
    I had just deployed a new cluster, and this was the first VM in it which was for testing - hence I spent a while worrying about what perhaps was wrong with the cluster / VM config rather than thinking it was an Oracle yum issue.
    I totally understand that public yum is free and is not supported. People running production systems who want reliable updates should subscribe to Oracle Linux Network - at least for updates, it's not going to break the bank...
    However, sometimes we are deploying VMs to test things, so it's not always going to be feasible to subscribe to Oracle Linux Network every time you create a VM for a test project - like me with my new cluster build yesterday.
    So there really should be a 'status page' where people can go and see what's what with the public yum.
    One thing that I did find worrying was the apparent lack of geographically dispersed mirrors for the public yum. It doesn't appear there were any!
    When a download failed, it would say 'trying other mirror' which also failed, and then the entire download failed.
    With other distros that have issues with yum, I've often seen it try 4 or 5 different mirrors...
    What's the situation with Oracle Linux public yum and mirrors? How many are there? Why did they all fail? Are they all in the same datacenter?
    Jeff

  • Is there a way to create a local package repository

    Is there a way to create a local package repository without technically being a mirror?  For example, setting up multiple AL boxes on my network and having them grab all the latest packages from one AL box?
    Thanks,
    Craig

    What you most likely want is an ABS tree of your own, containing only the PKGBUILDs of those packages which you want to be included in your repository.
    You should already have heard of the gensync program. In short, the parameters are the root of the PKGBUILDs, sorted in subdirectories (i.e. like the ABS tree), the intended name and location of the repository database file, and the directory containing the binary packages.
    Let's assume you downloaded the current ABS tree to your hard drive, as well as all matching (same version as in the PKGBUILDs!) packages from a mirror, but you don't want the reiserfsprogs package in your repository. To achieve that, you must remove the /var/abs/base/reiserfsprogs directory, and may optionally remove the binary package, too. Since gensync analyzes the ABS tree you supplied as a parameter, removing the subdirectory of a specific package will cause this very package to not be included in the generated database. Assuming your packages lie in /home/arch/i686/current, your gensync call would look like this:
    gensync /var/abs /home/arch/i686/current/current.db.tar.gz /home/arch/i686/current
    If there are any discrepancies like
      - PKGBUILD, but no matching binary package found
      - PKGBUILD and binary package versions do not match
      - permission problems (writing the db file must be possible)
    gensync will gladly complain.
    Otherwise you should find the db file in the place you specified. Keep in mind that the name of the db.tar.gz file must be equal to the repository tag in the pacman.conf to use the repo.
    To make sure the db contains the right packages; use
    tar -tzf current.db.tar.gz | less
    to list the contents. Every package has its own subdirectory including the metadata, which is rather obvious considering the file is generated from such a structure in the first place.
    The binary packages along with a correctly generated db file are all you need. Make the repository directory containing these files available through FTP if local availability doesn't cut it for you, edit your pacman.conf if needed, and use it!
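    As a sketch, a matching pacman.conf entry for the example above could look like this (the tag must match the db file name, current.db.tar.gz, from the gensync call; the FTP line is a hypothetical server layout):
    [current]
    Server = file:///home/arch/i686/current
    # or, if shared over FTP:
    # Server = ftp://yourserver/arch/i686/current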
    Adding packages works similarly; all you need is the PKGBUILD in an ABS-like tree (it doesn't have to be the official tree; gensync doesn't care where the files come from, just stick to one subdirectory per PKGBUILD and you'll be fine) and the matching packages somewhere else; run gensync with the appropriate directories, and cackle with glee.
    HTH.

  • [SOLVED] vsftpd on Local Mirror, running but not working

    I'm building a local mirror on a VM (VirtualBox) with a bridged adapter and a fixed IP by following this wiki:
    http://wiki.archlinux.org/index.php/Loc … cal_mirror
    After the painful rsync and setup, I tried pacman -Syu from another Arch VM (no firewall) and received the following error.
    :: Synchronizing package databases...
    error: failed retrieving file 'core.db.tar.gz' from 192.168.100.100 : Service not available, closing control connection
    I've tried nmap from the hosting PC and it shows that vsftpd should be running.
    Starting Nmap 4.62 ( http://nmap.org ) at 2010-08-27 01:03 HKT
    Interesting ports on 192.168.100.100:
    Not shown: 1714 closed ports
    PORT   STATE SERVICE
    21/tcp open  ftp
    MAC Address: 08:00:27:76:33:1C (Cadmus Computer Systems)
    Nmap done: 1 IP address (1 host up) scanned in 1.318 seconds
    In the wiki, it suggests using "ftp" in place of "mirror" for ftp_username & nopriv_user.  I tried both.
    I also find that there is no "archlinux" directory under my /home/mirror/files, as "suggested" by the following statement in vsftpd.conf:
    # Chroot directory for anonymous user
    anon_root=/home/mirror/files/archlinux
    I tried both (1) amending vsftpd.conf to remove the "archlinux" part, and (2) manually adding that directory with owner/group=mirror.
    Meanwhile, I only find 6 items under /home/mirror/files: community, core, extra, community.lastsync, core.lastsync, extra.lastsync.  Have I completed the rsync successfully, or is something missing?  Is the directory structure correct?
    Is the sample vsftpd.conf in the Local Mirror wiki up to date?  I've cross-referenced it with the vsftpd wiki but I'm not knowledgeable enough to find anything useful.
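    For comparison, a minimal anonymous read-only vsftpd setup needs little more than this; it is only a sketch (not the wiki's exact file), with anon_root taken from the directory above:
    listen=YES
    anonymous_enable=YES
    no_anon_password=YES
    write_enable=NO
    anon_root=/home/mirror/files
    ftp_username=ftp
    nopriv_user=ftp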
    What else should I check?
    I love ArchLinux so much that I really hope that it can work.
    Please help.
    Thanks.
    Last edited by dboat (2010-08-27 15:38:14)

    I have tried a couple of Linux distros to learn Linux/networking.  I like ArchLinux's "simple" concept, light weight, updated packages, nice documentation and fast bootup/shutdown.  I have installed ArchLinux over ten times in different virtual machines and a netbook in the past week.  I will keep some, delete some and create more.  I don't have a fast internet connection and that's why I would like to set up my local mirror.  I am a newbie here, so please feel free to let me know if I am taking too much (bandwidth) from the community and it is not encouraged in my case.  And sorry if I have already created any trouble.
    Well, back to my problem.
    1. After the rsync, including everything, the / filesystem now occupies 14G of hard disk space.  Is that a normal size for a local mirror?
    2. I have inserted "Server = file:///home/mirror/files/$repo/os/i686" as the first line in its /etc/pacman.d/mirrorlist
        pacman -Syy  looks fine.
        pacman -Syu  gives a list of warnings (xxx: local is newer than core), ending with "there is nothing to do".
        pacman -S mplayer  starts installation normally, but needs internet mirrors because a large portion of the software is missing/inaccessible on my local mirror.
    3. I have tried to log in with FileZilla from an Ubuntu VM, and received this error message (in FileZilla):
    Status:    Connecting to 192.168.100.100:21...
    Status:    Connection established, waiting for welcome message...
    Response:    421 Service not available.
    Error:    Could not connect to server
    It seems I have issues with both the mirror and vsftpd.  I prefer to resolve the vsftpd problem first, but all suggestions/comments are very welcome.
    Lastly, did I post my question in the wrong place?  If yes, please let me know.

  • Public-yum ol6_latest Metadata file does not match checksum

    Update: public-yum is working today and manual checksum is matching as well.
    I am trying to apply the latest Oracle Linux 6 patches to a fresh 6.4 install from the public-yum.  I am able to successfully apply the latest Oracle Linux 6 UEK from the ol6_UEK_latest repository on public-yum.  I am unable to access the ol6_latest repository on public-yum because I receive a "Metadata file does not match checksum" error.  I have attempted to manually verify the checksum and receive a mismatch as well.  See below for more details.
    Thanks,
    Erick
    $ ### clean up yum cache directory
    $ yum clean all
    Loaded plugins: refresh-packagekit, security
    Cleaning repos: ol6_UEK_latest ol6_latest
    Cleaning up Everything
    $ ### attempt to check for updates
    $ yum check-update
    Loaded plugins: refresh-packagekit, security
    ol6_UEK_latest                                                                           | 1.2 kB     00:00
    ol6_UEK_latest/primary                                                                   | 8.0 MB     00:00
    ol6_UEK_latest                                                                                          183/183
    ol6_latest                                                                               | 1.4 kB     00:00
    ol6_latest/primary                                                                       |  29 MB     00:02
    http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
    Trying other mirror.
    ol6_latest/primary                                                                       |  29 MB     00:02
    http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
    Trying other mirror.
    Error: failure: repodata/primary.xml.gz from ol6_latest: [Errno 256] No more mirrors to try.
    $ ### manually verify checksum by downloading gz for UEK repository
    $ wget http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/x86_64/repodata/primary.xml.gz
    --2013-06-13 09:49:17--  http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/x86_64/repodata/primary.xml.gz
    Connecting to 10.87.79.250:8080... connected.
    Proxy request sent, awaiting response... 200 OK
    Length: 8409269 (8.0M) [application/x-gzip]
    Saving to: “primary.xml.gz”
    100%[======================================================================>] 8,409,269   11.1M/s   in 0.7s
    2013-06-13 09:49:18 (11.1 MB/s) - “primary.xml.gz” saved [8409269/8409269]
    $ ### now download xml for UEK repository
    $ wget http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/x86_64/repodata/repomd.xml
    --2013-06-13 09:52:14--  http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/x86_64/repodata/repomd.xml
    Connecting to 10.87.79.250:8080... connected.
    Proxy request sent, awaiting response... 200 OK
    Length: 1240 (1.2K) [text/xml]
    Saving to: “repomd.xml”
    100%[======================================================================>] 1,240       --.-K/s   in 0s
    2013-06-13 09:52:14 (106 MB/s) - “repomd.xml” saved [1240/1240]
    $ ### get published checksum from xml
    $ grep -nA1 primary.xml repomd.xml
    16:    <location href="repodata/primary.xml.gz"/>
    17-    <checksum type="sha">c8fc85aa170c9da4a04e8a58ab594f67c319e874</checksum>
    $ ### generate checksum for gz
    $ sha1sum primary.xml.gz
    c8fc85aa170c9da4a04e8a58ab594f67c319e874  primary.xml.gz
    $ ### UEK checksums match, clean up UEK files from directory
    $ rm primary.xml.gz repomd.xml
    $ ### manually verify checksum by downloading gz for ol6 repository
    $ wget http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/repodata/primary.xml.gz
    --2013-06-13 10:04:59--  http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/repodata/primary.xml.gz
    Connecting to 10.87.79.250:8080... connected.
    Proxy request sent, awaiting response... 200 OK
    Length: 30474994 (29M) [application/x-gzip]
    Saving to: “primary.xml.gz”
    100%[======================================================================>] 30,474,994  6.64M/s   in 4.5s
    2013-06-13 10:05:04 (6.40 MB/s) - “primary.xml.gz” saved [30474994/30474994]
    $ ### now download xml for ol6 repository
    $ wget http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/repodata/repomd.xml
    --2013-06-13 10:05:10--  http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/repodata/repomd.xml
    Connecting to 10.87.79.250:8080... connected.
    Proxy request sent, awaiting response... 200 OK
    Length: 1429 (1.4K) [text/xml]
    Saving to: “repomd.xml”
    100%[======================================================================>] 1,429       --.-K/s   in 0s
    2013-06-13 10:05:10 (55.7 MB/s) - “repomd.xml” saved [1429/1429]
    $ ### get published checksum from xml
    $ grep -nA1 primary.xml repomd.xml
    16:    <location href="repodata/primary.xml.gz"/>
    17-    <checksum type="sha">c8b3d8c353045b6e96f1eb6ed519c5d6e75faad3</checksum>
    $ ### generate checksum for gz
    $ sha1sum primary.xml.gz
    47c33491455170c1460646ab3652e40087a4aa19  primary.xml.gz
    $ ### ol6 checksums do not match, clean up ol6 files from directory
    $ rm primary.xml.gz repomd.xml
    $
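    If this happens again, the manual check above is easy to script; a rough sketch using the same URLs as the transcript:
    BASE=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/repodata
    wget -q "$BASE/primary.xml.gz" "$BASE/repomd.xml"
    grep -A1 'primary.xml.gz' repomd.xml    # published checksum
    sha1sum primary.xml.gz                  # computed checksum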
    Message was edited by: esigfrid

    Thank you for your advice.  Currently public-yum is working again for my two systems (without any changes on my side).  If I run into this problem again I will give it a try.
    Erick

  • OL6 Public Yum Repo Question

    I am looking for the i386 versions of some programs for my new install of OL6.2.
    If I look in http://public-yum.oracle.com/repo/OracleLinux/OL6/2/base/i386/ I only see i686 versions. For example libXp-1.0.0-15.1.el6.i686.rpm
    if I issue "yum install libXp.i386" it cannot be found. If I look in the EL5 repositories it does have it in the i386 version.
    Since I need to run an application that requires both the 64 bit and 32 bit version be installed...how do I find it on OL6.2? It was also not on the install DVD.
    Thanks for your help.

    Linux-RAC-Admin wrote:
    If I look in http://public-yum.oracle.com/repo/OracleLinux/OL6/2/base/i386/ I only see i686 versions. For example libXp-1.0.0-15.1.el6.i686.rpm
    If you're running 64-bit Oracle Linux 6 and you need the i686 binaries, please DO NOT enable the i686 yum channel. All the required i686 binaries are mirrored into the x86_64 channel as well. Also, you should be using the _latest channel for OL6U2:
    http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/
    If you check that location, you'll see that libXpm-3.5.8-2.el6.i686.rpm is there as well as the x86_64 version.
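    In practice, on 64-bit OL6 you just ask for the i686 arch explicitly (note it is .i686, not .i386, on OL6); for example:
    yum install libXp.i686
    # or both architectures in one go
    yum install libXp.x86_64 libXp.i686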

  • Making archlinux (local) mirror

    Before I post the issue, PLEASE don't send me to the wiki page; I have been there and it needs to be updated big time!
    Okay, I followed the steps on the wiki and I cannot get the mirror script to function; I just get access denied. I believe that I get access denied because I am not an official mirror. There is nowhere on the wiki that shows any other servers that can be used for the script. I tried the ibiblio mirror on the page but it does not work. If anyone has an unofficial mirror, can you please paste your whole sync.sh? Or can someone give me an alternative mirror to rsync.archlinux.org? Thanks
    Last edited by 3nd3r (2007-07-01 05:54:50)

    I got the same thing with the script you mention. The problem was directory hierarchy errors; I made some changes to it and now it works:
    #!/bin/bash
    # This script is used to sync a local mirror of the Arch Linux repositories.
    # Copyright (c)2007 Woody Gilk <[email protected]>
    # Licensed under the GNU GPL (version 2)
    # Original modified!
    # Filesystem locations for the sync operations
    SYNC_HOME="$HOME/archmirror"
    SYNC_LOGS="$SYNC_HOME/logs"
    SYNC_PKGS="$SYNC_HOME/packages"
    EXCLUDE_FROM=rsync_arch.excludes
    # This allows you to set which repositories you would like to sync
    # Valid options are: current, extra, community, unstable, testing, release
    SYNC_REPO=(current extra community )
    # official server: SYNC_SERVER="rsync.archlinux.org::"
    # France: rsync://distrib-coffee.ipsl.jussieu.fr/pub/linux/archlinux/ (rsync)
    # France: rsync://mir1.archlinuxfr.org/archlinux (rsync)
    SYNC_SERVER="distro.ibiblio.org::distros/archlinux/"
    # directory for storing temporary downloads (rsync partial files)
    PARTIAL_DIR="$SYNC_HOME/rsync_partial"
    # delete files that no longer exist on the server
    DELETE=true
    # The following line is the format of the log file name, this example will
    # output something like this: pkgsync_20070201-8.log
    LOG_FILE="pkgsync_$(date +%Y%m%d-%H).log"
    # Do not edit the following lines, they protect the sync from running more than
    # one instance at a time.
    if [ ! -d $SYNC_HOME ]; then
    echo "$SYNC_HOME does not exist, please create it, then run this script again."
    exit 1
    fi
    SYNC_LOCK="$SYNC_HOME/sync_running.lock"
    if [ -f $SYNC_LOCK ] ; then
    echo "aparentemente existe una instancia de este programa en ejecucion... desea continuar? (s):"
    read continua
    if test "$continua" != "s"; then
    exit 1
    fi
    fi
    touch "$SYNC_LOCK"
    # End of non-editable lines
    # Create the log file and insert a timestamp
    #[c]touch "$SYNC_LOGS/$LOG_FILE"
    #[c] echo "==========================================" >> "$SYNC_LOGS/$LOG_FILE"
    #[c] echo ">> Starting sync on $(date)" >> "$SYNC_LOGS/$LOG_FILE"
    #[c] echo ">> ---" >> "$SYNC_LOGS/$LOG_FILE"
    # Rsync each of the repos set in $SYNC_REPO
    for repo in ${SYNC_REPO[@]}; do
    repo=$(echo "$repo" | tr '[:upper:]' '[:lower:]')
    echo "Syncing $repo to $SYNC_PKGS/$repo"
    repourl=$repo
    # If you only want to mirror 32bit packages, you can add
    # " --exclude=os/x86_64" after "--delete"
    # If you only want to mirror 64bit packages, use "--exclude=os/i686"
    # If you want both 32bit and 64bit, leave the following line as it is
    #rsync -rptv --delete --exclude=os/x86_64 rsync.archlinux.org::$repourl "$SYNC_PKGS/$repo" >> "$SYNC_LOGS/$LOG_FILE"
    parametros=" -av --progress -hh -k --stats --partial --partial-dir="$PARTIAL_DIR
    # options to delete files that no longer exist on the server
    if $DELETE; then
    parametros=$parametros" --delete --delete-excluded"
    fi
    if test -e $EXCLUDE_FROM ; then
    parametros="$parametros --exclude-from=$EXCLUDE_FROM"
    fi
    echo $parametros
    rsync $parametros --ipv4 --exclude=os/x86_64 $SYNC_SERVER$repourl "$SYNC_PKGS/"
    # If you know of a rsync mirror that is geographically closer to you than archlinux.org
    # simply replace ".archlinux.org" with the domain of that mirror in the above line
    done
    # Remove the lock file and exit
    rm -f "$SYNC_LOCK"
    exit 0
    And if there is a file rsync_arch.excludes in the directory where the script is run, it should look like the one below (man rsync):
    every line that matches a file in the repo will be excluded from the download; otherwise, if prefixed with a +, it will be
    included (man rsync).
    # only downloading packages;
    # if you want to download the installation images as well, comment this out
    current/iso/
    wesnoth*
    nexuiz*
    flightgear*
    tremulous*
    sauerbraten*
    scorched3d*
    #torcs-*
    + koffice-l10n-es*
    koffice-l10n-*
    + kde-i18n-es*
    kde-i18n-*
    festival-*
    eclipse-*
    rfc-*
    skype-*
    + openoffice-base*
    + openoffice-es*
    + openoffice-spell-en*
    + openoffice-spell-es*
    openoffice-*

  • Local Mirror - Wiki - help please

    Hello peoples,
    I have read and followed the Wiki on having a local mirror  http://wiki2.archlinux.org/index.php/Local%20Mirror
    My updates do not occur automatically, and I do not have the knowledge yet to fix this, so I was wondering if there is a better way of doing it.  I have read man crond and crontab but am left a little confused as to how this can be incorporated into the original script.  Also, I would like to set the time to 0200hrs.  Can somebody help me out please?

    Thank you for your reply i3839.
    From your answer, I can define more clearly what the issue is.  The updating I was referring to was only the "current" and "extra" databases that I sync.  The script given in the wiki didn't work for me, so here is my version that works for me.
    #!/bin/sh
    rsync -avz --delete rsync.archlinux.org::ftp/current/os/i686/ /home/mirror/current/os/i686
    rsync -avz --delete rsync.archlinux.org::ftp/extra/os/i686/ /home/mirror/extra/os/i686
    # rsync -avz --delete rsync.archlinux.org::ftp/testing /mirror/
    # rsync -avz --delete rsync.archlinux.org::ftp/unstable /mirror/
    The two issues I have are:
    1. The script does not run automatically via the cron.daily/sync file, yet when I run the sync.sh file manually it works.
    2. The updates occur at 0002hrs, which I would like to change to 0200hrs instead (see the crontab sketch after the script below).
    cron.daily/sync:
    #!/bin/sh
    SYNCLOGFILE="/var/log/sync.log"
    SYNCLOCKFILE="/var/lock/sync.lock"
    if [ -f $SYNCLOCKFILE ]; then
    # lock file already present, bail
    exit 1
    fi
    echo -n ">>> Sync log for " > $SYNCLOGFILE
    date >> $SYNCLOGFILE
    cd /home/mirror
    touch $SYNCLOCKFILE
    su - evan -c "/home/mirror/sync.sh" >> $SYNCLOGFILE
    rm -f $SYNCLOCKFILE
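    If the exact time matters more than using cron.daily, a plain crontab entry is one alternative; a sketch for /etc/crontab, reusing the paths above:
    # run the mirror sync every day at 02:00
    0 2 * * * root /home/mirror/sync.sh >> /var/log/sync.log 2>&1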
    Also, the sync.log and sync.lock files have not been created. Is this a permissions issue and if so how do I fix it?
    Cheers

  • How to create a local SLD on a newly installed PI 7.3 system.

    Hello
    I wonder how to create a local SLD on a newly installed PI 7.3 system.
    When installing, I chose "register in existing central SLD" (the other option was "no SLD destination").
    On the next screen I chose the SLD HTTP host as the host I was currently installing PI 7.3 on.
    For the SLD HTTP port I used 50000, as the instance was (the default) "00".
    For the SLD Data Supplier User I chose "SLDDSUSER".
    I was then warned: "The SLD Server is not reachable. Choose "Cancel" to modify managing configuration or "OK" to continue. If you choose to continue you will have to execute the smdsetup script later on."
    I opted for "OK".
    If I understand correctly, I have to execute the smdsetup script in order to create a local SLD.
    Does anyone have experience with this?
    Thank you in advance
    Jan

    Hi Jan,
    You first need to run the wizard to setup the SLD locally.
    To do this, go to the NetWeaver Administrator -> Configuration -> Scenarios -> Configuration Wizard -> Functional Wizard Configuration UI
    There, tick System Landscape Directory and select "Enable Automatically".
    After executing this, your SLD will be ready for use.
    The SMD has nothing to do with the SLD, it's just the next step in the installation process.
    Kind regards,
    Mark

  • Any ideas on how to do a local mirror for this situation?

    I'm starting a project to allow ArchLinux to be used in a cluster environment (autoinstallation of nodes and such). I'm going to implement this where I'm working right now (~25-node cluster). Currently they're using RocksClusters.
    The problem is that the internet connection from work is generally really bad during the day. There's an HTTP proxy in the middle. The other day I tried installing archlinux using the FTP image and it took more than 5 hours just to do an upgrade plus installing subversion and other packages, right after an FTP installation (which wasn't fast either).
    The idea is that the frontend (the main node of the cluster) would hold a local mirror of packages so that the nodes use that mirror when they install (the frontend would use it too, because of the bad speed).
    As I think it is better to update the mirror and perform an upgrade only infrequently (if something breaks I would leave users stranded until I fix it), I thought I should download a snapshot of extra/ and current/ only once. But the best speed I get from rsync (even at night, when an HTTP transfer from kernel.org goes at 200KB/s) is ~13KB/s; this would take days (and when it's done I would have to resync because of any newer packages released in the meantime).
    I could download extra/ and current/ at home (I have 250KB/s downstream but I get like ~100KB/s from rsync) and record several CDs (6!... ~(3GB + 700MB)/700MB), but that's not very nice. I think maybe this would only be needed the first time; afterwards an rsync would take a lot less, but I don't know how much less.
    Obviously I could speed things up a little if I download the full ISO and rsync current/ using that as a base. But for extra/ I don't have ISOs.
    I think downloading everything is a little impractical, as I wouldn't need the whole of extra/ anyway. But it's hard to know all the packages needed and their dependencies in order to download only those.
    So... I would like to know if anyone has any ideas on how to make this practical. I wouldn't want my whole project to crumble because of this detail.
    It's annoying because using pacman at home always works at max speed.
    BTW, I've read the HOWTO that explains how to mount pacman's cache on the nodes to have a shared cache, but I'm not very sure that's a good option. Anyway, that would imply downloading everything at work, which would take years.

    V01D wrote: After installation, the packages in the cache are the ones from current. All the stuff from extra/ won't be there until I install something from there.
    Anyway, if I install from a full CD I get old packages which I have to pacman -Syu after installation (and that takes a long time).
    Oh, so that's how it is.
    V01D wrote:
    I think I'm going to try out this:
    * rsync at home (already got current last night)
    * burn a DVD
    * go to work and then update the packages on the DVD using rsync again (this should be fast if I don't wait too long after recording it)
    And to optimize further rsync's:
    * Do a first install on all nodes and try it out for a few days (so I install all packages needed)
    * Construct a list of packages used by all nodes and frontend
    * Remove them from my mirror
    * Do further rsync updates only updating the files I already have
    This would be the manual approach of the shared cache idea I think.
    Hmm... but why do you want to use rsync? You'll need to download the whole repo, which is quite large (current + extra + testing + community > 5.1GB; extra is the largest). I suggest you download only those packages, and their dependencies, that you actually use.
    I have a similar situation. At work I have unlimited traffic (48kbps during the day and 128kbps at night); at home, a fast connection (up to 256kbps) but I pay for every megabyte (a little, but after 100-500 megabytes it becomes very noticeable). So I do
    yes | pacman -Syuw
    or
    yes | pacman -Syw pkg1 pkg2 ... pkgN
    at work (especially when the packages are big), then put the newly downloaded files on my flash drive, then put them into /var/cache/pacman/pkg/ at home, and then I only need to do pacman -Sy before installing, which takes less than a minute.
    I have a 1GB flash drive so I can always keep the whole cache on it. Synchronizing work cache <-> flash drive <-> home cache is very easy.
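    A rough sketch of that shuttle, assuming the flash drive is mounted at /mnt/flash:
    # at work: copy newly downloaded packages onto the flash drive
    rsync -av /var/cache/pacman/pkg/ /mnt/flash/pkg/
    # at home: drop them into the local cache, refresh the db, then install from cache
    rsync -av /mnt/flash/pkg/ /var/cache/pacman/pkg/
    pacman -Sy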
    P.S.: Recently I decided to make a complete mirror of all i686 packages from archlinux.org with rsync. Not for myself but for friends who wanted to install Linux. Anyway, I don't pay for every megabyte at work. However, it took almost a week to download 5.1 GB of packages.
    IMHO, for most local mirror setups rsync is overkill. How many users are there that use more than 30% of the packages from the repos? So why make a full mirror with rsync when you can cache only the installed packages?
