Absbb: Archlinux abs build bot

I have been writing a script so we could have a build bot like the SliTaz project has. The idea is to have a build-bot website like this one: http://bb.slitaz.org/.
Archlinux build bot tool
Usage: absbb [command] pkg.list
Commands:
usage                 Print this short usage and command list.
extra-i686-build      Build chroot extra i686.
extra-x86_64-build    Build chroot extra x86_64.
testing-i686-build    Build chroot testing i686.
testing-x86_64-build  Build chroot testing x86_64.
staging-i686-build    Build chroot staging i686.
staging-x86_64-build  Build chroot staging x86_64.
multilib-build        Build chroot multilib (x86_64 only).
To get started, you just have to run this as root:
abs
absbb extra-i686-build pkg.list
I recommend a pkg.list file like this (I used this list for my testing):
bash
zlib
linux-firmware
The pkg.list is a list of PKGBUILDs you want to rebuild in the chroot. They are copied from the /var/abs ($ABSROOT) folder to the /work/pkgbuild folder in the chroot. I use xyne's makerepo for building the list of packages, since it allows users to have their own repo of packages.
I bind-mount four folders into the chroot's /work folder:
$HOME/packages into $HOME/chroots/extra-i686/root/work/pkgdest
$HOME/repo into $HOME/chroots/extra-i686/root/work/repo
$HOME/logs into $HOME/chroots/extra-i686/root/work/logs
$HOME/sources into $HOME/chroots/extra-i686/root/work/srcdest
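The four bind mounts above can be sketched as a small shell script (a sketch only: the helper names are made up, the paths are the ones from this post, and mount --bind itself needs root):

```shell
#!/bin/sh
# Sketch of the four bind mounts described above. Paths are the ones
# from the post; the helper names (work_dest, bind_work) are made up.
CHROOT="$HOME/chroots/extra-i686/root"

work_dest() {   # work_dest NAME -> print the destination inside the chroot
    printf '%s/work/%s\n' "$CHROOT" "$1"
}

bind_work() {   # bind_work SRC NAME -- run as root; mount --bind needs it
    mkdir -p "$1" "$(work_dest "$2")"
    mount --bind "$1" "$(work_dest "$2")"
}

# As root:
#   bind_work "$HOME/packages" pkgdest
#   bind_work "$HOME/repo"     repo
#   bind_work "$HOME/logs"     logs
#   bind_work "$HOME/sources"  srcdest
```

Bind mounts (rather than copies) mean the built packages, logs, and sources land directly in the host folders.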
The repo folder is purged every time you call absbb. This is because makerepo doesn't like it when the packages and database are already there. The packages are still safe as long as you are not rebuilding the same list over again. I added -f to the MKARGS variable in the absbb.conf file so things go smoothly.
main testing branch:
http://github.com/godane/devtools-pkgbuild/tree/archbb
alpha1 release:
http://github.com/godane/devtools-pkgbu … sbb-alpha1
I hope this helps.
PS: I'm very bad at documentation. I'm sorry if you have trouble understanding me.

godane wrote:@arch0r
Depends are installed into the chroot because absbb runs mkarchroot, and everything is built in $HOME/chroots/repo-arch/root.
OK, that's nice. So do I have to run it as root, or is it also possible to build packages as a normal user?

Similar Messages

  • ABS build of pavucontrol crashes on startup

    I'm trying to build pavucontrol from ABS on x86_64. Compiling works fine, but the resulting binary crashes immediately at startup.
    Given that the pavucontrol binaries in the repo are from November 2013, could someone please confirm that a rebuilt pavucontrol works for them on 64 bit?
    Here is the gdb output. The crash is apparently in libgtkmm.
    $ coredumpctl gdb 29290
    PID: 29290 (pavucontrol)
    UID: 1000 (martin)
    GID: 100 (users)
    Signal: 11 (SEGV)
    Timestamp: So 2015-03-15 15:05:39 CET (53s ago)
    Command Line: pavucontrol
    Executable: /usr/bin/pavucontrol
    Control Group: /user.slice/user-1000.slice/session-c2.scope
    Unit: session-c2.scope
    Slice: user-1000.slice
    Session: c2
    Owner UID: 1000 (martin)
    Boot ID: 7aa16631aaf14c22a2e636ff9ac86e56
    Machine ID: 59d42bfa09814bdb904e1e8b38fff9bf
    Hostname: genesis
    Coredump: /var/lib/systemd/coredump/core.pavucontrol.1000.7aa16631aaf14c22a2e636ff9ac86e56.29290.1426428339000000.lz4
    Message: Process 29290 (pavucontrol) of user 1000 dumped core.
    GNU gdb (GDB) 7.9
    Copyright (C) 2015 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law. Type "show copying"
    and "show warranty" for details.
    This GDB was configured as "x86_64-unknown-linux-gnu".
    Type "show configuration" for configuration details.
    For bug reporting instructions, please see:
    <http://www.gnu.org/software/gdb/bugs/>.
    Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.
    For help, type "help".
    Type "apropos word" to search for commands related to "word"...
    Reading symbols from /usr/bin/pavucontrol...(no debugging symbols found)...done.
    [New LWP 29290]
    [New LWP 29291]
    [New LWP 29292]
    warning: Could not load shared library symbols for linux-vdso.so.1.
    Do you need "set solib-search-path" or "set sysroot"?
    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/usr/lib/libthread_db.so.1".
    Core was generated by `pavucontrol'.
    Program terminated with signal SIGSEGV, Segmentation fault.
    #0 0x00007f12011854c6 in Gtk::Label::set_markup(Glib::ustring const&) ()
    from /usr/lib/libgtkmm-3.0.so.1
    (gdb)

    Hi Morn,
    I just tried to build pavucontrol using ABS and I can confirm that it compiles fine, but the resulting executable crashes.
    Here is the coredump:
    $ LC_ALL=C coredumpctl info 30842
    PID: 30842 (pavucontrol)
    UID: 1000 (daddona)
    GID: 10 (wheel)
    Signal: 11 (SEGV)
    Timestamp: Sun 2015-03-15 15:43:25 CET (4min 12s ago)
    Command Line: pavucontrol
    Executable: /usr/bin/pavucontrol
    Control Group: /user.slice/user-1000.slice/session-c1.scope
    Unit: session-c1.scope
    Slice: user-1000.slice
    Session: c1
    Owner UID: 1000 (daddona)
    Boot ID: 60b4bc9966c74cb4a28c82ae4ec79a02
    Machine ID: 6ec27ad1fad0423f909aeb6eb0fc9a12
    Hostname: 530U3C
    Coredump: /var/lib/systemd/coredump/core.pavucontrol.1000.60b4bc9966c74cb4a28c82ae4ec79a02.30842.142
    Message: Process 30842 (pavucontrol) of user 1000 dumped core.
    [daddona@530U3C ~]$ LC_ALL=C coredumpctl gdb 30842
    PID: 30842 (pavucontrol)
    UID: 1000 (daddona)
    GID: 10 (wheel)
    Signal: 11 (SEGV)
    Timestamp: Sun 2015-03-15 15:43:25 CET (4min 20s ago)
    Command Line: pavucontrol
    Executable: /usr/bin/pavucontrol
    Control Group: /user.slice/user-1000.slice/session-c1.scope
    Unit: session-c1.scope
    Slice: user-1000.slice
    Session: c1
    Owner UID: 1000 (daddona)
    Boot ID: 60b4bc9966c74cb4a28c82ae4ec79a02
    Machine ID: 6ec27ad1fad0423f909aeb6eb0fc9a12
    Hostname: 530U3C
    Coredump: /var/lib/systemd/coredump/core.pavucontrol.1000.60b4bc9966c74cb4a28c82ae4ec79a02.30842.1426430605000000.lz4
    Message: Process 30842 (pavucontrol) of user 1000 dumped core.
    GNU gdb (GDB) 7.9
    Copyright (C) 2015 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law. Type "show copying"
    and "show warranty" for details.
    This GDB was configured as "x86_64-unknown-linux-gnu".
    Type "show configuration" for configuration details.
    For bug reporting instructions, please see:
    <http://www.gnu.org/software/gdb/bugs/>.
    Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.
    For help, type "help".
    Type "apropos word" to search for commands related to "word"...
    Reading symbols from /usr/bin/pavucontrol...(no debugging symbols found)...done.
    [New LWP 30842]
    Core was generated by `pavucontrol'.
    Program terminated with signal SIGSEGV, Segmentation fault.
    #0 0x00007ff6205aef3f in ?? ()
    (gdb) quit

  • [SOLVED] ABS - building autoconf - Can't locate unicore/PVA.pl

    Tried to build autoconf in ABS and got...
    [abs@slcyoshimitsu autoconf]$ makepkg
    ==> Making package: autoconf 2.69-1 (Tue Jul 10 12:28:09 MDT 2012)
    ==> Checking runtime dependencies...
    ==> Checking buildtime dependencies...
    ==> Retrieving Sources...
    -> Found autoconf-2.69.tar.xz
    -> Found autoconf-2.69.tar.xz.sig
    ==> Validating source files with md5sums...
    autoconf-2.69.tar.xz ... Passed
    autoconf-2.69.tar.xz.sig ... Passed
    ==> Verifying source file signatures with gpg...
    autoconf-2.69.tar.xz ... FAILED (unknown public key A7A16B4A2527436A)
    ==> WARNING: Warnings have occurred while verifying the signatures.
    Please make sure you really trust them.
    ==> Extracting Sources...
    -> Extracting autoconf-2.69.tar.xz with bsdtar
    ==> Removing existing pkg/ directory...
    ==> Starting build()...
    checking for a BSD-compatible install... /usr/bin/install -c
    checking whether build environment is sane... yes
    checking for a thread-safe mkdir -p... /bin/mkdir -p
    checking for gawk... gawk
    checking whether make sets $(MAKE)... yes
    checking build system type... x86_64-unknown-linux-gnu
    checking host system type... x86_64-unknown-linux-gnu
    configure: autobuild project... GNU Autoconf
    configure: autobuild revision... 2.69
    configure: autobuild hostname... slcyoshimitsu
    configure: autobuild timestamp... 20120710T182810Z
    checking whether /bin/sh -n is known to work... yes
    checking for characters that cannot appear in file names... none
    checking whether directories can have trailing spaces... yes
    checking for expr... /usr/bin/expr
    checking for GNU M4 that supports accurate traces... /usr/bin/m4
    checking whether /usr/bin/m4 accepts --gnu... yes
    checking how m4 supports trace files... --debugfile
    checking for perl... /usr/bin/perl
    checking whether /usr/bin/perl Fcntl::flock is implemented... yes
    checking for emacs... emacs
    checking whether emacs is sufficiently recent... yes
    checking for emacs... emacs
    checking where .elc files should go... ${datarootdir}/emacs/site-lisp
    checking for grep that handles long lines and -e... /usr/bin/grep
    checking for egrep... /usr/bin/grep -E
    checking for a sed that does not truncate output... /bin/sed
    checking whether make is case sensitive... yes
    configure: creating ./config.status
    config.status: creating tests/Makefile
    config.status: creating tests/atlocal
    config.status: creating man/Makefile
    config.status: creating lib/emacs/Makefile
    config.status: creating Makefile
    config.status: creating doc/Makefile
    config.status: creating lib/Makefile
    config.status: creating lib/Autom4te/Makefile
    config.status: creating lib/autoscan/Makefile
    config.status: creating lib/m4sugar/Makefile
    config.status: creating lib/autoconf/Makefile
    config.status: creating lib/autotest/Makefile
    config.status: creating bin/Makefile
    config.status: executing tests/atconfig commands
    make all-recursive
    make[1]: Entering directory `/home/abs/core/autoconf/src/autoconf-2.69'
    Making all in bin
    make[2]: Entering directory `/home/abs/core/autoconf/src/autoconf-2.69/bin'
    rm -f autom4te autom4te.tmp
    srcdir=''; \
    test -f ./autom4te.in || srcdir=./; \
    sed -e 's|@SHELL[@]|/bin/sh|g' -e 's|@PERL[@]|/usr/bin/perl|g' -e 's|@PERL_FLOCK[@]|yes|g' -e 's|@bindir[@]|/usr/bin|g' -e 's|@pkgdatadir[@]|/usr/share/autoconf|g' -e 's|@prefix[@]|/usr|g' -e 's|@autoconf-name[@]|'`echo autoconf | sed 's,x,x,'`'|g' -e 's|@autoheader-name[@]|'`echo autoheader | sed 's,x,x,'`'|g' -e 's|@autom4te-name[@]|'`echo autom4te | sed 's,x,x,'`'|g' -e 's|@M4[@]|/usr/bin/m4|g' -e 's|@M4_DEBUGFILE[@]|--debugfile|g' -e 's|@M4_GNU[@]|--gnu|g' -e 's|@AWK[@]|gawk|g' -e 's|@RELEASE_YEAR[@]|'`sed 's/^\([0-9][0-9][0-9][0-9]\).*/\1/;q' ../ChangeLog`'|g' -e 's|@VERSION[@]|2.69|g' -e 's|@PACKAGE_NAME[@]|GNU Autoconf|g' -e 's|@configure_input[@]|Generated from autom4te.in; do not edit by hand.|g' ${srcdir}autom4te.in >autom4te.tmp
    chmod +x autom4te.tmp
    chmod a-w autom4te.tmp
    mv autom4te.tmp autom4te
    autom4te_perllibdir='..'/lib AUTOM4TE_CFG='../lib/autom4te.cfg' ../bin/autom4te -B '..'/lib -B '..'/lib --language M4sh --cache '' --melt ./autoconf.as -o autoconf.in
    Can't locate unicore/PVA.pl in @INC (@INC contains: ../lib /usr/lib/perl5/site_perl /usr/share/perl5/site_perl /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5/core_perl /usr/share/perl5/core_perl .) at /usr/share/perl5/site_perl/utf8_heavy.pl line 97.
    BEGIN failed--compilation aborted at /usr/share/perl5/core_perl/constant.pm line 45.
    Compilation failed in require at /usr/lib/perl5/core_perl/Data/Dumper.pm line 275.
    BEGIN failed--compilation aborted at /usr/lib/perl5/core_perl/Data/Dumper.pm line 275.
    Compilation failed in require at ../lib/Autom4te/C4che.pm line 33.
    BEGIN failed--compilation aborted at ../lib/Autom4te/C4che.pm line 33.
    Compilation failed in require at ../bin/autom4te line 37.
    BEGIN failed--compilation aborted at ../bin/autom4te line 37.
    make[2]: *** [autoconf.in] Error 2
    make[2]: Leaving directory `/home/abs/core/autoconf/src/autoconf-2.69/bin'
    make[1]: *** [all-recursive] Error 1
    make[1]: Leaving directory `/home/abs/core/autoconf/src/autoconf-2.69'
    make: *** [all] Error 2
    ==> ERROR: A failure occurred in build().
    Aborting...
    Not sure what I botched up so badly to get into this state.
    Last edited by Ishpeck (2012-07-13 15:57:38)

    I ended up doing this...
    cd /usr/share/
    rm -rf perl5
    pacman -S perl
    Everything works now.  Somehow, undesirable stuff ended up polluting my perl files.  All is right once again.

  • How to rebuild entire system using ABS

    ArchLinux ABS Wiki says:
    The Arch Build System (ABS for short) is used to:
        * Rebuild your entire system using your compiler flags, "a la FreeBSD"
    But how? I was looking for a wiki, an article, or an example - no results. Please explain: how do I rebuild the entire system using ABS?
    Thanks in advance for your help!
    Last edited by rics (2008-08-21 13:25:03)

    Allan wrote:makeworld script in the abs package
    So easy?! Cool!
    Thanks! My problem is solved
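    For anyone curious what the makeworld approach boils down to, here is a sketch (this is not the actual makeworld script; the helper names and makepkg flags are illustrative) of walking an ABS tree and rebuilding every package in it:

```shell
#!/bin/sh
# Sketch: walk an ABS-style tree and rebuild every directory that
# contains a PKGBUILD. Helper names and makepkg flags are illustrative;
# the real makeworld script in the abs package handles this properly.
pkgbuild_dirs() {   # pkgbuild_dirs TREE -> print dirs holding a PKGBUILD
    find "$1" -name PKGBUILD 2>/dev/null | sed 's|/PKGBUILD$||' | sort
}

rebuild_tree() {    # rebuild_tree TREE -- needs a full build environment
    for dir in $(pkgbuild_dirs "$1"); do
        ( cd "$dir" && makepkg -s --noconfirm ) || echo "failed: $dir"
    done
}

# Example: rebuild_tree /var/abs/core
```

    Build order matters for a full system rebuild, which is one reason to prefer the real makeworld script over a naive loop like this.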

  • Build option in Pacman

    Hi there,
    Sorry if this is a double post.
    I know of srcpac, but has it been discussed whether pacman should have an option to compile packages? It doesn't seem like a particularly big feature; I would even be willing to make it myself. If this has been discussed, what were the main arguments pro/con?
    Jacob

    Trilby wrote:As far as I know (which may not be all that far in that case - so correct me if I'm wrong) pacman doesn't have a currently built in mechanism to get PKGBUILDS.  ..... I'm quite happy to have this package, and the abs tree - but I'm also happy that I chose to install it as a separate tool.  I can't see what possible advantage merging the two packages (pacman + abs) would serve.  .....
    I don't see any advantage of adding a -B option to pacman.  The advantage of separating the "build from source" functionality is (imo) greatly increased flexibility.  As others have noticed there is not much reason to just build packages from source with custom CFLAGS.  The real motivation for using the source is to control dependencies and options, and possibly to integrate multiple upstream sources (abs, aur, ...).
    For example I build everything from source using my own "build" command.  It integrates abs and aur (in fact, it optionally integrates the unity, antergos, and manjaro source repos). So for example I can "build firefox-kde-opensuse" and build an aur package.  Or "build unity" and build the unity meta-package.  All "build" knows is that it should follow a search path in a collection of "upstream buildscripts" until it finds the desired package.  A "pacman -B" option is probably not going to be that simple or allow for that integration.  ("build" also needs access to a database of packages so that it knows the "pkgbase" corresponding to any "pkgname").
    To further the example:  the function for fetching the pkgbuild is
    FROMDIR=""
    function from() { # dir -- copy dir to Buildfolder and set FROMDIR
        cp -r --no-clobber "$1" "${Buildfolder}"
        FROMDIR=$1
    }
    # get the Pkgbase directory into $Buildfolder and initialize the dirty Pkgbase_build
    function get_pkgbuild() {
        cd "${Buildfolder}"
        [ -d "${Pkgbase}_build" ] && rm -rf "${Pkgbase}_build"
        if [[ $REBUILD == 1 && -d $MASTER/$Pkgbase ]]; then
            from "$MASTER/$Pkgbase"
            cp -r "${Buildfolder}/$Pkgbase" "${Buildfolder}/${Pkgbase}_build"
            return 0
        fi
        for repo in ${UPSTREAM[*]}; do
            if [ -d $repo ]; then
                cd $repo
                for folder in */; do
                    if [[ $folder == $Pkgbase/ ]]; then
                        from "${repo}/${Pkgbase}"
                        cp -r "${Buildfolder}/$Pkgbase" "${Buildfolder}/${Pkgbase}_build"
                        return 0
                    fi
                done
            fi
        done
        return 1
    }
    Just adding directories to the UPSTREAM array gives access to abs/core, abs/extra, abs/community, aur, unity, ... whatever.
    The help message for "build" may give an idea of where you may end up by following this kind of factoring that I'm recommending:
    └─> build --help
    A utility for Archlinux to build packages from source code.
    Syntax: build options packages
    Build the packages named on the command line (and packages listed in the file
    using the -f option). Packages are built in the order listed. Packages named
    on the command line are built after any given in a file.
    Use control-c to interrupt the build then "build --resume" when ready to continue.
    Options:
    --rebuild Copy pkgbuilds from the master repository, ignore upstream
    -c, --chroot Build packages in $MYROOT using makechrootpkg, initializing
    the root with packages from the mylinux current repo (or from
    Archlinux upstream if the package is not in the mylinux repo).
    -f file Read package names from file. Only one file can be used.
    -m, --merge Interactively merge in changes master repository.
    -e, --edit Edit each PKGBUILD before building the package.
    -k, --keep-going Do not stop to show namcap errors.
    -L, --log Enable makepkg build logging (do not use for interactive builds).
    -r, --resume Resume the previously interrupted build in this directory.
    -s, --save Save the buildscript for each package (see build.conf).
    -S, --src Also save the unpacked and patched source code (./src)
    -h, --help Print this message.
    Examples:
    Compile the kernel, headers, and docs:
    build -mes linux
    Upgrade all packages except those in groups that are in FREEZE:
    pacman -Qq | versions -qu | freeze | order | unify >upgrades
    build -msLf upgrades
    Best practice:
    build is intentionally dumb and builds just what you ask for in the way you specify.
    * always use the -s option to accumulate pkgbuilds into a local master collection
    * always use the -m option to merge back into upstream any local changes
    * always use the order command to get packages in correct build order
    See also the help messages for:
    broken freeze group need order packages pkgtree release requires
    unify use versions update-package-database
    I know that that is way too much info but maybe the details of an example can stimulate folks who think similarly.
    Last edited by sitquietly (2013-08-05 23:59:46)

  • Bots not visible in Xcode

    Hi forum users,
    Today I wanted to test Xcode bots and their integration into the UI. Unfortunately, the bots are not visible in Xcode if I set "view-only for" to anything other than "anyone". I only want the stats to be visible to registered people.
    This bug is really annoying, as anyone can access my Xcode build bots via "/xcode" over the web interface.
    I really hope that someone can give me a tip on how to secure the web interface from unregistered users without losing the bot controls in my Xcode.
    Thanks in advance,
    Stefan

    Looks like the bot needs to be visible to everyone once for Xcode to pick it up.
    Xcode could grab the bots when view access was available to all. After it was shown the first time in Xcode, I set the view access back to registered users only and everything works as it should.
    Pretty dirty workaround, but I can live with it till the next update.

  • Abs cleans out /var/abs

    Question: Does abs now clean out all non-recognized directories in /var/abs intentionally? I used to keep everything from the AUR in /var/abs/aur, including some of my own stuff I *was* working on, but after running the newly upgraded abs I find everything in /var/abs gone except for, core,  extra,  local,  README,  testing, and  unstable -- my AUR directory is gone, which is at least moderately upsetting... :S
    [edit]
    /etc/abs = /var/abs in first sentence.
    Last edited by B-Con (2008-08-17 20:00:02)

    Allan wrote:@Misfit138: I think you missed the point. If you named your separate build directory /var/abs/build, then it was deleted...
    You're absolutely right, I missed that, sorry. *Sigh* I need to get more sleep.

  • [SOLVED] (PKGBUILD) problem with makepkg for go-openoffice

    [SOLVED]: In the PKGBUILD there's a function before build(), named mksource(), and a comment "source PKGBUILD && mksource".
    Just run it first, then copy "/tmp/go-openoffice-source/go-openoffice-ooo320-m12.tar.xz" into the build directory and run "makepkg -s" as usual.
    Thanks :D
    Hi, I tried to makepkg extra/go-openoffice (from abs) using "makepkg -s" on my x86_64 laptop, and I get the following error (before the build begins):
    ==> ERROR: go-openoffice-ooo320-m12.tar.xz was not found in the build directory and is not a URL.
    I understand that this comes from
    pkgname=go-openoffice
    _ootag=ooo320-m12 # m12 = OOo 3.2.0 RC5
    source=(${_mirror}/${_go_tree}/ooo-build-${_GOver}.tar.gz
        ArchLinux.patch
        ${pkgname}-${_ootag}.tar.xz
    But I don't know where to find this "go-openoffice-ooo320-m12.tar.xz". Can you please help me figure out where to find it? Thank you!
    Following is the PKGBUILD for extra/go-openoffice (from abs)
    # $Id: PKGBUILD 74152 2010-03-30 17:02:00Z andyrtr $
    # Maintainer: AndyRTR <[email protected]>
    pkgname=go-openoffice
    _GOver=3.2.0.9 # = post OOo 3.2.0 final bugfix
    pkgver=${_GOver}
    pkgrel=1
    pkgdesc="OpenOffice.org - go-oo.org enhanced version of SUN's office suite"
    arch=('i686' 'x86_64')
    _go_tree="OOO320"
    _ootag=ooo320-m12 # m12 = OOo 3.2.0 RC5
    license=('LGPL3')
    url="http://go-oo.org/"
    install=${pkgname}.install
    depends=("curl>=7.19.6" "hunspell>=1.2.8" "python>=2.6.4" 'libwpd' 'redland>=1.0.10'
    'libxaw' "neon>=0.28.6" "icu>=4.2.1" 'hsqldb-java' 'libxslt' 'libxtst' 'lpsolve'
    'beanshell' 'saxon' 'vigra' 'hyphen' 'libmspack' 'libldap' 'gtk2' 'lucene'
    'hicolor-icon-theme' 'shared-mime-info' 'desktop-file-utils') # 'libmythes' 'libgraphite'
    optdepends=('java-runtime: adds java support'
    'libcups: adds printing support'
    'gconf: adds additional gnome support'
    'nss: adds support for signed files/macros'
    'pstoedit: translates PostScript and PDF graphics into other vector formats'
    'poppler: for the pdfimport extension'
    'mesa: for the OGLTrans extension'
    'mono: allows UNO automation with Mono'
    'gstreamer0.10-base: + some gstr-plugins to support multimedia content, e.g. in impress'
    'kdelibs: for kde integration')
    makedepends=('automake' 'autoconf' 'wget' 'bison' 'findutils' 'flex' 'gawk' 'gcc-libs' 'libart-lgpl'
    'pam' 'sane' 'perl-archive-zip' 'pkgconfig' 'unzip' 'zip' "xulrunner>=1.9.2-4" 'apache-ant' 'cairo'
    'gperf' 'libcups' 'pstoedit' 'gconf' "openjdk6>=1.5.2" 'unixodbc' 'mesa>=7.5' 'poppler>=0.10.7'
    'gstreamer0.10-base>=0.10.26' 'mono>=2.6.1' 'kdelibs>=4.4.0' 'libjpeg' 'boost' 'git' 'rsync')
    backup=(usr/lib/go-openoffice/program/sofficerc)
    provides=('openoffice-base')
    conflicts=('openoffice-base')
    _mirror="http://download.go-oo.org/"
    source=(${_mirror}/${_go_tree}/ooo-build-${_GOver}.tar.gz
    ArchLinux.patch
    ${pkgname}-${_ootag}.tar.xz
    http://download.go-oo.org//DEV300/ooo-cli-prebuilt-3.2.tar.bz2
    http://cairographics.org/releases//cairo-1.4.10.tar.gz
    http://download.go-oo.org//SRC680/mdbtools-0.6pre1.tar.gz
    http://download.go-oo.org//SRC680/extras-3.tar.bz2
    http://download.go-oo.org//SRC680/biblio.tar.bz2
    http://tools.openoffice.org/unowinreg_prebuild/680//unowinreg.dll
    http://download.go-oo.org//DEV300/scsolver.2008-10-30.tar.bz2
    http://download.go-oo.org//libwpd/libwpd-0.8.14.tar.gz
    http://download.go-oo.org//SRC680/libwps-0.1.2.tar.gz
    http://download.go-oo.org//SRC680/libwpg-0.1.3.tar.gz
    http://download.go-oo.org//DEV300/ooo_oxygen_images-2009-06-17.tar.gz
    http://download.go-oo.org/src//seamonkey-1.1.14.source.tar.gz
    http://archive.apache.org/dist/ant/binaries/apache-ant-1.7.0-bin.tar.gz
    buildfix_64bit_system_libjpeg.diff
    system-redland.patch
    localize-ooo.diff)
    #options=('!distcc' '!ccache' '!makeflags')
    options=('!makeflags')
    noextract=(ooo-cli-prebuilt-3.2.tar.bz2 cairo-1.4.10.tar.gz mdbtools-0.6pre1.tar.gz extras-3.tar.bz2 biblio.tar.bz2 unowinreg.dll
    scsolver.2008-10-30.tar.bz2 libwpd-0.8.14.tar.gz libwps-0.1.2.tar.gz libwpg-0.1.3.tar.gz ooo_oxygen_images-2009-06-17.tar.gz)
    # source PKGBUILD && mksource
    mksource() {
    mkdir /tmp/$pkgname-source
    pushd /tmp/$pkgname-source
    wget ${_mirror}/${_go_tree}/ooo-build-${_GOver}.tar.gz
    tar -xvf ooo-build-${_GOver}.tar.gz
    cd ooo-build-${_GOver}
    ./configure --quiet --with-distro=ArchLinux
    ./download --all
    pushd src; tar -cvJf ../../${pkgname}-${_ootag}.tar.xz clone; popd
    popd
    }
    build() {
    unset J2REDIR; unset J2SDKDIR; unset JAVA_HOME; unset CLASSPATH
    [ -z "${JAVA_HOME}" ] && . /etc/profile.d/openjdk6.sh
    [ -z "${MOZ_PLUGIN_PATH}" ] && . /etc/profile.d/mozilla-common.sh
    [ -z "${ANT_HOME}" ] && . /etc/profile.d/apache-ant.sh
    cd ${srcdir}/ooo-build-${_GOver}
    # our ArchLinux distribution patch until we go upstream
    patch -Np0 -i ${srcdir}/ArchLinux.patch || return 1
    # buildfix for broken language settings in build
    patch -Np0 -i ${srcdir}/localize-ooo.diff || return 1
    # fix bugs with recent system redland
    patch -Np1 -i ${srcdir}/system-redland.patch || return 1
    # hotfixes not yet upstream
    # cp ${srcdir}/*.diff ${srcdir}/ooo-build-${_GOver}/patches/hotfixes/
    cp ${srcdir}/buildfix_64bit_system_libjpeg.diff ${srcdir}/ooo-build-${_GOver}/patches/hotfixes/
    # export C(XX)FLAGS
    # http://www.openoffice.org/issues/show_bug.cgi?id=103205
    unset CFLAGS
    unset CXXFLAGS
    # export ARCH_FLAGS="$CFLAGS"
    if [ "$CARCH" = "x86_64" ]; then
    EXTRAOPTS="--without-stlport"
    else EXTRAOPTS="--with-stlport"
    fi
    # autoreconf
    ./configure --with-distro=ArchLinux \
    --with-build-version="${_GOver} ArchLinux build-${pkgrel} (${_ootag})"\
    --with-srcdir=${srcdir} \
    --with-max-jobs=${MAKEFLAGS/-j/} \
    --with-installed-ooo-dirname="${pkgname}" \
    --prefix=/usr --exec-prefix=/usr --sysconfdir=/etc \
    --with-docdir=/usr/share/doc/packages/"${pkgname}" \
    --mandir=/usr/share/man \
    --with-lang="" \
    --with-dict=ALL\
    --with-binsuffix=no \
    --disable-ldap \
    --enable-cairo\
    --disable-kde\
    --enable-kde4\
    --enable-lockdown\
    --with-system-boost\
    --with-system-cairo\
    --enable-crashdump\
    --without-gpc\
    --enable-opengl \
    --enable-minimizer \
    --enable-presenter-console \
    --enable-pdfimport \
    --enable-wiki-publisher \
    --enable-ogltrans \
    --with-ant-home="/usr/share/java/apache-ant"\
    --with-system-saxon\
    --with-saxon-jar=/usr/share/java/saxon/saxon9he.jar\
    --with-system-lucene\
    --with-lucene-core-jar=/usr/share/java/lucene-core.jar\
    --with-lucene-analyzers-jar=/usr/share/java/lucene-analyzers.jar\
    --with-system-beanshell\
    --with-system-vigra\
    --with-system-altlinuxhyph\
    --with-system-lpsolve\
    $EXTRAOPTS || return 1
    # --with-system-mythes\
    # --with-system-graphite\
    # --with-tag=${_ootag}
    # --enable-report-builder \
    # --with-additional-sections="OOXMLExport"
    unset MAKEFLAGS
    # ./download
    LD_PRELOAD="" make || return 1
    }
    package() {
    cd ${srcdir}/ooo-build-${_GOver}
    LD_PRELOAD="" make DESTDIR=${pkgdir} install || return 1
    # install all built dictionaries from source tree
    pushd ${srcdir}/ooo-build-${_GOver}/build/${_ootag}/dictionaries/unxlng?6.pro/bin/
    for i in `ls -1 dict-??.oxt`; do
    install -D -m644 $i ${pkgdir}/usr/lib/"${pkgname}"/share/extension/install/$i || return 1
    done
    popd
    # install all other built extensions
    pushd ${srcdir}/ooo-build-${_GOver}/build/${_ootag}/solver/320/unxlng?6.pro/bin/
    install -m644 pdfimport/pdfimport.oxt ${pkgdir}/usr/lib/"${pkgname}"/share/extension/install/pdfimport.oxt || return 1
    install -m644 swext/wiki-publisher.oxt ${pkgdir}/usr/lib/"${pkgname}"/share/extension/install/wiki-publisher.oxt || return 1
    install -m644 minimizer/sun-presentation-minimizer.oxt ${pkgdir}/usr/lib/"${pkgname}"/share/extension/install/sun-presentation-minimizer.oxt || return 1
    install -m644 presenter/presenter-screen.oxt ${pkgdir}/usr/lib/"${pkgname}"/share/extension/install/presenter-screen.oxt || return 1
    popd
    # fix unopkg call for mktemp, #15410
    sed -i "s:\/bin\/mktemp:\/usr\/bin\/mktemp:" ${pkgdir}/usr/lib/go-openoffice/program/unopkg || return 1
    #fix http://bugs.archlinux.org/task/17656
    find ${pkgdir} -perm 444 -exec ls -lh {} \;
    find ${pkgdir} -perm 444 -exec chmod 644 {} \;
    find ${pkgdir} -perm 555 -exec ls -lh {} \;
    find ${pkgdir} -perm 555 -exec chmod 755 {} \;
    }
    Last edited by timefairy (2010-04-08 10:41:35)

    jryarch wrote:Thanks for your response, but how do I actually run it automated? I'm doing it manually right now, but I suppose that's not the idea.
    I suppose the function can be run as $1, I don't know (I mean `bash PKGBUILD mksource`). Nevertheless, you know what to do now.
    Last edited by flamelab (2010-04-07 11:44:58)
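    The mksource flow from the [SOLVED] note above can be wrapped up like this (a sketch only: the wrapper name is made up, and pkgname/_ootag come from the PKGBUILD once it is sourced):

```shell
#!/bin/sh
# Sketch of the mksource flow described in the [SOLVED] note; run from
# the directory containing the PKGBUILD. run_mksource is a made-up
# wrapper name; pkgname and _ootag are defined by the sourced PKGBUILD.
run_mksource() {
    . ./PKGBUILD && mksource    # defines mksource, then runs it
    cp "/tmp/${pkgname}-source/${pkgname}-${_ootag}.tar.xz" .
    makepkg -s
}
```

    Sourcing the PKGBUILD brings mksource() and its variables into the current shell, which is what the "source PKGBUILD && mksource" comment relies on.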

  • Can't load vboxdrv driver on lts kernel

    I am having trouble starting a vbox machine on the 3.14.28-1-lts kernel after a recent update.
    I've got virtualbox-host-modules-lts and checked https://wiki.archlinux.org/index.php/VirtualBox, so everything looks good in general...
    The error I get is this:
    Kernel driver not installed (rc=-1908)
    The VirtualBox Linux kernel driver (vboxdrv) is either not loaded or there is a permission problem with /dev/vboxdrv. Please reinstall the kernel module by executing
    'pacman -S virtualbox-host-modules'
    as root. If you don't use our stock kernel, install virtualbox-host-dkms and execute dkms autoinstall.
    I tried a few things, but the end result is that I can't load vboxdrv. Here is some output:
    [root@fatal amp]# dkms install vboxhost/$(pacman -Q virtualbox|awk '{print $2}'|sed 's/\-.\+//') -k $(uname -rm|sed 's/\ /\//')
    Kernel preparation unnecessary for this kernel. Skipping...
    Building module:
    cleaning build area....
    make KERNELRELEASE=3.14.28-1-lts -C /usr/lib/modules/3.14.28-1-lts/build M=/var/lib/dkms/vboxhost/4.3.20/build..........
    cleaning build area....
    Kernel cleanup unnecessary for this kernel. Skipping...
    DKMS: build completed.
    vboxdrv.ko:
    Running module version sanity check.
    - Original module
    - No original module exists within this kernel
    - Installation
    - Installing to /usr/lib/modules/3.14.28-1-lts/kernel/misc/
    vboxnetflt.ko:
    Running module version sanity check.
    - Original module
    - No original module exists within this kernel
    - Installation
    - Installing to /usr/lib/modules/3.14.28-1-lts/kernel/misc/
    vboxnetadp.ko:
    Running module version sanity check.
    - Original module
    - No original module exists within this kernel
    - Installation
    - Installing to /usr/lib/modules/3.14.28-1-lts/kernel/misc/
    vboxpci.ko:
    Running module version sanity check.
    - Original module
    - No original module exists within this kernel
    - Installation
    - Installing to /usr/lib/modules/3.14.28-1-lts/kernel/misc/
    depmod....
    DKMS: install completed.
    [root@fatal amp]# vboxreload
    Unloading modules:
    DKMS autoinstall
    Loading modules: modprobe: ERROR: could not insert 'vboxnetadp': Invalid argument
    modprobe: ERROR: could not insert 'vboxnetflt': Invalid argument
    modprobe: ERROR: could not insert 'vboxpci': Invalid argument
    modprobe: ERROR: could not insert 'vboxdrv': Invalid argument
    [root@fatal amp]# modprobe -vvv vboxdrv
    modprobe: INFO: custom logging function 0x40a1b0 registered
    insmod /lib/modules/3.14.28-1-lts/extramodules/vboxdrv.ko.gz
    modprobe: INFO: Failed to insert module '/lib/modules/3.14.28-1-lts/extramodules/vboxdrv.ko.gz': Invalid argument
    modprobe: ERROR: could not insert 'vboxdrv': Invalid argument
    modprobe: INFO: context 0x10f5340 released
    Please advise.

    I was also having problems.  I took Rokixz's advice and built virtualbox-host-modules-lts from abs.  This fixed it for me.
    Problem was:
    # modprobe vboxdrv
    modprobe: ERROR: could not insert 'vboxdrv': Invalid argument
    Here's some info-
    $ uname -a
    Linux archbook 3.14.29-1-lts #1 SMP Fri Jan 16 20:57:17 CET 2015 x86_64 GNU/Linux
    $ pacman -Qi linux-lts
    Name : linux-lts
    Version : 3.14.29-1
    Description : The Linux-lts kernel and modules
    Architecture : x86_64
    URL : http://www.kernel.org/
    Licenses : GPL2
    Groups : None
    Provides : kernel26-lts=3.14.29
    Depends On : coreutils linux-firmware kmod mkinitcpio>=0.7
    Optional Deps : crda: to set the correct wireless channels of your country
    Required By : virtualbox-guest-modules-lts virtualbox-host-modules-lts
    Optional For : None
    Conflicts With : kernel26-lts
    Replaces : kernel26-lts
    Installed Size : 72.71 MiB
    Packager : Andreas Radke <[email protected]>
    Build Date : Fri 16 Jan 2015 01:59:49 PM CST
    Install Date : Wed 21 Jan 2015 01:56:46 PM CST
    Install Reason : Explicitly installed
    Install Script : Yes
    Validated By : Signature
    This is what my virtualbox-host-modules-lts looked like before my abs build.
    $ pacman -Qi virtualbox-host-modules-lts
    Name : virtualbox-host-modules-lts
    Version : 4.3.20-2
    Description : Host kernel modules for VirtualBox
    Architecture : x86_64
    URL : http://virtualbox.org
    Licenses : GPL
    Groups : None
    Provides : virtualbox-host-modules=4.3.20
    Depends On : linux-lts>=3.14 linux-lts<3.15
    Optional Deps : None
    Required By : virtualbox
    Optional For : None
    Conflicts With : virtualbox-modules-lts
    Replaces : virtualbox-modules-lts
    Installed Size : 167.00 KiB
    Packager : Bartłomiej Piotrowski <[email protected]>
    Build Date : Mon 08 Dec 2014 12:01:20 PM CST
    Install Date : Wed 21 Jan 2015 02:16:01 PM CST
    Install Reason : Explicitly installed
    Install Script : Yes
    Validated By : Signature
    This is what my virtualbox-host-modules-lts looked like after my abs build.
    $ pacman -Qi virtualbox-host-modules-lts
    Name : virtualbox-host-modules-lts
    Version : 4.3.20-2
    Description : Host kernel modules for VirtualBox
    Architecture : x86_64
    URL : http://virtualbox.org
    Licenses : GPL
    Groups : None
    Provides : virtualbox-host-modules=4.3.20
    Depends On : linux-lts>=3.14 linux-lts<3.15
    Optional Deps : None
    Required By : virtualbox
    Optional For : None
    Conflicts With : virtualbox-modules-lts
    Replaces : virtualbox-modules-lts
    Installed Size : 187.00 KiB
    Packager : Unknown Packager
    Build Date : Wed 21 Jan 2015 02:23:28 PM CST
    Install Date : Wed 21 Jan 2015 02:40:43 PM CST
    Install Reason : Explicitly installed
    Install Script : Yes
    Validated By : None
    What's strange is that the "Version" is the same - 4.3.20-2, but the "Installed Size" varies...  167 KiB vs 187 KiB.
    Also, the build date on the virtualbox-host-modules package (still the current one in the repos) was Dec 8, 2014...  The linux-lts package build date is Jan 16, 2015.
    -David
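    A plausible reading of the above: the repo package was built (Dec 8) against kernel 3.14.28, while the running kernel is 3.14.29, and a version mismatch makes modprobe fail with "Invalid argument"; rebuilding from abs against the current kernel fixes it. A minimal sketch of that check — the helper name and the hard-coded versions are illustrative only:

```shell
# check_vermagic: compare the running kernel release with the release a
# module was built for; a mismatch commonly produces modprobe's
# "Invalid argument". Hypothetical helper, not part of any package.
check_vermagic() {
  running="$1"; built_for="$2"
  if [ "$running" = "$built_for" ]; then
    echo "ok"
  else
    echo "mismatch: rebuild the module package against $running"
  fi
}

# In practice you would feed it live values, e.g.:
#   check_vermagic "$(uname -r)" "$(modinfo -F vermagic vboxdrv | awk '{print $1}')"
check_vermagic "3.14.29-1-lts" "3.14.28-1-lts"
```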

  • Trouble with booting system after upgrade udev => systemd

    Hi everybody,
    I have been having trouble with my system since the last upgrade (udev => systemd).
    My issue is something like this: https://bbs.archlinux.org/viewtopic.php?pid=1106157 but the advice from that discussion doesn't work.
    When the system boots, the login screen appears *immediately* (very fast, too fast) after it starts parsing the [udev] hook.
    Of course, I can't log in - I type the username and the screen redraws again on all /dev/tty* - I have no chance to type the password.
    Many invalid logins suspend init for 5 minutes, which stops the screen redraws and lets me see the error: libpam.so.0 cannot be found.
    I suspect that the partitions aren't mounted (this fast login screen doesn't even show a hostname). I have 4 disks with many partitions - mounting
    them takes some time (+- 5 secs).
    From the rescue CD, I can mount all partitions and chroot. In the chroot everything works fine - /bin/login (I checked authorization for all users),
    paths and PAM are OK. Of course I tried the "default rescue trick": `pacman -Suy linux udev mkinitcpio` and `mkinitcpio -p linux` from the rescue CD,
    but nothing changed after reboot. I checked the grub config, and unpacked and checked initramfs-linux.img - all OK.
    In my mkinitcpio.conf I of course have MODULES="ext3" (for my filesystems).
    Please help.

    crab wrote:
    This may or may not be related... but I saw this message just now during an upgrade:
    (121/168) upgrading mkinitcpio [###################] 100%
    ==> If your /usr is on a separate partition, you must add the "usr" hook
    to /etc/mkinitcpio.conf and regenerate your images before rebooting
    And am wondering what the message means by if /usr is on a separate partition - separate partition to what?  /boot? / ?
    I have my /usr partition in the same partition as /  (but /boot is in a different partition)
    Logic tells me I'm safe (haven't rebooted yet), as / is "master", and anything else is a separate partition, and I have /usr on the same partition as /.
    Do you guys have separate /usr and/or /boot partitions?  As stated in first sentence this may not be related, but looks important...
    It means separate from /. So yes, you're right, you are "safe" from having to do anything with this message on your system.
    And to the other people on this thread: make sure you have all your packages uniformly updated, including any pam-related AUR or ABS-built packages. libpam and the pam module directory (.../lib/security) were moved from /lib to /usr/lib a little while back, so make sure that anything that cares about where these live has been updated and isn't confused by the move.
    Last edited by ataraxia (2012-06-03 22:40:22)
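    The "separate from /" check can be made mechanical: /usr needs the usr hook only if it has its own mount entry. A hypothetical sketch (the function name and sample fstab are made up):

```shell
# needs_usr_hook: given fstab-style text, report whether /usr is mounted
# separately from / (and therefore needs the "usr" mkinitcpio hook).
needs_usr_hook() {
  printf '%s\n' "$1" | awk '$1 !~ /^#/ && $2 == "/usr" { found = 1 } END { exit !found }'
}

fstab='/dev/sda2 /     ext4 defaults 0 1
/dev/sda3 /usr  ext4 defaults 0 2'

if needs_usr_hook "$fstab"; then
  echo "add the usr hook to HOOKS in /etc/mkinitcpio.conf"
else
  echo "/usr is on the root filesystem; nothing to do"
fi
```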

  • Distrib -e "Arch(org|code|pkgs|aur|forum|wiki|bugs|.*)?" -- thoughts

    design the output of every tool we use to look like merge-able chains in a DSCM.
    this is something i've been thinking about long before i came to Arch.  i want to see next-generation package/configuration/change management in a distributed distribution.
    I am familiar with git, and most of what i have tried relates to it; however, that's only because i know it well.  other possibilities would be bazaar/mercurial/fossil/etc.  i like fossil; i have not tried it but it looks closest to what i want to achieve.  i don't think it could scale to the levels we'd need, however.
    ASSERTIONS
    ) all PKGBUILD are DSCM (git/?) based
    ) bugs should ride along with software, and be merge-able when branches merge
    ) cryptographic signatures for each user
    ) wiki for each software
    ) forum "channels" for each software
    ) P2P sharing of SCM (blobs/trees/commits in git) units
    ) P2P sharing of common SCM (packs in git) pack
    ) P2P sharing of user configs and ABS build trees; each user may host their own binary/source repo, and sign their packages)
    ) P2P and distribution are good
    essentially, everything is a branch/tree and we use facilities of DSCM with a P2P layer above.  the arch servers could become another node in the system and a long term record keeping peer.  others could add servers.  you could open the wiki/bugs/etc offline, in a web browser, and merge later.  when you edit your PKGBUILDS, they can be forked by others and improved, maybe pushed to the core/community repos.  official repo builds could be signed by an official Arch GPG key.  bring everything as close to source as possible, and spread it out.
    this is completely brainstorming right now, but i have done some tricky cool stuff with git.  i want to keep all/most of the logic/information within the git DAG (commit graph).  i think we could do neat stuff with the git index, git grafts, and several operations could safely be done in parallel.  we could do mapreduce type calculations on the "ArchNet" to get crazy statistics and visualizations.
    i intend to actually build something soon-ish.  right now i'm working on an app that can produce 3D visualizations in VPython from any kind of input stream... i want to hook that kind of stuff up and visualize the arch/linux/gnu/buzz.
    another offshoot project for me was to use VPython (that app is really fun) to navigate and manipulate git repositories in real time.  imagine visualizing your system in 3D while working on it.  like a 3D admin panel where you overlay others configs and entire systems on to your own to see what changes/etc. DSCM can do this.
    thoughts?  what other kinds of things could we do if everything Arch behaved like a P2P super-repository?
    Last edited by extofme (2010-02-14 01:34:37)

    Anntoin wrote:Some interesting ideas. But you need to start small first and make a proof of concept before you try and tackle everything. A detailed plan of how packages are handled and a basic implementation would be a start for example (still not a small job though), then testing the behaviours that you are interested with that framework. You have an idea where you want to go with this but you will need to focus on a few core features and show their benefit before anyone will consider this.
    ah yes of course.  the first step is getting a distributed index, and a "package" format (packages become fuzzy items, below/above); this will be realized in the form of:
    "AUR3 [aur-pyjs] implementation in python (pyjs) + JSON-RPC"
    https://bbs.archlinux.org/viewtopic.php?pid=823972
    i have a package in the AUR for that [aur-pyjs], but it's old, as i haven't been able to update it in a while, and won't be until i secure a development job in my new city (next week hopefully).  aur-pyjs will be built on top of the concepts i have outlined in this thread, and will in time become a prototype.  check it out; it's pretty neat even though it can't do much yet :-).  soon though, i will update the package, and it will then be able to run as a native python desktop app (pyjamas allows the same code to run as a website or a desktop app).  at that point, it will be trivial to implement connectivity to the old AUR/repos, and we will in effect have a pacman+aur replacement.  from there i will tackle bugs+forum, of which there are already several implementations on top of DSCM sub-systems to research and learn from.
    stefanwilkens wrote:
    Dieter@be wrote:Interesting ideas, I think i like them.
    but trying to store too many big files (i.e. package files) inside a VCS seems like a bad idea.  space requirements will be much bigger than what we have now, unless you make the VCS "forget" about older versions or something...
    mostly this.
    the base of what you're proposing is a tremendous amount of data that would never be touched after a new version is released; how do your suggestions fit the rolling release model? Especially relatively large packages updated with high frequency (nvidia binary drivers, for instance) could cause the space requirement to increase rapidly unless moderated.
    packages are not stored in the DSCM, their contents are.  the package itself is simply a top-level tree object in git, linking to all other trees and blobs comprising the package state, and a reference to said tree.  this means everything and anything that is common between _any_ package and _any_ version will be reused; if ten unrelated packages reference the same file, only one copy will ever exist; blobs are the same.  however, some packages may indeed create gigantic, singular blob type objects that always change, and this will be addressed (next...).
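    The one-copy claim follows from git's content addressing: a blob's id is a hash over its content, so identical files collapse to a single object no matter how many packages reference them. A sketch that computes the documented blob id (sha1 of "blob <size>\0<content>") without git itself:

```shell
# blob_id: compute a git blob id by hand, using the documented scheme
# sha1("blob <size>\0<content>"). Identical content => identical id, so a
# content-addressed store keeps exactly one copy regardless of filename.
blob_id() {
  size=$(wc -c < "$1" | tr -d ' ')
  { printf 'blob %s\0' "$size"; cat "$1"; } | sha1sum | awk '{print $1}'
}

a=$(mktemp); b=$(mktemp)
printf 'shared library code\n' > "$a"
cp "$a" "$b"
blob_id "$a"
blob_id "$b"   # same id as above: two files, one stored object
```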
    git compresses the individual objects itself, in zlib format; this could be changed to use the xz format, or anything else.  it also generates pack files full of delta-compressed objects, also compressed. it would not always be necessary to have the full history of a package (if you look somewhere above, i briefly touch this point with various "kinds" of packages, some capable of source rebuild, some capable of becoming any past source/binary version, some a single version/binary only, etc.).  you would not have to retain all versions of "packages" if you did not want to, but you could retrieve them at any time so long as their components existed somewhere on the network.  servers could be set up to provide all packs, all versions, effectively and automatically performing the intended duty of the "arch rollback machine".  the exact mechanism is not defined yet, but it will likely involve some sort of SHA routing protocol, to resolve missing chunks.
    git's data model is stupid simple; structures can be created to represent a package, its history, its bugs/status, and its information (wiki/etc.), in an independent way so they do not depend on each other, but still relate to each other, and possess knowledge of how to "complete" and find each other.  it will not be structured in the typical way git is used now.  unfortunately this is very low level git stuff, and difficult to explain properly, so i won't go there; just know that ultimately the system will only pull the objects you need to fulfill the directive you gave it, and there will be rules to control your object cache.  your object cache can then be used to fulfill the requests of others; ie. P2P.
    since git itself is in a rather poor state when it comes to bindings, i will be using the pure python git library, dulwich, instead.  while in time this could be changed to use proper bindings, or some bits written as C modules, it's possible pypy will make all that unnecessary.  i don't need anything git core offers except its data structures and concepts; although, i intend to make the entire system (adding bugs/updating packages/editing wiki/editing forum/etc.) _completely_ 100% accessible from a basic git client.  for example, you could write a post in the forum by "committing" to a special branch; you could search the entire wiki, and its history from the terminal while installing; you could add a bug, and link a patch to it, directly usable and buildable by others for testing; this could all be done offline, and pushed once a connection was available... this will lead to all sorts of interesting paths...
    in one super run-on sentence:
    i intend to construct a "social", 100% distributed distribution platform, where everyone is a [potentially] contributing node and has [nearly] full access to all information, each node's contributions are cryptographically verifiable, each node may easily participate in testing/discussion or lend computing resources, each node may republish variations of any object or collection under their own signature, each node may "track" or "follow" any number of signatures with configurable aggressiveness (no such thing as "official repos"; your personal "repo" is the unique overlay of other nodes you trust, and by proxy some nodes they trust; "official repos" degrade into an Arch signature, a Debian signature, Fedora, etc.), and finally, do all of this in a way that is agnostic to the customized distribution (or other package managers) above it, or its goals, and eventually spread to other distros, thus creating a monstrous pool of shared bandwidth, space, ideas, and workload, whilst at the same time converging user/developer/tester/vendor/packager/contributor/etc. toward: person.
    piece of cake
    C Anthony
    Last edited by extofme (2010-09-11 05:23:46)

  • [SOLVED] makepkg ignoring CFLAGS

    It seems that no matter what I do, makepkg ignores my CFLAGS and CXXFLAGS options.
    I have added
    CFLAGS="-march=sandybridge -mtune=sandybridge -O3 -pipe"
    to all the locations where makepkg.conf has been installed:
    /etc/makepkg.conf
    /home/jwdev/abs/chroot/jwdev/etc/makepkg.conf
    /home/jwdev/abs/chroot/root/etc/makepkg.conf
    /var/abs/core/pacman/makepkg.conf
    as well as to the makechrootpkg command
    mkarchroot -C /etc/pacman.conf -M /etc/makepkg.conf $CHROOT/root base-devel
    arch-nspawn $CHROOT/root pacman -Syu
    cd $1
    makechrootpkg -c -r $CHROOT -- CFLAGS="-march=sandybridge -mtune=sandybridge -O3 -pipe" CXXFLAGS="-march=sandybridge -mtune=sandybridge -O3 -pipe"
    But still, when I run makepkg I see...
    g++ -c -pipe -O2 -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector-strong --param=ssp-buffer-size=4 -Wall
    This is the case if I makepkg or makechrootpkg
    is there some other secret location for these args? or are they hardcoded in?
    I do see that some packages have hardcoded args, like libaio, which kind of defeats the point of abs
    build() {
      cd "$srcdir/$pkgname-$pkgver"
      CFLAGS="-march=${CARCH/_/-} -mtune=generic -O2 -pipe"
      make
    }
    but in my test case, smplayer, this is not the case
    build() {
      cd "$pkgname-$pkgver"
      make PREFIX=/usr \
        DOC_PATH="\\\"/usr/share/doc/smplayer\\\"" \
        QMAKE_OPTS=DEFINES+=NO_DEBUG_ON_CONSOLE
    }
    thanks
    UPDATE:  running makepkg on "joe" I see that the CFLAGS are correct... so it seems that only some packages (all but 1 so far, actually) respect makepkg.conf, while the others just do their 'own thing'?  I am new to Arch, so maybe I am missing a huge piece of information.
    Last edited by ntisithoj (2015-03-17 06:35:08)

    The trick described in the wiki:
    https://wiki.archlinux.org/index.php/makepkg#CFLAGS.2FCXXFLAGS.2FCPPFLAGS_in_makepkg.conf_do_not_work_for_QMAKE_based_packages
    works only with sources that use qmake (qmake-qt4) to preconfigure. In that case, you should define the QMAKE variables in the PKGBUILD and then pass them in the "build" section. See the example of QupZilla that works for me (it's a modified qupzilla-qt5-qtwebkit-git PKGBUILD by Alex Talker):
    pkgname=qupzilla-qt5-qtwebkit-git
    pkgver=r3348.0c37b62
    pkgrel=1
    pkgdesc="A new and very fast open source browser based on WebKit core, written in Qt Framework."
    arch=('i686' 'x86_64')
    url="http://qupzilla.com/index.php"
    license=('GPL')
    depends=( 'qt5-base' 'qt5-script' 'qt5-webkit')
    makedepends=('git')
    provides=('qupzilla' 'qupzilla-git')
    conflicts=('qupzilla' 'qupzilla-git' 'qupzilla-qt5-git')
    source=('git+https://github.com/QupZilla/qupzilla.git')
    md5sums=('SKIP')
    CFLAGS="-march=native -mtune=bdver2 -O2 -pipe -fstack-protector-strong --param=ssp-buffer-size=4"
    CXXFLAGS="-march=native -mtune=bdver2 -O2 -pipe -fstack-protector-strong --param=ssp-buffer-size=4"
    pkgver() {
      cd qupzilla
      printf "r%s.%s" "$(git rev-list --count HEAD)" "$(git rev-parse --short HEAD)"
    }
    build() {
      cd "$srcdir/qupzilla"
      export USE_WEBGL="true"
      export KDE_INTEGRATION="true"
      export QUPZILLA_PREFIX="/usr/"
      export USE_LIBPATH="/usr/lib"
      qmake-qt5 "$srcdir/qupzilla/QupZilla.pro" \
        PREFIX=/usr \
        CONFIG+=LINUX_INTEGRATED \
        INSTALL_ROOT_PATH="$pkgdir" \
        QMAKE_CFLAGS_RELEASE="${CFLAGS}" \
        QMAKE_CXXFLAGS_RELEASE="${CXXFLAGS}"
      make
    }
    package() {
      cd "${srcdir}/qupzilla"
      make INSTALL_ROOT="$pkgdir/" install
    }
    Modified sections are:
    - in the "header", where I add the CFLAGS="something" and CXXFLAGS="something" variables, and
    - in the "build" section, where I append QupZilla.pro to the qmake-qt5 "$srcdir/qupzilla/ line of the original PKGBUILD and add:
    CONFIG+=LINUX_INTEGRATED \
    INSTALL_ROOT_PATH="$pkgdir" \
    QMAKE_CFLAGS_RELEASE="${CFLAGS}" \
    QMAKE_CXXFLAGS_RELEASE="${CXXFLAGS}"
    But in the case of smplayer, it looks like it is already "preconfigured". There is no *.pro file, you don't use "qmake" to preconfigure the sources, and the typical configuration for it is only:
    make
    sudo make install
    because everything else is already in the Makefile. You should use another trick (I don't know which one will be correct).
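    For plain-make sources like that, whether exported CFLAGS survive depends on how the Makefile assigns them: a hard `CFLAGS = ...` assignment beats the environment unless you pass `-e` or override on the make command line. A tiny made-up illustration:

```shell
# A Makefile that hard-assigns CFLAGS ignores the environment by default.
demo=$(mktemp)
printf 'CFLAGS = -O2 -mtune=generic\nall:\n\t@echo $(CFLAGS)\n' > "$demo"

CFLAGS='-march=native' make -s -f "$demo"       # prints -O2 -mtune=generic
CFLAGS='-march=native' make -s -e -f "$demo"    # -e: environment wins
make -s -f "$demo" CFLAGS='-march=native'       # command-line override always wins
```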

  • Yaourt, expected?

    Hello,
    I have been using AUR for more important packages that I feel should be optimized and compiled on my system (firefox-pgo, mplayer, etc.), but I've been looking for an easier way to check the AUR packages (that I already have installed) for updates, and automagically download, makepkg, and update them. I've installed yaourt in hopes that it could take care of this for me..
    However, I have run into (perceived?) issues. Running 'yaourt -Sybu --aur' gives me:
    :: Synchronizing package databases...
    core is up to date
    extra is up to date
    community is up to date
    archlinuxfr is up to date
    ==> Packages to build from sources:
    mplayer 29411-3
    Proceed with compilation and installation ? [Y/n]y
    Source Targets:  mplayer live-media
    Proceed with upgrade?  [Y/n] y
    ==> Building mplayer from sources
    mplayer was not found on abs repos.archlinux.org
    ==> Building live-media from sources
    live-media was not found on abs repos.archlinux.org
    I don't think that yaourt is even looking in the AUR, and also why does this extra live-media dep exist now, as I don't see it listed as a dependency of mplayer on the AUR..?
    Please advise.

    I have written a different patch which does not require ABS to be installed and uses the new WebSVN; this is more similar to the current yaourt mechanism:
    --- yaourt-original 2009-10-20 10:59:54.000000000 +0200
    +++ /usr/bin/yaourt 2009-10-20 11:00:28.000000000 +0200
    @@ -32,7 +32,8 @@
    AUR_URL="http://aur.archlinux.org/packages.php?setlang=en&do_Search=SeB=nd&L=2&C=0&PP=100&K="
    AUR_URL3="http://aur.archlinux.org/packages.php?setlang=en&ID="
    ABS_URL="http://archlinux.org/packages/search/?category=all&limit=99000"
    -ABS_REPOS_URL="http://repos.archlinux.org/viewvc.cgi"
    +#ABS_REPOS_URL="http://repos.archlinux.org/viewvc.cgi"
    +ABS_REPOS_URL="http://repos.archlinux.org/wsvn/packages"
    [ -z "$LC_ALL" ] && export LC_ALL=$LANG
    --- abs.sh-original 2009-10-20 10:53:50.000000000 +0200
    +++ /usr/lib/yaourt/abs.sh 2009-10-20 16:45:01.000000000 +0200
    @@ -44,9 +44,9 @@
    #if [ "$repository" = "testing" ]; then
    # repository="all"
    #fi
    +
    # Manage specific Community and Testing packages
    - if [ "$repository" = "community" ]; then
    + if [ "$repository" = "community" ]; then
    # Grab link to download pkgbuild from AUR Community
    [ "$MAJOR" != "getpkgbuild" ] && msg $(eval_gettext 'Searching Community AUR page for $PKG')
    aurid=`findaurid "$PKG"`
    @@ -67,23 +67,15 @@
    # Grab link to download pkgbuild from new repos.archlinux.org
    source /etc/makepkg.conf
    [ -z "$CARCH" ] && CARCH="i686"
    - wget -q "${ABS_REPOS_URL}/$PKG/repos/" -O - > "$YAOURTTMPDIR/page.tmp"
    - if [ $? -ne 0 ] || [ ! -s "$YAOURTTMPDIR/page.tmp" ]; then
    - echo $(eval_gettext '$PKG was not found on abs repos.archlinux.org'); manage_error 1 || continue
    - fi
    - repos=( `grep "name=.*i686" "$YAOURTTMPDIR/page.tmp" | awk -F "\"" '{print $2}'` )
    - # if package exists in testing branch and in current branch, select the right url
    - if [ ${#repos[@]} -gt 1 -a $USETESTING -eq 1 ]; then
    - url="$ABS_REPOS_URL/$PKG/repos/${repos[1]}/"
    - else
    - url="$ABS_REPOS_URL/$PKG/repos/${repos[0]}/"
    - fi
    + ## legolas558: full path at WebSVN
    + url="${ABS_REPOS_URL}/$PKG/repos/${repository}-${CARCH}/"
    fi
    # Download Files on SVN package page
    wget -q "$url" -O "$YAOURTTMPDIR/page.tmp"
    manage_error $? || continue
    - files=( `grep "name=.*href=\"/viewvc.cgi/" "$YAOURTTMPDIR/page.tmp" | awk -F "\"" '{print $2}'`)
    + files=( `sed -nr 's/(.*)href="?([^ ">]*).*/\2\n\1/; T; P; D;' "$YAOURTTMPDIR/page.tmp" | grep op=dl \
    + | grep -v isdir=1 | sed -e 's/\&/\&/g' `)
    if [ ${#files[@]} -eq 0 ]; then echo "No file found for $PKG"; manage_error 1 || continue; fi
    echo
    if [ "$MAJOR" != "getpkgbuild" ]; then
    @@ -98,12 +90,11 @@
    for file in ${files[@]}; do
    echo -e " ${COL_BLUE}-> ${NO_COLOR}${COL_BOLD}$(eval_gettext 'Downloading ${file} in build dir')${NO_COLOR}"
    - if [ "$repository" = "community" ]; then
    - eval $INENGLISH wget --tries=3 --waitretry=3 --no-check-certificate "$ABS_REPOS_URL/community/$category/$PKG/$file?root=community\&view=co" -O $file
    - else
    - eval $INENGLISH wget --tries=3 --waitretry=3 --no-check-certificate "${url}${file}?view=co" -O $file
    - fi
    + pfname=$( echo $file | awk -F[/?] '{ print $(NF-1) }' )
    + ## legolas558: do not use eval since will exit the script
    + $INENGLISH wget --tries=3 --waitretry=3 --no-check-certificate -O "$pfname" "http://repos.archlinux.org${file}"
    done
    + unset pfname
    [ "$MAJOR" = "getpkgbuild" ] && return 0
    @@ -114,15 +105,15 @@
    else
    runasroot=0
    fi
    +
    readPKGBUILD
    if [ -z "$pkgname" ]; then
    echo $(eval_gettext 'Unable to read PKGBUILD for $PKG')
    manage_error 1 || continue
    fi
    +
    msg "$pkgname $pkgver-$pkgrel $([ "$branchtags" = "TESTING" ] && echo -e "$COL_BLINK[TESTING]")"
    +
    # Customise PKGBUILD
    [ $CUSTOMIZEPKGINSTALLED -eq 1 ] && customizepkg --modify
    You will need to apply it to /usr/bin/yaourt and /usr/lib/yaourt/abs.sh. I will propose it to the yaourt maintainers.

  • Skulltag server

    Hi,
    As you may or may not know, Skulltag is a Doom source port. I'm trying to use Skulltag to host a Doom server on my Archlinux machine. I downloaded Skulltag (For Ubuntu) off of the Skulltag website, however when I try to run the executable I get the following:
    $ skulltag-server
    -bash: /usr/local/bin/skulltag-server: No such file or directory
    Note also that I placed the executable "skulltag-server" in /usr/local/bin. I have downloaded all of the required dependencies through pacman (with the exception of FMOD, which I manually downloaded and placed in /usr/local/lib). I'm not here to ask how to set up Skulltag, since I don't expect anyone here to know how or even care. I mainly want to know how I can tell why this executable is failing. It's saying there's no such file or directory when this is obviously not the case. I checked the log files in /var/log but I'm not seeing anything logged.
    Help is greatly appreciated.
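    One hedged guess at the cause: for a 32-bit binary on a 64-bit system, the "No such file or directory" refers to the binary's missing ELF interpreter (the 32-bit loader, /lib/ld-linux.so.2), not the binary itself. A diagnostic sketch, assuming binutils' readelf is installed (the helper name is made up):

```shell
# missing_interp: report whether a binary's ELF interpreter (dynamic loader)
# is absent; if it is, exec() fails with "No such file or directory" even
# though the binary itself clearly exists.
missing_interp() {
  interp=$(readelf -l "$1" 2>/dev/null |
           sed -n 's/.*interpreter: \([^]]*\)].*/\1/p')
  [ -n "$interp" ] && [ ! -e "$interp" ]
}

if missing_interp /usr/local/bin/skulltag-server; then
  echo "the 32-bit loader/glibc for this binary is missing"
fi
```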

    void.pointer wrote:
    skottish wrote:
    Back on track...
    Some of the dependencies for this program are not existing lib32 libraries from community, nor has anyone put together the dependencies for AUR.
    Things that don't exist yet: flac, fmod, nasm, p7zip, timidity++
    Things that are in community: lib32-gtk2, lib32-libjpeg, lib32-sdl
    Pulling in the stuff from community will also bring in a whole bunch of other libraries too.
    You can find good examples of how lib32 packages are made from ABS. Or you can go with a chroot environment. Either way there's a lot of work to be done before this program will run.
    I think you're assuming I know a lot more than I really do
    I've literally only been using Linux for the past 2-3 months. I've been a Windows user my entire life. So, basically, I'm not very familiar with Archlinux either as a result, since I spent the first month using Ubuntu.
    When I do "pacman -S some_library", what am I getting? 32-bit or 64-bit libraries? What do you expect me to be doing with ABS, building all of the dependencies myself?
    I'm just used to having you around! Actually, this is how I learned too. I still have a long way to go...
    I'm curious about something before you go any farther. In the file /etc/pacman.d/mirrorlist, all of the entries there end in x86_64, right? For instance:
    Server = http://archlinux.unixheads.org/$repo/os/x86_64
    Last edited by skottish (2008-10-16 21:15:26)

  • Internal Arch repository?

    Ok, a summary first: I'm the systems administrator for a small company with a callcentre, a few servers and a bunch of desktop machines. I plan to replace the antiquated SuSE and Mandrake installs with some nice fresh Arch installs.
    Although Arch is designed to be updated to the bleeding edge at all times, I want to use it for:
    1. Pacman
    2. Stability
    3. I like the way everything is done
    However, I won't blindly roll out updates to the whole callcentre. My plan is therefore to sync my local repository to the global one every month, sync one machine to it, and 2 weeks later, if that machine hasn't crashed, sync the rest. This keeps machines within a month and a half of the latest (obviously I'll update security-related packages instantly), but it requires an internal repository. I have NO idea of the best way to set one up, so I come to you for advice.
    Is there an easy way to produce an internal arch repository that 40-50 machines can use as a base without having to rsync the entire repository from the server / otherwise needlessly consume mirrors' bandwidth?
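    One common sketch for the serving side (all names illustrative): put your vetted packages in a directory, run `repo-add internal.db.tar.gz *.pkg.tar.*` there, export that directory over HTTP or NFS, and point each client's pacman.conf at it:

```ini
# /etc/pacman.conf on each callcentre machine (hypothetical host and paths);
# listing the internal repo first gives its packages priority.
[internal]
Server = http://repo.example.lan/archrepo
```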

    hi all,
    i am new to these forums, so i'd like to start by saying thanks for this great distro! i've been using arch for about two months now and i find it by far the best distro i've tried. arch is always up to date, maintenance is mostly hassle-free and pacman is a breeze to use. i _really_ like the ABS build system. oh, and it also runs quite decently on my 500Mhz 192Mb laptop. wow!
    *ahem*, i suppose i will get on topic now:
    as i do not have internet at home, i like to keep a local repository. this way i don't have to drag my laptop to a connection each time i want to install a package. the easiest way to do this is by sync'ing from rsync.archlinux.org every two months or so.
    i fully understand that you guys are not too eager to have home users rsync from your server. so my question is: are there mirrors that can be used for rsync-ing?
    i already tried syncing from ftp.gwdg.de/pub/linux/metalab/distributions/archlinux. this works ok, except for the fact that it leaves me with a mirror containing several versions of each package. i guess i could sync from a mirror and then delete superfluous packages by doing
       rsync -av ftp.gwdg.de::pub/linux/metalab/distributions/archlinux/current ./current
       rsync -av --delete rsync.archlinux.org::current ./current
    but this method wastes gigabytes of bandwidth. anyone have a better idea?
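    One way to prune the extra versions locally (a sketch: the field-splitting heuristic assumes the usual name-version-release-arch filenames and GNU `sort -V`): after the rsync, keep only the newest file per package and delete the rest.

```shell
# newest_per_pkg: read package filenames on stdin and print only the newest
# one per package, so older copies can be removed after syncing from a
# mirror that retains multiple versions.
newest_per_pkg() {
  sort -V | awk -F '-[0-9]' '{ latest[$1] = $0 } END { for (p in latest) print latest[p] }' | sort
}

printf '%s\n' \
  'bash-4.0.017-1-i686.pkg.tar.gz' \
  'bash-4.0.024-1-i686.pkg.tar.gz' \
  'zlib-1.2.3-1-i686.pkg.tar.gz' | newest_per_pkg
```

To actually delete the old files you would diff this list against `ls` and remove the names that are absent from it.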
