[SOLVED] Concerning sirius: some building + PKGBUILD/.SRCINFO questions

[first from scratch pkgbuild]
Hello,
I've gotten as far as an *almost* working PKGBUILD (link; plain) for sirius (only a dummy package() function so far). Sadly, the build itself won't work; it fails with a Python syntax error, e.g.:
if prot <> 'public':
on the '>'. It does not seem to be a source-code bug, nor a Python 3 regression (I tested in a venv with Python 2, too).
I'm thus currently a bit clueless.
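For context (an editor's sketch, not from the thread): `<>` was Python 2's old spelling of the inequality operator and was removed in Python 3, so any Python 3 interpreter rejects that line at compile time; the portable spelling is `!=`. A minimal check:

```python
# `<>` was Python 2's inequality operator; Python 3 removed it in favour of `!=`.
# Compiling the failing line under Python 3 reproduces the SyntaxError.
old_style = "prot = 'x'\nif prot <> 'public':\n    pass\n"
new_style = old_style.replace("<>", "!=")

def compiles(src: str) -> bool:
    try:
        compile(src, "<sirius-test>", "exec")
        return True
    except SyntaxError:
        return False

print(compiles(old_style))  # False under Python 3: SyntaxError on '<>'
print(compiles(new_style))  # True
```

So if the build ends up invoking a Python 3 interpreter on Python-2-only sources, this is exactly the error you would see.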
On the general side:
I've read in some .SRCINFO files "generated by makepkg" - how do you do that? I haven't found an option for it in the man pages or in the help :-/
Is there a tool to check for "automatically matched" dependencies - i.e. when a dependency is already satisfied because another dependency depends on it? That would be helpful.
It's my first from-scratch PKGBUILD, so if you find anything to improve, I'd appreciate it if you'd point it out :-)
Last edited by LeonardK (2015-05-19 09:17:16)

LeonardK wrote:
[first from scratch pkgbuild]
Hello,
I've gotten as far as an *almost* working PKGBUILD (link; plain) for sirius (only a dummy package() function so far). Sadly, the build itself won't work; it fails with a Python syntax error, e.g.:
if prot <> 'public':
on the '>'. It does not seem to be a source-code bug, nor a Python 3 regression (I tested in a venv with Python 2, too).
I'm thus currently a bit clueless.
Are these commands the upstream instructions? If so, you might open an issue with them.
On the general side:
I've read in some .SRCINFO files "generated by makepkg" - how do you do that? I haven't found an option for it in the man pages or in the help :-/
makepkg --source ?
Is there a tool to check for "automatically matched" dependencies - i.e. when a dependency is already satisfied because another dependency depends on it? That would be helpful.
namcap
It's my first from-scratch PKGBUILD, so if you find anything to improve, I'd appreciate it if you'd point it out :-)
pkgver for git VCS always makes me cry
Last edited by Alad (2015-03-20 18:12:26)

Similar Messages

  • Macbook pro starts up but i see picture of some files and a question mark

So I was on my laptop today and it froze out of nowhere; the song I was listening to stopped and I couldn't change the page. I could move the trackpad, but nothing else.
So I pressed the power button and turned it off.
Then I tried to turn it on again, but while loading it just stayed blank, and this picture of some files with a question mark on them appeared; I have no idea what's going on.
I turned it off again and turned it back on; the little Apple logo appeared and it was loading, but then a prohibited sign appeared (the one that's a circle with a slash across it), and I don't know what to do! I'm so frustrated; all my school files are on there, and I don't know why this happened in the first place.
I turned it off and just put it on charge.
Lately I've been using it a lot, and I had a few things open when it froze, so I'm just going to leave it off for a while. But what can I do?
How do you solve this? Has anyone had this happen to their MacBook Pro? I bought it just last year, and I don't know why this is happening.

The folder with a question mark means the Mac cannot find a valid OS X boot volume. You need to boot from your original install DVD (hold C at startup), then run Disk Utility from the top menu bar and run both 'Repair Disk' and 'Repair Permissions'. If it still doesn't boot from the internal hard drive, boot from the install DVD again and do an archive (re)install of the operating system. (Your user data will be preserved that way.)

[SOLVED] "permission denied" when building pkg

Most consistently, I get the error when a PKGBUILD has the line: ./autogen.sh
when building linphone-git, mediastreamer-git and ortp-git.
Another time, I got burned by the line: ./system.tmp
while building wiimms-iso-tools,
and on the last package I tried to build, I got it on the line: ./file2h
while building linux-wbfs-manager.
It really bugs me about these packages, because my desktop has the exact same install and works fine (so well that I can build them on the desktop and install them on the laptop, but that becomes tedious after a while, so I'd like to fix this issue). I searched the forums, and the issue always seemed to be related to the actual PKGBUILD, but that's not the case in my scenario, because the build works fine on the desktop and kicks me in the teeth on the laptop. Please ask whatever questions are necessary to get the info needed to deal with this issue. I'm not really strong at PKGBUILD configuration (I've only made a couple of my own, to test my abilities), so I'm not sure what to include exactly. I can include the line numbers for the PKGBUILDs in question, but I'm sure the PKGBUILDs are not the issue here; it's something to do with my laptop.
Come to think of it, my desktop isn't quite as up to date as my laptop. The laptop ran into an issue where python-qt and python-sip were attempting to replace python2-qt and python2-sip, but both python2 packages were dependencies of 50+ other packages, so I'm not sure if that's related. I'm pretty sure I didn't update my desktop and get that issue, not in the last couple of days. Both issues started on my laptop today, so maybe they are related.
    SOLUTION: removed "noexec" from tmpfs options for /tmp ramdisk setup in fstab
    Moral of the story: be mindful of how you configure things, because an old configuration can ruin a new package installation or functionality.
    Last edited by CPUnltd (2011-01-24 22:09:34)
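The solution above amounts to a single fstab change; a sketch, using a typical tmpfs line for /tmp (the exact options are illustrative, not taken from the thread):

```shell
# /etc/fstab -- before: "noexec" on /tmp causes "Permission denied" when
# makepkg tries to run ./autogen.sh, ./file2h, etc. from a build dir in /tmp:
#   tmpfs  /tmp  tmpfs  nodev,nosuid,noexec  0  0
#
# after: the same line with noexec removed, then remount with
#   mount -o remount /tmp
#
#   tmpfs  /tmp  tmpfs  nodev,nosuid  0  0
```

This is a config fragment only; whether your /tmp is currently mounted noexec can be checked with `findmnt -no OPTIONS /tmp`.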

    ==> Starting make...
    patching file Makefile
    Hunk #1 succeeded at 165 (offset 11 lines).
    Hunk #2 succeeded at 459 (offset 19 lines).
    ./setup.sh: line 47: ./system.tmp: Permission denied
    ***  create templates.sed     
    ***  create version.h         
    /bin/bash: ./gen-template.sh: Permission denied
    make: *** [version.h] Error 126
        Aborting...
    The build failed.
    ==> Starting build()...
    gcc -march=x86-64 -mtune=generic -O2 -pipe -Wall -DLARGE_FILES -D_FILE_OFFSET_BITS=64 -Ilibwbfs -I. -pthread -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/include/libglade-2.0 -I/usr/include/gtk-2.0 -I/usr/include/libxml2 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/pango-1.0 -I/usr/include/pixman-1 -I/usr/include/freetype2 -I/usr/include/libpng14    -c -o file2h.o file2h.c
    gcc -o file2h file2h.o
    ./file2h wbfs_gui_glade.h wbfs_gui.glade
    make: execvp: ./file2h: Permission denied
    make: *** [wbfs_gui_glade.h] Error 127
        Aborting...
    The build failed.
    ==> Starting make...
    /tmp/packerbuild-1000/linphone-git/linphone-git/PKGBUILD: line 39: ./autogen.sh: Permission denied
        Aborting...
    The build failed.
    /tmp/packerbuild-1000/mediastreamer-git/mediastreamer-git/PKGBUILD: line 35: ./autogen.sh: Permission denied
        Aborting...
    The build failed.
    ==> Starting make...
    /tmp/packerbuild-1000/ortp-git/ortp-git/PKGBUILD: line 34: ./autogen.sh: Permission denied
        Aborting...
    The build failed.

Wrapper script to build PKGBUILDs in RAM

I've been wanting to do two things for the last couple of weeks: 1) create a wrapper script to build Arch packages in RAM, and 2) create KDE trunk PKGBUILDs for those adventurous enough to use them (such as myself). Anyway, I sat down and began the task today; I haven't progressed too far, and probably won't do much more with it until next weekend because of school, but here is the current, very alpha-ish but working, wrapper script I will use:
    #!/bin/bash                                                                           
    # =============================================================
    # kdepkg is a wrapper script to build KDE-TRUNK packages in RAM.
    # =============================================================
    # Holds the location of the original directory
    ORIG_DIR="`pwd`"
    # Build each package listed as arguments
for dir in "$@"; do
        cd "${dir}"
            # Create some local variables
            PKG_NAME="`grep pkgname PKGBUILD| head -n1 | sed -e "s/'//g" | sed -e 's/"//g' |
                    sed -e 's/=/ /' | awk '{print $2}'`"
            SRC_DIR="`pwd`/src/`grep _svnmod PKGBUILD | head -n1 | sed -e "s/'//g" |       
                    sed -e 's/"//g' | sed -e 's/=/ /' | awk '{print $2}'`"                 
            BUILD_DIR="/dev/shm/${PKG_NAME}-build"                                         
            # Create the necessary directories
            mkdir -p ${SRC_DIR} \     
                     ${BUILD_DIR}     
            # Link the build directory to the source directory
            ln -sf ${BUILD_DIR} ${SRC_DIR}
            # Create a unique stamp for the backup PKGBUILD
            UNIQUE="`date +"%m%d%y_%H%M%S"`"
            # Modify the PKGBUILD && make a pkg
            mv -f PKGBUILD PKGBUILD.${UNIQUE}.bak
            sed -e "s:[.][.]:${SRC_DIR}:" PKGBUILD.${UNIQUE}.bak > PKGBUILD && makepkg
            mv -f PKGBUILD.${UNIQUE}.bak PKGBUILD
            # Remove the build directory
            rm -rf ${BUILD_DIR} \
                   ${SRC_DIR}/${PKG_NAME}-build
    done
    # Return to the original directory
    cd ${ORIG_DIR}
It's currently very picky about what it will build, but it is sufficient to build my packages. I plan to extend its abilities to include building any package, whether it uses a build directory or not, regardless of the packaging technique utilized. I will also include options to specify the temp directory to use as the base for build directories, and whether to delete them after package creation.
Here's an example of one of the PKGBUILDs this will work with:
    # KDE Trunk Libraries               
    # Contributor: Dylon Edwards <[email protected]>                   
    pkgname=kdetrunk-kdelibs
    pkgver=860962           
    pkgrel=1               
    pkgdesc="KDE Core Libraries"
    arch=('i686' 'x86_64')     
    url='http://www.kde.org'   
    license=('GPL' 'LGPL' 'FDL')
    groups=('kde')             
    depends=('libxcursor' 'kdetrunk-phonon' 'shared-mime-info' 'qt-copy' 'libxpm'
             'enchant' 'jasper' 'openexr' 'kdetrunk-strigi' 'bzip2' 'libxslt' 'libxtst'
             'giflib' 'kdetrunk-soprano' 'ca-certificates' 'heimdal' 'pmount')         
    makedepends=('pkgconfig' 'cmake' 'kdetrunk-automoc4' 'intltool' 'avahi' 'libgl')   
    replaces=('arts')                             
    conflicts=('kdelibs' 'kdemod-kdelibs')                                   
    options=('docs')                                                         
    source=()                                                                 
    md5sums=()
    _svnmod='kdelibs'
    _svntrunk="svn://anonsvn.kde.org/home/kde/trunk/KDE"
    build() {
            cd ${srcdir}
            # update the src as needed
        [ -d ${_svnmod}/.svn ] && svn up ${_svnmod} || svn co ${_svntrunk}/${_svnmod}
            cd ${_svnmod}                                                                   
            # Create a build directory
            mkdir -p ${_svnmod}-build && cd ${_svnmod}-build
            # Configure the package
            cmake -D CMAKE_BUILD_TYPE=Release \
                  -D KDE_DISTRIBUTION_TEXT='Arch Linux' \
                  -D CMAKE_INSTALL_PREFIX=/usr \
                  -D SYSCONF_INSTALL_DIR=/etc \
                  -D KDE_DEFAULT_HOME='.kde4' .. || return 1
            # Begin make
            make || return 1
            # Install the package to the fakeroot directory
        make DESTDIR=${pkgdir} install || return 1
}
    What do y'all think?
Edit: I should add, in case my intentions weren't clear, that this script will be extended to build PKGBUILDs other than those designed for KDE. Its advantage over moving the entire directory to RAM and building there is that only the build directory lives off the hard disk. This matters for my project because I only need to update KDE's subversion files to stay up to date (if I had to download everything each time I updated the packages, I would do everything in RAM). Building the packages in a tmpfs directory is MUCH faster than building them directly on the hard disk, and it reduces wear, tear, and fragmentation on the drive. I got the idea from here and here.
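As a side note from outside the thread: modern makepkg has this built in. The BUILDDIR variable in makepkg.conf (or in the environment) relocates builds to any directory, including a tmpfs. A config-only sketch (the paths are examples):

```shell
# /etc/makepkg.conf fragment -- build packages in a tmpfs-backed directory.
# BUILDDIR is a standard makepkg.conf variable; /tmp/makepkg is an example path.
BUILDDIR=/tmp/makepkg

# One-off form, without editing makepkg.conf:
#   BUILDDIR=/dev/shm/makepkg makepkg
```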
    Last edited by deltaecho (2008-09-14 22:01:08)

Good show... pleased to see the speed and performance of a RAM-based PKGBUILD.
My system has no HDD; it is based on CompactFlash devices in FaunOS. I may give this procedure a tryout on a "Live" system.
EDIT:
FaunOS already uses tmpfs, so I guess the PKGBUILDs already run in RAM along with the system, via aufs for downloads.
    Last edited by lilsirecho (2008-09-14 22:47:31)

HT201272 I have recently picked up an Apple TV. When I go to the Movies tab, there are fewer movies available there than in my iTunes account. I've done some looking around at questions from others, but haven't found an answer that works for me.

I have recently picked up an Apple TV. When I go to the Movies tab, there are fewer movies available there than in my iTunes library. I have 38 movies in my library, but only 23 are showing in the Movies tab on the Apple TV. After researching this a little, only 23 movies show under Purchased in the Quick Links of my iTunes account. The majority of my collection are digital-copy downloads that came with DVD purchases. Some of the missing movies were added in the past couple of months; the rest are a year or so old. I have done some looking around at questions from others, but I have not found an answer that fixes my situation. How do I update my library to get ALL of my movies to reflect that they were "Purchased" (as it says they were in their Properties)?

    Biggles Lamb wrote:
Chill out guys; getting personal will never change another person's view. Right or wrong, they are entitled to it.
The pros and cons of whether or not to go CC have been done to death.
It's a fact that the CC model will work for some, especially newbies and small businesses.
The risks associated with the CC model have been well documented.
For long-term users of CS perpetuals, it's generally a large hike in cost compared to the upgrade system.
Then there is the "Adobe can rot in hell" group, who will never subscribe.
To each their own: do the math to suit your cash flow, whatever that is, and then make an informed decision.
To those on the CC model, I'd like to offer some practical advice: do not allow automatic updates; make regular backups; develop an exit strategy of alternatives for when you can no longer afford the subscription costs; and never assume that an Adobe update is bug free.
Enjoy your cloud
    Col
Thank you for that post, and the advice. I just happen to be one of those it does work for. I've been around long enough to know that CC isn't going to work for everyone (the large publishing/radio/web company I work for isn't upgrading to CC because of the costs involved). But it does for me, as I potentially venture out into the full-time freelancing world and away from the more structured big-office environment. I can't make decisions based on what is best for anyone else, or on what will hurt or help Adobe; just on what affects me, and that's all.
    Brent

  • Some build specifications change on different machines

    Hi all,
I often need to move my LabVIEW projects from one machine to another. In these cases, I copy the entire contents of my project folder to the new machine.
My projects build without errors, but some build specifications are not kept.
The build specifications that are not maintained are the additional installers list (see attachment).
    In the machine 1, I include only Math kernel Libraries and NI VC2008MCMS.
    When I open the same project on the Machine 2, all the elements in NI LabVIEW Run-Time engine 2012 SP1 f3 list are checked. (see attachments)
    Thank you
    Attachments:
    machine1.png ‏5 KB
    machine2.png ‏5 KB

    Hi AC_85,
Do machine 1 and machine 2 have the same software installed? Is it the same version? Maybe some additional installer is missing on the machine to which you copy your project.
You could try duplicating the entire project (from the project explorer: File >> Save as >> Duplicate project >> Include all dependencies) before moving it from machine 1 to machine 2.
Hoping this will help you.
    Best Regards.
    Cla_CUP
    NI ITALY

  • Why do some graphics only have question marks on my iPad?

    Why do some graphics only have question marks on my iPad?

On web pages and in the App Store. In place of the graphic there's a little box with a question mark in it.
I can't even connect to a web page right now. I've got three bars on the Wi-Fi icon, but Safari just keeps trying until it says it can't connect to the server. Same with the App Store; I've been trying all afternoon to download a new app with no success. It came right through on my iPhone on the same Wi-Fi network. I had the same problem on my home network. I'm new to Apple and I've only had the iPhone and the iPad for a little over a week, but all this seems to have started after the most recent software update.
Is this all related?

  • [svn:bz-4.x] 16256: Do some build cleanup on the BlazeDS/4.x branch.

Revision: 16256
    Author:   [email protected]
    Date:     2010-05-20 10:50:31 -0700 (Thu, 20 May 2010)
    Log Message:
    Do some build cleanup on the BlazeDS/4.x branch. I had just intended to update the build scripts to exclude xalan.jar from the WEB-INF/lib directory of a bunch of the webapps but while I was doing this I found a number of other problems that I attempted to fix. Here is a breakdown of what I changed.
    Update build scripts to not add xalan.jar to the WEB-INF/lib directory of those webapps that were including it. As far as I can tell we don't need to/shouldn't be shipping xalan.jar in the WEB-INF/lib directory of any of the webapps. I also updated the build scripts to not add fxgutils.jar to the WEB-INF/lib directory of those webapps that were including it. I also can't see any reason for us to be shipping fxgutils.jar with any of the webapps. This jar file is part of the Flex SDK and we're no longer shipping BlazeDS with the Flex SDK. 
    The clean target in the main build.xml file was not calling clean on the ds-console webapp. It was also calling clean on the qa-regress webapp twice and never calling clean on qa-manual which was probably a typo. Fixed the build so clean now gets called for ds-console and qa-regress.
The samples webapp had hsqldb.jar checked into its WEB-INF/lib directory in svn. The build script for the samples webapp was also copying hsqldb.jar from the <blazeds-home>/lib/hsqldb directory. I cleaned this up by deleting hsqldb.jar from the samples/WEB-INF/lib directory in svn, so now we will just get the jar from the <blazeds-home>/lib/hsqldb directory.
After removing hsqldb.jar from the samples/WEB-INF/lib directory, none of the webapps in BlazeDS have anything checked into their WEB-INF/lib directories in svn. Most of the BlazeDS webapps were using properties from the top-level build.properties file to determine which jars to delete from the WEB-INF/lib directory. This is problematic because, unless we do a good job of keeping these lists of jars up to date, files wind up not being cleaned from the WEB-INF/lib directory and we could potentially ship files that are no longer part of the build. I fixed this issue by updating the build scripts for all of the webapps to delete everything from the WEB-INF/lib directory as part of the clean target.
    Checkintests: passed
    Modified Paths:
        blazeds/branches/4.x/apps/blazeds/build.xml
        blazeds/branches/4.x/apps/blazeds-spring/build.xml
        blazeds/branches/4.x/apps/ds-console/build.xml
        blazeds/branches/4.x/apps/samples/build.xml
        blazeds/branches/4.x/apps/samples-spring/build.xml
        blazeds/branches/4.x/apps/team/build.xml
        blazeds/branches/4.x/build.properties
        blazeds/branches/4.x/build.xml
        blazeds/branches/4.x/qa/apps/qa-manual/build.xml
    Removed Paths:
        blazeds/branches/4.x/apps/samples/WEB-INF/lib/
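The clean-target approach described in the log might look something like this in an Ant build script (a hedged sketch, not taken from the BlazeDS build; `${webapp.dir}` is a hypothetical property standing in for each webapp's root directory):

```xml
<!-- Sketch only: delete the entire WEB-INF/lib contents in clean, instead of
     maintaining a per-jar list in build.properties. -->
<target name="clean">
    <delete failonerror="false" includeemptydirs="true">
        <fileset dir="${webapp.dir}/WEB-INF/lib" includes="**/*"/>
    </delete>
</target>
```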

    Found solution after many hours of search and test:
I have Launch2Net Premium 2.6.2 on my system to get a USB 3G modem stick to work. It installs many files into /System/Library/Extensions: SierraDIPService.kext, SierraFSPService.kext and SierraDevSupport.kext.
Removing SierraDIPService.kext from the above Extensions folder allows MobileDevice.pkg, and other installers that need to update system extensions (like soundflower.pkg), to work properly.
So, the Sierra driver is the culprit!! I have to uninstall Launch2Net from my system, but then my USB stick modem will not work anymore.
I have written to Nova System, the developer of Launch2Net, to get a fix for this problem.
Will update if there is any progress.

[svn:bz-trunk] 8210: Update source package target to remove some build artifacts that were being picked up

    Revision: 8210
    Author:   [email protected]
    Date:     2009-06-24 16:15:50 -0700 (Wed, 24 Jun 2009)
    Log Message:
Update source package target to remove some build artifacts that were being picked up
    Modified Paths:
        blazeds/trunk/build.xml

First, I agree with Karol: use the AUR so that pacman can do its job. Second, when you do your make, there is no reason to run it as root (until you do the make install). For the initial build, it is much safer not to use root; plus, all the files in your home directory will continue to belong to you, not to root.
    But, try the AUR.

  • [Solved] iSpin PKGBUILD license question

    Hello fellow archers,
    I am creating a PKGBUILD (my first one) for iSpin, a Tcl/Tk GUI for Spin, that I intend to upload on the AUR.
    But I am not sure about how to fill the license field of my PKGBUILD. This is why I ask for your help here.
Quoting Gerard J. Holzmann, iSpin's (and Spin's) author: for educational or research purposes, iSpin and Spin are not covered by any specific license, apart from the following few conditions:
iSpin and Spin are copyright Bell Labs and/or Caltech but available without charge or limitation. You can do anything with them, including modify, sell, or repost them, as long as you acknowledge the original source and make your changes available to everyone as well.
For commercial purposes, they are covered by the SPIN commercial license instead.
So basically, iSpin is dual-licensed, which means I should have something like this:
license=('custom:SPIN_PUBLIC' 'custom:SPIN_COMMERCIAL')
But in the case of non-commercial use, it is not really a license that applies.
What should I do? Leave only the commercial license, or put the quotation from Gerard Holzmann in a file and use it as the SPIN_PUBLIC license?
    Thanks in advance.
    Last edited by Ghostofkendo (2011-03-10 13:14:43)

The link you gave points to the "SPIN Software Public License", Version 1.0, 05/15/01.
At the bottom of that page:
    By downloading, installing, and using the SPIN software you signify that you accept the terms of this agreement. This acceptance is only required for commercial use of SPIN. Non-commercial use is restricted to educational use only. In all cases, no guarantee whatsoever is expressed or implied by the distribution of this code or any part thereof.
While the license doesn't look very restrictive, it seems that commercial use is everything EXCEPT educational use.
I suggest using single licensing: the SPIN Software Public License.
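On the packaging side, the usual Arch convention for a custom license is to install its text under /usr/share/licenses. A sketch with hypothetical file names (LICENSE stands in for whatever file holds the quoted terms; the source layout is assumed):

```shell
# PKGBUILD fragment (sketch only, names hypothetical)
license=('custom:SPIN')

package() {
  cd "$srcdir/$pkgname-$pkgver"
  # Install the license text where pacman expects custom licenses to live.
  install -Dm644 LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
}
```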

  • P67A-G65 New build a few questions

It has been a while since I built my last computer, and I have a few questions.
First, I am using a G65 board with a 2600K, Corsair Vengeance RAM, a WD SATA 6Gb/s hard drive and an ATI 6850 video card.
I have read that there are different settings for the hard drive in the BIOS, like IDE and some others. What is the optimal setting for the hard drive?
Next, when I first boot up, should I install only one stick of RAM, or can I put in both? I have seen differing opinions.
If anyone else has any tips that can help me with the build, I would appreciate it. I know there have been many issues with these boards, but I am trying to do what I can to prevent any problems.
Thanks for the help.

I have my drives set to IDE, but RAID is probably the optimal setting. Note that RAID does not require that you actually set up an array; rather, it allows for the most flexible protocol (i.e. RAID also enables AHCI, which allows request queueing on SATA drives). Why didn't I use AHCI? Well, I was daft when I did the install, and I don't want to be troubled with reinstalling.
You can certainly install both sticks of RAM, but if you have any issues you will need to back out to one stick to ease troubleshooting; i.e. the recommendation is to start with one stick. In my build I went with both sticks and have had zero issues, except for a BSOD when the system enters sleep mode under high load (I suspect an eventual BIOS update will fix this, though it could also be a Windows 7 issue).
If you are going for a CrossFire or SLI configuration, be sure to install the case cables before the video cards, as JFP2 will be blocked (the power LED goes there, if your case has a 3-pin power LED).
It is probably a bit easier to plug in the SATA cables earlier rather than later during the build, though this really depends on your case layout and how much clearance there is between the drive cage and the SATA ports. Also, video card 1 (if long) will sit over several of the Intel SATA ports. The ports are 90-degree, so they are not blocked, but it is easier to plug them in first.
You should use the Intel ports before the Marvell ports; also, it might not hurt to turn off unused devices in the BIOS (less wattage and less concern with drivers).
After the build, be sure to run something like Prime95 configured to use all the memory (torture test; custom-set the memory to use) for 5 or 6 hours (some recommend 24 hours). Others recommend memtest, but I've always had better luck with Prime95. It is not uncommon to encounter a bad memory stick (I can't really give a percentage, but 2+% would not shock me).
Don't overclock until you've tested the other components (i.e. with Prime95). Once the system is stable, be sure to enable XMP if you are using memory faster than 1333, and test again. Then bump the clock up (but not the multiplier). Monitor heat on first boot-up in case the fan is not installed correctly (not sure if you are using the stock or an aftermarket cooler). These CPUs will auto-shutdown in most cases before overheating, but it is a good thing to fix.

  • [Solved] how do i build my ISO image with archiso?

Hi all,
I've been following this wiki and it's been going really well (it hasn't gone wrong...), but the wiki page just sort of stops halfway through the configuration bit.
A Google search brought me to this page, which suggests using the "build.sh" script located in my chroot environment's /tmp/releng/ directory.
Running this script returns the error: "build.sh: line 207: syntax error near unexpected token '('"
Line 207 of the file reads:
paste -d"\n" <(sed "s|%ARCH%|i686|g" ${script_path}/aitab.${_iso_type}) \
I don't really know much about scripts, so I can't see what's wrong.
Any advice?
Sorry if I've missed a wiki or something n00bish like that. Also, I'm aware that someone who doesn't know what they're doing shouldn't be attempting archiso, but I thought it would be a fun learning experience.
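One hedged observation from outside the thread: `<(...)` on line 207 is bash process substitution, and "syntax error near unexpected token `('" is what you get when the script is run by a shell without that feature (dash, or bash invoked as sh in POSIX mode). If that is the cause here, `bash build.sh` should get past line 207 where `sh build.sh` fails. A quick demonstration:

```shell
# Process substitution <(...) is a bash feature, not POSIX sh.
# Under bash, paste -d"\n" interleaves the two streams line by line:
bash -c 'paste -d"\n" <(printf "a\nb\n") <(printf "1\n2\n")'
# prints: a, 1, b, 2 -- one per line.
# A shell without process substitution rejects the same command line with
# the "unexpected token `('" error, so invoke the script as
#   bash build.sh
# rather than
#   sh build.sh
```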
    here's the full build.sh in case it is helpful:
    #!/bin/bash
    set -e -u
    iso_name=archlinux
    iso_label="ARCH_$(date +%Y%m)"
    iso_version=$(date +%Y.%m.%d)
    install_dir=arch
    arch=$(uname -m)
    work_dir=work
    out_dir=out
    verbose=""
    script_path=$(readlink -f ${0%/*})
    # Base installation (root-image)
    make_basefs() {
    mkarchiso ${verbose} -w "${work_dir}" -D "${install_dir}" -p "base" create
mkarchiso ${verbose} -w "${work_dir}" -D "${install_dir}" -p "memtest86+ syslinux mkinitcpio-nfs-utils nbd curl" create
}
    # Additional packages (root-image)
    make_packages() {
mkarchiso ${verbose} -w "${work_dir}" -D "${install_dir}" -p "$(grep -v ^# ${script_path}/packages.${arch})" create
}
    # Copy mkinitcpio archiso hooks (root-image)
    make_setup_mkinitcpio() {
    if [[ ! -e ${work_dir}/build.${FUNCNAME} ]]; then
    local _hook
    for _hook in archiso archiso_shutdown archiso_pxe_common archiso_pxe_nbd archiso_pxe_http archiso_pxe_nfs archiso_loop_mnt; do
    cp /lib/initcpio/hooks/${_hook} ${work_dir}/root-image/lib/initcpio/hooks
    cp /lib/initcpio/install/${_hook} ${work_dir}/root-image/lib/initcpio/install
    done
    cp /lib/initcpio/install/archiso_kms ${work_dir}/root-image/lib/initcpio/install
    cp /lib/initcpio/archiso_shutdown ${work_dir}/root-image/lib/initcpio
    cp /lib/initcpio/archiso_pxe_nbd ${work_dir}/root-image/lib/initcpio
    cp ${script_path}/mkinitcpio.conf ${work_dir}/root-image/etc/mkinitcpio-archiso.conf
    : > ${work_dir}/build.${FUNCNAME}
fi
}
    # Prepare ${install_dir}/boot/
    make_boot() {
    if [[ ! -e ${work_dir}/build.${FUNCNAME} ]]; then
    local _src=${work_dir}/root-image
    local _dst_boot=${work_dir}/iso/${install_dir}/boot
    mkdir -p ${_dst_boot}/${arch}
    mkarchroot -n -r "mkinitcpio -c /etc/mkinitcpio-archiso.conf -k /boot/vmlinuz-linux -g /boot/archiso.img" ${_src}
    mv ${_src}/boot/archiso.img ${_dst_boot}/${arch}/archiso.img
    mv ${_src}/boot/vmlinuz-linux ${_dst_boot}/${arch}/vmlinuz
    cp ${_src}/boot/memtest86+/memtest.bin ${_dst_boot}/memtest
    cp ${_src}/usr/share/licenses/common/GPL2/license.txt ${_dst_boot}/memtest.COPYING
    : > ${work_dir}/build.${FUNCNAME}
fi
}
    # Prepare /${install_dir}/boot/syslinux
    make_syslinux() {
    if [[ ! -e ${work_dir}/build.${FUNCNAME} ]]; then
    local _src_syslinux=${work_dir}/root-image/usr/lib/syslinux
    local _dst_syslinux=${work_dir}/iso/${install_dir}/boot/syslinux
    mkdir -p ${_dst_syslinux}
    for _cfg in ${script_path}/syslinux/*.cfg; do
    sed "s|%ARCHISO_LABEL%|${iso_label}|g;
    s|%INSTALL_DIR%|${install_dir}|g;
    s|%ARCH%|${arch}|g" ${_cfg} > ${_dst_syslinux}/${_cfg##*/}
    done
    cp ${script_path}/syslinux/splash.png ${_dst_syslinux}
    cp ${_src_syslinux}/*.c32 ${_dst_syslinux}
    cp ${_src_syslinux}/*.com ${_dst_syslinux}
    cp ${_src_syslinux}/*.0 ${_dst_syslinux}
    cp ${_src_syslinux}/memdisk ${_dst_syslinux}
    mkdir -p ${_dst_syslinux}/hdt
    wget -O - http://pciids.sourceforge.net/v2.2/pci.ids | gzip -9 > ${_dst_syslinux}/hdt/pciids.gz
    cat ${work_dir}/root-image/lib/modules/*-ARCH/modules.alias | gzip -9 > ${_dst_syslinux}/hdt/modalias.gz
    : > ${work_dir}/build.${FUNCNAME}
fi
}
    # Prepare /isolinux
    make_isolinux() {
    if [[ ! -e ${work_dir}/build.${FUNCNAME} ]]; then
    mkdir -p ${work_dir}/iso/isolinux
    sed "s|%INSTALL_DIR%|${install_dir}|g" ${script_path}/isolinux/isolinux.cfg > ${work_dir}/iso/isolinux/isolinux.cfg
    cp ${work_dir}/root-image/usr/lib/syslinux/isolinux.bin ${work_dir}/iso/isolinux/
    cp ${work_dir}/root-image/usr/lib/syslinux/isohdpfx.bin ${work_dir}/iso/isolinux/
    : > ${work_dir}/build.${FUNCNAME}
fi
}
    # Customize installation (root-image)
    # NOTE: mkarchroot should not be executed after this function is executed, otherwise will overwrites some custom files.
    make_customize_root_image() {
    if [[ ! -e ${work_dir}/build.${FUNCNAME} ]]; then
    cp -af ${script_path}/root-image ${work_dir}
    chmod 750 ${work_dir}/root-image/etc/sudoers.d
    chmod 440 ${work_dir}/root-image/etc/sudoers.d/g_wheel
    mkdir -p ${work_dir}/root-image/etc/pacman.d
    wget -O ${work_dir}/root-image/etc/pacman.d/mirrorlist http://www.archlinux.org/mirrorlist/all/
    sed -i "s/#Server/Server/g" ${work_dir}/root-image/etc/pacman.d/mirrorlist
    chroot ${work_dir}/root-image /usr/sbin/locale-gen
    chroot ${work_dir}/root-image /usr/sbin/useradd -m -p "" -g users -G "audio,disk,optical,wheel" arch
    : > ${work_dir}/build.${FUNCNAME}
    fi
    }
    # Split out /lib/modules from root-image (makes more "dual-iso" friendly)
    make_lib_modules() {
    if [[ ! -e ${work_dir}/build.${FUNCNAME} ]]; then
    mv ${work_dir}/root-image/lib/modules ${work_dir}/lib-modules
    : > ${work_dir}/build.${FUNCNAME}
    fi
    }
    # Split out /usr/share from root-image (makes more "dual-iso" friendly)
    make_usr_share() {
    if [[ ! -e ${work_dir}/build.${FUNCNAME} ]]; then
    mv ${work_dir}/root-image/usr/share ${work_dir}/usr-share
    : > ${work_dir}/build.${FUNCNAME}
    fi
    }
    # Make [core] repository, keep "any" pkgs in a separate fs (makes more "dual-iso" friendly)
    make_core_repo() {
    if [[ ! -e ${work_dir}/build.${FUNCNAME} ]]; then
    local _url _urls _pkg_name _cached_pkg _dst _pkgs
    mkdir -p ${work_dir}/repo-core-any
    mkdir -p ${work_dir}/repo-core-${arch}
    pacman -Sy
    _pkgs=$(comm -2 -3 <(pacman -Sql core | sort | sed 's@^@core/@') \
    <(grep -v ^# ${script_path}/core.exclude.${arch} | sort | sed 's@^@core/@'))
    _urls=$(pacman -Sddp ${_pkgs})
    pacman -Swdd --noprogressbar --noconfirm ${_pkgs}
    for _url in ${_urls}; do
    _pkg_name=${_url##*/}
    _cached_pkg=/var/cache/pacman/pkg/${_pkg_name}
    _dst=${work_dir}/repo-core-${arch}/${_pkg_name}
    cp ${_cached_pkg} ${_dst}
    repo-add -q ${work_dir}/repo-core-${arch}/core.db.tar.gz ${_dst}
    if [[ ${_pkg_name} == *any.pkg.tar* ]]; then
    mv ${_dst} ${work_dir}/repo-core-any/${_pkg_name}
    ln -sf ../any/${_pkg_name} ${_dst}
    fi
    done
    : > ${work_dir}/build.${FUNCNAME}
    fi
    }
    # Process aitab
    # args: $1 (core | netinstall)
    make_aitab() {
    local _iso_type=${1}
    if [[ ! -e ${work_dir}/build.${FUNCNAME}_${_iso_type} ]]; then
    sed "s|%ARCH%|${arch}|g" ${script_path}/aitab.${_iso_type} > ${work_dir}/iso/${install_dir}/aitab
    : > ${work_dir}/build.${FUNCNAME}_${_iso_type}
    fi
    }
    # Build all filesystem images specified in aitab (.fs .fs.sfs .sfs)
    make_prepare() {
    mkarchiso ${verbose} -w "${work_dir}" -D "${install_dir}" prepare
    }
    # Build ISO
    # args: $1 (core | netinstall)
    make_iso() {
    local _iso_type=${1}
    mkarchiso ${verbose} -w "${work_dir}" -D "${install_dir}" checksum
    mkarchiso ${verbose} -w "${work_dir}" -D "${install_dir}" -L "${iso_label}" -o "${out_dir}" iso "${iso_name}-${iso_version}-${_iso_type}-${arch}.iso"
    }
    # Build dual-iso images from ${work_dir}/i686/iso and ${work_dir}/x86_64/iso
    # args: $1 (core | netinstall)
    make_dual() {
    local _iso_type=${1}
    if [[ ! -e ${work_dir}/dual/build.${FUNCNAME}_${_iso_type} ]]; then
    if [[ ! -d ${work_dir}/i686/iso || ! -d ${work_dir}/x86_64/iso ]]; then
    echo "ERROR: i686 or x86_64 build does not exist."
    _usage 1
    fi
    local _src_one _src_two _cfg
    if [[ ${arch} == "i686" ]]; then
    _src_one=${work_dir}/i686/iso
    _src_two=${work_dir}/x86_64/iso
    else
    _src_one=${work_dir}/x86_64/iso
    _src_two=${work_dir}/i686/iso
    fi
    mkdir -p ${work_dir}/dual/iso
    cp -a -l -f ${_src_one} ${work_dir}/dual
    cp -a -l -n ${_src_two} ${work_dir}/dual
    rm -f ${work_dir}/dual/iso/${install_dir}/aitab
    rm -f ${work_dir}/dual/iso/${install_dir}/boot/syslinux/*.cfg
    if [[ ${_iso_type} == "core" ]]; then
    if [[ ! -e ${work_dir}/dual/iso/${install_dir}/any/repo-core-any.sfs ||
    ! -e ${work_dir}/dual/iso/${install_dir}/i686/repo-core-i686.sfs ||
    ! -e ${work_dir}/dual/iso/${install_dir}/x86_64/repo-core-x86_64.sfs ]]; then
    echo "ERROR: core_iso_single build is not found."
    _usage 1
    fi
    else
    rm -f ${work_dir}/dual/iso/${install_dir}/any/repo-core-any.sfs
    rm -f ${work_dir}/dual/iso/${install_dir}/i686/repo-core-i686.sfs
    rm -f ${work_dir}/dual/iso/${install_dir}/x86_64/repo-core-x86_64.sfs
    fi
    paste -d"\n" <(sed "s|%ARCH%|i686|g" ${script_path}/aitab.${_iso_type}) \
    <(sed "s|%ARCH%|x86_64|g" ${script_path}/aitab.${_iso_type}) | uniq > ${work_dir}/dual/iso/${install_dir}/aitab
    for _cfg in ${script_path}/syslinux.dual/*.cfg; do
    sed "s|%ARCHISO_LABEL%|${iso_label}|g;
    s|%INSTALL_DIR%|${install_dir}|g" ${_cfg} > ${work_dir}/dual/iso/${install_dir}/boot/syslinux/${_cfg##*/}
    done
    mkarchiso ${verbose} -w "${work_dir}/dual" -D "${install_dir}" checksum
    mkarchiso ${verbose} -w "${work_dir}/dual" -D "${install_dir}" -L "${iso_label}" -o "${out_dir}" iso "${iso_name}-${iso_version}-${_iso_type}-dual.iso"
    : > ${work_dir}/dual/build.${FUNCNAME}_${_iso_type}
    fi
    }
    purge_single () {
    if [[ -d ${work_dir} ]]; then
    find ${work_dir} -mindepth 1 -maxdepth 1 \
    ! -path ${work_dir}/iso -prune \
    | xargs rm -rf
    fi
    }
    purge_dual () {
    if [[ -d ${work_dir}/dual ]]; then
    find ${work_dir}/dual -mindepth 1 -maxdepth 1 \
    ! -path ${work_dir}/dual/iso -prune \
    | xargs rm -rf
    fi
    }
    clean_single () {
    rm -rf ${work_dir}
    rm -f ${out_dir}/${iso_name}-${iso_version}-*-${arch}.iso
    }
    clean_dual () {
    rm -rf ${work_dir}/dual
    rm -f ${out_dir}/${iso_name}-${iso_version}-*-dual.iso
    }
    make_common_single() {
    make_basefs
    make_packages
    make_setup_mkinitcpio
    make_boot
    make_syslinux
    make_isolinux
    make_customize_root_image
    make_lib_modules
    make_usr_share
    make_aitab $1
    make_prepare $1
    make_iso $1
    }
    _usage () {
    echo "usage ${0} [options] command <command options>"
    echo
    echo " General options:"
    echo " -N <iso_name> Set an iso filename (prefix)"
    echo " Default: ${iso_name}"
    echo " -V <iso_version> Set an iso version (in filename)"
    echo " Default: ${iso_version}"
    echo " -L <iso_label> Set an iso label (disk label)"
    echo " Default: ${iso_label}"
    echo " -D <install_dir> Set an install_dir (directory inside iso)"
    echo " Default: ${install_dir}"
    echo " -w <work_dir> Set the working directory"
    echo " Default: ${work_dir}"
    echo " -o <out_dir> Set the output directory"
    echo " Default: ${out_dir}"
    echo " -v Enable verbose output"
    echo " -h This help message"
    echo
    echo " Commands:"
    echo " build <mode> <type>"
    echo " Build selected .iso by <mode> and <type>"
    echo " purge <mode>"
    echo " Clean working directory except iso/ directory of build <mode>"
    echo " clean <mode>"
    echo " Clean working directory and .iso file in output directory of build <mode>"
    echo
    echo " Command options:"
    echo " <mode> Valid values 'single' or 'dual'"
    echo " <type> Valid values 'netinstall', 'core' or 'all'"
    exit ${1}
    }
    if [[ ${EUID} -ne 0 ]]; then
    echo "This script must be run as root."
    _usage 1
    fi
    while getopts 'N:V:L:D:w:o:vh' arg; do
    case "${arg}" in
    N) iso_name="${OPTARG}" ;;
    V) iso_version="${OPTARG}" ;;
    L) iso_label="${OPTARG}" ;;
    D) install_dir="${OPTARG}" ;;
    w) work_dir="${OPTARG}" ;;
    o) out_dir="${OPTARG}" ;;
    v) verbose="-v" ;;
    h|?) _usage 0 ;;
    *)
    _msg_error "Invalid argument '${arg}'" 0
    _usage 1
    ;;
    esac
    done
    shift $((OPTIND - 1))
    if [[ $# -lt 1 ]]; then
    echo "No command specified"
    _usage 1
    fi
    command_name="${1}"
    if [[ $# -lt 2 ]]; then
    echo "No command mode specified"
    _usage 1
    fi
    command_mode="${2}"
    if [[ ${command_name} == "build" ]]; then
    if [[ $# -lt 3 ]]; then
    echo "No build type specified"
    _usage 1
    fi
    command_type="${3}"
    fi
    if [[ ${command_mode} == "single" ]]; then
    work_dir=${work_dir}/${arch}
    fi
    case "${command_name}" in
    build)
    case "${command_mode}" in
    single)
    case "${command_type}" in
    netinstall)
    make_common_single netinstall
    ;;
    core)
    make_core_repo
    make_common_single core
    ;;
    all)
    make_common_single netinstall
    make_core_repo
    make_common_single core
    ;;
    *)
    echo "Invalid build type '${command_type}'"
    _usage 1
    ;;
    esac
    ;;
    dual)
    case "${command_type}" in
    netinstall)
    make_dual netinstall
    ;;
    core)
    make_dual core
    ;;
    all)
    make_dual netinstall
    make_dual core
    ;;
    *)
    echo "Invalid build type '${command_type}'"
    _usage 1
    ;;
    esac
    ;;
    *)
    echo "Invalid build mode '${command_mode}'"
    _usage 1
    ;;
    esac
    ;;
    purge)
    case "${command_mode}" in
    single)
    purge_single
    ;;
    dual)
    purge_dual
    ;;
    *)
    echo "Invalid purge mode '${command_mode}'"
    _usage 1
    ;;
    esac
    ;;
    clean)
    case "${command_mode}" in
    single)
    clean_single
    ;;
    dual)
    clean_dual
    ;;
    *)
    echo "Invalid clean mode '${command_mode}'"
    _usage 1
    ;;
    esac
    ;;
    *)
    echo "Invalid command name '${command_name}'"
    _usage 1
    ;;
    esac
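A side note for anyone reading the script: every make_* step is guarded by a `build.${FUNCNAME}` stamp file, so re-running the script after a failure skips the steps that already completed. A minimal, self-contained sketch of that idiom (the directory and function name here are throwaway, not from the script):

```shell
#!/bin/bash
# Minimal version of the build.${FUNCNAME} stamp-file resume idiom.
work_dir=$(mktemp -d)

make_example() {
    if [[ ! -e ${work_dir}/build.${FUNCNAME} ]]; then
        echo "running ${FUNCNAME}"
        : > "${work_dir}/build.${FUNCNAME}"   # zero-byte stamp marks the step done
    fi
}

make_example    # first call does the work, prints "running make_example"
make_example    # second call is a no-op because the stamp exists
```

Deleting the stamp (or the whole work dir, as purge/clean do) forces the step to run again.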
    Last edited by gav989 (2012-02-12 10:59:11)

    Thanks for that. According to this, it should be run like
    /path/to/build.sh build single netinstall
    from inside the chroot.
    Posting this for my own reference and in case it helps others; will mark as solved.
    Thanks again for your help.
    Last edited by gav989 (2012-02-12 10:58:04)

  • [SOLVED] ipager does not build. Patched! Now it does.

    Dig in, children. ipager really is the coolest little thing!
    Some thanks should go to berbae. I just mashed up an AUR comment post of his with a little code fix I located in a blog post at the confignewton tech blog (ditto, thanks).
    This patch supersedes the two other patches that package maintainer wakwanza is shipping with ipager-1.1.0-8. To clarify: please swap out both of those patches (ipager.patch + ipager-gcc43.patch) and use ipager-1.1.0-20120429.patch instead for a flawless build. (3 deprecation warnings + 1 harmless other warning remain, though)
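For anyone updating the PKGBUILD by hand in the meantime, the swap amounts to replacing the two old patch entries. A hedged sketch of the relevant PKGBUILD lines — only the patch filename and its md5sum come from this post; the tarball entry and the scons call are assumptions, not the real package's contents:

```shell
# Hypothetical PKGBUILD excerpt: ship only the combined patch.
source=("${pkgname}-${pkgver}.tar.bz2"   # upstream tarball entry as already listed
        "ipager-1.1.0-20120429.patch")   # replaces ipager.patch + ipager-gcc43.patch
md5sums=('SKIP'
         'f147f6a4ec3f8779bdc1c12cfbd5e03f')

build() {
  cd "${srcdir}/ipager-1.1.0"
  # The patch headers use bare filenames (SConstruct, iconfig.cpp, ...), so -p0:
  patch -Np0 -i "${srcdir}/ipager-1.1.0-20120429.patch"
  scons
}
```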
       ipager-1.1.0-20120429.patch:
    --- SConstruct.orig 2005-11-06 05:23:24.000000000 -0600
    +++ SConstruct 2012-04-29 06:59:11.927253922 -0500
    @@ -7,15 +7,16 @@
    # options
    ipager_optfile = [ 'scons.opts', 'user.opts' ]
    -ipager_options = Options(ipager_optfile)
    -ipager_options.AddOptions(
    - BoolOption('debug', 'build debug version', 0),
    - BoolOption('debug_events', 'debug xserve events', 0),
    - BoolOption('xinerama', 'support xinerama', 0),
    +ipager_options = Variables(ipager_optfile)
    +ipager_options.AddVariables(
    + BoolVariable('debug', 'build debug version', 0),
    + BoolVariable('debug_events', 'debug xserve events', 0),
    - PathOption('PREFIX', 'install-path base', '/usr/local'),
    - PathOption('DESTDIR', 'install to $DESTDIR/$PREFIX', '/')
    + BoolVariable('xinerama', 'support xinerama', 0),
    +
    + PathVariable('PREFIX', 'install-path base', '/usr'),
    + PathVariable('DESTDIR', 'install to $DESTDIR/$PREFIX', '/')
    @@ -31,10 +32,10 @@
    ipager_env = Environment(options = ipager_options, ENV = os.environ)
    ipager_env.Append(
    - CPPFLAGS = [ '-Wall' ],
    - CPPPATH = [ '/usr/X11R6/include' ],
    - LIBPATH = ['/usr/X11R6/lib']
    + CPPFLAGS = [ '-Wall', '-march=native', '-O' ],
    + CPPPATH = [ '/usr/include' ],
    + LIBPATH = [ '/usr/lib' ]
    +)
    ipager_options.Update(ipager_env)
    @@ -115,11 +116,10 @@
    else:
    print "yes"
    ipager_env.AppendUnique(
    - CPPPATH = imlib2_env.Dictionary()['CPPPATH'],
    - CCFLAGS = imlib2_env.Dictionary()['CCFLAGS'],
    - LIBPATH = imlib2_env.Dictionary()['LIBPATH'],
    - LIBS = imlib2_env.Dictionary()['LIBS']
    + CPPPATH = imlib2_env.Dictionary().get('CPPPATH'),
    + CCFLAGS = imlib2_env.Dictionary().get('CCFLAGS'),
    + LIBPATH = imlib2_env.Dictionary().get('LIBPATH'),
    + LIBS = imlib2_env.Dictionary().get('LIBS')
    conf.Finish()
    --- iconfig.cpp.orig 2012-04-28 19:34:36.902151855 -0500
    +++ iconfig.cpp 2012-04-28 18:31:28.000000000 -0500
    @@ -30,11 +30,11 @@
    #include <iostream>
    #include <fstream>
    #include <sstream>
    +#include <cstdlib>
    #include <sys/stat.h>
    #include <sys/types.h>
    +using namespace std;
    template <class T>
    bool from_string(T &t,
    --- ipager.cpp.orig 2012-04-28 19:34:36.928818549 -0500
    +++ ipager.cpp 2012-04-28 18:43:26.000000000 -0500
    @@ -31,6 +31,7 @@
    #include <iostream>
    #include <string>
    +#include <unistd.h>
    using namespace std;
    #include "pager.h"
    --- pager.cpp.orig 2012-04-28 19:34:36.928818549 -0500
    +++ pager.cpp 2012-04-28 19:07:28.000000000 -0500
    @@ -266,13 +266,13 @@
    /* Window updates go here */
    if (m_window_update.updateNetWindowList())
    updateNetWindowList();
    - if (m_window_update.displayIcons())
    - if (m_window_update.motion())
    + if (m_window_update.displayIcons()) {
    + if (m_window_update.motion()) {
    displayIcons(m_window_update.getX(), m_window_update.getY());
    - else
    + } else {
    displayIcons();
    + }
    + }
    /* ImLib updates go here */
    --- wm.cpp.orig 2012-04-28 19:34:36.928818549 -0500
    +++ wm.cpp 2012-04-28 18:42:07.000000000 -0500
    @@ -27,7 +27,9 @@
    #include <time.h>
    #include "atoms.h"
    +#include <cstdlib>
    +using namespace std;
    WM * WM::m_instance = 0;
    bool WM::x_error = false;
    This is the md5sum you'll get, provided you add a newline at the patch EOF.
    f147f6a4ec3f8779bdc1c12cfbd5e03f ipager-1.1.0-20120429.patch
    Note: I messed with 3 pastebin sites, and now these \[code\] tags, and was never once able to share the file without truncation of the trailing newline at EOF. Hours of learning (yes, googling) how not to do it properly. Anyone care to point me to a non-truncating solution?  EDIT: [SOLVED] The patch program ignores lines beginning with '#'. The solution is obvious after re-reading the manual.
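For reference, patch(1) really does skip comment/garbage lines that aren't part of a hunk, which is why appending a '#' line at EOF protects the real final newline from pastebin truncation. A self-contained demonstration with throwaway filenames (the comment is placed before the hunk here; the same skipping applies to a trailing one):

```shell
#!/bin/bash
# Show that patch(1) ignores lines outside the actual diff hunks.
tmp=$(mktemp -d) && cd "$tmp"
printf 'hello\n' > greet.txt
cat > fix.patch <<'EOF'
# patch(1) ignores this garbage line.
--- greet.txt
+++ greet.txt
@@ -1 +1 @@
-hello
+goodbye
EOF
patch -p0 < fix.patch
cat greet.txt                                  # prints "goodbye"
[ -z "$(tail -c 1 greet.txt)" ] && echo "ends with newline"
```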
    P.S. My first post! (pat self on back) Sorry for my excellent english. :]
    Last edited by zero2cx (2012-04-29 20:10:01)

    berbae wrote:Thank you for sharing this.
    But it would be nice to have the maintainer wakwanza make the changes in AUR and increment the package release to 1.1.0-9
    Could you send a mail to him, and if there is no answer in one month, ask a TU in the AUR-general mailing list to orphan the package.
    And then you or someone else can adopt it and make the changes.
    You're welcome.
    wakwanza has been notified and has been on the clock since yesterday a.m. The AUR commenting system generates an email to the maintainer with every new comment, I've read. But since this package's recent maintenance history is sketchy at best, I reached out to him/her myself. We will wait now.
    berbae wrote:
    Also you should have posted the link of the blog you found, but here it is: http://confignewton.com/?p=152
    I didn't try to rebuild the package recently, I presume it doesn't build with actual build chain tools...
    I will try your patch, but I prefer first that the maintainer updates the package in AUR to 1.1.0-9 with your modifications.
    Thank you for the link. I did not want to cross any lines into website solicitation, so I proceeded too cautiously there.
    Yes, please try this and post back.
    [EDIT]: Awesome, wakwanza! --> Fresh ipager available: https://aur.archlinux.org/packages.php?ID=5415
    Last edited by zero2cx (2012-05-01 21:00:31)

  • Some very "newbie" networking questions -- trying to get started

    Hello all. Since this forum is so friendly for issues related to Arch, I thought I'd post my semi-arch related question.
    I'm trying to create a network for my house, consisting of about 3 computers. I have a book reference, TCP/IP Network Administration from O'reilly, but I think I need a bit of a "kick-start" in order to get to a level in which I can comfortably understand the book (if that makes sense).
    I've got internet access working by connecting to a router and having everything configured by DHCP (i.e., in rc.conf, eth0="dhcp"). What's bothering me (and I don't even know if it should) is that the other computers on the physical network are not visible to me.
    I.e., when I try to connect to them by hostname, they aren't recognised. My understanding is that I either need to set the other computers' hostnames manually in /etc/hosts, or run something like a DNS server on one of the computers in the network so it can resolve the others.
    My other doubt is that I think I've got a dynamic IP address (though it might be static, I'm not quite sure). Does that change things?
    I don't expect you guys to completely walk me through setting up the network (that's what the book is for!), but some basic advice on my misconceptions (I'm sure I have plenty) or on my situation in general would be awesome and highly appreciated. I'd also be happy to provide any additional info that would be helpful.
    Thanks in advance.

    ralvez wrote:
    My set up is simple.
    I have a Linux router running on an old Pentium machine. It has two cards, one configured for the private network (with an address like 192.168.1.1) and the second card set up to take addresses from DHCP ( since that's how Rogers sends you the public address).
    In that machine I run my firewall, SmoothWall (smoothwall.org) and have enabled NAT so the machines in the inside network can go to the Internet.
    I have Samba set up in another machine to share files (that's an Arch system) with static IP, my daughter's machine (another Arch system), my system (obviously another Arch with static IP) and my wife's machine with Windows XP system (static IP too).
    So, the key concepts here are:
    1. Use static IPs for the private network
    2. Use DHCP for Rogers
    3. Use NAT in your router so all the machines can go to the Internet but no one from the Internet can get to them.
    Hope this helps.
    R
    p.s. feel free to contact me off the list if you need further details/help.
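To make point 1 concrete for the rc.conf setup the OP mentioned, static addressing on the private side looked roughly like this in that era of Arch. Every address below is made up for an example 192.168.1.0/24 network; adjust to your router's range:

```shell
# Hypothetical /etc/rc.conf networking lines (pre-netctl Arch):
eth0="eth0 192.168.1.10 netmask 255.255.255.0 broadcast 192.168.1.255"
INTERFACES=(eth0)
gateway="default gw 192.168.1.1"   # the router from the setup above
ROUTES=(gateway)
```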
    Aha! So, for example, the network 192.168.0.0 with mask 255.255.255.0 is a private network set up by my router, right?
    So the addresses in 192.168.0.0 never change, but the public IP address of the network does change?
    Does that mean that if, for example, I have a host called 192.168.0.32 that its address never changes? Can I simply add that to all the other hosts' /etc/hosts file and it won't change?
    All I was ever concerned about was allowing hosts in the network to contact each other via hostname. If that never changes then I'm all set.
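Yes: if each box has a fixed private address, static /etc/hosts entries on every machine are enough for hostname resolution. A sketch with made-up names and addresses, written to a temp file here so nothing real is modified:

```shell
#!/bin/bash
# Hypothetical /etc/hosts entries for a 192.168.0.0/24 LAN.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
127.0.0.1      localhost
192.168.0.32   desktop
192.168.0.33   laptop
EOF
# The resolver's "files" database answers lookups straight from this table:
awk '$2 == "desktop" { print $1 }' "$hosts_file"    # prints 192.168.0.32
```

The same two lines would go into /etc/hosts on each machine; DHCP on the router is then only relevant for the public-facing address.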
    BTW, thanks for your continuing help!
    Last edited by Jessehk (2007-08-10 16:33:57)

  • Some php + berkeley db questions

    I have some questions about using php with bdb.
    First, I compiled Berkeley db and then linked php to it using the configure directive.
    Then I accessed bdb through the php standard dba_* APIs. This works, but it seems like locking is broken. The php documentation (and common sense) says that calls to dba_open() with a write lock will block when another such call has succeeded in another process. But my tests show many concurrent processes all getting write locks no problem.
    So then I compiled the native php_db4 extension that ships with bdb. I tried to use the API documented here:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/ext/php.html
    Can anybody direct me to more complete (and more correct) documentation of this API? For instance, the put method is not shown in the Db4 class, but it does exist.
    I'm trying to infer how the PHP API works from the C API docs, but it's not very easy, particularly when it comes to the error codes returned. Is there a db_strerror equivalent in the PHP API?
    I can get the simple demos that come in the db4 php dir to work, but what I need is a locking environment, much like the one documented here:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/cam/intro.html
    However, when I try to open the DBENV with the DB_INIT_CDB and DB_INIT_MPOOL flags, as directed, the call fails in PHP. I cannot figure out why, or how to get an error code or message I can debug.
    Any help will be much appreciated. If you could just point me at any real-world examples of php and berkeley db that would be a great start.

    Hi,
    As far as I'm aware, there is no extra documentation on PHP & BDB (maybe just php.net :) ). Also, I don't know whether anyone has published their source code.
    What kind of application do you want to build? I think a good option for the moment is to try the BDB XML version ( Berkeley DB XML, http://www.oracle.com/technology/products/berkeley-db/xml/index.html ), since there are many cases in which BDB XML is used via PHP, which is why you can get better support for it. I think you could achieve the same approach using XML; please let me know whether you agree.
    BDB XML's PHP APIs are mapped over the C++ API, and you'll have the ability to use XML and XQuery rather than tables and SQL.
    If you can point me to a specific issue in the PHP APIs for BDB and provide test cases, I can try to work them out. Also, in the next few weeks I'll try to have a look at the PHP APIs in my spare time, and maybe I'll be able to work on supporting the latest BDB APIs. If somebody working on a PHP app is willing to help test and maintain the PHP APIs, please post here.
    Regards,
    Bogdan Coman, Oracle
