[Solved] makepkg into background

Hey,
I would like to know if there is a possibility to send the 'makepkg' command into the background. I have tried adding an '&', but the compilation always stopped at some point; without the '&' the compilation finishes successfully.
The scenario I want to have:
1. Connect to the compilation server via ssh (e.g. from another client or a mobile phone)
2. Start compilation using 'makepkg'
3. Disconnect the ssh session
4. Reconnect via ssh at any time, from any device, and check the compilation state.
Is this possible?
Last edited by dejavu (2012-10-18 09:56:35)
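For reference, the usual way to get this detach/reattach workflow is a terminal multiplexer such as tmux or screen. A minimal sketch, assuming tmux is installed (the hostname is a placeholder; this is not quoted from the thread's own solution):
# on the build server, inside the ssh session:
tmux new -s build          # start a named session
makepkg                    # run the build inside it
# detach with Ctrl-b d, then drop the ssh connection
# later, from any device:
ssh user@buildserver       # placeholder host
tmux attach -t build       # reattach and watch the compilation state
The same works with screen (screen -S build to start, screen -r build to reattach), or with nohup plus a log file if you only need the output rather than an interactive session.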

headkase wrote:Don't forget to edit your first post and put "[Solved]" somewhere in the title..
For sure. ;-)
Just thought an additional button for marking a thread as solved would be nice. Then it would be much easier not to forget about this. ;-)

Similar Messages

  • I just updated 3 addons and now Firefox goes into background mode and I am not able to reach the Tools/Addons to disable any addons.

    Firefox stays in foreground for maybe 10 secs before going into background. I can't reach FF to do anything with FF.
    I have to use Task Manager to close FF.
    I don't remember which 3 addons were updated, but I was notified of updates when I tried to open FF at about midnight this morning.

    hi there, I tried removing the sessionstore.js file and fired Firefox up again, but still no joy on the Windows XP machine; all the other browsers work fine, it just keeps hanging. Reading the article from the link you sent, I can't see any of the issues I am getting, but I did manage to uninstall FF8 again, delete the leftover folders and download v3.6.24 from here http://www.mozilla.org/en-US/firefox/all-older.html
    then I updated from there and all seemed to go well, but yet again I got the same issue: when I open the browser I am unable to type in the search box or use the mouse on the File/Edit menus, rendering the browser useless. So I think it is the update to v8.0 that is causing the issue. When I reinstalled I had no themes and no plugins; of the extensions on the first start, MS .NET was ticked and Java was un-ticked, and the browser still hung forever. I think I may stick with the older version for now until a fix is available, as there is something on that PC that does not like this new version... but I hope you can help me upgrade, as it's annoying now
    :-)
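    One standard way to get far enough to remove the freshly updated add-ons (not something suggested in this thread, just a hedged sketch) is to start Firefox with all add-ons disabled:
    # Safe Mode starts Firefox with extensions disabled; open the Add-ons
    # manager from there and remove or disable the three updated add-ons
    firefox -safe-mode
    On Windows the same thing can be reached by holding Shift while launching Firefox.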

  • How to put program into background.

    dear all,
         i want to put one of my Z reports into background execution.
    please tell me the steps to do this.
    thanks in advance.
    Vinod

    Vinod,
      Scheduling Background Jobs:
    1.        Background jobs are scheduled by Basis administrators using transaction SM36.
    2.        To run a report in a background, a job needs to be created with a step using the report name
    and a variant for selection parameters. It is recommended to create a separate variant for each
    scheduled job to produce results for specific dates (e.g. previous month) or organizational units (e.g.
    company codes).
    3.        While defining the step, the spool parameters need to be specified
    (Step -> Print Specifications -> Properties) to secure the output of the report and help authorized users
    find the spool request. The following parameters need to be maintained:
    a.        Time of printing: set to “Send to SAP spooler Only for now”
    b.        Name – abbreviated name to identify the job output
    c.        Title – free form description for the report output
    d.        Authorization – a value defined by Security in user profiles to allow those users to access
    this spool request (authorization object  S_SPO_ACT, value SPOAUTH).  Only users with matching
    authorization value in their profiles will be able to see the output.
    e.        Department – set to appropriate department/functional area name. This field can be used in
    a search later.
    f.        Retention period – set to “Do not delete” if the report output needs to be retained for more
    than 8 days. Once the archiving/document repository solution is in place the spool requests could
    be automatically moved to the archive/repository. Storage Mode parameter on the same screen
    could be used to immediately send the output to archive instead of creating a spool request.
    Configuring user access:
    1.        To access a report output created by a background job, a user must have at
    least access to SP01 (Spool requests) transaction without restriction on the user
    name (however, by itself this will not let the user see all spool requests). To have
    that access the user must have S_ADMI_FCD authorization object in the profile with
    SPOR (or SP01) value of S_ADMI_FCD parameter (maintained by Security).
    2.        To access a particular job’s output in the spool, the user must have
    S_SPO_ACT object in the profile with SPOAUTH parameter matching the value used
    in the Print Specifications of the job (see p. 3.d above).
    3.        Levels of access to the spool (display, print once, reprint, download, etc) are
    controlled by SPOACTION parameter of S_SPO_ACT. The user must have at least
    BASE access (display).
    On-line reports:
    1.        Exactly the same configuration can be maintained for any output produced
    from R/3. If a user clicks the "Parameters" button on an SAP printer selection dialog, it
    allows them to specify all the parameters as described in p. 3 of
    “Scheduling background jobs” section. Thus any output created by an online report
    can be saved and accessed by any user authorized to access that spool request
    (access restriction provided by the Authorization field of the spool request
    attributes, see p. 3.d of “Scheduling background jobs” section).
    Access to report’s output:
    1.        A user with proper access (see Configuring user access above) can
    retrieve a job/report output through transaction SP01.
    2.        The selection screen can be configured by clicking “Further selection
    criteria…” button (e.g. to bring “Spool request name (suffix 2)” field or hide other
    fields).
    3.        The following fields can be used to search for a specific output (Note that
    Created By must be blank when searching for scheduled job’s outputs)
    a.        Spool request name (suffix 2) – corresponds to a spool name in p. 3.b in
    “Scheduling background jobs” section above).
    b.        Date created – to find an output of a job that ran within a certain date range.
    c.        Title – corresponds to spool Title in p. 3.c in “Scheduling background jobs”
    section above).
    d.        Department - corresponds to spool Department in p. 3.e in “Scheduling
    background jobs” section above).
    4.        Upon entering selection criteria, the user clicks the Execute button to
    retrieve the list of matching spool requests.
    5.        From the spool list the user can use several functions, such as viewing the
    content of a spool request, printing it, viewing its attributes, etc. (some functions may
    need special authorization, see p. 3 in Configuring user access)
    a.        Click the Print button to print the spool request with the default attributes
    (usually defined with the job definition). It will print on the printer that was
    specified when the job was created.
    b.        Click the "Print with changed attributes" button to print the spool request
    with different attributes (e.g. changing the printer name).
    c.        Click the "Display contents" button to preview the spool request contents.
    Print and Download functions are available from the preview mode.
    Pls. reward if useful....

  • [SOLVED] makepkg -s with nested AUR deps on "userless" systems

    hi..
    so for the last two years i was able to solve every Arch Linux problem i had from infos around the wiki, the forum, etc. now i am stuck.
    here's the scenario:
    i am automating the installation of full systems for headless usage.
    no user required and/or wanted.
    this works perfectly until i need something special from the AUR.
    because if a package from the AUR has no dependencies we can run makepkg as user nobody:
    ..# sudo -u nobody makepkg
    but as soon as we need
    ..# sudo -u nobody makepkg -s
    to resolve dependencies from the AUR this fails. "nobody" has no password, therefore we can not sudo.
    my preferred idea would be to create a short-lived dummy user. but how do I get this dummy to handle creation and entry of the password without actual user interaction? (remember: headless automation)
    i would really like to avoid the workaround of running my own build server with all the packages i need for the different target machines, putting those built packages somewhere on the internet and then downloading them to the designated machine to "pacman -U" them there.
    i hope someone with a similar use-case can give me some hints.
    cheers,
    grubernd
    Last edited by grubernd (2015-05-16 17:17:58)

    thanks.
    progandy wrote:…or if you use su, then allow password-free su execution for the wheel group (/etc/pam.d/su) and…
    nice usage of su, will have to dig more into that and i should be able to not need sudo as an extra package.
    but i am really trying to avoid the NOPASSWD route for security reasons.
    apologies, i forgot to mention that in my original post.
    Trilby wrote:
    Why not just script it:
    source PKGBUILD
    pacman -S --asdeps $depends $makedepends
    sudo -u nobody makepkg
    thanks. seeing it written down differently showed me the flaw in my wishful thinking:
    makepkg can of course resolve dependencies, but it can never resolve a dependency where one AUR package requires another not-yet-built AUR package. it just doesn't work. that was the block i had.
    although not really beautiful i will just drill down the package-chain and resolve dependencies myself in the installer scripts.
    (marking thread as solved)
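    For anyone automating the same thing, here is a minimal sketch of that "drill down the package-chain" approach: build a hand-ordered list of AUR packages as user nobody and install each one before the next needs it. The package names and the staging directory are placeholders, and it assumes the AUR tarballs have already been extracted there and that the script runs as root (the headless-installer context above):
    #!/bin/bash
    set -e
    cd /tmp/aurbuild                     # one extracted AUR tarball per package
    for pkg in libfoo bar; do            # hypothetical names, deepest dependency first
        pushd "$pkg"
        depends=() makedepends=()        # reset before sourcing the next PKGBUILD
        source PKGBUILD                  # Trilby's trick from above
        for dep in "${depends[@]}" "${makedepends[@]}"; do
            dep=${dep%%[<>=]*}           # strip version constraints
            # repo deps get installed here; AUR deps must appear earlier in the list
            pacman -Qq "$dep" >/dev/null 2>&1 || pacman -S --noconfirm --asdeps "$dep"
        done
        chown -R nobody .                # makepkg refuses to run as root
        sudo -u nobody makepkg
        pacman -U --noconfirm ./*.pkg.tar.xz   # assumes default PKGDEST/PKGEXT
        popd
    done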

  • [SOLVED] Makepkg makes qmake or make act weird and causes trouble

    I am trying to build an Arch package from a Qt app. The app is split into several libraries that are compiled with the main application. Installation instructions for the whole app are defined in src.pro (see the hierarchy below), which are then written into the Makefiles that are generated on-the-fly during the build process. When I run 'qmake && make && make install' manually, everything goes fine. But when I try to build the package using a PKGBUILD, for some reason the Makefile generated from src.pro is missing the installation instructions for the libraries (lib1, lib2, etc.).
    I don't really have any idea what the problem is. I studied the output of the make process and it seems that when running makepkg, qmake first runs on each subdirectory to generate the Makefiles and only then begins the actual build, whereas running make manually builds one directory, then the next, and only generates a Makefile when entering a directory. Since the libraries marked for installation do not exist at the beginning of the build process, they are not put into the Makefile at all, I think.
    My first idea for a solution was to move the installation instructions into the libraries' own .pro files, but that didn't seem to change anything. The build() function in the PKGBUILD only runs qmake, make and make install - the same things I run manually.
    The directory structure of the project is like this:
    project/
        project.pro
        src/
            src.pro
            lib1/
                lib1.pro
            lib2/
                lib2.pro
    Last edited by Verge (2011-10-07 19:07:33)

    After many hours I finally came up with a solution that was too easy... I only needed an "install.pri" file in which I added two lines:
    target.path = /path/
    INSTALLS += target
    Next I just included that file in each lib's .pro file and now the makepkg process works. "target" is something of a magic variable for qmake.
    I consider this solved.
    Last edited by Verge (2011-10-07 19:08:22)

  • [SOLVED] makepkg error: Why is it looking for '.part' files?

    I've been seeing the following problem with every package I try to build with makepkg.  The download works, but then makepkg complains that it can't find the source file:
    ==> Making package: android-sdk r20-2 (Fri Jun 29 11:01:35 CDT 2012)
    ==> Checking runtime dependencies...
    ==> Checking buildtime dependencies...
    ==> Retrieving Sources...
    -> Downloading android-sdk_r20-linux.tgz...
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed
    100 78.7M 100 78.7M 0 0 1640k 0 0:00:49 0:00:49 --:--:-- 2166k
    mv: cannot stat ‘/usr/local/src/android-sdk_r20-linux.tgz.part’: No such file or directory
    ==> ERROR: Failure while downloading android-sdk_r20-linux.tgz
    Aborting...
    Why is it looking for the .part file, instead of the .tgz file?  When I run makepkg again, without doing anything in between, it finds the .tgz file and goes on its merry way.
    I'm using curl, and I haven't tried wget.  Judging by my unscientific sample of forum posts it looks like wget is the de facto standard, so this may not be on anyone else's radar.
    Here's my makepkg.conf:
    # /etc/makepkg.conf
    # SOURCE ACQUISITION
    #-- The download utilities that makepkg should use to acquire sources
    # Format: 'protocol::agent'
    DLAGENTS=('ftp::/usr/bin/curl -fC - --ftp-pasv --retry 3 --retry-delay 3 -o %o %u'
    'http::/usr/bin/curl -fLC - --retry 3 --retry-delay 3 -o %o %u'
    'https::/usr/bin/curl -fLC - --retry 3 --retry-delay 3 -o %o %u'
    'rsync::/usr/bin/rsync -z %u %o'
    'scp::/usr/bin/scp -C %u %o')
    # Other common tools:
    # /usr/bin/snarf
    # /usr/bin/lftpget -c
    # /usr/bin/wget
    # ARCHITECTURE, COMPILE FLAGS
    CARCH="x86_64"
    CHOST="x86_64-unknown-linux-gnu"
    #-- Compiler and Linker Flags
    # -march (or -mcpu) builds exclusively for an architecture
    # -mtune optimizes for an architecture, but builds for whole processor family
    CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -D_FORTIFY_SOURCE=2"
    CXXFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -D_FORTIFY_SOURCE=2"
    LDFLAGS="-Wl,-O1,--sort-common,--as-needed,-z,relro"
    #-- Make Flags: change this for DistCC/SMP systems
    MAKEFLAGS="-j4"
    # BUILD ENVIRONMENT
    # Defaults: BUILDENV=(fakeroot !distcc color !ccache check !sign)
    # A negated environment option will do the opposite of the comments below.
    #-- fakeroot: Allow building packages as a non-root user
    #-- distcc: Use the Distributed C/C++/ObjC compiler
    #-- color: Colorize output messages
    #-- ccache: Use ccache to cache compilation
    #-- check: Run the check() function if present in the PKGBUILD
    #-- sign: Generate PGP signature file
    BUILDENV=(fakeroot !distcc color !ccache check !sign)
    #-- If using DistCC, your MAKEFLAGS will also need modification. In addition,
    #-- specify a space-delimited list of hosts running in the DistCC cluster.
    #DISTCC_HOSTS=""
    #-- Specify a directory for package building.
    #BUILDDIR=/tmp/makepkg
    # GLOBAL PACKAGE OPTIONS
    # These are default values for the options=() settings
    # Default: OPTIONS=(strip docs libtool emptydirs zipman purge !upx)
    # A negated option will do the opposite of the comments below.
    #-- strip: Strip symbols from binaries/libraries
    #-- docs: Save doc directories specified by DOC_DIRS
    #-- libtool: Leave libtool (.la) files in packages
    #-- emptydirs: Leave empty directories in packages
    #-- zipman: Compress manual (man and info) pages in MAN_DIRS with gzip
    #-- purge: Remove files specified by PURGE_TARGETS
    #-- upx: Compress binary executable files using UPX
    OPTIONS=(strip docs !libtool emptydirs zipman purge !upx)
    #-- File integrity checks to use. Valid: md5, sha1, sha256, sha384, sha512
    INTEGRITY_CHECK=(md5)
    #-- Options to be used when stripping binaries. See `man strip' for details.
    STRIP_BINARIES="--strip-all"
    #-- Options to be used when stripping shared libraries. See `man strip' for details.
    STRIP_SHARED="--strip-unneeded"
    #-- Options to be used when stripping static libraries. See `man strip' for details.
    STRIP_STATIC="--strip-debug"
    #-- Manual (man and info) directories to compress (if zipman is specified)
    MAN_DIRS=({usr{,/local}{,/share},opt/*}/{man,info})
    #-- Doc directories to remove (if !docs is specified)
    DOC_DIRS=(usr/{,local/}{,share/}{doc,gtk-doc} opt/*/{doc,gtk-doc})
    #-- Files to be removed from all packages (if purge is specified)
    PURGE_TARGETS=(usr/{,share}/info/dir .packlist *.pod)
    # PACKAGE OUTPUT
    # Default: put built package and cached source in build directory
    #-- Destination: specify a fixed directory where all packages will be placed
    PKGDEST=/usr/local/pkgs
    #-- Source cache: specify a fixed directory where source files will be cached
    SRCDEST=/usr/local/src
    #-- Source packages: specify a fixed directory where all src packages will be placed
    #SRCPKGDEST=/home/srcpackages
    #-- Packager: name/email of the person or organization building packages
    PACKAGER="Whitney Marshall <[email protected]>"
    #-- Specify a key to use for package signing
    GPGKEY="E4FB694E"
    # EXTENSION DEFAULTS
    # WARNING: Do NOT modify these variables unless you know what you are
    # doing.
    PKGEXT='.pkg.tar.xz'
    SRCEXT='.src.tar.gz'
    # vim: set ft=sh ts=2 sw=2 et:
    Last edited by wmarshall (2012-07-02 15:39:16)

    It looks like this code is the culprit.  I don't have time to dig into it right now, but presumably this works with wget.  I would have thought any cmdline downloader would manage renaming the .part file itself...?
    makepkg, lines 395-417 (pacman 4.0.3-2):
    395     # replace %o by the temporary dlfile if it exists
    396     if [[ $dlcmd = *%o* ]]; then
    397         dlcmd=${dlcmd//\%o/\"$file.part\"}
    398         dlfile="$file.part"
    399     fi
    ...
    414     # rename the temporary download file to the final destination
    415     if [[ $dlfile != "$file" ]]; then
    416         mv -f "$SRCDEST/$dlfile" "$SRCDEST/$file"
    417     fi
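    Since the curl agents are the main suspect here, one low-effort check (a sketch, not a fix confirmed in this thread) is to point the http/https entries in /etc/makepkg.conf at wget and see whether the '.part' error still appears; makepkg substitutes %o with the temporary "<file>.part" name either way, so this only swaps the download tool:
    DLAGENTS=('ftp::/usr/bin/curl -fC - --ftp-pasv --retry 3 --retry-delay 3 -o %o %u'
              'http::/usr/bin/wget -c -t 3 --waitretry=3 -O %o %u'
              'https::/usr/bin/wget -c -t 3 --waitretry=3 -O %o %u'
              'rsync::/usr/bin/rsync -z %u %o'
              'scp::/usr/bin/scp -C %u %o')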

  • SOLVED: makepkg fails at signing package

    Hi all,
    I'm trying to build a package from AUR but makepkg fails to sign the package.
    I receive the following message: "==> WARNING: Failed to sign package file." and nothing else.
    I have, as far as i can tell, installed all the necessary keys (pacman-key --recv-key and pacman-key --lsign). I even tried to completely reset the keyring  (remove of /etc/pacman.d/gnupg, pacman-key --init, ...) but it does not solve my problem.
    I can see the key when issuing pacman-key -l.
    Any idea is welcome, because I'm completely lost right now and I fear I'm going nuts...
    Last edited by calyce (2012-05-25 23:02:49)

    https://wiki.archlinux.org/index.php/De … g_Packages
    That should explain everything you need to sign packages.
    To sign packages you need your own key; what comes with pacman are the developers' keys for the packages in the repos.
    Last edited by ethail (2012-05-25 16:16:27)
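    In concrete terms, a minimal sketch (the key ID is a placeholder; the wiki page linked above is the authoritative reference): generate a personal key and point makepkg at it.
    gpg --gen-key                                  # create your own signing key
    gpg --list-secret-keys --keyid-format short    # note the key ID, e.g. 1A2B3C4D
    # then in /etc/makepkg.conf (or ~/.makepkg.conf):
    #   GPGKEY="1A2B3C4D"
    makepkg --sign                                 # or leave "sign" enabled in BUILDENV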

  • [SOLVED] makepkg git clone fails (firewalled), how to use https

    At my new employer git(hub) traffic has been blocked so I am unable to clone a repo with "git clone git://someurl/..." but I can clone via https "git clone https://someurl/..."
    However, makepkg uses the former and fails with the following error:
    Cloning into bare repository '/tmp/cower-git/cower' ...
    fatal: unable to connect to github.com:
    github.com[0: 192.30.252.130]: errno=Connection refused
    Can I configure makepkg to use https cloning?  I can edit makepkg itself, but this seems far from ideal.
    For the time being, I just cd'ed into 'src' and manually cloned via https, then backed out and used 'makepkg -ei'. So I now have cower up and running, but this is a bit tedious to do for all git packages.

    Change it how?  I tried changing it to "https://github..." but then makepkg didn't treat it as a git repo and just downloaded the webpage.
    EDIT: nevermind, I'm an idiot:
    source=("git+https://github/...")
    I know I had done this before, but I wasn't finding it when I needed it.  Thanks.
      - Trilby: Moderator and Noob!
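    Another option that leaves the PKGBUILD untouched (not mentioned in the thread, just a sketch) is to let git itself rewrite git:// URLs to https:// for the blocked host:
    # rewrites any git://github.com/ URL to https://github.com/ for this user,
    # so the clone done during makepkg goes over https transparently
    git config --global url."https://github.com/".insteadOf git://github.com/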

  • [Solved] Makepkg fails on all Mercurial (*-hg) packages

    Hello,
    I'm not sure if this is a result of recent updates, but I noticed that makepkg now fails on all mercurial packages. I attempted to install both the dillo browser and pytyle, and received the following messages (this is for pytyle2-hg, but the messages were basically identical):
    ==> Determining latest hg revision...
    warning: pytyle.googlecode.com certificate with fingerprint e9:f0:26:b1:ff:27:28:33:81:8e:51:7b:fd:a7:de:df:4c:1e:ee:14 not verified (check hostfingerprints or web.cacerts config setting)
    real URL is https://pytyle.googlecode.com/hg/
    requesting all changes
    adding changesets
    adding manifests
    adding file changes
    added 39 changesets with 303 changes to 115 files
    updating to branch default
    36 files updated, 0 files merged, 0 files removed, 0 files unresolved
    -> Version found: 38
    ==> Making package: pytyle2-hg 38-1 (Tue Feb 7 22:02:49 EST 2012)
    ==> Checking runtime dependencies...
    ==> Checking buildtime dependencies...
    ==> Retrieving Sources...
    ==> Extracting Sources...
    ==> Entering fakeroot environment...
    ==> Starting build()...
    ==> Connecting to Mercurial server...
    warning: pytyle.googlecode.com certificate with fingerprint e9:f0:26:b1:ff:27:28:33:81:8e:51:7b:fd:a7:de:df:4c:1e:ee:14 not verified (check hostfingerprints or web.cacerts config setting)
    real URL is https://pytyle.googlecode.com/hg/
    pulling from https://pytyle.googlecode.com/hg//pytyle
    searching for changes
    no changes found
    ==> ERROR: A failure occurred in build().
    Aborting...
    Rerunning makepkg in the same directory fails to get through the version check phase:
    ==> Determining latest hg revision...
    warning: pytyle.googlecode.com certificate with fingerprint e9:f0:26:b1:ff:27:28:33:81:8e:51:7b:fd:a7:de:df:4c:1e:ee:14 not verified (check hostfingerprints or web.cacerts config setting)
    real URL is https://pytyle.googlecode.com/hg/
    pulling from https://pytyle.googlecode.com/hg//pytyle
    searching for changes
    no changes found
    ==> ERROR: An unknown error has occurred. Exiting...
    The mercurial portion of both packages seems to match the VCS template exactly. Executing "hg clone [hgroot] [hgrepo]" works, as does performing a pull from the cloned directory.
    Any help would be greatly appreciated.
    Regards,
    szim90
    Last edited by szim90 (2012-02-08 10:00:50)

    That does appear to be the cause of this problem. Thanks karol - marking as solved.
    Last edited by szim90 (2012-02-08 10:00:34)
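    For completeness: the certificate warning in the output is most likely a separate issue from the build failure (karol's actual fix is not quoted here), but it can be silenced by giving Mercurial a CA bundle, as the warning itself suggests. A sketch, assuming the ca-certificates bundle at the usual Arch path:
    # point hg at the system CA bundle so the googlecode.com certificate verifies
    printf '[web]\ncacerts = /etc/ssl/certs/ca-certificates.crt\n' >> ~/.hgrc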

  • [SOLVED] Makepkg was unable to build lib32-portaudio.

    Okay, so I dusted off my old PS2 the other day and decided I wanted to play some old games for nostalgia's sake, only to realize it is still broken. So I looked for a decent emulator and found PCSX2. Now, I had some missing dependencies that needed to be installed; I worked on it for a few hours and got every dependency to install except lib32-portaudio. And don't get me wrong, I've looked at the PCSX2 website, at the Arch wiki and forums, and I've googled and looked at Ubuntu forums and whatnot. I've looked everywhere I could think of and I can't find anything at all on this issue, so I'm hoping someone will be able to help me here. I guess the relevant part of the error message I get is this:
    /usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-unknown-linux-gnu/4.5.2/../../../libjack.so when searching for -ljack
    /usr/bin/ld: skipping incompatible /usr/lib/libjack.so when searching for -ljack
    /usr/bin/ld: cannot find -ljack
    collect2: ld returned 1 exit status
    make: *** [lib/libportaudio.la] Error 1
        Aborting...
    ==> ERROR: Makepkg was unable to build lib32-portaudio.
    And I've re-installed portaudio at least 3 times, since I figured the error had something to do with it - through pacman, by downloading the package from the dev website, and even trying the Ubuntu repos - and it seems to be successfully installed each time. I'm fairly new to Linux, but I'm a very persistent person and I wouldn't give up without at least a day's worth of pulling out my hair in frustration and trial and error, so as far as my knowledge reaches, I've tried my best to solve it on my own. Oh, and I'm building/installing from the AUR, if that makes a difference. I appreciate any help.
    edit
    Been a long time with no help, but I finally gave it another go - this time successful! I just recently re-installed Arch, this time running openbox, and without any issues whatsoever it installed on the first try. I think I had a shitload of conflicting and missing dependencies on my last install and that's what caused it. I don't know if this will help anyone who runs into this problem; there isn't much to add. The only things worth noting are that I'm running a clean install, I use the nvidia driver and nvidia-utils, and all the ALSA sound drivers and utilities. Oh, and I'm running openbox as a stand-alone WM. That's all the relevant info, I think. Now I'm looking forward to configuration hell for the games I'm gonna run, haha!
    Last edited by waspy (2011-05-26 18:42:23)

    ChoK wrote:It seems like you need lib32-jack and the maintainer of lib32-portaudio forgot to add it to makedepends/depends
    I've got lib32-jack installed already. And by reading the comments on the AUR page for the package (lib32-portaudio), they supposedly fixed the PKGBUILD too.
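    For anyone else hitting the 'cannot find -ljack' error above while the 64-bit libjack.so is skipped as incompatible: the linker is missing a 32-bit JACK library. A quick sketch of the usual check, assuming lib32-jack is available from the multilib repo (it may instead live in the AUR):
    # make sure [multilib] is uncommented in pacman.conf, then, after a normal
    # system update, install the 32-bit JACK library the build links against
    grep -A1 '^\[multilib\]' /etc/pacman.conf
    pacman -S lib32-jack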

  • [SOLVED ]makepkg asking for pacman-color binary

    After upgrading pacman to 4.1 I'm getting this error every time I run makepkg:
    /usr/bin/makepkg -s -i
    ==> ERROR: Cannot find the pacman-color binary required for dependency operations.
    Even if I try to use --nocolor:
    /usr/bin/makepkg --nocolor -s -i
    ==> ERROR: Cannot find the pacman-color binary required for dependency operations.
    I've tried enabling and disabling color options in both makepkg.conf and pacman.conf, and the error is still there.
    Any clue on what could be going on and how to solve it?
    Thank you in advance
    Last edited by ethail (2013-04-01 23:01:32)

    echo $PACMAN
    pacman-color
    I guess I've forgotten to edit some file that I changed when using pacman-color-testing from the AUR.
    EDIT: Indeed, I forgot to edit my .zprofile that used to export PACMAN as pacman-color. Solved now
    Thank you, Allan
    Last edited by ethail (2013-04-01 23:02:14)
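    For anyone landing here with the same message: makepkg uses the PACMAN environment variable for its dependency operations, so the fix is to find the stale export and point it back at plain pacman. A sketch; the file names depend on your shell setup:
    echo "$PACMAN"                                            # shows the stale value, e.g. pacman-color
    grep -n PACMAN ~/.zprofile ~/.bashrc ~/.zshrc 2>/dev/null # find where it is exported
    export PACMAN=pacman                                      # fix the current session, then edit the file
    makepkg -s -i                                             # dependency operations use pacman again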

  • Swatches loading into background color when clicked, not foreground

    Can someone advise? When I click on a swatch while using the brush tool, the color I select is loaded into the background color instead of the foreground color. Then to use the color I have to click the swap (reverse) arrow, which is a pain and slow. How can I have the clicked color load into the foreground first? I'm on Mac CC.
    Thanks,
    Seth

    It can be hard to tell which color chip in the color panel is actually the active one.
    The active color chip in the color panel will have either a white or grey line around it (depending on the UI color Photoshop is using).
    When you use the eyedropper, are you using any modifier key such as Command?
    You shouldn't need to use any modifier key with the Brush Tool selected and clicking on one of the swatches in the Swatches panel to set the foreground color.
    (with the brush tool selected, holding down the command key while clicking on one of the swatches will set the background color in the toolbox)
    If nothing seems to work, i would reset the photoshop preferences
    Press and hold the Shift+Command+Option keys just after starting the launch of photoshop
    Keep holding the keys down until you get a dialog asking if you want to delete the adobe photoshop settings file
    Press Yes

  • [SOLVED] LXQt: Black Background and File Associations Ingored

    Hi,
    I switched from Razor-qt 0.5 to LXQt 0.7 (both from AUR) recently, and I encountered two problems:
    1. After login, there is only a black background and no right-click context menu works on the desktop (see linked screenshot below).
    2. "Removable media/devices manager" opens mounted filesystems with filesight instead of dolphin.
    http://i62.tinypic.com/abt56s.png
    Both things worked fine in Razor-qt 0.5. I guess the black background problem relates to the lxqt-session-git package. LXQt 0.7 ships its own file associations dialog. I changed the default application for the MIME types "directory" and "mount point" to dolphin, but it is obviously ignored. Usually I manage MIME types with KDE's settings control panel; there, the MIME types "directory" and "mount point" are associated with dolphin.
    Maybe somebody has a clue how to narrow down the problem.
    Thanks.
    Last edited by Schwefelsaeure (2014-05-11 17:51:43)

    Thanks, Chazza.
    Package pcmanfm-qt-git solved the issue with the black background.
    However, "Device manager" still ignores the MIME type assocations, even when creating a section for removed associations in ~/.local/share/applications/mimeapps.list (like described in the wiki):
    [Removed Associations]
    inode/directory=filelight.desktop
    inode/mount-point=filelight.desktop
    I will investigate this problem further when I have enough time (maybe LXQt uses another MIME type for mounted devices). Nevertheless, I mark this thread as SOLVED.
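    A quick way to double-check what the freedesktop side records as the defaults (a sketch; LXQt's device manager may still consult its own settings, which seems to be what happens here):
    # set dolphin as the handler for directories and mount points, then
    # query what actually ended up in mimeapps.list
    xdg-mime default dolphin.desktop inode/directory inode/mount-point
    xdg-mime query default inode/directory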

  • [SOLVED] makepkg fails when building libdivecomputer-git from AUR.

    I'm not sure where to start debugging this failure as I'm a newb when it comes to compiling packages.
    Can anyone point me in the right direction?
    [dan@arch libdivecomputer-git]$ makepkg -s PKGBUILD
    ==> Determining latest git revision...
    -> Version found: 20120714
    ==> Making package: libdivecomputer-git 20120714-1 (Sat Jul 14 13:43:22 EST 2012)
    ==> Checking runtime dependencies...
    ==> Checking buildtime dependencies...
    ==> Retrieving Sources...
    ==> Extracting Sources...
    ==> Starting build()...
    ==> Connecting to GIT server....
    Cloning into 'libdivecomputer'...
    remote: Counting objects: 3995, done.
    remote: Compressing objects: 100% (1563/1563), done.
    remote: Total 3995 (delta 3240), reused 2983 (delta 2429)
    Receiving objects: 100% (3995/3995), 739.19 KiB | 59 KiB/s, done.
    Resolving deltas: 100% (3240/3240), done.
    ==> GIT checkout done or server timeout
    ==> Starting make...
    Cloning into '/home/dan/libdivecomputer-git/src/libdivecomputer-build'...
    done.
    libtoolize: putting auxiliary files in `.'.
    libtoolize: copying file `./ltmain.sh'
    libtoolize: putting macros in AC_CONFIG_MACRO_DIR, `m4'.
    libtoolize: copying file `m4/libtool.m4'
    libtoolize: copying file `m4/ltoptions.m4'
    libtoolize: copying file `m4/ltsugar.m4'
    libtoolize: copying file `m4/ltversion.m4'
    libtoolize: copying file `m4/lt~obsolete.m4'
    configure.ac:25: installing './config.guess'
    configure.ac:25: installing './config.sub'
    configure.ac:21: installing './install-sh'
    configure.ac:21: installing './missing'
    examples/Makefile.am: installing './depcomp'
    automake: warnings are treated as errors
    /usr/share/automake-1.12/am/ltlibrary.am: warning: 'libdivecomputer.la': linking libtool libraries using a non-POSIX
    /usr/share/automake-1.12/am/ltlibrary.am: archiver requires 'AM_PROG_AR' in 'configure.ac'
    src/Makefile.am:4: while processing Libtool library 'libdivecomputer.la'
    autoreconf: automake failed with exit status: 1
    ==> ERROR: A failure occurred in build().
    Aborting...
    Last edited by bergersau (2012-07-18 03:57:09)

    You may want to notify the maintainer by posting a comment on https://aur.archlinux.org/packages.php?ID=52648
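    The 'archiver requires AM_PROG_AR' message comes from automake 1.12 treating the missing macro as a fatal warning. A common local workaround is to declare the macro in configure.ac before autoreconf runs - a sketch only, since the sed anchor and the checkout path are assumptions about the upstream sources:
    # inside the cloned source tree (under src/), add AM_PROG_AR and
    # regenerate the build system before running make
    sed -i '/AC_PROG_CC/a AM_PROG_AR' configure.ac
    autoreconf --install --force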

  • [SOLVED] makepkg builds differently than local build

    SOLUTION: my Makefile's `CFLAGS += ...` did not pick up the local CFLAGS environment variable. With the command in my second post, my local build has the same issue as the makepkg build.
    I have a project of mine, alopex, for which I'm polishing up a new version. I've been running a locally built version for a while, but when I install it with makepkg I get all sorts of memory errors and crashes when I run it.
    The first obvious difference between my local build and makepkg was that makepkg strips the binary - so I stripped the local binary, but no change: the local build in ~/code/alopex/ worked fine, but the makepkg/pacman installed version in /usr/bin/ did not.
    The next step was to source /etc/makepkg.conf for the local build to use all the same CFLAGS and LDFLAGS as a makepkg build. But this also led to no change in the results.
    Here are the two files, both from the exact same git source, both built with makepkg.conf build flags, and both stripped.
    $ ls -l /usr/bin/alopex
    -rwxr-xr-x 1 root root 48280 Feb 13 14:21 /usr/bin/alopex
    $ file /usr/bin/alopex
    /usr/bin/alopex: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=03e8e16e7208d6c97bce6bd545f45d20c7a7606a, stripped
    $ ls -l ~/code/alopex/alopex
    -rwxr-xr-x 1 jmcclure users 54760 Feb 13 14:30 /home/jmcclure/code/alopex/alopex
    $ file ~/code/alopex/alopex
    /home/jmcclure/code/alopex/alopex: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0659354ed5a904fe93894f14d6bd413ea278396e, stripped
    I can't figure out why the binaries are different sizes and have different checksums. I suspect there is an error in my code that is only exposed when built with makepkg - but this is hard to track down without knowing what makepkg is doing differently than my local build. To be sure relative paths couldn't be relevant, I also moved the locally built binary to /usr/bin/ (overwriting the makepkg/pacman one) and it executed just fine.
    So - my main question is, in addition to stripping a binary and using the makepkg.conf flags, what else is different about the makepkg build and my local build?  I'm guessing it might have to do with makepkg using a clean chroot build environment (right?), but namcap doesn't detect any missing dependencies or any other issues.
    Below are many details that may or may not be relevant
    EDIT: in case it's relevant, here is the `env` output for the local build, with only PS1/PS2 and LS_COLORS removed as they are long and I can't imagine them being relevant:
    XDG_VTNR=1
    LESS_TERMCAP_mb=
    XDG_SESSION_ID=1
    LESS_TERMCAP_md=
    LESS_TERMCAP_me=
    TERM=rxvt-unicode-256color
    SHELL=/bin/bash
    WINDOWID=6291462
    OLDPWD=/home/jmcclure
    LESS_TERMCAP_ue=
    USER=jmcclure
    _JAVA_OPTIONS=-Dawt.useSystemAAFontSettings=on -Dswing.aatext=true -Dswing.defaultlaf=com.sun.java.swing.plaf.gtk.GTKLookAndFeel
    SYSTEMD_PAGER=
    PAGER=vimpager
    MOZ_PLUGIN_PATH=/usr/lib/mozilla/plugins
    LESS_TERMCAP_us=
    MAIL=/var/spool/mail/jmcclure
    PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/bin/vendor_perl:/usr/bin/core_perl:/home/jmcclure/code/bin:/home/jmcclure/code/bin
    HG=/usr/bin/hg
    LC_COLLATE=C
    PWD=/home/jmcclure/code/alopex
    JAVA_HOME=/usr/lib/jvm/java-7-openjdk
    EDITOR=vim
    LANG=en_US.UTF-8
    HISTCONTROL=erasedups
    COLORFGBG=default;default
    SHLVL=2
    XDG_SEAT=seat0
    HOME=/home/jmcclure
    TERMINFO=/usr/share/terminfo
    XDG_CONFIG_HOME=/home/jmcclure/.config
    _JAVA_AWT_WM_NONREPARENTING=1
    LESS= -RXx4
    LOGNAME=jmcclure
    LESS_TERMCAP_so=
    LESSOPEN=|/usr/bin/src-hilite-lesspipe.sh %s
    td=
    WINDOWPATH=2
    XDG_RUNTIME_DIR=/run/user/1000
    DISPLAY=:0
    COLORTERM=rxvt
    LESS_TERMCAP_se=
    _=/usr/bin/env
    As a temporary work-around, I found that the following produces a working binary and somehow avoids the step that is leading to the crashes
    makepkg
    cd src/alopex
    make clean
    make
    cd ../..
    makepkg -efi
    I've been tinkering with readelf to see what differences I can pinpoint. I really don't know what I'm doing, but it seemed the symbols (-s) and dynamic links (-d) were all the same between the two versions except for the addresses/offsets/hex-number column. So I tried checking the readelf -h output and I get a couple of differences there - in this output, the /usr/bin/alopex version is the working one:
    diff <(readelf -h /tmp/alopex/pkg/alopex-git/usr/bin/alopex) <(readelf -h /usr/bin/alopex)
    11c11
    < Entry point address: 0x4030d8
    > Entry point address: 0x403020
    13c13
    < Start of section headers: 46488 (bytes into file)
    > Start of section headers: 52968 (bytes into file)
    17c17
    < Number of program headers: 9
    > Number of program headers: 8
    I'm not sure what these differences mean, but they are consistent with every working and non-working build.
    Readelf -l output indicates that the extra program header in the nonworking version includes the following:
    08 .init_array .fini_array .jcr .dynamic .got
    Each of these exists in the working binary as well, but they are duplicated in the non-working version (still no idea what this actually means, but in case someone else can decode this ...)
    Last edited by Trilby (2014-02-13 20:38:13)

    Crap - you are, of course, right.  It seems my Makefile is not doing what I thought it was.  I define makefile variables for flags with += operators, which I thought picked up the current value from the environment.  But it seems not to do this.
    I plan to fix the crashes, but I was stumped, and I wanted to know why it worked in some builds and not others - I'm hoping this information will help narrow down what my coding error is.
    EDIT: a local build with `CFLAGS="$CFLAGS" LDFLAGS="$LDFLAGS" make` reproduces the error.
    Thanks for the help - now I know which flags expose my error, so I may be able to narrow it down.
    FOLLOW-UP: Thanks again - while I have no doubt that my troubleshooting steps are unconventional (as I have no 'conventional' training in programming) this did allow me to track down a bug in my code which I had been missing.
    Last edited by Trilby (2014-02-13 23:03:54)
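    A one-line illustration of why sourcing /etc/makepkg.conf was not enough, consistent with the reproduction in the EDIT above (a sketch of plain shell behaviour, not something spelled out in the thread): the assignments in makepkg.conf are ordinary unexported shell variables, so a child make never sees them, whereas prefixing the command puts them into make's environment, where a 'CFLAGS += ...' line in a Makefile appends to them.
    source /etc/makepkg.conf                    # CFLAGS/LDFLAGS set, but not exported
    make                                        # child make does NOT inherit them
    CFLAGS="$CFLAGS" LDFLAGS="$LDFLAGS" make    # passed via the environment, as in the fix above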
