/bin/bash uses shared libs

Not sure if this is an actual problem per se, but I had a file system problem on my box which meant I needed to run fsck on two partitions (/usr and /home). So I ran 'init s' and unmounted /home with no problems, but could not unmount /usr as bash needed a few libs from /usr/lib.
This meant that I had to dig out a rescue disk and boot from that to run fsck.
Wouldn't it make sense to have bash built with these libs built in, so that this dependency wasn't an issue?

http://aur.archlinux.org/packages.php?ID=22650
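To see why /usr could not be unmounted, bash's runtime dependencies can be listed with ldd; a statically linked build (presumably what the AUR link above offers) would show none of these:

```shell
# List the shared libraries /bin/bash is linked against; any that live
# under /usr/lib keep the /usr filesystem busy while bash is running:
ldd /bin/bash
```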

Similar Messages

  • Using older shared libs for games ?

    Hi fellow archers,
    I like to play the enhanced Quake engine DarkPlaces (http://icculus.org/twilight/darkplaces/) together with a few mods (namely SDQuake and Kleshik).
    Under Ubuntu 9.04 x64 DarkPlaces runs nicely as a native x64 Linux app. Under the most recent version of arch64, DarkPlaces bombed out with an error message that the libjpeg version is too new, something like "Version 0.80 found but Version 0.62 is expected". It seems to me that even the newest version of DarkPlaces is built against an older version of libjpeg than the one included in arch64. Libpng is also used by DarkPlaces, since textures are stored as PNG.
    My Ubuntu 9.04 x64 shows the following libs:
    /usr/lib/libpng12.a
    /usr/lib/libpng12.so.0.27.0
    /usr/lib/libjpeg.a
    /usr/lib/libjpeg.so.62.0.0
    Is it "wise" to use a custom lib folder for DarkPlaces and set it via LD_LIBRARY_PATH?
    e.g.
    export LD_LIBRARY_PATH=/home/ds/dplibs
    Or can I simply install older versions of libjpeg and libpng without screwing up my arch64 installation? I remember that I also installed a few older libs on my Ubuntu 10.04 in order to get UT2k4 running.
    BTW: I would prefer not to recompile DarkPlaces against newer versions of libpng/libjpeg.
    TIA,
    D$
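    A sketch of the dedicated lib directory idea from the question, as a small wrapper script. The /home/ds/dplibs path is the one from the post; the game binary name is illustrative, not confirmed:

```shell
#!/bin/sh
# Prepend a private directory of older libs for this one program only,
# leaving the system-wide linker path untouched:
export LD_LIBRARY_PATH=/home/ds/dplibs${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
echo "$LD_LIBRARY_PATH"    # the private dir now comes first
# exec ./darkplaces "$@"   # illustrative binary name; launch via the wrapper
```

    Launching the game only through such a wrapper keeps the older libjpeg/libpng out of every other process, avoiding system-wide breakage.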

    Ok, thanks for all your replies !
    I will try out the approach using a dedicated lib directory for DarkPlaces under arch64.
    Unfortunately my 500GB HDD which I used for arch died.
    So I have to wait until my new 1TB HDD arrives.
    stqn wrote: ... but I guess if you want to play (and not rebuild) this game you don't have a choice..
    Well, the other, not so slick, option would be using the Windows version of the DarkPlaces engine inside a 32-bit chrooted WINE, since for Windows all required libs (aka DLLs) are included in the right versions.
    As usual, shared libs do have their disadvantages compared to statically linked stuff.
    Last edited by Darksoul71 (2010-07-28 22:16:20)

  • NPE when using POJO Data Control deployed as webcenter shared Lib

    Hello everyone,
    I am using JDeveloper 11.1.1.7...
    I have a methodAction binding defined for a JSFF. I am executing this methodAction in a managed bean. When doing so, I get the following exception:
    Caused By: oracle.jbo.JboException: JBO-29000: Unexpected exception caught: java.lang.NullPointerException, msg=null
    at oracle.adf.model.binding.DCDataControlReference.getDataControl(DCDataControlReference.java:118)
    at oracle.adf.model.BindingContext.instantiateDataControl(BindingContext.java:1128)
    at oracle.adf.model.dcframe.DataControlFrameImpl.doFindDataControl(DataControlFrameImpl.java:1638)
    at oracle.adf.model.dcframe.DataControlFrameImpl.internalFindDataControl(DataControlFrameImpl.java:1507)
    at oracle.adf.model.dcframe.DataControlFrameImpl.findDataControl(DataControlFrameImpl.java:1467)
    at oracle.adf.model.BindingContext.internalFindDataControl(BindingContext.java:1261)
    at oracle.adf.model.BindingContext.get(BindingContext.java:1211)
    at oracle.adf.model.binding.DCUtil.findSpelObject(DCUtil.java:304)
    at oracle.adf.model.binding.DCBindingContainer.evaluateParameterWithElCheck(DCBindingContainer.java:1466)
    at oracle.adf.model.binding.DCBindingContainer.evaluateParameter(DCBindingContainer.java:1511)
    at oracle.jbo.uicli.binding.JUCtrlActionBinding.getResult(JUCtrlActionBinding.java:1968)
    at oracle.adfinternal.view.faces.model.binding.FacesCtrlActionBinding._execute(FacesCtrlActionBinding.java:267)
    at oracle.adfinternal.view.faces.model.binding.FacesCtrlActionBinding.execute(FacesCtrlActionBinding.java:210)
    at com.euroscript.platon.reporting.view.ReportingBean.getListLanguages(ReportingBean.java:661)
    at com.euroscript.platon.reporting.view.ReportingBean.getParamLabelsNeedingCustomLov(ReportingBean.java:87)
    at com.euroscript.platon.reporting.view.ReportingBean.initParams(ReportingBean.java:151)
    at com.euroscript.platon.reporting.view.ReportingBean.onSelectReport(ReportingBean.java:110)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.sun.el.parser.AstValue.invoke(AstValue.java:187)
    at com.sun.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:297)
    at org.apache.myfaces.trinidadinternal.taglib.util.MethodExpressionMethodBinding.invoke(MethodExpressionMethodBinding.java:53)
    at org.apache.myfaces.trinidad.component.UIXComponentBase.broadcastToMethodBinding(UIXComponentBase.java:1415)
    at org.apache.myfaces.trinidad.component.UIXEditableValue.broadcast(UIXEditableValue.java:216)
    at oracle.adf.view.rich.component.fragment.UIXRegion.broadcast(UIXRegion.java:181)
    at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent$1.run(ContextSwitchingComponent.java:92)
    at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent._processPhase(ContextSwitchingComponent.java:361)
    at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent.broadcast(ContextSwitchingComponent.java:96)
    at oracle.adf.view.rich.component.fragment.UIXInclude.broadcast(UIXInclude.java:103)
    at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent$1.run(ContextSwitchingComponent.java:92)
    at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent._processPhase(ContextSwitchingComponent.java:361)
    at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent.broadcast(ContextSwitchingComponent.java:96)
    at oracle.adf.view.rich.component.fragment.UIXInclude.broadcast(UIXInclude.java:97)
    at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.broadcastEvents(LifecycleImpl.java:1086)
    at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl._executePhase(LifecycleImpl.java:478)
    at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:207)
    This methodAction calls a Data Control based on a POJO which uses a JAX-WS proxy. This Data Control POJO is packaged in a separate JAR file named ServicesProxy.jar.
    Both ServicesProxy.jar and the ADF application using it are deployed as two distinct shared libraries used by the WebCenter Portal application.
    An important detail is that the error does not happen when debugging the ADF application on my IntegratedServer. The error happens only when the application is deployed as a shared lib used by the WebCenter Portal application.
    This JAR file ServicesProxy is used by other applications within our portal, so I think the data control is properly created.
    Do you have any idea what could cause this error?

    vinaykumar2 wrote:
    Not really. Maybe you can check the log in EM to find some warning or error. Try that.
    I have checked in the EM, but I don't really find more helpful info...
    Here is the full stacktrace found in the EM (WC_Spaces1-diagnostic.log)
    dfd51:69a509b6:142a48c6891:-8000-0000000000000ffd,0] [APP: webcenter#11.1.1.4.0] [DSID: 0000KA^^HbLDc_l6wvicMG1IaBTa000009] ADF_FACES-60098:Faces lifecycle receives unhandled exceptions in phase PROCESS_VALIDATIONS 3[[
    javax.faces.el.EvaluationException: oracle.jbo.JboException: JBO-29000: Unexpected exception caught: java.lang.NullPointerException, msg=null
            at org.apache.myfaces.trinidadinternal.taglib.util.MethodExpressionMethodBinding.invoke(MethodExpressionMethodBinding.java:58)
            at org.apache.myfaces.trinidad.component.UIXComponentBase.broadcastToMethodBinding(UIXComponentBase.java:1415)
            at org.apache.myfaces.trinidad.component.UIXEditableValue.broadcast(UIXEditableValue.java:216)
            at oracle.adf.view.rich.component.fragment.UIXRegion.broadcast(UIXRegion.java:181)
            at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent$1.run(ContextSwitchingComponent.java:92)
            at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent._processPhase(ContextSwitchingComponent.java:361)
            at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent.broadcast(ContextSwitchingComponent.java:96)
            at oracle.adf.view.rich.component.fragment.UIXInclude.broadcast(UIXInclude.java:103)
            at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent$1.run(ContextSwitchingComponent.java:92)
            at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent._processPhase(ContextSwitchingComponent.java:361)
            at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent.broadcast(ContextSwitchingComponent.java:96)
            at oracle.adf.view.rich.component.fragment.UIXInclude.broadcast(UIXInclude.java:97)
            at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.broadcastEvents(LifecycleImpl.java:1086)
            at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl._executePhase(LifecycleImpl.java:478)
            at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:207)
            at javax.faces.webapp.FacesServlet.service(FacesServlet.java:265)
            at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
            at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
            at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:301)
            at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.portlet.client.adapter.adf.ADFPortletFilter.doFilter(ADFPortletFilter.java:32)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.webcenter.framework.events.dispatcher.EventDispatcherFilter.doFilter(EventDispatcherFilter.java:44)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.wcps.client.PersonalizationFilter.doFilter(PersonalizationFilter.java:74)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.webcenter.content.integration.servlets.ContentServletFilter.doFilter(ContentServletFilter.java:168)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.webcenter.generalsettings.model.provider.GeneralSettingsProviderFilter.doFilter(GeneralSettingsProviderFilter.java:85)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.webcenter.webcenterapp.internal.view.webapp.WebCenterShellPageRedirectionFilter.doFilter(WebCenterShellPageRedirectionFilter.java:342)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.adf.model.servlet.ADFBindingFilter.doFilter(ADFBindingFilter.java:205)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.webcenter.webcenterapp.internal.view.webapp.WebCenterShellFilter.doFilter(WebCenterShellFilter.java:953)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.adf.view.page.editor.webapp.WebCenterComposerFilter.doFilter(WebCenterComposerFilter.java:117)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.adf.share.http.ServletADFFilter.doFilter(ServletADFFilter.java:71)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.adfinternal.view.faces.webapp.rich.RegistrationFilter.doFilter(RegistrationFilter.java:128)
            at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:446)
            at oracle.adfinternal.view.faces.activedata.AdsFilter.doFilter(AdsFilter.java:60)
            at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:446)
            at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl._doFilterImpl(TrinidadFilterImpl.java:271)
            at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl.doFilter(TrinidadFilterImpl.java:177)
            at org.apache.myfaces.trinidad.webapp.TrinidadFilter.doFilter(TrinidadFilter.java:92)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.adf.library.webapp.LibraryFilter.doFilter(LibraryFilter.java:180)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.webcenter.webcenterapp.internal.view.webapp.WebCenterLocaleWrapperFilter.processFilters(WebCenterLocaleWrapperFilter.java:369)
            at oracle.webcenter.webcenterapp.internal.view.webapp.WebCenterLocaleWrapperFilter.doFilter(WebCenterLocaleWrapperFilter.java:265)
      at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.adfinternal.view.faces.caching.filter.AdfFacesCachingFilter.doFilter(AdfFacesCachingFilter.java:126)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:119)
            at java.security.AccessController.doPrivileged(Native Method)
            at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:324)
            at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:460)
            at oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:103)
            at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:171)
            at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.dms.servlet.DMSServletFilter.doFilter(DMSServletFilter.java:163)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:119)
            at java.security.AccessController.doPrivileged(Native Method)
            at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:324)
            at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:460)
            at oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:103)
            at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:171)
            at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3730)
            at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3696)
            at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
            at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
            at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2273)
            at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2179)
            at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1490)
            at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
            at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
    Caused by: oracle.jbo.JboException: JBO-29000: Unexpected exception caught: java.lang.NullPointerException, msg=null
            at oracle.adf.model.binding.DCDataControlReference.getDataControl(DCDataControlReference.java:118)
            at oracle.adf.model.BindingContext.instantiateDataControl(BindingContext.java:1128)
            at oracle.adf.model.dcframe.DataControlFrameImpl.doFindDataControl(DataControlFrameImpl.java:1638)
            at oracle.adf.model.dcframe.DataControlFrameImpl.internalFindDataControl(DataControlFrameImpl.java:1507)
            at oracle.adf.model.dcframe.DataControlFrameImpl.findDataControl(DataControlFrameImpl.java:1467)
            at oracle.adf.model.BindingContext.internalFindDataControl(BindingContext.java:1261)
            at oracle.adf.model.BindingContext.get(BindingContext.java:1211)
            at oracle.adf.model.binding.DCUtil.findSpelObject(DCUtil.java:304)
            at oracle.adf.model.binding.DCBindingContainer.evaluateParameterWithElCheck(DCBindingContainer.java:1466)
            at oracle.adf.model.binding.DCBindingContainer.evaluateParameter(DCBindingContainer.java:1511)
            at oracle.jbo.uicli.binding.JUCtrlActionBinding.getResult(JUCtrlActionBinding.java:1968)
            at oracle.adfinternal.view.faces.model.binding.FacesCtrlActionBinding._execute(FacesCtrlActionBinding.java:267)
            at oracle.adfinternal.view.faces.model.binding.FacesCtrlActionBinding.execute(FacesCtrlActionBinding.java:210)
            at com.euroscript.platon.reporting.view.ReportingBean.getListLanguages(ReportingBean.java:463)
            at com.euroscript.platon.reporting.view.ReportingBean.getParamLabelsNeedingCustomLov(ReportingBean.java:87)
            at com.euroscript.platon.reporting.view.ReportingBean.initParams(ReportingBean.java:151)
            at com.euroscript.platon.reporting.view.ReportingBean.onSelectReport(ReportingBean.java:110)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at com.sun.el.parser.AstValue.invoke(AstValue.java:187)
            at com.sun.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:297)
            at org.apache.myfaces.trinidadinternal.taglib.util.MethodExpressionMethodBinding.invoke(MethodExpressionMethodBinding.java:53)
            ... 84 more
    Caused by: java.lang.NullPointerException
            at oracle.adf.model.binding.DCDataControlReference.getDataControl(DCDataControlReference.java:113)
            ... 107 more
    Any suggestions?

  • [Resolved] Distributing a Python module that uses C shared libs?

    I did some research on distutils, and I managed to find fairly detailed instructions for how one could distribute a Python module with C extensions. However, I have a Python module which uses ctypes to run code from C shared libraries (more specifically: SDL, and related libraries).
    Basically, this is what I have:
    pslab.py # requires ctypes wrapper modules below
    sdl.py # ctypes wrapper modules require respective shared libs below
    sdlmixer.py
    sdlimage.py
    sdlttf.py
    # This is for Linux -- .dll for Windows, .dylib for OSX.
    libSDL.so
    libSDL_mixer.so
    libSDL_image.so
    libSDL_ttf.so
    Is there a "standard" way to distribute modules of this type (maybe some undocumented distutils method, or something I failed to find)?
    The Python files are not the problem (distutils can handle them), but I need a way to install the appropriate shared libs for the platform in question (if necessary), so that ctypes wrapper modules can find and load them.
    Last edited by Goran (2012-08-14 08:51:19)

    You could distribute those files ... but I wouldn't recommend it.
    I believe the closest thing to a "standard" is simply to list sdl as a dependency.  A package distributor should not try to manage dependencies for users (only inform them of the dependencies).  That is either up to the user, or (more often) the distro's package management system.
    If I download a program that uses GTK, I don't expect it to include all of GTK too. Rather, the documentation would simply specify that GTK is a dependency. Are you going to also distribute copies of the Python interpreter? Perhaps I'm missing something, but why would SDL be any different than Python itself? It's assumed (or specified) that the user needs to have these installed for your package to work.
    Last edited by Trilby (2012-08-13 23:44:44)
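    Following that advice, the package can simply check at startup that the libs it declares as dependencies are actually resolvable, instead of bundling them. A rough sketch (library names taken from the question; `ldconfig -p` assumed available):

```shell
# Report which of the declared SDL dependencies the dynamic linker can
# already resolve, without shipping copies of them:
for lib in libSDL.so libSDL_mixer.so libSDL_image.so libSDL_ttf.so; do
    if ldconfig -p | grep -q "$lib"; then
        echo "found: $lib"
    else
        echo "missing dependency: $lib" >&2
    fi
done
```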

  • Dynamicaly add a shared lib using deployment plan

    Hi,
    I would like to dynamically add a shared library reference to my EAR application.
    It seems to be possible using a deployment plan, but to take effect WebLogic wants me to redeploy the application.
    When I update my application and choose "Update this application in place with new deployment plan changes. (A deployment plan must be specified for this option)", there is no error, but when I try to instantiate my class I get a "ClassNotFoundException".
    If I choose "Redeploy this application using the following deployment files:" it works fine, but it is not compliant with my requirements (I don't want to redeploy for that).
    Is this the normal behavior, or am I missing something?
    Best Regards,
    C.

    WebLogic does the following if an application references a shared library: the classes of the library are added to the classpath of the application, and the deployment descriptors are merged in memory.
    When an application (EAR) is deployed, WebLogic creates the application classloaders. If we then add a shared library, the classloaders need to be created again (to load the classes of the shared library); this can be accomplished by redeploying the application.

  • How to use /bin/bash in my "request" script inside software packaging

    Hi,
    I know that the "request" script in packaging is run with /bin/sh. Is it possible to run this script with /bin/bash, to be able to use more complex and helpful scripting?
    I tried adding #!/bin/bash to the script, but I think this is disregarded by the install process.
    Thanks,
    Bianca
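    One common workaround (a sketch, not something the packaging tools promise to support): have the /bin/sh-run script re-exec itself under bash before using any bash-only syntax. The demo below writes the script to a temp file just to show the effect:

```shell
# A request script that re-execs itself under bash when launched by /bin/sh
# (assumes /bin/bash exists on the target machine):
cat > /tmp/request.demo <<'EOF'
#!/bin/sh
if [ -z "$BASH_VERSION" ]; then
    exec /bin/bash "$0" "$@"
fi
# bash-only syntax is safe from here on
echo "shell: ${BASH_VERSION:+bash}"
EOF
chmod +x /tmp/request.demo
sh /tmp/request.demo
```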

    Oh, so THAT's how it works... Thanks for your help! I was expecting to have them displayed before submitting the form and, most importantly, once the "reqldap" has finished processing the data, as some kind of callback function.
    I would like to be sure the different requests don't prevent each other from being processed correctly... There is no apparent reason why I can only make one request to that specific file on the server. All subsequent calls to it are just ignored when opening the calling form in Acrobat 7, while it works just fine using Acrobat 9...
    Would you know anything about that?

  • Gnome-open starts /bin/bash -ic (uname) and uses 100% CPU to open gvim

    I've posted this on stackoverflow and got no answer, hopefully some archers can help me out:
    When I execute
    gnome-open textfile
    I expect gvim to open up. Instead what happens is that the command hangs. When I check htop I see:
    /bin/bash -ic (uname) > /tmp/{some random string of characters}
    using 100% of the CPU. When I kill the process (and I have to use -9), then cpu usage goes back to normal and gvim opens.
    Running
    gvim -f textfile
    without gnome-open works perfectly.
    Any ideas where this process is coming from? Or what it's even doing?
    /bin/bash -ic (uname)
    doesn't even run for me.
    Thank you!
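    Two things worth noting when reproducing this by hand: typed into a shell, the command in parentheses has to be quoted (which is likely why the literal `/bin/bash -ic (uname)` "doesn't even run"), and the random /tmp filename is just gnome-open capturing the output. A debugging sketch (that the hang lives in interactive startup files is an assumption, hence the `--norc` comparison run):

```shell
# Reproduce what gnome-open spawns; quote the command so the parentheses
# are not parsed by the outer shell:
/bin/bash -ic 'uname' > /tmp/gnome-open-test.out
cat /tmp/gnome-open-test.out
# If the line above hangs, retry with startup files disabled to see
# whether ~/.bashrc (or something it sources) is the culprit:
/bin/bash --norc -ic 'uname'
```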


  • [SOLVED] Terminator stopped working Unable to start shell:/bin/bash

    Hello,
    I just wanted to share, as I didn't find anything about it.
    Since yesterday I can't launch my favorite terminal emulator, terminator; it gives me:
    Unable to start shell:/bin/bash
    displayed in the terminator window where I'm supposed to type commands.
    What is strange is that I can launch /bin/bash in TTYs, plus I installed xterm temporarily, which works fine with bash.
    One more thing: if I close the terminator window, it closes my graphical session!
    Any ideas for troubleshooting this?
    Last edited by detestable (2013-08-16 08:39:54)

    I'm getting this :
    me@latitude ~ $ gdb /usr/bin/python
    GNU gdb (GDB) 7.6
    Copyright (C) 2013 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law. Type "show copying"
    and "show warranty" for details.
    This GDB was configured as "x86_64-unknown-linux-gnu".
    For bug reporting instructions, please see:
    <http://www.gnu.org/software/gdb/bugs/>...
    Reading symbols from /usr/bin/python3.3...(no debugging symbols found)...done.
    (gdb) run /bin/terminator
    Starting program: /usr/bin/python3.3 /bin/terminator
    warning: Could not load shared library symbols for linux-vdso.so.1.
    Do you need "set solib-search-path" or "set sysroot"?
    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/usr/lib/libthread_db.so.1".
    File "/bin/terminator", line 103
    except (KeyError,ValueError), ex:
    ^
    SyntaxError: invalid syntax
    [Inferior 1 (process 13780) exited with code 01]
    I also tried gnome-terminal (installed just now), which is not working correctly either.
    From the user interface it says in red :
    "There was an error creating the child process for this terminal"
    "grantpt failed: Operation not permitted"
    From gdb :
    m@latitude ~ $ gdb /usr/bin/gnome-terminal
    GNU gdb (GDB) 7.6
    Copyright (C) 2013 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law. Type "show copying"
    and "show warranty" for details.
    This GDB was configured as "x86_64-unknown-linux-gnu".
    For bug reporting instructions, please see:
    <http://www.gnu.org/software/gdb/bugs/>...
    Reading symbols from /usr/bin/gnome-terminal...(no debugging symbols found)...done.
    (gdb) run
    Starting program: /usr/bin/gnome-terminal
    warning: no loadable sections found in added symbol-file system-supplied DSO at 0x7ffff7ffa000
    warning: Could not load shared library symbols for linux-vdso.so.1.
    Do you need "set solib-search-path" or "set sysroot"?
    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/usr/lib/libthread_db.so.1".
    [New Thread 0x7fffea2b9700 (LWP 13839)]
    Error: GDBus.Error:org.gtk.GDBus.UnmappedGError.Quark._vte_2dpty_2derror.Code1: grantpt failed: Operation not permitted
    [Thread 0x7fffea2b9700 (LWP 13839) exited]
    [Inferior 1 (process 13835) exited normally]
    Only xterm seems to work

  • [solved]Terminator can not start /bin/bash

    Terminator shows an error:
    Unable to start shell:/bin/bash
    this error comes from here:
    /usr/lib/python2.7/site-packages/terminatorlib/terminal.py
    1292     self.pid = self.vte.fork_command(command=shell, argv=args, envv=envv,
    1293                                      loglastlog=login,
    1294                                      logwtmp=update_records,
    1295                                      logutmp=update_records,
    1296                                      directory=self.cwd)
    1297     self.command = shell
    1298
    1299     self.titlebar.update()
    1300
    1301     if self.pid == -1:
    1302         self.vte.feed(_('Unable to start shell:') + shell)
    1303         return(-1)
    But I can start terminator using the root account;
    self.pid != -1 when I'm root.
    I don't remember changing any privileges...
    Any ideas?
    Last edited by Hacksign (2013-08-27 01:47:33)

    Scimmia wrote: Is /dev/pts listed in /etc/fstab? If so, remove it.
    Thanks, that actually resolved my problem!
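    For anyone hitting the same grantpt error, the state Scimmia points at can be checked quickly (a sketch; assumes devpts is normally mounted by the system rather than from fstab, as on current Arch):

```shell
# devpts should be mounted by the system, not pinned by /etc/fstab:
mount | grep devpts
grep pts /etc/fstab || echo "no /dev/pts line in /etc/fstab (good)"
```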

  • RAC with 10G using shared directories

    We want to test Oracle 10g with Real Application Clusters, but we do not have a SAN yet. Can we use a disk from a normal server, share this disk, and map it as a network drive on the two servers where I want to install RAC, and use it as a shared disk?

    This is the article I was referring to:
    Setting Up Linux with FireWire-Based Shared Storage for Oracle9i RAC
    By Wim Coekaerts
    If you’re all fired up about FireWire and you want to set up a two-node cluster for development and testing purposes for your Oracle RAC (Real Application Clusters) database on Linux, here’s an installation and configuration QuickStart guide to help you get started. But first, a caveat: Neither Oracle nor any other vendor currently supports the patch; it is intended for testing and demonstration only.
    The QuickStart instructions step you through the installation of the Oracle database and the use of our patched kernel for configuring Linux for FireWire as well as the installation and configuration of Oracle Cluster File System (OCFS) on a FireWire shared-storage device. Oracle RAC uses shared storage in conjunction with a multinode extension of a database to allow scalability and provide failover security.
    The hardware typically used for shared storage (a fibre-channel system) is expensive (see my column on clustering with FireWire on Oracle Technology Network (OTN) for some background on shared-storage solutions and the new kernel patch). However, once you’ve installed and set up the kernel patch, you will be on your way to setting up a Linux cluster suitable for your development team to use for demo testing and QA—a solution that costs considerably less than the traditional ones.
    The patch is available to the Linux and open source community under the GNU General Public License (GPL). You can download it from the Linux Open Source Projects page, available from the Community Code section of OTN. See the Toolbox sidebar for more information.
    Figure 1: Two-node Linux cluster using FireWire shared drive
    By following this guide, you’ll install the patched kernel on each machine that will comprise a node of the cluster. You’ll basically build a two-node test configuration composed of two machines connected over a 10Base-T network, with each machine linked via FireWire to the drive used for shared storage, as shown in Figure 1.
    If you haven’t used FireWire on either machine before, be sure to install and configure the FireWire interconnect in each machine and test it with a FireWire drive or other device before you get started, to ensure that the baseline system is working. The FireWire interconnects we tested are based on Texas Instruments (TI, one of the coauthors of the IEEE specification on which FireWire is based) chipsets, and we used a 120GB Western Digital External FireWire (IEEE 1394) hard drive.
    Table 1 lists the minimum hardware requirements per node for a two-node cluster and some of the additional requirements for clusters of more than two nodes. You can use a standard laptop equipped with a PCMCIA FireWire card for any of the nodes in the cluster. We’ve successfully tested a laptop-based cluster following the same installation process described in this article.
    As shown in Table 1, for more than two nodes, you must add a four- or five-port FireWire hub to the configuration, to support connections from the additional machines to the drive. Just plug each Linux box into a port in the hub, and plug the FireWire drive into the hub as well. Without a hub, the configuration won’t have enough power for the total cable length on the bus.
    The instructions in this article are for a two-node cluster configuration. To create a cluster of more than two nodes, repeat these steps for each additional node (node 3, node 4, and so on), and also be sure to do the following:
    Modify the command syntax or script files to account for the proper node number, machine name, and other details specific to the node.
    Create an extra set of log files and undo tablespaces on the shared storage for each additional node.
    It’s not yet possible to use our patched FireWire drivers to build a cluster of more than four nodes.
    Step 1: Download Everything You Need
    Before you get started, spend some time downloading all the software you’ll need from OTN. If you’re not an OTN member, you’ll have to join first, but it’s free.
    Keep in mind that these Linux kernel FireWire driver patches are true open source projects. You can download the source code and customize it for your own implementations as long as you adhere to the GPL agreement.
    See "Toolbox" for a list of the software you should download and have available before you get started.
    Step 2. Install Linux
    Once you’ve downloaded or purchased the Red Hat Linux Advanced Server 2.1 distribution (or another distribution that you’ve already gotten to work with Oracle9i Database, Release 2), you can install Linux on the local hard drive of each node (this takes about 25 minutes per node). We’ll keep the configuration basic, but you should configure one of the network cards on each machine for a private LAN (this provides the interconnect between nodes in the cluster); for example:
    hostname: node1
    ip address: 192.168.1.50
    hostname: node2
    ip address: 192.168.1.51
    Because this is a private LAN, you don’t need "real" IP addresses. Just make sure that if you do hook up either of these machines to a live network, the IP addresses don’t conflict with those of other machines. Also, if you don’t have (or haven’t configured) a second network interface card (NIC) in the machines, be sure to download all the software you need before configuring the private network.
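    On a Red Hat-style system, the private interconnect can be set up with a small interface file. Here is a sketch for node 1; the device name eth1 and the netmask are assumptions, so adjust them to your hardware (and use 192.168.1.51 on node 2):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1 (node 1; sketch)
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.1.50
NETMASK=255.255.255.0
ONBOOT=yes
```

    After saving the file, ifup eth1 (or a restart of the network service) brings the interface up.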
    Step 3. Install Oracle9i Database
    If you haven’t done so already, you must download the Oracle software set for Oracle9i Database Release 2 (9.2.0.1.0) for Linux, or if you’re an OTN TechTracks
    For each machine that will comprise a node in the cluster, you must do the following:
    Create a mount point, /oracle/home, for the Oracle software files on the local hard disk of each machine.
    Create a new user, oracle (in either the dba or the oracle group), in /home/oracle on each machine.
    Start the Oracle Universal Installer from the CD or the mount point on the local hard disk to which you’ve copied the installation files; that is, enter runInstaller. The Oracle Universal Installer menu displays.
    From the menu, choose Cluster Manager as the first product to install, and for now specify only the node’s own name for both the public and private node names. Cluster Manager is just a few megabytes, so installation should take only a minute or two.
    When the installation is complete, exit from the Oracle Universal Installer and restart it (using the runInstaller script). Choose the database installation option, and do a full software-only installation (don’t create a database).
    Step 4. Configure FireWire (IEEE 1394)
    If you haven’t done so already, download the patched Linux kernel file (fw-test-kernel-2.4.19-image.tar.gz) from OTN’s Community Code area.
    Assuming that fw-test-kernel-2.4.19-image.tar.gz is available at the root mount point on each node, now do the following:
    Log on to each machine as the root user and execute these commands to uncompress and unpack the files that comprise the modules:
    cd /
    tar zxvf /fw-test-kernel-2.4.19-image.tar.gz
    modify /etc/grub.conf
    If you’re using the lilo bootloader utility instead of grub, replace grub.conf in the last statement above with /etc/lilo.conf.
    To the bottom of /etc/grub.conf or /etc/lilo.conf, add the name of the new kernel:
    title FireWire Kernel (2.4.19)
    root (hd0,0)
    kernel /vmlinuz-2.4.19 ro root=/dev/hda3
    Now reboot the system by using this kernel on both nodes. To simplify the startup process so that you don’t have to modify the boot-up commands each time, you should also add the following statements to /etc/modules.conf on each node:
    options sbp2 sbp2_exclusive_login=0
    post-install sbp2 insmod sd_mod
    post-remove sbp2 rmmod sd_mod
    During every system boot, load the FireWire drivers on each node; for example:
    modprobe ohci1394
    modprobe sbp2
    If you use dmesg (display messages from the kernel ring buffer), you should see a log message similar to the following:
    Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
    SCSI device sda: 35239680 512-byte hdwr sectors (18043 MB)
    sda: sda1 sda2 sda3
    This particular message indicates that the Linux kernel has recognized an 18GB disk with three partitions.
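    As a quick sanity check, the size in that log line follows directly from the sector count (a sketch; rounding to the nearest MB matches what the kernel printed):

```shell
# 35239680 sectors of 512 bytes each, reported in MB (10^6 bytes)
sectors=35239680
echo "$(( (sectors * 512 + 500000) / 1000000 )) MB"    # → 18043 MB
```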
    The first time you use the FireWire drive, run fdisk from one of the nodes and partition the disk as you like. (If both nodes have the modules loaded while you’re running fdisk on one node, you should reboot the other system or unload and reload all the FireWire and SCSI modules to make sure the new partition table is loaded.)
    Step 5. Configure OCFS
    We strongly recommend that you use OCFS in conjunction with the patched kernel so that you don’t have to partition your disks manually. If you haven’t done so already, download the precompiled modules (fw-kernel-ocfs.tar.gz) from OTN’s Community Code area. (See the "Toolbox" sidebar for more information.)
    Untar the file on each node, and use ocfsformat on one node to format the file system on the shared disk, as in the following example:
    ocfsformat -f -l /dev/sda1 -c 128 -v ocfsvol
    -m /ocfs -n node1 -u 1011 -p 755 -g 1011
    where 1011 is the UID and GID of the Oracle account and 755 is the directory permission. The partition that we’ll use is /dev/sda1, and -c 128 means that we’ll use a 128KB cluster size; the cluster size can be 4, 8, 16, 32, 128, 256, 512, or 1,024KB.
    As the root user, create an /ocfs mountpoint directory on each node.
    To configure and load the kernel module on each node, create a configuration file /etc/ocfs.conf. For example:
    ipcdlm:
    ip_address = 192.168.1.50
    ip_port = 9999
    subnet_mask = 255.255.252.0
    type = udp
    hostname = node1 (on node2, put node2’s hostname here)
    active = yes
    Be sure that each node has the correct values for IP address, subnet mask, and node name. In the example configuration, node 1 uses the IP address 192.168.1.50; on node 2, use 192.168.1.51.
    Use the insmod command to load the OCFS driver on each node. The basic syntax is as follows:
    insmod ocfs.o name=<nodename>
    For example:
    insmod /root/ocfs.o name=node1
    Each time the system boots, the module must be loaded on each node that comprises the cluster.
    To mount the OCFS partition, enter the following on each node:
    mount -t ocfs /dev/sda1 /ocfs
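    Because neither the FireWire drivers nor the OCFS module persist across reboots, the whole load-and-mount sequence is a candidate for a boot script. Here is a sketch for /etc/rc.d/rc.local on a Red Hat-style system, using the example paths and names from above (adjust name= per node):

```shell
# /etc/rc.d/rc.local additions (run as root at boot; sketch)
modprobe ohci1394                 # FireWire host controller driver
modprobe sbp2                     # FireWire storage (SBP-2) driver
insmod /root/ocfs.o name=node1    # OCFS driver; use name=node2 on node 2
mount -t ocfs /dev/sda1 /ocfs     # mount the shared OCFS partition
```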
    You now have a shared file system, owned by user oracle, mounted on each node. The shared file system will be used for all data, log, and control files. The modules have also been loaded, and the Oracle database software has been installed.
    You’re now ready for the final steps—configuring the Cluster Manager software and creating a database. To streamline this process, you can create a small script (env.sh) in the Oracle home to set up the environment, as follows:
    export ORACLE_HOME=/home/Oracle/9i
    export ORACLE_SID=node1
    export LD_LIBRARY_PATH=/home/Oracle/9i/lib
    export PATH=$ORACLE_HOME/bin:$PATH
    You can do the same for the second node—just change the second line above to export ORACLE_SID=node2.
    Execute (source) this file (env.sh) when you log in or from .login scripts as root or oracle.
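    Rather than editing env.sh by hand on the second machine, the SID line can be rewritten mechanically. A small sketch (env-node2.sh is a hypothetical name; everything runs in a scratch directory so it is safe to try):

```shell
cd "$(mktemp -d)"
# Recreate node 1's env.sh as shown above
cat > env.sh <<'EOF'
export ORACLE_HOME=/home/Oracle/9i
export ORACLE_SID=node1
export LD_LIBRARY_PATH=/home/Oracle/9i/lib
export PATH=$ORACLE_HOME/bin:$PATH
EOF
# Derive node 2's copy by rewriting only the SID line
sed 's/^export ORACLE_SID=node1$/export ORACLE_SID=node2/' env.sh > env-node2.sh
grep ORACLE_SID env-node2.sh    # → export ORACLE_SID=node2
```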
    Step 6. Configure Cluster Manager
    Cluster Manager maintains the status of the nodes and the Oracle instances across the cluster and runs on each node of the cluster.
    As user root or oracle, go to $ORACLE_HOME/oracm/admin on each node and create or change the cmcfg.ora and the ocmargs.ora files according to Listing 1.
    Be sure that the HostName in the cmcfg.ora file is correct for the machine — that is, node 1 has a file that contains node1, and node 2 has a file that contains node2.
    Before starting the database, make sure the Cluster Manager software is running. For convenience’s sake, add Cluster Manager to the rc script. As user root on each node, set up the Oracle environment variables (source env.sh):
    cd $ORACLE_HOME/oracm/bin
    ./ocmstart.sh
    The file ocmstart.sh is an Oracle-provided sample startup script that starts both the Watchdog daemon and Cluster Manager.
    Step 7. Configure Oracle init.ora, and Create a Database
    Listing 2 contains an example init.ora in $ORACLE_HOME/dbs. You can use it on each node to create initnode1.ora and initnode2.ora, respectively, by making the appropriate adjustments—that is, change node1 to node2 throughout the listing.
    You must now create the directories for the log files on node 1, as follows:
    cd $ORACLE_HOME
    mkdir admin ; cd admin ; mkdir node1 ; cd node1 ;
    mkdir udump ; mkdir bdump ; mkdir cdump
    Again, do the same for node 2, replacing node1 in the syntax example with node2.
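    Equivalently, both nodes’ directory trees can be created in one pass. In this sketch, ORACLE_HOME points at a scratch directory purely so the commands can be run standalone; substitute your real Oracle home:

```shell
# Create udump/bdump/cdump for each node under $ORACLE_HOME/admin
ORACLE_HOME="$(mktemp -d)"    # stand-in; use your real ORACLE_HOME
for node in node1 node2; do
    mkdir -p "$ORACLE_HOME/admin/$node"/{udump,bdump,cdump}
done
ls "$ORACLE_HOME/admin/node2"
```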
    Make a link for the Oracle password file on each node (these files may not yet exist):
    cd $ORACLE_HOME/dbs
    ln -sf /ocfs/orapw orapw
    Now that the setup is in place, the next step is to create a database. To simplify this process, use the shell script (create.sh) in Listing 3. Be sure to run the script from node 1 only, and be sure to run it only once. Run this script as user oracle; if all goes well, you will have created the database, added a second undo tablespace, and added and enabled a second log thread.
    You can start the database from either node in the cluster, as follows:
    sqlplus ’/ as sysdba’
    startup
    Finally, you can configure the Oracle listener, $ORACLE_HOME/network/admin/listener.ora, as you normally would on both nodes and start that as well.
    You should now be all set up!
    Wim Coekaerts ( [email protected]) is principal member of technical staff, Corporate Architecture, Development. His team works on continuing enhancements to the Linux kernel and publishes source code under the GPL in OTN’s Community Code section. For more information about Oracle and Linux, visit the OTN Linux Center or the Linux Forum.
    Toolbox
    Don’t tackle this as your first "getting to know Linux and Oracle" project. This article is brief and doesn’t provide detailed, blow-by-blow instructions for beginners. You should be comfortable with the UNIX operating system and with Oracle database installation in a UNIX environment. You’ll need all the software and hardware items in this list:
    Oracle9i Database Release 2 (9.2.0.1.0) for Linux (Intel). Download the Enterprise Edition, which is required for Oracle RAC.
    Linux distribution. We recommend Red Hat Linux Advanced Server 2.1, but you can download Red Hat 8.0 free from Red Hat. (However, please note that Red Hat doesn’t support the downloaded version.)
    Linux kernel patch for FireWire driver support, available under the Firewire Patches section. (Note that we’re updating these constantly, so the precise name may have changed.)
    OCFS for Linux. OCFS is not strictly required, but we recommend that you use it because it simplifies installation and configuration of the storage for the cluster. The file you need is fw-kernel-ocfs.tar.gz.
    Two Intel-based PCs
    Two NICs in each machine (although we’re only concerned in these instructions with configuring the private LAN that provides the heartbeat communication between the nodes in the cluster)
    Two FireWire interconnect cards
    One large FireWire drive for shared storage
    To supplement this QuickStart, you should also take a look at the supporting documentation, especially these materials:
    Release Notes for Oracle9i for Linux (Intel)
    Oracle9i Real Application Clusters Setup and Configuration
    Oracle Cluster Management Software for Linux (Appendix F in the Oracle9i Administrator’s Reference Release 2 (9.2.0.1.0) for UNIX Systems)
    Table 1: Hardware inventory and worksheet for FireWire-based cluster
    Per-node minimums (record your configuration details for Node 1 and Node 2):
    Minimum CPU: 500MHz (Celeron, AMD, or Pentium)
    Minimum RAM: 256MB
    Local hard drive free space: 3GB
    FireWire card: 1 (TI chipset)
    Network interface cards: 2 (1 for node interconnect; 1 for public network)
    Per-cluster minimums:
    FireWire hard drive: one 300-GB drive
    4-port FireWire hub: required for a 3-node cluster
    5-port FireWire hub: required for a 4-node cluster
    http://otn.oracle.com/oramag/webcolumns/2003/techarticles/coekaertsfirewiresetup.html
    Joel Pérez
    http://otn.oracle.com/experts

  • How to control the shared libs when creating an new OC4J  in AS 10.1.3.4

    Hi there
    I experience some weird behavior in AS 10.1.3.4!
    I have 2 different installations of AS 10.1.3.4 (Win 2003 server).
    When I create an OC4J instance (using the manager) on virtual server 1, I get 28 global libraries.
    When I create an OC4J instance (using the manager) on virtual server 2, I get 30 global libraries (+ apache.webservices & oracle.ifs.client).
    Why this difference ?
    And how to control it?
    Note the default instance "Home" has 28 libraries on both servers!!
    Why is it sometimes possible to use <instance>\applib for the jar, and sometimes I need to create an <instance>\shared-lib\global.libraries\1.0 library?
    Regards HAns

  • Trying a recovery from a livedisc and chroot fails to run /bin/bash?!?

    The title pretty much says it all.
    Trying to do a repair on my daughter's laptop, booting from a live image on USB.  It's an old eeepc with only 1 partition, all on sda1.
    I'm using the Kernel Panics wiki like I usually do when I have to go through this: ( https://wiki.archlinux.org/index.php/Kernel_panic) but when I try to chroot, I get this:
    chroot: failed to run command '/bin/bash': No such file or directory
    Any ideas?  I'm pulling my hair out here.

    Ah, maybe you're trying to use 64-bit binaries from the USB on a 32-bit system, or vice versa.
    Last edited by karol (2010-12-15 18:28:42)

  • [solved] ssh will only login to /bin/bash

    I have a machine with a few users and an ssh server.
    I would like to setup a user with rbash or nologin for the shell but if i do that, I can't ssh onto that user.
    my /etc/passwd
    zidar:x:1000:100:zidar:/home/zidar:/bin/bash
    smotko:x:1001:1001::/home/smotko:/bin/rbash
    now
    ssh zidar@pc
    su smotko #this works
    # but this doesnt
    ssh smotko@pc # works only if i change the passwd file back to /bin/bash
    this is with my log for ssh with rbash
    Dec 25 03:17:25 arch-dev sshd[636]: Failed password for smotko from 10.0.2.2 port 49075 ssh2
    and same with bash
    Dec 25 03:20:18 arch-dev sshd[678]: Accepted password for smotko from 10.0.2.2 port 49088 ssh2
    and this is how ssh looks with anything other than /bin/bash in /etc/passwd
    $ ssh smotko
    smotko@pc's password:
    Permission denied, please try again.
    password is correct because i can use it locally
    thank you for helping
    Last edited by zidarsk8 (2014-03-14 14:59:17)

    The Arch bash package doesn't actually include a /usr/bin/rbash command. So not only is rbash missing from the /etc/shells file, you are also pointing the account's shell at something that doesn't exist. See the RESTRICTED SHELL section of the bash man page.
    I think what you are after might be achievable by simply using 'set' in the given user's startup files.
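    For what it's worth, the man page behavior is easy to demonstrate: bash runs restricted whenever it is invoked under the name rbash, so a symlink is enough. A sketch in a scratch directory (for real use you'd put the link somewhere like /usr/local/bin and add that path to /etc/shells):

```shell
# bash behaves as a restricted shell when argv[0] is "rbash"
mkdir -p /tmp/rbash-demo
ln -sf "$(command -v bash)" /tmp/rbash-demo/rbash
# A restricted shell refuses to change directory:
/tmp/rbash-demo/rbash -c 'cd /' 2>&1 | grep -o restricted    # → restricted
```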

  • [Solved] How to save a file in /usr/local/bin using gedit as an admin?

    In my attempt to run these two programs:
    https://github.com/kerchen/export_gr2evernote
    https://github.com/spiermar/bookmarks2evernote
    I noticed they assumed the default python is python 2. I have python 3.3.1 and 2.6.8. I'm following the instructions at: https://wiki.archlinux.org/index.php/Py … ld_scripts
    Created a file with this:
    #!/bin/bash
    script=`readlink -f -- "$1"`
    case "$script" in
    /home/Dropbox/export_gr2evernote-master1/*|/home/Dropbox/bookmarks2evernote2/*)
        exec python2 "$@"
    esac
    exec python3 "$@"
    I then, tried to save it as p2 under /usr/local/bin/ where the only thing that's there is meteor, node, and npm, and it throws me an error saying I don't have enough permissions.
    How do I save p2 as an admin in /usr/local/bin using gedit?
    I've searched on the forums 'create a file in /usr/' and I haven't found anything. Can someone please help me?
    Last edited by jjshinobi (2013-04-23 05:20:47)

    jjshinobi wrote:
    sidneyk wrote:
    jjshinobi wrote:
    I moved p2 to bin, edited my projects path with the username in front of home, saved it as python, made it executable (#sudo chmod +x /usr/local/bin/python), removed p2. Ran:
    #python evernote2enex.py -m 10
    It's still referencing python 3...
    "python3: can't open file 'evernote2enex.py': [Errno 2] No such file or directory"
    My python file contains this:
    #!/bin/bash
    script=`readlink -f -- "$1"`
    case "$script" in
    /home/<user>/Dropbox/export_gr2evernote-master1/*|/home/<user>/Dropbox/bookmarks2evernote2/*)
        exec python2 "$@"
    esac
    exec python3 "$@"
    I did everything exactly like the wiki said, what seems to be the problem?
    OK. Looking at the 2 scripts you quoted, neither one is using a shebang line to set a python version to call. Without that, the python intercept doesn't even come into play. The problem that I see is the way you are starting it:
    sudo nano /usr/local/bin/python
    Is root really necessary for these if they are in your /home/<user>/Dropbox/ directory. And it appears that you are assuming python3 when you start them that way. If you are going to use that method and they are indeed python2 scripts, then change your command to this:
    #python2 evernote2enex.py -m 10
    You might try it as your normal user too in case root isn't really required.
    I tested out the first ten with:
    #python2 export2enex.py -m 10
    it worked!
    then proceeded with
    #python2 export2enex.py GStarred -n exported.enex
    The script is fully functional. Thanks man!
    Actually, you're not even using the script when starting your python scripts that way. You are explicitly calling either python3 (python) or python2 (python2) to run the indicated python script and not even touching the work around /usr/local/bin/python script. If your python script started with a line like either :
    #!/usr/bin/env python
    or:
    #!/usr/bin/python
    And was executable, such that you called it by its name (i.e., "someprogram.py"), then the python script in /usr/local/bin/ would take precedence and effectively intercept the python call, choosing the appropriate version of python, as long as the path to the directory where "someprogram.py" resides is listed in that /usr/local/bin/python script.
    Last edited by sidneyk (2013-04-25 15:40:19)
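    That interception is easy to see in miniature. The sketch below (all hypothetical names, staged in a scratch directory) puts a fake python wrapper first in PATH and runs a script whose shebang is #!/usr/bin/env python:

```shell
demo="$(mktemp -d)"
mkdir -p "$demo/bin"
# A stand-in for the /usr/local/bin/python wrapper from the wiki
cat > "$demo/bin/python" <<'EOF'
#!/bin/sh
echo "wrapper: choosing a python version for $1"
EOF
chmod +x "$demo/bin/python"
# A script that reaches "python" via its shebang line
cat > "$demo/someprogram.py" <<'EOF'
#!/usr/bin/env python
print("real interpreter would run this")
EOF
chmod +x "$demo/someprogram.py"
PATH="$demo/bin:$PATH" "$demo/someprogram.py"
```

    Invoked by name, the script hits the wrapper first; calling python2 someprogram.py explicitly bypasses it, which is why the forced-version commands above worked without touching the shim.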

  • I get error when I run Terminal /bin/bash: Please run this as root.

    Help!
    Every time I open Terminal I get:
    /bin/bash: Please run this as root.
    [Process completed]
    Any suggestions?

    FLYFI5H wrote:
    Help!
    Every time I open Terminal I get:
    /bin/bash: Please run this as root.
    [Process completed]
    Any suggestions?
    I wonder if there's a permissions problem with your /bin/bash. If you go to the /bin directory (presumably with the Finder's Go -> Go to Folder command, as Terminal isn't working for you), then do a "Get Info" on bash, what do you see as the permissions? Mine shows "system: Read & Write", "wheel: Read only", and "everyone: Read only".
    If you can't run Terminal, your diagnostic options may be limited. Do you have a second Mac that you could use to investigate the problem with this Mac in "target disk mode"?
