Error with makepkg - no error without [solved]

When compiling audacity with the standard PKGBUILD but with different configure options (./configure --prefix=/usr --with-portaudio=v19 --without-portmixer), configure exits with the following error:
configure: warning: CC=gcc: invalid host type
configure: warning: CXX=g++: invalid host type
configure: error: can only configure for one host and one target at a time
configure: error: /bin/sh './configure' failed for lib-src/portaudio-v19
If the same configure command is run outside makepkg, it completes without errors.
This configure script calls another configure located in a subdirectory, and the error occurs when that second configure is invoked.
Is there a bug here, or is it by design, on the assumption that nested configure scripts should not be written this way?

Thanks for the tip. After playing with makepkg.conf, it appears that the error occurs when audacity is compiled with the standard options:
export CFLAGS="-march=i686 -O2 -pipe"
export CXXFLAGS="-march=i686 -O2 -pipe"
Without them, it compiles well.
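For anyone who hits the same thing, here is a minimal workaround sketch (untested; the source directory name is assumed, and it only reflects the finding above that the exported flags trip up the nested portaudio-v19 configure): drop the flags just for this package inside the PKGBUILD's build() function.

build() {
  cd "$srcdir/audacity-$pkgver"
  # The bundled portaudio-v19 configure rejects the exported flags with
  # "invalid host type", so build this package without them.
  unset CFLAGS CXXFLAGS
  ./configure --prefix=/usr --with-portaudio=v19 --without-portmixer
  make
}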

Similar Messages

  • [SOLVED]Error with makepkg and nspluginwrapper.

    Hi all,
    I am trying to install flash plugin for firefox.  When I run makepkg I get this error...
    ==> ERROR: install scriptlet (nspluginwrapper.install) does not exist.
    I am not sure what is wrong, am I missing something?
    Diesel1.
    Last edited by diesel1 (2007-07-26 21:09:09)

    skottish wrote:
    diesel1 wrote: do I need PKGBUILD files in each directory?
    Yes. You need to build and install nspluginwrapper before you build nspluginwrapper-flash. So, unpack nspluginwrapper, cd into that directory, and follow the instructions on the Wiki. Once successful, do the same for nspluginwrapper-flash.
    I finally realised I was using the wrong tarball! Now that I have the correct files, it works OK.
    Thanks for the help skottish.
    Diesel1.
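    For reference, a minimal sketch of the build order skottish describes (directory names assumed to match the package names mentioned in this thread):
    cd nspluginwrapper
    makepkg -si    # -s installs missing deps, -i installs the built package via pacman
    cd ../nspluginwrapper-flash
    makepkg -si    # build the flash package only after the wrapper is installed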

  • Server 2008 R2 Terminal Server c0000005 and c0000006 errors with KERNELBASE.dll error in RDP sessions.

    We installed a new Server 2008 R2 server this spring. It is configured as a Terminal Server with 16 Wyse t10 thin clients connecting with RDP. We are a small resort, so we run Springer Miller Host and SpaSoft.
    Ever since moving from PCs to the thin clients and TS, we have been seeing SpaSoft randomly crash with KERNELBASE.dll errors and also a lot of c0000005 and c0000006 errors. Springer Miller support says it's a Microsoft networking issue, so no help there.
    The company we worked with to install this has no clue, so I hope to reach out to this community to solve this. I can also open a support case with Microsoft Support, but I am not sure if they will be of any help. Here are some examples of the errors:
    "Event 1000
    Faulting application name: SpaWin.exe, version: 3.4.0.0, time stamp: 0x2a425e19
    Faulting module name: KERNELBASE.dll, version: 6.1.7601.18409, time stamp: 0x53159a86
    Exception code: 0x0eedfade
    Fault offset: 0x0000c42d
    Faulting process id: 0x3718
    Faulting application start time: 0x01cf9ac7922487e9
    Faulting application path: \\spasoft\spaapps\TermServ\SpaWin.exe
    Faulting module path: C:\Windows\syswow64\KERNELBASE.dll
    Report Id: cfdcf9a9-06ba-11e4-a5af-000c298d9aa5"
    Also:
    "Faulting application name: VH.EXE, version: 18.80.430.0, time stamp: 0x3f73b447
    Faulting module name: VFP8R.DLL, version: 8.0.0.3117, time stamp: 0x3f73c232
    Exception code: 0xc0000006
    Fault offset: 0x0001d598
    Faulting process id: 0x2bc8
    Faulting application start time: 0x01cf99297ee1742f
    Faulting application path: J:\HOSTPLUS\fxp32\VH.EXE
    Faulting module path: J:\HOSTPLUS\fxp32\VFP8R.DLL
    Report Id: 5fbc65c4-055c-11e4-a5af-000c298d9aa5"
    And:
    "Faulting application name: SpaWin.exe, version: 3.4.0.0, time stamp: 0x2a425e19
    Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000
    Exception code: 0xc0000005
    Fault offset: 0x00000000
    Faulting process id: 0x2aa0
    Faulting application start time: 0x01cf988e6770e619
    Faulting application path: \\spasoft\spaapps\TermServ\SpaWin.exe
    Faulting module path: unknown
    Report Id: 01029179-04b9-11e4-a5af-000c298d9aa5"
    There are no network-specific errors in any of the logs. I am thinking that SpaSoft needs to be run as an administrator, but I am not sure how to do that in this environment. We have disabled DEP for all but essential Windows processes. All of the firewall
    and AV software has been completely disabled, and we are still getting these errors. Is there a way to run these applications as admin without the user actually being an admin? Should I go ahead and open a Microsoft support case? Thanks

    Hi,
    Thank you for posting in Windows Server Forum.
    Explanation
    The indicated program stopped unexpectedly. The message contains details on which program and module stopped. A matching event with Event ID 1001 might also appear in the event log. This matching event displays information about the specific error that occurred.
    User Action
    If an error report was generated for this error, you might be able to obtain more information about the error by sending the report to Microsoft for analysis.
    Yes, you can open a support case with Microsoft, as they will help and guide you to a proper solution. You can also go through
    this source article.
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • Strange Display Errors with certain Applications (ATI X1600) [SOLVED]

    Hi everyone!
    I have been using Arch now for a year or so and am very happy with it, but now there is a problem I unfortunately can't solve on my own.
    For a month or so, certain applications (like emacs, feh or gimp, if I try to manipulate the drawing surface) have been causing very strange display errors. They can't be seen when I try to take a screenshot, so I took two photographs:
    Image 1 (one instance of emacs under awesome-wm): http://img850.imageshack.us/img850/5186 … 112158.jpg
    Image 2 (two instances of emacs under awesome-wm): http://img24.imageshack.us/img24/822/im … 112158.jpg
    I already tried another window manager (musca), but the same applications cause the same type of error there.
    Maybe the version 6.14.0-1 of xf86-video-ati broke my system, but I don't know.
    Has anyone had similar problems or probably a solution?
    Thanks in advance
    Maak
    P.S.: I hope this is the right forum for this topic.
    Files:
    /var/log/Xorg.0.log          https://pastee.org/q3nn2
    Output of lspci                 https://pastee.org/db7n8
    UPDATE:
    Hi everyone, again!
    I just successfully downgraded the driver and the errors are gone.
    If someone has the same errors, here is the PKGBUILD (taken from SVN; only the version numbers and sha1sums are modified):
    pkgname=xf86-video-ati
    pkgver=6.13.2
    pkgrel=1
    pkgdesc="X.org ati video driver"
    arch=(i686 x86_64)
    url="http://xorg.freedesktop.org/"
    license=('custom')
    depends=(libpciaccess libdrm udev pixman ati-dri)
    makedepends=('xorg-server-devel' 'libdrm' 'xf86driproto' 'mesa')
    conflicts=('xorg-server<1.9.0')
    groups=('xorg-drivers' 'xorg')
    options=('!libtool')
    source=(${url}/releases/individual/driver/${pkgname}-${pkgver}.tar.bz2)
    sha1sums=('f9d379a884a833829ab1942de4ad4f4766cdcd46')
    build() {
      cd "${srcdir}/${pkgname}-${pkgver}"
      ./configure --prefix=/usr --enable-dri
      make
    }
    package() {
      cd "${srcdir}/${pkgname}-${pkgver}"
      make "DESTDIR=${pkgdir}" install
      install -m755 -d "${pkgdir}/usr/share/licenses/${pkgname}"
      install -m644 COPYING "${pkgdir}/usr/share/licenses/${pkgname}/"
    }
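    To try the same downgrade, roughly (the package filename is a guess based on the pkgver/pkgrel above; add xf86-video-ati to IgnorePkg in /etc/pacman.conf if you don't want pacman to upgrade it again):
    makepkg -f                                       # build the 6.13.2 package from this PKGBUILD
    pacman -U xf86-video-ati-6.13.2-1-*.pkg.tar.xz   # install/downgrade (run as root)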
    Last edited by Maak (2011-03-11 23:32:46)

    http://bbs.archlinux.org/viewtopic.php?id=89926

  • Synchronization errors with AD: LDAP error code 65 : orclObjectSid

    I'm trying to get synchronization working - importing data from Microsoft AD.
    The bootstrap seemed to go OK, and the synchronization is up and running - but I still get errors in the profile's trace file, as shown at the end of this post.
    The errors always seem to complain about the orclObjectSid attribute.
    Do I need to do anything to the OID schema?
    Or is this a mapping problem?
    Either way, how would I correct this error?
    Thanks!!
    Howard Dickins
    Here's an example of the errors I'm getting:
    DN : dc=connectutilities,dc=co,dc=uk
    Normalized DN : dc=connectutilities,dc=co,dc=uk
    Processing modifyRadd Operation ..
    Proceeding with checkNReplace..
    Performing checkNReplace..
    Naming attribute: dc
    Naming attribute value: dc
    Naming attribute value: orclObjectSID
    Adding Attribute in OID : orclObjectSID
    Naming attribute value: orclobjectguid
    Adding Attribute in OID : orclobjectguid
    Total # of Mod Items : 2
    Exception Modifying Entry : javax.naming.directory.SchemaViolationException: [LDAP: error code 65 - Failed to find orclobjectsid in mandatory or optional attribute list.]; remaining name 'dc=connectutilities,dc=co,dc=uk'
    javax.naming.directory.SchemaViolationException: [LDAP: error code 65 - Failed to find orclobjectsid in mandatory or optional attribute list.]; remaining name 'dc=connectutilities,dc=co,dc=uk'
         at com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3019)
         at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2934)
         at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2740)
         at com.sun.jndi.ldap.LdapCtx.c_modifyAttributes(LdapCtx.java:1440)
         at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_modifyAttributes(ComponentDirContext.java:255)
         at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.modifyAttributes(PartialCompositeDirContext.java:172)
         at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.modifyAttributes(PartialCompositeDirContext.java:161)
         at javax.naming.directory.InitialDirContext.modifyAttributes(InitialDirContext.java:146)
         at oracle.ldap.odip.gsi.LDAPWriter.checkNReplace(LDAPWriter.java:839)
         at oracle.ldap.odip.gsi.LDAPWriter.modifyRadd(LDAPWriter.java:717)
         at oracle.ldap.odip.gsi.LDAPWriter.writeChanges(LDAPWriter.java:310)
         at oracle.ldap.odip.engine.AgentThread.mapExecute(AgentThread.java:581)
         at oracle.ldap.odip.engine.AgentThread.execMapping(AgentThread.java:306)
         at oracle.ldap.odip.engine.AgentThread.run(AgentThread.java:186)
    [LDAP: error code 65 - Failed to find orclobjectsid in mandatory or optional attribute list.]
    Entry Not Found. Converting to an ADD op..
    Processing Insert Operation ..
    Performing createEntry..
    Exception creating Entry : javax.naming.directory.SchemaViolationException: [LDAP: error code 65 - Failed to find orclobjectsid in mandatory or optional attribute list.]; remaining name 'dc=connectutilities,dc=co,dc=uk'
    [LDAP: error code 65 - Failed to find orclobjectsid in mandatory or optional attribute list.]
    javax.naming.directory.SchemaViolationException: [LDAP: error code 65 - Failed to find orclobjectsid in mandatory or optional attribute list.]; remaining name 'dc=connectutilities,dc=co,dc=uk'
         at com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3019)
         at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2934)
         at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2740)
         at com.sun.jndi.ldap.LdapCtx.c_createSubcontext(LdapCtx.java:777)
         at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_createSubcontext(ComponentDirContext.java:319)
         at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.createSubcontext(PartialCompositeDirContext.java:248)
         at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.createSubcontext(PartialCompositeDirContext.java:236)
         at javax.naming.directory.InitialDirContext.createSubcontext(InitialDirContext.java:176)
         at oracle.ldap.odip.gsi.LDAPWriter.createEntry(LDAPWriter.java:1031)
         at oracle.ldap.odip.gsi.LDAPWriter.insert(LDAPWriter.java:386)
         at oracle.ldap.odip.gsi.LDAPWriter.modifyRadd(LDAPWriter.java:725)
         at oracle.ldap.odip.gsi.LDAPWriter.writeChanges(LDAPWriter.java:310)
         at oracle.ldap.odip.engine.AgentThread.mapExecute(AgentThread.java:581)
         at oracle.ldap.odip.engine.AgentThread.execMapping(AgentThread.java:306)
         at oracle.ldap.odip.engine.AgentThread.run(AgentThread.java:186)
    DIP_LDAPWRITER_ERROR_CREATE
    Error in executing mapping DIP_LDAPWRITER_ERROR_CREATE
    DIP_LDAPWRITER_ERROR_CREATE
         at oracle.ldap.odip.engine.AgentThread.mapExecute(AgentThread.java:722)
         at oracle.ldap.odip.engine.AgentThread.execMapping(AgentThread.java:306)
         at oracle.ldap.odip.engine.AgentThread.run(AgentThread.java:186)
    DIP_LDAPWRITER_ERROR_CREATE
    AD_OID_Import:Error in Mapping EngineDIP_LDAPWRITER_ERROR_CREATE
    DIP_LDAPWRITER_ERROR_CREATE
         at oracle.ldap.odip.engine.AgentThread.mapExecute(AgentThread.java:741)
         at oracle.ldap.odip.engine.AgentThread.execMapping(AgentThread.java:306)
         at oracle.ldap.odip.engine.AgentThread.run(AgentThread.java:186)
    AD_OID_Import:about to Update exec status
    Updated Attributes
    orclodipLastExecutionTime: 20090617062658
    orclodipConDirLastAppliedChgNum: 12242192
    orclOdipSynchronizationStatus: Mapping Failure, Agent Execution Not Attempted
    orclOdipSynchronizationErrors:
    Sleeping for 1secs
    LDAP URL : (inexus-srv01:389 oracleextract
    Specifying binary attributes: mpegvideo objectguid objectsid guid usercertificate orclodipcondirlastappliedchgnum
    LDAP Connection success
    Applied ChangeNum : 12242192Available chg num = 12245972
    Reader Initialised !!
    LDAP URL : (inexus-srv34:389 cn=odisrv+orclhostname=inexus-srv34,cn=registered instances,cn=directory integration platform,cn=products,cn=oraclecontext
    Specifying binary attributes: mpegvideo objectguid objectsid guid usercertificate orclodipcondirlastappliedchgnum
    LDAP Connection success
    Writer Initialised!!
    Writer proxy connection initialised!!
    MapEngine Initialised!!
    Filter Initialised!!
    searchF :
    CHGLOGFILTER : (&(USNChanged>=12242193)(USNChanged<=12242692))
    Search Time 0
    Search Successful till # 12242692
    Search Changes Done
    Changenumber uSNChanged: 12242193
    targetdn distinguishedName: DC=connectutilities,DC=co,DC=uk
    ChangeRecord : ----------
    Changetype: ADDRMODIFY
    ChangeKey: dc=connectutilities,dc=co,dc=uk
    Attributes:
    Class: null Name: objectGUID Type: null ChgType: REPLACE Value: [[B@1c999c4]
    Class: null Name: objectSid Type: null ChgType: REPLACE Value: [[B@8e5360]
    Class: null Name: dc Type: null ChgType: REPLACE Value: [connectutilities]
    Class: null Name: objectClass Type: nonbinary ChgType: REPLACE Value: [top, domain, domainDNS]
    -----------

    I found a solution - I added the offending attribute orclObjectSid to the domain objectClass as an optional attribute.
    It was a bit of a "clutching at straws" solution - but it does seem to have worked.
    I'm not sure why the data being imported had such a value, but the synchronization hasn't thrown up any further errors since then.
    Thanks for your help everyone.
    Howard
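    In case it helps anyone else, the schema change described above would look roughly like this. It is only an illustration: the host, port and bind credentials are placeholders, and the existing 'domain' objectclass definition should be copied from your own OID via cn=subschemasubentry rather than typed from scratch.
    # 1. Read the current definition of the 'domain' objectclass
    ldapsearch -h oid-host -p 389 -D cn=orcladmin -w <password> \
      -b cn=subschemasubentry -s base "(objectclass=*)" objectclasses | grep -i "'domain'"
    # 2. Apply an LDIF (modify_domain.ldif) that replaces that definition with the
    #    same one plus orclObjectSid appended to the MAY list:
    #      dn: cn=subschemasubentry
    #      changetype: modify
    #      delete: objectclasses
    #      objectclasses: <existing 'domain' definition from step 1>
    #      -
    #      add: objectclasses
    #      objectclasses: <same definition with orclObjectSid added to MAY>
    ldapmodify -h oid-host -p 389 -D cn=orcladmin -w <password> -f modify_domain.ldif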

  • Adcfgclone.pl dbTier error with RC-50004: Error occurred in CloneContext: null

    PROMPT :
    Target System Hostname (virtual or normal) [migrate]
    ANSWER :
    migrate
    PROMPT :
    Target Instance is RAC (y/n) [n]
    ANSWER :
    n
    PROMPT :
    Target System Database SID
    ANSWER :
    EBS12
    PROMPT :
    Target System Base Directory
    ANSWER :
    /u01/R12/ora11
    PROMPT :
    Target System utl_file_dir Directory List
    ANSWER :
    /usr/tmp
    PROMPT :
    Number of DATA_TOP's on the Target System [1]
    ANSWER :
    1
    PROMPT :
    Target System DATA_TOP Directory 1 [/u01/11i/ora11/ebs11data]
    ANSWER :
    /u01/R12/ora11/db/ebs11data
    Creating /u01/R12/ora11/db/tech_st/11.1.0/appsutil/clone/data/stage/addbhomtgt.xml which will contain Target system database mount points.
    PROMPT :
    Target System RDBMS ORACLE_HOME Directory [/u01/R12/ora11/db/tech_st/11.1.0]
    ANSWER :
    /u01/R12/ora11/db/tech_st/11.1.0
    Creating /u01/R12/ora11/db/tech_st/11.1.0/appsutil/clone/data/stage/addbhomtgt.xml which will contain Target system database mount points.
    PROMPT :
    Do you want to preserve the Display [null] (y/n)
    ANSWER :
    n
    PROMPT :
    Target System Display [migrate:0.0]
    ANSWER :
    migrate:0.0
    PROMPT :
    Do you want the the target system to have the same port values as the source system (y/n) [y] ?
    ANSWER :
    n
    Started testing the availabilty of ports in port pool 1
    Checking  Database Port on migrate:  Port Value = 1522
       Database Port available:  Port Value = 1522
    Checking  DB ONS Local Port on migrate:  Port Value = 6301
       DB ONS Local Port available:  Port Value = 6301
    Checking  DB ONS Remote Port on migrate:  Port Value = 6401
       DB ONS Remote Port available:  Port Value = 6401
    INFO: Unable to obtan DB Version!!!
    INFO: Because DB Version could not be obtained, defaulting s_jdktop and s_jretop to adxdbctx.tmp defined values
    setDestination s_contextfile to : /u01/R12/ora11/db/tech_st/11.1.0/appsutil/EBS12_migrate.xml
    Clone Context Parameters:
            Pairs File      = /tmp/adpairsfile_31321.lst
            Target XML File = /u01/R12/ora11/db/tech_st/11.1.0/appsutil/EBS12_migrate.xml
            Template File   = /u01/R12/ora11/db/tech_st/11.1.0/appsutil/template/adxdbctx.tmp
    The following values will be used to create the context file
      s_isWeb  =  YES
      s_db_listener  =  EBS12
      s_db_util_filedir  =  /usr/tmp
      s_db_rollback_segs  =  NOROLLBACK
      s_dbhome4  =  /u01/R12/ora11/db/ebs11data
      s_dbhome3  =  /u01/R12/ora11/db/ebs11data
      s_dbhost  =  migrate
      s_db_oh  =  /u01/R12/ora11/db/tech_st/11.1.0
      s_dbhome2  =  /u01/R12/ora11/db/ebs11data
      s_dbhome1  =  /u01/R12/ora11/db/ebs11data
      s_dbgroup  =  dba
      s_dbGlnam  =  EBS12
      s_dbdomain  =  evosys.co.in
      s_dbSid  =  EBS12
      s_dbuser  =  ora11
      s_isForms  =  YES
      s_database_type  =
      s_temp  =  /u01/R12/ora11/db/tech_st/11.1.0/appsutil/temp
      s_db_ons_localport  =  6301
      s_hostname  =  migrate
      s_dbSidLower  =  ebs12
      s_cmanport  =  1522
      s_dbCluster  =  false
      s_domainname  =  evosys.co.in
      s_isAdmin  =  YES
      s_isConc  =  YES
      s_contextfile  =  /u01/R12/ora11/db/tech_st/11.1.0/appsutil/EBS12_migrate.xml
      s_dbport  =  1522
      s_display  =  migrate:0.0
      s_contextname  =  EBS12_migrate
      s_db_ons_remoteport  =  6401
      s_clonestage  =  /u01/R12/ora11/db/tech_st/11.1.0/appsutil/clone
      s_base  =  /u01/R12/ora11
    Clone Context will now iteratively apply changes
    to create the new target context file.
    instantiate file:
       source : /u01/R12/ora11/db/tech_st/11.1.0/appsutil/temp.xml
       dest   : /tmp/tmpCtxClone.xml
    instantiate file:
       source : /tmp/tmpCtxClone.xml
       dest   : /tmp/tmpCtxClone.xml
    instantiate file:
       source : /tmp/tmpCtxClone.xml
       dest   : /tmp/tmpCtxClone.xml
    instantiate file:
       source : /tmp/tmpCtxClone.xml
       dest   : /tmp/tmpCtxClone.xml
    instantiate file:
       source : /tmp/tmpCtxClone.xml
       dest   : /u01/R12/ora11/db/tech_st/11.1.0/appsutil/EBS12_migrate.xml
    instantiate file:
       source : /u01/R12/ora11/db/tech_st/11.1.0/appsutil/temp.xml
       dest   : /tmp//dummy.xml
    instantiate file:
       source : /tmp//dummy.xml
       dest   : /tmp//dummy.xml
    instantiate file:
       source : /tmp//dummy.xml
       dest   : /tmp//dummy.xml
    instantiate file:
       source : /tmp//dummy.xml
       dest   : /tmp//dummy.xml
    instantiate file:
       source : /tmp//dummy.xml
       dest   : /tmp//dummy.xml
    PROMPT :
    Source and Target platforms differ.  Rapid Clone will perform a platform migration.
    Do you wish to continue?  [y]
    ANSWER :
    y
    The values for these variables will be retained from the source context
      s_admin_restrictions  =  OFF
      s_metalink_id  =
      s_parallel_max_servers  =  8
      s_systemcsi  =  N/A
      s_apps_version  =  12.1.1
      s_contextserial  =  0
      s_instThread  =  0
      s_sqlnet_expire_time  =  10
      s_contexttype  =  Database Context
      s_db_sga_target  =  1G
      s_dbsharedpool_size  =  300000000
      s_db_shared_pool_size  =  400M
      s_clusterServicePort  =  9998
      s_dbcache_size  =  163577856
      s_rapidwizloc  =  /tmp/RapidInstall
      s_db_pga_aggregate_target  =  1G
      s_dbtype  =  VISION
      s_proxyhost  =
      s_nthreads  =  5
      s_db_processes  =  200
      s_enable_listener_password  =  OFF
      s_alt_service_instances  =
      s_dbClusterInst  =  1
      s_installedFrom  =  FS
      s_undo_tablespace  =  APPS_UNDOTS1
      s_adjreopts  =  -Xms128M -Xmx512M
      s_dbQuorumDisk  =
      s_db_linkctrl  =
      s_dbcomp  =  oracle.apps.dbseed.fresh
      s_db_sessions  =  400
      s_installloc  =  /tmp
      s_isDB  =  YES
      s_proxyport  =
      s_database  =  db111
      s_dbseed  =  No Database
      s_apps_user  =  APPS
      s_dbfiles  =  512
      s_db_shared_pool_reserved_size  =  40M
      s_bits  =  32
      s_dbblock_buffers  =  20000
      s_sys_user  =  SYS
      s_dlsnstatus  =  enabled
      s_db_plsql_native_library_subdir_count  =  149
      s_country_code  =
      s_subscribe_for_node_down_event  =  OFF
      s_instNumber  =  0
    The new context file has been created at:
            /u01/R12/ora11/db/tech_st/11.1.0/appsutil/EBS12_migrate.xml
    Performing file system cleanup specific to Platform Migration:
    StackTrace:
    java.lang.NullPointerException
            at java.util.Hashtable.put(Unknown Source)
            at oracle.apps.ad.clone.util.CloneCleanser.doMigrate(CloneCleanser.java:98)
            at oracle.apps.ad.context.CloneContext.doClone(CloneContext.java:718)
            at oracle.apps.ad.context.CloneContext.main(CloneContext.java:5266)
    RC-50004: Error occurred in CloneContext:
    null
    Context file creation not succesful
    I am trying to clone R12.1.1 with 11.1.0.7 from Linux 5.5 32-bit to Linux 5.8 64-bit.
    The migration fails.
    So please help me out with this.
    Thanks in Advance

    Hi,
    A database clone alone will not cover the migration process. In order to clone from 32-bit to 64-bit, please follow the instructions in the note below:
    Migrating Oracle E-Business Suite R12 from Linux 32-bit to Linux 64-bit (Doc ID 471566.1)
    Thanks &
    Best Regards,
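    (As a rough illustration only, with paths taken from the log above: once the target has been prepared per that note, the dbTier clone is re-run from the clone/bin directory, and capturing the full output makes the next failure easier to diagnose than the RC-50004 summary alone.)
    cd /u01/R12/ora11/db/tech_st/11.1.0/appsutil/clone/bin
    perl adcfgclone.pl dbTier 2>&1 | tee /tmp/adcfgclone_dbTier.log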

  • WEB ADI Error with Excel - Compile Error

    I am trying to export data to Excel. The process works until the data is loaded into Excel, whereupon I get a Microsoft Visual Basic error "Compile Error: User-defined type not defined."
    Behind the error message are several screens from VB with the following line highlighted: "Dim oParser As New SAXXMLReader30".

    Please post the details of the application release, database version and OS.
    "I am trying to export data to Excel. The process works until the data is loaded to Excel whereupon I get a Microsoft Visual Basic error 'Compile Error: User-defined type not defined.'"
    Can you find any details about the error in the BNE.log file? -- How to Create a BNE Log For Web Adi Issues and Errors? [ID 817023.1]
    "Behind the error message is several screens from VB with the following line highlighted 'Dim oParser As New SAXXMLReader30'"
    Please see if (Web ADI: Compile Error - Userdefined Type Not Defined LEDGER_ID Details [ID 1319992.1]) is applicable.
    Thanks,
    Hussein

  • AMS Setup Manager Diagnostics Test erroring with "invalid snapshot" error

    While running the Diagnostic Test for Setup Manager on the AMS, it is failing with errors in the stages below:
    * Testgroup: CheckEBSHome
    * Testgroup: CheckAgentStage
    * Testgroup: CheckAgentHome
    with the error below:
    Snapshot information is not populated
    Any idea?
    Regards
    Sree

    The OID Setup Diagnostics Test is failing due to this: the 11510 home page is redirecting to the SSO page, and if I give login details, I am not able to log in to the 11510 instance. I created an OID user the same as the 11510 user, and I am getting an internal server error in the browser.
    Thanks,
    Panneer.

  • Yosemite Server: Apache shm errors with proxy_balancer | SASL errors using user alias

    Dear OSX Server Ninjas,
    I recently upgraded my 10.9.5 Mac mini running Server 3.2.2 to Yosemite + Server v4, directly to v4.0.3. The box serves the whole Mail/Cal/Contacts/VPN palette as well as some websites, Git repos and some custom software. See below for network setup.
    After battling the most blatant issues (some easy to fix but hard to find, others hard on both), I'm almost back to a working server setup … but some things keep bugging me even after days of log-reading and OD tricks. In the last days, I have read countless discussions here as well as on stackoverflow/serverfault/apple.stackexchange and blogs, so I'll try to be as thorough and precise as possible to show what I have tried so far – sorry in advance for the long post
    … and since you might not read to the bottom: Thanks in advance for any help, especially with the Apache problem!
    My setup
    OSX Server v4.0.3 running on Yosemite 10.10.1 ➞ both latest official releases
    host + DNS
    Hostname configured for public domain mydomain.net, web traffic on selected ports comes in through router NAT
    Server has a static IP on the local subnet 192.168.178.0/24
    Server running local DNS and performing lookups for all clients in the local subnet, forwarding to the router at 192.168.178.1
    Primary zone for mydomain.net with records (A, NS, MX) pointing to said static IP
    changeip -checkhostname is successful
    Public IP is currently configured at the domain registrar through his name servers
    SSL
    Trusted third-party certificate installed for host.mydomain.net (Common Name + SAN for two subdomains)
    Used to secure all services
    Qualys SSL Test Grade B (capped due to OSX's openssl 0.9.x not being capable of TLS1.2 and Intermediate CA SHA-1)
    TLS working both inbound and outbound according to CheckTLS.com
    OpenDirectory
    Recreated after upgrade (probably not necessary, since issue persists), re-imported groups + users from WGM backup files
    Only the server itself is bound to the directory, other devices just access services through network accounts (CRAM-MD5, MD5-Digest)
    Problems
    Apache shm errors: Apache fails to create slot memory when proxy module is enabled
    As soon as I start a service which requires the Apache proxy_balancer module (e.g. Cal/Contacts, ProfileManager), this starts filling up my Apache's error log:
    [Mon Jan 12 01:41:17.979882 2015] [proxy_balancer:emerg] [pid 2949] (28)No space left on device: AH01179: balancer slotmem_create failed for p26d9e669--1011640492
    [Mon Jan 12 01:41:17.979894 2015] [:emerg] [pid 2949] AH00020: Configuration Failed, exiting
    [Mon Jan 12 01:41:28.297127 2015] [slotmem_shm:error] [pid 3026] (28)No space left on device: AH02611: create: apr_shm_create(/private/var/run/slotmem-shm-p26d9e669--1001322955.shm) failed
    [Mon Jan 12 01:41:28.297347 2015] [proxy_balancer:emerg] [pid 3026] (28)No space left on device: AH01179: balancer slotmem_create failed for p26d9e669--1001322955
    [Mon Jan 12 01:41:28.297355 2015] [:emerg] [pid 3026] AH00020: Configuration Failed, exiting
    When I increase the Apache LogLevel to trace1, I get this as well:
    [Mon Jan 12 02:11:43.190303 2015] [slotmem_shm:debug] [pid 5501] mod_slotmem_shm.c(367): AH02602: create didn't find /private/var/run/slotmem-shm-p26d9e669-813569972.shm in global list
    This causes the Apache to crash constantly, which is … unnerving. After googling around for a while, I tried the following steps:
    Stop Services that use the Apache (Web, *DAV, ProfileManager)
    sudo apachectl stop
    Remove all orphan cache/slot files (.shm, ssl-cache, proxy.*) from /private/var/run
    Reboot the server
    Start up the Services again
    Curiously enough, this worked for a while! But I was getting several log messages about dropped proxy connections, and sometimes the ProfileManager page would time out. Then, the issue started to reappear and does not seem to be fixed again with the steps above. I looked through the Apache config files and config plists for the Services in question, as well as the default config files. The only thing I have so far is that as long as there are no active proxy connections, the Apache runs smoothly – but all goes awry when slotmem files are created (a lot of them). Sometimes, I am able to turn on the Calendar service, but switching on Contacts produces the error … one time, I even got Calendar + Contacts running, and all went well until I enabled Profile Manager.
    I found several error reports with similar or identical errors from other Apache 2.4.x users, but most of those were developer talk on mailing lists, or suggested steps that did not work for me (or were inapplicable on the OS X Server Apache environment).
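    For what it's worth, the cleanup above as a shell sketch (the file patterns match what shows up in /private/var/run here; adjust them to whatever orphans you actually see, and stop the Web/*DAV/Profile Manager services first):
    sudo apachectl stop
    # remove orphaned slotmem / cache files left behind by mod_proxy_balancer
    sudo rm -f /private/var/run/slotmem-shm-*.shm /private/var/run/ssl-cache* /private/var/run/proxy.*
    # reboot, then re-enable the services one at a time
    sudo reboot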
    SASL errors using user alias for WebDAV-Digest authentication
    Short version: I am unable to authenticate through WebDAV-Digest with a user's alias (defined in Server Admin > Users > Context Menu > Advanced Options or WGM). Using the main short name of the same user works flawlessly. Password Server Error log just shows:
    AUTH2: {234023578237md5hash2384234, mainshortname} WEBDAV-DIGEST authentication failed, SASL error -13 (password incorrect).
    The password is 100% correct: When I set up a test CalDAV account and put in alias+PW, it did not work (OS X Dialog showed “could not be verified”, Server log as above). Leaving the password field filled and just switching the user to the main short name went through instantaneously, with the Server log showing
    AUTH2: {234023578237md5hash2384234, mainshortname} WEBDAV-DIGEST authentication succeeded.
    Notice the same MD5 hash and canonical short name, yet different results. I don't know if this is a new “feature”, a result of mail aliases being handled differently (at least I read that somewhere) …
    Additional Questions
    Should I configure the DNS for public use, instead of the Split-Brain configuration (local network gets local IP, outside traffic is directed by registrar NS)? I read several articles explaining that Split-Brain is common in large organizations, but might introduce weird networking issues. Entering the external IP as a Round Robin alternative for the internal does not seem sensible to me.
    I also have a question concerning LDAP log entries like this one below, but I'll put that in this already open discussion:
    => bdb_idl_delete_key: c_get failed: DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock (-30994)

    Today the same error occurred on a customer's server.
    We don't use Calendar or Contacts.
    We only have some websites configured and want to use NetInstall for deployments.
    As soon as I disable the Profilemanager the httpd starts over and other websites and NetInstall via HTTP are working fine.
    When I reenable Profilemanager the httpd processes are gone and I see the same stuff in Apache's error log...

  • Tuxedo 9.1 install errors with InvocationTargetException Java Error

    Hi Everyone,
    I have WebLogic 9.2, Tuxedo 9.1 and Tools 8.49 installed on my Vista machine with no problems. Now another guy at the office is trying to install, and when he tries to install Tuxedo it pretty much closes immediately with the InvocationTargetException Java error.
    We have tried comparing his machine to mine and so far everything pretty much looks the same. I have java 1.6.03 and he has 1.6.05 but I don't think that is the problem.
    Does anyone have any idea why he would be getting this java error this early in the install?
    Thanks

    For Vista Home:
    1. Check the compatibility on the exe file
    2. set the registry values
    3. Install the tuxedo91_32_win_2k3_x86.exe
    4. Success
    --- More info --------------
    Open the tuxedo91_32_win_2k3_x86 properties & check the compatibility mode. I changed it to Win Server 2003 mode and avoided the LAX error above.
    Try creating the registry entry
    HKEY_LOCAL_MACHINE\SOFTWARE\BEA Systems\TUXEDO\9.1\Environment
    TUXDIR REG_SZ <tuxedo home directory, e.g. c:\bea\Tuxedo9.1>
    NLSPATH REG_SZ <tuxedo home directory>\locale\C

  • Error with workbench: Internal Error (-2010)

    Hi guys,
    When I try to import these templates, I get an internal error.
    Please help.
    Thank you.

    Hi Adrian Lee....
    Make sure the file does not have any special characters.
    Please check the following SAP Notes; they may help you:
    [1087138|https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/sno/ui_entry/entry.htm?param=69765F6D6F64653D3030312669765F7361706E6F7465735F6E756D6265723D3030303130383731333826] & [1495934|https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/sno/ui_entry/entry.htm?param=69765F6D6F64653D3030312669765F7361706E6F7465735F6E756D6265723D3030303134393539333426]
    Hope this helps.
    Regards
    Kennedy

  • TIME_OUT error with /sapapo/rtsinput_cube

    We are in the process of cutting over to a new planning area ZPA2 with data from a backup InfoCube (ZIC21), and the /sapapo/rtsinput_cube program is failing with a TIME_OUT error.
    The overall job finishes with a success message, although data is only partially loaded into the planning area. When you look at the spool, the first steps are green, then the job times out for the subsequent steps.
    We first ran this with 3 key figures being loaded from the cube to the planning area for approx 1/5 of 464000 CVCs, for 5 years in monthly buckets only. This errored with a SAPSQL_INVALID_FIELDNAME message as well as the TIME_OUT error message, and data was loaded partially into the ZPA2 planning area (only 2 months).
    We then reduced the time horizon from 5 years of months to only 1 year in months. This errored with the TIME_OUT error message. Again, data was partially loaded into the planning area (about 7 of the 12 months were loaded).
    The prime issue is the constant time out. Has anyone any idea how to solve this problem?

    Not sure what version you are using, but if you are on the latest versions, it's a good idea to use parallel processing - you can find it on the additional settings tab.
    You can also set it to copy only the specified period (previously it copied the whole bucket in the planning area to cover the time period).
    If you are on versions that don't support this, then you can create parallel jobs that get triggered by an event, or use a process chain and run this in parallel. As long as you don't overlap the periods, it should be fine.
    You should also ask your Basis team to check the time-out set up for the application and get it extended (you can see this in rz11, I think, by choosing the correct parameter - I would leave it to the Basis folks to help with this). Alternately, check your processing power and see if you can get it increased.

  • Error in idoc with status 20 "Error triggering EDI Subsystem"

    Dear All,
    I have a query related to IDOC status 20.
    I am triggering the IDOC with a standard t-code from the IS-Retail system. I have created two ports: a) a tRFC port and b) a file port.
    The sending and receiving applications are on two different operating systems, i.e. XI on Windows and IS-Retail on AIX (Unix). I am not able to send the IDOC to the other system. I am getting these IDOC statuses in WE05 (status 01, 30, 03, 20).
    Keeping the above facts in mind, could you please tell me how to remove the error with status 20, "Error triggering EDI Subsystem"?

    Hi prabhat,
    You should check whether automatic triggering is possible at the file port level, and the partner profile setting should be 'Start subsystem'.
    Go to WE21 and do the access test for the file port.
    Once that is done and you find no issues, go to SM59 and test the connection for the RFC destination assigned to the ports. Possibly it has to do with the RFC destination.
    Another reason could be that yours is a test client that got refreshed recently, so the production client settings might be causing this error. Check with your Basis team to get it working.
    Another reason could be that the logical system assigned to the client still has the production client's name.
    Check these.
    I am sure you should be able to solve this issue.
    Thank you.
    regards,
    karun.M

  • IO error upload script/ 404 error

    Is there a reason why my upload script works perfectly on a PC, but not on a Mac? (I get an 'IO error' when uploading on a Mac.)
    Any ideas? Thanks, J.

    The script would probably help! I just thought that someone might have had a similar experience and knew what the problem might be...
    Anyway, here is the script. Thanks for taking a look.
    _global.logged = 'dave';
    // import the FileReference object
    import flash.net.FileReference;
    var file_fr:FileReference = new FileReference();
    // object that listens for FileReference events
    var list_obj:Object = new Object();
    list_obj.onSelect = function(){
        fileName.text = file_fr.name;
    };
    list_obj.onComplete = function(){
        fileName.text = "All Done";
        prog.rec_mc.clear();
    };
    list_obj.onProgress = function(bytesTotal, bytesLoaded){
        var percent = bytesLoaded/file_fr.size;
        drawRec(percent);
    };
    list_obj.onCancel = function(){
        fileName.text = "Cancel was selected";
    };
    list_obj.onIOError = function(fileRef){
        fileName.text = "IO error with " + fileRef.name;
    };
    list_obj.onSecurityError = function(fileRef, error){
        _root.removeLoading();
        fileName.text = "Security error with " + fileRef.name + ":" + error;
    };
    // httpError
    list_obj.onHTTPError = function(fileRef:FileReference, error:Number){
        _root.removeLoading();
        fileName.text += "HTTP error: with " + fileRef.name + ":error #" + error;
    };
    file_fr.addListener(list_obj);
    browseBtn.onRelease = function(){
        file_fr.browse([{description: "Image Files", extension: "*.jpg;"}]);
    };
    uploadBtn.onRelease = function(){
        randomNumber = Math.round(Math.random()*100000000);
        saveJpgAs = _global.logged + randomNumber + '.jpg';
        file_fr.upload("upload/uploadPic.php?username=" + _global.logged + "&saveJpgAs=" + saveJpgAs);
        prog.rec_mc.fillColor = Math.random()*0x1000000;
    };
    function drawRec(per){
        loader.progressBar._xscale = Math.round((per) * 100);
        loader.percent.text = Math.round((per) * 100) + "%";
    }
    Thanks J.

  • [SOLVED] errors with fdisk and cryptsetup; is my drive going bad?

    I'm having issues with re-formatting an external hard drive using dm-crypt. It was previously formatted with TrueCrypt/NTFS, which I used as a shared backup drive between Windows and Arch. At some point, it stopped being able to mount, which I attributed to allowing Windows to "fix" it after improper dismount (e.g. a hard kill).
    I decided to re-format with ext4 and only use it from Arch, but now I'm wondering if I may have a hardware issue with the drive. I've tried a lot more (like going through the full zero write after mounting the drive as a temporary dm-crypt device), but here's the condensed version to illustrate the problem.
    system info
    This is on a fresh boot. Just adding that as I've had issues with kernel modules after updating if a new kernel comes through. A fresh boot removes that potential issue.
    $ uname -a
    Linux arch_840 4.0.3-1-ARCH #1 SMP PREEMPT Wed May 13 15:38:47 CEST 2015 x86_64 GNU/Linux
    $ lsmod | grep dm_
    dm_crypt 28672 2
    dm_mod 98304 5 dm_crypt
    $ lsmod |grep xts
    xts 16384 2 serpent_sse2_x86_64,twofish_x86_64_3way
    gf128mul 16384 2 lrw,xts
    smartctl status
    Figured I should check the drive. There's a lot of old age and pre-fail warnings, but this post would seem to suggest I'm okay?
    # smartctl -A /dev/sdb
    smartctl 6.3 2014-07-26 r3976 [x86_64-linux-4.0.3-1-ARCH] (local build)
    Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF READ SMART DATA SECTION ===
    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
    1 Raw_Read_Error_Rate 0x002f 100 100 051 Pre-fail Always - 0
    2 Throughput_Performance 0x0026 252 252 000 Old_age Always - 0
    3 Spin_Up_Time 0x0023 090 089 025 Pre-fail Always - 3330
    4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 703
    5 Reallocated_Sector_Ct 0x0033 252 252 010 Pre-fail Always - 0
    7 Seek_Error_Rate 0x002e 252 252 051 Old_age Always - 0
    8 Seek_Time_Performance 0x0024 252 252 015 Old_age Offline - 0
    9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 3707
    10 Spin_Retry_Count 0x0032 252 252 051 Old_age Always - 0
    11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 104
    12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 734
    191 G-Sense_Error_Rate 0x0022 100 100 000 Old_age Always - 17
    192 Power-Off_Retract_Count 0x0022 252 252 000 Old_age Always - 0
    194 Temperature_Celsius 0x0002 064 053 000 Old_age Always - 24 (Min/Max 16/47)
    195 Hardware_ECC_Recovered 0x003a 100 100 000 Old_age Always - 0
    196 Reallocated_Event_Count 0x0032 252 252 000 Old_age Always - 0
    197 Current_Pending_Sector 0x0032 252 252 000 Old_age Always - 0
    198 Offline_Uncorrectable 0x0030 252 252 000 Old_age Offline - 0
    199 UDMA_CRC_Error_Count 0x0036 200 200 000 Old_age Always - 0
    200 Multi_Zone_Error_Rate 0x002a 100 100 000 Old_age Always - 3
    223 Load_Retry_Count 0x0032 100 100 000 Old_age Always - 104
    225 Load_Cycle_Count 0x0032 079 079 000 Old_age Always - 214068
    Disk info, delete existing partition, new MBR, create new partition
    # fdisk /dev/sdb
    Welcome to fdisk (util-linux 2.26.1).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.
    Command (m for help): p
    Disk /dev/sdb: 465.8 GiB, 500107862016 bytes, 976773168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x76d37b6d
    Device Boot Start End Sectors Size Id Type
    /dev/sdb1 63 976768064 976768002 465.8G 83 Linux
    Command (m for help): d
    Selected partition 1
    Partition 1 has been deleted.
    Command (m for help): o
    Created a new DOS disklabel with disk identifier 0x2cd60f13.
    Command (m for help): n
    Partition type
    p primary (0 primary, 0 extended, 4 free)
    e extended (container for logical partitions)
    Select (default p): p
    Partition number (1-4, default 1):
    First sector (2048-976773167, default 2048):
    Last sector, +sectors or +size{K,M,G,T,P} (2048-976773167, default 976773167):
    Created a new partition 1 of type 'Linux' and of size 465.8 GiB.
    Command (m for help): w
    The partition table has been altered.
    Calling ioctl() to re-read partition table.
    Syncing disks.
    trying to format with cryptsetup
    I had a bunch of custom options, but other Arch posts suggested just trying the default, which is what I've done here. It fails with the same error as when I try to pass a cipher, key size, etc. ("Command failed with code 5: IO error while encrypting keyslot.").
    # truecrypt -v --debug luksFormat /dev/sdb1
    bash: truecrypt: command not found
    [root@arch_840 jwhendy]# cryptsetup -v --debug luksFormat /dev/sdb1
    # cryptsetup 1.6.6 processing "cryptsetup -v --debug luksFormat /dev/sdb1"
    # Running command luksFormat.
    # Locking memory.
    # Installing SIGINT/SIGTERM handler.
    # Unblocking interruption on signal.
    WARNING!
    ========
    This will overwrite data on /dev/sdb1 irrevocably.
    Are you sure? (Type uppercase yes): YES
    # Allocating crypt device /dev/sdb1 context.
    # Trying to open and read device /dev/sdb1.
    # Initialising device-mapper backend library.
    # Timeout set to 0 miliseconds.
    # Iteration time set to 1000 miliseconds.
    # Interactive passphrase entry requested.
    Enter passphrase:
    Verify passphrase:
    # Formatting device /dev/sdb1 as type LUKS1.
    # Crypto backend (gcrypt 1.6.3) initialized.
    # Detected kernel Linux 4.0.3-1-ARCH x86_64.
    # Topology: IO (512/0), offset = 0; Required alignment is 1048576 bytes.
    # Checking if cipher aes-xts-plain64 is usable.
    # Using userspace crypto wrapper to access keyslot area.
    # Generating LUKS header version 1 using hash sha1, aes, xts-plain64, MK 32 bytes
    # KDF pbkdf2, hash sha1: 996745 iterations per second.
    # Data offset 4096, UUID 181fed4d-42f2-4f0f-8b70-cb7ba459e25f, digest iterations 121625
    # Updating LUKS header of size 1024 on device /dev/sdb1
    # Key length 32, device size 976771120 sectors, header size 2050 sectors.
    # Reading LUKS header of size 1024 from device /dev/sdb1
    # Key length 32, device size 976771120 sectors, header size 2050 sectors.
    # Adding new keyslot -1 using volume key.
    # Calculating data for key slot 0
    # KDF pbkdf2, hash sha1: 1008246 iterations per second.
    # Key slot 0 use 492307 password iterations.
    # Using hash sha1 for AF in key slot 0, 4000 stripes
    # Updating key slot 0 [0x1000] area.
    # Using userspace crypto wrapper to access keyslot area.
    IO error while encrypting keyslot.
    # Releasing crypt device /dev/sdb1 context.
    # Releasing device-mapper backend.
    # Unlocking memory.
    Command failed with code 5: IO error while encrypting keyslot.
    Things also tend to hang with respect to the drive at this point. For example, fdisk -l spits out /dev/sda partitions immediately and then just hangs instead of printing out /dev/sdb info, then eventually quits (without ever writing it).
    Any suggestions on where to look/how to troubleshoot? I found some possibly related posts, but nothing that looks promising:
    - Impossible to crypt the drive using cryptsetup (fixed by rebooting)
    - cryptsetup fails to open Udev cookie 0xd4d94f5 (semid 0) waiting for z (no responses; the hang after seems similar)
    There's a couple odds and ends references to cryptsetup 1.6.6 having issues. I downloaded 1.6.4-1 and 1.6.5-1 and -2 from ARM to try, but wanted to post this in the meantime in case something stuck out.
    Last edited by jwhendy (2015-05-29 16:01:40)

    @qinohe I thought of that and the other day started formatting with mkfs.ext4; unfortunately, it was at work and I had to leave before I could let it finish. It had been chugging along a good few hours, and I was surprised it would take that long. I was able to format it with ext4 using Windows 7 (I dual boot) with the MiniTool Partition Wizard but I didn't use it like that before trying to solve the cryptsetup issue again.
    This last time around, I was getting unresponsive behavior. I think I need to reboot each time I try something with cryptsetup, as any commands related to that drive seem to hang afterwards (fdisk, umount, eject, mkfs, or trying crypsetup again). Perhaps I'll just let it cook overnight with mkfs and see if I can at least have an unencrypted, but functional drive.
    One interesting tidbit is that even though cryptsetup fails, when I've tried to issue mkfs afterward, it asks me to confirm that I want to format the disk since it has a LUKS header... so something appears to have been written. Is it possible the header is causing some issues? I don't know much about the structure of a disk (like what range the MBR resides in, what constitutes a header, etc.) but have been wondering if there's some way to start really, really clean with the disk. Like I'd just bought it -- something appears to be lingering around from previous efforts?
    @frostschutz I'll check tomorrow. That's a good question. Just checked journalctl and here are some of the errors that appear; unfortunately, I wasn't watching so I can't tell you what matches up with what command:
    May 23 09:32:22 arch_840 systemd-udevd[7784]: inotify_add_watch(7, /dev/sdb1, 10) failed: No such file or directory
    May 23 09:32:22 arch_840 kernel: usb 3-4: stat urb: status -108
    ### there's lots like this; like 10 in a row with various sector values listed
    May 23 09:32:19 arch_840 kernel: Buffer I/O error on dev sdb1, logical block 61341696, lost async page write
    May 23 09:32:19 arch_840 kernel: blk_update_request: I/O error, dev sdb, sector 490735616
    ### there's also a bunch like this, from tag #0 -> #29 (not colored red, so not sure they're errors?)
    May 23 09:32:19 arch_840 kernel: sd 2:0:0:0: [sdb] tag#0 CDB: opcode=0x2a 2a 00 1d 07 bc 10 00 04 00 00
    May 23 09:32:18 arch_840 kernel: sd 2:0:0:0: [sdb] tag#0 uas_eh_abort_handler 0 uas-tag 1 inflight: CMD OUT
    I paged down quite a ways and those seem like the unique messages when I search the journal for "sdb". Anything stand out? I will say that the same sector numbers appeared in multiple blocks of the third error type listed, so that makes me wonder if something is genuinely wrong with the disk. I'll post the output of the full smartctl scan when I hopefully run it tomorrow.
    Thanks for chiming in!
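    Regarding the question above about getting the disk really clean: a minimal sketch, assuming /dev/sdb really is the external drive and everything on it is expendable, would be to wipe the old signatures and run a long SMART self-test before trying cryptsetup again (all as root):
    wipefs -a /dev/sdb                           # remove leftover LUKS/filesystem/MBR signatures
    dd if=/dev/zero of=/dev/sdb bs=1M count=16   # belt and braces: zero the first few MiB
    smartctl -t long /dev/sdb                    # start the long self-test
    smartctl -l selftest -l error /dev/sdb       # check the results once it has finished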
