File Adaptor CC Error

Hello All,
I am getting the following error in an SAP PI File Adapter communication channel:
File processing failed with Error when getting an FTP connection from connection pool: com.sap.aii.af.lib.util.concurrent.ResourcePoolException: Unable to create new pooled resource: FTPEx: Not logged in.
Is this a user authentication error, or an internal SAP PI File Adapter error?
Thanks
Rajeev

Hi Rajeev,
  "Error when getting an FTP connection from connection pool" means the XI server is not able to communicate with the FTP server. Ask your FTP server network team to open the firewall settings of the FTP server for the XI server.
Check my reply on this thread...
FTP Server Connection issue
Regds,
Pinangshuk.
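For what it's worth, "FTPEx: Not logged in." wraps the FTP server's own 530 reply, which means the TCP connection succeeded and the server rejected the login; the channel's user/password are the first thing to check, since a blocked firewall would normally surface as a connect timeout rather than a login reply. A small Python sketch of that distinction (the helper name is made up, nothing SAP-specific):

```python
from ftplib import error_perm  # raised for 5xx replies such as "530 Not logged in."

def classify_ftp_failure(exc):
    """Rough triage: did we reach the server at all, or did it refuse us?"""
    if isinstance(exc, error_perm) or "not logged in" in str(exc).lower():
        # The server answered, so the network path is fine; credentials are not.
        return "authentication"
    if isinstance(exc, (ConnectionRefusedError, TimeoutError, OSError)):
        # Never got a usable connection: firewall, wrong host, or wrong port.
        return "connectivity"
    return "other"

print(classify_ftp_failure(error_perm("530 Not logged in.")))   # authentication
print(classify_ftp_failure(ConnectionRefusedError()))           # connectivity
```

ftplib raises error_perm for all 5xx replies, so a real probe would simply attempt FTP(host).login(user, password) and classify whatever it raises.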

Similar Messages

  • File Adaptor-picking after error

    Hi,
    I have a scenario where the file adapter has to pick up files whose name is dynamic but whose extension is constant, e.g. <dynamic name>.file
    I have configured the file adapter to pick up the files as *.file, but the problem comes when there is some error in a file and the adapter (file content conversion) fails to pick it up.
    From then on, whenever the file adapter polls the directory it tries to read the same file again and fails, irrespective of the other correct files (*.file).
    Is there any way to remove/archive this faulty file so that the file adapter polls the next correct files?
    Regards,
    Satish

    Hi Satish,
    Are you talking about additional files? If so, you can use NameA.optional=yes and proceed.
    Regards,
    Divya

  • Issue in File Adaptor

    Hi,
    I am using a file adapter to poll a file from a location, process the records, and call a Java callout in the same proxy, which accepts the file name (with file path) as an argument. But this Java callout always returns a FileNotFound error. I also tried giving the stage directory/archive directory value along with the file name, but I get the same error. Kindly help me find the location of the polled file (while the proxy is still processing).
    Thanks
    Jayanthi

    "poll file from a location ... proxy which will accept filename with file path as argument"
    The Java callout will run on the host the OSB server is running on, so the file must be on that same host. If it's not, you will have to write the file before passing it to the Java callout. But passing the content as a byte array to the Java callout would probably be a better option.
    Cheers,
    Vlad

  • IDOC to file...Error for Access is denied

    Hello Friends,
    I am triggering IDOC WBBDLD from SAP ECC. It is successfully sending IDOC to XI
    In XI, in SXMB_MONI, message is successful.
    But when I check message monitoring with the message ID at the adapter level, I am facing the error given below.
    2010-06-15 18:38:30 Error Attempt to process file failed with com.sap.aii.adapter.file.ftp.FTPEx: 550 XREFTXN.asc: Access is denied.
    2010-06-15 18:38:31 Error Exception caught by adapter framework: XREFTXN.asc: Access is denied.
    2010-06-15 18:38:31 Error Delivery of the message to the application using connection File_http://sap.com/xi/XI/System failed, due to: com.sap.aii.af.ra.ms.api.RecoverableException: XREFTXN.asc: Access is denied. : com.sap.aii.adapter.file.ftp.FTPEx: 550 XREFTXN.asc: Access is denied..
    On the receiver side, I used the file adapter.
    When I check the receiver-side communication channel in component monitoring, I receive the error below:
         An error occurred while connecting to the FTP server '10.1.45.36:21'. The FTP server returned the following error message: 'com.sap.aii.adapter.file.ftp.FTPEx: 550 XREFTXN.asc: Access is denied. '. For details, contact your FTP server vendor.
    Kindly suggest me about the issue and how can I resolve it.
    Regards,
    Narendra
    Edited by: Narendra GSTIT on Jun 16, 2010 7:00 AM

    Dear Narendra,
    An error occurred while connecting to the FTP server '10.1.45.36:21'. The FTP server returned the following error message: 'com.sap.aii.adapter.file.ftp.FTPEx: 550 XREFTXN.asc: Access is denied. '. For details, contact your FTP server vendor.
    The above error is clearly telling you to contact the FTP server vendor. So call your FTP server administrator and explain that the user configured in the receiver file communication channel does not have authorization to create files on the FTP server, so that he can grant the FTP user full authorization to create files in the FTP directory.
    thanks,
    madhu

  • File adaptor Debugging?

    Hi,
    I am using a file adapter which reads a file and then archives it after successful reading. In my scenario the file got picked up by the adapter and archived, but there is no trace available in SXMB_MONI.
    I need help debugging this scenario: how do I debug it, and why did this happen?
    Regards

    Amit,
    I would recommend looking in the Message Display Tool (http://<xiserver>:<xiport>/mdt) for errors.
    Kind regards,
    Koen

  • XML File to Flat File scenario: Receiver file adaptor content conversion

    Hello Friends,
    Currently I am working on XML File to Flat File Scenario.
    I used receiver side File adaptor with content conversion.
    My receiver-side adapter is giving an error, shown below.
    Conversion initialization failed: java.lang.Exception: java.lang.Exception: Error(s) in XML conversion parameters found: Parameter '.fieldFixedLengths' or '.fieldSeparator' is missing
    My content conversion parameters are as:
    row.fieldNames                           Customer_ID,Name,Address,Phone
    row.fieldSeparator                      ,
    row.processConfiguration          FromConfiguration
    row.endSeparator                       'nl'
    Can you please suggest what kind of error this is?
    Regards,
    Narendra

    I am still facing the issue given below.
    2010-06-08 15:34:25     Error     File adapter receiver channel CC_Customer_FlatFile_Distination is not initialized. Unable to proceed: null
    2010-06-08 15:34:25     Error     Exception caught by adapter framework: Channel has not been correctly initialized and cannot process messages

  • Inbound File Adaptor

    Hi Guys,
    I have a situation where I would like to receive an XML file into XI and map it to an IDOC and post a transaction in R/3.
    I have been able to pick up the XML using the inbound file adaptor.  The problem that I am experiencing is that the XML file is encoded using UTF-16.  The XML file also has BOMs.  I have tried creating a new data type from the XSD but get a parsing error. So I created a new data type manually using the values that are given to me in the XSD file.  When I try and test the mapping I get an error.  I keep getting a parsing error that indicates to me that it cannot map the XML file that it receives to the data type that I have created.  It appears that XI is not able to read this UTF-16 file.  Has anyone had this situation before? Can XI handle UTF-16 and BOM?
    When creating the data type, do I have to include the BOM and BO elements?
    Any help would be appreciated!
    Thanks very much,
    Miguel

    Hi Rajan,
    Yip, I have tried this already and it works fine.  I have also tried uploading an IDoc and that too works fine.  This leads me to think that my problem actually lies in the way that my data type is defined and not the actual file adaptor. I think that by finding out whether I should include the BOM and BO elements in my data type would probably help towards solving this problem, because I think that if XI is able to read UTF-16 fine, then it would automatically know that a BOM and BO element is going to be there and would ignore these elements in the XML file.
    Here is the test XML file that I am trying to read into XI:-
    <?xml version="1.0" encoding="UTF-16"?><BOM><BO><AdmInfo><Object>13</Object><Version>2</Version></AdmInfo><Documents><row><DocNum>127</DocNum><DocType>dDocument_Items</DocType><HandWritten>tNO</HandWritten><DocDate>20050908</DocDate><DocDueDate>20050908</DocDueDate><CardCode>0000000006</CardCode><CardName>Test 2</CardName><Address>3
    1690 MBABANE
    SWAZILAND</Address><DiscountPercent>0.000000</DiscountPercent><DocCurrency>GBP</DocCurrency><DocRate>0.000000</DocRate><DocTotal>58.750000</DocTotal><Reference1>127</Reference1><Comments/><JournalMemo>AR Invoice - 0000000006</JournalMemo><PaymentGroupCode>-1</PaymentGroupCode><DocTime>1410</DocTime><SalesPersonCode>-1</SalesPersonCode><TransportationCode>-1</TransportationCode><PartialSupply>tYES</PartialSupply><Confirmed>tYES</Confirmed><SummeryType>dNoSummary</SummeryType><WareHouseUpdateType>dwh_Stock</WareHouseUpdateType><ContactPersonCode>67</ContactPersonCode><ShowSCN>tNO</ShowSCN><Series>1000</Series><TaxDate>20050908</TaxDate><Address2>Street 1
    Garankuwa
    1560 SOUTH AFRICA</Address2><ShipToCode>Ship to</ShipToCode><Rounding>tNO</Rounding><RevisionPo>tNO</RevisionPo><BlockDunning>tNO</BlockDunning><PaymentBlock>tNO</PaymentBlock><MaximumCashDiscount>tNO</MaximumCashDiscount><DeferredTax>tNO</DeferredTax><NumberOfInstallments>1</NumberOfInstallments><ApplyTaxOnFirstInstallment>tYES</ApplyTaxOnFirstInstallment><UseShpdGoodsAct>tNO</UseShpdGoodsAct><DocumentSubType>bod_None</DocumentSubType></row></Documents><Document_Lines><row><BaseType>-1</BaseType><ItemCode>A1000</ItemCode><ItemDescription>Boxing Gloves</ItemDescription><Quantity>1.000000</Quantity><Price>50.000000</Price><Currency>GBP</Currency><DiscountPercent>0.000000</DiscountPercent><LineTotal>50.000000</LineTotal><VendorNum>Box 12321</VendorNum><WarehouseCode>01</WarehouseCode><SalesPersonCode>-1</SalesPersonCode><CommisionPercent>0.000000</CommisionPercent><AccountCode>410000</AccountCode><TaxLiable>tYES</TaxLiable><UseBaseUnits>tNO</UseBaseUnits><SupplierCatNum/><CostingCode/><ProjectCode/><BarCode/><TaxPercentagePerRow>17.500000</TaxPercentagePerRow><VatGroup>O1</VatGroup><PriceAfterVAT>58.750000</PriceAfterVAT><Height1>0.000000</Height1><Height2>0.000000</Height2><Width1>0.000000</Width1><Width2>0.000000</Width2><Lengh1>0.000000</Lengh1><Lengh2>0.000000</Lengh2><Volume>0.000000</Volume><VolumeUnit>4</VolumeUnit><Weight1>0.000000</Weight1><Weight2>0.000000</Weight2><Factor1>1.000000</Factor1><Factor2>1.000000</Factor2><Factor3>1.000000</Factor3><Factor4>1.000000</Factor4><SWW/><Address>Clockhouse Place
    London
    Middlesex TW14 8HD
    UNITED KINGDOM</Address><ShippingMethod>2</ShippingMethod><DeferredTax>tNO</DeferredTax><CorrectionInvoiceItem>ciis_ShouldBe</CorrectionInvoiceItem><CorrInvAmountToStock>0.000000</CorrInvAmountToStock><CorrInvAmountToDiffAcct>0.000000</CorrInvAmountToDiffAcct><ExciseAmount>0.000000</ExciseAmount><ConsumerSalesForecast>tNO</ConsumerSalesForecast><U_Franch>HK</U_Franch></row></Document_Lines><SerialNumbers/><BatchNumbers/><Document_LinesAdditionalExpenses/><DocumentsAdditionalExpenses/><WithholdingTaxData/></BO></BOM>
    Thanks for the help,
    Miguel
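    On the UTF-16 question: XML parsers generally handle UTF-16 input when the byte-order mark and the encoding declaration agree, and note that in this payload <BOM> and <BO> are simply element names, so they do belong in the data type. A small Python sketch (standard library only, temporary file name made up) showing that a BOM-prefixed UTF-16 file parses cleanly:

```python
import os
import tempfile
import xml.etree.ElementTree as ET

# A stripped-down version of the payload from the post.
xml_text = ('<?xml version="1.0" encoding="UTF-16"?>'
            '<BOM><BO><AdmInfo><Object>13</Object></AdmInfo></BO></BOM>')

# Write it the way the sending system would: UTF-16 with a byte-order mark.
path = os.path.join(tempfile.mkdtemp(), "test.xml")
with open(path, "w", encoding="utf-16") as f:  # the utf-16 codec emits a BOM
    f.write(xml_text)

with open(path, "rb") as f:
    assert f.read(2) in (b"\xff\xfe", b"\xfe\xff")  # the BOM is really there

# The parser consumes the byte-order mark itself; no special handling needed.
root = ET.parse(path).getroot()
print(root.tag)  # the document element happens to be named BOM
```

    If parsing still fails in XI, the usual suspects are a byte-order mark that disagrees with the declared encoding, or a data type whose root element does not match the document element (here, BOM).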

  • ORA-12560: TNS protocol adapter error

    Hi friends,
    I am new to Oracle 10g.
    When I installed 10g I created a database from DBCA, and it worked fine.
    But now when I create a database from DBCA it gives me ORA-12560: TNS protocol adapter error, even though the listener on the machine is up and working fine. In fact, when I create a database manually, it works and the database is created successfully.
    I want to resolve this problem urgently.
    Waiting for responses from the experts.
    Thanks
    Varun

    I assumed that the installer would create ORACLE_HOME and set the PATH correctly.
    I installed using the Basic selection. The install was marked successful. During the configuration, I got an exception, System.Security.Policy.PolicyException, in OraProvCfg.exe, which is not found on the net.
    The Oracle Net Configuration Assistant succeeded.
    listener.ora is:
    # listener.ora Network Configuration File: U:\app\ron\product\11.1.0\db_1\network\admin\listener.ora
    # Generated by Oracle configuration tools.
    LISTENER =
    (DESCRIPTION_LIST =
    (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = MyServer)(PORT = 1521))
    sqlnet.ora is:
    SQLNET.AUTHENTICATION_SERVICES= (NTS)
    NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
    I'm going to manually delete the installation and try the advanced version.
    Edited by: user8892579 on Dec 15, 2009 11:51 AM

  • Error #2038: File I/O error in muse

    I am trying to open a preview of my website in Muse and it won't let me; it gives me the file I/O error. Help?

    Please post in the Muse forum here http://forums.adobe.com/community/muse. They will be able to help you.

  • File in use error - copying from a faulty external hard drive

    Hi,
    I have a 750GB Western Digital external hard drive, which contains two partitions - (1) HDBACKUP - a Time Machine backup of my laptop drive, and (2) MEDIA - pictures including an Aperture library, and videos.
    When I plug the drive in, only the MEDIA partition mounts, and I get an error saying Disk Utility cannot repair the drive, and to back up the files as soon as I can and reformat the drive.
    When I try to copy the files onto another drive using Finder, it copies part of a file and then gives me a "File in use" error and stops. I tried copying via Terminal and it's the same; this time I get a "Resource busy" error.
    There are no other apps accessing the files. And this happens for every file on the drive.
    So how do I go about copying the files off the drive?
    I tried using DiskWarrior but it was stuck on Step 1 - Searching for Volume Information, after 3 hours.
    I really need to get the Aperture Library off the drive. Any ideas on how I can copy it across?
    Thanks

    This did not work. I still get an error trying to repair with Disk Utility.
    It gets to
    Starting repair tool:
    Checking file system
    Checking Journaled HFS Plus volume.
    Checking extents overflow file.
    Checking catalog file.
    Checking multi-linked files.
    Checking catalog hierarchy.
    Checking extended attributes file.
    The volume MEDIA could not be verified completely.
    Volume repair complete.
    Updating boot support partitions for the volume as required.
    Error: Disk Utility can’t repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files.
    So DiskUtility tells me to backup as many files as possible. However, when I try copying these files, I get a "File in use" error.

  • Apex Listener 2.0.3.221.10.13 File path syntax error: ILLEGAL_CHARACTER

    Hi all,
    I tried to install Apex Listener 2.0.3.221.10.13 into Tomcat 7.0.42 and got this error:
    INFO: Deploying web application archive C:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\apex.war
    ago 16, 2013 9:15:19 AM oracle.dbtools.common.util.Files checkLegal
    WARNING: File path syntax error: ILLEGAL_CHARACTER, path: C:/
    ago 16, 2013 9:15:19 AM org.apache.catalina.core.StandardContext startInternal
    SEVERE: Error listenerStart
    ago 16, 2013 9:15:19 AM org.apache.catalina.core.StandardContext startInternal
    SEVERE: Context [/apex] startup failed due to previous errors
    and also
    ago 16, 2013 9:15:19 AM org.apache.catalina.core.StandardContext listenerStart
    SEVERE: Exception sending context initialized event to listener instance of class oracle.dbtools.rt.web.SCListener
    java.lang.IllegalArgumentException: oracle.dbtools.common.util.FilePathSyntaxException: ILLEGAL_CHARACTER
      at oracle.dbtools.common.util.Files.checkLegal(Files.java:262)
      at oracle.dbtools.common.util.Files.file(Files.java:127)
      at oracle.dbtools.common.config.file.ConfigurationFolder.chooseExistingFile(ConfigurationFolder.java:172)
      at oracle.dbtools.common.config.file.ConfigurationFolder.choose(ConfigurationFolder.java:151)
      at oracle.dbtools.common.config.file.ConfigurationFolder.setup(ConfigurationFolder.java:64)
      at oracle.dbtools.rt.web.SCListener.contextInitialized(SCListener.java:72)
      at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4939)
      at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5434)
      at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
      at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
      at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
      at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633)
      at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:976)
      at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1653)
      at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
      at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
      at java.util.concurrent.FutureTask.run(Unknown Source)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
      at java.lang.Thread.run(Unknown Source)
    Caused by: oracle.dbtools.common.util.FilePathSyntaxException: ILLEGAL_CHARACTER
      at oracle.dbtools.common.util.FilePathSyntax.check(FilePathSyntax.java:45)
      at oracle.dbtools.common.util.Files.checkLegal(Files.java:255)
      ... 19 more
    My platform is :
    OS : Windows 7 64 bit
    JRE : 1.7.0_25-b17 -
    %USERPROFILE% : C:\Users\SERGIO
    %TEMP% : C:\Users\SERGIO\AppData\Local\Temp
    Apex Listener Configuration Folder : C:\Users\SERGIO\AppData\Local\Temp\apex
    Can anyone help me, please?
    Sergio

    You need to configure your config directory before you deploy. If you have never configured the v2 listener before, create a new folder such as "c:\apex\config". Then use
    java -jar apex.war configdir c:\apex\config
    Then do your configuration as required and deploy again.

  • Getting `No such file or directory` error while trying to open bdb database

    I have four multi-threaded processes (2 writer and 2 reader processes) which make use of the Berkeley DB transactional data store. I have multiple environments, and the associated database files and log files are located in separate directories (please refer to the DB_CONFIG below). When all four processes start to open and close databases in the environments very quickly, one of the reader processes throws a `No such file or directory` error even though the file actually exists.
    I am making use of Berkeley DB 4.7.25 for testing out these applications.
    The four application names are as follows:
    Writer 1
    Writer 2
    Reader 1
    Reader 2
    The application description is as follows:
    'Writer 1' owns 8 environments, each environment having 123 Berkeley databases created using the HASH access method. At any point in time, 'Writer 1' will be acting on 24 database files across 8 environments (3 database files per environment) for write operations, whereas the reader processes will be accessing all 123 database files per environment (123 * 8 environments = 984 database files) for read activity. Writer 2 has a similar configuration: 8 separate environments, and so on.
    Writer 1, Reader 1 and Reader 2 processes share the environments created by Writer 1
    Writer 2 and Reader 2 processes share the environments created by Writer 2
    My DB_CONFIG file is configured as follows
    set_cachesize 0 104857600 1   # 100 MB
    set_lg_bsize 2097152                # 2 MB
    set_data_dir ../../vol1/data/
    set_lg_dir ../../vol31/logs/SUBID/
    set_lk_max_locks 1500
    set_lk_max_lockers 1500
    set_lk_max_objects 1500
    set_flags db_auto_commit
    set_tx_max 200
    mutex_set_increment 7500
    Has anyone come across this problem before or is it something to do with the configuration?

    Hi Michael,
    I should have included how we make use of the DB_TRUNCATE flag in my previous reply. Sorry for that.
    From the writers, DB truncation happens periodically. During truncation the DB handle is not associated with any environment handle (i.e. a stand-alone DB), and the following parameters are passed to the DB->open call:
    DB->open(db,
             NULL,          /* DB_TXN *txnid */
             file,          /* absolute DB file path */
             NULL,          /* const char *database */
             DB_HASH,       /* DBTYPE type */
             DB_READ_UNCOMMITTED | DB_TRUNCATE | DB_CREATE | DB_THREAD,
             0);            /* int mode */
    Also, DB_DUP flag is set.
    As you have rightly pointed out, the `No such file or directory` error occurs during truncation.
    While a database is being truncated it will not be found by others trying to open it. We simulated this by stopping the writer process (responsible for truncation) and opening and closing the databases repeatedly via the readers; the reader process did not crash. When readers and writers were run simultaneously, we got the `No such file or directory` error. Is there any solution to this problem? (In our case writers and readers run independently, so the readers won't learn about truncation from the writers.)
    Also, we are facing one more issue related to DB_TRUNCATE. Consider the following scenario:
    - The reader process holds a reference to the handle of database X in environment 'Y' at time t1
    - The writer process opens database X in DB_TRUNCATE mode at time t2 (where t2 > t1)
    - After truncation, the writer process closes the database and joins environment 'Y'
    - After this, any writes to database X are not visible to the reader process
    - Once the reader process closes database X and re-joins the environment, all the records inserted by the writer process are visible
    Is it the right behavior? If yes, how to make any writes visible to a reader process without closing and re-opening the database in the above-mentioned scenario?
    Also, when db_set_errfile (http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/db_set_errfile.html) was set, we did not get any additional information in the error file.
    Thanks,
    Magesh

  • "No such file or directory" errors on Time Machine backup volume

    I remotely mounted the Time Machine backup volume onto another Mac and was looking around it in a Terminal window and discovered what appeared to be a funny problem. If I "cd" into some folders (but not all) and do a "ls -la" command, I get a lot of "No such file or directory" errors for all the subfolders, but all the files look fine. Yet if I go log onto the Mac that has the backup volume mounted as a local volume, these errors never appear for the exact same location. Even more weird is that if I do "ls -a" everything appears normal on both systems (no error messages anyway).
    It appears to be the case that the folders that have the problem are folders that Time Machine has created as "hard links" to other folders which is something that Time Machine does and is only available as of Mac OS X 10.5 and is the secret behind how it avoids using up disk space for files that are the same in the different backups.
    I moved the Time Machine disk to the second Mac and mounted the volume locally onto it (the second Mac that was showing the problems), so that now the volume is local to it instead of a remote mounted volume via AFP and the problem goes away, and if I do a remote mount on the first Mac of the Time Machine volume the problem appears on the first Mac system that was OK - so basically by switching the volume the problem switches also on who shows the "no such file or directory" errors.
    Because of the way the problem occurs, ie only when the volume is remote mounted, it would appear to be an issue with AFP mounted volumes that contain these "hard links" to folders.
    There is utility program written by Amit Singh, the fella who wrote the "Mac OS X Internals" book, called hfsdebug (you can get it from his website if you want - just do a web search for "Mac OS X Internals hfsdebug" if you want to find it ). If you use it to get a little bit more info on what's going on, it shows a lot of details about one of the problematic folders. Here is what things look like on the first Mac (mac1) with the Time Machine locally mounted:
    mac1:xxx me$ pwd
    /Volumes/MyBackups/yyy/xxx
    mac1:xxx me$ ls -a
    . .DS_Store D2
    .. Documents D3
    mac1:xxx me$ ls -lai
    total 48
    280678 drwxr-xr-x 5 me staff 204 Jan 20 01:23 .
    282780 drwxr-xr-x 12 me staff 442 Jan 17 14:03 ..
    286678 -rw-r--r--@ 1 me staff 21508 Jan 19 10:43 .DS_Store
    135 drwxrwxrwx 91 me staff 3944 Jan 7 02:53 Documents
    729750 drwx------ 104 me staff 7378 Jan 15 14:17 D2
    728506 drwx------ 19 me staff 850 Jan 14 09:19 D3
    mac1:xxx me$ hfsdebug Documents/ | head
    <Catalog B-Tree node = 12589 (sector 0x18837)>
    path = MyBackups:/yyy/xxx/Documents
    # Catalog File Record
    type = file (alias, directory hard link)
    indirect folder = MyBackups:/.HFS+ Private Directory Data%000d/dir_135
    file ID = 728505
    flags = 0000000000100010
    . File has a thread record in the catalog.
    . File has hardlink chain.
    reserved1 = 0 (first link ID)
    mac1:xxx me$ cd Documents
    mac1:xxx me$ ls -a | head
    .DS_Store
    .localized
    .parallels-vm-directory
    .promptCache
    ACPI
    ActivityMonitor2010-12-1710p32.txt
    ActivityMonitor2010-12-179pxx.txt
    mac1:Documents me$ ls -lai | head
    total 17720
    135 drwxrwxrwx 91 me staff 3944 Jan 7 02:53 .
    280678 drwxr-xr-x 5 me staff 204 Jan 20 01:23 ..
    144 -rw-------@ 1 me staff 39940 Jan 15 14:27 .DS_Store
    145 -rw-r--r-- 1 me staff 0 Oct 20 2008 .localized
    146 drwxr-xr-x 2 me staff 68 Feb 17 2009 .parallels-vm-directory
    147 -rwxr-xr-x 1 me staff 8 Mar 20 2010 .promptCache
    148 drwxr-xr-x 2 me staff 136 Aug 28 2009 ACPI
    151 -rw-r--r-- 1 me staff 6893 Dec 17 10:36 A.txt
    152 -rw-r--r--@ 1 me staff 7717 Dec 17 10:54 A9.txt
    So you can see from the first few lines of the "ls -a" command, it shows some file/folders but you can't tell which yet. The next "ls -la" command shows which names are files and folders - that there are some folders (like ACPI) and some files (like A.txt and A9.txt) and all looks normal. And the "hfsdebug" info shows some details of what is really happening in the "Documents" folder, but more about that in a bit.
    And here are what a "ls -a" and "ls -al" look like for the same locations on the second Mac (mac2) where the Time Machine volume is remote mounted:
    mac2:xxx me$ pwd
    /Volumes/MyBackups/yyy/xxx
    mac2:xxx me$ ls -a
    . .DS_Store D2
    .. Documents D3
    mac2:xxx me$ ls -lai
    total 56
    280678 drwxr-xr-x 6 me staff 264 Jan 20 01:23 .
    282780 drwxr-xr-x 13 me staff 398 Jan 17 14:03 ..
    286678 -rw-r--r--@ 1 me staff 21508 Jan 19 10:43 .DS_Store
    728505 drwxrwxrwx 116 me staff 3900 Jan 7 02:53 Documents
    729750 drwx------ 217 me staff 7334 Jan 15 14:17 D2
    728506 drwx------ 25 me staff 806 Jan 14 09:19 D3
    mac2:xxx me$ cd Documents
    mac2:Documents me$ ls -a | head
    .DS_Store
    .localized
    .parallels-vm-directory
    .promptCache
    ACPI
    ActivityMonitor2010-12-1710p32.txt
    ActivityMonitor2010-12-179pxx.txt
    mac2:Documents me$ ls -lai | head
    ls: .parallels-vm-directory: No such file or directory
    ls: ACPI: No such file or directory
    ... many more "ls: ddd: No such file or directory" error messages appear - there is a one-to-one
    correspondence between the "ddd" folders and the "no such file or directory" error messages
    total 17912
    728505 drwxrwxrwx 116 me staff 3900 Jan 7 02:53 .
    280678 drwxr-xr-x 6 me staff 264 Jan 20 01:23 ..
    144 -rw-------@ 1 me staff 39940 Jan 15 14:27 .DS_Store
    145 -rw-r--r-- 1 me staff 0 Oct 20 2008 .localized
    147 -rwxr-xr-x 1 me staff 8 Mar 20 2010 .promptCache
    151 -rw-r--r-- 1 me staff 6893 Dec 17 10:36 A.txt
    152 -rw-r--r--@ 1 me staff 7717 Dec 17 10:54 A9.txt
    If you look very closely, a hint as to what is going on is obvious: the inode for the Documents folder is 135 in the locally mounted case (the first set of output above, for mac1), and it's 728505 in the remote mounted case for mac2. So it appears that these "hard links" to folders have an extra level of folder that is hidden from you and that AFP fails to take into account. That is what "hfsdebug" shows even better, as you can clearly see the REAL location of the Documents folder is in something called "/.HFS+ Private Directory Data%000d/dir_135" that is not even visible to the shell. And if you look closely at the remote mac2 case, when I did the "cd Documents" I don't go into inode 135, but into inode 728505 (look closely at the "." entry of the "ls -la" commands on both mac1 and mac2), which is the REAL problem, but I have no idea how to get AFP to follow the extra level of indirection.
    Anyone have any ideas how to fix this so that "ls -l" commands don't generate these "no such file or folder" messages?
    I am guessing that the issue is really something to do with AFP (Apple File Protocol) mounted remote volumes. The TimeMachine example is something that I used as an example that anyone could verify the problem. The real problem for me has nothing to do with Time Machine, but has to do with some hard links to folders that I created on another file system totally separate from the Time Machine volume. They exhibit the same problem as these Time Machine created folders, so am pretty sure the problem has nothing to do with how I created hard links to folders which is not doable normally without writing a super simple little 10 line program using the link() system call - do a "man 2 link" if you are curious how it works.
    I'm well aware of the issues and the conditions when they can and can't be used and the potential hazards. I have an issue in which they are the best way to solve a problem. And after the problem was solved, is when I noticed this issue that appears to be a by-product of using them.
    Do not try these hard links to folders on your own without knowing what they're for and how (and how not) to use them. They can cause real problems if not used correctly. So if you decide to try them out and you lose some files or your entire drive, don't say I didn't warn you first.
    Thanks ...
    -Bob

    The problem is Mac to Mac - the volume that I'm having the issue with is not related in any way to Time Machine or to TimeCapsule. The reference to TIme Machine is just to illustrate the problem exists outside of my own personal work with hard links to folders on HFS Extended volumes (case-sensitive in this particular case in case that matters).
    I'm not too excited about the idea of snooping AFP protocol to discover anything that might be learned there.
    The most significant clue that I've seen so far has to do with the inode numbers for the two folders shown in the Terminal window snippets in the original post. The local mounted case uses the inode=728505 of the problematic folder which is in turn linked to the hidden original inode of 135 via the super-secret /.HFS+... folder that you can't see unless using something like the "hfsdebug" program I mentioned.
    The remote mounted case uses the inode=728505 but does not make the additional jump to the inode=135 which is where lower level folders appear to be physically stored.
    Hence the behavior that is seen - the local mounted case is happy and shows what would be expected and the remote mounted case shows only files contained in the problem folder but not lower-level folders or their contents.
    From my little knowledge of how these inode entries really work, I think that they are some sort of linked list chain of values, so that you have to follow the entire chain to get at what you're looking for. If the chain is broken somewhere along the line or not followed correctly, things like this can happen. I think this is a case of things not being followed correctly, as if it were a broken chain problem then the local mounted case would have problems also.
    But the information for this link in the chain is there (from 728505 to the magic-135) but for some reason AFP doesn't make this extra jump.
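    For anyone wanting to reproduce the comparison, here is a minimal sketch of the inode check described above. The paths are placeholders; `ls -di` works on any POSIX system, although observing the hard-link-to-folder chain itself requires an HFS+ volume and a third-party tool like hfsdebug.

    ```shell
    # Create a throwaway folder pair and read their inode numbers -- the
    # same check used above to compare what the local mount and the AFP
    # mount report for the problem folder. All paths here are placeholders.
    mkdir -p /tmp/inode_demo/sub
    parent_inode=$(ls -di /tmp/inode_demo | awk '{print $1}')
    sub_inode=$(ls -di /tmp/inode_demo/sub | awk '{print $1}')
    echo "parent=$parent_inode sub=$sub_inode"
    # On the real volume, run the same 'ls -di' against the problem folder
    # once on the machine hosting the disk and once on an AFP client; if
    # the numbers match but the visible contents differ, the client is not
    # following the extra link (728505 -> 135 in the post above).
    ```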
    Yesterday I heard back from Apple tech support and they have confirmed this problem; they say it is an "implementation limitation" of the AFP client. I think it's a bug, but that will have to be up to Apple to decide now that it's been reported. I just finished reporting this as a bug via the Apple Bug Reporter web site -- it's bug id 8926401 if you want to keep track of it.
    Thanks for the insights...
    -Bob

  • V8.2.1: generic file i/o error while loading VI's

    Hello,
    I had a large LV application open and running fine.  I closed it, and upon re-opening the top-level VI it gives "Generic File I/O Error" on several sub-VIs (belonging to me, not LV).  Opening those sub-VIs individually reveals no problem at all, and in any case the whole code base lives in a Subversion repository; I tried checking out a fresh copy of the whole source tree and the problem re-appeared.  Before LabVIEW displays the "File I/O Error" it hangs for several minutes with no disk or CPU activity.
    I just tried checking out a fresh copy of the source tree on a separate computer.  Now LabVIEW is hanging while loading "<vilib>:\DAQmx\create\channels.llb\DAQmx Create Channel (CI-GPS Timestamp).vi" after having managed to load 920 VIs.  On edit: I spoke too soon.  It is now duplicating the "Generic File I/O Error" on one of my sub-VIs.
    If anyone has any clue to this strange behavior please let me know. 
    Thank you,
    -Brian

    What I found was this...
    My code calls a bunch of subVIs in another directory.  Some of those subVIs had unwise paths saved in them that loaded other subVIs from a network shared drive instead of the local disk directory.
    The network shared drive had been taken offline since the computer was delivered to a customer last Friday.  When LV tried to load the VIs from the shared drive, it would hang for about five minutes, then give the "Generic File I/O Error" with an "Ok" button to continue.  I eventually sat and did this for about 13 VIs, taking most of an hour while waiting for the network reads to time out.
    Once my now-broken application loaded I was able to successfully re-direct the missing VI's to the local disk, resolving the problem.
    The "Generic File I/O Error" message could not be less useful in this case, and I'm rather irritated by the genericity of the message.  Something like "Network read failed" would have immediately illustrated the problem and kept me from wasting a whole lot of time deciphering it.
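    In hindsight, the failure mode above can be guarded against with a pre-flight check. The sketch below is not LabVIEW tooling, just an illustration: it walks a hard-coded stand-in list of referenced paths and reports the unreachable ones up front, instead of letting each one hang for minutes inside the IDE. The two paths are placeholders.

    ```shell
    # Check every path a project references before opening it, so a dead
    # network share fails fast here rather than one slow timeout per VI.
    missing=0
    for vi_path in /tmp "/definitely/missing/share/sub.vi"; do
      if [ -e "$vi_path" ]; then
        echo "OK      $vi_path"
      else
        echo "MISSING $vi_path"
        missing=$((missing + 1))
      fi
    done
    echo "$missing path(s) unreachable"
    ```

    In a real setup the list would come from wherever the project records its dependencies; the point is only that an existence check is cheap, while a blocking network read is not.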

  • File not found error from scheduler

    Hi,
    We have a scheduled task in the CF scheduler that runs a file with a .cfa extension, which triggers some mails to certain recipients on execution.
    At the bottom of the scheduled task, we have checked the option to 'Save output to file' and given the file
    path. On direct execution of the given URL, the file is executed successfully and the mails are fired.
    But when trying to execute the same via the scheduler, it does not fire the mails. On investigating the output file, we
    found the error message 'File not found'.
    Is it possible that the file gets executed when directly browsing the URL but shows a 'File not found' error
    while executing via the CF scheduler? (CF 8 is used.)

    To be more specific,
    the URL format given in the CF admin scheduler is like this:
    "http://www.domainname.com/myfolder/myfile.cfa"
    A file path to log the output is also given, to publish to.
    If we copy the above URL into a browser and execute it, the file gets executed and the mails are fired.
    If we set the interval or run the task from the administrator, the mails are not sent. On examining the log file given in the scheduler section, it
    shows the error '/myfolder/myfile.cfa' file not found.
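    One way to narrow this down is to check things from the ColdFusion server itself, since the scheduler runs there while the browser test runs on a desktop. The sketch below is hedged: the webroot path is an assumption, not something from the post. The idea is to confirm whether the server-side file behind the URL actually exists, and then to request the URL from the server's own shell so any DNS or hosts-file difference shows up.

    ```shell
    # Hypothetical webroot -- substitute the real one for your CF install.
    webroot="/opt/coldfusion/wwwroot"
    page="myfolder/myfile.cfa"
    if [ -f "$webroot/$page" ]; then
      status="present"
    else
      status="missing"
    fi
    echo "$status: $webroot/$page"
    # If the file is present on disk but the scheduler still logs 'file
    # not found', fetch the URL from the server's shell, e.g.:
    #   curl -sS -o /dev/null -w '%{http_code}\n' \
    #     "http://www.domainname.com/myfolder/myfile.cfa"
    # A 404 there, with a 200 from a desktop browser, points at name
    # resolution or virtual-host differences on the server side.
    ```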
