FTP problem - FTPEx: No such file

Hello,
I have a file adapter that has been working "normally" for some time, but recently it has started to fail. The polling of files stops and I get the following error message:
Error: Processing interrupted due to error: FTPEx: No such file.
When I log into the FTP site with another tool such as WS_FTP, I can see a lot of files in the directory. If I rename the first file in the directory, polling starts again and works normally.
It's rather strange behavior, since the adapter can then run normally for a week or more before the problem happens again.
If you know anything about why this might occur and how to solve it, please respond.
Thanks,
Per

Thanks for your replies!
The problem has happened with both "Per File Transfer" and "Permanently" connection modes, so I don't think the problem lies there.
About using *.* (star dot star): there are two different file types in the folder, and each is picked up by its own adapter. The problem only happens with one of the adapters, not the other. Today I pick them up like this:
CAL*
DAL*
It's always CAL that goes wrong. I could change it to CAL*.* if you think that would make a difference. I have also looked at the note mentioned and will add the timeout to it.
Edited by: Per Rune Weimoth on Sep 23, 2009 12:01 PM
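Since "FTPEx: No such file" is often just the adapter's wrapping of a failed directory listing, one quick check the next time polling stalls is to capture what the server itself returns for the same wildcard. A minimal sketch from a shell; the host, credentials, and directory here are placeholders, not values from this thread:

# Ask the server directly what a listing of the adapter's pattern returns
ftp -n ftp.example.com <<'EOF'
user myuser mypassword
cd /outbound
ls CAL*
quit
EOF

If the listing hangs or errors on the first entry while WS_FTP shows the files, that points at the file at the head of the listing (a half-written upload, an odd character in the name) rather than at the channel configuration, which would fit the observation that renaming the first file gets polling going again.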

Similar Messages

  • Newbie problem - enlightenment: No such file or directory

    Hi, I'm new to Arch Linux and I've hit a ridiculous problem after installing Enlightenment.
    I have put:
    exec /opt/e17/bin/enlightenment_start
    in ~/.xinitrc, but it drops the error:
    -bash: /opt/e17/bin/enlightenment_start: No such file or directory
    but the file IS there!! I can see it in the directory, and I can tab-complete the command, but bash can't find it...
    I tried other binaries in the directory; only the *-config ones work, none of the others do.
    Please, does anyone know why this can happen?? I'm going mad trying to find a solution...
    P.S.: Sorry for my poor English ^_^

    Sorry for the mistake
    [terseus@terseus-pc ~] ls -l /opt/e17/bin/enlightenment_start
    -rwxr-xr-x 1 root root 8908 Mar 11 14:55 /opt/e17/bin/enlightenment_start
    I think the permissions are right; I tried as root but it doesn't work either.
    P.S.: The Arch Linux website is really great! The site and the forum work perfectly even from links
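    For what it's worth, "No such file or directory" for a binary that is plainly there usually means the kernel could not find the ELF interpreter or a shared library the binary asks for, not the file itself. Two standard checks, as a sketch using the path from the post:
    # Which architecture was the binary built for?
    file /opt/e17/bin/enlightenment_start
    # Do its dynamic loader and shared libraries all resolve?
    ldd /opt/e17/bin/enlightenment_start | grep -i 'not found'
    If file reports an architecture or interpreter your system doesn't have (for example a 32-bit binary without the 32-bit loader installed), that would explain why bash reports the file as missing even though it exists.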

  • WRT110 router - FTP problem - Timeouts on large files

    When I upload files via FTP and the transfer of a file takes more than 3 minutes, my FTP program doesn't move on to the next file once the previous one has finished. The problem occurs only when I connect to the internet behind the WRT110 router. I have tried the FileZilla, FTPRush and FlashFXP FTP clients. In FileZilla, sending keep-alives doesn't help; it only works in FlashFXP, but I have a trial version. On FileZilla's site I can read this:
    http://wiki.filezilla-project.org/Network_Configuration#Timeouts_on_large_files
    Timeouts on large files
    If you can transfer small files without any issues, but transfers of larger files end with a timeout, a broken router and/or firewall exists between the client and the server and is causing a problem.
    As mentioned above, FTP uses two TCP connections: a control connection to submit commands and receive replies, and a data connection for actual file transfers. It is the nature of FTP that during a transfer the control connection stays completely idle. The TCP specifications do not set a limit on the amount of time a connection can stay idle. Unless explicitly closed, a connection is assumed to remain alive indefinitely. However, many routers and firewalls automatically close idle connections after a certain period of time. Worse, they often don't notify the user, but just silently drop the connection.
    For FTP, this means that during a long transfer the control connection can get dropped because it is detected as idle, but neither client nor server are notified. So when all data has been transferred, the server assumes the control connection is alive and sends the transfer confirmation reply. Likewise, the client thinks the control connection is alive and waits for the reply from the server. But since the control connection got dropped without notification, the reply never arrives and eventually the connection times out.
    In an attempt to solve this problem, the TCP specifications include a way to send keep-alive packets on otherwise idle TCP connections, to tell all involved parties that the connection is still alive and needed. However, the TCP specifications also make it very clear that these keep-alive packets should not be sent more often than once every two hours. Therefore, with added tolerance for network latency, connections can stay idle for up to 2 hours and 4 minutes.
    However, many routers and firewalls drop connections that have been idle for less than 2 hours and 4 minutes. This violates the TCP specifications (RFC 5382 makes this especially clear). In other words, routers and firewalls that drop idle connections too early cannot be used for long FTP transfers. Unfortunately, manufacturers of consumer-grade routers and firewalls do not care about specifications... all they care about is getting your money (and they deliver only barely working, lowest-quality junk).
    To solve this problem, you need to uninstall affected firewalls and replace faulty routers with better-quality ones.
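    If replacing the router is not an option, a common workaround on a Linux client is to have the operating system send TCP keep-alives long before the router's idle cutoff. This deliberately ignores the two-hour guidance quoted above and only helps if the FTP client enables SO_KEEPALIVE on its sockets (as FileZilla's keep-alive option does); a sketch, run as root:
    # Send the first keep-alive probe after 2 idle minutes instead of the 2-hour default
    sysctl -w net.ipv4.tcp_keepalive_time=120
    # Then probe every 30 seconds, giving up after 5 unanswered probes
    sysctl -w net.ipv4.tcp_keepalive_intvl=30
    sysctl -w net.ipv4.tcp_keepalive_probes=5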

  • FTP adapter: exc. 550: No such file or directory

    Hi all,
    I'm working on version 7.0 and I am facing a problem with the FTP adapter.
    In a few words, I configured an FTP sender communication channel to get files from a remote server (in test mode).
    As soon as I start this channel, the CC monitor shows the following error message:
    "Could not process file '<getThisFile>.xml': com.sap.aii.adapter.file.ftp.FTPEx: 550 outbox: No such file or directory."
    where '<getThisFile>.xml' is the filename and "outbox" is the directory.
    I presume it recognizes both the filename and the directory, yet the error message claims the directory or file is wrong.
    I tried the same action from the command line with the same FTP user and password. Result: I was able to get the file from the "outbox" directory.
    ...so I should be able to exclude a file permission problem.
    Below are some logs from the "default.trc" file:
    #1.5#001125BDB332006E000000390006802E0004400DCACAD490#1196327887885#com.sap.aii.adapter.file.File2XI##com.sap.aii.adapter.file.File2XI.processFtpList()#J2EE_GUEST#0####ff6d7bd09e5b11dc9e34001125bdb332#XI File2XI[CC_FTP_SND_Order_new/BS_Arianna_DEV/]_57##0#0#Error#1
    #/Applications/ExchangeInfrastructure/AdapterFramework/Services/ADAPTER/ADMIN/File#Plain###Retrieving file 'orders.00089872.xml' failed unexpectedly with com.sap.aii.adapter.file.ftp.FTPEx: 550 outbox: No such file or directory.#
    where:
    CC_FTP_SND_Order_new is the sender communication channel;
    BS_Arianna_DEV is the related business service;
    'orders.00089872.xml' is the file to get;
    'outbox' is the directory path;
    ... nothing else.
    Have you ever faced a similar problem? Any suggestions to fix it?
    Thanks in advance
    Alex

    Hi friends,
    The FTP server is Microsoft Windows XP [Version 5.1.2600].
    When I log in, I only change into a subdirectory called outbox.
    If I put a backslash in front (Windows style), the directory path is wrong (cd \outbox would mean that outbox is under the root).
    Some FTP server logs:
    Nov 29 09:40:00 golem ftpd[25272]: USER 8023******
    Nov 29 09:40:00 golem ftpd[25272]: PASS password****
    Nov 29 09:40:00 golem ftpd[25272]: SYST
    Nov 29 09:40:00 golem ftpd[25272]: TYPE Image
    Nov 29 09:40:00 golem ftpd[25272]: CWD outbox
    Nov 29 09:40:00 golem ftpd[25272]: PASV
    Nov 29 09:40:00 golem ftpd[25272]: LIST orders*
    Nov 29 09:40:00 golem ftpd[25272]: CWD outbox
    Nov 29 09:40:00 golem ftpd[25272]: QUIT
    I suppose that XI works fine and the FTP server receives the "CWD outbox" command twice. I also suppose that this Windows FTP server remembers the subdirectory "outbox" each time I log in via FTP with user "8023******".
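    The server log above may contain the key: the session issues the relative "CWD outbox" twice. The first succeeds, and the second then effectively asks for outbox/outbox, which does not exist, so the next listing would fail with the same 550 seen in the trace. This is easy to reproduce by hand (a sketch; host and credentials are placeholders):
    # The second relative cd fails once the session is already inside outbox
    ftp -n ftp.example.com <<'EOF'
    user myuser mypassword
    cd outbox
    cd outbox
    quit
    EOF
    Configuring the channel's source directory as an absolute path (/outbox) would sidestep this, since "CWD /outbox" succeeds regardless of the current directory.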

  • Problem in Picking the File in FTP to RFC Scenario

    Hi,
    I am using an FTP to RFC scenario. I have a CSV file and need to use File Content Conversion on the sender communication channel, but it's not picking up the file from my FTP server.
    My data is like
    11    Munish     IT
    My message type is like this:
    MT_ABAPPROXY
        HEADER
            Empno
            Empname
            Departmentname
    My communication channel is like this:
    Transport Protocol: FTP
    Message Protocol: File Content Conversion
    Source Directory: /
    File name: file.csv
    The server name and port are correctly configured.
    My File Content Conversion is like this:
    Doc Name:MT_ABAPPROXY
    Namespace:http://www.munnamespace.com
    Recordsetname:Details
    Rec Str :            HEADER,*
    HEADER.fieldSeperator     ','
    HEADER.fieldNames    Empno,Empname,Departmenname
    HEADER.lastFieldsOptional   YES
    HEADER.fieldFixedLengths    12,24,24
    ignoreRecordsetName         true
    HEADER.endSeperator     'nl'
    It is unable to pick up the file. If I don't use File Content Conversion, it picks the file up, but that's of no use, as the data is then not in XML.
    So kindly help me solve this, as the problem is most likely in the File Content Conversion.
    Thanks
    Munish Singh

    Hi Arvind,
    thanks for the reply.
    I have done what you said, but I am getting this error in communication channel monitoring (RWB):
         Conversion of file content to XML failed at position 0: java.lang.Exception: ERROR converting document line no. 1 according to structure 'HEADER':java.lang.Exception: Consistency error: field(s) missing - specify 'lastFieldsOptional' parameter to allow this
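    One detail worth checking in the configuration above: fieldSeparator and fieldFixedLengths are alternatives in File Content Conversion, one for delimited files and one for fixed-width files, and this consistency error reads as if the fixed lengths are being applied to a comma-separated line. A minimal parameter set for a CSV file might look like this (a sketch based on the structure in the post, with the fixed lengths dropped and the standard parameter spellings fieldSeparator/endSeparator):
    Recordset Structure:      HEADER,*
    HEADER.fieldSeparator     ,
    HEADER.fieldNames         Empno,Empname,Departmentname
    HEADER.endSeparator       'nl'
    ignoreRecordsetName       true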

  • (solved) screen problem: "reopen fifo blabla: no such file or directory"

    I can't start screen. It just says "reopen fifo /tmp/screens/S-ramoneur/<random number>.pts-1.arch: no such file or directory".
    I've searched Google but can't find anything.
    I've looked, and the folder is there, the permissions are right, and yes, I have free space on my /tmp.
    I remember having a similar problem to this before, and I solved it just by doing screen -wipe. That doesn't work now.
    It seems screen just can't create the file "/tmp/screens/S-ramoneur/<random number>.pts-1.arch", which is weird.
    screen has worked perfectly before, and I have a fully functioning config which I've used for a long time.
    Anyone know what's wrong?
    Last edited by ramoneur (2007-10-05 11:08:36)

    Ah, I figured it out now. I was looking for a script that would show my current RAM usage in my hardstatus line, which I totally forgot I put in my .screenrc some hours ago. I guess I hadn't started a new screen session since then.
    Well, the script obviously messed everything up xD
    Does anyone know a simple script that will show my total available RAM and free RAM in my hardstatus line?
    It doesn't hurt to ask one more question, I hope...
    I want my hard drives' info (available GB and free GB) displayed in my hardstatus line as well.
    Last edited by ramoneur (2007-10-05 00:33:17)
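    For the record, screen can run such a script itself through its backtick directive and splice the output into the hardstatus line. A minimal sketch, assuming GNU screen and the Linux free(1) output format; the script path is a placeholder. First a tiny script, saved executable as e.g. /home/you/bin/meminfo.sh:
    #!/bin/sh
    # Print used/total RAM in MB, e.g. "1234M/8192M"
    free -m | awk '/^Mem:/ {print $3 "M/" $2 "M"}'
    Then in ~/.screenrc:
    # Run the script every 10 seconds and splice its output in as %1`
    backtick 1 10 10 /home/you/bin/meminfo.sh
    hardstatus alwayslastline "%H %= mem: %1` %= %c"
    A df -h based script dropped into a second backtick slot works the same way for the hard drive figures.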

  • Samba: No Such File or Directory? Strange Problem...

    Hello,
    I've been using Arch Linux for a while and like it very much; I was using Debian GNU/Linux before this distro. Anyway, I have a /pub directory, which is a partition on my hard disk. It's always mounted on the system and I'm always using it.
    I've configured Samba to share my music, let's say. Here's my conf file:
    [01:54] (tunix@penguix ~)$ cat /etc/samba/smb.conf
    # penguix Samba Configuration File
    [global]
      workgroup = WORKGROUP
      netbios name = TUNIX
      server string = Arch Linux
      security = share
      socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
      hide unreadable = yes
      hide dot files = yes
      log file = /var/log/samba/log.%m
      max log size = 5
      log level = 0
      encrypt passwords = yes
      guest account = nobody
    [Muzik]
      comment = Muzik
      path = /pub/Muzik
      guest ok = yes
      force user = nobody
      read only = yes
      writable = no
      browsable = yes
    I can see other computers on the network. Other computers can see my computer and access it. But whenever they want to access the directory Muzik, it says there's no such file or directory. But the directory exists:
    [01:54] (tunix@penguix ~)$ ls -l /pub/Muzik/
    total 128
    drwxr-xr-x  32 tunix users 65536 2006-05-08 01:42 Yabanci
    drwxr-xr-x  30 tunix users 65536 2006-05-23 01:05 Yerli
    I've looked in the log files and they say something like this:
    [2006/05/11 03:03:56, 0] smbd/service.c:make_connection(846)
      10.0.0.212 (10.0.0.212) couldn't find service muzIk
    [2006/05/11 03:20:42, 0] smbd/service.c:make_connection(846)
      10.0.0.212 (10.0.0.212) couldn't find service muzIk
    [2006/05/11 03:20:44, 0] smbd/service.c:make_connection(846)
      10.0.0.212 (10.0.0.212) couldn't find service muzIk
    [2006/05/11 03:20:45, 0] smbd/service.c:make_connection(846)
      10.0.0.212 (10.0.0.212) couldn't find service muzIk
    [2006/05/11 13:22:10, 0] smbd/service.c:make_connection(846)
      10.0.0.212 (10.0.0.212) couldn't find service muzIk
    [2006/05/11 13:22:12, 0] smbd/service.c:make_connection(846)
      10.0.0.212 (10.0.0.212) couldn't find service muzIk
    [2006/05/11 13:22:12, 0] smbd/service.c:make_connection(846)
      10.0.0.212 (10.0.0.212) couldn't find service muzIk
    Anybody have an idea what this problem is about?
    Thanks a lot!

    Actually, the problem is with the nobody account, I guess... This is my new smb.conf file:
    [global]
            netbios name = TUNIX
            workgroup = WORKGROUP
            server string = Arch Linux
            hosts allow = 10.0.
            security = share
            encrypt passwords = no
            socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
            hide unreadable = yes
            hide dot files = yes
            guest account = nobody
    [Muzik]
            comment = muzik
            path = /pub/Muzik
            guest ok = yes
            read only = yes
            browseable = yes
    [emule]
            path = /emule/
            guest ok = yes
            browseable = yes
            read only = yes
    emule works fine, but when somebody tries to enter the Muzik directory, there's the problem. The difference between them is that emule is on an ext3 partition and Muzik is on a fat32 partition...
    I've been trying different things for almost 3 hours and found nothing. I've done lots of Google research and again found nothing usable. Please help me, I'm really stuck here...
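    Two things stand out in the material above: the log shows the client asking for service "muzIk" while the share is defined as [Muzik] (worth ruling out even though share-name matching is normally case-insensitive), and the failing share sits on FAT32 while the working one is on ext3. Both are quick to probe on the Samba host itself; a sketch, assuming the stock Samba tools are installed:
    # Validate smb.conf and dump the shares smbd will actually serve
    testparm -s
    # Ask the local server for its share list, then try to browse the share as guest
    smbclient -L localhost -N
    smbclient //localhost/Muzik -N -c 'ls'
    If the local browse works, the share definition is fine and the problem lies between the remote client and the FAT32 mount (mount options, case handling); if it fails the same way, raising the log level above 0 should show which path lookup breaks.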

  • I still have problems with getting my website online. I have defined my server. Then I did the test and there was a connection via FTP. I put my files on the external server and there is a connection with the external server. But when I check to see my we

    I still have problems getting my website online. I have defined my server. Then I ran the test and there was a connection via FTP. I put my files on the external server, and there is a connection with the external server. But when I check my website online (with the Firefox, Explorer, or Chrome browser) I always get the message 'Forbidden: You don't have permission to access / on this server.' Can somebody help me please? I have to get my website online... Thank you!
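    An Apache "Forbidden ... access /" at the site root usually means either that no index file (index.html or similar) landed in the web root, or that the uploaded directory and files lack world-readable permissions. Both can be checked from a command-line client before involving the host; a sketch, assuming lftp is installed and using placeholder credentials:
    # List what actually landed in the web root, then loosen permissions if needed
    lftp -u myuser,mypassword ftp.example.com -e "ls; chmod 755 .; chmod 644 index.html; bye"
    If neither helps, the host may require files under a specific subdirectory such as public_html or htdocs, which is a question for their support.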

    Hello Els,
    it's well known that in all these cases you describe I'm not a friend of detailed troubleshooting (I can see Nancy's smile already).
    Being independent in all these things is one of the reasons why I prefer an external FTP program. The difficulties you have to fight with confirm me in this opinion, not least because we should always look for experts; we don't engage a "jack of all trades".
    To manage several websites, to upload my files, and sometimes for the opposite direction, for a necessary download from my server, or to use a site-wide synch, I'm using FileZilla. It simply seems easier to me to keep track of all operations precisely and to generate or mirror the desired tree structure.
    Above all, FileZilla has a feature (translated from my German FileZilla) called "compare file list". Here it's possible to use file size or modification time as a criterion. There is also the possibility to "hide identical files", so that only the files which you want to edit remain visible.
    And even if it means you have to install a new program, I am convinced that there is an advantage. Here are the links to get it and to read about how it works:
    http://filezilla-project.org/ and http://wiki.filezilla-project.org/Tutorial#Using_the_site_manager
    Mac: Mac OS X (use: Show additional download options)
    http://filezilla-project.org/download.php
    Of course, you also need all the access data to reach your server, and for MIME issues you should contact your web host/provider.
    Good luck!
    Hans-Günter
    P.S.
    Since I use two screens, the whole thing has become even more comfortable.

  • Problem right after running runInstaller - No such file or directory

    Right after I run "runInstaller" while trying to install 9.2 on a Linux machine, I get the following error:
    ./runInstaller
    Initializing Java Virtual Machine from /tmp/OraInstall2004-08-11_12-49-38PM/jre/bin/java. Please wait...
    /tmp/OraInstall2004-08-11_12-49-38PM/jre/bin/i386/native_threads/java: error while loading shared libraries: libstdc++-libc6.1-1.so.2: cannot open shared object file: No such file or directory
    Can someone please help me with this?
    Thanks,
    Michael

    Could you check note 252217.1 on Metalink?
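    The missing libstdc++-libc6.1-1.so.2 is the old compatibility C++ runtime that the 9.2 installer's bundled JRE links against; on RPM-based distributions it ships in a compat-libstdc++ package. Two quick checks, as a sketch (the java path is taken from the error above, and exact package names vary by distribution):
    # Confirm which shared objects the installer's JRE cannot resolve
    ldd /tmp/OraInstall*/jre/bin/i386/native_threads/java | grep 'not found'
    # See whether a compatibility libstdc++ package is installed (RPM-based systems)
    rpm -qa | grep -i compat-libstdc++
    Installing the distribution's compat-libstdc++ package and re-running runInstaller is the usual fix.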

  • "No such file or directory" which is not allowed

    Hi,
    PI --> AIX (version=5.3, arch=ppc64)
    Target --> FTP server (SunOS 5.8)
    PI Software Build Information
    Make Release NW07_08_REL
    SPS Number 08
    I used a file sender adapter with FTP in my communication channels ... the scenario is working fine,
    but the RWB Communication Channel Monitoring shows ERROR status.
    The FTP server is running on UNIX.
    If the file is there in the source directory, it is processed successfully;
    the rest of the time it throws the following error message in the RWB Communication Channel Monitoring:
    Could not process file 'DELRESS*.PRD: No such file or directory': com.sap.exception.standard.SAPIllegalArgumentException:
    The parameter "argument" has the value
    "ftp://XX.XX.XX.XX:21/var/mis/interface/MRO/E100SW/get_test/DELRESS*.PRD: No such file or directory", so it contains the
    character "ftp://XX.XX.XX.XX:21/var/mis/interface/MRO/E100SW/get_test/DELRESS*.PRD: No such file or directory" which is not allowed
    drwxrwxrwx   3 develp   users        512 Oct 21 16:44 get_test
    Please advise.

    Aravind,
    thanks.
    1. I added the host name (SI3 - XXX.XXX.XXX.XXX).
    2. I tried using "\" instead of "/".
    File Send Adapter
    Source Directory : /var/mis/interface/MRO/E100SW/get_test  --> \var\mis\interface\MRO\E100SW\get_test
    File Name : AAAAA.txt
    Server : SI3
    connection mode : Per File Transfer
    RWB Result (First case) ----
    11/13/09 9:03:26 AM   Processing started
    11/13/09 9:03:21 AM   Retry interval started. Length: 10.0 seconds
    11/13/09 9:03:21 AM   Could not process file 'AAAAA.txt: No such file or directory': com.sap.exception.standard.SAPIllegalArgumentException: The parameter "argument" has the value "ftp://SI3:21/var/mis/interface/MRO/E100SW/get_test/AAAAA.txt: No such file or directory", so it contains the character "ftp://SI3:21/var/mis/interface/MRO/E100SW/get_test/AAAAA.txt: No such file or directory" which is not allowed
    RWB Result (Second case) ----
    11/13/09 9:03:26 AM   Retry interval started. Length: 10.0 seconds
    11/13/09 9:03:26 AM   Could not process file 'AAAAA.txt: No such file or directory': com.sap.exception.standard.SAPIllegalArgumentException: The parameter "argument" has the value "ftp://XXX.XXX.XXX.XXX:21/var/mis/interface/MRO/E100SW/get_test/AAAAA.txt: No such file or directory", so it contains the character "ftp://XXX.XXX.XXX.XXX:21/var/mis/interface/MRO/E100SW/get_test/AAAAA.txt: No such file or directory" which is not allowed
    RWB Result (Third case) ----
    11/13/09 9:04:43 AM   Retry interval started. Length: 10.0 seconds
    11/13/09 9:04:43 AM   An error occurred while connecting to the FTP server 'SI3:21'. The FTP server returned the following error message: 'com.sap.aii.adapter.file.ftp.FTPEx: 550 \var\mis\interface\MRO\E100SW\get_test: No such file or directory.'. For details, contact your FTP server vendor.
    11/13/09 9:04:43 AM   Processing started
    11/13/09 9:04:40 AM   Retry interval started. Length: 10.0 seconds
    RWB Result (4th case) ----
    11/13/09 9:06:05 AM   Retry interval started. Length: 10.0 seconds
    11/13/09 9:06:05 AM   An error occurred while connecting to the FTP server 'XXX.XXX.XXX.XXX:21'. The FTP server returned the following error message: 'com.sap.aii.adapter.file.ftp.FTPEx: 550 \var\mis\interface\MRO\E100SW\get_test: No such file or directory.'. For details, contact your FTP server vendor.
    It is always the same problem.
    Please help.

  • FTP adapter --- Not picking up files

    Hi,
    I have set up an FTP adapter in archive mode to read files from the FTP server and process them while archiving a copy. I have set the poll interval to 40 seconds, and the retry interval is also set. It works great the first time: it processes the file and archives it. But when the same file comes into the FTP server again, I see the following message in the adapter monitor: "Error: Stopped due to error: com.sap.aii.adapter.file.ftp.FTPEx: inputdat.txt: No such file or directory: No such file or directory." The file exists on the FTP server but is not processed. I am not sure what I am missing...
    -Teresa

    Hi Michal,
    I have a timestamp set up for the archive directory, so every run creates an archive file with a timestamp. But the input filename remains the same. The archive functionality removes the file from the FTP server, so when the file comes into the FTP server again, the FTP adapter doesn't trigger automatically. Every time, I had to activate the IB for the FTP adapter to trigger the process. I know this is not how the XI FTP adapter should work. Our scenario is that we receive weekly files with the same name into the FTP adapter. We need to archive each weekly file and also send it on for further processing. I can't keep activating the process manually... I know something is going wrong but can't figure it out :( Any help is appreciated :)
    -Teresa

  • TS2570 Still can't access my computer; followed instructions to perform a safe boot, and the last line of the script shows: BootCacheControl: Unable to open /Var/db/BootCache.playlist:2 No such file or directory. Any idea?

    Brand new MacBook Pro,
    purchased at Mexico's department store Liverpool on June 20th, 2012.
    Purchased a memory upgrade to 8 GB on June 27th at the Apple Store Memorial City, Houston.
    Upgraded memory from 4 GB to 8 GB on June 28th.
    Tried to write to my external HD (previously written on a Windows-based PC) with no success.
    On the recommendation of a Mac assistant, copied all my external HD contents onto my Mac hard drive.
    Then formatted my external HD and copied all my information back to it.
    Deleted all the information from my MacBook Pro HD.
    Attempted to repeat the same operation with another external HD, but got a message saying there was not enough space on the computer HD (even though I have a 750 GB hard drive).
    Looked in the trash bin, and there it was: all the information previously deleted.
    Could not empty the trash bin, although I got a message asking me to safely delete all the information from the trash bin.
    Could not get enough space released; after several attempts the trash bin was finally emptied.
    All happy getting acquainted with my new MacBook Pro. Two days later, I got a gray screen and was not able to start the computer.
    Looking into Mac support I found the article http://support.apple.com/kb/TS2570 and followed the instructions to perform a safe boot:
    "Perform a Safe Boot
    Simply performing the Safe Boot may resolve this issue.
    Shut down your Mac. If necessary, hold your Mac's power button for several seconds to force it to power down.
    Start your Mac, then immediately hold the Shift key. This performs a Safe Boot. Advanced tip: If you want to
    see the status of a Safe Boot as it progresses, you can hold Shift-Command-V during start up (instead of just Shift).
    Note: A Safe Boot takes longer than a typical start up because it includes a disk check and other operations.
    The following is the script that appears on the screen upon safe boot; it halts after the last line.
    AppleACPICPU:Processor Id=6 LocalAplicId=255 Disabled
    AppleACPICPU:Processor Id=7 LocalAplicId=255 Disabled
    AppleACPICPU:Processor Id=8 LocalAplicId=255 Disabled
    calling mpo_policy_init for TMSafetyNet
    Security policy loaded: Safety net for Time Machine (TMSafetyNet)
    calling mpo_policy_init for Sandbox
    Security policy loaded: Seatbelt sandbox policy (Sandbox)
    calling mpo_policy_init for Quarantine
    Security policy loaded: Quarantine Policy (Quarantine)
    Copyright (c) 1982, 1986, 1989, 1991, 1993
            The Regents of the University of California. All Rights Reserved.
    MAC Framework Succesfully initializad
    using 16384 buffer headers and 10240 cluster IO buffer headers
    IOAPIC: Version 0x20 Vextors 64:87
    ACPI: System State [SO S3 S4 S5] (S3)
    PFM64 (36cpu) 0xf10000000, 0xf0000000
    Aplconsole relocated to 0xf1000000
    PCI configuration changed (bridge=16 device=4 cardbus=0)
    [ PCI configuration end, bridges 12 devices 16 ]
    Firewire (OHCI) Lucent ID 5901 built-in now active, GUID 3c0754fffe9b2aa2; max speed s800.
    Pthread support ABORTS when sync kernel primitives misused
    com.apple.AppleFSCompressionTypeZlib kmod start
    com.apple.AppleFSCompressionTypeDataless kmod start
    com.apple.AppleFSCompressionTypeZlib load succeeded
    com.apple.AppleFSCompressionTypeDateless load succeeded
    AppleIntelCPUPowerManagementClient: ready
    BTCOEXIST off
    wl0: Broadcom BCM4331 802.11 Wireless controller
    5.100.98.75
    AppleUSBMultitouchDriver::checkStatus - received Status Packet, Payload 2: device was reinitializad
    rooting via boot-uuid from /chosen: 6E918706-FC0D-37460-A3A0-6268A51DF93B
    Waiting on <dict ID="0"><key><string ID="1">IOResources</string><key>IOResourceMatch</key><string ID="2">boot-uuid-media</string></dict>
    Got boot device = IOService:/AppleACPIPlatformExpert/PCI0@0/AppleAPIPCI/SATA@1F,2/AppleIntelPchSe riesAHCI/PRT0@0/AOAHCIDevice@0/AppleAHCIDiskDriver/IOAHCIBlock
    storageevice /IoBlockStorageDriver/TOSHIBA MK7559GSXF Media/IOGUIDPartit
    BSD root: disk0s2, major 14, minor 2
    Kernel is LP64
    com.apple.launchd 1   com.apple.launchd 1   *** launchd[1] has started up. ***
    com.apple.launchd 1   com.apple.launchd 1   *** Verbose boot, will log to /dev/console. ***
    Running fsck on the boot volume...
    ** /dev/rdisk0s2 (NO WRITE)
    ** Root file system
       Executing FSCK_HFS (version diskdev_cmds-540.1~25).
    BootCacheControl: UNable to open /Var/db/BootCache.playlist:2 No such file or directory
    launchctl:Dubious permissions on file (skipping): /Library/LaunchDaemons
    launchctl:Dubious permissions on file (skipping): /System/Library/LaunchDaemons
    Any help or suggestions on what to do next would be welcome.
    I am in the middle of the Atlantic, stuck with a brand new, non-working Apple MacBook Pro.
    Best regards,
    Sergio Ramos
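    Two notes that may help here. The BootCacheControl line by itself is commonly printed during safe boots and is not necessarily the fault. And when a safe boot halts right after the fsck lines, the usual next step from Apple's gray-screen guidance is to run the disk check by hand from single-user mode; a sketch:
    # Hold Cmd-S at power-on to reach single-user mode, then at the root prompt:
    /sbin/fsck -fy
    # Repeat until it reports "The volume ... appears to be OK", then:
    reboot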

    Reinstalling Mac OS does NOT fix the problem for me. I'm still searching for a solution!
    Bernard

  • Tempo problems with imported wav files

    Hey everyone, sorry if there's a quick fix for this in the forums that I couldn't find, but I've been having some tempo problems with imported .wav files.
    Long story short, my system couldn't handle playing all the tracks for a song while recording drums, so I bounced out an mp3 of the song and put it in a new Logic file so my drummer could just play along to that as I recorded him. Unfortunately, the original song is at 167 bpm, but I forgot to change the bpm in the new Logic file with the .mp3 file of the song to 167 bpm, so it was left at the default 120 bpm.
    So, the drums were recorded at the correct 167 bpm, but Logic thinks that those new drum .wav files should be played at 120 bpm, so when I import my drum tracks back into the original file, they do not play correctly at all.
    I could record it all again, but I wanted to check if there was a way I could salvage what I already have, since my drummer lives about an hour away right now and can't just come over whenever he wants.
    Thanks for the help! I really appreciate it.

    Hi,
    First, do not use MP3 in Logic; the sound quality is lower than AIFF, WAV or CAF, and Logic has to decode it for playback, making it a heavier burden on the CPU than an unencoded audio file such as AIFF, WAV or CAF.
    Secondly, audio files are independent of Logic's tempo. If you bounce down an audio file in any format (other than Apple Loop), it will play back at the same speed, regardless of Logic's tempo setting, either at recording or playback. Logic doesn't 'think' anything. The BPM is only important to MIDI tracks, or to the spacing between audio files. The audio files themselves are not affected by the tempo setting. If you import an audio file of tempo 167 into a 120 BPM project, the file will still play at 167; only Logic will indicate the wrong bar positions.
    regards, Erik.

  • Getting `No such file or directory` error while trying to open bdb database

    I have four multi-threaded processes (2 writer and 2 reader processes) which make use of the Berkeley DB transactional data store. I have multiple environments, and the associated database files and log files are located in separate directories (please refer to the DB_CONFIG below). When all four processes start to open and close databases in the environments very quickly, one of the reader processes throws a "No such file or directory" error even though the file actually exists.
    I am making use of Berkeley DB 4.7.25 for testing out these applications.
    The four application names are as follows:
    Writer 1
    Writer 2
    Reader 1
    Reader 2
    The application description is as follows:
    'Writer 1' owns 8 environments, each environment having 123 Berkeley databases created using the HASH access method. At any point in time, 'Writer 1' will be acting on 24 database files across the 8 environments (3 database files per environment) for write operations, whereas a reader process will be accessing all 123 database files per environment (123 * 8 environments = 984 database files) for read activities. Writer 2 has a similar configuration: 8 separate environments, and so on.
    Writer 1, Reader 1 and Reader 2 share the environments created by Writer 1.
    Writer 2 and Reader 2 share the environments created by Writer 2.
    My DB_CONFIG file is configured as follows
    set_cachesize 0 104857600 1   # 100 MB
    set_lg_bsize 2097152                # 2 MB
    set_data_dir ../../vol1/data/
    set_lg_dir ../../vol31/logs/SUBID/
    set_lk_max_locks 1500
    set_lk_max_lockers 1500
    set_lk_max_objects 1500
    set_flags db_auto_commit
    set_tx_max 200
    mutex_set_increment 7500
    Has anyone come across this problem before or is it something to do with the configuration?

    Hi Michael,
    I should have included how we make use of the DB_TRUNCATE flag in my previous reply itself. Sorry for that.
    From the writers, DB truncation happens periodically. During truncation (the DB handle is not associated with any environment handle, i.e. a stand-alone DB), the following parameters are passed to the db->open function call:
    DB->open(db,
             NULL,       /* txnid */
             file,       /* file: absolute DB file path */
             NULL,       /* database */
             DB_HASH,    /* type */
             DB_READ_UNCOMMITTED | DB_TRUNCATE | DB_CREATE | DB_THREAD,
             0);         /* mode */
    Also, the DB_DUP flag is set.
    As you have rightly pointed out, the 'No such file or directory' error occurs during truncation.
    While a database is being truncated it will not be found by others trying to open it. We simulated this by stopping the writer process (responsible for truncation) and opening and closing the databases repeatedly via the readers; the reader process did not crash. When readers and writers were run simultaneously, we got the 'No such file or directory' error. Is there any solution to this problem? (In our case writers and readers run independently, so the readers won't know about truncation by the writers.)
    Also, we are facing one more issue related to DB_TRUNCATE. Consider the following scenario:
    - The reader process holds a reference to the database (X) handle in environment 'Y' at time t1.
    - The writer process opens database (X) in DB_TRUNCATE mode at time t2 (where t2 > t1).
    - After truncation, the writer process closes the database and joins environment 'Y'.
    - After this, any writes to database X are not visible to the reader process.
    - Once the reader process closes database X and re-joins the environment, all the records inserted by the writer process are visible.
    Is this the right behavior? If yes, how can writes be made visible to a reader process without closing and re-opening the database in the above-mentioned scenario?
    Also, when db_set_errfile (http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/db_set_errfile.html) was set, we did not get any additional information in the error file.
    Thanks,
    Magesh

  • "No such file or directory" errors on Time Machine backup volume

    I remotely mounted the Time Machine backup volume onto another Mac and was looking around it in a Terminal window when I discovered what appeared to be a funny problem. If I "cd" into some folders (but not all) and do an "ls -la" command, I get a lot of "No such file or directory" errors for all the subfolders, but all the files look fine. Yet if I log onto the Mac that has the backup volume mounted as a local volume, these errors never appear for the exact same location. Even weirder, if I do "ls -a", everything appears normal on both systems (no error messages, anyway).
    It appears that the folders with the problem are folders that Time Machine has created as "hard links" to other folders, which is something Time Machine does that is only available as of Mac OS X 10.5 and is the secret behind how it avoids using up disk space for files that are the same across different backups.
    I moved the Time Machine disk to the second Mac (the one that was showing the problems) and mounted the volume locally on it, so that the volume is now local instead of remote-mounted via AFP, and the problem goes away; and if I then remote-mount the Time Machine volume on the first Mac, the problem appears on the first Mac, which had been OK. So by switching the volume, the "no such file or directory" errors switch to the other machine as well.
    Because of the way the problem occurs, i.e. only when the volume is remote-mounted, it would appear to be an issue with AFP-mounted volumes that contain these "hard links" to folders.
    There is a utility program called hfsdebug, written by Amit Singh, the fellow who wrote the "Mac OS X Internals" book (you can get it from his website if you want; just do a web search for "Mac OS X Internals hfsdebug"). If you use it to get a bit more info on what's going on, it shows a lot of detail about one of the problematic folders. Here is what things look like on the first Mac (mac1) with the Time Machine volume locally mounted:
    mac1:xxx me$ pwd
    /Volumes/MyBackups/yyy/xxx
    mac1:xxx me$ ls -a
    . .DS_Store D2
    .. Documents D3
    mac1:xxx me$ ls -lai
    total 48
    280678 drwxr-xr-x 5 me staff 204 Jan 20 01:23 .
    282780 drwxr-xr-x 12 me staff 442 Jan 17 14:03 ..
    286678 -rw-r--r--@ 1 me staff 21508 Jan 19 10:43 .DS_Store
    135 drwxrwxrwx 91 me staff 3944 Jan 7 02:53 Documents
    729750 drwx------ 104 me staff 7378 Jan 15 14:17 D2
    728506 drwx------ 19 me staff 850 Jan 14 09:19 D3
    mac1:xxx me$ hfsdebug Documents/ | head
    <Catalog B-Tree node = 12589 (sector 0x18837)>
    path = MyBackups:/yyy/xxx/Documents
    # Catalog File Record
    type = file (alias, directory hard link)
    indirect folder = MyBackups:/.HFS+ Private Directory Data%000d/dir_135
    file ID = 728505
    flags = 0000000000100010
    . File has a thread record in the catalog.
    . File has hardlink chain.
    reserved1 = 0 (first link ID)
    mac1:xxx me$ cd Documents
    mac1:xxx me$ ls -a | head
    .DS_Store
    .localized
    .parallels-vm-directory
    .promptCache
    ACPI
    ActivityMonitor2010-12-1710p32.txt
    ActivityMonitor2010-12-179pxx.txt
    mac1:Documents me$ ls -lai | head
    total 17720
    135 drwxrwxrwx 91 me staff 3944 Jan 7 02:53 .
    280678 drwxr-xr-x 5 me staff 204 Jan 20 01:23 ..
    144 -rw-------@ 1 me staff 39940 Jan 15 14:27 .DS_Store
    145 -rw-r--r-- 1 me staff 0 Oct 20 2008 .localized
    146 drwxr-xr-x 2 me staff 68 Feb 17 2009 .parallels-vm-directory
    147 -rwxr-xr-x 1 me staff 8 Mar 20 2010 .promptCache
    148 drwxr-xr-x 2 me staff 136 Aug 28 2009 ACPI
    151 -rw-r--r-- 1 me staff 6893 Dec 17 10:36 A.txt
    152 -rw-r--r--@ 1 me staff 7717 Dec 17 10:54 A9.txt
    So from the first few lines of the "ls -a" output you can see some files/folders, but you can't yet tell which is which. The next "ls -lai" output shows which names are files and which are folders: there are some folders (like ACPI) and some files (like A.txt and A9.txt), and all looks normal. The "hfsdebug" output shows some details of what is really happening in the "Documents" folder, but more about that in a bit.
    And here is what "ls -a" and "ls -lai" look like for the same locations on the second Mac (mac2), where the Time Machine volume is remote-mounted:
    mac2:xxx me$ pwd
    /Volumes/MyBackups/yyy/xxx
    mac2:xxx me$ ls -a
    . .DS_Store D2
    .. Documents D3
    mac2:xxx me$ ls -lai
    total 56
    280678 drwxr-xr-x 6 me staff 264 Jan 20 01:23 .
    282780 drwxr-xr-x 13 me staff 398 Jan 17 14:03 ..
    286678 -rw-r--r--@ 1 me staff 21508 Jan 19 10:43 .DS_Store
    728505 drwxrwxrwx 116 me staff 3900 Jan 7 02:53 Documents
    729750 drwx------ 217 me staff 7334 Jan 15 14:17 D2
    728506 drwx------ 25 me staff 806 Jan 14 09:19 D3
    mac2:xxx me$ cd Documents
    mac2:Documents me$ ls -a | head
    .DS_Store
    .localized
    .parallels-vm-directory
    .promptCache
    ACPI
    ActivityMonitor2010-12-1710p32.txt
    ActivityMonitor2010-12-179pxx.txt
    mac2:Documents me$ ls -lai | head
    ls: .parallels-vm-directory: No such file or directory
    ls: ACPI: No such file or directory
    ... many more "ls: ddd: No such file or directory" error messages appear - there is a one-to-one
    correspondence between the "ddd" folders and the "no such file or directory" error messages
    total 17912
    728505 drwxrwxrwx 116 me staff 3900 Jan 7 02:53 .
    280678 drwxr-xr-x 6 me staff 264 Jan 20 01:23 ..
    144 -rw-------@ 1 me staff 39940 Jan 15 14:27 .DS_Store
    145 -rw-r--r-- 1 me staff 0 Oct 20 2008 .localized
    147 -rwxr-xr-x 1 me staff 8 Mar 20 2010 .promptCache
    151 -rw-r--r-- 1 me staff 6893 Dec 17 10:36 A.txt
    152 -rw-r--r--@ 1 me staff 7717 Dec 17 10:54 A9.txt
    If you look very closely, a hint as to what is going on is obvious: the inode for the Documents folder is 135 in the locally mounted case (the first set of output above, for mac1), and it's 728505 in the remote-mounted case for mac2. So it appears that these "hard links" to folders have an extra level of indirection that is hidden from you and that AFP fails to take into account, and that is what hfsdebug shows even better: you can clearly see the REAL location of the Documents folder is in something called "/.HFS+ Private Directory Data%000d/dir_135", which is not even visible to the shell. And if you look closely at the remote mac2 case, when I did the "cd Documents" I didn't go into inode 135 but into inode 728505 (look closely at the "." entry of the "ls -lai" output on both mac1 and mac2), which is the REAL problem; but I have no idea how to get AFP to follow the extra level of indirection.
    Anyone have any ideas how to fix this so that "ls -l" commands don't generate these "no such file or folder" messages?
    I am guessing that the issue is really something to do with AFP (Apple Filing Protocol) mounted remote volumes. The Time Machine example is just something I used so that anyone could verify the problem. The real problem for me has nothing to do with Time Machine; it has to do with some hard links to folders that I created on another file system totally separate from the Time Machine volume. They exhibit the same problem as these Time Machine-created folders, so I'm pretty sure the problem has nothing to do with how I created the hard links to folders, which is not doable normally without writing a super simple little 10-line program using the link() system call; do a "man 2 link" if you are curious how it works.
    I'm well aware of the issues and the conditions under which they can and can't be used, and the potential hazards. I have an issue for which they are the best way to solve a problem. And after the problem was solved is when I noticed this issue, which appears to be a by-product of using them.
    Do not try these hard links to folders on your own without knowing what they're for and how and when (not) to use them. They can cause real problems if not used correctly. So if you decide to try them out and you lose some files or your entire drive, don't say I didn't warn you first.
    Thanks ...
    -Bob

    The problem is Mac to Mac; the volume that I'm having the issue with is not related in any way to Time Machine or to a Time Capsule. The reference to Time Machine is just to illustrate that the problem exists outside of my own personal work with hard links to folders on HFS Extended volumes (case-sensitive in this particular case, in case that matters).
    I'm not too excited about the idea of snooping the AFP protocol to discover anything that might be learned there.
    The most significant clue I've seen so far has to do with the inode numbers for the two folders shown in the Terminal window snippets in the original post. The locally mounted case uses the inode 728505 of the problematic folder, which is in turn linked to the hidden original inode 135 via the super-secret /.HFS+... folder that you can't see unless you use something like the hfsdebug program I mentioned.
    The remote-mounted case uses the inode 728505 but does not make the additional jump to inode 135, which is where the lower-level folders appear to be physically stored.
    Hence the behavior that is seen: the locally mounted case is happy and shows what would be expected, and the remote-mounted case shows only the files contained in the problem folder, but not the lower-level folders or their contents.
    From my limited knowledge of how these inode entries really work, I think they are some sort of linked-list chain of values, so that you have to follow the entire chain to get at what you're looking for. If the chain were broken somewhere along the line, or not followed correctly, things like this could happen. I think this is a case of the chain not being followed correctly, as if it were a broken-chain problem then the locally mounted case would have problems also.
    But the information for this link in the chain is there (from 728505 to the magic 135); for some reason AFP just doesn't make the extra jump.
    Yesterday I heard back from Apple tech support and they have confirmed this problem and say that it is an "implementation limitation" of the AFP client. I think it's a bug, but that will be up to Apple to decide now that it's been reported. I just finished reporting this as a bug via the Apple Bug Reporter website; it's bug ID 8926401 if you want to keep track of it.
    Thanks for the insights...
    -Bob
