Error: sysace_stdio.h: No such file or directory

I created an application "read_from_compact_flash" with the EDK tool. This application contains the following sources:
ReadText.c
StackAndHeap.c
StackAndHeap.h
When I try to build this application, the following errors occur:
cygdrive/e/p08/module2/ReadText.c:1:26: error: sysace_stdio.h: No such file or directory
/cygdrive/e/p08/module2/ReadText.c: In function 'main':
/cygdrive/e/p08/module2/ReadText.c:13: error: 'FILE' undeclared (first use in this function)
/cygdrive/e/p08/module2/ReadText.c:13: error: (Each undeclared identifier is reported only once
/cygdrive/e/p08/module2/ReadText.c:13: error: for each function it appears in.)
/cygdrive/e/p08/module2/ReadText.c:13: error: 'TextFp' undeclared (first use in this function)
/cygdrive/e/p08/module2/ReadText.c:21: warning: incompatible implicit declaration of built-in function 'malloc'
/cygdrive/e/p08/module2/ReadText.c:23: error: 'NULL' undeclared (first use in this function)
make: *** [compact_flash/executable.elf] Error 1
Done!
Just for information: I imported the library "xilfatfs" in the MSS file.
Can anyone help me, please? Thanks.
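
For what it's worth, sysace_stdio.h is provided by the xilfatfs library itself, so a likely cause is that libgen was not re-run after adding xilfatfs to the .mss (so the processor's include directory never received the header), or that the application's compiler settings don't add that include directory with -I. The 'FILE' and 'NULL' errors also suggest ReadText.c needs <stdio.h>/<stdlib.h>, or should use the library's own file type rather than FILE. Below is a minimal sketch of what ReadText.c might look like, assuming the usual xilfatfs names (SYSACE_FILE, sysace_fopen, sysace_fread, sysace_fclose) - check the sysace_stdio.h shipped with your BSP for the exact prototypes and path syntax:

    #include <stdio.h>          /* printf, NULL */
    #include <stdlib.h>         /* malloc, free */
    #include <sysace_stdio.h>   /* SYSACE_FILE and sysace_* calls from xilfatfs */

    int main(void)
    {
        /* Path on the CompactFlash card; adjust the name/format to your setup */
        SYSACE_FILE *fp = sysace_fopen("a:\\readme.txt", "r");
        if (fp == NULL) {
            printf("could not open file\r\n");
            return 1;
        }

        char *buf = malloc(512);
        if (buf != NULL) {
            int n = sysace_fread(buf, 1, 512, fp);   /* read up to 512 bytes */
            printf("read %d bytes\r\n", n);
            free(buf);
        }

        sysace_fclose(fp);
        return 0;
    }

If the header is still not found after regenerating the libraries, it is usually the application's include path (-I) that needs to point at the BSP's include directory.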

If you're not sure if you have the SDKs, can you confirm you have the files under /Developer? Check for /Developer/SDKs/MacOSX10.4u.sdk/usr/X11R6/include/X11
I don't remember having to do anything special to get gcc to find these files.
From what I've noted, the usual trigger seems to be that the X11SDK.pkg package wasn't installed.
Typical PATH export is:
export PATH="/bin:/sbin:/usr/bin:/usr/sbin"
If you're working with Fink, the following needs to be sourced:
. /sw/bin/init.sh
See http://finkproject.org/doc/users-guide/install.php
But I'd first make sure the X11 SDK was installed. I'd guess that is missing.

Similar Messages

  • Find: stat() error /proc/26354: No such file or directory

    Dear everyone,
    I am using Solaris 10u8 on two Sun Fire X4240 servers. I also installed Sun Cluster 3.2u3 and Oracle RAC 10gR2 on these two servers.
    When I execute the find command, I get the result I need. However, at the end of the output, I see additional lines:
    # find / -name *log -print | grep samba
    /var/svc/log/network-samba:default.log
    /var/samba/log
    /etc/svc/volatile/network-samba:default.log
    find: stat() error /proc/26354: No such file or directory
    I also tried with less complex syntax but got the same result, for example:
    # find / -name samba
    /var/spool/samba
    /var/samba
    /usr/sfw/lib/webmin/caldera/samba
    /usr/sfw/lib/webmin/mscstyle3/samba
    /usr/sfw/lib/webmin/samba
    /etc/init.d/samba
    /etc/webmin/samba
    /opt/SUNWscsmb/samba
    find: stat() error /proc/24252: No such file or directory
    find: stat() error /proc/24345: No such file or directory
    I don't know why these abnormal outputs appeared. Does anyone know what the problem is here?
    Any advice is appreciated. Thanks for your help.
    Thanks.
    HuyNQ.

    /proc is a virtual filesystem representing running processes.
    By doing a find from /, you're allowing find to pass through /proc.
    Those errors indicate processes that exited while the find was running. So the files suddenly disappeared.
    The errors are harmless. But if possible, try not to allow find to scan /proc.
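
    To illustrate the race described above (a hypothetical sketch, not the actual find(1) source): a scanner that lists /proc and then stat()s each entry can find that the entry has vanished in between, and robust code simply ignores the resulting ENOENT:

        #include <stdio.h>
        #include <errno.h>
        #include <dirent.h>
        #include <sys/stat.h>

        int main(void)
        {
            DIR *d = opendir("/proc");
            if (d == NULL)
                return 1;

            struct dirent *e;
            char path[512];
            struct stat sb;
            while ((e = readdir(d)) != NULL) {
                snprintf(path, sizeof(path), "/proc/%s", e->d_name);
                if (stat(path, &sb) == -1) {
                    /* The process may have exited after readdir() listed it. */
                    if (errno == ENOENT)
                        continue;          /* harmless, same as find's warning */
                    perror(path);
                }
            }
            closedir(d);
            return 0;
        }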

  • Hard Drive Error after Erasing - "No Such File or Directory"

    I have a Western Digital My Passport 400GB external HD (bought in April 09). It's been working great but I had it on FAT formatting and let's just say transfers are slow. I borrowed another external drive to back everything up so I could reformat this drive to Mac OS.
    Problem came after I backed up everything...I ran Disk Utility to erase the HD and format it in Mac OS Extended (journaled). It successfully erased the Hard Drive on All Zeros, taking 6 days, 5 hours, 50 minutes. Then when it tried to partition, I got the error No Such File or Directory. Here's the log:
    "Preparing to zero disk : “Untitled”
    Secure Erase completed successfully in 6 days, 5 hours, 50 minutes.
    Preparing to erase : “Untitled”
    Creating Partition Map
    Disk Erase failed with the error:
    No such file or directory
    Erase complete."
    So now the hard drive icon doesn't show up on my desktop, but is recognized in DU (without a volume however). Therefore there is no Mount Point and the drive remains unformatted. Any attempt in DU to partition or erase yields the same error message. Searched these discussions for days now without any luck. Please help!

    Welcome to the Apple discussions.
    I'm not following your problem description. You said that you successfully erased the hard drive. The next step would be to partition it by giving it a name (you can name it under either the Partition or the Erase function) rather than leaving it as 'Untitled', choosing Mac OS Extended (Journaled), setting a partition size, and making sure that under Options it says Apple Partition Map. There are no erasures involved when you partition the drive, which is where I'm confused by your problem description.

  • Pacman issue - error: .. : No such file or directory.

    It seems I have my first issue, and it's with pacman. I get this error every time I sync the database or install a package, even though everything seems to go fine:
    error: /var/lib/pacman/extra/flite-1.3-2/desc: No such file or directory
    I don't know what flite is, but pacman won't install it, as it says it doesn't exist, even though there is a directory called "flite-1.3-2" and it's empty. I've tried removing the directory, but syncing the database doesn't put it back. I've also used 'pacman -Scc' to clear the cache, but still no joy.

    archuser wrote:
    It seems I have my first issue, and it's with pacman. I get this error every time I sync the database or install a package, even though everything seems to go fine:
    error: /var/lib/pacman/extra/flite-1.3-2/desc: No such file or directory
    I don't know what flite is, but pacman won't install it, as it says it doesn't exist, even though there is a directory called "flite-1.3-2" and it's empty. I've tried removing the directory, but syncing the database doesn't put it back. I've also used 'pacman -Scc' to clear the cache, but still no joy.
    Delete the directory /var/lib/pacman/extra and run pacman -Sy to restore it. I believe if you delete just the flite directory, it won't be restored with -Sy.

  • Error: jndi.properties (No such file or directory)

    Hi,
    I created my EJB and a desktop application in NetBeans.
    I deployed my EJB on GlassFish.
    When I run my appClient from NetBeans, it works nicely.
    But when I try to run the appClient by just clicking on its icon in its directory, it doesn't work.
    The message is "jndi.properties (No such file or directory)", but the file is there.
    My piece of code:
    Properties props = new Properties();
    try {
        props.load(new FileInputStream("jndi.properties"));
        TesteEJBRemote t = (TesteEJBRemote) new InitialContext(props).lookup("Teste");
        jTextField2.setText(t.getMessage() + t.toString());
    } catch (IOException ex) {
        //atxErros.setText(ex.getMessage());
        Logger.getLogger(frmFrame.class.getName()).log(Level.SEVERE, null, ex);
        jTextField2.setText(ex.getMessage());
    } catch (NamingException ex) {
        //atxErros.setText(this.atxErros + " " + ex.getMessage());
        Logger.getLogger(frmFrame.class.getName()).log(Level.SEVERE, null, ex);
    }
    What should I do?


  • Error: bad interpreter: No such file or directory

    Solved. If you're ever executing a file from the $iAS/bin directory and get the "bad interpreter: No such file or directory" error, all you do is edit that file in vi or whatever, look at the first line where it says #!/bin/ksh (in my case it uses the Korn shell to interpret the script, and in fact I don't have ksh installed), and check whether that particular file exists in the /bin directory. If not, just make a symlink to /bin/sh under the name /bin/ksh (assuming you have a Bourne-compatible shell installed). The problem will be solved then.
    Hope this helps some of you out as it did me, as a newbie with Oracle products. :)

    Hi,
    Rather, you can refer to the note below and use the attached SQL script to create the OPS$ user. It is for Windows but works for Unix as well.
    50088 - Creating OPS$ users on Windows NT-Oracle
    Use the attachment ORADBUSER.SQL.
    Or refer to the post below; I have replied to a similar query (Oracle Procedure using SQLPLUS).
    [Re: Backup recovery test|Re: Backup recovery test]
    Regards,
    Rajesh Narrkhede

  • Error: occi.h: No such file or directory

    I'm working on a university project where Oracle is imposed, so I need to configure it on my machine.
    I downloaded the Instant Client for AMD64 (64-bit version) here, since it appears that's the place where occi.h would be. I also downloaded OCCI and tried using the occiCommon.h it contained, but it too refers to occi.h, so I'm stranded. Plus, that .tar.gz is for RedHat and for an old version of g++. I'm using Debian (kernel 2.6.26-1-amd64) and g++ 4.3.2 - I assume the distro is irrelevant, but the g++ version might not be.
    So far the best post I've found was this, but it relies on rpm packages which are not available, only zips. Later on it provides instructions on where to unzip them to, but the files it refers to seem different from those in the actual zip, and I have no oracle/ in my /usr/lib.
    Where can I get the required files to connect to an Oracle 10g database using C++? Are there any alternatives to OCCI? Would NetBeans be easier to use? Where do I put the files? What would the g++ command be? I'm compiling with -l and -L pointing to the same directory, where I've unzipped everything, but that doesn't seem to do it. Also, the files are mostly .o and .jar, not .h.
    Thanks.

    I've looked into Oracle Database 10g Express Edition (Universal), a .deb file; I found no occi.h but did find lots of .so files (Linux shared libraries). I'll try to adapt my code and NetBeans/g++ to that and see if it works...
    I can't really install it, since my 4GB computer is x64 and my i386 machine "only" has 186 RAM... Enterprise Linux Release 5 Media Pack for x86_64 (64-bit) is 5.1GB plus updates (~5GB each)... Not going to waste my time installing that just for a project.
    I sense I'm missing something obvious... although there's nothing obvious about *.oracle.com... information company, they say? ;)

  • [SOLVED] Error : sys\stat.h: No such file or directory

    I get a "...sys\stat.h: No such file or directory" error when compiling a simple C program. I have been searching the web for a while and see people writing "# pacman -S base-devel"; I have that already, plus some more packages like glib...
    These are the includes I am using:
    #include <stdio.h>
    #include <stdlib.h>
    #include <io.h>
    #include <fcntl.h>
    #include <sys\stat.h>
    // here is when i use the libraries sys/stats.h
    struct stat stbuf;
    stat(argv[2], &stbuf);
    int infSize = stbuf.size;
    This is the output of the command "gcc -o copy copy.c":
    copy.c:3:16: error: io.h: No such file or directory
    copy.c:5:22: error: sys\stat.h: No such file or directory
    copy.c: In function 'main':
    copy.c:35: error: storage size of 'stbuf' isn't known
    Either I am missing a needed package or the gcc compiler is missing a lib directory, but I don't know which one it is.
    Any ideas are appreciated. Thank you.
    EDIT: The error was actually in #include <io.h>, but I don't know why it conflicted with #include <sys\stat.h>.
    Anyway Problem Solved.
    Last edited by puzzled (2008-10-05 17:07:03)
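
    For reference, on Linux the portable spelling drops <io.h> (a DOS/Windows header that glibc does not provide), uses a forward slash in <sys/stat.h>, and reads the size from the st_size member (the stbuf.size in the post would not compile either). A hedged sketch follows - the rest of copy.c is not shown in the post, so the surrounding program here is only illustrative:

        #include <stdio.h>
        #include <stdlib.h>
        #include <fcntl.h>      /* kept from the original includes; not used below */
        #include <sys/types.h>
        #include <sys/stat.h>   /* forward slash, not backslash */

        int main(int argc, char *argv[])
        {
            if (argc < 3) {
                fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
                return EXIT_FAILURE;
            }

            struct stat stbuf;
            if (stat(argv[2], &stbuf) == -1) {   /* stat() fills stbuf on success */
                perror(argv[2]);
                return EXIT_FAILURE;
            }

            /* The size member is st_size, not size */
            printf("%s is %lld bytes\n", argv[2], (long long)stbuf.st_size);
            return EXIT_SUCCESS;
        }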


  • Compile error during make: ansidecl.h: No such file or directory

    Hi - I was trying to make a Ruby app and I ran into the following error, which seems to have originated in gcc. I can't seem to find much about it on Google, and would very much appreciate some help with how to get to the bottom of this. Thanks!
    checking for main() in -liberty... no
    creating Makefile
    In file included from dirfibheap.h:5,
    from dirfibheap.c:2:
    fibheap.h:44:22: error: ansidecl.h: No such file or directory
    make: *** [dirfibheap.o] Error 1
    gcc -fno-common -g -Os -pipe -fno-common -pipe -fno-common -pipe -fno-common -I. -I/usr/lib/ruby/1.8/universal-darwin8.0 -I/usr/lib/ruby/1.8/universal-darwin8.0 -I. -c dirfibheap.c

    Apple's gcc doesn't come with this file. It is unlikely that simply getting a version of the file will work. My guess is that you would have to either install an alternative version of gcc which includes it or edit the source code of the app you're compiling to avoid it. Whether either of these would work, I can't say. Obviously this application is not ported to or supported on Mac OS X (if it is meant to be, report the bug). That means there are likely other issues which will crop up if you sort this one. I don't say this to put you off but only as a warning. (This may be superfluous and you may know all this very well. If so, my apologies.)
    What are you trying to compile? Have you tried the forums for the application? Has anybody else compiled it for OS X? Have you seen if Fink or Mac Ports have it?
    - cfr

  • No such file or directory, iCloud GarageBand

    Hello, I just ran into a very disappointing problem using GarageBand. I uploaded songs to iCloud, then downloaded them again and edited them, and everything was OK, till one day I wanted to download one of my songs from iCloud, but when I tapped on the green arrow for downloading, it told me:
    "an unexpected error has occurred, no such file or directory"
    But the preview image of my song looks just like the last time I used it, and all my other songs downloaded OK; this one just can't be downloaded.
    Any solution?
    I am even willing to give away a $0.99 app to whoever can get me out of this trouble.

    Hi--
    mahongue wrote:
    But what is strange is that even the commands stored in /usr/sbin were not recognized and this directory needed to be added as well. Strange.
    One thing you have to keep in mind is that scripts like this aren't necessarily run by the same user as you, so they get a different environment. While I'm not familiar with Sleepwatcher and how it runs, I can give you a different example of where this tripped me up.
    I was setting up a backup script to run once a week as part of the periodic tasks that run overnight. When I tested it from the Terminal app, it worked fine. It even worked when I put the script into the /etc/periodic/weekly folder and ran the weekly jobs like so:
    sudo periodic weekly
    But it failed when run by the system at the appointed time. This was very frustrating until someone here pointed out that it was being run as root at the appointed time, so I needed to take the root environment into account. Sure enough, if I made myself the root user like so ("-" means to change to the new user's environment):
    sudo su -
    I would get the same environment as the weekly periodic tasks were getting, so I could debug the script using the same environment where it was running.
    I'm not sure why /usr/sbin wasn't in your $PATH, but don't forget that you can always use echo in your scripts to see the value of environment variables (if your script's output gets logged somewhere). I use that constantly as I'm working on scripts.
    charlie

  • Getting `No such file or directory` error while trying to open bdb database

    I have four multi-threaded processes (2 writer and 2 reader processes) which make use of the Berkeley DB transactional data store. I have multiple environments, and the associated database files and log files are located in separate directories (please refer to the DB_CONFIG below). When all these four processes start to open and close databases in the environments very quickly, one of the reader processes throws a "No such file or directory" error even though the file actually exists.
    I am making use of Berkeley DB 4.7.25 for testing out these applications.
    The four application names are as follows:
    Writer 1
    Writer 2
    Reader 1
    Reader 2
    The application description is as follows:
    'Writer 1' owns 8 environments, each environment having 123 Berkeley databases created using the HASH access method. At any point in time, 'Writer 1' will be acting on 24 database files across the 8 environments (3 database files per environment) for carrying out write operations, whereas the reader processes will be accessing all 123 database files per environment (total = 123 * 8 environments = 984 database files) for read activities. Similar configuration for Writer 2 as well - 8 separate environments and so on.
    Writer 1, Reader 1 and Reader 2 processes share the environments created by Writer 1
    Writer 2 and Reader 2 processes share the environments created by Writer 2
    My DB_CONFIG file is configured as follows
    set_cachesize 0 104857600 1   # 100 MB
    set_lg_bsize 2097152                # 2 MB
    set_data_dir ../../vol1/data/
    set_lg_dir ../../vol31/logs/SUBID/
    set_lk_max_locks 1500
    set_lk_max_lockers 1500
    set_lk_max_objects 1500
    set_flags db_auto_commit
    set_tx_max 200
    mutex_set_increment 7500
    Has anyone come across this problem before or is it something to do with the configuration?

    Hi Michael,
    I should have mentioned how we are making use of the DB_TRUNCATE flag in my previous reply itself. Sorry for that.
    From the writers, DB truncation happens periodically. During truncation (the DB handle is not associated with any environment handle, i.e. a stand-alone DB), the following parameters are passed to the DB->open function call:
    DB->open(DB *db,
             DB_TXN *txnid,          => NULL
             const char *file,       => file name (absolute DB file path)
             const char *database,   => NULL
             DBTYPE type,            => DB_HASH
             u_int32_t flags,        => DB_READ_UNCOMMITTED | DB_TRUNCATE | DB_CREATE | DB_THREAD
             int mode);              => 0
    Also, the DB_DUP flag is set.
    As you have rightly pointed out, the `No such file or directory` error is occurring during truncation.
    While a database is being truncated, it will not be found by others trying to open it. We simulated this by stopping the writer process (responsible for truncation) and by opening and closing the databases repeatedly via the readers. The reader process did not crash. When readers and writers were run simultaneously, we got the `No such file or directory` error. Is there any solution to tackle this problem (because in our case writers and readers run independently, so readers won't know about truncation by the writers)?
    Also, we are facing one more issue related to DB_TRUNCATE. Consider the following scenario:
    - Reader process holds a reference to the database (X) handle in environment 'Y' at time t1
    - Writer process opens database (X) in DB_TRUNCATE mode at time t2 (where t2 > t1)
    - After truncation, the writer process closes the database and joins environment 'Y'
    - After this, any writes to database X are not visible to the reader process
    - Once the reader process closes database X and re-joins the environment, all the records inserted by the writer process are visible
    Is it the right behavior? If yes, how to make any writes visible to a reader process without closing and re-opening the database in the above-mentioned scenario?
    Also, when db_set_errfile (http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/db_set_errfile.html) was set, we did not get any additional information in the error file.
    Thanks,
    Magesh
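
    One possible workaround for the truncation window described above (not something from this thread - the helper name, retry count, and back-off below are purely illustrative) is for readers to treat ENOENT from DB->open as transient and retry briefly:

        #include <errno.h>
        #include <unistd.h>
        #include <db.h>

        /* Hypothetical helper: open a database read-only, retrying while a
         * writer temporarily has it removed/truncated. */
        static int open_with_retry(DB_ENV *env, const char *file, DB **out)
        {
            int ret = ENOENT;
            int attempt;
            for (attempt = 0; attempt < 10; attempt++) {
                DB *dbp;
                ret = db_create(&dbp, env, 0);
                if (ret != 0)
                    return ret;
                ret = dbp->open(dbp, NULL, file, NULL, DB_HASH,
                                DB_RDONLY | DB_THREAD, 0);
                if (ret == 0) {
                    *out = dbp;
                    return 0;
                }
                dbp->close(dbp, 0);        /* handle must be closed even on failure */
                if (ret != ENOENT)         /* only retry the "file missing" window */
                    return ret;
                usleep(100 * 1000);        /* back off while the writer truncates */
            }
            return ret;
        }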

  • Getting Error in CMC - source file error. [No such file or directory]: [CrystalEnterprise.Smtp]

    Dear All,
    I am able to schedule the Crystal report successfully, meaning mail gets auto-triggered as we set it in the SCHEDULE option. But when we tried to use the notification option, we got the status FAILED, and in the details we get "source file error. [No such file or directory]: [CrystalEnterprise.Smtp]".
    A mail still gets triggered and each and every person gets it, but the issue is that notification does not work properly and there is no notification mail. And the status is displayed as FAILED with the message "source file error. [No such file or directory]: [CrystalEnterprise.Smtp]".
    Regards,
    Rohan Ghadi

    Do you have multiple Job Servers in your environment? Are SMTP settings configured on all the Job Servers?

  • "No such file or directory" errors on Time Machine backup volume

    I remotely mounted the Time Machine backup volume onto another Mac and was looking around it in a Terminal window and discovered what appeared to be a funny problem. If I "cd" into some folders (but not all) and do a "ls -la" command, I get a lot of "No such file or directory" errors for all the subfolders, but all the files look fine. Yet if I go log onto the Mac that has the backup volume mounted as a local volume, these errors never appear for the exact same location. Even more weird is that if I do "ls -a" everything appears normal on both systems (no error messages anyway).
    It appears that the folders that have the problem are folders Time Machine has created as "hard links" to other folders, something that Time Machine does and that is only available as of Mac OS X 10.5; it is the secret behind how it avoids using up disk space for files that are the same in the different backups.
    I moved the Time Machine disk to the second Mac (the one that was showing the problems) and mounted the volume locally on it, so that it is now a local volume instead of a volume remote-mounted via AFP, and the problem goes away. If I then remote-mount the Time Machine volume on the first Mac, the problem appears on the first Mac, which was previously OK - so basically, by switching the volume, the problem also switches as to which machine shows the "no such file or directory" errors.
    Because of the way the problem occurs, i.e. only when the volume is remote-mounted, it would appear to be an issue with AFP-mounted volumes that contain these "hard links" to folders.
    There is a utility program written by Amit Singh, the fellow who wrote the "Mac OS X Internals" book, called hfsdebug (you can get it from his website if you want - just do a web search for "Mac OS X Internals hfsdebug" to find it). If you use it to get a little bit more info on what's going on, it shows a lot of details about one of the problematic folders. Here is what things look like on the first Mac (mac1) with the Time Machine volume locally mounted:
    mac1:xxx me$ pwd
    /Volumes/MyBackups/yyy/xxx
    mac1:xxx me$ ls -a
    . .DS_Store D2
    .. Documents D3
    mac1:xxx me$ ls -lai
    total 48
    280678 drwxr-xr-x 5 me staff 204 Jan 20 01:23 .
    282780 drwxr-xr-x 12 me staff 442 Jan 17 14:03 ..
    286678 -rw-r--r--@ 1 me staff 21508 Jan 19 10:43 .DS_Store
    135 drwxrwxrwx 91 me staff 3944 Jan 7 02:53 Documents
    729750 drwx------ 104 me staff 7378 Jan 15 14:17 D2
    728506 drwx------ 19 me staff 850 Jan 14 09:19 D3
    mac1:xxx me$ hfsdebug Documents/ | head
    <Catalog B-Tree node = 12589 (sector 0x18837)>
    path = MyBackups:/yyy/xxx/Documents
    # Catalog File Record
    type = file (alias, directory hard link)
    indirect folder = MyBackups:/.HFS+ Private Directory Data%000d/dir_135
    file ID = 728505
    flags = 0000000000100010
    . File has a thread record in the catalog.
    . File has hardlink chain.
    reserved1 = 0 (first link ID)
    mac1:xxx me$ cd Documents
    mac1:xxx me$ ls -a | head
    .DS_Store
    .localized
    .parallels-vm-directory
    .promptCache
    ACPI
    ActivityMonitor2010-12-1710p32.txt
    ActivityMonitor2010-12-179pxx.txt
    mac1:Documents me$ ls -lai | head
    total 17720
    135 drwxrwxrwx 91 me staff 3944 Jan 7 02:53 .
    280678 drwxr-xr-x 5 me staff 204 Jan 20 01:23 ..
    144 -rw-------@ 1 me staff 39940 Jan 15 14:27 .DS_Store
    145 -rw-r--r-- 1 me staff 0 Oct 20 2008 .localized
    146 drwxr-xr-x 2 me staff 68 Feb 17 2009 .parallels-vm-directory
    147 -rwxr-xr-x 1 me staff 8 Mar 20 2010 .promptCache
    148 drwxr-xr-x 2 me staff 136 Aug 28 2009 ACPI
    151 -rw-r--r-- 1 me staff 6893 Dec 17 10:36 A.txt
    152 -rw-r--r--@ 1 me staff 7717 Dec 17 10:54 A9.txt
    So you can see from the first few lines of the "ls -a" command, it shows some file/folders but you can't tell which yet. The next "ls -la" command shows which names are files and folders - that there are some folders (like ACPI) and some files (like A.txt and A9.txt) and all looks normal. And the "hfsdebug" info shows some details of what is really happening in the "Documents" folder, but more about that in a bit.
    And here are what a "ls -a" and "ls -al" look like for the same locations on the second Mac (mac2) where the Time Machine volume is remote mounted:
    mac2:xxx me$ pwd
    /Volumes/MyBackups/yyy/xxx
    mac2:xxx me$ ls -a
    . .DS_Store D2
    .. Documents D3
    mac2:xxx me$ ls -lai
    total 56
    280678 drwxr-xr-x 6 me staff 264 Jan 20 01:23 .
    282780 drwxr-xr-x 13 me staff 398 Jan 17 14:03 ..
    286678 -rw-r--r--@ 1 me staff 21508 Jan 19 10:43 .DS_Store
    728505 drwxrwxrwx 116 me staff 3900 Jan 7 02:53 Documents
    729750 drwx------ 217 me staff 7334 Jan 15 14:17 D2
    728506 drwx------ 25 me staff 806 Jan 14 09:19 D3
    mac2:xxx me$ cd Documents
    mac2:Documents me$ ls -a | head
    .DS_Store
    .localized
    .parallels-vm-directory
    .promptCache
    ACPI
    ActivityMonitor2010-12-1710p32.txt
    ActivityMonitor2010-12-179pxx.txt
    mac2:Documents me$ ls -lai | head
    ls: .parallels-vm-directory: No such file or directory
    ls: ACPI: No such file or directory
    ... many more "ls: ddd: No such file or directory" error messages appear - there is a one-to-one
    correspondence between the "ddd" folders and the "no such file or directory" error messages
    total 17912
    728505 drwxrwxrwx 116 me staff 3900 Jan 7 02:53 .
    280678 drwxr-xr-x 6 me staff 264 Jan 20 01:23 ..
    144 -rw-------@ 1 me staff 39940 Jan 15 14:27 .DS_Store
    145 -rw-r--r-- 1 me staff 0 Oct 20 2008 .localized
    147 -rwxr-xr-x 1 me staff 8 Mar 20 2010 .promptCache
    151 -rw-r--r-- 1 me staff 6893 Dec 17 10:36 A.txt
    152 -rw-r--r--@ 1 me staff 7717 Dec 17 10:54 A9.txt
    If you look very closely, a hint as to what is going on is obvious - the inode for the Documents folder is 135 in the locally mounted case (the first set of output above for mac1), and it's 728505 in the remote-mounted case for mac2. So it appears that these "hard links" to folders have an extra level of folder that is hidden from you and that AFP fails to take into account, and that is what hfsdebug shows even better, as you can clearly see the REAL location of the Documents folder is in something called "/.HFS+ Private Directory Data%000d/dir_135" that is not even visible to the shell. And if you look closely in the remote mac2 case, when I did the "cd Documents" I don't go into inode 135, but into inode 728505 (look closely at the "." entry for the "ls -lai" commands on both mac1 and mac2), which is the REAL problem, but I have no idea how to get AFP to follow the extra level of indirection.
    Anyone have any ideas how to fix this so that "ls -l" commands don't generate these "no such file or folder" messages?
    I am guessing that the issue is really something to do with AFP (Apple File Protocol) mounted remote volumes. The TimeMachine example is something that I used as an example that anyone could verify the problem. The real problem for me has nothing to do with Time Machine, but has to do with some hard links to folders that I created on another file system totally separate from the Time Machine volume. They exhibit the same problem as these Time Machine created folders, so am pretty sure the problem has nothing to do with how I created hard links to folders which is not doable normally without writing a super simple little 10 line program using the link() system call - do a "man 2 link" if you are curious how it works.
    I'm well aware of the issues and the conditions when they can and can't be used and the potential hazards. I have an issue in which they are the best way to solve a problem. And after the problem was solved, is when I noticed this issue that appears to be a by-product of using them.
    Do not try these hard links to folders on your own without knowing what they're for and how to use them and not use them. They can cause real problems if not used correctly. So if you decide to try them out and you lose some files or your entire drive, don't say I didn't warn you first.
    Thanks ...
    -Bob
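
    For what it's worth, the "super simple little 10 line program using the link() system call" mentioned above is roughly the following (a hedged sketch; most filesystems refuse link() on directories, and on HFS+ it only works at all because of the directory hard-link support added in 10.5):

        #include <stdio.h>
        #include <unistd.h>

        int main(int argc, char *argv[])
        {
            if (argc != 3) {
                fprintf(stderr, "usage: %s existing_dir new_link\n", argv[0]);
                return 1;
            }
            /* link() creates a second directory entry for an existing path;
               on most filesystems this fails for directories (typically EPERM). */
            if (link(argv[1], argv[2]) == -1) {
                perror("link");
                return 1;
            }
            return 0;
        }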

    The problem is Mac to Mac - the volume that I'm having the issue with is not related in any way to Time Machine or to TimeCapsule. The reference to TIme Machine is just to illustrate the problem exists outside of my own personal work with hard links to folders on HFS Extended volumes (case-sensitive in this particular case in case that matters).
    I'm not too excited about the idea of snooping AFP protocol to discover anything that might be learned there.
    The most significant clue that I've seen so far has to do with the inode numbers for the two folders shown in the Terminal window snippets in the original post. The local mounted case uses the inode=728505 of the problematic folder which is in turn linked to the hidden original inode of 135 via the super-secret /.HFS+... folder that you can't see unless using something like the "hfsdebug" program I mentioned.
    The remote mounted case uses the inode=728505 but does not make the additional jump to the inode=135 which is where lower level folders appear to be physically stored.
    Hence the behavior that is seen - the local mounted case is happy and shows what would be expected and the remote mounted case shows only files contained in the problem folder but not lower-level folders or their contents.
    From my little knowledge of how these inode entries really work, I think that they are some sort of linked list chain of values, so that you have to follow the entire chain to get at what you're looking for. If the chain is broken somewhere along the line or not followed correctly, things like this can happen. I think this is a case of things not being followed correctly, as if it were a broken chain problem then the local mounted case would have problems also.
    But the information for this link in the chain is there (from 728505 to the magic-135) but for some reason AFP doesn't make this extra jump.
    Yesterday I heard back from Apple tech support and they have confirmed this problem and say that it is a "implementation limitation" with the AFP client. I think it's a bug, but that will have to be up to Apple to decide now that it's been reported. I just finished reporting this as a bug via the Apple Bug Reporter web site -- it's bug id 8926401 if you want to keep track it.
    Thanks for the insights...
    -Bob

  • Error 0(Native: listNetInterfaces:[3]) and error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

    Hi Gurus,
    I'm trying to upgrade my test 9.2.0.8 RAC to 10.1. I cannot upgrade to 10.2 because of RAM limitations on my test RAC. The 10.1 Clusterware software was successfully installed and the daemons are up, with the OCR and voting disk created. Then, during the installation of the RAC software, root.sh needs to be run at the end. When I ran root.sh, it gave the error: while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory. I have libpthread.so.0 in /lib. I looked it up on Metalink and found Doc ID 414163.1. I unset LD_ASSUME_KERNEL in vipca (unsetting LD_ASSUME_KERNEL was not required in srvctl because there was no LD_ASSUME_KERNEL in srvctl). Then I tried to run vipca manually and received the following error: Error 0(Native: listNetInterfaces:[3]). I'm able to see xclock and xeyes, so it's not a problem with X.
    OS: OEL5 32 bit
    oifcfg iflist
    eth0 192.168.2.0
    eth1 10.0.0.0
    oifcfg getif
    eth1 10.0.0.0 global cluster_interconnect
    eth1 10.1.1.0 global cluster_interconnect
    eth0 192.168.2.0 global public
    cat /etc/hosts
    192.168.2.3 sunny1pub.ezhome.com sunny1pub
    192.168.2.4 sunny2pub.ezhome.com sunny2pub
    192.168.2.33 sunny1vip.ezhome.com sunny1vip
    192.168.2.44 sunny2vip.ezhome.com sunny2vip
    10.1.1.1 sunny1prv.ezhome.com sunny1prv
    10.1.1.2 sunny2prv.ezhome.com sunny2prv
    My questions are:
    Should ping to sunny1vip and sunny2vip already be working? As of now they don't work.
    If you look at oifcfg getif, I initially had "eth1 10.0.0.0 global cluster_interconnect, eth0 192.168.2.0 global public"; then I created "eth1 10.1.1.0 global cluster_interconnect" with setif. Should it be 10.1.1.0 or 10.0.0.0? I looked at a subnet calculator and it says that for 10.1.1.1, the subnet is 10.0.0.0. In Metalink they had used 10.10.10.0, and hence I used 10.1.1.0.
    Any ideas on resolving this issue would be very much appreciated. I have been searching the Oracle forums, Google, and Metalink, but all of them refer to Doc ID 414163.1, which doesn't seem to work. Please help. Thanks in advance.
    Edited by: ayyappa on Aug 20, 2009 10:13 AM
    Edited by: ayyappa on Aug 20, 2009 10:14 AM
    Edited by: ayyappa on Aug 20, 2009 10:15 AM

    A step forward towards resolution, but I need some help from the gurus.
    root# cat /etc/hosts
    127.0.0.1 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6
    192.168.2.3 sunny1pub.ezhome.com sunny1pub
    192.168.2.4 sunny2pub.ezhome.com sunny2pub
    10.1.1.1 sunny1prv.ezhome.com sunny1prv
    10.1.1.2 sunny2prv.ezhome.com sunny2prv
    192.168.2.33 sunny1vip.ezhome.com sunny1vip
    192.168.2.44 sunny2vip.ezhome.com sunny2vip
    root# /u01/app/oracle/product/crs/bin/oifcfg iflist
    eth1 10.0.0.0
    eth0 192.168.2.0
    root# /u01/app/oracle/product/crs/bin/oifcfg getif
    eth1 10.0.0.0 global cluster_interconnect
    eth0 191.168.2.0 global public
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny1pub -a
    ****ORACLE_HOME environment variable not set!
    ORACLE_HOME should be set to the main directory that contain oracle products. set and export ORACLE_HOME, then re-run.
    root# export ORACLE_BASE=/u01/app/oracle
    root# export ORACLE_HOME=/u01/app/oracle/product/10.1.0/Db_1
    root# export ORA_CRS_HOME=/u01/app/oracle/product/crs
    root# export PATH=$PATH:$ORACLE_HOME/bin
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny1pub -a
    VIP does not exist.
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl add nodeapps -n sunny1pub -o $ORACLE_HOME -A 192.168.2.33/255.255.255.0
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl add nodeapps -n sunny2pub -o $ORACLE_HOME -A 192.168.2.44/255.255.255.0
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny1pub -a
    VIP exists.: sunny1vip.ezhome.com/192.168.2.33/255.255.255.0
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny2pub -a
    VIP exists.: sunny2vip.ezhome.com/192.168.2.44/255.255.255.0
    Once I executed the add nodeapps command as root on node 1, I was able to get "VIP exists" from config nodeapps on node 2. The above two statements gave me the same values on both nodes. After this I executed root.sh on both nodes and did not receive any errors; it said the CRS resources are already configured.
    My questions to the gurus are as follows:
    Should ping to the VIPs work? It does not work now.
    srvctl status nodeapps -n sunny1pub(same result for sunny2pub)
    VIP is not running on node: sunny1pub
    GSD is not running on node: sunny1pub
    PRKO-2016 : Error in checking condition of listener on node: sunny1pub
    ONS daemon is not running on node: sunny1pub
    [root@sunny1pub ~]# /u01/app/oracle/product/crs/bin/crs_stat -t
    Name Type Target State Host
    ora....pub.gsd application OFFLINE OFFLINE
    ora....pub.ons application OFFLINE OFFLINE
    ora....pub.vip application OFFLINE OFFLINE
    ora....pub.gsd application OFFLINE OFFLINE
    ora....pub.ons application OFFLINE OFFLINE
    ora....pub.vip application OFFLINE OFFLINE
    Will crs_stat and srvctl status nodeapps -n sunny1pub work after I upgrade my database, or should they be working already now? I just chose to install the 10.1.0.3 software, and after running root.sh on both nodes, I clicked OK and the End of Installation screen appeared. Under installed products, I see the 9i home, 10g home, and CRS home. Under the 10g home and CRS home, I see the cluster nodes (sunny1pub and sunny2pub), so it looks like the 10g software is installed.

  • File won't save - error message is: "...could not be saved. No such file or directory." (?)

    The file will not save or 'Save As' - the error message is: "...could not be saved. No such file or directory." The file has been saved previously. Is this a preferences issue or perhaps a permissions issue?

    I repeat that everything dedicated to the save process is different.
    The app contains two sets of code, one used under 10.6.8, the other used under Lion.
    Here are the two states of the File Menu under Lion.
    Compare them to what you get under 10.6.8.
    State #1 when the frontmost document was never saved :
    The menu item Save is replaced by Save…
    There is no longer a Save As… menu item; it is replaced by Duplicate
    New item : Revert Document… (disabled)
    State #2 : when the frontmost document was saved at least once.
    Save… is replaced by Save a Version
    Duplicate is always here
    Revert Document… is now enabled.
    There is an important change in the title bar too.
    When a file has been edited, we get the word Edited to the right of the doc's name. This word is the title of a local menu:
    Triggering Browse All Versions gives us what I already posted in the thread:
    https://discussions.apple.com/message/16523021
    Yvan KOENIG (VALLAURIS, France) jeudi 27 octobre 2011 11:49:49
    iMac 21”5, i7, 2.8 GHz, 4 Gbytes, 1 Tbytes, mac OS X 10.6.8 and 10.7.2
    My iDisk is : <http://public.me.com/koenigyvan>
    Please : Search for questions similar to your own before submitting them to the community
