Finder data not being written to .DS_Store file

The symptom of the problem I am experiencing is that the position of icons on my desktop (as well as icons in other folders in my home directory, and Finder window settings for folders in my home directory) is lost every time I log out. On a subsequent login, all of the icons "reflow" starting from the top right corner of the screen, just below the boot volume icon. I have been able to determine that this has to do with the .DS_Store file(s) in the Desktop folder and other folders of my home directory. If a .DS_Store file exists, its timestamp does not change when I change the icon layout and log out. If I delete the .DS_Store file, it is not re-created.
In my case, my home directory (and the child Desktop folder) is being mounted off an SMB share on a Windows 2003 server. My Mac is using AD authentication to the same Windows 2003 server, which is the domain controller. I'm logging in with my AD credentials, and my home directory mapping is picked up from the home directory field in AD.
Now, Googling this problem, I found a lot of people complaining about wanting to suppress the use/creation of the .DS_Store files on Windows network volumes. This led to an Apple KB article (http://support.apple.com/kb/HT1629) on how to modify a default to prevent the creation of .DS_Store files on network volumes--essentially the very behavior I am experiencing. The upshot of the KB article is to use the following command in Terminal:
defaults write com.apple.desktopservices DSDontWriteNetworkStores true
I did a 'defaults read' to confirm this default isn't set on my install of Mac OS X 10.5.6--and it isn't. I then tried the following variation in the hope that I could force the behavior I wanted:
defaults write com.apple.desktopservices DSDontWriteNetworkStores false
Predictably, this had no effect.
The upshot is, NONE of the Finder data for files and folders in my home directory (icon positions, Finder view window defaults, etc.) is preserved between logons. And this is driving me nuts! I've spent several hours over the past two evenings trying to troubleshoot this, and I've had no luck.
As a footnote, I'll mention that if I drag a folder from my home directory onto the local hard disk, the .DS_Store file is created/updated and things behave as expected (icon position and Finder window defaults are preserved). But, if I then drag the folder back over to my home directory, the values in the .DS_Store file become "frozen" and don't change.
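(For anyone retracing this, a minimal Terminal sketch of the checks described above - the killall line simply relaunches the Finder so it picks up the changed default:)
# show whatever is currently set for the Finder's desktop services
defaults read com.apple.desktopservices
# explicitly allow .DS_Store files on network volumes, then relaunch the Finder
defaults write com.apple.desktopservices DSDontWriteNetworkStores false
killall Finder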

Hey, try this:
1. Put a file in a folder on your Desktop.
(Edit: not your Desktop, but rather a typical local home on an HFS+ volume, ~/Desktop.)
2. Use "Get Info" to add a "Comment". The comment can be up to somewhere between 700 and 800 characters.
3. Copy the folder to a FAT-formatted flash drive, SMB share, etc.
4. Create a new folder on the flash drive.
5. Move the file from the first folder to the second folder. Confirm that the "Finder" shows the comment to be there.
6. Quit the "Finder" - in 10.4 and earlier, this action would ensure that all .DS_Store files get updated, including transfer of comments. I.e. don't "relaunch" or "kill", etc. Enable and use the "Quit" menu, or use:
osascript -e 'tell application "Finder" to quit'
7. Now where's the comment?
In step 2, you could also have just created a file on the non-HFS volume and wasted your time typing in a 700 character comment.
In step 6, a more real-world scenario is ejecting the drive or logging out, but deliberately quitting the "Finder" is the most conservative way to ensure that changes get written to .DS_Store files and comments survive. In 10.5.6, even under these conditions, comments are lost.
Icon positions and view preferences are one thing, but comments - that's real user-entered data that Apple is playing fast and loose with. And if they no longer support comments on non-HFS volumes, the "Finder" shouldn't show the comments when you double-check them in step 5, or allow them to be added in the alternate version of step 2.
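(If you'd rather replay the whole test from Terminal than click through the "Finder", something like this should do it - a sketch only; the paths and the volume name /Volumes/FATDRIVE are hypothetical:)
# steps 1-2: a file in a local HFS+ folder, with a comment set via the Finder's AppleScript dictionary
mkdir ~/Desktop/commenttest && touch ~/Desktop/commenttest/file.txt
osascript -e 'tell application "Finder" to set comment of (POSIX file "/Users/me/Desktop/commenttest/file.txt" as alias) to "test comment"'
# step 3: copy the folder to the non-HFS volume
cp -R ~/Desktop/commenttest /Volumes/FATDRIVE/
# step 6: quit the Finder so pending .DS_Store updates get flushed
osascript -e 'tell application "Finder" to quit'
# step 7: relaunch the Finder and ask for the comment back
open /System/Library/CoreServices/Finder.app
osascript -e 'tell application "Finder" to get comment of (POSIX file "/Volumes/FATDRIVE/commenttest/file.txt" as alias)'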
..."C'mon Apple... what gives here?"...
Unfortunately, this "Discussions" site is not frequented by Apple devs. Have you considered filing a bug report? I wonder what they would tell you, e.g. whether the treatment of .DS_Store files actually turns out to be a bug or is intentional, as it suspiciously seems to be...
http://developer.apple.com/bugreporter/index.html

Similar Messages

  • Hprof heap dump not being written to specified file

    I am running with the following
    -Xrunhprof:heap=all,format=b,file=/tmp/englog.txt (java 1.2.2_10)
    When I start the appserver, the file /tmp/englog1.txt gets created, but
    when I do a kill -3 pid on the .kjs process, nothing else is being written to
    /tmp/englog1.txt. In the kjs log I do see the "Dumping java heap..." message
    and a core file is generated.
    Any ideas on why I'm not getting anything else written to /tmp/englog1.txt?
    Thanks.

    Hi
    It seems that the option you are using is correct. You might modify it to something like
    java -Xrunhprof:heap=all,format=a,cpu=samples,file=/tmp/englog.txt,doe=n ClassFile
    This seems to work on 1.3.1_02, so it may be something specific to the JDK version that
    you are using. Try a later version just to make sure.
    -Manish
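    (For reference, a minimal way to exercise the SIGQUIT-triggered dump outside the appserver - the class name MyApp is hypothetical, and doe=n suppresses the automatic dump on exit:)
    # run a test class under HPROF, binary format, no dump on exit
    java -Xrunhprof:heap=all,format=b,doe=n,file=/tmp/englog.hprof MyApp &
    # kill -3 sends SIGQUIT, the same signal the poster sends to the .kjs process;
    # HPROF should write the heap dump to the file at that point
    kill -3 $!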

  • Ocrfile is not being written to. Open file issues. Help please.

    I've been troubleshooting an open file issue on our Test environment for quite a while now. Oracle has had me update to the latest CRS bundle for 10.2.0.3, then upgrade to 10.2.0.4, then two more patches via OPatch to bring 10.2.0.4 RAC to its most recent patch. None of these patches resolved our problem. We have ~8700 datafiles in the database; once the database is started, we're at ~11k open files on Production, but on Test we're at ~37k or higher. It takes 1-2 days to hit the 65536 limit before it crashes, so I have to 'bounce' the database to keep it from crashing. Yes, I could raise the ulimit, but that isn't solving the problem.
    Over the weekend I noticed that on Production and DEV the ocrfile is being written to constantly and has a current timestamp, but on Test the ocrfile has not been written to since the last OPatch install. I've checked the CRS status via 'cluvfy stage -post crsinst -n all -verbose' and everything comes back as 'passed'. The database is up and running, but the ocrfile is still timestamped at April 14th, and open files jump to 37k upon opening the database and continue to grow to the ulimit. Before hitting the limit, I'll have over 5,000 open files for 'hc_<instance>.dat', which is how I was led down the path of patching Oracle CRS and RDBMS to resolve the 'hc_<instance>.dat' bug that was supposed to be resolved in all of the patches I've applied.
    From imon_<instance>.log:
    Health check failed to connect to instance.
    GIM-00090: OS-dependent operation:mmap failed with status: 22
    GIM-00091: OS failure message: Invalid argument
    GIM-00092: OS failure occurred at: sskgmsmr_13
    That info started the patching process but it seems like there's more to it and this is just a result of some other issue. The fact that my ocrfile on Test is not being written to when it updates frequently on Prod and Dev, seems odd.
    We're using OCFS2 as our CFS, updated to most recent version for our kernel (RHEL AS 4 u7 -- 2.6.9-67.0.15.ELsmp for x86_64)
    Any help greatly appreciated.

    Check the bug on Metalink.
    If it is Bug 6931689, the fix is to apply one of the following patches:
    Patch 7298531 CRS MLR#2 ON TOP OF 10.2.0.4 FOR BUGS 6931689 7174111 6912026 7116314
    or
    Patch 7493592 CRS 10.2.0.4 Bundle Patch #2
    Be aware that the fix has to be applied to the 10.2.0.4 database home to fix the problem.
    Good Luck
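    (Two quick checks that may help narrow this down before or after patching - both assume the 10.2 CRS tools are on the PATH and that lsof is installed:)
    # verify OCR integrity and confirm which OCR file/device is actually in use
    ocrcheck
    # count open handles on the health-check files that appear to be leaking
    lsof | grep 'hc_.*\.dat' | wc -l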

  • IO Labels are NOT being written to prefs!

    I've followed all the advice in existing threads about this, and it's definitely a serious bug. IO Labels are not being written to prefs for me.
    Any ideas? Already tried deleting prefs, creating new blank project and not saving, nothing has worked.

    I found a workaround for anyone having this issue - and this is the ONLY thing that has worked after a week of trying everything on several forums.
    Open Logic, set your labels how you want.
    While Logic is open go to ~/Library/Preferences and delete com.apple.logic.pro.plist
    Quit Logic. I don't think it matters whether you save your project or not. When Logic quits a new plist will be written, and this one WILL have your labels!
    Seems on my machine Logic would not update the IO labels bit of the prefs unless it was writing a complete new prefs file.
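    (The same workaround as Terminal commands, for anyone who prefers that - the path assumes the default preferences location:)
    # with Logic still running, delete its preference file
    rm ~/Library/Preferences/com.apple.logic.pro.plist
    # now quit Logic; on exit it writes a complete new plist, IO labels included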

  • I have an external hard drive connected; Finder will not let me search for files in the external hard drive. How do I search in a hard drive?

    I have an external hard drive connected; Finder will not let me search for files in the external hard drive. How do I search in a hard drive?

    What are you using to search? You should be able to search any drive from the Finder. If you're having trouble, try "EasyFind" from the App Store (free). You can search any volume with it.
    Clinton
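    (If the underlying issue is that Spotlight isn't indexing the external volume, rebuilding the index sometimes helps - the mount point /Volumes/MyDrive is hypothetical:)
    # turn indexing on for the volume, then erase and rebuild its index
    sudo mdutil -i on /Volumes/MyDrive
    sudo mdutil -E /Volumes/MyDrive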

  • Inbox folders not being created from msf files

    Last week, I noticed that my Inbox folder files weren't appearing in my profile. I posted a generalized question here: https://support.mozilla.org/en-US/questions/996903
    The response I received didn't help much. The folders I created appeared in the remote mailbox as empty folders.
    My mail account is synced via IMAP to a remote Zimbra server using STARTTLS. I can view my Inbox, but the other folders in my account are not visible. I can send and receive messages from my INBOX.
    When I select "Manage folder Subscriptions" in the account options, I can see the folders. When I select them and click the "Subscribe" button, nothing happens.
    Inspecting my IMAP directory, I see the following structure:
    ~/.thunderbird/13zr0oo0.default/ImapMail/mail.<zimbra_account>.com:
    -rw-r--r-- 1 matthewe matthewe 1238 May 2 10:38 Archives.msf
    -rw-r--r-- 1 matthewe matthewe 1235 May 2 10:38 Drafts.msf
    -rw------- 1 matthewe matthewe 47500385 May 2 09:56 INBOX
    -rw-r--r-- 1 matthewe matthewe 375885 May 2 10:36 INBOX.msf
    -rw-r--r-- 1 matthewe matthewe 25 May 2 08:58 msgFilterRules.dat
    -rw-r--r-- 1 matthewe matthewe 1233 May 2 10:38 Sent.msf
    -rw-r--r-- 1 matthewe matthewe 0 May 2 10:38 site-errors.msf
    -rw-r--r-- 1 matthewe matthewe 0 May 2 09:59 Templates.msf
    -rw-r--r-- 1 matthewe matthewe 2275 May 2 10:33 Trash.msf
    The msf files are created when I click the "Subscribe" button, but the corresponding mail files containing the mail data are not (note that a non-msf file exists only for "INBOX").
    Removing .msf files and restarting Thunderbird has no effect.
    When I launch, the errors console reports the following three errors:
    "Could not read chrome manifest 'file:///usr/lib/thunderbird/extensions/%7B972ce4c6-7e08-4474-a285-3208198ce6fd%7D/chrome.manifest'.
    -- There is no chrome.manifest file in my filesystem under /usr/lib/thunderbird/extensions/{972ce4c6-7e08-4474-a285-3208198ce6fd}"
    "While creating services from category 'profile-after-change', could not create service for entry 'Disk Space Watcher Service', contract ID '@mozilla.org/toolkit/disk-space-watcher;1'"
    This is reported as https://bugzilla.mozilla.org/show_bug.cgi?id=883621 and has been resolved at https://bugzilla.mozilla.org/show_bug.cgi?id=895964 but my system still throws the error. (I have plenty of disk space. Permissions perhaps?)
    [JavaScript Warning: "Calendar https://mail.zimbrahostedemail.com/home/matthewe@<zimbra_client>.com/Calendar has a dangling E-Mail identity configured."]
    I'm not sure what that means.
    I am using Thunderbird 24.5.0 on Ubuntu 13.10

    Is your question about files not being created, or about not being able to send and receive mail, or both?

  • Opmn logs not being written

    Hi All,
    We are facing an issue.
    No logs are being written to the opmn/logs directory. They were being written correctly until 4th December and then stopped all of a sudden.
    Are there any configuration files which may have been affected?
    Best regards,
    Brinda

    To clarify.
    We are now rotating the logfiles with the linux/unix command logrotate. I suspect this is what is causing the issue: the logs are not being filled after rotation, and we need to restart opmn for the logs to start getting populated.
    So I think we need to configure rotating logs in opmn.xml.
    The Application server version is 10.1.3. This is the log line in our opmn.xml.
    <log path="$ORACLE_HOME/opmn/logs/opmn.log" comp="internal;ons;pm" rotation-size="1500000"/>
    So the question is how do we activate opmn to rotate the log so that we do not need to use logrotate.
    In this document it says that you have to activate ODL for log rotation to work:
    http://download.oracle.com/docs/cd/B25221_04/web.1013/b14432/logadmin.htm#sthref370
    Is this true, or can we rotate text logs as well? That is what we would prefer.
    Best regards,
    Gustav
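    (If you stay with logrotate, the usual fix for a daemon that keeps writing to a rotated-away file is copytruncate - a sketch only, with the path assumed from a typical ORACLE_HOME:)
    # /etc/logrotate.d/opmn
    # copytruncate truncates opmn.log in place instead of renaming it,
    # so opmn keeps its open file handle and no restart is needed
    /u01/app/oracle/product/10.1.3/opmn/logs/opmn.log {
        size 1500k
        rotate 4
        copytruncate
    }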

  • Master Data not being uploaded to Info Object

    Hi all,
    I'm trying to upload some master data into an InfoObject with attributes. When I trigger the DTP, the status gets stuck at yellow and the data is not uploaded. No errors are shown.
    The data has reached PSA perfectly. My data source is a flat file.
    Anyone have any ideas?
    Please help
    Thanks

    Hi,
    There could be various reasons for this. I would check it in the following order.
    Activity set 1:
    1) Check in SM50 if there is any activity happening.
    2) Check in SM37 for any cancelled jobs.
    3) Check in SM21 for error logs.
    Activity set 2:
    1) Cancel the current request.
    2) Ensure the transformation (between the DataSource and the InfoObject) is active.
    3) Reduce the data package size (say, to 100) and try again.
    4) Run the DTP
    Activity set 3:
    1) If activity sets 1 and 2 do not help, I would also delete the request in the PSA. Then take only one record of the flat file which you are sure has no illegal characters and try loading.
    2) If successful, this would mean the problem is an erroneous record in the complete set (it may take some time for the error to show up, hence the status staying yellow).
    3) If unsuccessful, this could be because some objects are inactive, and I would start by deleting and recreating the DTP.
    Also, do you have any routines in the transformation? If so, please recheck the logic.
    Hope it helps!!

  • Date not being displayed in the custom format

    Hi ,
    I have a problem wherein I am unable to change the format of the date.
    I need the date to be in the format 'January 10,2010', however the date is being displayed as '1/10/10'. I have tried all options for the date formatting. I specified the custom format as 'MMMM D,YYYY' as mentioned in the SAP Library. When I try the standard custom options like YYYY-MM-DD, the date is not displayed in the custom format provided by SAP either.
    I have bound the date from the context and used the Current Date field provided by SAP. Nothing works.
    It's not working on the new form; however, the old forms have the same kind of formatting and are displayed as desired.
    Please suggest as to what the issue could be.
    Thanks,
    Soumya.

    Hi All,
    Even I'm encountering the same issue. I'm trying to control the output date format programmatically. Please have a look.
    data: v_int_date like sy-datum,
          v_ext_date(10).
    v_int_date = '20110201'.
    write v_int_date to v_ext_date MM/DD/YYYY.
    Here I've tried to convert the date to external format using a fixed format (MM/DD/YYYY). But it still gets output in the format DD/MM/YYYY, as that's the way it is defined in the user master!
    Is there any solution for this? How can I 'override' defaults in the SAP User Master?
    Thanks,
    Mahesh

  • CRMXIF_ORDER_SAVE_M - Lead_Start Date not being populated.

    Hi Gurus,
    I am migrating Lead Data in SAP CRM using the CRMXIF_ORDER_SAVE_M / CRMXIF_ORDER_SAVE_U03 IDOC structure. The problem is that the Lead Start and End dates are not being passed to the database. I am populating the data at header level using the appointment segment. Your help is much appreciated.

    Hello,
    I am not sure what you mean exactly; however, you can first test in crmd_order by creating a lead and having a look (in the debugger) at which fields are needed and how they are stored in the database.
    Please also verify your date profile, date types and rules.
    Best regards
    Rene

  • Some client data not being populated after upgrading to NCS 1.1.0.58

    After upgrading to Cisco Prime NCS 1.1.0.58, some of the client data is not being populated or gathered. The graphs labeled "Client Count By Association/Authentication" and "Client Count By Wireless/Wired" are no longer being updated. I am not sure what was changed during the upgrade, or where to look to get it to start collecting the data again.

    Eajackson,
    Make sure your WLC, MSE and WCS/NCS code matches the compatibility matrix here:
    http://www.cisco.com/en/US/docs/wireless/controller/5500/tech_notes/Wireless_Software_Compatibility_Matrix.html
    Sent from Cisco Technical Support iPhone App

  • Jvmargs not being placed in jnlp file by ant deploy task

    Hi
    I have an ant deploy task as given below. I run ant to produce the jnlp file. Everything works ok, EXCEPT the jvmargs, and the enclosing platform element, are not copied into the jnlp file.
    Is this the intended behaviour?
    thanks
    graham
    -- ant task ---
    <project name="BuildInfoView" xmlns:fx="javafx:com.sun.javafx.tools.ant">
        <taskdef
            resource="com/sun/javafx/tools/ant/antlib.xml"
            uri="javafx:com.sun.javafx.tools.ant"
            classpath="/Library/Java/JavaVirtualMachines/jdk1.7.0_17.jdk/Contents/Home/lib/ant-javafx.jar"/>
        <fx:deploy
            outdir="../installation/bin"
            outfile="infoview"
            width="800"
            height="500"
            embeddedWidth="100%"
            embeddedHeight="100%">
            <fx:application
                name="Infoview"
                mainClass="com.ods.infoview.Main">
            </fx:application>
            <fx:info
                title="InfoView"
                description="InfoView"
                vendor="OrangeDog">
            </fx:info>
            <fx:permissions elevated="true"/>
            <fx:platform javafx="2.2+">
                <fx:jvmarg value="-Xmx400"/>
            </fx:platform>
            <!--
            cacheCertificates="true"/>
            -->
            <fx:resources>
                <fx:fileset dir="../gradle/output/"/>
            </fx:resources>
        </fx:deploy>
    </project>

    There is a FAQ note about this
    http://java.sun.com/products/javawebstart/faq.html#72
    Bill

  • Data not indexed after running HV file upload program

    hello
    we are using TREX 7.1 and running the CRM_MKTTG_FDS_LOAD_FILE program.
    I did a sample test with just 11 rows of data based on the original file I have in TREX; the CRM_MKTTG_FDS_LOAD_FILE program completed and 11 records were indexed.
    However, when I run the program with the same file format but a much bigger set of data, the job finishes with a time-out error, and when I check TREXADMIN, none of the data gets indexed. Both the sample file and the original file have the same format, and both data sources are configured the same way.
    Can somebody give us a hint? Appreciate your quick response, guys!

    Hello,
    What version of LabVIEW and of Microsoft Office are you using? I recommend that you use the latest version of both, since I found a few references about this error occurring with older versions of LabVIEW. Also, there are some example VIs that ship with LabVIEW for communicating with Excel through ActiveX...have you tried those examples to see if they work? If they work, there might be a problem with your implementation of the ActiveX functions in your program.
    I hope one of these suggestions helps. Please let me know if you continue to have any problems.
    Good luck, and have a pleasant day.
    Sincerely,
    Darren Nattinger
    Applications Engineer
    National Instruments
    Darren Nattinger, CLA
    LabVIEW Artisan and Nugget Penman

  • "Sidecar file has conflict", keyword changes in LR not being written to JPEGs

    I am new to Lightroom and evaluating 3.6 on a trial.
    I performed the following test:
    1. Imported 3 pictures into Lightroom from my hard drive
    2. In Lightroom, I made a keyword change to one photo (removed a keyword).
    3. All three photos' Metadata icons show the arrow, indicating that there has been a change. (I only changed the keyword on one photo.)
    4. I click the icon for the photo whose keyword I changed.
    5. I get the message "The metadata for this photo has been changed in Lightroom. Save the changes to disk?"
    6. I click "Yes".
    7. The photo's Metadata icon shows an exclamation point. When you hover over the exclamation point, it shows the message "Sidecar has conflict".
    I have two questions:
    1. My expectation is that when I change a keyword, or any other metadata, in LR, it will show up on the photo outside of LR. My understanding is that this is a functionality of LR. Am I wrong? (The Help files seem to indicate that this is a reasonable expectation.)
    2. I am assuming, having read many posts on the forum, that the error message "Sidecar has conflict" is a bug. Am I right? (My understanding is that JPEGs don't have sidecar files, which just makes this message even more odd.)
    I am on Windows 7 Home Premium (upgraded from Vista Home Premium), 32 bit.
    Thanks for your help,
    --Elisabeth.

    Beat, Thanks for taking the time to answer.
    Your expectation is not quite right:
    Any change to metadata will primarily be recorded in your catalog, and nowhere else. Only if you perform "Save Metadata to File" (which can be set to be done automatically in your catalog settings) or "Save the Changes to Disk" after pressing the arrow will the changes be written into your JPEGs (or into *.xmp sidecar files for Raw images), where they can be seen from outside of LR.
    Yes, you are right. What I should have written was: My expectation is that when I change a keyword, or any other metadata in LR, the change will show up outside of LR, after I "Save Metadata to File". This is not happening, instead I get the error message and different keywords in LR than outside.
    Where do your originals reside? Could it be you don't have the proper authority to rewrite the JPEGs?
    My originals reside on my Public drive: I definitely have rights. In fact, I made many other changes to photos on the same drive after I posted this on the forum.

  • Tag Info Not Being Written to File

    Some of my MP3's have comments that I would like to remove.
    For example, let's say one of my MP3's in iTunes shows the comment "How are you?"...
    When I right click on the song, click GET INFO, select INFO and delete the comments, the comments are no longer visible in iTunes. However, when I go to the file directly, right click and select properties, the comments that I deleted in iTunes are still showing in the file's properties.
    Even weirder, if instead of deleting the comments "How are you?" I just over-write the comment with "This is a test", that comment will show when I go directly to the file and look at its properties. However, if I go back into iTunes and then delete the comment "This is a test", the original comment "How are you?" shows back up in the file properties.
    Any ideas?

    your code...
    for(int i=0;i==FileData.length;i++) // this is a VERY odd looking loop
        WriteBuff.write(FileData[i]+"\r\n");
    Shouldn't that be i<FileData.length? If FileData is indeed a non-empty array, the condition i==FileData.length is false on the very first pass, so the loop body never executes and nothing is written to the file.
    Anyway, it doesn't seem to be an IO problem, so the difficulty is probably in your loop. Try this for starters:
    for(int j=0;j<FileData.length;j++){
        System.out.println("Looping "+j);
        WriteBuff.write(FileData[j]+"\r\n"); // not sure about this either; maybe use PrintWriter and println instead?
    }
