Mavericks sets file date to 1969

Since upgrading to Mavericks, my system has reset the modification dates on several files to Dec 31, 1969, and stamped the email that arrived on the day of the installation with 1969 instead of the current date.  The files with the bad modification dates are on Dropbox, but the affected email is on the local computer.
Has anyone seen this behavior and found a fix?  From what I read, it may have happened during the install if the computer was not connected to network time.  However, I don't see a way to change the modification dates on files.
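Until the cause is pinned down, the damaged dates can at least be put back by hand. A minimal sketch in Python, assuming you know each file's correct date; the path and timestamp below are hypothetical stand-ins (the shell's `touch -t` does the same job):

```python
import os
import tempfile
from datetime import datetime

# Hypothetical stand-ins: point `path` at an affected file and set
# `original` to its real pre-upgrade modification date.
path = os.path.join(tempfile.mkdtemp(), "example.txt")
open(path, "w").close()

original = datetime(2013, 10, 22, 9, 30)
ts = original.timestamp()
os.utime(path, (ts, ts))  # sets (atime, mtime) in one call

assert abs(os.path.getmtime(path) - ts) < 1
```

The same call works on any POSIX system, so it can be scripted over a whole folder of affected files once the correct dates are known (e.g. from EXIF data or a backup).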

Same here...
Mac OS: 10.10.2 (14C109)
NAS: WD MyCloud EX2 (1.05.30 firmware)
I connect to the NAS via AFP and SMB. Via AFP, moving files resets the creation date to Monday, 6 Feb 2040 07:28 and the modification date to the present time. Via SMB everything works fine.
This only happens when you MOVE files to the NAS via AFP. If you COPY, everything is OK. SMB works OK on both.
SMB can be a workaround for now. For those who don't know how to connect via SMB: press Cmd-K and enter the IP of the NAS like this: smb://192.168.0.20 (or cifs://192.168.0.10 if smb doesn't work). If the NAS supports SMB (Samba), you're OK.
BTW, I've lost the dates for some movies because of this. I made some family videos, moved them to the NAS and bang, now I don't know exactly when they were taken. JPGs have EXIF data; MOV files don't.

Similar Messages

  • CS6 - Data Set file doesn't refresh :(

    There appears to be a bug in CS6 - does this work in any version of Photoshop?
    Problem - data sets are not refreshed when contents of the data file are changed, despite clicking "Apply Data Set".
    The latest updates are installed; I'm running on Windows 7 Professional, 64-bit, with plenty of RAM and available storage. This has never worked and appears to be a bug in the software.  I've gone so far as to close PS and even reboot between steps 4 and 5, with the same problem.  Here's how to recreate it:
    1. Create a psd, define variables (I've tried both text and graphic layers)
    2. Assign csv file as "Data Set", save the psd
    3. Run export, close psd
    4. Change the data in the csv that was defined as the Data Set, save and close the csv file
    5. Re-open same psd
    6. Click "Apply Data Set" - nothing happens
    Data is NOT refreshed regardless of whether Data Set 1 or  “All data sets” is selected. 
    The only thing that seems to work is to open the PSD and redefine the exact same file as the data set again, then batch export.  Because I'd be dealing with volumes of psd files that use volumes of Data Set files that change daily, this isn't feasible. 
    Defining (or redefining the same file) cannot be recorded in batch mode, nor through the listener (tried previous version just to see).
    Please let me know if/when this will be fixed.  Really want to use Photoshop since it's the BEST 

    Still no answers after all these months.  Really disappointing

  • In LSMW flat file date format to be converted to user profile date setting.

    hi all,
    I got a flat file in which the date is in mm/dd/yyyy format. I converted the data using LSMW, but this conversion is valid only if the user has set his date format to mm/dd/yyyy in his user profile settings (Own Data). If the user has some other setting, it will give an error. So how do I convert the date and do the mapping for it? Please help.
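    The ABAP specifics aside, the underlying approach is language-agnostic: convert the flat-file date to the internal YYYYMMDD form once during load, and apply the user's display format only on output. A rough sketch of the idea in Python; the format table is a made-up stand-in for the SAP user-profile settings:

```python
from datetime import datetime

# Hypothetical stand-ins for the user-profile date-format settings.
FORMATS = {
    "MM/DD/YYYY": "%m/%d/%Y",
    "DD/MM/YYYY": "%d/%m/%Y",
    "DD.MM.YYYY": "%d.%m.%Y",
    "YYYY-MM-DD": "%Y-%m-%d",
}

def render_date(internal, user_format):
    """Render an internal YYYYMMDD date in the user's preferred format."""
    return datetime.strptime(internal, "%Y%m%d").strftime(FORMATS[user_format])

# Convert the flat-file value (mm/dd/yyyy) to the internal form once...
internal = datetime.strptime("03/25/2011", "%m/%d/%Y").strftime("%Y%m%d")
# ...then render it according to whatever the user profile says.
assert render_date(internal, "DD.MM.YYYY") == "25.03.2011"
```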

    Sunil,
    use the function module below to get the date in the current user's format:
    DATA: idate TYPE sy-datum,
          tdat8 TYPE string.
    CALL FUNCTION 'DATUMSAUFBEREITUNG'
      EXPORTING
        idate = idate
      IMPORTING
        tdat8 = tdat8.
    Amit.

  • How do I set  File Sharing  in iTunes to keep files up-to-date?

    How do I set up File Sharing in iTunes to keep files up to date?  I tried iTunes File Sharing, but the updates to the files never transferred across the USB during sync.  I tried Documents To Go, but Apple's behavior keeps them from syncing via USB, so when I am home and have no wifi/internet there is no way to transfer the files.  The other apps I have looked at also use wifi/internet to get around the problematic Apple behavior, so there is no way to connect via USB that I have found other than iTunes.  I have not been able to locate any helpful information on configuring File Sharing to hot-sync the files between the iPod and the computer.

    I am trying to USB sync Word, Excel, and PDF files between the ipod and the PC.
    Pretty sure icloud has to have the internet to work, but I can try to see what USB options there are there when time permits.
    Pretty sure dropbox has to have the internet to work, but I can try to see what USB options there are there when time permits.

  • EHS - CG36 - Import report - Set MSDS version from key file data

    Hi.
    This is in reference to thread EHS - CG36 - Import report - how to define MSDS version in key file?
    I'm faced with the same client requirement, came across this discussion, and wondering if there was a solution to this.
    I'm on ECC 6.0. My client requests retaining the version of the MSDS at the time of export (CG54 Dok-X, VER key file data) when it gets imported into another system (CG36). Example (similar to Roy's): if the reports in the export system are 1.0 and 2.0, then they must be created the same in the import system, not 1.0 and 1.1.
    Apparently, as Christoph has stated, the standard CG36 import process doesn't make use of the VER key file data beyond storing it in the report's Additional Info (DMS class characteristic).
    I tried to get around this via the user exit. In the IMPORT fm, I set the version/subversion of the report to be created, but the C1F3 fm that does the actual report creation just ignores it. If you've tried a similar approach, what have I missed? I'm also afraid I may have to clone the C1F3 fm...
    I appreciate your thoughts and inputs.
    Thanks in advance.
    Excerpt from my fm ZC13G_DOKX_SDB_IMPORT:
    FORM l_create_ibd_report...
      IF e_flg_error = false.
    *   fill the report_head
        e_report_head-subid     = i_subid.
        e_report_head-sbgvid    = i_sbgvid.
        e_report_head-langu     = i_langu.
        e_report_head-ehsdoccat = i_ehsdoccat.
        e_report_head-valdat    = sy-datum.
        e_report_head-rem       = i_remark.
    *beg-LECK901211-ins
        e_report_head-version    = i_ver.
        e_report_head-subversion = i_sver.
    *end-LECK901211-ins
    * Begin Correction 15.06.2004 745589 ***********************************
        IF ( l_api_subjoin_tab[] IS INITIAL ).
    *     create the report
          CALL FUNCTION 'C1F3_REPORT_CREATE'
            EXPORTING
              i_addinf            = i_addinf
              i_flg_header        = true
              i_flg_subjoin       = false
            IMPORTING
              e_flg_lockfail      = l_flg_lockfail
              e_flg_error         = l_flg_error
              e_flg_warning       = l_flg_warning
            CHANGING
              x_api_header        = e_report_head
            EXCEPTIONS
              no_object_specified = 1
              parameter_error     = 2
              OTHERS              = 3.
        ELSE.
    Edited by: Maria Luisa Noscal on Apr 8, 2011 8:23 AM

    The solution is to incorporate the logic used by transaction CG36VEN, that is, performing a direct update of table ESTDH after the new report is saved to the database.
    Edited by: Maria Luisa Noscal on Apr 19, 2011 7:34 PM

  • How to determine binary file data set size

    Hi all
    I am writing specific sets of array data to a binary file, appending each time so the file grows by one data set for each write operation.  I use the set file position function to make sure that I am at the end of the file each time.
    When I read the file, I want to read only the last 25 (or some number) data sets.  To do this, I figured on using the set file position function to place the file position to where it was 25 data sets from the end.  Easy math, right ?  Apparently not.
    Well, as I collect file-size data during the initial test run, I am finding (using the File Size function, which returns the number of bytes) that the file is not growing by the same amount every time.  The size and format of the data being written are identical each time: an array of four double-precision numbers.
    The increments I get are as follows: after the first write, 44 bytes; after the 2nd, 52 bytes; 3rd, 52 bytes; 4th, 44 bytes; 5th, 52 bytes; 6th, 52 bytes; 7th, 44 bytes; and it appears to maintain this pattern going forward.
    Why would each write operation not be identical in size?  It means my basic math for determining the correct file position to read only the last 25 data sets will not be simple.  And what if the pattern changes somewhere along the line, after I have accumulated hundreds or thousands of data sets?
    Any help on why this is occurring, or on a method of working around the problem, would be much appreciated.
    Thanks
    Doug
    "My only wish is that I am capable of learning each and every day until my last breath."

    I have stripped the DSC module functions out of the vi and attached it here.  I also set default values on all the inputs so it will run with no other input.  I also included my current data files (zipped, as I have four of them), though the file names are hard-coded in the vi; they can be changed to whatever works locally, and will probably have to be modified for the path anyway.
    If you point to a path that has no file, it will create a new one on the first run and the file size will show zero since there is no data in it. It will start to show the changes on each subsequent run.
    As I am creating and appending four different files, each with its own set of data but always the same format (an array of four double-precision numbers), and the file size information always increments the same for all four files (as can be seen in the File Size Array), I don't think it is a function of the size of the actual numbers but some idiosyncrasy in how the binary file is created.
    If this proves to be a major hurdle I guess I could try a TDM file but I already got everything else working with this one and need to move on to other tasks.
    Thanks for any continued assistance
    Doug
    "My only wish is that I am capable of learning each and every day until my last breath."
    Attachments:
    !_Data_Analysis_Simple.vi ‏40 KB
    SPC.zip ‏2 KB
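    For what it's worth, the varying 44/52-byte increments point at per-write metadata being stored alongside the numbers (LabVIEW's Write Binary File, for instance, can prepend array-size headers), not at the doubles themselves changing size; the LabVIEW-specific details here are an assumption. If each data set is written as a fixed-size, header-free record, the "last 25 sets" seek becomes plain arithmetic. A sketch of that layout in Python, with the record format mirroring the four-double data sets described above:

```python
import os
import struct
import tempfile

REC = struct.Struct("<4d")  # one data set: four little-endian doubles, always 32 bytes

path = os.path.join(tempfile.mkdtemp(), "sets.bin")

# Append 100 fixed-size records (no per-write size headers).
with open(path, "ab") as f:
    for i in range(100):
        f.write(REC.pack(i, i + 0.25, i + 0.5, i + 0.75))

# Reading the last 25 data sets is then exact arithmetic from the end.
n_last = 25
with open(path, "rb") as f:
    f.seek(-n_last * REC.size, os.SEEK_END)
    tail = [REC.unpack(f.read(REC.size)) for _ in range(n_last)]

assert len(tail) == n_last
```

Because every record is exactly `REC.size` bytes, the file size is always a multiple of the record size, and the offset of the Nth-from-last record never drifts no matter how many sets accumulate.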

  • Belle: File dates ignore time setting (uses GMT)

    I've come to discover that the device uses GMT for file creation/modification dates and ignores the time zone setting. I was kept unaware of this by using Nokia Multimedia Transfer to import camera files into iPhoto; having to connect USB in Mass Storage mode reveals the limitation. I am not at all sure whether this was the case prior to Belle, though I assume so.
    Seems like an oversight in the core of the OS, but if not, please fix this. The time zone setting should be more than cosmetic. File dates should not be disregarded on a device that can handle as many files and file types as modern smartphones (N8-00 here) can. It *is* important.
    Also, update all your software for Macintosh while you're at it, please.
    TIA!

    Thanks for the answer, I read everything except the man page for mktime :-(
    Well, I can't remember where I read that about "gmtime", but you are right:
    I will use gmtime instead of mktime, and it works; it converts to GMT time.
    Thanks again for your precious help.
    Cheers,
    Gilles
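    The gmtime/mktime distinction above comes down to this: mktime interprets broken-down time as local time, while gmtime (paired with timegm) works purely in UTC. A quick sketch of the UTC round trip in Python:

```python
import calendar
import time

ts = 1357000000  # an arbitrary epoch timestamp

# time.gmtime breaks a timestamp into UTC fields; calendar.timegm is its
# exact inverse. time.mktime would instead assume the fields are LOCAL
# time, shifting the result by the timezone offset (the bug described
# above).
utc_fields = time.gmtime(ts)
assert calendar.timegm(utc_fields) == ts
```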

  • HT201250 How do I restore all my MacBook Pro (mid 2009) files, data, Mavericks, and settings from my Time Capsule after upgrading the HD (250 to 500 GB)?

    Hi everyone!
    Please, I need a big help from you!
    Do you know how to restore from Time Capsule all my MacBook Pro (mid 2009) files, data, Mavericks, and settings after upgrading the HD (250 to 500 GB)?
    Thank you very much!
    Best regards,
    Marcio

    Recover your entire system: see No. 14 in the Time Machine FAQ.

  • Any way to set the file date?

    Hi-
    I have recently switched to aperture, and imported my iPhoto library.
    When I use the "Pictures" screensaver (the one where the pics fall down, looking like Polaroids), all of the dates on them are 1/3/09, the date they were imported into the library.
    It seems that the screensaver uses the file date, rather than the metadata for the date.
    Is there any way to get the file date to match the exif date?
    The photos are stored in the aperture library, if that matters.
    Alternatively, are there any better screensavers for pictures out there (that would work with the Aperture library)?
    thanks,
    -jamie

    So here's the thing. I'm still looking and I still can't find it. 'Tis a feature you have that I don't, but by the sounds of things it's not much use with Aperture anyway!
    The issue is as I stated above. The screensaver uses the previews generated by Aperture and thus references the creation date of the preview files, not the original date taken from the EXIF metadata. Hence your problem. If you export JPEGs out of Aperture they will have today's creation date, which (I haven't been able to test, as I can't find the annotations feature, but) will be referenced as such by the screensaver.
    So the solutions are twofold (and you have suggested them both):
    1) Find a screensaver that reads the exif metadata
    2) Turn off the feature.
    You may possibly be able to send the contents of your smart folder back to iPhoto and see if the screensaver reads it from there?
    Sorry I can't be more help.
    Anyone else able to chime in here?
    M.

  • Trying to programmatically set the data-source for a Crystal reports report.

    I've got the following existing procedure that I need to add to, in order to programmatically set the data source (server, database, username, and password) for a Crystal Reports report. I added the ConnectionInfo parts, but can't figure out how to attach this to the existing this._report object.
    This is currently getting the connection data from the report file, but I now need to populate this connection data from a 'config.xml' text file.
    Am I trying to do this all wrong?
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using CrystalDecisions.CrystalReports.Engine;
    using WGS.Reports.Reports;
    using CrystalDecisions.Shared;
    using WGS.Reports.Forms;
    namespace WGS.Reports
    {
        public class ReportService
        {
            ReportClass _report;
            ParameterFields paramFields;
            ConnectionInfo connectionInfo; // <- I added this

            public ReportService() { }

            public void DisplayReport(string reportName, int allocationNo)
            {
                if (reportName.ToLower() == "allocationexceptions")
                {
                    this._report = new AllocationExceptions();
                    PrepareConnection(); // <- I added this
                    PrepareAllocationExceptionReport(allocationNo);
                    this.DisplayReport();
                }
            }

            private void PrepareConnection() // <- I added this
            {
                // test - these will come from the config.xml file
                this.connectionInfo = new ConnectionInfo();
                this.connectionInfo.ServerName = "testserv\\test";
                this.connectionInfo.DatabaseName = "testdb";
                this.connectionInfo.UserID = "testuser";
                this.connectionInfo.Password = "test";
                this.connectionInfo.Type = ConnectionInfoType.SQL;
            }

            private void PrepareAllocationExceptionReport(int allocationNo)
            {
                this.paramFields = new ParameterFields();
                this.paramFields.Clear();
                ParameterField paramField = new ParameterField { ParameterFieldName = "@AllocationNo" };
                ParameterDiscreteValue discreteVal = new ParameterDiscreteValue { Value = allocationNo };
                paramField.CurrentValues.Add(discreteVal);
                paramFields.Add(paramField);
            }

            private void DisplayReport()
            {
                frmReportViewer showReport = new frmReportViewer();
                showReport.ReportViewer.ReportSource = this._report;
                showReport.ReportViewer.ParameterFieldInfo = paramFields;
                showReport.ShowDialog();
                showReport.Dispose();
            }
        }
    }
    Any help would be much appreciated.

    Hi Garry,
    Please post SAP Crystal Reports questions in their own forums here:
    SAP Crystal Reports, version for Visual Studio
    We don't provide support for this control now. Thanks for your understanding.
    We are trying to better understand customer views on the social support experience, so your participation in this survey project would be greatly appreciated if you have time. Thanks for helping make community forums a great place.

  • How can I set the data binding between Web Dynpro & Database table

    Dear friend,
    I am a beginner with Web Dynpro. I want to develop a simple project like this:
    1. Create my own database table via a Dictionary project, such as TAB_USER with 3 fields: USER_ID, USER_NAME, USER_POSITION. I have already deployed and archived it.
    2. Create my own Web Dynpro project, with input fields User ID, User name, and User position and a 'Save' icon on the selection screen. I have already deployed it.
    For the process, I want to enter data on the screen and save it in the table. Please give me guidance on the following:
    1. How can I set up data binding between Web Dynpro and a database table?
    2. Are there any other necessary steps I should consider in this case?
    Sorry if my question is simple; I have tried to find a solution myself but could not.
    Thanks in advances,
    SeMs

    Hi,
    You can write your own connection class for establishing the connection with DB.
    Ex:
    public class ConnectionClass {
        static Connection con = null;

        public static Connection getConnection() {
            try {
                Context ctx = new InitialContext();
                DataSource ds = (DataSource) ctx.lookup("jdbc/TSPAGE");
                con = ds.getConnection();
                return con;
            } catch (Exception e) {
                return null;
            }
        }
    }
    You can place the above class file in src folder and you can use this class in webdynpro.
    You can have another UserInfo class for reading and writing the data into the DB .
    Regards, Anilkumar
    PS : Refer
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/webas/java/simple java bean generator for database.pdf
    Message was edited by: Anilkumar Vippagunta

  • How to specify  inclusion and exclusion rules for File data sources

    This is the seed URL for a file data source: file://localhost/c:/myDir/
    I want to exclude indexing and searching of files under: file://localhost/c:/myDir/obsolete/
    What is the exact format for the exclusion URL?
    I have tried both file://localhost/c:/myDir/obsolete/ and /myDir/obsolete/,
    but neither of them seems to work; it still indexes everything under /myDir/.
    Should I just put: /obsolete/ as the exclusion URL?
    Also, after the initial crawl, if I change the inclusion and/or exclusion rules and then run the crawler again, it should update the indexes accordingly. Is that right?
    The version of UltraSearch I am using is 1.0.3.
    Thanks for any help on this.

    Try "/c:/myDir/obsolete/"
    Changing inclusion/exclusion rules does not affect files that have already been crawled; it only affects the next crawl.
    To do any DML on the existing data set, use SQL directly on the wk$url table under the instance owner.
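    For what it's worth, the prefix-style matching the reply suggests can be sketched as follows. The exact semantics here are an assumption (UltraSearch's matcher may differ), but it shows why "/c:/myDir/obsolete/" is the shape to try:

```python
def should_index(url, inclusions, exclusions):
    # Assumed semantics: a URL is indexed when it starts with an
    # inclusion prefix and contains no exclusion pattern. The exclusion
    # "/c:/myDir/obsolete/" then matches as a substring of the full
    # file:// URL.
    return (any(url.startswith(p) for p in inclusions)
            and not any(p in url for p in exclusions))

inclusions = ["file://localhost/c:/myDir/"]
exclusions = ["/c:/myDir/obsolete/"]

assert should_index("file://localhost/c:/myDir/keep.txt", inclusions, exclusions)
assert not should_index("file://localhost/c:/myDir/obsolete/old.txt", inclusions, exclusions)
```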

  • Creating a Mavericks USB boot drive after the horse has bolted.  Can I create a bootable USB drive from my iMac after installing Mavericks without having saved the Install OS X Mavericks.app file?  Do I need to re-download the whole 5.29 GB again?

    Creating a Mavericks USB boot drive after the horse has bolted.  Can I create a bootable USB drive from my iMac after installing Mavericks without having saved the Install OS X Mavericks.app file?  Do I need to re-download the whole 5.29 GB from the App Store again?  My problem is my 4 GB/month allowance on a 12-month contract.  I cannot purchase a data block from my ISP, and although my speed is theoretically slowed to 64k after reaching my 4 GB, in reality it simply stops downloading.

    Hi tasclix, it depends on what you mean by an OS X boot drive.
    If you want a recovery disk from which you can reinstall (by re-downloading) or recover from a time machine backup, then nbar is correct.
    If, however, you want to boot and run the OS X installer from the USB drive (so that you don't need to download again), then you will need a copy of "Install OS X  Mavericks.app"; see this article:
    http://support.apple.com/kb/HT5856
    Before downloading again, search your system to see if the installer is still there; it's usually in the /Applications folder unless it has been deleted, but check your whole system for it anyway. You never know, you might still have it somewhere.
    Message was edited by: SilverSkyRat

  • Integrating Flat File data to LDAP Directory using sunopsis driver

    Hello
    I need to import data from a csv file into an LDAP directory.
    In order to achieve this, I used the demo physical and logical file data server (called FILE_GENERIC) and set up a new LDAP data server using the tutorial "Oracle Data Integrator Driver for LDAP - User's Manual".
    I can manually see and update data on both the file and LDAP datastores.
    The fact is that I cannot manage to import/update data from the file to the LDAP directory through a dedicated interface.
    The issue, I think, comes from the PK/FK used by the Sunopsis relational model to represent the directory.
    The LDAP DN is represented by a set of two tables, representing in my example the organizational units on one hand and the persons on the other, linked through an FK in persons to an auto-generated PK in organizational units. My person table also has an auto-generated PK. All the directory datastore tables have been reverse-engineered through ODI.
    In my interface, I always use my cn as the update key.
    I first tried not to map the person PK in the interface, letting the driver generate it for me (or mapping a null PK). I then catch in the Operator a message like: "null : java.sql.SQLException: Try to insert null into a non-nullable column".
    Anyway, the first row is created in the directory and a new PK is given in the ODI datastore. Curiously, this is not, as I would presume, the last PK value + 1.
    There are some kinds of gaps in the ID sequences.
    I even tried checking "tolerated error" in the IKM step called "Insert new row". I'm using the IKM shipped with ODI: "IKM SQL Incremental Update". The sequence finishes in the Operator but, due I guess to the caught error, the other rows are not processed. (Anyway, I shouldn't have to tolerate errors.)
    I then tried putting unused custom PK values into my file and mapping the PK column to the LDAP datastore PK column, without much success: only one row is processed. Furthermore, the id of the PK in the datastore is different from the one I put in the file.
    I finally tried to generate PK values through SQL instructions by creating new steps in the IKM module, but that did not work either.
    I really do not see any other way to either have the driver construct a new PK at insert/update or make it ignore the null PK problem and process all the rows.
    If anyone has an idea about it, please share...
    Greetings,
    Adrien

    Hi,
    I am facing an issue that is probably the same.
    Using ODI 10.1.3.5, I can't insert new rows into my OpenLDAP.
    One of the points I see is that the execution takes the LDAP server as the staging area and wants to create the I$ table in it, so the data are already imported into the LDAP server.
    thanks for any help.

  • How to transfer a set of data from Excel spread sheet to an Access database

    hi,
    Can anyone please tell me how to transfer a set of data from an Excel spreadsheet to an Access database using an SQL query? I'm currently using the Java API. I have done all sorts of ODBC connection setup in Administrative Tools, and the file is in the correct location. I have done some coding, but with errors. Please help me to get rid of these errors.
    Coding:
    try {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        Connection datacon = DriverManager.getConnection("jdbc:odbc:exdata", "", "");   // For Excel driver
        Connection datacon1 = DriverManager.getConnection("jdbc:odbc:stock1", "", ""); // For mdb driver
        Statement datast = datacon.createStatement();
        Statement datast1 = datacon1.createStatement();
        ResultSet excelrs = datast.executeQuery("select item_code,sdate,closing_stock from phy"); // phy is the excel file
        while (excelrs.next()) {
            String ic = excelrs.getString("item_code");
            System.out.println(ic);
            String d = excelrs.getString("sdate");
            double cs = excelrs.getDouble("closing_stock");
            int dbrs = datast1.executeUpdate("insert into second values('" + ic + "','" + d + "'," + cs + ")"); // second is the mdb
        }
        excelrs.close();
    } catch (Exception e) {
        System.out.println(e);
    }
    Error:
    java.sql.SQLException: [Microsoft][ODBC Excel Driver] The Microsoft Jet database engine could not find the object 'C:\JavaGreen\phy.xls'. Make sure the object exists and that you spell its name and the path name correctly.
    thanks,
    kumar.

    JAVA_GREEN wrote:
    No, I haven't mixed them up. But the file from which I have to retrieve the data is in csv format. Even though I created another csv driver and tried it, I could not find a solution to load/transfer a set of records from one file (in Excel/csv format) to another file (in mdb format). Please help me. Are there any other methods for this data transfer?

    A csv file is NOT an excel file.
    The fact that Excel can import a csv file doesn't make it an excel file.
    If you have a csv file then you must use a csv driver, or just use other code (not jdbc) to access it. There is, normally, an ODBC (nothing to do with Java) text driver that can do that.
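    To illustrate the reply's point that csv data wants csv handling rather than the Excel driver, here is a rough sketch of the same transfer in Python; sqlite3 stands in for the Access database, and the rows are made-up values in the shape of the poster's phy file:

```python
import csv
import io
import sqlite3

# Hypothetical CSV content standing in for the poster's phy file.
csv_text = "item_code,sdate,closing_stock\nA001,2011-03-01,12.5\nA002,2011-03-02,7.0\n"

con = sqlite3.connect(":memory:")  # stands in for the Access side
con.execute("CREATE TABLE second (item_code TEXT, sdate TEXT, closing_stock REAL)")

# Read the csv with a csv reader, not an Excel driver.
rows = [(r["item_code"], r["sdate"], float(r["closing_stock"]))
        for r in csv.DictReader(io.StringIO(csv_text))]

# Parameterized inserts also avoid the string-concatenated SQL of the
# original code.
con.executemany("INSERT INTO second VALUES (?, ?, ?)", rows)

count = con.execute("SELECT COUNT(*) FROM second").fetchone()[0]
assert count == 2
```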
