Max time between logs

Hello,
In LabVIEW DSC 7.1 there was a "maximum time between logs" parameter in the Configure>>Historical menu.
This parameter has been removed in LabVIEW DSC 8.2.
Is there anything I have to do to force the same "maximum time between logs" behaviour in LabVIEW DSC 8.2?
Thanks

Hi !
Look at the answer here.
Cordialement / Regards, Matthieu B. @ NI France.

Similar Messages

  • How to use "Days to keep historical data" & "Maximum time between logs (hh:mm:ss)"

    I am using LabVIEW DSC. Values are being logged continuously into Citadel.
    Is it possible to retain data of just one month and delete the earlier data as fresh data is being logged into Citadel?
    Is it possible to achieve this with the "Days to keep historical data" & "Maximum time between logs (hh:mm:ss)" options in the history menu of the Tag Configuration Editor?

    Yes, "Days to keep historical data" does what you are looking for. After the specified number of days, the old data gets overwritten with new data, so you always have only the specified number of days of data in Citadel.
    Note: You may sometimes see that old data doesn't get overwritten until a day or so beyond your setting (depending on how much data is being logged). This is because Citadel logs in "pages" and waits until the current page it's logging to is full before it starts overwriting the old ones.
    You do not have to use the 'Max time between logs' option for this. This option forces Citadel to log data every so-many hh:mm:ss regardless of whether or not the data has changed. Note that this is NOT a way to "log data on demand": the periodic logging interval for a particular tag changes when its data changes, so even with this setting all data may not get logged at one shot. Anyway, as I said, you do not have to use this setting for what you're trying to do.
    Regards,
    Khalid

  • I miss the "maximum time between logs" - option

    Hello,
    with LabVIEW DSC 7.0 I could configure my Citadel database with an option called "maximum time between logs".
    I miss this in LabVIEW DSC 8.2.1.
    Where has it gone?
    Thank you
    Thomas

    That's a good point. I will verify this behavior and report it to our developers if I find the same issue. As a workaround, you could use the Database Writing VIs (DSC Module >> Historical >> Database Writing). With these VIs you can write data to the database programmatically at a rate that you define yourself.
    I hope this helps,
    Jochen
    Attachments:
    write trace vis.jpg 18 KB

  • Determine max differences between times per day

    What is the simplest query to determine the max time between date fields per day? In other words, we have records being entered all the time into a certain table. I want to watch that table and measure, at the end of each day, the max time between consecutive records. I then want to go back historically and look at what the average of those daily max times was.

    Test data:
    CREATE TABLE test274 AS
    WITH src AS(
    SELECT '10/19/2010 3:23:53 PM' dta from dual union all
    SELECT '10/19/2010 3:28:04 PM' from dual union all
    SELECT '10/19/2010 3:28:12 PM' from dual union all
    SELECT '10/19/2010 3:40:46 PM' from dual union all
    SELECT '10/19/2010 3:50:22 PM' from dual union all
    SELECT '10/19/2010 3:55:17 PM' from dual union all
    SELECT '10/19/2010 3:58:49 PM' from dual union all
    SELECT '10/19/2010 4:00:06 PM' from dual union all
    SELECT '10/19/2010 4:00:06 PM' from dual union all
    SELECT '10/19/2010 4:00:06 PM' from dual union all
    SELECT '10/19/2010 4:08:00 PM' from dual union all
    SELECT '10/19/2010 4:08:24 PM' from dual union all
    SELECT '10/19/2010 4:09:05 PM' from dual union all
    SELECT '10/19/2010 4:11:16 PM' from dual union all
    SELECT '10/19/2010 4:19:36 PM' from dual union all
    SELECT '10/19/2010 4:29:32 PM' from dual union all
    SELECT '10/19/2010 4:34:05 PM' from dual union all
    SELECT '10/19/2010 4:47:33 PM' from dual union all
    SELECT '10/19/2010 4:47:33 PM' from dual union all
    SELECT '10/19/2010 4:48:07 PM' from dual union all
    SELECT '10/19/2010 4:49:55 PM' from dual union all
    SELECT '10/19/2010 4:55:44 PM' from dual union all
    SELECT '10/19/2010 5:01:20 PM' from dual union all
    SELECT '10/19/2010 5:13:42 PM' from dual union all
    SELECT '10/19/2010 5:13:42 PM' from dual union all
    SELECT '10/19/2010 5:16:31 PM' from dual union all
    SELECT '10/19/2010 5:21:28 PM' from dual union all
    SELECT '10/19/2010 6:44:38 PM' from dual union all
    SELECT '10/19/2010 6:46:54 PM' from dual union all
    SELECT '10/19/2010 6:48:54 PM' from dual union all
    SELECT '10/19/2010 7:04:23 PM' from dual union all
    SELECT '10/19/2010 7:13:43 PM' from dual union all
    SELECT '10/19/2010 7:53:18 PM' from dual union all
    SELECT '10/19/2010 11:42:29 PM' from dual union all
    SELECT '10/20/2010 7:32:32 AM' from dual union all
    SELECT '10/20/2010 7:57:51 AM' from dual union all
    SELECT '10/20/2010 8:02:38 AM' from dual union all
    SELECT '10/20/2010 8:22:32 AM' from dual union all
    SELECT '10/20/2010 8:25:29 AM' from dual union all
    SELECT '10/20/2010 8:37:51 AM' from dual union all
    SELECT '10/20/2010 8:37:51 AM' from dual union all
    SELECT '10/20/2010 9:18:40 AM' from dual union all
    SELECT '10/20/2010 9:37:21 AM' from dual union all
    SELECT '10/20/2010 9:41:30 AM' from dual union all
    SELECT '10/20/2010 10:07:21 AM' from dual union all
    SELECT '10/20/2010 10:24:51 AM' from dual union all
    SELECT '10/20/2010 10:37:48 AM' from dual union all
    SELECT '10/20/2010 10:42:35 AM' from dual union all
    SELECT '10/20/2010 11:02:45 AM' from dual union all
    SELECT '10/20/2010 11:02:54 AM' from dual union all
    SELECT '10/20/2010 11:07:10 AM' from dual union all
    SELECT '10/20/2010 11:29:08 AM' from dual union all
    SELECT '10/20/2010 11:29:31 AM' from dual union all
    SELECT '10/20/2010 11:42:30 AM' from dual union all
    SELECT '10/20/2010 11:42:30 AM' from dual union all
    SELECT '10/20/2010 11:46:49 AM' from dual union all
    SELECT '10/20/2010 11:58:00 AM' from dual union all
    SELECT '10/20/2010 12:17:43 PM' from dual union all
    SELECT '10/20/2010 12:20:56 PM' from dual union all
    SELECT '10/20/2010 1:24:48 PM' from dual union all
    SELECT '10/20/2010 1:32:38 PM' from dual union all
    SELECT '10/20/2010 1:38:35 PM' from dual union all
    SELECT '10/20/2010 1:41:23 PM' from dual union all
    SELECT '10/20/2010 2:09:40 pm' from dual union all
    select '10/20/2010 2:20:29 PM' from dual
    )
    select to_date( dta, 'mm/dd/yyyy hh:mi:ss am', 'nls_date_language = american' ) dta
    from src;

    You didn't mention your Oracle version, so this query was tested on Oracle 11.2:
    SELECT trunc(dta) dta,
           NVL( max( diff ), numtodsinterval( 0, 'day' ) ) maxdiff
    FROM (
        SELECT dta,
               numtodsinterval(
                    dta - ( lag( dta ) over ( partition by trunc(dta) order by dta ) ),
                    'day'
               ) diff
        FROM test274
    )
    GROUP BY trunc(dta);

    DTA                       MAXDIFF
    2010/10/19                0 3:49:11.0
    2010/10/20                0 1:3:52.0
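
    The original post also asked for the historical average of these daily maxima; that part went unanswered, so here is a minimal follow-up sketch against the same test274 table (an editorial addition, not from the original thread). Since Oracle's AVG cannot be applied to INTERVAL values directly, this works in fractional days (plain DATE subtraction) and converts to seconds at the end:

    SELECT AVG(maxdiff_days) * 24 * 60 * 60 AS avg_maxdiff_seconds
    FROM (
        -- max gap per day, in fractional days; a day with a single row
        -- yields NULL and is ignored by AVG
        SELECT trunc(dta) AS day, MAX(diff) AS maxdiff_days
        FROM (
            SELECT dta,
                   dta - LAG(dta) OVER (PARTITION BY trunc(dta) ORDER BY dta) AS diff
            FROM test274
        )
        GROUP BY trunc(dta)
    );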

  • "logon time" between USR41 and security audit log

    Dear colleagues,
    I got the following question from a customer, for security audit reasons:
    > Are the 'Logon date' and 'Logon time' values stored in table USR41 exactly the same as
    > the logon history in the Security Audit Log (Tr-cd: SM20)?
    Table USR41 saves 'logon date' and 'logon time' when a user logs on to the SAP System from the SAP GUI.
    The Security Audit Log (Tr-cd: SM20) can save a user's logon history;
    at the time the user logs on, a security audit log record is written.
    I tried to check the SAP GUI logon program SAPMSYST several ways; however,
    I could not check it because the program is protected even against read access.
    I want to know the specification of "logon time" in USR41 versus the security audit log,
    or how to look into the program SAPMSYST and debug it.
    Thank you.
    Best Regards.

    Hi,
    If you configure the Security Audit Log you can achieve your goals:
    1 - Audit which employees access the screens, tables, data, etc.
    Answer: Options 1 & 3
    2 - Audit all changes by all users to the data
    Answer: Options 1 & 3
    3 - Keep the data up to one month
    Answer: No such setting, but you can define a maximum log size.
    4 - Log retention period can be defined
    Answer: No, but you can define a maximum log size.
    SM19/SM20 options:
    1 - Dialog logon
    You can check how many users logged in and at what time.
    2 - RFC login/call
    Same as above; you can check RFC logins.
    3 - Transaction/report start
    You can see which reports or transactions were executed and at what time.
    (This will help you analyze unauthorized data changes. Transactions/reports can give you an idea of what data has been changed, so you can see who changed the data.)
    4 - User master change
    (You can see the user master change log with this option.)
    5 - System/other events
    (System errors can be logged using this option.)
    I hope this clears things up.
    Regards.
    Rajesh Narkhede

  • When using historical datalogging, is it possible to hide the elapsed time between tag updates when viewing on a graph?

    When my process stops, I am reading an array of tags (datapoints) and writing the max and average to memory tags for data logging. However, when viewing the data, the elapsed time between cycles spreads the data out unevenly. It could be 90 seconds between cycles, or maybe two hours or longer. Is there a way to convert the time-axis data to be just consecutive datapoints? It would be like logging data based on a particular condition happening rather than time-based trending. Should I try to use the data set logger examples instead? I would prefer to use the built-in datalogging features rather than writing to databases.

    You could export your data to a spreadsheet file and then write it again into a second database using this example program from the DevZone:
    http://zone.ni.com/devzone/conceptd.nsf/webmain/5A921A403438390F86256B9700809A53?opendocument
    Using this program (if you don't want to modify it, which would take a reasonable amount of time, especially if you are not familiar with VI-based server), you would have to generate a column in your spreadsheet file to be the timestamp; it would be an artificial timestamp.
    What you could do in your application is first save the data to file, then read it from file, substitute the timestamp column with the "artificial" one, and then write it to the database; with that you would not need to modify this program.
    However, if you have the time and are willing to work with VI-based server, you could try to modify the example program to adapt it for your purposes.
    I hope it helps
    Good Luck
    Andre Oliveira
    Applications Engineer
    National Instruments
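
    For what it's worth, the "artificial timestamp" idea above can be sketched in SQL terms (an editorial illustration; the table logged_data(real_ts, tagname, value) and all names here are hypothetical, not part of the DSC example): replace the real, unevenly spaced timestamps with an evenly spaced sequence so consecutive datapoints plot at a constant interval.

    SELECT tagname,
           value,
           -- one artificial second per datapoint, counted from an arbitrary epoch
           TIMESTAMP '2010-01-01 00:00:00'
             + NUMTODSINTERVAL(ROW_NUMBER() OVER (PARTITION BY tagname ORDER BY real_ts) - 1,
                               'SECOND') AS artificial_ts
    FROM logged_data;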

  • Getting error while taking MAX DB trans log backup.

    Hi,
    I am getting an error while taking a trans log backup of the MaxDB database for archived logs through Data Protector, as below:
    [Critical] From: OB2BAR_SAPDBBAR@ttcmaxdb "MAX" Time: 08/19/10 02:10:41
    Unable to back up archive logs: no autolog medium found in media list
    But I am able to take complete data and incremental backups through Data Protector.
    I have already enabled autolog for the MAX DB database, and it is writing the log files directly to the HP-UX file system. Now I want to take a backup of these archived logs through Data Protector, i.e. through a trans log backup, so that once the trans log backup completes, the archived logs on the file system are deleted and I don't have to delete them manually.
    Thanks,
    Subba

    Hi Lars,
    Thanks for the reply...
    Now I am able to take an archive log backup, but the problem is that I can back up only one archive file, not the multiple archive log files generated by autolog on the file system, i.e. /sapdb/MAX/saparch.
    I have enabled autolog, and it is putting the auto log files in the Unix directory /sapdb/MAX/saparch.
    I am then using Data Protector 6.11 with a trans log backup to back up the archived files in /sapdb/MAX/saparch. When I start the trans backup session through Data Protector, it uses the archive stage command "archive_stage BACKDP-Archive LOGBackup NOVERIFY REMOVE". If /sapdb/MAX/saparch has only one archive file, it backs up and removes the file successfully. But if /sapdb/MAX/saparch has multiple archive files, it gives the error below:
      Preparing backup.
                Setting environment variable 'BI_CALLER' to value 'DBMSRV'.
                Setting environment variable 'BI_REQUEST' to value 'OLD'.
                Setting environment variable 'BI_BACKUP' to value 'ARCHIVE'.
                Constructed Backint for MaxDB call '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.bsi_in -c'.
                Created temporary file '/var/opt/omni/tmp/MAX.bsi_out' as output for Backint for MaxDB.
                Created temporary file '/var/opt/omni/tmp/MAX.bsi_err' as error output for Backint for MaxDB.
                Writing '/sapdb/data/wrk/MAX/dbm.ebf' to the input file.
                Writing '/sapdb/data/wrk/MAX/dbm.knl' to the input file.
            Prepare passed successfully.
            Starting Backint for MaxDB.
                Starting Backint for MaxDB process '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.bsi_in -c >>/var/opt/omni/tmp/MAX.bsi_out 2>>/var/opt/omni/tmp/MAX.bsi_err'.
                Process was started successfully.
            Backint for MaxDB has been started successfully.
            Waiting for the end of Backint for MaxDB.
                2010-09-06 03:15:21 The backup tool is running.
                2010-09-06 03:15:24 The backup tool process has finished work with return code 0.
            Ended the waiting.
            Checking output of Backint for MaxDB.
            Have found all BID's as expected.
        Have saved the Backup History files successfully.
        Cleaning up.
            Removing data transfer pipes.
                Removing data transfer pipe /var/opt/omni/tmp/MAX.BACKDP-Archive.1 ... Done.
            Removed data transfer pipes successfully.
            Copying output of Backint for MaxDB to this file.
    Begin of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
                #SAVED 1009067:1 /sapdb/data/wrk/MAX/dbm.ebf
                #SAVED 1009067:1 /sapdb/data/wrk/MAX/dbm.knl
    End of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
            Removed Backint for MaxDB's temporary output file '/var/opt/omni/tmp/MAX.bsi_out'.
            Copying error output of Backint for MaxDB to this file.
    Begin of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
    End of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
            Removed Backint for MaxDB's temporary error output file '/var/opt/omni/tmp/MAX.bsi_err'.
            Removed the Backint for MaxDB input file '/var/opt/omni/tmp/MAX.bsi_in'.
        Have finished clean up successfully.
    The backup of stage file '/export/sapdb/arch/MAX_LOG.040' was successful.
    2010-09-06 03:15:24
    Backing up stage file '/export/sapdb/arch/MAX_LOG.041'.
        Creating pipes for data transfer.
            Creating pipe '/var/opt/omni/tmp/MAX.BACKDP-Archive.1' ... Done.
        All data transfer pipes have been created.
        Preparing backup tool.
            Setting environment variable 'BI_CALLER' to value 'DBMSRV'.
            Setting environment variable 'BI_REQUEST' to value 'OLD'.
            Setting environment variable 'BI_BACKUP' to value 'ARCHIVE'.
            Constructed Backint for MaxDB call '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.bsi_in -c'.
            Created temporary file '/var/opt/omni/tmp/MAX.bsi_out' as output for Backint for MaxDB.
            Created temporary file '/var/opt/omni/tmp/MAX.bsi_err' as error output for Backint for MaxDB.
            Writing '/var/opt/omni/tmp/MAX.BACKDP-Archive.1 #PIPE' to the input file.
        Prepare passed successfully.
        Constructed pipe2file call 'pipe2file -d file2pipe -f /export/sapdb/arch/MAX_LOG.041 -p /var/opt/omni/tmp/MAX.BACKDP-Archive.1 -nowait'.
        Starting pipe2file for stage file '/export/sapdb/arch/MAX_LOG.041'.
            Starting pipe2file process 'pipe2file -d file2pipe -f /export/sapdb/arch/MAX_LOG.041 -p /var/opt/omni/tmp/MAX.BACKDP-Archive.1 -nowait >>/var/tmp/temp1283767880-0 2>>/var/tmp/temp1283767880-1'.
            Process was started successfully.
        Pipe2file has been started successfully.
        Starting Backint for MaxDB.
            Starting Backint for MaxDB process '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.bsi_in -c >>/var/opt/omni/tmp/MAX.bsi_out 2>>/var/opt/omni/tmp/MAX.bsi_err'.
            Process was started successfully.
        Backint for MaxDB has been started successfully.
        Waiting for end of the backup operation.
            2010-09-06 03:15:25 The backup tool process has finished work with return code 2.
            2010-09-06 03:15:25 The backup tool is not running.
            2010-09-06 03:15:25 Pipe2file is running.
            2010-09-06 03:15:25 Pipe2file is running.
            2010-09-06 03:15:30 Pipe2file is running.
            2010-09-06 03:15:40 Pipe2file is running.
            2010-09-06 03:15:55 Pipe2file is running.
            2010-09-06 03:16:15 Pipe2file is running.
            Killing not reacting pipe2file process.
            Pipe2file killed successfully.
            2010-09-06 03:16:26 The pipe2file process has finished work with return code -1.
        The backup operation has ended.
        Filling reply buffer.
            Have encountered error -24920:
                The backup tool failed with 2 as sum of exit codes and pipe2file was killed.
            Constructed the following reply:
                ERR
                -24920,ERR_BACKUPOP: backup operation was unsuccessful
                The backup tool failed with 2 as sum of exit codes and pipe2file was killed.
        Reply buffer filled.
        Cleaning up.
            Removing data transfer pipes.
                Removing data transfer pipe /var/opt/omni/tmp/MAX.BACKDP-Archive.1 ... Done.
            Removed data transfer pipes successfully.
            Copying output of Backint for MaxDB to this file.
    Begin of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
    End of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
            Removed Backint for MaxDB's temporary output file '/var/opt/omni/tmp/MAX.bsi_out'.
            Copying error output of Backint for MaxDB to this file.
    Begin of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
    End of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
            Removed Backint for MaxDB's temporary error output file '/var/opt/omni/tmp/MAX.bsi_err'.
            Removed the Backint for MaxDB input file '/var/opt/omni/tmp/MAX.bsi_in'.
            Copying pipe2file output to this file.
    Begin of pipe2file output (/var/tmp/temp1283767880-0)----
    End of pipe2file output (/var/tmp/temp1283767880-0)----
            Removed pipe2file output '/var/tmp/temp1283767880-0'.
            Copying pipe2file error output to this file.
    Begin of pipe2file error output (/var/tmp/temp1283767880-1)----
    End of pipe2file error output (/var/tmp/temp1283767880-1)----
            Removed pipe2file error output '/var/tmp/temp1283767880-1'.
        Have finished clean up successfully.
    The backup of stage file '/export/sapdb/arch/MAX_LOG.041' was unsuccessful.
    2010-09-06 03:16:26
    Cleaning up.
        Have encountered error -24919:
            Can not remove file '/var/tmp/temp1283767880-0'.
            (System error 2; No such file or directory)
        Could not remove temporary output file of pipe2file ('/var/tmp/temp1283767880-0' ).
        Have encountered error -24919:
            Can not remove file '/var/tmp/temp1283767880-1'.
            (System error 2; No such file or directory)
        Could not remove temporary output file of pipe2file ('/var/tmp/temp1283767880-1' ).
    Have finished clean up successfully.
    Thanks,
    Subba

  • The Job Failed Due to  MAX-Time at ODS Activation Step

    Hi
    I'm getting the errors "The Job Failed Due to MAX-Time at ODS Activation Step" and
    "Max-time Failure".
    How do I resolve this failure?

    Hi,
    You can check the ODS activation logs in the ODS batch monitor: click on that job >>> job log.
    First, check in SM37 how many jobs are running. If there are many other long-running jobs, the ODS activation will happen very slowly because the system is overloaded; the long-running jobs cause poor system performance.
    To check system performance, first check the lock waits in ST04 and see whether they are progressing or not.
    Check SM66 for the number of processes running on the system.
    Check ST22 for short dumps.
    Check OS07 for the CPU idle time; if it is less than 20%, the CPU is overloaded.
    Check SM21.
    Check the table space available in ST04.
    If the system is overloaded, the ODS won't get enough work processes to create the background jobs (like BI_BCTL*); the update will happen, but very slowly.
    In this case you can kill a few long-running jobs that are not important, and kill a few ODS activations as well.
    Don't run 23 ODS activations all at a time; run some of them at a time.
    The key points to check for data loading are ST22, the job in R/3, SM58 for tRFC, and SM59 for RFC connections.
    Regards,
    Shikha

  • Understanding Time Machine logs

    I posted a similar question earlier, but retracted it when I realized I didn't know enough about how Time Machine worked, and hard links in particular. Having done my research, I still have my questions. Looking at the log below:
    Dec 22 13:00:50 Feezmoe /System/Library/CoreServices/backupd[2134]: Backup requested by user
    Dec 22 13:00:50 Feezmoe /System/Library/CoreServices/backupd[2134]: Starting standard backup
    Dec 22 13:00:50 Feezmoe /System/Library/CoreServices/backupd[2134]: Backing up to: /Volumes/Desktop Backup/Backups.backupdb
    Dec 22 13:01:06 Feezmoe /System/Library/CoreServices/backupd[2134]: No pre-backup thinning needed: 113.3 MB requested (including padding), 399.90 GB available
    Dec 22 13:01:43 Feezmoe /System/Library/CoreServices/backupd[2134]: Copied 2576 files (20.0 MB) from volume Wintermoot.
    Dec 22 13:01:45 Feezmoe /System/Library/CoreServices/backupd[2134]: Copied 2588 files (20.0 MB) from volume Feezmoe.
    Dec 22 13:01:46 Feezmoe /System/Library/CoreServices/backupd[2134]: No pre-backup thinning needed: 100.0 MB requested (including padding), 399.87 GB available
    Dec 22 13:01:50 Feezmoe /System/Library/CoreServices/backupd[2134]: Copied 683 files (7 KB) from volume Wintermoot.
    Dec 22 13:01:51 Feezmoe /System/Library/CoreServices/backupd[2134]: Copied 695 files (7 KB) from volume Feezmoe.
    Dec 22 13:01:55 Feezmoe /System/Library/CoreServices/backupd[2134]: Starting post-backup thinning
    Dec 22 13:02:10 Feezmoe /System/Library/CoreServices/backupd[2134]: Deleted backup /Volumes/Desktop Backup/Backups.backupdb/Feezmoe/2007-12-21-090046: 399.91 GB now available
    Dec 22 13:02:10 Feezmoe /System/Library/CoreServices/backupd[2134]: Post-back up thinning complete: 1 expired backups removed
    Dec 22 13:02:10 Feezmoe /System/Library/CoreServices/backupd[2134]: Backup completed successfully.
    Dec 22 13:49:50 Feezmoe /System/Library/CoreServices/backupd[2240]: Backup requested by automatic scheduler
    Dec 22 13:49:50 Feezmoe /System/Library/CoreServices/backupd[2240]: Starting standard backup
    Dec 22 13:49:50 Feezmoe /System/Library/CoreServices/backupd[2240]: Backing up to: /Volumes/Desktop Backup/Backups.backupdb
    Dec 22 13:50:01 Feezmoe /System/Library/CoreServices/backupd[2240]: No pre-backup thinning needed: 2.07 GB requested (including padding), 399.91 GB available
    Dec 22 13:59:39 Feezmoe /System/Library/CoreServices/backupd[2240]: Copied 20841 files (1.7 GB) from volume Wintermoot.
    Dec 22 13:59:49 Feezmoe /System/Library/CoreServices/backupd[2240]: Copied 20853 files (1.7 GB) from volume Feezmoe.
    Dec 22 13:59:52 Feezmoe /System/Library/CoreServices/backupd[2240]: No pre-backup thinning needed: 115.4 MB requested (including padding), 398.14 GB available
    Dec 22 14:00:14 Feezmoe /System/Library/CoreServices/backupd[2240]: Copied 1870 files (12.8 MB) from volume Wintermoot.
    Dec 22 14:00:16 Feezmoe /System/Library/CoreServices/backupd[2240]: Copied 1882 files (12.8 MB) from volume Feezmoe.
    Dec 22 14:00:25 Feezmoe /System/Library/CoreServices/backupd[2240]: Starting post-backup thinning
    Dec 22 14:00:25 Feezmoe /System/Library/CoreServices/backupd[2240]: No post-back up thinning needed: no expired backups exist
    Dec 22 14:00:25 Feezmoe /System/Library/CoreServices/backupd[2240]: Backup completed successfully.
    I have several questions.
    The first is why Time Machine is backing up so much data. As you can see, on Dec 22 at 13:59:39 Time Machine backed up 1.7 GB of data. I know for a fact that I did not add anywhere near 1.7 GB of data to my machine in the time between the last backup and that one. Where did Time Machine get that number? Is it backing up files I'm not aware of but which need backing up, like system files or the swap files from /private/var/vm?
    Second, in the list are two drives, Wintermoot and Feezmoe. Wintermoot is my main drive, while Feezmoe is a storage drive. Because of this, the contents of Feezmoe do not change very often. However, Time Machine always seems to back up the same amount of data from both, even when nothing has been added to Feezmoe. Is Time Machine actually wasting space in this way, or is there something I don't know about how Time Machine presents information in the log which is confusing me?
    Third, there always seem to be two backup cycles for each timestamp. Is the second cycle perhaps related to the Spotlight database, or do I have a setting in my Time Machine set up which needs to be changed?
    I want to say that, overall, I am delighted with Time Machine's performance, but I want to make sure it is working properly and that I have set it up properly.

    Is there actually a problem, i.e. is your backup folder growing too quickly?
    I have just looked at my TM log, and in the same line as you are quoting, my latest backup said 214.7 MB. Now my entire backup folder is not much greater than 40 MB, so the numbers being produced by the log are clearly wrong. It could be that the log is confused by the multiple hard links used by TM, but I'm guessing.
    TM seems to be working fine for me (not that I've restored anything hugely complicated to date).

  • Elapsed time between 2 points

    Hello,
    I'm trying to measure the time between 2 datapoints.
    When the data acquisition begins, the time should be saved, and again when the signal reaches 90% of its max.
    Those 2 times then get subtracted, and then you have the elapsed time.
    But I'm not quite sure how to do this... I was thinking of flat sequences.

    Are you constantly capturing a signal?  What is your sample rate?
    What you need to do first is capture the signal.  You can then find the maximum value with the Array Max & Min function.  Calculate your 90% level (0.9 * max).  Search the data until you reach a value greater than or equal to the 90% mark and get that sample number.  Your time is then the number of samples to the 90% mark divided by the sample rate (in Samples/second).
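    Since a LabVIEW block diagram can't be pasted as text, the same search logic can be sketched in SQL (an editorial illustration; the table samples(sample_no, value) and the 1000 S/s rate are assumptions, not from the original post):

    -- sample_no counts from 0 at the start of the acquisition;
    -- elapsed time = first index where value >= 90% of the max,
    -- divided by the sample rate (assumed 1000 Samples/second)
    SELECT MIN(sample_no) / 1000 AS elapsed_seconds
    FROM samples
    WHERE value >= 0.9 * (SELECT MAX(value) FROM samples);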

  • SQL Command to select MAX(date) & Max(time)

    I am having a really hard time getting a SQL command to return data with the max(date) & max(time) combination. I think I need a subquery but cannot figure it out. I have looked all over the internet but still don't understand it. Below is my command, which returns the max date & max time, but they are not from the same row of data. Please help.
    SELECT  "labor"."order-no", "labor"."oper-no", MAX("labor"."end-date") AS myDate, Max("labor"."end-time") AS myTime
    FROM   "E940LIVE"."PUB"."tm-log" "labor"
    WHERE  "labor"."order-no"='73153-bc' AND "labor"."company"='01'
    GROUP BY  "labor"."order-no", "labor"."oper-no"

    Progress does not support the DATEADD function. What if I forget that the number is a time and look at it just as a number? Here is my data and what I have tried (again, I don't understand the multiple-select concept yet).
    Data
    oper-no   end-date    end-time
      20      2/2/2010     41,975
      30      2/3/2010     45,906
      30      2/16/2010    32,941
      40      2/4/2010     46,099
      40      2/4/2010     50,227
      40      2/4/2010     59,466
      40      2/4/2010     62,024
      40      2/16/2010    43,838
      60      2/17/2010    32,679
      90      2/25/2010    35,270
    SQL Command
    SELECT a."oper-no", a."end-time", a."end-date"
    FROM   "E940LIVE"."PUB"."tm-log" a, (SELECT "end-time", max("end-date") AS max_date FROM "E940LIVE"."PUB"."tm-log" WHERE  "order-no"='73153-bc' AND "company"='01' GROUP BY "end-date", "end-time") b
    WHERE  a."end-time" = b."end-time" AND a."end-date" = b.max_date AND a."order-no"='73153-bc' AND a."company"='01'
    Result
    oper-no   end-date     end-time
      20      2/2/2010      41,975
      30      2/3/2010      45,906
      40      2/4/2010      50,227
      40      2/4/2010      46,099
      40      2/4/2010      59,466
      40      2/4/2010      62,024
      30      2/16/2010     32,941
      40      2/16/2010     43,838
      60      2/17/2010     32,679
      90      2/25/2010     35,270
    Desired Result
    oper-no   end-date    end-time
      20      2/2/2010     41,975
      30      2/16/2010    32,941
      40      2/16/2010    43,838
      60      2/17/2010    32,679
      90      2/25/2010    35,270
    Thanks for any and all help!
    Wayne
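
    One way to get the desired result in plain SQL-92 (a sketch, assuming your Progress version supports correlated subqueries; not tested against Progress): first pin each "oper-no" to its latest "end-date", then take the max "end-time" within that date.

    SELECT a."oper-no", a."end-date", MAX(a."end-time") AS "end-time"
    FROM   "E940LIVE"."PUB"."tm-log" a
    WHERE  a."order-no" = '73153-bc'
      AND  a."company" = '01'
      -- keep only rows on the latest end-date for this oper-no
      AND  a."end-date" = (SELECT MAX(b."end-date")
                           FROM   "E940LIVE"."PUB"."tm-log" b
                           WHERE  b."oper-no" = a."oper-no"
                             AND  b."order-no" = a."order-no"
                             AND  b."company" = a."company")
    GROUP BY a."oper-no", a."end-date"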

  • Message: "The database structure has been modified" every time I log to SAP

    Hello,
    "The database structure has been modified. In order to resume this process, all open windows will be closed". Every time I log to one of my companies in SAP Business One this message appears.
    I haven't installed any new addons and made no changes in database structure (and any other user hasn't done any changes), but this message appears always when I log to company for the first time (when I try to log on another user or log to another company there is no message). Can anyone help me with this problem?
    Best regards
    Ela Świderska

    Hi Ela Świderska,
    You may check this thread first:
    UDFs disappeared
    Thanks,
    Gordon

  • Every time I log onto Firefox it comes up in Dutch. This also affects everything, as it is all in Dutch... why? I am English!

    For the last couple of weeks, every time I log onto Firefox it is in Dutch. Someone is accessing my Facebook account from Amsterdam, and that also appears in Dutch. Hopefully that has been resolved, but it is a big worry. I did nothing to change my login to Firefox, so why is this happening? It is a major irritation!

    You can find the English Firefox version here:
    * Firefox 3.6: http://www.mozilla.com/en-US/firefox/all-older.html
    * Firefox 4.0: http://www.mozilla.com/en-US/firefox/all.html

  • Every time I want to log in to my Hotmail account I have to type my password over again, and every time I log in I try to save my password. I have tried hundreds of times, but it seems impossible. What am I doing wrong?

    When a person wants to log in to a Hotmail account to read or send an e-mail, one has to enter the e-mail address and password. There is also a small square you can mark if you want Firefox to save your password, so that you don't have to enter the password every time you log in to the e-mail account (in this case Hotmail).
    Every time I log in to Hotmail I mark the square so that Firefox will save my password, but my password has never been saved. I'm so tired of always having to enter my password, over and over again.
    What do I do wrong? How can I solve this problem?

    * Websites remembering you and logging you in automatically is stored in a cookie.
    * You need an allow-cookie exception (Tools > Options > Privacy > Cookies: Exceptions) to keep that cookie, especially for secure websites and if you let cookies expire when Firefox closes.
    * Make sure that you do not use [[Clear Recent History]] to clear the "Cookies" and the "Site Preferences".
    See also http://kb.mozillazine.org/Cookies

  • When I log on to Thunderbird, the inbox has the message 'building summary file'; this then takes ages. How do I stop it doing this every time I log on?

    Hello,
    This problem has only started happening in the last few days. Every time I log on to get emails, the inbox has a message saying 'building summary file'. It then takes ages to do this. I have tried deleting most of the inbox and compacting the file, but it still continues.

    Try restarting the operating system in '''[http://en.wikipedia.org/wiki/Safe_mode safe mode with Networking]'''. This loads only the very basics needed to start your computer while enabling an Internet connection. Click on your operating system for instructions on how to start in safe mode: [http://windows.microsoft.com/en-us/windows-8/windows-startup-settings-including-safe-mode Windows 8], [http://windows.microsoft.com/en-us/windows/start-computer-safe-mode#start-computer-safe-mode=windows-7 Windows 7], [http://windows.microsoft.com/en-us/windows/start-computer-safe-mode#start-computer-safe-mode=windows-vista Windows Vista], [http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/boot_failsafe.mspx?mfr=true Windows XP], [http://support.apple.com/kb/ht1564 OSX]
    If safe mode for the operating system fixes the issue, there is other software on your computer causing problems. Possibilities include, but are not limited to: AV scanning, virus/malware, and background downloads such as program updates.

Maybe you are looking for

  • Kits with Free Goods

    Can anybody help me please I need to customize free good and kits together , now im have a kit  with item category group LUMF and each material has a price  that is ok but besides of this  , if i have a promotion that has a  kit and this kit has a fr

  • Lookup Columns in newform in sharepoitn 2010

    I have lookup columns in the newform in SharePoint 2010.  I have a reset button on the form, which refreshes the fields using Form action.  All the other fields value is cleared on the form  except for the lookup columns. how to clear the lookup valu

  • Get RFCOPENEX return code long text

    Hi All. I want make an external connection with RFCOPENEX to SAP, but how can I get the message text to the return code directly from SAP? THX, Nils

  • Do not disconnect not going away

    I have a 60 gb video ipod. I have been uploading songs to my ipod before and all of a sudden, it stopped putting them on my ipod. I would watch it transfer in itunes but as soon as i took my ipod off the dock the songs wont be on there. And another t

  • Jdev 9.0.3.3 and struts wizard

    Hi ! I cannot create an ActionForm with the struts wizard with jdev 9.0.3 the wizard work fine ! Is there a way to make the new version work ? Thank you !