JFS creation timestamp

Greetings!
I am new to Arch, but not necessarily to Linux.  I have recently moved back from the Windows world and would like to bring over all of my data files and documents from NTFS.  The problem I am having with this involves creation timestamps, which I would like to preserve on the Linux system.  Strangely, many Linux filesystems do not track this piece of information about a file.  I found that JFS has many benefits, including creation-timestamp metadata.  I set up a spare drive with JFS and rsynced my NTFS data over to it, only to find I couldn't see how to get or set this creation timestamp through 'ls' or otherwise.  I did find a thread somewhere that mentioned you could use jfs_debugfs to see the creation timestamp value as di_otime if you provided that tool the file's inode.  But it is in hexadecimal format, and this seems a rather convoluted way to access this information, not to mention doing anything programmatically on a mass scale with the creation timestamps of many files.
Is there any higher level way to get and set the creation timestamp data on a file in JFS?  Getting would be sufficient if I could verify that the creation timestamp is getting preserved properly after the rsync from NTFS to JFS.
Thank you for any assistance.

Doesn't look promising, but this is interesting: http://www.mail-archive.com/jfs-discuss … 00419.html
General Googling says Linux (and other Unices) doesn't preserve creation time. It does, however, store ctime: the time the file's metadata was last modified. Things like renames and permission changes will also update this, though.
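For a quick sanity check, GNU stat can report a birth (creation) time where the kernel and filesystem expose one through statx(); whether the Linux JFS driver wires di_otime through to that interface is something I haven't confirmed, so treat this as a probe, not a promise:

```shell
# Check whether stat(1) can see a birth (creation) time on this filesystem.
# %W prints the birth time in seconds since the epoch, or 0 when the
# kernel/filesystem doesn't report one via statx().
f=$(mktemp)
stat --format='birth=%W mtime=%Y ctime=%Z' "$f"
rm -f "$f"
```

If %W comes back 0, the only route left on JFS appears to be jfs_debugfs's di_otime display, as noted above.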

Similar Messages

  • How to set different creation timestamps in the Incidents coming from different countries?

    Hello experts, I need that when a user calls our Service Desk to create an incident, the incident gets the creation timestamp of the user's country. We support three countries from our service desk, and in order to calculate SLAs and have local factory calendars for each country, we need things this way. Is it possible?
    Best Regards,
    Paul

    Hi Paul,
    what about the Option of determining one of three SLA Service Products (with assigned Service + Response Profile) via the Country of the Reporter?
    Please check the following BadI:
    CRM_SLADET_BADI with Method CRM_DETERMINE_PROFILE
    Regards,
    Robert

  • Problem with Time Capsule under Samba + timestamp

    I'm using a Time Capsule with a Linux machine via Samba. I have found that there seems to be no way (over Samba) to copy a file to the Time Capsule and maintain its creation date.
    The same rsync command, rsync -av ./folder_source /Volumes/TimeCapsule/, behaves differently on macOS over AFP (copies the file creation date correctly) and over Samba (doesn't copy the file creation date, and instead uses the current time).
    This means I can't use rsync with Linux, as every time it tries to copy everything again instead of only copying new files.
    I tried doing the same command on a Windows Samba share, and it correctly maintains the file date. Is this a Time Capsule behaviour bug? Is there any config I can make to change this?
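Tangentially, if the share mangles only timestamps (not contents), rsync itself can be told to tolerate that; a minimal sketch using the real --modify-window option, demonstrated on temporary directories rather than an actual TC mount:

```shell
# --modify-window=2 treats mtimes within 2 seconds of each other as equal,
# which helps on FAT/SMB shares with coarse timestamp resolution.
# (--size-only would skip the timestamp comparison entirely.)
command -v rsync >/dev/null || { echo "rsync not installed; skipping demo"; exit 0; }
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/file.txt"
rsync -a --modify-window=2 "$src/" "$dst/"
ls "$dst"
rm -rf "$src" "$dst"
```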

    So this is probably a dead thread, but I came across this same problem and want to note what I did to solve a very similar problem. For me it was copying from the TC, via samba, to a linux box using rsync.
    The key resolution: use cifs, not smbfs, as the mount type. When I used smbfs, the files on the TC didn't report the proper timestamps; some were stating the year 1940 (which isn't even any epoch date). Also, for files over 2G the dates were garbled (perhaps an overflow in the stat struct passed from the TC), and they wouldn't copy anyway due to smbfs file size limitations.
    Here is a sample cifs fstab line:
    //10.0.1.1/TimeCapsuleShare /mnt/timecapsule cifs ro,noauto,user,credentials=/etc/samba/tc.creds,lfs,uid=farl,gid=farl,filemode=0770,dirmode=0770 0 0
    This does not solve the funky creation timestamp issue: most of my creation dates, when checked over SMB, were the FAT32 epoch of Jan 1 1980, but were correct when checked over AFP from a Mac. I'm uncertain whether this is an SMB deficiency or a TC bug, but I didn't look too much into it, since by default rsync uses sizes and modification dates (AFAIK), so it shouldn't matter.
    Also, if you were like me and only discovered this after copying huge amounts of data that you don't want to recopy, here's a quick script I found and modified to change local timestamps to match what's on the TC and prevent rsync from recopying files that shouldn't be recopied:
    #!/usr/bin/perl -w
    # Assumes the TC is mounted on /mnt/timecapsule.
    # Assumes /mnt/store/timecapsule/weekly.0/store/ is the rsync backup of
    # /mnt/timecapsule/store/, the directory you want to pull the true timestamps from.
    use strict;

    # Bail out if the Time Capsule share is not mounted.
    if (`mount` !~ m/timecapsule/) {
        printf("srcmount not mounted\n");
        exit 1;
    }

    my $srcdir = "/mnt/timecapsule/store/";
    my $dstdir = "/mnt/store/timecapsule/weekly.0/store/";

    my @files = `find $srcdir -type f -print`;
    chomp @files;

    foreach my $file (@files) {
        # stat field [9] is st_mtime; format it for touch's -d option.
        my $mtime = localtime((stat($file))[9]);
        # Map the source path onto the backup path (\Q..\E escapes regex metachars).
        $file =~ s/\Q$srcdir\E/$dstdir/;
        printf qq!touch -m -d "$mtime" "$file"\n!;
        my $sout = `touch -m -d "$mtime" "$file"`;
        printf $sout;
    }
    exit 0;
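For anyone without Perl handy, roughly the same mtime copy can be sketched with coreutils alone via touch -r; the directory layout below is a stand-in for illustration, not the TC paths above:

```shell
# Copy each source file's mtime onto the file at the same relative path
# under dst, using touch -r (take timestamps from a reference file).
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/a.txt"
touch -d '2010-01-02 03:04:05' "$src/a.txt"
cp "$src/a.txt" "$dst/a.txt"              # plain cp gives dst the current mtime
( cd "$src" && find . -type f -print ) | while read -r f; do
    touch -r "$src/$f" "$dst/$f"          # stamp dst with src's mtime
done
stat --format='%Y' "$src/a.txt" "$dst/a.txt"   # should print the same value twice
rm -rf "$src" "$dst"
```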

  • Calendar modification timestamps

    If multiple people have access to a shared calendar, is there any way to
    see who created a specific appointment? Also, is there a way to see
    who has modified an appointment? I've tried looking at numerous
    columns which sound appropriate like author, creator, etc, but none
    contain the information I'm looking for.
    Along those same lines, I see that there is a 'created' field which
    contains the creation timestamp, and a 'modified' field which also
    contains the creation timestamp and doesn't change even if the
    appointment has been modified. Is there a column I can view that
    shows when the appointment was last modified?
    I need this information for a legal request. Even if the
    GroupWise client cannot provide this information, is it possible that
    it's stored in the database and could be queried in some other way, like
    SOAP for example?
    -Mike

    Looks like I didn't wait long enough for the modified field to show a
    time change so that's all good, but any help determining WHO modified an
    appointment would be really great.
    -Mike

  • ECCS timestamp of documents

    I'm looking at ECCS from a BW point of view.
    What I need to do is find a way of indicating the new entries in the ECCS summary table ECMCT.
    Do you know of a table which would have the key etc.
    Similar to the concept in FI where there is a document number and creation timestamp.
    Any ideas?

    Hi Paul,
    I am currently working on the EC-CS module. Since you have done the EC-CS extraction into BW, I have a question: what is the DataSource to extract the details/items of the EC-CS module?
    ECMCT is a totals table and ECMCA is the actuals table.
    3EC_CS_1A gets the data from ECMCT, but do you know the DataSource which extracts from the ECMCA table?
    Our client wants to see the data at the detail level rather than the totals level.
    Please advise.
    Thanks,
    Bala

  • Can not refresh snapshot changes after importing data of master site

    Hello !
    I have two database machines: one as the master site, one as the snapshot site. Because of a hard-disk error on the master machine, I used the exported data file to recover my database. After importing, I found I can't refresh the refresh group on the snapshot site. Can anyone tell me why?
    Thanks in advance!
    (exp system/manager full=y inctype=complete file='/home/save/backdata/xhsdcomp.dat')
    (imp system/manager inctype=system full=Y file='/home/save/backdata/xhsdcomp.dat'
    imp system/manager inctype=restore full=Y file='/home/save/backdata/xhsdcomp.dat')

    You haven't listed the errors that you're receiving when attempting to refresh your refresh group, but if your snapshots are attempting to fast refresh, I suspect it's because the creation timestamp of the snapshot log on the master site is newer than the creation timestamp of the snapshot. In this case you will need to do a complete refresh of the snapshot (or drop and recreate the snapshot) before you will be able to fast refresh it again.
    If this is not the case, please post the errors you are receiving when you attempt to refresh the refresh group.
    HTH,
    -- Anita
    Oracle Support Services

  • To create a utility program for queries

    Hello all,
    I have a requirement where I need to develop a utility program which should create a downloadable report with more or less the following content.
    System
    Client
    Query name
    Query description
    Creation timestamp
    Creator
    Infoset
    Tables involved
    If possible, we would like to know how many times each query was run and when it last ran.
    Can you please let me know where the queries are saved, and from where we can get the details for the above-mentioned fields (except System and Client)?
    Do feel free to add any other information you may consider relevant.
    Thanks in Advance,
    Rachana.

    Hi ,
    You can use AQGQCAT table to retrieve query details.
    Further tables on query are AQG*
    Hope this Helps
    Nag

  • Open a file from anywhere - with credentials?

    Hi;
    Is there a way to open a file from anywhere where I give the uri and uname/password and it gives me back a Stream to read? Including:
    1. ftp - with/without credentials
    2. http - anon, basic, digest, & windows authentication
    3. REST where you must do basic on the first request (anon returns an error message as an html page).
    4. share on the same domain - requires a different user's credentials.
    5. share on a different domain
    6. share on a workgroup computer
    7. share on a non-Windows server
    8. ???
    thanks - dave
    Edited by: [email protected] on Sep 15, 2010 2:04 PM

    [email protected] wrote:
    Hi;
    I just wanted to add the use case so you would understand why we need to handle all cases.
    I suggest that you actually ask customers what they want initially instead of assuming that you need all of them initially.
    We have a reporting system we sell, and one thing users can do is import images from an external source. Some users put their images on a web or FTP server requiring credentials. Others put it on a server that requires credentials. And we're now getting requests to pull from SharePoint.
    Fine. So you have the following task that has the following steps.
    1. Define source of image and where to store it.
    2. Create retriever based on 1.
    3. Run instance of 2
    4. Log errors from 3 or put retrieved file into location specified by 1.
    In the above step 2 is a plug in that represents different ways of retrieving information.
    And expanding on something that was said before you have a HUGE task in front of you in terms of supporting everything that you list above.
    For example using only ftp these are some of the things I see.
    1. There are at least three types of secure "ftp" servers. The protocol for handling security is different for each.
    2. Although FTP is a protocol, the implementation of how the servers work can be substantially different. General handling will likely require scripting for each targeted server.
    3. Locations on FTP servers are not necessarily simple. Examples:
    ...a. Login to server. Cd to a directory. Get a file which has a date as part of the name.
    ...b. Login to a server. Cd to a directory with a timestamp in the name. Get a file.
    ...c. Login to a server. Get a file which has a random name based on the last creation timestamp of each file.
    Note that FTP is actually easier than some of the other cases.
    And in the above I didn't deal with any of the possible errors.

  • Various Data Dictionary Views

    After posting something here a few days back about the myriad views needing to be digested for the Fund.I Exam, I have just gleaned this lot from the Couchman book. No doubt some of the pros out there may well correct me, but this is simply what I have collected from the book in the last 3 hours. Cheers.
    Dictionary Views
    Data Dictionary
    Which users are in the database password file:
    V$PWFILE_USERS
    Where values set in the init.ora file can be viewed – all parameters:
    V$PARAMETER
    Script used to create the objects that comprise the data dictionary:
    catalog.sql
    To grant a special role to users so they can look at DBA views:
    SELECT_CATALOG_ROLE
    Information about all database objects in the database:
    DBA_OBJECTS
    Information about all tables in the database:
    DBA_TABLES
    Information about all indexes in the database:
    DBA_INDEXES
    Information about all views (including dictionary views) in the database:
    DBA_VIEWS
    Information about all sequences in the database:
    DBA_SEQUENCES
    Information about all users in the database:
    DBA_USERS
    Information about all constraints in the database:
    DBA_CONSTRAINTS
    Information about all table columns that have constraints on them:
    DBA_CONS_COLUMNS
    Information about all columns that have indexes on them in the database:
    DBA_IND_COLUMNS
    Information about all columns in all the tables in the database:
    DBA_TAB_COLUMNS
    Information about all the roles in the database:
    DBA_ROLES
    Information about all object privileges in the database:
    DBA_TAB_PRIVS
    Information about all system privileges granted to all users in the database:
    DBA_SYS_PRIVS
    Displays all PL/SQL source code in the database:
    DBA_SOURCE
    Information about all triggers in the database:
    DBA_TRIGGERS
    Information about object privileges granted to roles
    ROLE_TAB_PRIVS
    Information about system privileges granted to roles
    ROLE_SYS_PRIVS
    Information about roles granted to roles
    ROLE_ROLE_PRIVS
    Information about all tablespaces in the database:
    DBA_TABLESPACES
    Information about all profiles in the database:
    DBA_PROFILES
    For all parameters?
    V$PARAMETER
    General information about the database mounted to your instance:
    V$DATABASE
    Most information about the performance of the database is kept here:
    V$SYSSTAT
    Most information about the performance for individual user sessions is stored here:
    V$SESSION , V$SESSTAT
    Information about online redo logs (2)
    V$LOG, V$LOGFILE
    Information about datafiles
    V$DATAFILE
    Basic information about control files, and the two columns it has:
    V$CONTROLFILE: STATUS / NAME
    An object you can query to obtain a listing of all data dictionary objects (4)
    CATALOG, CAT, DICTIONARY, DICT.
    When the control file was created, Sequence Number, most recent SCN:
    V$DATABASE
    Information stored in different sections of the control file, Sequence Number:
    V$CONTROLFILE_RECORD_SECTION
    To see the names and locations of all control files in the db? (2)
    V$PARAMETER, V$CONTROLFILE
    Tablespace and Datafiles
    Temporary Segments:
    Name, tablespace location, and owner of temporary segments:
    DBA_SEGMENTS
    Size of temporary tablespaces, current number of extents allocated to sort segments, and sort segment high-water mark information. Space usage allocation for temporary segments:
    V$SORT_SEGMENT
    Types of sorts that are happening currently on the database
    V$SORT_USAGE
    To see the username corresponding with the session:
    V$SESSION
    Information about every datafile in the database associated with a temporary tablespace:
    DBA_TEMP_FILES
    Similar to DBA_TEMP_FILES, this performance view gives Information about every datafile in the database associated with a temporary tablespace:
    V$TEMPFILE
    Storage Structures
    A summary view, contains all types of segments and their storage parameters, space utilization settings:
    DBA_SEGMENTS
    Tablespace quotas assigned to users:
    DBA_TS_QUOTAS
    Segment name, type, owner, total bytes of extent, name of tablespace storing the extent:
    DBA_EXTENTS
    The location and amount of free space by tablespace name:
    DBA_FREE_SPACE
    The location of free space in the tablespace that has been coalesced:
    DBA_FREE_SPACE_COALESCED
    Information about datafiles for every tablespace
    DBA_DATA_FILES
    Performance view for information for datafiles for every tablespace
    V$DATAFILE
    To see the total amount of space allocated to a table?
    DBA_EXTENTS
    Table creation timestamp, information about the object ID:
    DBA_OBJECTS
    High water mark, all storage settings for a table, and statistics collected as part of the analyze (for row migration) operation on that table
    DBA_TABLES
    Information about every column in every table:
    DBA_TAB_COLUMNS
    To determine how many columns are marked unused for later removal?
    DBA_UNUSED_COL_TABS
    To find the number of deleted index entries ?
    INDEX_STATS
    To determine the columns on a table that have been indexed:
    DBA_IND_COLUMNS
    The dynamic view to show whether the index is being used in a meaningful way?
    V$OBJECT_USAGE
    To see whether a constraint exists on a particular column?
    DBA_CONS_COLUMNS
    To see the constraints associated with a particular table:
    DBA_CONSTRAINTS
    To find the username, ID number, (encrypted) password, default and temporary tablespace information, user profile of a user, password expiry date:
    DBA_USERS
    To see all objects, which objects belong to which users, and how many objects a user has created:
    DBA_OBJECTS
    Resource-usage parameters for a particular profile:
    DBA_PROFILES
    Identifies all resources in the database and their corresponding cost:
    RESOURCE_COST
    Identifies system resource limits for individual users:
    USER_RESOURCE_LIMITS
    Shows all system privileges:
    DBA_SYS_PRIVS
    Show all object privileges:
    DBA_TAB_PRIVS
    Shows all privileges in this session available to you as the current user:
    SESSION_PRIVS
    Views for audits currently taking place are created by this script:
    cataudit.sql
    a list of audit entries generated by the exists option of the audit command:
    DBA_AUDIT_EXISTS
    A list of audit entries generated for object audits:
    DBA_AUDIT_OBJECT
    A list of audit entries generated by session connects and disconnects:
    DBA_AUDIT_SESSION
    A list of audit entries generated by statement options of the audit command:
    DBA_AUDIT_STATEMENT
    A list of all entries in the AUD$ table collected by the audit command:
    DBA_AUDIT_TRAIL
    To determine the roles available in the database, the names of all the roles on the database and if a password is required to use each role:
    DBA_ROLES
    Names of all users and the roles granted to them:
    DBA_ROLE_PRIVS
    All the roles and the roles that are granted to them:
    ROLE_ROLE_PRIVS
    Which system privileges have been granted to a role:
    DBA_SYS_PRIVS
    All the system privileges granted only to roles:
    ROLE_SYS_PRIVS
    All the object privileges granted only to roles:
    ROLE_TAB_PRIVS
    All the roles available in the current session:
    SESSION_ROLES
    Which object privilege has been granted to a role:
    DBA_TAB_PRIVS
    To display the value of the NLS_CHARACTERSET parameter:
    NLS_DATABASE_PARAMETERS

    You can also find a lot of stuff by doing:
    SELECT *
    FROM dictionary;

  • Build payments error

    I am trying to run a cheque payment batch but the Build payments request is erroring. I have run this successfully many times a month or so ago.
    I have tried searching metalink, on OTN and googling the error but no luck.
    Here is the log from the request:
    Payments: Version : 12.0.0
    Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
    IBYBUILD module: Build Payments
    Current system time is 10-DEC-2012 17:36:39
    **Starts**10-DEC-2012 17:36:40
    **Ends**10-DEC-2012 17:36:43
    BUILD PROGRAM ERROR - CANNOT COMPLETE CHECK NUMBERING
    Start of log messages from FND_FILE
    Enter Build :: Concurrent Request ID::4699011
    |STEP 1: Insert Payment Service Request:: Timestamp:10-DEC-12 05.36.40.324885000 PM +11:00|
    |STEP 2: Insert Documents :Timestamp:10-DEC-12 05.36.40.363742000 PM +11:00|
    |STEP 3: Account/Profile Assignment :Timestamp:10-DEC-12 05.36.42.262375000 PM +11:00|
    Request status after assignments: ASSIGNMENT_COMPLETE
    |STEP 4: Document Validation :Timestamp:10-DEC-12 05.36.42.305476000 PM +11:00|
    Request status after document validation: DOCUMENTS_VALIDATED
    |STEP 5: Document Re-Validation: Timestamp:10-DEC-12 05.36.42.425959000 PM +11:00|
    |STEP 6: Payment Creation :Timestamp:10-DEC-12 05.36.42.427070000 PM +11:00|
    Request status after payment creation: PAYMENTS_CREATED
    |STEP 7: Payment Re-Creation:Timestamp:10-DEC-12 05.36.42.942589000 PM +11:00|
    Final status of payment request 683 (calling app pay req cd: tdl chq 10-dec-12 #6) before exiting build program is PAYMENTS_CREATED
    |STEP 8: Check PICP Kickoff Flag: Timestamp:10-DEC-12 05.36.42.945151000 PM +11:00|
    |STEP 9: Payment Instruction Creation :Timestamp:10-DEC-12 05.36.42.947062000 PM +11:00|
    FV_FEDERAL_PAYMENT_FIELDS_PKG.GET_PAY_INSTR_SEQ_NUM: FV: Federal Enabled profile is not turned on
    Return status of payment instruction creation: S
    |STEP 10: Check Numbering : Timestamp:10-DEC-12 05.36.43.441961000 PM +11:00|
    After numbering, return status: E, and return message:
    Exception occured when numbering checks of payment instructions. Check numbering will be aborted ..
    SQLCODE: -20001
    SQLERRM: ORA-20001:
    Build program error: Exception occured when attempting to provide printed document numbers (check numbers) for the payments of the provided payment service request.
    End of log messages from FND_FILE
    Executing request completion options...
    Output file size:
    439
    Finished executing request completion options.
    Concurrent request completed
    Current system time is 10-DEC-2012 17:36:43
    ---------------------------------------------------------------------------

    Hi,
    Since you were able to run this successfully earlier, I would rule out the possibility of this being a bug.
    The second most likely cause is that the payment document used by this payment process request is also being used in another payment process request which is in incomplete status. So, make sure there are no other payment process requests pending apart from this one; either complete those or terminate them.
    The third possible cause is that the payment document has exhausted its sequence numbers, in which case you should either increase the sequence or use a different payment document which has enough sequence numbers available for generating checks.
    By sequence I am referring to the number of check leaves.
    If the above does not apply, then you would be better off logging a service request with Oracle, as it could be something new which many are not aware of.
    Regards,
    Ivruksha

  • How to find the last refresh of database?

    Hi,
    Can someone tell me how to find when the last refresh of a database was done?

    Now it is much better!!!
    1. If the cloning was done using a cold backup of the database, how can we know when the last cloning of the testing DB was done?
    If the cloned database has been opened with the RESETLOGS option, you can try checking V$DATABASE.RESETLOGS_TIME. If V$DATABASE.CREATED is not equal to V$DATABASE.RESETLOGS_TIME, there is a possibility that it was opened with the RESETLOGS option. I don't have the required setup to check and confirm this myself, but this is something you can give a shot.
    2. If the export/import utility was used for this purpose, how can we find out the last time the testing DB was 'refreshed'?
    If import was used, check the object creation timestamps of the majority of objects; maybe that will help you find the required information.
    HTH
    Thanks
    Chandra Pabba

  • Failed to delete file after processing FTP

    Failed to delete file after processing. The FTP server returned the following error message: 'com.sap.ai i.adapter.file.ftp.FTPEx: 550 Unexpected reply code *.txt: The process cannot access the file because it is being used by another process. '. For details, contact your FTP server vendor.
    I got this error many times for the same interface. Not sure what the reason for this is.
    Searching on the internet, I got comments that this is because of the FTP version!
    Please help

    It is the "Msecs to Wait Before Modification Check" setting in the Sender Adapter that ensures this. It works like this: PI starts processing, finds a file, then waits the number of milliseconds specified and checks the file again to see if it has changed over the waiting period. If so, it waits again to make sure the file is written completely. Only if no changes took place over the waiting period does it start processing the file.
    And the fact that your file was successfully processed at retry only confirms that it might have still been written to by the sender system. You can try comparing the file's creation timestamp (at OS level) with its processing start time in PI; this could prove me right.
    Edited by: Grzegorz Glowacki on Jan 13, 2012 2:15 PM
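    The wait-and-recheck logic described above can be sketched in shell; the one-second interval and the temp file are illustrative only:

```shell
# Poll a file's mtime, wait, and re-check: only treat the file as complete
# when the mtime is stable across the whole waiting period.
f=$(mktemp)
echo partial > "$f"
before=$(stat --format='%Y' "$f")
sleep 1                               # the "msecs to wait" interval
after=$(stat --format='%Y' "$f")
if [ "$before" = "$after" ]; then
    echo "stable - start processing"
else
    echo "still being written - wait again"
fi
rm -f "$f"
```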

  • SOA FTP Adapter Performance Optimization

    The application scenario is as below
    There is a remote FTP service which receives real-time XML files (may be 2 files per second). In my SOA 11g application, I am using FTP adapter to connect to the remote FTP server. The FTP adapter polls for new files based on the file creation timestamp and after processing, deletes the files. All good so far, no issues at all.
    However, there is a huge backlog in the files. How can I improve or speed up the FTP adapter process to match or better the remote FTP file creation rate? Please note I am not doing much processing here, just persisting the files to a database. The consuming application expects a real-time file feed.
    There should be some configuration in the SOA FTP adapter to optimize the performance and beat the file-spitting FTP monster.
    Thanks

    However, the inbound FTP location receives around 30 files a second max. How do I configure the FTP adapter to beat this rate? I looked through the document link and tried a few settings:
    MaxRaiseSize: 20
    SingleThreadModel: both true and false
    ThreadCount: 20
    However, I still cannot beat the rate at which the files are put to this location. What configuration changes should I consider?
    Thanks
    Edited by: user5108636 on 17/10/2010 21:34

  • ccBPM - ParForEach block not processed in parallel

    Hi Experts,
    I am using a ParForEach block in my integration process.
    Within the block I send several synchronous calls to a JDBC adapter to execute stored procedure calls.
    The stored procedure calls need to be executed in parallel.
    In the process list (transaction SWF_XI_SWI1) I can see that all the workitems are created at the same time.
    However, they stay in status READY and are processed by the PE runtime only sequentially.
    Any idea what the reason for this system behaviour is? Are there any workflow settings missing?
    thanks
    Barbara

    Hi,
    in the case of ParForEach, in the workflow log (view with technical details) the node structure of my block looks like this:
    --> Block1
        --> Branch 1
            --> Block1
                --> <my synchronous send step>
        --> Branch 2
            --> Block1
                --> <my synchronous send step>
        --> Branch 3
            --> Block1
                --> <my synchronous send step>
        --> Branch 4
            --> Block1
                --> <my synchronous send step>
        --> Branch 5
            --> Block1
                --> <my synchronous send step>
        --> Branch 6
            --> Block1
                --> <my synchronous send step>
    The inner Block1 nodes are the nodes where a workitem is created, so I have 6 different workitems created in my example.
    All these workitems have the same creation timestamp. But the nodes below (the send steps) have different (ascending) timestamps, which means that these nodes are processed sequentially.
    In case I change the mode to ForEach, the workflow log looks like this:
    --> Block1
        --> Block1
            --> Loop 1
                --> my synchronous send step
            --> Loop 2
                --> my synchronous send step
            --> Loop 3
                --> my synchronous send step
            --> Loop 4
                --> my synchronous send step
            --> Loop 5
                --> my synchronous send step
    Barbara

  • FileInfo.Delete method appears to run in background

    My app is set up to delete output files before regenerating them.  It appears that the delete operation is running on a background thread.  I am not certain what is happening, but here is what I do know.
    If the process that creates the data to be written out runs quicker than normal, the file creation timestamp remains unchanged.
    If I add a 1-second delay after deleting the file, the file creation timestamp is always the current time.
    My users rely on the file creation timestamp to locate the latest changes.
    Any ideas on how to change this behavior without adding a call to sleep()?
    Mac

    I think it's not about a background thread. Instead, you're being bitten unexpectedly by file system tunneling.
    Try to get the system administrator to disable it for you and see if the symptom disappears. (Mind you, a few common programming techniques rely on this behavior, so disabling it could break things. Always set up a test server to simulate normal operations for some time before deploying the change to production. If you confirm this is indeed the source of the issue but cannot verify the possible impact, you're advised to seek other techniques, like saving the timestamp in a database/NoSQL store.)
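    As a sleep-free variant of that last suggestion, one could record an explicit regeneration timestamp instead of trusting the filesystem's creation time; a sketch with a made-up sidecar-file convention (the ".generated" suffix is purely illustrative):

```shell
# Regenerate the output file and record an explicit timestamp in a sidecar
# file, so consumers read the sidecar instead of the (tunneled) creation time.
out=$(mktemp)
rm -f "$out"                          # delete before regenerating, as the app does
echo "new report data" > "$out"       # regenerate the output
date +%s > "$out.generated"           # explicit regeneration timestamp
cat "$out.generated"
rm -f "$out" "$out.generated"
```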
