File Dependency Question

I was wondering, when creating a new file dependency, can I use embedded Unix commands?
For example, I want to check for the existence of a file under a date-based directory structure:
/home/test/`date +%Y-\%m-\%d`/testfolder/testfile.csv
Essentially, that gets today's date in a specific format for a directory name. It's the same as doing an "ls" on that path and seeing if the file exists, except the shell interprets the `date +%Y-\%m-\%d` portion and substitutes the date in that format.
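In plain shell terms, the check I have in mind is something like this (the path is just my example):

# does today's dated folder contain the file?
FILE="/home/test/$(date +%Y-%m-%d)/testfolder/testfile.csv"
if [ -f "$FILE" ]; then
    echo "found $FILE"
else
    echo "missing $FILE"
fi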
Is this possible within the context of a file dependency?
Thanks

You can use the Tidal System Date variable to accomplish this.  Click the Variables button, System Variables, System Date, Date Format.  This will insert the system date when Tidal starts the new scheduled day.  This is probably at midnight, but if you have an offset schedule this could be at another time.  The important thing to remember is the date is when the current day's schedule goes live, not the date when the job's dependencies are met.
You can also use the System Variable, Production Date, Date Format if you use an offset schedule.  This will use the Production Date associated with the offset calendar.  For example, if you have offset your schedule to start at 8am and the current date at 8am is 10/28, then the production date is 10/28 from 8am today until 7:59:59am tomorrow.
Thanks.

Similar Messages

  • I got a new hard drive for my MacBook. I don't have the CD, but I do have a flash drive with the software I need. When I turn on my laptop it shows a file with a question mark. How can I install the software from the flash drive?


    Hold down the Option key while you boot your Mac. Then, it should show you a selection of devices. Click your flash drive and it will boot from that.

  • File Dependency not working properly

    Hi,
    In our project we have a job group which will move a set of files from one directory to another. The first job in that group will run only when a file with the extension "ind" is present in the source directory, i.e. the dependency is set to a file with extension *.ind (please see the attached screenshot for reference).
    But when we run the job the dependency is not being satisfied even though the file is present in the source directory.
    What could be the reason for this?
    Note: if I override the job, the files are moved properly. But since this is automated, the job run must be based on the dependency we have set.

    This sounds like the runtime user of the job is a different account than used by the agent.
    File Dependencies and File Events are evaluated by the agent process.  This means the account running the agent service must have access to the file.
    When the job runs it uses the runtime user.  If the runtime user is a different account than the agent account, you can encounter the problem you describe.
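    A quick sanity check on a Unix agent is to list the file as the account the agent service runs under; the account name and path below are only examples:
    su - tidalagent -c 'ls -l /data/source/*.ind'   # must succeed for the agent to see the file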
    If this is a Windows agent running as a Local System account, the agent will only have access to files local to the server.  So if the file is on another server the agent will not have access to it.
    If this isn't your issue, could you provide details about the agent (windows/unix), file location (local to agent/UNC path), and whether the agent running the job is the same as the agent being used to evaluate the file dependency.
    Thanks.

  • HT1752 My MacBook on startup shows a file with a question mark, what now?

    My Mac shows a file with a question mark, what now?

    Hold the power button down to force an emergency-use-only hardware shutdown.
    Press and hold the Option/alt key on the built-in keyboard, then boot the machine.
    If MacintoshHD appears, select it and click the arrow, then in System Preferences > Startup Disk reset that. Done.
    Sometimes a NVRAM reset is required then the above done again.
    Folder with question mark issue
    Step by step to fix your Mac
    If MacintoshHD doesn't appear, or if it boots to a gray screen or shows other issues, then it's a more complicated fix, but once inside OS X reset the Startup Disk as above.
    Gray, Blue or White screen at boot, w/spinner/progress bar
    Step by step to fix your Mac
    You might need this if you don't have a recent backup and your problem is software related, and Disk Utility can't fix the drive and recommends you back up, erase and install:
    Create a data recovery/undelete external boot drive
    If Disk Utility + Hardware Test shows no boot drive, then you have a dead drive or cable, or Mac issue.
    My computer is not working, is my personal data lost?

  • Splitting a file depending on the number of occurrences of an element

    Hi
    I have a scenario where I need to split a file depending on the number of occurrences of element e1.
    I have an input XML file with the following structure:
    <ROOT>
    <root1 attribute>
       <field1> </field1>
       <e1>
           <field2> ....</field2>
       </e1>
        <e1>
            <field2> .......</field2>
       </e1>
    </root1>
    </ROOT>
    Element e1 has a "1 to unbounded" occurrence.
    I would like the output file structure to be:
    <ROOT>
    <root1 attribute>
       <field1> </field1>
       <e1>
           <field2> ....</field2>
       </e1>
    </root1>
    </ROOT>
    The key thing is that for every occurrence of element e1 an output file will be generated, and the attribute value of root1 and the element position will be included in the file name.
    Thanking you in advance.
    Regards
    Piyush

    Hi Piyush,
    Create a message mapping for the splitting you want.
    Under the Messages tab:
    Source Message: your message type, for example
    <ROOT>
    <root1 attribute>
    <field1> </field1>
    <e1>
    <field2> ....</field2>
    </e1>
    <e1>
    <field2> .......</field2>
    </e1>
    </root1>
    </ROOT>
    Note: occurrence is 1
    Target Message: your message type, for example
    <ROOT>
    <root1 attribute>
    <field1> </field1>
    <e1>
    <field2> ....</field2>
    </e1>
    </root1>
    </ROOT>
    Note: occurrence is 0 to unbounded
    In the Design tab do your graphical one-to-one mapping.
    Note: <e1> of the source (0 to unbounded) should be mapped to <ROOT> of the target (0 to unbounded).
    Then you can test your mapping by giving it your test data.
    While creating the Interface Mapping, please make sure that the target interface occurrence is 0 to unbounded.
    That's it, you are done with it.
    In BPM, make sure you use a transformation step to convert the source message to the target messages, and use a multiline container element with a ForEach block to collect the messages.

  • File and question mark when starting

    Flashing file and question mark when I start my computer

    There are four general causes of this issue:
    1. The computer's PRAM no longer contains a valid startup disk setting when there aren't any problems with the disk itself. This can be checked for by pressing the Option key and seeing if the drive appears.
    2. The internal drive's directory structure has become damaged. This requires usage of an alternate bootable system to perform the repair.
    3. Critical system files have been deleted. This requires usage of an alternate bootable system to reinstall them.
    4. The internal drive has died or become unplugged. This is the most likely case if the computer took a sharp impact or there are unusual sounds coming from its location.
    (103563)

  • .AVI file format question

    Upgrading from CS2 to Photoshop CS5
    I found out after the fact that in order to edit video you need CS5 Extended.
    So, I then went and returned regular and upgraded to Photoshop CS5 extended.
    (See my other thread for details, but installing the 12.01 patch is not an option at this time.)
    Try to use file open to open a .AVI file from one of my cameras and get a message of "unrecognized file format".
    Was able to import the 7 second video into layers.
    Checked support issues, and installed Quicktime 7.6.8
    Also, due to 12.01 issues, I reinstalled Photoshop CS5 Extended AFTER upgrading QuickTime to 7.6.8.
    Machine specs:  Toshiba Satellite P30  Intel Pentium 4 CPU 3.60 GHZ (Hyperthreading Dual Core) 2GB RAM ATI Mobility Radeon X600 Win XP Professional SP3 About 12 GB disk space available at install time.
    Note that Quicktime will open the .AVI file in question with no problems, as well as Roxio software (trying to get rid of that with CS5)

    I talked to support via phone and it appears the problem is in the installation.  The install of CS5 Extended did not take over the regular install.  So I need to start by completely uninstalling AGAIN and installing Extended from scratch AGAIN.
    I did try the program mentioned, GSpot, as well.  It says AVI codecs are installed on the machine in question.  I will get more information tonight after I do the re-install and see what happens.

  • Dependency Question: jnc from AUR

    Hi,
    I do "maintain" one package in AUR (jnc) and have a dependency question about this package:
    I built an extra package for x86_64 because of some dependency issues on my machine, but after uploading I realised that the issues come from the juniper binary and not from my package.
    What is the right way? It is just a perl script, so the dependencies are just perl and the other system commands used in the script; or is it OK to say that to use the perl script you need these juniper binaries, and if you have an x86_64 machine you need these extra dependencies to run the juniper binary?
    The missing dependencies are 'lib32-glibc' 'lib32-zlib' ...
    Cheers yannsen

    You can combine both architectures in one package; use something like this:
    # the 32-bit libs are only needed on x86_64, where the juniper binary runs as a 32-bit program
    depends=('libglade')
    if [[ ${CARCH} == "i686" ]]; then
        depends+=('glibc')
    else
        depends+=('lib32-glibc')
    fi
    Edit:
    Chances are the i686 package also needs glibc and zlib as dependencies; namcap can help with determining that.
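    For example, pointing namcap at the PKGBUILD and at the built package (file name below is just a placeholder) will suggest missing or superfluous depends:
    namcap PKGBUILD
    namcap jnc-1.0-1-any.pkg.tar.xz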
    Last edited by Lone_Wolf (2012-09-05 09:56:36)

  • File Serializable question

    hi,
    I know I've seen plenty of posts asking how to transfer files. My question is: why doesn't simply sending a File object from the client to the server work, since File implements Serializable? Is it because the contents don't travel with the object? I'm pretty sure the object will go across, via socket programming or RMI, but why not the contents, or will they?
    Thank you.

    "Because the File object, to the best of my knowledge, does not hold the contents; it just contains information about the file, such as its path."
    Very true. I never thought about it in this depth before, but the API (http://java.sun.com/j2se/1.4.1/docs/api/java/io/File.html) says:
    "An abstract representation of file and directory pathnames."
    That's all it is, a representation, not an actual file.
    Cheers,
    Radish21

  • Checkpointing - control file contents question

    Some clarification is needed if possible...
    When you commit a transaction:
    - commit scn is recorded in the itl of the data block and undo segment header
    - lgwr records the committed scn (for all data blocks involved) to the redo log
    Checkpoint Event
    - (3 seconds or possibly less passes by) CKPT wakes up and signals DBWn to write dirty (modified and committed)
    blocks to disk
    - CKPT records the scn of those blocks in the control file (data file and redo thread sections) and the data file
    header (task of checkpoint when a log switch occurs)
    - Checkpoint position in the Redo Log is forwarded
    Control file contents question:
    When LGWR writes the commit scn to the redo log, who writes the scn to the control file? LGWR or CKPT?
    Also, when is the redo thread scn written?
    Matt

    Matt,
    This is my understanding of the stuff. Feel free to correct me.
    The checkpoint SCN, as I mentioned in my last reply, is the marker of the point up to which the data has been checkpointed to the datafiles. In the case of a crash, this marker tells Oracle where to start recovery of the datafile and how far to go in the redo stream. It is only available in the datafile header and in the controlfile; it doesn't get recorded in the redo log file/stream.
    I mentioned the checkpoint queue in my reply too. Though I couldn't find any reference directly linking it to the checkpoint SCN, I believe my theory is, if not totally, at least partially correct. The incremental checkpoint is what decides how many redo blocks need to be applied to the datafile if it is closed without a proper checkpoint. This is maintained in the datafile header itself in the form of the checkpoint SCN; when it doesn't match the controlfile checkpoint SCN, which is always higher, a recovery is reported.
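    If you want to see the two copies of that SCN for yourself, here is a quick sketch (v$datafile shows the controlfile's copy, v$datafile_header the copy in each file header; connect however you normally would, "/ as sysdba" is just an example):
    echo '
    select file#, checkpoint_change# from v$datafile        order by file#;
    select file#, checkpoint_change# from v$datafile_header order by file#;
    ' | sqlplus -s / as sysdba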
    I hope its somewhat correct. Do let me know your views too.
    Cheers
    Aman....

  • Can't log in white screen with flashing file and Question mark

    I've been having trouble getting online, and the screen keeps freezing up. Now I have a solid white screen with an icon of a file and a question mark flashing on it. I've tried unplugging and resetting with no luck.

    Have a look at > A flashing question mark appears when you start your Mac
    Dennis

  • File dependency fulfilled, but service still offline.

    A service test-app is enabled, and its file dependency is absent.
    It goes into offline state as one would expect.
    # svcs -x /site/test-app
    State: offline since Wed Jun 21 22:59:52 2006
    Reason: Dependency file://localhost/opt/test/test-app.cfg is absent.
    Impact: This service is not running.
    Then I made the file available and expected that the service would now try to go online. But nothing happened. It is still offline...
    # svcs -x /site/test-app
    State: offline since Wed Jun 21 22:59:52 2006
    Reason: Unknown.
    Impact: This service is not running.
    In the output of svcs -x the Reason has changed to "Unknown".
    A svcs -l shows that the file dependency is online:
    svcs -l /site/test-app
    enabled      true
    state        offline
    next_state   none
    state_time   Wed Jun 21 22:59:52 2006
    dependency   require_all/none file://localhost/opt/test/test-app.cfg (online)
    Why doesn't SMF try to start the service?
    If I do a svcadm refresh /site/test-app or svcadm disable/enable the service goes online.
    A svcadm restart /site/test-app doesn't change anything.
    In the smf-man page it says:
    Service instances may have dependencies on services or files.
    When the dependencies of an enabled service are not satisfied, the service is kept in the offline state.
    When its dependencies are satisfied, the service is started.
    If the start is successful, the service is transitioned to the online state.
    I also tried to replace the file-dependency with a service-dependency, and then it worked just perfect.
    Please tell me if I have completely misunderstood this, or if it could be a bug with the file-dependencies.
    Brgds,
    /JK

    While that bug exists and explains why "restart" didn't behave the same as "disable/enable" the other problem isn't strictly a bug, but a limitation.
    SMF does not continuously re-evaluate file dependencies to see if it should online/offline services. So touching a file is never supposed to bring a service online.
    File dependencies are only evaluated during service transitions, not continuously like service dependencies are. Doing that has been discussed, but constantly reading the filesystem tends to be very expensive. The documentation might need to be more explicit about this.
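    So in practice, once the file exists you have to nudge SMF yourself, for example:
    svcadm refresh /site/test-app     # forces re-evaluation (disable/enable also works)
    svcs -l /site/test-app            # state should now move to online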
    Darren

  • Log file sync question

    Metalink note 34592.1 has been mentioned several times in this forum as well as elsewhere, notably here
    http://christianbilien.wordpress.com/2008/02/12/the-%E2%80%9Clog-file-sync%E2%80%9D-wait-event-is-not-always-spent-waiting-for-an-io/
    The question I have relates to the stated breakdown of 'log file sync' wait event:
    1. Wakeup LGWR if idle
    2. LGWR gathers the redo to be written and issue the I/O
    3. Time for the log write I/O to complete
    4. LGWR I/O post processing
    5. LGWR posting the foreground/user session that the write has completed
    6. Foreground/user session wakeup
    Since the note says that the system 'read write' statistic includes steps 2 and 3, the suggestion is that the difference between it and 'log file sync' is due to CPU related work on steps 1, 4, 5 and 6 (or on waiting on the CPU run queue).
    Christian's article, quoted above, theorises about 'CPU storms' and the Metalink note also suggests that steps 5 and 6 could be costly.
    However, my understanding of how LGWR works is that if it is already in the process of writing out one set of blocks (let us say associated with a commit of transaction 'X' amongst others) at the time another transaction (call it transaction 'Y') commits, then LGWR will not commence the write of the commit for transaction 'Y' until the I/Os associated with the commit of transaction 'X' complete.
    So, if I have an average 'redo write' time of, say, 12ms and a 'log file sync' time of, say, 34ms (yes, of course these are real numbers :-)) then I would have thought that this 22ms delay was due at least partly to LGWR 'falling behind' in its work.
    Nonetheless, it seems to me that this extra delay could only be a maximum of 12ms, so this still leaves 10ms (34 - 12 - 12) that can only be accounted for by CPU usage.
    Clearly, my analysis contains a lot of conjecture, hence this note.
    Can anybody point me in the direction of some facts?

    Tony Hasler wrote: (full question quoted above)
    It depends on what you mean by facts - presumably only the people who wrote the code know what really happens; the rest of us have to guess.
    You're right about point 1 in the MOS note: it should include "or wait for current lgwr write and posts to complete".
    This means, of course, that your session could see its "log file sync" taking twice the "redo write time" because it posted lgwr just after lgwr has started to write - so you have to wait two write and post cycles. Generally the statistical effects will reduce this extreme case.
    You've been pointed to the two best bits of advice on the internet: As Kevin points out, if you have lgwr posting a lot of processes in one go it may stall as they wake up, so the batch of waiting processes has to wait extra time; and as Riyaj points out - there's always dtrace (et al.) if you want to see what's really happening. (Tanel has some similar notes, I think, on LFS).
    If you're stuck with Oracle diagnostics only then:
    redo size / redo synch writes (for sessions) will tell you the typical "commit size"
    (redo size + redo wastage) / redo writes (for lgwr) will tell you the typical redo write size
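    As a rough sketch of pulling those numbers (instance-wide averages from v$sysstat, using the statistic names above):
    echo "
    select round(max(decode(name, 'redo size', value)) /
                 max(decode(name, 'redo synch writes', value)))  avg_commit_size,
           round((max(decode(name, 'redo size', value)) +
                  max(decode(name, 'redo wastage', value))) /
                 max(decode(name, 'redo writes', value)))        avg_redo_write_size
    from v\$sysstat
    where name in ('redo size','redo synch writes','redo wastage','redo writes');
    " | sqlplus -s / as sysdba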
    If you have a significant number of small process "commit sizes" per write (more than CPU count, say) then you may be looking at Kevin's storm.
    Watch out for a small number of sessions with large commit sizes running in parallel with a large number of sessions with small commit sizes - this could make all the "small" processes run at the speed of the "large" processes.
    It's always worth looking at the event histogram for the critical wait events to see if their patterns offer any insights.
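    For the histograms, a quick sketch (bucketed waits from v$event_histogram):
    echo "
    select wait_time_milli, wait_count
    from   v\$event_histogram
    where  event = 'log file sync'
    order  by wait_time_milli;
    " | sqlplus -s / as sysdba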
    Regards
    Jonathan Lewis

  • File size questions

    New to Apple Mac and iPhoto... so... I shot my new camera in JPEG and the files are roughly 20 MB. Then I shot in RAW, and after editing, the saved JPEGs are ~5-10 MB and up to 15 MB. What happens to make this discrepancy? How could my RAW-converted JPEGs be ~1/3 the size of the straight-out-of-camera JPEGs?
    Another question: iPhoto is 1.7 GB, my iPhoto Library is 37 GB, but my pictures in iPhoto total 17 GB. Is the discrepancy due to stuff like slideshows I created? Faces just references the files, right, not making duplicate files? Having trouble wrapping my head around stuff I'm seeing that doesn't add up...
    thanks

    It's all a bit more complex than that. There is no correlation between file size and the quality of a shot. A well exposed shot of 2 or 3 MB will print just as well as a well exposed shot of 25 MB. There is no difference in what gets printed. A poorly exposed shot of 25 MB will print like garbage.
    It gets worse: we used to suggest a rule of thumb that printing at 300 dpi was a reasonable guide to printing quality. Not any more. Printers and cameras have improved; iPhoto books are uploaded at something akin to 150 dpi.
    Again, a 3 MB jpeg will print exactly as well as a 30 MB tiff.
    Remember, the internet is full of high quality images that weigh in at a lot less than 500 KB.
    Where file size comes into play is when you're using destructive editing on a lossy format, like JPEG, and, as I said above, that doesn't come into play in a non-destructive editing system like iPhoto.
    The output from the export will - depending on the settings you use - be either smaller or larger than the JPEG from the camera. It means very little - unless you're going to go on and edit it destructively.

  • AVCHD in Premiere Pro CS 5 - exporting file type questions

    Firstly, this is my first post on here, so apologies if I'm breaking any kind of etiquette. Also, I'm not a very technical person and would appreciate any comments or advice to be in layman's terms!
    So, I film and edit weddings as part of a small company. I've recently had my camera and editing system upgraded to a Sony NXcam and Premiere Pro CS5 and I'm having a little trouble getting used to the new file types and settings (previously used miniDV tapes in SD with a Canon XL2 in an older version of Premiere).
    My pc handles the AVCHD footage fine but the way that I edit means that I'll want to export a sequence of edited clips as one long clip, and then re-import it. For example, I'm editing the wedding ceremony from footage from two cameras - I'm going to edit the full version of the ceremony and then I'm also going to edit it down into a version that is a highlights montage in slo-motion to music. The way I do this is once I'm happy with the full version of the ceremony I'll export it and then re-import it so it's one long clip I can play with rather than two sets of footage chopped up on a sequence. So I can alter the speed of this new long clip, and then chop it up and stick in some dissolves to put together the montage. Then I use this montage in the middle of my final film, between the preparations and the reception. The full, edited version of the ceremony is used as an extra on the DVD.
    Previously this was fairly straight forward. I was always working with footage in 16:9 and could export it as an avi file, then re-import that avi file with no problems - presto; the exact same-looking footage as I was previously working with.
    Now I'm editing with avchd mts footage (1920x1080 - 25p - square pixels?). Naturally, if I export it as a 16:9 avi and re-import it, the aspect ratio and frame size is all wrong for the project sequence.
    I just want to know what the best file format and settings are to use to export footage that will match up.  There is no option that I can see to export edited footage as mts files. There is a long list of file types, most of which I don't recognise or know anything about and have never needed to use in the past.
    Sorry if this is a silly question, I've tried searching for "working with AVCHD in premiere pro cs5" in google and not been able to find anything that can help me on there, so I thought I'd try a forum like this.
    Cheers
    Adam

    I think we may have a slight miscommunication.  This is the way that I interpreted your original post.
    Let's say you shot the Smith/Jones wedding this past weekend.  You start a new project and have a Service Sequence and a Reception Sequence.  You finish editing both the service and reception as just a standard full-length video.  Now you want to create a montage with clips from both the service and reception.  In this montage you will have different effects like black and white, slo-mo, blurs, etc.  So what you do is export the service and reception, then import those two files (or one long one), cut that apart, and add your music and effects to make your montage.  Is that correct?
    The way that I do it works the exact same way except that you skip the export part.
    When I edit a wedding I generally have 4 sequences when I am finished.  I have a Pre Wedding montage, service, reception, and ending highlights.  The Pre Wedding montage is just made up of shots from before the service (bride and bridesmaids getting ready, shots of the venue, groom and groomsmen and so on).  Then I have a full edit of the service and a full edit of the reception.  After I am finished with those 3 sequences I start working on the ending highlight collage.  I make a new sequence for the collage, then I go to my bin and drag the "Service Sequence" up into the preview monitor (I know in my previous post I said double click. That does not work, because that just opens it up as a sequence. Sorry about that.)  In the preview monitor I scrub through the footage until I find a section that I want in the montage.  Then I set in and out points around that clip, drop it into my Montage timeline, add whatever effects I want, then move on to the next clip and do the same thing.  To me it seems that we have the same process except that you have to wait for the export (which is always going to give you a quality loss; how much of a loss depends on format and settings).
    Please let me know if I have misread your process.
    Phil
