NW2004s Dynamic Job chains - RSPCRUNVARIABLES

Hi all,
we have to implement process chains per company code. It would be helpful to have just one single dynamic process chain per source system/client. (The BW objects exist per source system/client => minimum number of process chains.)
Is it possible to supply the process chain with information about the company code? Each step could then read the company code and restrict itself to this selection criterion (see the list below).
What is the table "RSPCRUNVARIABLES" for?
How can process steps share information across the process chain?
Any other ideas that would help to create dynamic process chains?
Any hints appreciated.
Marcus
mandatory process steps:
<b>InfoPackage</b>
a user exit for the selection could read the restrictions, if only the company code were known
<b>data transfer process</b>
a function for the filter criteria could read the restrictions, if only the company code were known
<b>ABAP program</b>
could read the respective company code without the need for a static variant
<b>deletion of PSA table</b>
no need for a restriction; delete all requests which have been updated successfully or have failed trying
<b>deletion of requests in InfoCubes</b>
a user exit, which determines the requests for deletion, could read the respective company code

Hi Chetan,
it seems you're familiar with the use of this table. Am I on the right track?
Can you please give me a hint on how to use these variables?
How do I create the entry (ABAP, API?) in the table, and
how can the process steps read the variable?
Are there any samples available?
I'm familiar with all kinds of user exits since 2.0B, but I have no clue here. There's nothing about these variables in the course materials! (Yes: flexible execution path, client dependency and so on.)
I also searched the online help.
http://help.sap.com/saphelp_nw70/helpdata/de/e3/e60138fede083de10000009b38f8cf/frameset.htm
Marcus

Similar Messages

  • Error while scheduling the Email Alert JOB chain

    Hi All,
    I have defined a job chain in CPS, and when I go to schedule it, it gives me an error message. We are using the trial version.
    Please find the log attached below.
    11:18:31 PM:
    JCS-111004: Queue ETD.sapetd00_Queue has no ProcessServer with the required JobDefinitionType/Service/Resource for Job 932 (submitted from ETD.Z_MONI_BATCH_DP copy from 2009/12/30 18:22:23,113 Australia/Sydney) (submitted from Job Definition ETD.Z_MONI_BATCH_DP (Copy from 2009/12/30 18:22:23,113 Australia/Sydney)): Job Definition Type CSH/Service PlatformAgentService/"Empty"
    JCS-102064: Job 934 (submitted from System_Mail_Send copy from 2009/12/29 17:54:16,608 Australia/Sydney) is global but refers (via Job) to an object in an isolation group
    JCS-102064: Job 934 (submitted from System_Mail_Send copy from 2009/12/29 17:54:16,608 Australia/Sydney) is global but refers (via Chain Step) to an object in an isolation group
    JCS-102064: Job 934 (submitted from System_Mail_Send copy from 2009/12/29 17:54:16,608 Australia/Sydney) is global but refers (via Parent Job) to an object in an isolation group Show error details
    Thanks
    Rishi Abrol

    Hi
    Are you logged into the correct isolation group?
    Ensure the process server is also assigned to the queue.
    Regards

  • Job Chaining and Quickcluster

    I always get
    Status: Failed - HOST [Macintosh.local] QuickTime file not found.
    after the first part of the job is successful.
    If I just submit with "This Computer" it works fine. The original file is ProRes 422; the first job uses ProRes 422 to scale it down to 480x270, and the second job compresses to h.264. I found some info on this board from 2008 saying that job chaining and QuickClusters don't work together. Is that still how it is? That's really useless...
    I also found this from Jan 2009
    David M Brewer said:
    The reason the second rendering is failing is... this has happened to me a few times until I figured it out... make sure you set the dimensions for the video in the h.264 settings, set to the same size as the ProRes dimensions.
    For the most part the dimensions are left blank for the second link, h.264. And don't use 100% of source; put the actual numbers into the spaces. When you link one video to another, the second codec doesn't know the settings you made for the first video.
    Also make sure you (at least check) set the audio for the second video. I usually have the ProRes do the audio conversion and just pass it through to the second video settings. Again, it can happen that the audio is disabled in the h.264 settings. This has happened a few times for me... Check and double-check your settings!
    He doesn't mention anything about with or w/o Quickclusters, but I tried what he said and could not get it to work with quickclusters...
    Anyone got any new info on this?

    Studio X,
    Thanks for taking the time to run some tests and post your results.
    I'm finding the same results converting ProRes422 to mp4, but...
    Other codecs are giving me very different results.
    I've run some random tests to try to get a grip on what's happening.
    First I was playing around with the number of instances. I've read here and on Barefeats that (at least for my model Mac Pro) the instances should be set to (# of processors / 2), so I've been using 4 for quite a while now and thought I'd test it for myself.
    A single 5min ProRes422 1920x1080 29.97 file to h.264
    This Computer- 15:28
    2 Instances- 14:56
    3 Instances- 13:52
    4 Instances- 14:48
    5 Instances- 13:43
    6 Instances- 13:48
    7 Instances- 13:58
    In this case 5 instances was the fastest, but not using a QuickCluster wasn't far off.
    A single 2m30s ProRes422 1920x1080 29.97 file to h.264
    This Computer- 3:19
    2 Instances- 3:45
    3 Instances- 3:45
    4 Instances- 3:45
    5 Instances- 3:50
    6 Instances- 4:00
    7 Instances- 4:00
    Interesting...not using a Quickcluster is fastest
    A single 2m30s ProRes422 1920x1080 29.97 file Scaled down using original codec
    This Computer- 5:20
    4 Instances- 4:10
    5 Instances- 4:10
    7 Instances- 4:11
    A single 1m30s ProRes422 1920x1080 29.97 file to mpeg-2
    This Computer- 2:12
    5 Instances- 2:10
    When QuickClusters are faster, 4-5 instances does seem to be the sweet spot (again, for my setup).
    In the mpeg-2 test I should have used a longer clip to get a better result, but it was getting late and I was just trying to get an idea of the codec's usage of my resources. I was also monitoring CPU usage with Activity Monitor in all tests.
    Now multiclip batches:
    I forgot to write down the length of the clips in this first test, but it consisted of 8 ProRes 422 clips: 3 about 1m long and the rest between 13s and 30s.
    8 ProRes 422 clips to mp4
    This Computer- 11:25
    4 Instances- 5:16
    Same results as Studio X
    Next tests with 5 clips(total 1m51s)
    5 ProRes 422 clips to h.264
    This Computer- 5:00
    4 Instances- 4:52
    5 ProRes 422 clips to mpeg-2
    This Computer- 2:55
    4 Instances- 3:01
    5 ProRes 422 clips to DV NTSC
    This Computer- 6:40
    4 Instances- 5:12
    5 ProRes 422 clips to Photo Jpeg
    This Computer- 2:44
    4 Instances- 2:46
    I re-ran the last test with 7 clips because of the time it took to reassemble the segmented clips.
    7 ProRes 422 clips to Photo Jpeg(total 3m14s)
    This Computer- 4:43
    4 Instances- 3:41
    One last test,
    A single ProRes 422 clip to Photo Jpeg(4:05;23)
    This Computer- 5:52
    4 Instances- 4:10
    Let me start off by saying it is clear that there are many factors that affect compression times, such as the number of clips, length of clips, and codecs, but here are some of the things I noted:
    1) Some codecs themselves seem to be "more aware" of the computer's resources than others.
    When I compress to h.264 w/o a cluster it will use about 80-85% of all resources
    When I compress to h.264 with a cluster it will use about 90-95% of all resources
    When I compress to PhotoJpeg w/o a cluster it will use about 20-25% of all resources
    When I compress to PhotoJpeg with a cluster it will use about 80-85% of all resources
    2) The time it takes to reassemble clips can be quite long and could affect overall speed.
    In the very last test, compressing a single file to PhotoJpeg using 4 instances took 4m10s. Watching Batch Monitor, I noted that it took 2m0s to compress and 2m10s to reassemble. Wow...
    It would be interesting to see how the disassembly/reassembly of bigger and larger batches using clusters affects overall time. But that would take some time.
    I think the thing I will be taking with me from all of this is that your workflow is your own. If you want to optimize it, you should inspect it, test it and adjust it where it needs adjusting. Now, if anyone has the time and were to run similar tests with very different results, I'd love to know about it...

  • How to Schedule a Job Chain to start automatically on SAP CPS.

    Hi,
    I created a job chain and I want it to run automatically in SAP CPS Tuesday through Saturday at 6:00 a.m. I created a calendar in SAP CPS with these specific options, but the job chain doesn't start running. I don't know if I need to do something more, so if someone can give me a little help with this I will appreciate it a lot.
    Thanks,
    Omar

    It finished OK, but in the operator message I got the following:
    Unable to resubmit this job.
    Details:
    com.redwood.scheduler.api.exception.TimeWindowExpectedOpenWindowException: CalculateNextClose should only be called on an open time window
    at com.redwood.scheduler.model.method.impl.TimeWindowMethodImpl.calculateNextCloseIntersectionInt(TimeWindowMethodImpl.java:388)
    at com.redwood.scheduler.model.method.impl.TimeWindowMethodImpl.calculateNextCloseIntersectInt(TimeWindowMethodImpl.java:249)
    at com.redwood.scheduler.model.TimeWindowImpl.calculateNextCloseIntersectInt(TimeWindowImpl.java:212)
    at com.redwood.scheduler.model.method.impl.SubmitFrameMethodImpl.calculateNextInt(SubmitFrameMethodImpl.java:178)
    at com.redwood.scheduler.model.SubmitFrameImpl.calculateNext(SubmitFrameImpl.java:176)
    at com.redwood.scheduler.model.listeners.JobStatusChangePrepareListener.resubmitSubmitFrameJob(JobStatusChangePrepareListener.java:763)
    at com.redwood.scheduler.model.listeners.JobStatusChangePrepareListener.resubmitJob(JobStatusChangePrepareListener.java:637)
    at com.redwood.scheduler.model.listeners.JobStatusChangePrepareListener.processJobToFinalState(JobStatusChangePrepareListener.java:520)
    at com.redwood.scheduler.model.listeners.JobStatusChangePrepareListener.modelModified(JobStatusChangePrepareListener.java:233)
    at com.redwood.scheduler.persistence.impl.LowLevelPersistenceImpl.informListeners(LowLevelPersistenceImpl.java:728)
    at com.redwood.scheduler.persistence.impl.LowLevelPersistenceImpl.writeDirtyObjectListRetry(LowLevelPersistenceImpl.java:207)
    at com.redwood.scheduler.persistence.impl.LowLevelPersistenceImpl.access$000(LowLevelPersistenceImpl.java:38)
    at com.redwood.scheduler.persistence.impl.LowLevelPersistenceImpl$WriteDirtyObjectListUnitOfWork.execute(LowLevelPersistenceImpl.java:79)
    at com.redwood.scheduler.persistence.impl.PersistenceUnitOfWorkManager.execute(PersistenceUnitOfWorkManager.java:34)
    at com.redwood.scheduler.persistence.impl.LowLevelPersistenceImpl.writeDirtyObjectList(LowLevelPersistenceImpl.java:102)
    at com.redwood.scheduler.cluster.persistence.ClusteredLowLevelPersistence.writeDirtyObjectList(ClusteredLowLevelPersistence.java:59)
    at com.redwood.scheduler.model.SchedulerSessionImpl.writeDirtyListLocal(SchedulerSessionImpl.java:648)
    at com.redwood.scheduler.model.SchedulerSessionImpl.persist(SchedulerSessionImpl.java:626)
    at com.redwood.scheduler.apiint.model.UnitOfWorkManager.perform(UnitOfWorkManager.java:32)
    at com.redwood.scheduler.apiint.model.UnitOfWorkManager.perform(UnitOfWorkManager.java:13)
    at com.redwood.scheduler.jobchainservice.JobChainService.childJobFinalStatus(JobChainService.java:223)
    at com.redwood.scheduler.core.processserver.ProcessServerRuntime.childJobFinalStatus(ProcessServerRuntime.java:836)
    at com.redwood.scheduler.core.processserver.ProcessServerRuntime.onMessage(ProcessServerRuntime.java:248)
    at com.redwood.scheduler.infrastructure.work.MessageEnabledWork.run(MessageEnabledWork.java:104)
    at com.redwood.scheduler.infrastructure.work.WorkerImpl.run(WorkerImpl.java:109)
    at java.lang.Thread.run(Thread.java:534)

  • Can we schedule steps in Job Chain to run at a particular time of the Day.

    Hi ,
    We have created a job chain with 3 steps. Our requirement is that step 1 runs as per the schedule of the job chain, but we want step 2 to run on Friday at 2 GMT and step 3 to run on Saturday at 1 GMT.
    Is there any setting in the job chain so that we can schedule subsequent steps to run at a particular time?
    Regards
    Rajesh

    Hi,
    You can add a time window to the job definitions that you call in steps 2 and 3, to restrict the start times for these jobs to the desired time.
    Regards,
    Anton.
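    For an ad-hoc submission the same effect can also be scripted. Below is a minimal RedwoodScript sketch, reusing only the calls that appear in the RedwoodScript example further down this page (getJobDefinitionByName, prepare, setQueue, setTimeWindow, persist); the partition, job definition, queue and time window names are placeholders, and the time window is assumed to already exist with the desired opening times. For chain steps themselves, the time window is normally maintained on the job definition in the editor, as Anton describes.
    //Sketch with placeholder names: submit Job B so it only starts
    //once the existing time window (e.g. Friday 02:00 GMT) opens
    Partition part = jcsSession.getPartitionByName("GLOBAL");
    JobDefinition jobB = jcsSession.getJobDefinitionByName(part, "Z_JOB_B");
    TimeWindow fridayWindow = jcsSession.getTimeWindowByName(part, "TW_FRIDAY_02_GMT");
    Queue queue = jcsSession.getQueueByName(part, "System");
    Job job = jobB.prepare();
    job.setQueue(queue);
    //The scheduler holds the job until the time window opens
    job.setTimeWindow(fridayWindow);
    //persist() saves and submits the prepared job
    jcsSession.persist();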

  • Scheduling a BI job chain in Redwood

    The problem I am having is that we are trying to schedule a BI job chain via Redwood software and are not getting any response. Within Redwood, I have executed the jobs IMPORT_BW_CHAINS, IMPORT_BW_CHAIN_DEFINITION and IMPORT_BW_INFOPACKAGES using BI job chain 0fcsm_cm_10, which is defined in BI as a job chain. These jobs run to completion, but nothing is moved into Redwood to schedule, as you would see from an import of a CCMS job. When I run job RUN_BW_CHAIN using the same BI job chain ID, I receive the error below. Not sure what I'm missing in the process to get the BI job chains scheduled with Redwood.
    ORA-06502: PL/SQL: numeric or value error
    ORA-06512: at "RSI.RSIEXEC", line 1638
    ORA-06512: at "RSI.RSIEXEC", line 1759
    ORA-06512: at "RSI.RSI_RUN_BW_CHAIN", line 21
    ORA-06512: at "RSI.RSI_RUN_BW_CHAIN", line 80
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_SYS_SQL", line 1200
    ORA-06512: at "SYS.DBMS_SQL", line 323
    ORA-06512: at "SYSJCS.DDL", line 1085
    ORA-06512: at "SYSJCS.DDL", line 1118
    ORA-06512: at "SYSJCS.DDL", line 1177
    ORA-06512: at line 3
    JCS-00215: in statement RSOJ_EXECUTE_JOB

    I am also seeing the same issue.
    Anton, here is the last information you requested:
    The following products are installed in the Cronacle repository:
    Product                                  Version    Status
    Cronacle for SAP solutions               7.0.3      Production 
    Cronacle Forecast Module                 7.0.3      Production 
    Cronacle Reports Module                  7.0.3      Production 
    Cronacle &module Module                  7.0.2      development
    Cronacle Mail Module                     7.0.3      Production 
    Cronacle Audit Module                    7.0.2 r2.2 Production 
    Cronacle Process Manager for Web         7.0.3      Production 
    Cronacle Module Installer                7.0.3      Production 
    Cronacle Repository                      7.0.3.34   Production 
    Cronacle Monitor Module                  7.0.3      Production

  • Error in redwood job chain for Infopackage

    Hi,
    We have recently installed Redwood for handling SAP jobs and are able to run all job chains whose steps are ABAP programs successfully.
    However, the APO and BW job chains have an intermediate step that executes a BW InfoPackage, and there the job fails with the error below:
    SAP/BW Error Message: rfc call failed 089: Job BI_BTC<infopackage_name>has not (yet ?) been started
    The preceding ABAP job steps execute successfully. After the InfoPackage step fails, all subsequent steps fail as well.
    This problem is common to all job chains containing an InfoPackage.
    Any help is greatly appreciated.
    Regards,
    Sandeep.

    Hello Anton,
    We are facing the same problem: same log error message.
    The InfoPackage is correctly started and finished in BW.
    Here are our versions:
    Redwood Explorer 7.0.4.2 SP2
    BW: SAP_BASIS 70016, SAP_BW 70018
    Do you think applying SAP CPS SP3 would solve the problem?
    Or can we solve it by modifying some specific parameters?
    Thanks in advance.
    Regards;
    Mathieu

  • Backing up Jobs, Chains and Programs in Oracle Job Scheduler

    What is the best way to back up Jobs, Chains and Programs created in the Oracle Job Scheduler via Enterprise Manager, and what is the best way to get them from one database to another? I am creating quite a long chain which executes many programs in our test database and wish to back everything up along the way. I will also then need to migrate to the production database.
    Thanks for any advice,
    Susan

    Hi Susan,
    Unfortunately there are not too many options.
    To back up a job you can use dbms_scheduler.copy_job. I believe EM has a button called "create like" for jobs and programs, but I am not sure about chains; this can be used to create backups as well.
    A more general-purpose solution, which should also cover chains, is to do a schema-level export using expdp, i.e. a dump of an entire schema.
    e.g.
    SQL> create directory dumpdir as '/tmp';
    SQL> grant all on directory dumpdir to public;
    # expdp scott/tiger DUMPFILE=scott_backup.dmp directory=dumpdir
    You can then import into a SQL text file e.g.
    # impdp scott/tiger DIRECTORY=dumpdir DUMPFILE=scott_backup SQLFILE=scott_backup.out
    or import into another database (and even another schema) e.g.
    # impdp scott/tiger DIRECTORY=dumpdir DUMPFILE=scott_backup
    Hope this helps,
    Ravi.

  • SQL Developer 3.1: Job Chains - 2 Bugs.

    Hi.
    Just having a play with defining job chains in SQL Developer 3.1 and I came across these bugs...
    1) When you try to define a new chain using SQL Developer you get, "ORA-01741: illegal zero-length identifier"
    If you look on the SQL tab you can see that it is messing up the double quotes, placing two sets of double quotes around the chain_name.
    BEGIN
        sys.dbms_scheduler.create_chain(
            comments => '',
        chain_name => '"TEST".""test_chain_3""');
      sys.dbms_scheduler.enable(name=>'"TEST".""test_chain_3""');
    END;
    If you remove the extra quotes and run the code, it works fine.
    2) The job chain diagram seems to be case sensitive as far as the rule definitions are concerned.
    If you follow the chain setup in SQL*Plus described here (http://www.oracle-base.com/articles/10g/SchedulerEnhancements_10gR2.php#job_chains), then check the associated diagram in SQL Developer it is displayed properly.
    Now repeat the chain definition from SQL*Plus, but this time define the conditions and actions of the rules in lower case (which is valid and functions as expected).
    BEGIN
      DBMS_SCHEDULER.define_chain_rule (
        chain_name => 'test_chain_1',
        condition  => 'TRUE',
        action     => 'start chain_step_1',
        rule_name  => 'chain_rule_1',
        comments   => 'First link in the chain.');
      DBMS_SCHEDULER.define_chain_rule (
        chain_name => 'test_chain_1',
        condition  => 'chain_step_1 completed',
        action     => 'start chain_step_2',
        rule_name  => 'chain_rule_2',
        comments   => 'Second link in the chain.');
      DBMS_SCHEDULER.define_chain_rule (
        chain_name => 'test_chain_1',
        condition  => 'chain_step_2 completed',
        action     => 'start chain_step_3',
        rule_name  => 'chain_rule_3',
        comments   => 'Third link in the chain.');
      DBMS_SCHEDULER.define_chain_rule (
        chain_name => 'test_chain_1',
        condition  => 'chain_step_3 completed',
        action     => 'END',
        rule_name  => 'chain_rule_4',
        comments   => 'End of the chain.');
    END;
    /
    Now check the diagram again in SQL Developer and you will see that all the steps are present, but the links (rules) are not correctly displayed. Seems like it is case sensitive...
    If you need any more info, please ask.
    Cheers
    Tim...
    Edited by: TimHall on Feb 10, 2012 4:23 PM

    I have confirmed your two test cases are indeed bugs and I have raised them. The first seems to occur if you have a mixture of _ and lowercase characters in the name, but we will look at other cases too. The second is that the parsing of the condition/action does not uppercase unquoted identifiers; it can be circumvented by using quotes, etc.

  • How to setup job chain with time dependant steps

    Hi
    I need to set up the following job chain, with each subsequent job step waiting until a specified time before starting:
    Step 1: Job A, time 07:30; if complete, go to job step 2
    Step 2: Job B, start time 09:00; if complete, go to job step 3
    Step 3: Job C, start time 09:30 (end of chain)
    I have tried to use a precondition in steps 2 and 3, but that just caused those steps to be skipped.
    Can somebody point me in the right direction?
    Thanks in advance
    Jon

    Hi Anton/Babu
    Thanks for your help. I managed to set this up by embedding multiple job chains, each with its own time window, in a master job chain.
    As we continue our rollout, I'm not sure how scalable the solution will be, as you have to create multiple job chains for essentially the same job, with the only difference being a specific time window.
    I guess only time will tell.
    Thanks for your help again, guys.
    Jon

  • Restart/resubmit job chain waiting of a file event

    Hi Experts,
    how can I restart/resubmit a job chain that is waiting on a file event?
    In my case the variables OriginalPath and FinalPath are used in the job chain parameters as a Default Expression (waitEvents...). If I restart the job chain, I get the message "Could not evaluate default value for parameter...".
    We are using SAP CPS Build M28.20-37214.
    Regards
    Mathias

    Hi,
    You probably want to restart it because there was a failure somewhere in the chain. In this case it is easier to not let the chain end in error, but to let it go to status Console using the "Request Restart" postcondition.
    Then when the step is restarted, or the chain is restarted from the beginning, it is restarted within the original chain and it reuses the already determined parameter values.
    Regards,
    Anton.

  • Creating a dynamic job in OEM

    Hi,
    I need to run an SQL script every night across all of my database targets. My problem is that the data contained in the script will change every day, so I need some way of creating a dynamic job. Any ideas on this?
    Could I create/submit the job using emcli called from a Korn shell cron job, for example?
    Any other suggestions?

    If you execute a script from the filesystem, it needs to be available on the Agent-side host.
    It would therefore be better to include the script itself in the job specification; you can then decide to store the job in the Job Library. The script is then stored in the library in a central location, and you can execute it on any host target you like.
    Regards
    Rob
    http://oemgc.wordpress.com

  • Steps max limit  in a Job Chain

    Hello experts,
    SAP recommends keeping the number of steps in a job chain to a minimum for performance reasons.
    Can anybody tell me: is there any maximum limit on the number of steps in a job chain?
    Thanks ,
    Suresh Bavisetti

    Can you reference the documentation where you see a minimum number of steps in a job chain recommended? I think there is no upper limit on the number of steps in a chain. How many steps are you thinking of adding?
    Or are you talking about nesting job chains within other job chains? Please clarify if this is the case.
    Rgds,
    David Glynn

  • Submit Multiple Job Definitions/Job Chains with same Time window/Submit frame in mass

    Hi,
    We have a requirement to submit multiple job definitions/job chains which share a common time window/submit frame/queue.
    For example, we have over 50 different jobs/job chains which run Monday to Friday every 2 hours on the same queue "XXX_Queue". Instead of submitting each job/job chain manually, we would like to know if we could use a script to achieve this, since we have a couple of other jobs which fall under the same scenario.
    We are on version M33.104. Please let me know if anyone has any scripts or an alternate way of submitting multiple jobs/job chains in mass.
    Thanks in advance!
    Nidhi.

    Hi Nidhish,
    Here is some code to set some stuff on a job:
    //Get the partition, for global this is not necessary as global is default
    Partition part = jcsSession.getPartitionByName("GLOBAL");
    //Get the job definition
    JobDefinition jobdef=jcsSession.getJobDefinitionByName(part, "System_Info");
    //Get the submit frame
    SubmitFrame sf = jcsSession.getSubmitFrameByName(part, "SF_Every_Year");
    //Get the time window
    TimeWindow tw = jcsSession.getTimeWindowByName(part, "System_Week_WorkingHours");
    //Set the start time
    DateTimeZone dtz = new DateTimeZone(2015, 10, 18, 15, 0, 0, 0);
    //Get the Queue
    Queue SystemQ=jcsSession.getQueueByName(part, "System");
    //Create the Job
    Job infoJob=jobdef.prepare();
    //Attach queue to job
    infoJob.setQueue(SystemQ);
    //Attach submit frame, time window, start time
    infoJob.setSubmitFrame(sf);
    infoJob.setTimeWindow(tw);
    infoJob.setRequestedStartTime(dtz);
    //Print out the jobid of the job
    jcsOut.println(infoJob.getJobId());
    //Submit the job
    jcsSession.persist();
    Regards,
    HP
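    Since the requirement is to submit 50+ definitions in one go, the single-job snippet above can be wrapped in a loop. The sketch below is built only from the calls HP already uses; the job definition names in the array and the submit frame name are placeholders that would need to be replaced with the real object names in your repository.
    //Sketch: mass-submit several job definitions with the same queue,
    //submit frame and time window (object names are placeholders)
    String[] names = { "Z_JOB_CHAIN_01", "Z_JOB_CHAIN_02", "Z_JOB_CHAIN_03" };
    Partition part = jcsSession.getPartitionByName("GLOBAL");
    Queue queue = jcsSession.getQueueByName(part, "XXX_Queue");
    SubmitFrame sf = jcsSession.getSubmitFrameByName(part, "SF_Every_2_Hours");
    TimeWindow tw = jcsSession.getTimeWindowByName(part, "System_Week_WorkingHours");
    for (int i = 0; i < names.length; i++) {
      //Prepare one job per definition and attach the shared objects
      JobDefinition jobDef = jcsSession.getJobDefinitionByName(part, names[i]);
      Job job = jobDef.prepare();
      job.setQueue(queue);
      job.setSubmitFrame(sf);
      job.setTimeWindow(tw);
    }
    //A single persist() saves and submits all prepared jobs
    jcsSession.persist();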

  • Job Chain in RAC Environment

    Hi,
    I have a job chain defined in a RAC environment.
    I need all the steps of my chain to be performed on the same instance.
    How can I do this?
    Currently stepA runs on instance 1 and stepB runs on instance 2.
    Thanks.

    Again, if it is a file system write issue, use ACFS (11.2.0.x) to create a shared file system that all nodes can see. What are you using to delete/create these files? A shell script?
    1) create an ACFS file system of sufficient size to handle your data
    2) mkdir /some/acfs/location
    3) using sqlplus create a database directory " create directory foo as '/some/acfs/location' "
    4) put the file in this location
    5) use UTL_FILE to delete AND create the file (see fremove() - http://docs.oracle.com/cd/E11882_01/appdev.112/e10577/u_file.htm)
