How to submit a job into a job group

Using job events/actions, we occasionally need a scheduled job to submit another job into the current schedule. The issue is that the submitted jobs go straight into the highest-level view. We will need to do this over 400 times in one night and do not want to flood the operators' view, so we are simply trying to find a way to submit a job so that it shows up within an already existing group. Any ideas?

It’s just a thought, but you could:
Job Group A
                Jobaa – run
                Jobab – echo a blank file to a shared directory, for example: /C "echo '' > \\USNPKETLP03\Infa_Shared\File_Inbound\zcmtyr_eu.csv"
Job Group B
                Jobba – run dependent on zcmtyr_eu.csv being present. Use the check box "Rerun each time the dependencies are met".
                Jobbb – delete the echo file (both command steps are sketched below).
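A minimal sketch of the two command steps, assuming a Windows agent and the share path above:

    REM Jobab - drop a flag file that Job Group B waits on
    cmd /C "echo. > \\USNPKETLP03\Infa_Shared\File_Inbound\zcmtyr_eu.csv"

    REM Jobbb - remove the flag file after Jobba has run
    cmd /C "del \\USNPKETLP03\Infa_Shared\File_Inbound\zcmtyr_eu.csv"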

Similar Messages

  • How to run a job group mult x per day M-F but only 1x on the weekend

    I currently have a job group that runs M-F, repeating several times each day. I'd now like this same group to also run on Sat and Sun, but only once on those days. Is this possible with a single job group configuration, or must I create a second copy of the group and apply a weekend calendar with no repeat? I'm trying to avoid copying the group because I assume any changes I make to the group I'd then have to make twice. One approach I'm trying to figure out is whether I can trigger the group to run on Sat and Sun via a variable trigger, hoping this will not cause it to repeat in that case.

    Michael,
    What version are you running?
    In 5.31, what I would do is have the main set run M-F, then use a different job (like a cmd echo or PowerShell Write-Host) on weekends that inserts the M-F set at the time you want, using a job event / job insert action. This overrides the time and executes when you want (you could probably just use one for Sat and Sun). We do similar things to avoid maintenance windows.
    You might lose any downstream dependencies if the original is in a nested group, but even that might work with a slight modification to the job dependency using the "match occurrence" check box (relative to group; otherwise, for day) option.
    It would be best if Tidal let you add multiple calendars to one job; not sure if that is in the works, but it should be on their radar.
    Marc

  • How to turn mailing lists into address book groups

    I'm looking for better ways to manage groups in the address book.
    I especially need a way to add the recipients on a distribution list for a particular piece of email to a particular group. I have a bunch of implicit groups I've created that way in Gmail, and I'd like to create explicit address book groups out of my (informal) email distribution lists. Mail.app does a good job of downloading the mails via IMAP or POP, but now I want to select a message and say "Add everyone who received this mail to my alumni [or whatever] group in the address book."
    I'm also looking for other ideas, tips, scripts for managing groups in my address book.

    OK, how about you open your CSV file in Excel and put a tag identifying the custom label in front of each of the items (so e.g. 05991444555 with label "special" becomes sp-05991444555), then do your import. After the import, run an AppleScript to strip the tag and reset the label; a rough sketch follows.
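    A rough sketch of that AppleScript, assuming the tag is "sp-", the custom label is "special", and the standard Contacts scripting dictionary (people / phones / value / label):
    tell application "Contacts"
        repeat with thePerson in people
            repeat with thePhone in phones of thePerson
                if value of thePhone starts with "sp-" then
                    -- strip the 3-character tag and restore the custom label
                    set value of thePhone to text 4 thru -1 of (value of thePhone)
                    set label of thePhone to "special"
                end if
            end repeat
        end repeat
        save
    end tell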
    AK

  • How to set a program into a background job

    Hi experts, I want to run part of a program as a background job.
    The original code is like this:
    ...some statements...
    PERFORM FRM_SEND_MAIL USING WA_YA_LX.
    ...some statements...
    I want to run 'PERFORM FRM_SEND_MAIL USING WA_YA_LX.' in a background job.
    Is the code like this?
      CALL FUNCTION 'JOB_OPEN'
        EXPORTING
          JOBNAME  = WA_TBTCJOB-JOBNAME
          JOBCLASS = 'A'
        IMPORTING
          JOBCOUNT = WA_TBTCJOB-JOBCOUNT.
      PERFORM FRM_SEND_MAIL USING WA_YA_LX.
      CALL FUNCTION 'JOB_CLOSE'
        EXPORTING
          JOBCOUNT  = WA_TBTCJOB-JOBCOUNT
          JOBNAME   = WA_TBTCJOB-JOBNAME
          SDLSTRTDT = SY-DATUM
          SDLSTRTTM = WA_TBTCJOB-SDLSTRTTM.
    Hungry for your advice, thanks a lot.

    See the following simple program to schedule a report in the background.
    You cannot schedule a subroutine (i.e. a PERFORM) to run in a background job. Instead, put the subroutine in another program.
    To pass a value to that program, declare a selection-screen parameter and pass the value from the first program using SUBMIT.
    * Open the job
    call function 'JOB_OPEN'
      exporting
        delanfrep        = ' '
        jobgroup         = ' '
        jobname          = jobname
    *   sdlstrtdt        = sy-datum
    *   sdlstrttm        = sy-uzeit
      importing
        jobcount         = jobcount
      exceptions
        cant_create_job  = 01
        invalid_job_data = 02
        jobname_missing  = 03.
    if sy-subrc ne 0.
      write:/ 'error in opening a job'.
    endif.
    * Insert the process into the job
    submit zsdq_bck_test
      and return
      with p_type = 'F'   "selection-screen parameter
      user sy-uname
      via job jobname
      number jobcount.
    if sy-subrc > 0.
      write:/ 'ERROR PROCESSING JOB'.
    endif.
    * Close the job
    call function 'JOB_CLOSE'
      exporting
    *   event_id             = starttime-eventid
    *   event_param          = starttime-eventparm
    *   event_periodic       = starttime-periodic
        jobcount             = jobcount
        jobname              = jobname
    *   laststrtdt           = starttime-laststrtdt
    *   laststrttm           = starttime-laststrttm
    *   prddays              = 1
    *   prdhours             = 0
    *   prdmins              = 0
    *   prdmonths            = 0
    *   prdweeks             = 0
    *   sdlstrtdt            = sdlstrtdt
    *   sdlstrttm            = sdlstrttm
        strtimmed            = 'X'   "start immediately
    *   targetsystem         = host
    *   recipient_obj        = recipient_obj
      exceptions
        cant_start_immediate = 01
        invalid_startdate    = 02
        jobname_missing      = 03
        job_close_failed     = 04
        job_nosteps          = 05
        job_notex            = 06
        lock_failed          = 07
        others               = 99.
    *** This is the second program, which will run in the background
    REPORT ZSDQ_BCK_TEST.
    TYPES: BEGIN OF TY_ADRC,
             HOUSE_NUM1 LIKE ADRC-HOUSE_NUM1,
             NAME3      LIKE ADRC-NAME3,
             NAME4      LIKE ADRC-NAME4,
             LOCATION   LIKE ADRC-LOCATION,
           END OF TY_ADRC.
    DATA: IT_ADRC TYPE STANDARD TABLE OF TY_ADRC WITH HEADER LINE.
    PARAMETERS: P_TYPE TYPE C.
    START-OF-SELECTION.
      SELECT HOUSE_NUM1
             NAME3
             NAME4
             LOCATION
        UP TO 40000 ROWS
        FROM ADRC
        INTO TABLE IT_ADRC
        WHERE ADDRNUMBER = '0000022423'.
      IF SY-SUBRC = 0.
        LOOP AT IT_ADRC.
          WRITE:/ IT_ADRC-HOUSE_NUM1, IT_ADRC-NAME3.
        ENDLOOP.
        WRITE:/ P_TYPE.
      ENDIF.

  • Submit background job in APEX -- count on all_jobs returns shadow jobs

    Hi, I am trying to submit a job in APEX. The setup is as below:
    On Submit - After Computations and Validations
    Run Process: Once Per Page Visit
    Process:
    DECLARE
      v_instance_cnt NUMBER;
      job_no         NUMBER;
    BEGIN
      SELECT COUNT(0)
        INTO v_instance_cnt
        FROM user_jobs
       WHERE what LIKE 'pagl_refresh.master_refresh%';
      IF NVL(v_instance_cnt, 0) = 0 THEN
        DBMS_JOB.SUBMIT(job_no, 'pagl_refresh.master_refresh('''||:G_BSYS_USER_NAME||''');');
        :P3_MESSAGE := 'Job has been submitted. Number is '||TO_CHAR(job_no);
      ELSE
        :P3_MESSAGE := 'The refresh is in progress. Please wait ... ('||TO_CHAR(v_instance_cnt)||')';
      END IF;
    END;
    Now, if I run the process, :P3_MESSAGE returns "The refresh is in progress. Please wait ... (5)". This is because the count is 5 instead of the expected 0.
    If I run SELECT COUNT(*) FROM dba_jobs WHERE LOWER(what) LIKE 'pagl_refresh.master_refresh%'; in SQL*Plus, it returns 0. Same result from all_jobs as well.
    My suspicion is that the count includes jobs that were removed before. Yet how can APEX see those? Does APEX use some special way to look into the job queue?
    Please help
    Thanks

    From the looks of it, the job is being submitted and run - although I would check the elapsed time to see if it's anywhere close to the 20-30 minutes you anticipate. Assuming not, I would suggest that the problem is in one of the following areas:
    1. The way in which you are passing in the arguments does not conform to the expected input format or values, and it's therefore not executing as expected.
    2. Your process implicitly relies on the state of your APEX application in some manner, which is not being reproduced within the procedure when the job is submitted.
    In the former case, I would check the procedure's specification against the page item types being passed in - you might have to explicitly convert some of your arguments into the appropriate type.
    In the latter case, well... bearing in mind that we don't know what your procedure looks like and it's therefore kind of difficult to diagnose the problem, you'll possibly need to pass your session information into the procedure as additional parameters and re-create your session from within the code.
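    One hedged way to test the second theory: USER_JOBS is evaluated against the database user the APEX process actually runs as (the application's parsing schema), which may differ from the account you queried in SQL*Plus. Running something like this from both environments should show whether the two sessions see different job queues:
    -- which user/schema is this session effectively running as?
    SELECT user AS session_user,
           SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') AS current_schema
      FROM dual;
    -- and what does its job queue actually contain?
    SELECT job, schema_user, what, broken, failures
      FROM user_jobs
     WHERE what LIKE 'pagl_refresh.master_refresh%';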

  • How to run a job in background programmatically after 10 sec

    Hi Forum,
    Can anyone tell me how to run a job in the background programmatically after 10 seconds?
    Thanks in advance

    Hi,
    Here is the example code
    * Submit report as a job (i.e. in the background)
    data: jobname like tbtcjob-jobname value
                                 'TRANSFER TRANSLATION'.
    data: jobcount like tbtcjob-jobcount,
          host like msxxlist-host.
    data: begin of starttime.
            include structure tbtcstrt.
    data: end of starttime.
    data: starttimeimmediate like btch0000-char1.
    * Job open
      call function 'JOB_OPEN'
           exporting
                delanfrep        = ' '
                jobgroup         = ' '
                jobname          = jobname
                 sdlstrtdt        = sy-datum    " date on which the job should start
                 sdlstrttm        = sy-uzeit    " time at which the job should start
           importing
                jobcount         = jobcount
           exceptions
                cant_create_job  = 01
                invalid_job_data = 02
                jobname_missing  = 03.
      if sy-subrc ne 0.
                                           "error processing
      endif.
    * Insert process into job
    SUBMIT zreport and return
                    with p_param1 = 'value'
                    with p_param2 = 'value'
                    user sy-uname
                    via job jobname
                    number jobcount.
      if sy-subrc > 0.
                                           "error processing
      endif.
    * Close job
      starttime-sdlstrtdt = sy-datum + 1.
      starttime-sdlstrttm = '220000'.
      call function 'JOB_CLOSE'
           exporting
                event_id             = starttime-eventid
                event_param          = starttime-eventparm
                event_periodic       = starttime-periodic
                jobcount             = jobcount
                jobname              = jobname
                laststrtdt           = starttime-laststrtdt
                laststrttm           = starttime-laststrttm
                prddays              = 1
                prdhours             = 0
                prdmins              = 0
                prdmonths            = 0
                prdweeks             = 0
                sdlstrtdt            = starttime-sdlstrtdt
                sdlstrttm            = starttime-sdlstrttm
                strtimmed            = starttimeimmediate
                targetsystem         = host
           exceptions
                cant_start_immediate = 01
                invalid_startdate    = 02
                jobname_missing      = 03
                job_close_failed     = 04
                job_nosteps          = 05
                job_notex            = 06
                lock_failed          = 07
                others               = 99.
      if sy-subrc ne 0.
                                           "error processing
      endif.
    Regards
    Sudheer

  • How to share a job with Compressor 4.1?

    Can anyone explain how to set up Compressor on two or more computers to share an encoding job? I was never successful, neither with the old versions nor with the new one. I have connected two computers running Mavericks via Ethernet. They appear in the preferences list of Compressor as inactive and can be selected (with a tick). Starting a job produces no error. Only the little network monitor window shows some activity: "inactive" (in white or yellow), sometimes "not found" (in red). The computer which sends the job waits endlessly.
    I deactivated the firewall and connected the computers with DHCP or fixed IPs, but no success. What else do I have to do?

    Hi Steffen, hats off to you for gathering this valuable information!  I'm going to title this post:
    Setup Distributed Node Processing for Distributed Segmented MULTIPASS Transcoding in Compressor.app V4.1 (2013 version)
    Summary:
    A quick look at those logs of yours: Qmaster is having trouble accessing its cluster storage and probably your transcode source and target elements.
    This is a bit of a giveaway - looks like the part-time helpers at Apple didn't look at it hard enough:
    msg="CSwampService::startupServer: servicecontroller:com.apple.stomp.transcoderx couldn't advertise server, error = error: DNSServiceRegister failed: CRendezvousPublisher::_publish"/>
    <mrk tms="412112937.953" tmt="01/22/2014 20:48:57.953" pid="1975" kind="begin" what="log-session"/>
    <mrk tms="412113195.964" tmt="01/22/2014 20:53:15.964" pid="1975" kind="begin" what="service-request" req-id="D6BAF26C-DD43-4F29-BD72-81BC9CF25753:1" msg="Processing."></mrk>
    <log tms="412113209.037" tmt="01/22/2014 20:53:29.037" pid="1975" msg="Shared storage mount failed: exception = CNFSSharedStorage::_subscribe: command [/sbin/mount 127.0.0.53:/Users/steffen/Library/Application Support/Compressor/Storage/21D262F0-BF7EC314/shared] failed, error = 61"/>
    Let’s look at this and then propose a clean method of establishing and consolidating your cluster.
    Simply put, the Bonjour service is having a hard time trying to find you, and qmaster has been run ragged trying to mount your cluster 21D262F0-BF7EC314 storage.
    Let's fix it.
    Basics for the above with Compressor v4.1 and Qmaster.
    Much has been abstracted from the user to ease implementation and use. This is good, methinks!
    Avoid ticking every option that is available on each host; such facilities aggravate and add to the complexity of your workflow environment.
    Isolate the network subnets to use for all host access, file system paths, communication, diagnosis, monitoring and finally data transfer (see later).
    Use source objects that will produce segments that can be distributed for processing. A 3-minute clip generally won't segment enough to cause distribution.
    Review any workflow gains from distributed transcoding: a slow node holds up the process, and assembling the QuickTime segments takes additional time. A cluster dedicated to an iMac or Mac Pro can often be faster. (Have several clusters defined and submit accordingly - long, medium and short!)
    All elements/objects used in the source and any target folders SHOULD (not must) be mounted and accessible by each Qmaster node - you can use symlinks, I recall - for reasons of efficiency and ease of diagnosis.
    So... I'd propose you try and test your setup as follows.
    Try this, starting from the beginning. Do your best to follow these work instructions, and try not to deviate if you can.
    Simple Architecture Approach:
    Your main MacBook Pro or main work mac (referred to by you as the "master") shall be designated the qmasterd controller that services batch submissions AND also provides transcode services.
    The other macs (the "service" or slave nodes) will ONLY perform transcoding services and will NOT accept batch submissions. The slave/service nodes will not be able to send their own jobs to your master controller for transcoding, for example.
    Keep it simple! and please follow these steps.
    Step 1: Quiesce your clusters and Qmaster
    In Compressor.app v4.1 / Preferences / Shared Computers, stop/disable all hosts (both your macs) from automatic file sharing - tick it OFF (it causes the issue you have). More later.
    In Compressor.app v4.1 / Preferences / My Computer, stop all hosts (both your macs) from allowing others to add batches to your host. Slide to OFF.
    On all hosts, quit or force-quit Compressor.app v4.1.
    On all hosts (macs), use Activity Monitor.app or the unix ps command to force-quit (or kill) any currently running qmasterd task and any compressord tasks if you can (a Terminal sketch follows this list).
    On all hosts, purge | clean out | delete the Qmaster and Compressor structures. This is documented by several of us on this forum, but fundamentally you want to preserve your settings and destination templates and delete the rest. Do these sub-steps on all hosts where you intend to deploy Compressor/Qmaster for your distributed transcode processing.
    a. Navigate to /Users/Shared/Library/Application Support/ and delete the Compressor folder if it exists. By all means use the OS X Finder to put it in the trash, or just use the faithful unix rm command to delete it immediately without serialisation: rm -rf /Users/Shared/Library/Application Support/Compressor
    b. Navigate to your home directory ~/Library/Application Support/Compressor and move or copy any personalised values to your desktop so we can reinstate them later. Copy these two folders if they exist:
    Settings
    Layouts
    And also copy/move any customised destination templates that you used. These are files ending in ".template".
    Now, using the Finder or a unix command, delete your ~/Library/Application Support/Compressor folder and all its objects: rm -rf ~/Library/Application Support/Compressor
    c. Dismount (⌘E or drag to the trash) any shared file systems you have manually or automatically shared between your hosts (your two macs). Turn off any auto-mounts you may have set up in login items for Qmaster and your source and target libraries.
    d. After you have done steps 1a - 1c on all your hosts,
    then RESTART
    and log back into your hosts
    attempt to empty the trash on each.
    e. Check Activity Monitor and confirm there are no compressord sub-tasks running. qmasterd might be running; that's OK.
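    For the force-quit in step 1, a Terminal sketch (pkill ships with OS X; these are the default daemon names):
    # run on every host: force-quit any leftover transcode daemons
    sudo pkill -9 -x qmasterd
    sudo pkill -9 -x compressord
    # verify nothing is left running
    ps aux | egrep 'qmasterd|compressord' | grep -v egrep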
    Step 2: set up your dedicated network for your transcoding cluster .. Important!
    In this step you will isolate your cluster network to a dedicated subnet. BTW, no DNS is needed unless you get exotic with many nodes!
    You will:
    use the macs' Wifi network as your primary network for NON-TRANSCODING work such as email, internet, iChat, iCloud and Bonjour (.local). I'm assuming you have this in place.
    use the Ethernet on your macs as your dedicated Qmaster cluster subnet.
    For this procedure we will make an IP address range on the 1.1.1.x subnet and manually assign the IP addresses to each host. Of course you can use a smart DHCP router if you have one, or a simple switch plus OS X Server.app 10.9 (HK$190, €19) on your MacBook Pro... the latter is for another time.
    a) Using System Preferences/Network on your controller mac (the "master"), configure the built-in Ethernet to a manual IP address of 1.1.1.1. Yes yes, DHCP would be nice if we also had a DNS for this subnet to identify each host (machine name), however we don't. Leave the subnet at the default 255.255.255.0 and the router at 1.1.1.1. It's not escaping anywhere! (A very private LAN for your cluster!)
    b) Repeat step 2a to set the other "slave" (service-node-only) macs' built-in Ethernet to 1.1.1.2, 1.1.1.3 and so on.
    c) Connect these hosts' (macs') Ethernets together in a dedicated hub / zoned switch, or if there are only two macs, just use a cat5/cat6 Ethernet cable.
    d) On each host (mac), using System Preferences/Network, check for a green light against the built-in Ethernet.
    e) On each host (mac), use System Preferences/Network to make a new network configuration (so that you can fall back in case of error). Using System Preferences/Network, make a new network location on each mac:
    - edit the Location listbox and DUPLICATE the current location
    - select and overtype the name, changing it to "qmaster work" or some name you like; save and close the dialogue
    - back in System Preferences/Network, select your new location "qmaster work" from the location listbox and click "Apply"
    - now click the gear-wheel icon at lower left and reorder the network interfaces so that Wifi is top (first), followed by built-in Ethernet
    - click "Apply"
    - confirm Ethernet still has green status
    Do this on each host (mac) .. The slave/service nodes
    f) On each host (mac), verify that you can address each mac over your new subnet. There are many ways to do it, but do it simply via /Applications/Utilities/Terminal.app.
    From mac #1 whose IP address is 1.1.1.1,
    Enter:
    traceroute 1.1.1.2 - press return, and one line should come back.
    ping 1.1.1.2 - continuous lines appear with packets and times in ms. Watch 3-4, then use control+C to stop.
    Do the same to the other nodes you may have, such as 1.1.1.3, etc.
    Repeat the above from the other hosts. For example, from one of the service (slave) macs, say 1.1.1.2,
    test the network path back to your main mac 1.1.1.1: using Terminal.app from that slave,
    Enter:
    traceroute 1.1.1.1 - press return, and one line should come back.
    ping 1.1.1.1 - continuous lines appear with packets and times in ms. Watch 3-4 lines, then use control+C to stop.
    At this point you should have a solid network path between your hosts over Ethernet on the subnet 1.1.1.x
    Step 3: mount all filesystems over the Ethernet 1.1.1.x subnet that are to be used for the transcoding source (input | read) and target (output | to be written)
    Simplicity is important at this stage, to make sure you know what is being accessed. This is one reason for disabling all the automatic Compressor settings.
    You will use the Finder's "Connect to Server" (⌘K) from each slave (service) node to access the source and target filesystems on your master mac for the transcoding.
    These can be saved as favourites in the "Connect to Server" dialogues
    Do this:
    A) Locate the volumes/filesystems and folders on your master mac where your source objects are contained. Do the same for where the final distribution transcode is to be written with your user access ("steffen").
    B) On each slave mac, use the Finder's "Connect to Server" dialogue to MOUNT those folders as network volumes on your slave macs:
    - mount the source folder. Finder / Go / Connect to Server, or ⌘K
    - enter "steffen@1.1.1.1/Users/steffen/movies/my-fcpx-masters" (choose your source directory path).
    Click Connect, enter the password, and use the "+" sign to save it as a favourite.
    - mount the target folder. Finder / Go / Connect to Server, or ⌘K
    - enter "steffen@1.1.1.1/Users/steffen/movies/my-fcpx-transcodes" (choose your target directory path). Click Connect, enter the password, and use the "+" sign to save it as a favourite.
    Do these for all your slave macs. Remember you are specifying file paths over the 1.1.1.x subnet.
    Obviously make sure your slaves have read and write access. Yes, you could also mark these folders as Shared in Finder Info so everyone can see them... your choice.
    C) Verify your access: on each slave, use the Finder to create a test folder in those recently mounted volumes. Then delete the test folder.
    So now all your networks and workflow folders are mounted and accessible by your slave hosts and their user.
    step 4: Set up Compressor v4.1 and Qmaster
    Care is initially required here NOT to click needlessly on options.
    Recall that you purged most of the Compressor.app v4.1 state information in step 1?
    Compressor.app v4.1 will appear new when you start it.
    on the master mac 1.1.1.1 , launch compressor.app v4.1
    open compressor v4.1 preferences (command+,)
    using compressor.app V4.1 preferences:
    My Computer tab: turn OFF "Allow others to process on this computer".
    Shared Computers tab: UNTICK (disable) automatic file sharing.
    Advanced tab: in the "When Sharing My Computer" listbox, select Ethernet as the preferred network interface (1.1.1.x). Don't use all interfaces - trouble.
    Do not specify additional instances yet! Less to troubleshoot.
    On each slave mac 1.1.1.2 -1.1.1.x
    launch compressor.app v4.1
    open compressor v4.1 preferences (command+,)
    using compressor.app preferences:
    My Computer tab: turn ON (yes, ON) "Allow others to process on this computer", so others can process their stuff on this slave.
    Shared Computers tab: UNTICK (disable) automatic file sharing.
    Advanced tab: in the "When Sharing My Computer" listbox, select Ethernet as the preferred network interface (1.1.1.x). Don't use all interfaces - trouble.
    Do not specify additional instances yet! Less to troubleshoot!
    On the master mac, 1.1.1.1
    using Compressor.app v4.1, select Destinations and add a new CUSTOM destination; navigate the dialogue to the target folder you specified in step 3B (~/movies/my-fcpx-transcodes as an example).
    Use this custom destination on the batch later
    in Compressor.app v4.1 Preferences/Shared Computers, click the plus "+" sign in the bottom-left corner to create a new cluster called "unnamed".
    - click in the name and change it to "Steffenscluster" (I'm not connected to my network as I write this..)
    - tick on the slaves 1.1.1.2 to 1.1.1.x to add them to your new cluster. I assume these appear on the right, in a list.
    Your cluster "Steffenscluster" is now active!
    Careful, careful! One more thing to do. You SHOULD make the cluster storage of the master 1.1.1.1 available to all your slaves. This is important for this test!!
    On the master 1.1.1.1, use the Finder to verify that you have these directories, built by Compressor.app v4.1:
    /Users/Shared/Library/Application Support/Compressor
    and your own home directory: ~/Library/Application Support/Compressor
    Dig deeper for the /Storage folder in each to see the hexadecimal-named folder that represents this cluster "Steffenscluster"!
    These should be manually mounted on each slave. Recall we DISABLED automatic file sharing.
    On each slave mac 1.1.1.2 - 1.1.1.x, mount the master's cluster storage file systems. Do this to verify access from each cluster slave:
    on each slave mac, use the Finder's "Connect to Server" dialogue to MOUNT those folders as network volumes on your slave macs:
    - mount the Qmaster cluster storage folder.
    Use Finder / Go / Connect to Server, or ⌘K
    enter "steffen@1.1.1.1/Users/Shared/Library/Application Support/Compressor/Storage"
    Click Connect, enter the password, and use the "+" sign to save it as a favourite.
    - mount the user's Qmaster cluster storage folder.
    Use Finder / Go / Connect to Server, or ⌘K
    enter "steffen@1.1.1.1/Users/steffen/Library/Application Support/Compressor/Storage"
    Click Connect, enter the password, and use the "+" sign to save it as a favourite.
    Thus you may have 4 new network volumes (file systems) mounted on each slave mac over your dedicated 1.1.1.x subnet!
    Step 5: submit a job
    On the master mac 1.1.1.1, launch the new Compressor.app v4.1 "Network Encoder Monitor" - use ⌘E. New in Compressor.app v4.1.
    You should see all your nodes - all cluster service points for your master and slaves. Here you see just one (this MacBook Air!).
    On each host (mac), launch Activity Monitor.app and filter on compressord. There they are, on each host!
    Nearly there.  
    On the mac that's the master 1.1.1.1 (controller )
    Submit a job:
    Use the Finder to move some footage that is more than 15 minutes long, for example, into your source directory folder from step 3B (e.g. ~/movies/my-fcpx-masters).
    In Compressor.app v4.1, add this to the batch (⌘I).
    Drag your custom destination on to the batch
    Set your encoding setting (apple devices best)
    Open the inspector in Compressor.app, select "Video", and make sure MULTIPASS is checked. Then change the frame controls at your leisure. Better means slower.
    Select the General tab and make sure JOB SEGMENTING is ticked!
    Now cross your fingers and submit it (⌘B).
    Step 6: Monitoring the Workflow Progress
    In Compressor.app v4.1, use the Active tab to watch the progress.
    Open the disclosure triangle and see the segments.
    Unfortunately you can't really see which node is processing. (No more Share Monitor, by the way, for those who care - it's buried now in /Applications/Compressor.app/Contents/PlugIns/Compressor/CompressorKit.bundle/Contents/EmbeddedApps/Share Monitor.app/Contents/MacOS/Share Monitor.)
    Look at the Network Encoder Monitor (⌘E) to see the instances processing your work.
    Lots of small and over-detailed steps here, Steffen, but it's worth working through.
    Simply put, all these things need to be available to get your cluster to work EVERY TIME.
    I might update this into a more detailed dialogue/transcript on my blog and post it here.
    Epilogue:
    I for one rather like the new Compressor.app v4.1. Qmaster is buried and works well when not teased or unintentionally fooled.
    I would like the ability to:
    specify the location of the Qmaster cluster storage rather than have it on the root file system - I used to have it on my disk array
    make Compressor AppleScriptable
    Post your results for others to see.
    Warwick
    Hong Kong

  • Problem Submit Via Job in BADI

    Hello All
    I am using SUBMIT VIA JOB in the BADI "work order update", but no job is created... also, sy-subrc is 0.
    Here is the code
      call function 'JOB_OPEN'
        exporting
          jobname          = name
        importing
          jobcount         = number
        exceptions
          cant_create_job  = 1
          invalid_job_data = 2
          jobname_missing  = 3
          others           = 4.
      if sy-subrc = 0.
        submit z_idoc_create_process_order and return
                              via job name number number
                                   with p_aufnr = it_header1-aufnr
                                   with p_werks = it_header1-werks
                                   with p_autyp = c_autyp
                                   with p_auart = it_header1-auart
                                   with p_dispo = it_header1-dispo
                                   with p_opt   = c_opt
                                   with p_mestyp = c_mestyp.
        if sy-subrc = 0.
          call function 'JOB_CLOSE'
            exporting
              jobcount             = number
              jobname              = name
              strtimmed            = 'X'
            exceptions
              cant_start_immediate = 1
              invalid_startdate    = 2
              jobname_missing      = 3
              job_close_failed     = 4
              job_nosteps          = 5
              job_notex            = 6
              lock_failed          = 7
              others               = 8.
          if sy-subrc <> 0.
          endif.
        endif.
      endif.
    Any reason why job is not created?
    Thanks in advance.
    regads
    VInit

    Hi guys,
    I tried this
        SUBMIT z_idoc_create_process_order USER creator using selection-set lv_variant TO SAP-SPOOL
                               SPOOL PARAMETERS print_parameters
                               WITHOUT SPOOL DYNPRO
                               WITH p_aufnr EQ it_header1-aufnr
                               WITH p_werks EQ it_header1-werks
                               WITH p_autyp EQ c_autyp
                               WITH p_auart EQ it_header1-auart
                               WITH p_dispo EQ it_header1-dispo
                               WITH p_opt   EQ c_opt
                               WITH p_mestyp EQ c_mestyp
                               VIA JOB name NUMBER number
                               AND RETURN.
    Now the job is getting created, but my variant has no values.
    How do I pass values to the variant? The values below are not getting transferred.
                               WITH p_aufnr EQ it_header1-aufnr
                               WITH p_werks EQ it_header1-werks
                               WITH p_autyp EQ c_autyp
                               WITH p_auart EQ it_header1-auart
                               WITH p_dispo EQ it_header1-dispo
                               WITH p_opt   EQ c_opt
                               WITH p_mestyp EQ c_mestyp
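    If the WITH additions are not reaching the background job, one alternative worth trying (a sketch, not verified in an update-task/BADI context) is to build a selection table and pass everything to SUBMIT in one go:
    DATA: lt_rspar TYPE TABLE OF rsparams,
          ls_rspar TYPE rsparams.
    " one row per selection-screen field
    ls_rspar-selname = 'P_AUFNR'.
    ls_rspar-kind    = 'P'.        " 'P' = parameter, 'S' = select-option
    ls_rspar-sign    = 'I'.
    ls_rspar-option  = 'EQ'.
    ls_rspar-low     = it_header1-aufnr.
    APPEND ls_rspar TO lt_rspar.
    " ...repeat for P_WERKS, P_AUTYP, P_AUART, P_DISPO, P_OPT, P_MESTYP...
    SUBMIT z_idoc_create_process_order
           WITH SELECTION-TABLE lt_rspar
           VIA JOB name NUMBER number
           AND RETURN.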

  • How to make a job run on an appointed node

    I have a problem. The database is Oracle 10g RAC with two nodes.
    There are some jobs which run every night. Now something has happened: a job run on Node A can't work normally, but if run on Node B it works well.
    So I just want to submit the jobs on Node B. I submitted the jobs on Node B, but the jobs always run on Node A.
    I want to know how to make a job run on an appointed node.

    The job just calls some procedures to add partitions to some tables, to merge the records from some tables into one table, and so on.
    The problem is that when the job runs on Node A it is very, very slow; the procedure runs for hours and does not end successfully, but on Node B it ends successfully in seconds.
    Restarting Node A might be a way around the problem, but it may leave hidden trouble. I want to preserve the scene to find out the cause.
    So can you help me?
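    If these are DBMS_JOB jobs, instance affinity may be what you need; a sketch ('my_nightly_proc' and job number 123 are placeholders):
    DECLARE
      v_job BINARY_INTEGER;
    BEGIN
      -- submit a new job pinned to instance 2 (Node B); force => TRUE keeps it there
      DBMS_JOB.SUBMIT(job       => v_job,
                      what      => 'my_nightly_proc;',
                      next_date => SYSDATE,
                      interval  => 'TRUNC(SYSDATE) + 1',
                      instance  => 2,
                      force     => TRUE);
      -- or move an existing job (e.g. job 123) onto instance 2
      DBMS_JOB.INSTANCE(job => 123, instance => 2, force => TRUE);
      COMMIT;
    END;
    /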

  • How to schedule a job in SAP CPS

    Hi,
    I am new to SAP CPS.
    So please tell me how to schedule a job in SAP CPS, and which kinds of jobs can be scheduled, e.g. an ABAP report.
    Thanks
    Anurodh

    Hi,
    In the installation and administration guide you'll probably find some examples.
    The Job Definition you need is SAP_AbapRun to run any ABAP.
    You submit this, specify the parameters and scheduling information as desired, and that should do the trick.
    That is assuming you have already connected CPS to an SAP system.
    Check the topics in the docs within the product and on SDN on:
    - Connecting to an SAP system
    - Submitting Jobs
    - SAP_AbapRun
    Regards,
    Anton.

  • How to schedule a job which needs to run every day at 1 AM?

    begin
      DBMS_SCHEDULER.create_job (
        job_name        => 'BJAZPROPMAINTAIN',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'schemaname.schedule_procedure;',
        start_date      => '02-aug-08 01:00:00 PM',
        repeat_interval => 'FREQ=DAILY; BYHOUR=01',
        enabled         => TRUE,
        auto_drop       => FALSE);
    end;
    Hi all,
    I want to schedule a job which needs to run every day at one o'clock in the early morning. I haven't set up the job scheduler before this; by searching the net and previous scheduler code, I have written the above code for running every day at 1 AM. I am a little bit confused about the time:
    repeat_interval => 'FREQ=DAILY;BYHOUR=01' - is this correct or wrong?
    Also, some other jobs are scheduled at the same time. Will that create any problem executing at the same time, or do we need to change the timing to 1:15 or so?
    Please advise me.

    Thanks a lot. So it will be executing every night at 1 o'clock, am I right?
    It should. But I'd say schedule it, and only then can we be sure about it. About the timing part, it's correct syntactically.
    I saw the job_priority column in the dba_scheduler_jobs table but don't know what it does?
    And also, how can I fetch this scheduled job's sid and serial#? I checked v$session but don't know how to correlate them. Please explain.
    In the scheduler job views there is a column, client_id; you can map it to the sid from v$session. I don't have a box running Oracle at the moment, so I won't be able to test it for you. Do it and post feedback; a hedged sketch of the running-job lookup follows.
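    A hedged sketch of that lookup (assuming 11g DBA views; dba_scheduler_running_jobs exposes session_id, which joins to v$session):
    SELECT r.job_name, s.sid, s.serial#, s.status
      FROM dba_scheduler_running_jobs r
      JOIN v$session s ON s.sid = r.session_id
     WHERE r.job_name = 'BJAZPROPMAINTAIN';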
    What will happen if more than one job is scheduled at the same time?
    I think that is exactly why we set the priority on the two, so whichever needs to execute first does (depending on the higher priority). Let me know about this.
    Jobs are prioritized in two parts: within the class they are a part of, and individually. If you have two jobs in the same class, they can be made to run with different priorities via the priority clause set within them. This is a number starting from 1, meaning highest priority. So if there are two jobs scheduled for the same time, you need to check which job class they fall into. If they are in the same class, then you have to change their priorities.
    I suggest you read the books; they cover all these topics in much more detail.
    Also there is a dedicated forum about the Scheduler. In future, for Scheduler-related questions, you can visit there:
    Scheduler
    Aman....

  • Getting an unusual error message in Compressor 4.1 when I try to submit a job

    I'm running Mavericks and have Compressor 4.1 on my Mac Pro along with FCP 10.1. When I submit a job to Compressor, I then add the Dolby Digital and MPEG-2 6.2 Mbps/90 settings. When I hit Start Batch I get this error message:
    /var/folders/k9/f6fyk4sj4f3_rj2wlrlwx9hr0000gn/T/46BDF064-B30F-4BF1-8D9C-D22DE918342B.export-archive
    I've tried to uninstall and re-install Compressor, but to no avail. What is this error message referring to, and how do I rectify it?
    Thank you
    Drew

    Hi Drew, if you haven't resolved this, try the following to see if the issue is a TRANSIENT object access error from submitting directly to Compressor. Do this to isolate any content error defects before making your distribution (the older 2-stage workflow):
    In FCPX 10.1, make sure the project's primary storyline is completely rendered - select all and ctrl+R (render selection, or control+shift+R). In FCPX 10.1, watch the background tasks (⌘9) and wait for all the rendering to be completed... (make sure there are no object errors).
    In FCPX 10.1, export a master: File/Share/Master File (⌘E) as ProRes 4xx, and save it as ~/Movies/Drews_final_master.mov (ProRes).
    In Compressor.app v4.1:
    create a new BATCH,
    import the MASTER object (~/Movies/Drews_final_master.mov),
    add your setting and submit it. (Don't use FCPX SEND TO COMPRESSOR just yet, until you resolve this issue.)
    This process avoids the transient file storage areas that seem to be used in the FCPX-to-Compressor path.
    Post your results for others to see
    Warwick
    Hong Kong

  • How to restrict the job start conditions (only the "Immediate" type)?

    Hi,
    We allow our users to schedule and execute transactions in background mode (for example IP19, IW38). For that we gave them authorizations (object S_BTCH_JOB with LIST, PROT, RELE and SHOW; object S_PROGRAM with BTCSUBMIT).
    We would like users to be able to schedule and execute their jobs only with the "Immediate" job start condition (on the Start Time screen, the types of start condition are: Immediate, Date/Time, After job, After event, or At operation mode).
    Another solution: prohibit the scheduling and execution of background jobs in a certain time interval...
    How can we restrict the job start conditions?
    Thank you.
    Patrice.

    Hi Jan,
    Yes, SA38 indeed makes it possible to execute a job immediately in the background, but
    the user has to know the name of the program to be carried out...
    Our users know only the names of their business transactions, for example IW38.
    In the menu of this transaction, SAP gives the possibility to execute in background:
    Program --> Execute in Background --> which displays the Start Time screen for the type of start condition
    (Immediate, Date/Time, After job, After event, or At operation mode).
    It is at this point that we want the user to be able to choose only the "Immediate" mode.
    We must thus prohibit the other choices (Date/Time, After job, After event, or At operation mode)...
    and we don't know how to restrict these other options on this Start Time screen.
    Thank you.
    Bye.

  • How to create a job

    Hi Experts,
    Previously I was using BDC and creating a session.
    Now, in place of BDC, I am using a BAPI.
    Now I cannot create a session, so I am creating a job.
    The problem is that when I use the BAPI, the record is created at the same time,
    so the records get created before the job is executed
    via transaction SM37.
    Can anybody please explain how to solve this issue?
    Thanks & regards,
    Chetan

    Hi Chetan,
    To create a new batch job you can go to transaction SM36, where you can define the job name, job class and target server. Then you can go to the start condition and select whether it is a periodic job or an immediate job, and schedule the job accordingly. Also very important: in the Step function you should have a valid ABAP program, or an external program, to be used in the background.
    If you know the program, you can go to SE38 and create a variant for it, which can later be used in the newly created job. You can also copy an existing job and modify it according to the requirement by checking the job details in SM37.
    Just to summarise, the key transactions are SM36 and SE38.
    JOB_OPEN: Create a Background Processing Job
    Use JOB_OPEN to create a background job. The function module returns the unique ID number which, together with the job name, is required for identifying the job.
    Once you have "opened" a job, you can add job steps to it with JOB_SUBMIT and submit the job for processing with JOB_CLOSE.
    For more information, please see the online documentation in the function module facility (transaction SE37)
    Sample Program: Creating a Job with JOB_OPEN
    * Create your job with JOB_OPEN. The module returns a unique job
    * number. Together with the jobname, this number identifies the
    * job. Other parameters are available, but are not required.
    JOBNAME = 'Freely selectable name for the job(s) you create'.
    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        JOBNAME          = JOBNAME
      IMPORTING
        JOBCOUNT         = JOBNUMBER
      EXCEPTIONS
        CANT_CREATE_JOB  = 01
        INVALID_JOB_DATA = 02
        JOBNAME_MISSING  = 03
        OTHERS           = 99.
    IF SY-SUBRC > 0.
      "<Error processing>
    ENDIF.
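    A minimal continuation showing JOB_SUBMIT and JOB_CLOSE (a sketch; 'ZREPORT' and 'VARIANT1' are placeholders for your own program and variant):
    CALL FUNCTION 'JOB_SUBMIT'
      EXPORTING
        AUTHCKNAM        = SY-UNAME      "authorization user for the step
        JOBCOUNT         = JOBNUMBER
        JOBNAME          = JOBNAME
        REPORT           = 'ZREPORT'
        VARIANT          = 'VARIANT1'
      EXCEPTIONS
        JOBNAME_MISSING  = 01
        JOB_NOTEX        = 02
        OTHERS           = 99.
    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        JOBCOUNT         = JOBNUMBER
        JOBNAME          = JOBNAME
        STRTIMMED        = 'X'           "start immediately
      EXCEPTIONS
        JOB_CLOSE_FAILED = 01
        JOB_NOSTEPS      = 02
        JOB_NOTEX        = 03
        OTHERS           = 99.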
    thanks
    katrhik

  • How to check my job name from the database...

    I have written one scheduled job, as follows:
    SQL> ED
    Wrote file afiedt.buf
    1 DECLARE
    2 X NUMBER;
    3 JobNumber NUMBER;
    4 BEGIN
    5 SYS.DBMS_JOB.SUBMIT
    6 (
    7 job => X
    8 ,what => 'scott.SPLITSMS;'
    9 ,next_date => SYSDATE+1/1440
    10 ,interval => 'SYSDATE+1/1440 '
    11 ,no_parse => FALSE
    12 );
    13 JobNumber := to_char(X);
    14* END;
    15 /
    PL/SQL procedure successfully completed.
    Now I want to check whether the job has really been created or not.
    For that I have used the following command:
    SQL> SELECT JOB_NAME FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'SCOTT.SPLITSMS';
    SELECT JOB_NAME FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'SCOTT.SPLITSMS'
    ERROR at line 1:
    ORA-00942: table or view does not exist
    How can I check my job name from the database?
    What is the command used to check the job name?
    And how can I ensure that my scheduled job is running properly?
    Please help, my dear friends!

    957029 wrote:
    Now I want to check whether the job has been really created or not?
    so for that I have used the following command line:
    SQL> SELECT JOB_NAME FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'SCOTT.SPLITSMS';
    SELECT JOB_NAME FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'SCOTT.SPLITSMS'
    ERROR at line 1:
    ORA-00942: table or view does not exist
    You can use DBA_* views only if the user has been granted DBA privileges.
    how to check my job name from the database...
    what is the command used to check the job_name ????
    You can use the USER_JOBS view to check. But since you have just created the job, you should already know its name. Note that you submitted it with DBMS_JOB, so it is identified by a job number and appears in USER_JOBS/DBA_JOBS, not in the *_SCHEDULER_JOBS views - which is why your query found nothing.
    and how can i ensure that my job scheduler is running properly...???
    If USER_JOBS.FAILURES is non-zero, the job has encountered a problem that needs to be investigated. Similarly, LAST_DATE, LAST_SEC, NEXT_DATE and NEXT_SEC can be used to determine whether the job has been running successfully (a sketch follows).
    If you are on 11g, you should consider using DBMS_SCHEDULER instead.
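    A sketch of that check (DBMS_JOB jobs are identified by number and show up in USER_JOBS, not in the *_SCHEDULER_JOBS views):
    -- is the job there, and is it healthy?
    SELECT job, what, last_date, next_date, failures, broken
      FROM user_jobs
     WHERE what LIKE 'scott.SPLITSMS%';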
