Submit a remote job to an HDInsight cluster using its IP address.

Hi there,
I am using HDInsight and trying to submit jobs to it. I am able to submit jobs using the API provided by Azure, and this works fine. I am also able to submit a job on the remote machine itself by opening a remote session to that VM.
I am now trying to submit a job to the HDInsight cluster from my own machine using the IP address of the remote machine, but I am not able to submit any job this way; it throws an error.
Please help me on this.
Regards,
Athiram S

Hi Sudhir,
Thanks for looking into this.
We can submit a job to a Hadoop cluster using its IP address by the following method:
1) Configure the XML files such as core-site.xml, hdfs-site.xml, and yarn-site.xml on the cluster machine (the namenode) with the IP address of that machine. I make the same changes in the configuration files on my machine under the location "..\\etc\\hadoopcluster_IPAddress".
2) Now, execute the command pig --config "..\\etc\\hadoopcluster_IPAddress" on my machine (which is connected to the namenode machine of the cluster through the LAN). The MapReduce job then gets executed on the remote machine (a sketch of these two steps follows below).
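A minimal sketch of steps 1 and 2 on the client machine; the directory path, namenode IP, port, and script name below are hypothetical placeholders, not values from a real cluster:

$confDir = "C:\hadoop\etc\hadoopcluster_IPAddress"   # hypothetical client-side config directory
New-Item -ItemType Directory -Force -Path $confDir | Out-Null

# core-site.xml points fs.defaultFS at the remote namenode's IP address;
# hdfs-site.xml and yarn-site.xml are edited the same way (e.g. yarn.resourcemanager.hostname).
@"
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.0.0.5:8020</value>   <!-- 10.0.0.5 = namenode IP (example only) -->
  </property>
</configuration>
"@ | Set-Content (Join-Path $confDir "core-site.xml")

# Run Pig against that configuration directory so the job goes to the remote cluster.
pig --config $confDir -f wordcount.pig    # wordcount.pig is a placeholder script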
I am trying a similar approach for submitting the job to the HDInsight cluster: I used the headnode IP address, modified the configuration files, and ran the same command as above, but I am wondering why it is not working.
The job executes successfully against my own cluster machine, but job submission to the HDInsight cluster fails.
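For reference, the submission path that already works (the Azure API) goes over the cluster's public HTTPS endpoint rather than directly to the headnode's Hadoop ports. A rough sketch of that path using the Azure PowerShell HDInsight cmdlets; the cluster name and Pig query below are placeholders:

$clusterName = "mycluster"                                            # placeholder cluster name
$pigQuery = "A = LOAD 'wasb:///example/data/sample.log'; DUMP A;"     # placeholder Pig Latin

# Define the Pig job and submit it over HTTPS to the cluster's public endpoint.
$pigJobDefinition = New-AzureHDInsightPigJobDefinition -Query $pigQuery
$pigJob = Start-AzureHDInsightJob -Cluster $clusterName -JobDefinition $pigJobDefinition
Wait-AzureHDInsightJob -Job $pigJob -WaitTimeoutInSeconds 3600
Get-AzureHDInsightJobOutput -Cluster $clusterName -JobId $pigJob.JobId -StandardOutput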
Please help me on this issue.
Regards,
Athiram S

Similar Messages

  • Submit Statement for calling smartform and using its data in another report

    Hello Everybody
    There is one report which displays a smartform as its output. My requirement is to call that report and return without showing its output, and to use its data for further processing in my report. I hope my query is clear. Please reply; it is urgent.
    Currently I am using this statement:
    SUBMIT ZPOPRINT WITH PO_NUM EQ IT_EKKO1-EBELN EXPORTING LIST TO MEMORY AND RETURN.
    While executing the program, I get an output displayed after this statement, but I need to store the list in memory, not display it.
    Waiting for your reply.
    Thanks and Regards
    Virendra

    Hi.
    I have not done this kind of requirement before; what I suggested was just an option that came to mind, and I am not sure whether it will work for you. While submitting, instead of EXPORTING LIST TO MEMORY, do a SUBMIT ... TO SAP-SPOOL, and then later read the spool content into an internal table.
    I am referring you to two links for this.
    [Submit to SAP-SPOOL|http://help.sap.com/abapdocu_70/en/ABAPSUBMIT_LIST_OPTIONS.htm]
    [Spool to Internal table|spool file data to internal table].
    Also search for related topics on SCN.
    Regards
    Kesav

  • Is it possible to get the exact model of a product manufactured by Apple using its MAC address alone?

    I didn't know where else to put this, but if anyone could help me out I'd much appreciate it.

    You might want to have a look at the "MAC address" article on Wikipedia for a description of how MAC addresses are assigned and what they mean.
    Basically, part of the MAC address identifies the manufacturer; the rest of the address can be used as the manufacturer sees fit, as long as it is unique.
    So it is possible that some manufacturers encode the device model into the MAC address, but it is not universal or mandatory.
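    A quick sketch of pulling the vendor prefix (OUI) out of an address for such a lookup; the MAC below is made up:

    $mac = "AC:DE:48:00:11:22"                      # made-up example address
    $oui = ($mac -split "[:-]")[0..2] -join "-"     # first three octets -> "AC-DE-48"
    # Look $oui up in the public IEEE OUI registry to find the registered manufacturer.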

  • Automating the creation of an HDInsight cluster

    Hi,
    I am trying to automate the creation of an HDInsight cluster by using Azure Automation to execute a PowerShell script (the script from the Automation gallery). When I try to run this (even without populating any defaults), it fails with the following error:
    "Runbook definition is invalid. In a Windows PowerShell Workflow, parameter defaults may only be simple value types (such as integers) and strings. In addition, the type of the default value must match the type of the parameter."
    The script I am trying to run is:
    <#
     This PowerShell script was automatically converted to PowerShell Workflow so it can be run as a runbook.
     Specific changes that have been made are marked with a comment starting with “Converter:”
    #>
    <#
    .SYNOPSIS
      Creates a cluster with specified configuration.
    .DESCRIPTION
      Creates a HDInsight cluster configured with one storage account and default metastores. If storage account or container are not specified they are created
      automatically under the same name as the one provided for cluster. If ClusterSize is not specified it defaults to create small cluster with 2 nodes.
      User is prompted for credentials to use to provision the cluster.
      During the provisioning operation which usually takes around 15 minutes the script monitors status and reports when cluster is transitioning through the
      provisioning states.
    .EXAMPLE
      .\New-HDInsightCluster.ps1 -Cluster "MyClusterName" -Location "North Europe"
      .\New-HDInsightCluster.ps1 -Cluster "MyClusterName" -Location "North Europe"  `
          -DefaultStorageAccount mystorage -DefaultStorageContainer myContainer `
          -ClusterSizeInNodes 4
    #>
    workflow New-HDInsightCluster99 {
     param (
         # Cluster dns name to create
         [Parameter(Mandatory = $true)]
         [String]$Cluster,
         # Location
         [Parameter(Mandatory = $true)]
         [String]$Location = "North Europe",
         # Blob storage account that new cluster will be connected to
         [Parameter(Mandatory = $false)]
         [String]$DefaultStorageAccount = "tavidon",
         # Blob storage container that new cluster will use by default
         [Parameter(Mandatory = $false)]
         [String]$DefaultStorageContainer = "patientdata",
         # Number of data nodes that will be provisioned in the new cluster
         [Parameter(Mandatory = $false)]
         [Int32]$ClusterSizeInNodes = 2,
         # Credentials to be used for the new cluster
         [Parameter(Mandatory = $false)]
         [PSCredential]$Credential = $null
     )
     # Converter: Wrapping initial script in an InlineScript activity, and passing any parameters for use within the InlineScript
     # Converter: If you want this InlineScript to execute on another host rather than the Automation worker, simply add some combination of -PSComputerName, -PSCredential, -PSConnectionURI, or other workflow common parameters as parameters of the InlineScript
     inlineScript {
      $Cluster = $using:Cluster
      $Location = $using:Location
      $DefaultStorageAccount = $using:DefaultStorageAccount
      $DefaultStorageContainer = $using:DefaultStorageContainer
      $ClusterSizeInNodes = $using:ClusterSizeInNodes
      $Credential = $using:Credential
      # The script has been tested on Powershell 3.0
      Set-StrictMode -Version 3
      # Following modifies the Write-Verbose behavior to turn the messages on globally for this session
      $VerbosePreference = "Continue"
      # Check if Windows Azure Powershell is avaiable
      if ((Get-Module -ListAvailable Azure) -eq $null)
          throw "Windows Azure Powershell not found! Please make sure to install them from 
      # Create storage account and container if not specified
      if ($DefaultStorageAccount -eq "") {
          $DefaultStorageAccount = $Cluster.ToLowerInvariant()
          # Check if account already exists then use it
          $storageAccount = Get-AzureStorageAccount -StorageAccountName $DefaultStorageAccount -ErrorAction SilentlyContinue
          if ($storageAccount -eq $null) {
              Write-Verbose "Creating new storage account $DefaultStorageAccount."
              $storageAccount = New-AzureStorageAccount –StorageAccountName $DefaultStorageAccount -Location $Location
          } else {
              Write-Verbose "Using existing storage account $DefaultStorageAccount."
      # Check if container already exists then use it
      if ($DefaultStorageContainer -eq "") {
          $storageContext = New-AzureStorageContext –StorageAccountName $DefaultStorageAccount -StorageAccountKey (Get-AzureStorageKey $DefaultStorageAccount).Primary
          $DefaultStorageContainer = $DefaultStorageAccount
          $storageContainer = Get-AzureStorageContainer -Name $DefaultStorageContainer -Context $storageContext -ErrorAction SilentlyContinue
          if ($storageContainer -eq $null) {
              Write-Verbose "Creating new storage container $DefaultStorageContainer."
              $storageContainer = New-AzureStorageContainer -Name $DefaultStorageContainer -Context $storageContext
          } else {
              Write-Verbose "Using existing storage container $DefaultStorageContainer."
      if ($Credential -eq $null) {
          # Get user credentials to use when provisioning the cluster.
          Write-Verbose "Prompt user for administrator credentials to use when provisioning the cluster."
          $Credential = Get-Credential
          Write-Verbose "Administrator credentials captured.  Use these credentials to login to the cluster when the script is complete."
      # Initiate cluster provisioning
      $storage = Get-AzureStorageAccount $DefaultStorageAccount
      New-AzureHDInsightCluster -Name $Cluster -Location $Location `
            -DefaultStorageAccountName ($storage.StorageAccountName + ".blob.core.windows.net") `
            -DefaultStorageAccountKey (Get-AzureStorageKey $DefaultStorageAccount).Primary `
            -DefaultStorageContainerName $DefaultStorageContainer `
            -Credential $Credential `
            -ClusterSizeInNodes $ClusterSizeInNodes
    Many thanks
    Brett

    Hi,
    it appears that [PSCredential]$Credential = $null is not correct; I get the same error too.
    Let me check further on it and get back to you.
    Best,
    Amar
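    In case it helps while checking, here is a minimal sketch of the parameter block with that default removed. In a PowerShell Workflow only simple value types (strings, numbers) may carry defaults, so the [PSCredential] parameter is declared without one (an unsupplied parameter evaluates to $null anyway); the rest of the runbook stays unchanged:

    workflow New-HDInsightCluster99 {
        param (
            # Cluster dns name to create
            [Parameter(Mandatory = $true)]
            [String]$Cluster,
            # Location
            [Parameter(Mandatory = $true)]
            [String]$Location,
            # Number of data nodes that will be provisioned in the new cluster
            [Parameter(Mandatory = $false)]
            [Int32]$ClusterSizeInNodes = 2,
            # Credentials to be used for the new cluster -- note: no "= $null" default here
            [Parameter(Mandatory = $false)]
            [PSCredential]$Credential
        )
        inlineScript {
            # ... body of the original runbook, unchanged ...
        }
    }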

  • Is this the correct syntax to submit a job using DBMS_JOB.SUBMIT?

    Hello,
    Is this the correct syntax to submit a job?
    DECLARE
    v_job_number NUMBER;
    v_job_command VARCHAR2(1000) := 'PREPARE_ORACLE_TEXT_SEARCH;';
    v_interval VARCHAR2(1000) := 'trunc(SYSDATE)+1+7/24';
    BEGIN
    DBMS_JOB.SUBMIT(v_job_number, v_job_command, sysdate, v_interval, false);
    COMMIT;
    END;
    Thanks
    Doug

    DECLARE
    v_job_number NUMBER;
    v_job_command VARCHAR2(1000) := 'BEGIN PREPARE_ORACLE_TEXT_SEARCH; END;';
    v_interval VARCHAR2(1000) := 'trunc(SYSDATE)+1+7/24';
    BEGIN
    DBMS_JOB.SUBMIT(v_job_number, v_job_command, sysdate, v_interval, false);
    COMMIT;
    END;
    About your error:
    PLS-00201: identifier 'PREPARE_ORACLE_TEXT_SEARCH' must be declared
    ORA-06550: line 1, column 96:
    PL/SQL: Statement ignored
    The problem is that the job cannot find the procedure (maybe it is owned by another user). The user who runs the job is not the same as the owner of the package.
    Bye, Aron

    You forgot the semicolon after END. But we don't need a BEGIN - END block here, so it's OK:
    v_job_command VARCHAR2(1000) := 'PREPARE_ORACLE_TEXT_SEARCH;';
    As you rightly mentioned, it is probably a problem with the owner or a typo in the name of the procedure.
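    A minimal sketch of the ownership fix (APPOWNER is a hypothetical schema name): either qualify the procedure with the schema that owns it, or GRANT EXECUTE on it to the user that submits the job.

    DECLARE
      v_job_number NUMBER;
    BEGIN
      DBMS_JOB.SUBMIT(
        job       => v_job_number,
        what      => 'APPOWNER.PREPARE_ORACLE_TEXT_SEARCH;',   -- owner-qualified call (APPOWNER is a placeholder)
        next_date => SYSDATE,
        interval  => 'trunc(SYSDATE)+1+7/24',
        no_parse  => FALSE);
      COMMIT;
    END;
    /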
    Regards
    Dmytro Dekhtyaryuk

  • Unable to add remote system into cluster using osx 10.5.2

    About a month ago, I had a Qmaster-managed Compressor cluster set up with three (3) systems. I was running FCP 6.0 on one system, with Qmaster on that system managing the cluster. Compressor, Qmaster, and QuickTime were installed on the other systems. All systems were running OS X 10.5, and the FCP 6.0 suite tools were installed on one system. One of the systems was an Intel, and I had two (2) instances set up, as well as a virtual cluster, on the Intel machine. All worked perfectly.
    All machines were upgraded to the latest Qmaster, Compressor, QuickTime, and OS X 10.5.2 with the Leopard graphics updates.
    Now I can no longer join the remote systems into the cluster. On these systems I have Share and Managed set for both rendering and Compressor, yet in Qmaster they only show up as rendering nodes. If I remove the shared option, the nodes appear as an unmanaged Compressor service, but they are greyed out and cannot be added to a cluster.
    Before the update, they would display in Qmaster as both rendering and Compressor services and could be added to a managed Compressor cluster.
    Did the updates break something or is there a new requirement that I am missing?
    thanks

    I'm having exactly the same problem on multiple machines, both Intel octo-core and G5 quad-core. I'm running 10.5.4 with all the latest updates on all machines. Everything was working; now we can't drag any of the machines into a cluster to make a new one. Like you say, they only appear if Managed is unchecked (on the machine providing the QMaster service), and even then they are still greyed out and not draggable. And you can't save a cluster without specifying the cluster controller, which you can't do because nothing can be dragged in. The nodes appear to be unlocked (although the icon isn't very obvious), but even if they're locked, no password entry pops up when clicked, and none have a password set in their QMaster System Preferences.
    To test, I did a totally 100% fresh, pristine Leopard install on a dual G5, ran all OS upgrades, then did a fresh FCP Studio 2 install, ran the upgrades again, and repaired permissions just for good measure. No dice. Exactly the same problem as on the other machines. This is a brand new install and it doesn't work!
    Very frustrating problem and I can't believe more people aren't seeing it. Totally fresh install, what else can be done? Well, time to call AppleCare, I guess.

  • Can't submit my job - no cluster listed

    Hi,
    I'm having problems with Compressor (3.5): when I click Submit it brings up the submit screen, but the 'cluster' dropdown box doesn't give any options, so the Submit button is greyed out. I've tried manually adding the host, but it doesn't change the list. I've tried stopping and starting sharing through Qmaster (through the control panel) with no luck. The only thing I've done since it last worked is install the "Final Cut Studio Maintenance Pack" and run Compressor Repair, which is supposed to "diagnose and repair the fragile links between Compressor and Qmaster" - ya, it didn't work!! Any ideas?
    Cheers,
    Phil

    Did you blow out the preferences?
    Do that first, for both Compressor and Qmaster.
    If that doesn't work, remove them and reinstall. Digital Rebellion publishes some excellent software for removing the apps selectively. It's important to completely remove them before reinstalling; otherwise you can end up with the same preference errors causing the same problems.
    From Jon Chappell at Digital Rebellion:
    http://www.digitalrebellion.com/blog/posts/howto_trashpreferences.html

  • Execute commonj.WorkManager work on remote member of application cluster

    Hi,
    We have configured a WorkManager for our web application which works with Spring & Quartz to execute jobs asynchronously. However, all jobs get picked up by the same machine and the other members of the cluster are not given any tasks to execute.
    The commonj documentation says that a serializable Work returns a RemoteWorkItem which can execute jobs on a remote JVM. The WebLogic docs, however, say that WebLogic doesn't support this interface and implements its own cluster load balancing.
    How do we configure the WorkManager so that work is distributed across remote JVMs in a cluster?
    Thanks.
    Regards
    Kaizer

    Hi,
    Do you have TestStand 2.x, or are you using a later version such as TS 3.5?
    TestStand 2.0 didn't handle remote execution; that wasn't available until TS 3.0 or TS 3.1.
    This example shows how an executable can access the same execution of a TestStand sequence file, not a remote execution.
    If you have TestStand 3.x, then there should be some examples employing remote execution within the TestStand examples folders, or try this link http://zone.ni.com/devzone/conceptd.nsf/webmain/955A560B0B0052B88625698500563621
    Regards
    Ray Farmer

  • Error when submitting job to Qmaster cluster

    Hi all,
    I'm new to working with the Qmaster cluster but I created a cluster (at least I think I did it right) using the distributed processing apple document from the help menu. Everything looks right...I have an active cluster with two machines. What is a little weird is that the Cluster that I can choose in Compressor when submitting a job has a format like "ThisComputer.RScomputer.local:50411" instead of the name of the cluster I made (I called it Zeus Cluster).
    So, I choose this long cluster name and submit the job but I get this error:
    Error: An internal error occurred: NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out'.
    Has anyone seen this error? What could I be doing wrong? Apple Qadministrator shows the cluster as active and both machines are sharing fine.
    Any help would be appreciated. Thank you!

    Have you looked in /Library/Logs/Qmaster for any specific detail? (Use /Applications/Utilities/Console.app.)
    There's usually some detail in there that will give you an insight. If you see something in there of significance, by all means post it here so we can examine it and make suggestions.
    I have had this before, and in my case it was related to the cluster setup I had.

  • CLI framerate and serial jobs on a cluster

    Hello,
    I have a couple of questions about Compressor (version 4).
    1) Is there a way to specify the frame rate for an image sequence using the command-line interface?
    The Compressor help lists the following but does not explain how such an option works; I tried a few different ways in vain:
    -jobpath -- URL to source file. -- In the case of an image sequence, the URL should be a file URL pointing to the directory with the image sequence.
    -- Additional parameters may be specified to set frameRate (e.g. frameRate=29.97) and audio file (e.g. audio=/usr/me/myaudiofile.mov).
    2) I have a managed cluster with 8 nodes, each running 4 instances. For some reason Compressor only uses 6 instances spread across many nodes and not all 32.
    3) Is there a way to specify and process just one job at a time for a given cluster? This would be the equivalent of a serialize option, but it does not seem to be available.
    Currently, when I submit multiple jobs, a few of them run at once, create havoc on the NFS mounts, and fail. I can limit the job queue to one, but that is not ideal.
    I would appreciate any pointers.
    Thanks
    --Amit

    Hi Amit, just saw your post. I assume you are passing the settings via the "-settingspath" <my_specific_job_settings_in_a_file_path_somwhere.settings> option on the compressor command?
    If so, it's a very simple matter to specify the frame rate etc. on the VIDEO setting itself, save it and use it.
    I don't recall that such "atomic settings" were actually available in v3 of compressor.app; I'll check later for v4. I'd be surprised if they are. :)
    What I've done in the past is to simply make my own personalised settings (and destinations) using the Compressor.app v4 UI (Save As... my name, i.e. "prores_noaudio_PAL") and pass those file paths on the "-settingspath" parm of the compressor CLI. In your case I'd imagine a simple VIDEO setting for your frame rate and you're set!
    Compressor.app v4 puts any of your own R.Y.O. settings in your ~/Library/Application Support/Compressor/Settings folder. You can copy, move or link these anywhere you like and pass these customised parms to the compressor CLI as above.
    I also doubt these atomic settings are available as AppleScript properties; there are likely no vars like that there, methinks. I recall the same ones exist as they do for the CLI, and now in Compressor.app 4 they probably support Qmaster. Yeah, Compressor.app is AppleScriptable; it's not in the default list library, so just add it.
    Lastly, as a guess, these compressor ".setting" files are probably XML based, so you might consider tinkering with one in an editor.
    Anyway, try the "-settingspath" operand and see how you go.
    2) The way Qmaster schedules across unmanaged transcode nodes is likely based on how busy each node is. You should be able to see if there is a pattern simply by looking at the Qmaster job controller log; see the Share Monitor.app info, or use console.app to look in your ~/Library/Application Support/Apple Qmaster/logs directory. There will be something in there.
    Also have a look for any errors in the cluster services logs in case the services you expect to be there are actually not.
    Are you using a managed cluster? Personally I have found this quite stable. Make sure those services are for managed clusters only.
    3) Yes, you can specify a specific cluster using the "-clusterid" operand. Should you have more than one managed cluster in the farm, this is a cool way to go. Also consider using the "-priority" operand for hot jobs, and make sure all other submissions are low priority. It's batch; works great!!
    4) NFS mounts: the simple rule is to keep them mounted on their own subnet, make sure all destinations and sources are available to all hosts, set the compressor option to copy to cluster only when you must, and make sure the service time for NFS I/O read requests is as fast as you can make it (jumbo frames, dedicated NICs, and optimally fast-read file systems; keep other traffic away from it). Works a treat!
    Should you turn something up, please post it here for others to see. I'm certainly interested.
    Sorry for typos; just on the MTR with my iPhone on the way home. :)
    Hth
    Warwick
    Hong Kong

  • Error while trying to enable remote on an HDInsight cluster

    I'm trying to enable Remote Desktop on our HDInsight cluster, but every time I do, I get this error:
    "An invalid passthrough request payload was submitted"
    Any ideas?

    Hi,
    Are you enabling remote access on your HDInsight cluster via the management portal or the .NET SDK?
    Are you following these steps?
    To enable Remote Desktop:
    1) Sign in to the Azure portal.
    2) Click HDINSIGHT on the left pane. You will see a list of deployed HDInsight clusters.
    3) Click the HDInsight cluster that you want to connect to.
    4) From the top of the page, click CONFIGURATION.
    5) From the bottom of the page, click ENABLE REMOTE.
    6) In the Configure Remote Desktop wizard, enter a user name and password for the remote desktop. Enter an expiration date in the EXPIRES ON box. The expiration time of day is assumed by default to be midnight of the specified date. Then click the check icon.
    You could refer to the following link for details:
    http://azure.microsoft.com/en-us/documentation/articles/hdinsight-administer-use-management-portal/#connect-to-hdinsight-clusters-by-using-rdp
    Please note that the user name must be different from the one used to create the cluster (admin by default with the Quick Create option), and that the expiration date must be in the future and no more than a week from the present.
    Hope this helps.
    Regards,
    Malar.

  • How to access work node in a HDInsight cluster?

    Hi,
    Is there any way to access the worker nodes? I want to check the workers' status and performance.
    Thanks,
    David
    Regards, David Shen

    The RDP user is unfortunately not an admin user. We are actively evaluating making that user an admin user. Until then there is a workaround: you can use a cluster customization script to create an admin user at cluster provisioning time, add it to the Remote Desktop Users group, and use that to log on to the cluster.
    Info on cluster customization can be found in this very helpful blog post:
    http://blogs.msdn.com/b/bigdatasupport/archive/2014/04/15/customizing-hdinsight-cluster-provisioning-via-powershell-and-net-sdk.aspx
    Maheshwar Jayaraman - http://blogs.msdn.com/mahjayar
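    A minimal sketch of what such a customization script might run on a node; the user name and password below are placeholders, and real secrets should be passed in securely rather than hard-coded:

    $user = "rdpadmin2"            # placeholder; must differ from the cluster admin user
    $pass = "Chang3Me!Example"     # placeholder password
    net user $user $pass /add
    net localgroup "Administrators" $user /add
    net localgroup "Remote Desktop Users" $user /add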

  • Trying to submit a job for Compressor

    I'm having trouble submitting my sequence once it's been imported into Compressor.
    After performing the necessary steps and then clicking the Submit button, the window displays the name and priority, but for the cluster it displays "no value". So I click Submit again, and a warning appears stating: "Unable to submit to queue"
    Please restart or verify your Compressor installation is correct.
    I've tried restarting, which didn't work either.
    I've also tried going into Qmaster under System Preferences, but that seems to be fine. Please help me!

    I have been experiencing the same issue with Compressor, unable to submit a job to be compressed.
    This issue started recently, after the Pro App update.
    I am using Compressor 3.5, there are no current updates available in "Software Update".
    I have not tried reinstalling Final Cut Studio yet; I would like to avoid that if possible. I'm sure there is a software update fix coming soon... In the meantime, I am unable to use Compressor.

  • Has anyone successfully sent a motion job to a cluster?

    I am trying to submit batch jobs to Compressor that have a Motion element associated with them. Whenever I submit jobs from the command line to Compressor via a cluster that do not contain Motion elements, the jobs submit and render fine.
    When I try and submit a batch that requires Motion I get timeout errors with a "Service Down" message.
    I've filed the question under QMaster too (see here for thread: http://discussions.apple.com/thread.jspa?threadID=1984897&tstart=0) which details my setup.
    I am at a loss as to why Motion prevents jobs from being rendered once submitted to the cluster. The same problem occurs when I submit via Compressor - so it's not a command line only problem.
    Any suggestions would be very much appreciated.

    Hi Patrick,
    Thanks for the reply. I've been hammering away at this for two weeks solid now. I ended up having an almost 2 hour discussion with Apple on it Thursday. So here is what they told me...
    1. A "headless" installation is not supported (i.e. plug in a monitor to the server otherwise it isn't a supported setup).
    2. The graphics card I have in the server, although rated between two compatible devices, is not officially supported
    So basically I ended up with "it's not possible running it from OS X Server, with Motion in a headless setup".
    Well I'm a die-hard when it comes to being told "it can't" and so this is where I am at.
    I reinstalled Final Cut on the server. I then set up a cluster with services and as a controller. Next I checked my Motion files to make sure they were accessing assets over an NFS connection (even though they were on the local machine - and this might have been the problem all along). I started the services, set up QAdministrator to manage the instances, and submitted the jobs.
    Success!! Loads of unprintable words were spewed with a few thanks to various beings in the universe. I basically had set up the cluster and it worked as advertised with Motion. I then installed the latest updates - boom. Everything stopped working. More expletives and back to square one.
    After uninstalling and reinstalling as per Apple's recommended route (see link in previous post), I now have an install working (without the latest updates) that renders properly. Here's the rub - and I think I might be pushing the envelope of feasibility here. When I submit a large number of jobs (say, 1000 - 4000) the render queue builds fine. The server renders multiple Motion jobs at once (in my test case 3 jobs at a time using 3 instances). After about 1000 jobs it craps out and dies. The only way to get it working again is to install the QMaster Service Node installer again. The jobs will pick up right where they left off, but now I'm babysitting renders...
    As far as I'm concerned I've still not finished solving this issue. I have to be able to farm out Motion jobs to a cluster (the sheer volume of render jobs to be done means I have no choice or the work just won't get done).
    What I need to do is figure out how to get a stable environment...
    So if anyone is following this. That's where I'm at. I'm still very much open to ideas, but I can definitely, without a doubt say that submitting motion jobs to a cluster IS possible and it works.
    So I'm closing this question.

  • Start one job after another completes using a PL/SQL procedure and DBMS_JOB

    All,
    I am attempting to refresh a materialized view using DBMS_JOB and having a PL/SQL program loop through each materialized view name that resides in a table I created. We do the table because they have to be refreshed in a specific order and I utilize the ORDER_OF_REFRESH column to dictate which MV comes first, second, third, etc.
    Now - I have this working to the extent that it kicks off 4 materialized views (currently the procedure is set to only do 4 MVs for testing purposes), but I would ultimately like the procedure to create a new DBMS_JOB that calls DBMS_MVIEW.REFRESH of the next view in line ONLY after the preceding materialized view's DBMS_JOB completes.
    The purpose of all of this is to do a few things. One - if I simply create a procedure with the DBMS_MVIEW.REFRESH call to each materialized view in order, that works, but if one fails, the job starts over again and will do so up to 16 times - BIG PROBLEM. Secondly, we want the job that will call this procedure to fail if it encounters 2 failures on any one materialized view (because some MVs may be dependent upon that data and cannot use old, stale data).
    This may not be the "best" approach but I am trying to make the job self-sufficient in that it knows when to fail or not, and doesn't kick off the materialized views jobs all at once (remember - they need to start one after the other - in order).
    As you can see near the bottom, my logic doesn't work quite right. It kicks off all four jobs at once with the date of whatever LAST_REFRESH is in my cursor (which ultimately is from the prior day). What I would like to happen is this:
    1.) 1st MV kicks off as DBMS_JOB and completes
    2.) 2nd MV kicks off with a start time of 3 seconds after the completion of the 1st MV (based off the LAST_REFRESH date).
    3.) This continues until all MVs are refreshed or until 2 failures are encountered, in which case no more jobs are scheduled.
    - Obviously I am having a little bit of trouble with #2 and #3 - any help is appreciated.
    CREATE OR REPLACE PROCEDURE Next_Job_Refresh_Test2 IS
    V_FAILURES NUMBER;
    V_JOB_NO NUMBER;
    V_START_DATE DATE := SYSDATE;
    V_NEXT_DATE DATE;
    V_NAME VARCHAR2(30);
    V_DELIMITER VARCHAR2(1);
    CURSOR MV_LIST IS SELECT DISTINCT A.ORDER_OF_REFRESH,
                                  A.MV_OBJECT_NAME
                        FROM CATEBS.DISCO_MV_REFRESH_ORDER A
                        WHERE A.ORDER_OF_REFRESH < 5
                   ORDER BY A.ORDER_OF_REFRESH ASC;
    CURSOR MV_ORDER IS SELECT B.ORDER_OF_REFRESH,
                                  B.MV_OBJECT_NAME,
                                  A.LAST_REFRESH
                             FROM USER_SNAPSHOTS A,
                                  DISCO_MV_REFRESH_ORDER B
                             WHERE A.NAME = B.MV_OBJECT_NAME
                        ORDER BY B.ORDER_OF_REFRESH ASC;
    BEGIN
    FOR I IN MV_LIST
    LOOP
    IF I.ORDER_OF_REFRESH = 1
    THEN V_START_DATE := SYSDATE + (30/86400); -- Start job one minute after execution time
              ELSE V_START_DATE := V_NEXT_DATE;
    END IF;
         V_FAILURES := 0;
         V_JOB_NO := 0;
         V_NAME := I.MV_OBJECT_NAME;
         V_DELIMITER := '''';
    DBMS_JOB.SUBMIT(V_JOB_NO,'DBMS_MVIEW.REFRESH(' || V_DELIMITER || V_NAME || V_DELIMITER || ');',V_START_DATE,NULL);
              SELECT JOB, FAILURES INTO V_JOB_NO, V_FAILURES
              FROM USER_JOBS
              WHERE WHAT LIKE '%' || V_NAME || '%'
              AND SCHEMA_USER = 'CATEBS';
    IF V_FAILURES = 3
    THEN DBMS_JOB.BROKEN(V_JOB_NO,TRUE,NULL); EXIT;
    END IF;
    FOR O IN MV_ORDER
    LOOP
    IF I.ORDER_OF_REFRESH > 2
    THEN V_NEXT_DATE:= (O.LAST_REFRESH + (3/86400)); -- Start next materialized view 3 seconds after completion of prior refresh
    END IF;
    END LOOP;
    END LOOP;
    EXCEPTION
    WHEN NO_DATA_FOUND
         THEN
              IF MV_LIST%ISOPEN
                   THEN CLOSE MV_LIST;
              END IF;
    NULL;
    END Next_Job_Refresh_Test2;
    ---------------------------------------------------------------------------------------------------------------------

    Justin,
    I think I am getting closer. I have a procedure shown just below this that updates my custom table with information from USER_SNAPSHOTS to reflect the time and status of the refresh completion:
    CREATE OR REPLACE PROCEDURE Upd_Disco_Mv_Refresh_Order_Tbl IS
    V_STATUS VARCHAR2(7);
    V_LAST_REFRESH DATE;
    V_MV_NAME VARCHAR2(30);
    CURSOR MV_LIST IS SELECT DISTINCT NAME, LAST_REFRESH, STATUS
                             FROM USER_SNAPSHOTS
                        WHERE OWNER = 'CATEBS';
    BEGIN
    FOR I IN MV_LIST
    LOOP
         V_STATUS := I.STATUS;
         V_LAST_REFRESH := I.LAST_REFRESH;
         V_MV_NAME := I.NAME;
    UPDATE DISCO_MV_REFRESH_ORDER A SET A.LAST_REFRESH = V_LAST_REFRESH
    WHERE A.MV_OBJECT_NAME = V_MV_NAME;
    COMMIT;
    UPDATE DISCO_MV_REFRESH_ORDER A SET A.REFRESH_STATUS = V_STATUS
    WHERE A.MV_OBJECT_NAME = V_MV_NAME;
    COMMIT;
    END LOOP;
    END Upd_Disco_Mv_Refresh_Order_Tbl;
    Next, I have a "new" procedure (shown just below) that does the job creation and refresh. When starting the loop, it sets the LAST_REFRESH date in my table to NULL and the STATUS = 'INVALID'. Then, if the order of refresh = 1, it uses SYSDATE to submit the job and start right away; otherwise it uses V_NEXT_DATE. Now, V_NEXT_DATE is equal to the LAST_REFRESH date from my table once the previous view has completed and V_PREV_STATUS = 'VALID'. I then tack on 2 seconds to that to begin my next job. See code below:
    CREATE OR REPLACE PROCEDURE Disco_Mv_Refresh IS
    V_FAILURES NUMBER;
    V_JOB_NO NUMBER;
    V_START_DATE DATE := SYSDATE;
    V_NEXT_DATE DATE;
    V_NAME VARCHAR2(30);
    V_PREV_STATUS VARCHAR2(7);
    CURSOR MV_LIST IS SELECT DISTINCT A.ORDER_OF_REFRESH,
                                  A.MV_OBJECT_NAME,
                                  A.LAST_REFRESH,
                                  A.REFRESH_STATUS
                        FROM CATEBS.DISCO_MV_REFRESH_ORDER A
                        WHERE A.ORDER_OF_REFRESH <= 5
                   ORDER BY A.ORDER_OF_REFRESH ASC;
    BEGIN
    FOR I IN MV_LIST
    LOOP
    V_NAME := I.MV_OBJECT_NAME;
    V_FAILURES := 0;
    UPDATE DISCO_MV_REFRESH_ORDER SET LAST_REFRESH = NULL WHERE MV_OBJECT_NAME = V_NAME;
    UPDATE DISCO_MV_REFRESH_ORDER SET REFRESH_STATUS = 'INVALID' WHERE MV_OBJECT_NAME = V_NAME;
    IF I.ORDER_OF_REFRESH = 1
    THEN V_START_DATE := SYSDATE;
    ELSE V_START_DATE := V_NEXT_DATE;
    END IF;
    DBMS_JOB.SUBMIT(V_JOB_NO,'DBMS_MVIEW.REFRESH(' || '''' || V_NAME || '''' || '); BEGIN UPD_DISCO_MV_REFRESH_ORDER_TBL; END;',V_START_DATE,NULL);
    SELECT A.REFRESH_STATUS, A.LAST_REFRESH INTO V_PREV_STATUS, V_NEXT_DATE
    FROM DISCO_MV_REFRESH_ORDER A
    WHERE (I.ORDER_OF_REFRESH - 1) = A.ORDER_OF_REFRESH;
    IF I.ORDER_OF_REFRESH > 1 AND V_PREV_STATUS = 'VALID'
    THEN V_NEXT_DATE := V_NEXT_DATE + (2/86400);
    ELSE V_NEXT_DATE := NULL;
    END IF;
    END LOOP;
    EXCEPTION
    WHEN NO_DATA_FOUND
         THEN
              IF MV_LIST%ISOPEN
                   THEN CLOSE MV_LIST;
              END IF;
    NULL;
    END Disco_Mv_Refresh;
    My problem is that it doesn't appear to be looping to the next job. It worked successfully on the first job but not the subsequent jobs (or materialized views in this case).... Any ideas?
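    One sketch of the chaining idea, reusing the table above (failure counting is left out for brevity, so this is a starting point rather than the exact fix): instead of computing every start date up front in one loop, let each job refresh its own view and submit the job for the next ORDER_OF_REFRESH only after its own refresh has succeeded.

    CREATE OR REPLACE PROCEDURE Refresh_And_Chain (p_order IN NUMBER) IS
      v_name DISCO_MV_REFRESH_ORDER.MV_OBJECT_NAME%TYPE;
      v_job  NUMBER;
    BEGIN
      SELECT MV_OBJECT_NAME INTO v_name
        FROM DISCO_MV_REFRESH_ORDER
       WHERE ORDER_OF_REFRESH = p_order;
      DBMS_MVIEW.REFRESH(v_name);
      -- Reaching this line means the refresh above succeeded, so it is safe
      -- to schedule the next view in the ordering, 3 seconds from now.
      DBMS_JOB.SUBMIT(v_job,
                      'BEGIN Refresh_And_Chain(' || TO_CHAR(p_order + 1) || '); END;',
                      SYSDATE + 3/86400);
      COMMIT;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        NULL;   -- no row for this ORDER_OF_REFRESH: the chain is finished
    END Refresh_And_Chain;
    /

    The chain is started with a single DBMS_JOB.SUBMIT of 'BEGIN Refresh_And_Chain(1); END;'. If a refresh raises an error, that job fails and is retried on its own, and the next view is never scheduled until it succeeds, so dependent views never run against stale data.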
