Start job on server B when job is finished on server A

Hi,
As the subject already explains: does anyone know how to schedule a job on server B (= BI server) so that it only starts after the successful end of a job on a different server A (= ERP server)?
kr,
Rutger

> As the subject already explains: does anyone know how to schedule a job on server B (= BI server) so that it only starts after the successful end of a job on a different server A (= ERP server)?
With the default scheduler (SM36) this is not possible, but you can use the external Redwood scheduler (SAP Central Process Scheduling), which is free of charge if you have a NetWeaver license.
http://wiki.sdn.sap.com/wiki/display/CPS/CentralProcessScheduling+%28CPS%29
Markus

Similar Messages

  • "Cannot connect to server" error when setting up MAMP local server in Dreamweaver cc 2014

    I am trying to set up my WordPress 4.1 site with MAMP 3.0.7.3 in Dreamweaver CC 2014, but I am getting a "Cannot connect to server" error when trying to view the site live. The site works fine at localhost:8888. I am using OS X Yosemite and PHP 5.6.2. Below are screenshots of my Dreamweaver settings. Thanks in advance.

    Hi Subhadeep!
    Think I'm getting closer (lol). I did as you suggested and changed the web URL to http://localhost:8888/Documents/projects/whoknew/, and I moved the local site to the Documents folder under the sub-folder projects. It is connecting to the server, but now I am getting "the requested URL cannot be found on the server". Below are screenshots of my current settings.
    Thank you!

  • Start an action automatically only when another action finishes

    Hi all,
    first, sorry for my bad English....
    I am an Italian architect; I use Photoshop actions mostly to manage many big TIFF files.
    I have been trying to solve this problem for a long time... but so far I can't.  :-(
    This is my problem: I would like to run some actions in sequence by launching only the first one; when the first finishes, the second starts automatically, when the second finishes, the third starts automatically, and so on.
    Example
    I have 3 actions and a folder "X" that contain tiff files:
    I launch action "A", which resizes, colorizes, etc. the TIFF files in folder "X" and puts them in a folder "Y".
    Only WHEN action "A" finishes...
    action "B" starts and works on folder "Y", logically grouping the TIFF files into some layered PSD files and putting them in folder "Z".
    Only WHEN action "B" finishes...
    action "C" starts and works on the PSD files in folder "Z", doing some "save as" at different resolutions into various folders (folder "JPG", folder "PDF", etc.).
    I hope I have explained my problem clearly.
    Thanks a lot
    Emanuela

    Hi Mylenium,
    I will try calling the existing actions from a new one (?), but I have to tell you that I don't understand your question: "Does that not give the same result, just only per individual file?"
    thanks
    Emanuela

  • Send job log as attachment when job cancelled

    Dear Friends,
    We have created job chain and also configured emails alerts functionality.
    Our requirement is to attach the job log file along with the email alert.
    Thanks in advance.
    Any suggestions are welcome.
    Regards
    Jiggi

    Dear ,
    You can configure the email attachment in destinations:
    Script name > Properties > Related Objects > Destinations
    Use the script sysjcs.RW_EMAIL_OUTPUT.
    Hope this will work.
    Thanks and regards
    Muhammad Asif

  • How to configure server location when you have a production server on Internet with PHP.

    I am trying to set up the 'Web root' and root URL with my datacenter server data.
    When I press 'validate configuration', I get an error.
    It only works if the PHP server is localhost.
    Regards

    See screenshot of Expanded Files Panel.  Remote Server icon is left of Testing Server icon.
    Nancy O.

  • Secondary DPM server does not show "protected server group" when attempting to protect primary server

    Hi Guys,
    I have a bit of a weird situation with DPM 2012 (pre-SP1). We have a primary server which is running quite happily (it protects 8 SQL servers). I'm attempting to set up a secondary. When I've done this in the past with other DPM versions (this version included),
    all I have to do is the following:
    Install the same version of DPM on the "secondary machine"
    Attach the agent on the "primary server"
    Create a new protection group for everything
    The catch is that I only see the following:
    I can successfully protect the database of the primary server. However the secondary server cannot see any computers protected on the primary for some reason. Any ideas why this object doesn't show at all? 

    Hi,
    At any time, was the primary DPM server rebuilt and its database restored?
    Can you verify that the DPMWriter service is running?
    Regards, Mike J. [MSFT]

  • Connection to server failed when selecting continue search on server

    Hi,
    I have a client that has an Exchange 2007 server and is having problems searching emails on their iPads (and iPhones). When they search and then hit "continue search on server", it comes back with "Connection to Server Failed".
    The client claims that it "used to work".
    Is there anything we can tweak on the iPad or the server to help?
    raffi

    Jim...
    See:  Issues sending or receiving mail on iPhone, iPad, or iPod touch > iCloud: Troubleshooting iCloud Mail

  • Error 500--Internal Server Error when running Facelet in Local Server

    Hi Experts,
    I have installed M2E plugin for eclipse and working on a Maven project in OEPE 12c.
    When running the facelet on the remote server, the results are returned, whereas when running it on the local server, the error below occurs:
    Error 500--Internal Server Error
    com.sun.faces.context.FacesFileNotFoundException: /showModule.xhtml Not Found in ExternalContext as a Resource
    at com.sun.faces.facelets.impl.DefaultFaceletFactory.resolveURL(DefaultFaceletFactory.java:232)
    at com.sun.faces.facelets.impl.DefaultFaceletFactory.resolveURL(DefaultFaceletFactory.java:273)
    at com.sun.faces.facelets.impl.DefaultFaceletFactory.getMetadataFacelet(DefaultFaceletFactory.java:209)
    at com.sun.faces.application.view.ViewMetadataImpl.createMetadataView(ViewMetadataImpl.java:114)
    at com.sun.faces.lifecycle.RestoreViewPhase.execute(RestoreViewPhase.java:233)
    Could anybody share some pointers?
    Thanks,
    Vijaya

    I created showModule.xhtml in the web.view.module\src\main\resources folder and tested the application, and now I'm getting the error in both deployment modes.
    a) Local deployment: Same result
    Error 500--Internal Server Error
    com.sun.faces.context.FacesFileNotFoundException: /showModule.xhtml Not Found in ExternalContext as a Resource
    at com.sun.faces.facelets.impl.DefaultFaceletFactory.resolveURL(DefaultFaceletFactory.java:232)
    at com.sun.faces.facelets.impl.DefaultFaceletFactory.resolveURL(DefaultFaceletFactory.java:273)
    b) Remote server:
    Error 500--Internal Server Error
    com.sun.faces.context.FacesFileNotFoundException: /showModule.xhtml Not Found in ExternalContext as a Resource
    at com.sun.faces.facelets.impl.DefaultFaceletFactory.resolveURL(DefaultFaceletFactory.java:232)
    at com.sun.faces.facelets.impl.DefaultFaceletFactory.resolveURL(DefaultFaceletFactory.java:273)
    Please check the below screenshots for the mappings captured in the properties window.
    http://imageshack.us/photo/my-images/5/srwebviewmodule.png/
    http://imageshack.us/photo/my-images/811/eclipseexplorer.png/
    http://imageshack.us/photo/my-images/521/cdiandrichfacesear.png/
    http://imageshack.us/photo/my-images/90/cdiandrichfaces.png/
    Thanks,
    Vijaya

  • How to avoid deadlocks when you schedule SQL Server Agent jobs calling SSIS packages

    Hi All,
    I have scheduled 2 packages in SQL Server Agent jobs.
    The first job, which contains Package 1, executes at 11 AM and inserts data into the table.
    The second job, which contains Package 2, executes at 12 AM and updates the data in the table based on the records the first job inserted.
    When I execute my first job, it takes longer and runs until 12 AM, and from 12 AM my job 2 also starts, so I am getting deadlock conflicts because inserts are happening from Job 1 and updates are happening from Job 2.
    How can I avoid the deadlocks and fix the issue?
    Please suggest.
    Thanks & Regards,
    Anand

    Hi Anand,
    Here is another solution: you can set Job 2 not to run on a schedule, and instead create another SQL Server Agent job which starts at 12 AM and runs at a specified interval to execute a SQL statement in which you do the following steps:
    1. Get the status information of Job 1 using the statement:
    DECLARE @i int;
    EXEC @i = msdb.dbo.sp_help_job @job_name = 'Job Name';
    2. If the value of @i is 1, which means the status of Job 1 is success, then start Job 2. So the statement is as follows (a fuller sketch follows below):
    IF @i = 1
        EXEC msdb.dbo.sp_start_job @job_name = 'Job Name';
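    A minimal, self-contained sketch of the same idea, hedged: the job names 'Job 1' and 'Job 2' are placeholders, and this variant reads the last run outcome of Job 1 from the Agent history tables rather than relying on the return code of sp_help_job:
    -- Check whether the most recent run of 'Job 1' succeeded, and if so start 'Job 2'.
    -- Job names are placeholders; adjust them to your own jobs.
    DECLARE @outcome int;
    SELECT TOP (1) @outcome = h.run_status          -- 1 = succeeded, 0 = failed
    FROM msdb.dbo.sysjobhistory AS h
    JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
    WHERE j.name = 'Job 1'
      AND h.step_id = 0                             -- step 0 is the overall job outcome row
    ORDER BY h.instance_id DESC;
    IF @outcome = 1
        EXEC msdb.dbo.sp_start_job @job_name = 'Job 2';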
    Regards,
    Mike Yin
    TechNet Community Support

  • Start background Job when another is finished (NOT with the JobSteps)

    Hi guys,
    I need your help.
    I've already searched here in the forum, but I wasn't able to find a good solution.
    I have this problem.
    I have a program that creates a background job with the FM sequence JOB_OPEN .. SUBMIT report WITH parameters .. JOB_CLOSE.
    I want that, if I run this report again (20 seconds later, for example), it does the following:
    - Check whether there is an already running job with the same name (i.e. with state 'R'). (This task is simple: a select on table TBTCO.)
    - If one is found, it has to create a new job with the same name that starts automatically AFTER the first running job has finished (regardless of the end state of the first job).
    I've already tried the PRED_JOBCOUNT and PRED_JOBNAME parameters of the JOB_CLOSE FM, but it doesn't work!
    JOB_CLOSE creates a job in Planned state, but when the first job is finished, the second (Planned) job doesn't start automatically.
    In this scenario, I CANNOT use endless loops (wait until the first job is finished and then submit the second), job steps (one job that contains multiple steps), or events (I have to start only one job after the predecessor is finished), because this report could be run many times and the jobs should be collected like a "stack" (only when the first job is finished should the second "registered" one be started, and so on, until there are no more planned jobs).
    <REMOVED BY MODERATOR>
    Thx a lot for your help.
    Andrea
    Edited by: Alvaro Tejada Galindo on Jun 12, 2008 12:19 PM

    Hi Veda
    I can tell you, but ... some reward points would be very appreciated ...
    I'm joking (of course).
    Here is the question:
    I have a program (called A) that submits a new program (called B) with JOB_OPEN .. SUBMIT .. JOB_CLOSE. Program B should start only if another instance of program B (called earlier, for example) has finished.
    Here is the solution.
    I add a parameter (with the NO-DISPLAY clause) to program B. In this parameter I pass the job number returned by the JOB_OPEN function to the program.
    When I create the job with the function, the "jobname" parameter is set to 'G_DELIVERY' (here you can change the name of the job as you want: this is the job name that you see in transaction SM37).
    In the START-OF-SELECTION of program B I put a "waiting" procedure like this:
    First I save a timestamp of the system date and system time (called, for example, r_date and r_time).
    Then I select from table TBTCO all the jobs called "G_DELIVERY" with a job number different from the job number parameter (that means excluding itself), with status running ('R'), that have a start date/start time earlier than r_date and r_time (this is the key part of the selection that solves the problem).
    If one is found (meaning there is another running job that started before this one), wait up to 60 seconds (for example) and then repeat the selection.
    When the job that was called before ends, this program exits from the loop and continues. If you submit more "B" programs, they'll work like a stack.
    I should say just one thing to you... I solved my problem in another way (I changed the logic, so the problem went away) and didn't actually implement that approach, but it should work very well.
    Try it and tell me!
    Bye
    Andrea

  • Error while Setting Up Oracle Content Server to Send Jobs to Oracle IBR

    Hi,
    I am trying to configure Oracle Content Server to send jobs to IBR.
    I am using following version of UCM:
    11gR1-11.1.1.3.0-idcprod1-100505T121221 (Build:7.3.0.180)
    Both UCM and IBR are using same WAS domain. Installed on Windows server 2008.
    1. I have started both managed servers, for UCM and IBR.
    2. Then, browsing the IBR console at http://vpunvfpctnsz-07:16250/ibr/ , I changed the
    Incoming Socket Connection Address Security Filter:
    127.0.0.1|0:0:0:0:0:0:0:1|<<my.server.IP.address>>
    3. Enabled the DAMConverter component on IBR.
    4. Restarted IBR.
    5. Created an outgoing provider on the UCM content server as follows:
    Provider Name: IBR
    Provider Description: Provider for IBR
    Provider Type: outgoing
    Provider Class: intradoc.provider.SocketOutgoingProvider
    Provider Connection: intradoc.provider.SocketOutgoingConnection
    Instance Name: VPUNVFPCTN955099yscom16250 << same as IBR server name >>
    Server Host Name: vpunvfpctnsz-07
    HTTP Server Address:
    Server Port: 16250
    Relative Web Root: /ibr/
    Conversion Options: Handles Inbound Refinery Conversion Jobs
    Refinery read-only mode: False
    Maximum Jobs to Queue: 1000
    It is showing the following status:
    Connection State: this remains "good" when I click Test, and after some time changes to "down".
    Connection Error: Unable to communicate with refinery provider IBR; it does not resolve to a valid IBR. Exception type is 'java.lang.Throwable'.
    Did I miss any step?
    Please suggest.
    Thanks and regards,
    Minal

    Hi
    Server Port: 16250
    This should be the value of IntradocServerPort for the IBR server.
    By default it is 5555.
    Replace 16250 with 5555 (if you have not changed it).
    Save the changes and restart the UCM managed server.
    Test to see if the error still shows up.
    Hope it helps.
    Thanks
    Srinath

  • SQL Server Agent and Jobs and executing @EventData

    I have a SQL Server Agent Job and within it a Job Step which states "Execute Report Subscriptions" and a command which has...
    exec msdb.dbo.sp_start_job '6FF53AED-855F-43AB-9FB7-064062B8012E' --9:07 subscription
    GO
    WAITFOR DELAY '00:08';
    Now, I find within SQL Server Agent the Job  '6FF53AED-855F-43AB-9FB7-064062B8012E' and its step command which is...
    exec [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='ca4e5410-2758-4a1a-9b06-513821e0d962'
    How can I drill down further into the @EventData value 'ca4e5410-2758-4a1a-9b06-513821e0d962' to see what it does? I do not see 'ca4e5410-2758-4a1a-9b06-513821e0d962' anywhere within SQL Server Agent and Jobs, or am I way off base here as to what exactly the @EventData
    parameter is?
    Thanks for your review and am hopeful for a reply.

    Hi ITBobbyP,
    According to your description, you want to know what SQL Server Reporting Services does when it fires a subscription, right?
    When you create a subscription, several things are added to the RS server:
    A row is placed in the Subscriptions table identifying the report, along with the parameter settings, data-driven query info, and so on, needed to process the subscription.
    A row is placed in the Schedule and ReportSchedule tables with the timing of the subscription.
    A SQL Server Agent job is created to control the scheduled execution of the report; this is stored in the sysjobs and sysjobsteps tables of the msdb database.
    When the subscription runs, several things happen:
    The SQL Server Agent job fires and puts a row in the Event table of the RS catalog with the settings necessary to process the subscription.
    The RS server service has a limited number of threads (2 per CPU) that poll the Event table every few seconds looking for subscriptions to process.
    When it finds an event, it puts a row in the Notifications table and starts processing the subscription (a lookup sketch follows below).
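    As an illustration of how the @EventData GUID ties back to a subscription, here is a hedged lookup sketch; it assumes a default ReportServer catalog database, and table/column names can vary between SSRS versions:
    -- The GUID passed to AddEvent as @EventData is the SubscriptionID,
    -- so it can be resolved against the ReportServer catalog tables.
    SELECT s.SubscriptionID,
           c.[Path]      AS ReportPath,
           s.Description AS SubscriptionDescription,
           s.LastStatus,
           s.LastRunTime
    FROM ReportServer.dbo.Subscriptions AS s
    JOIN ReportServer.dbo.[Catalog] AS c ON c.ItemID = s.Report_OID
    WHERE s.SubscriptionID = 'ca4e5410-2758-4a1a-9b06-513821e0d962';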
    Please refer to the links below to see the details.
    http://blogs.msdn.com/b/deanka/archive/2009/01/13/diagnosing-and-troubleshooting-subscriptions.aspx
    http://blogs.msdn.com/b/deanka/archive/2010/02/16/troubleshooting-subscriptions-part-ii-using-the-report-services-trace-log-file.aspx
    Regards,
    Charlie Liao
    TechNet Community Support

  • SQL Server 2012 syspolicy_purge_history job causes cross-instance login failures w. EraseSystemHealthPhantomRecords

    I have unique service accounts set up for multiple instances on the same SQL Server 2012.
    When step 3 of the inbuilt syspolicy_purge_history job (Erase Phantom System Health Records) runs, it appears to attempt to run against every instance on the server despite being passed the instance path!
    The SQL Server PowerShell script call is:
    if ('$(ESCAPE_SQUOTE(INST))' -eq 'MSSQLSERVER') {$a = '\DEFAULT'} ELSE {$a = ''};
    (Get-Item SQLSERVER:\SQLPolicy\$(ESCAPE_NONE(SRVR))$a).EraseSystemHealthPhantomRecords()
    so with instances SERVER\X this runs as...
    (Get-Item SQLSERVER:\SQLPolicy\SERVER\X).EraseSystemHealthPhantomRecords()
    SERVER\X's job will run and I will see login failures in the error logs of SERVER\Y and SERVER\Z for the service account set up for instance X.
    It seems Microsoft's only 'accepted solution' to this problem is for me to compromise my security by escalating the access of these service accounts?
    Has anyone else run into and corrected this failure?

    Hi Atombath,
    When you install multiple instances on one server, the SQL Server PowerShell scripts in the inbuilt syspolicy_purge_history job steps are the same. However, when you start PowerShell by right-clicking the syspolicy_purge_history job, you will find that it points to its own instance. I did a test on my SQL Server 2012, and it did not go across instances to collect the error logs. So I recommend you use the original PowerShell scripts for the syspolicy_purge_history job.
    Sometimes, if you run the syspolicy_purge_history job on a clustered instance, the SQL Server Agent job may fail because it uses the computer node name instead of the virtual server name. For more information, see:
    http://support.microsoft.com/kb/955726/en-us
    In addition, you can use a different service account for each of your SQL Server instances on the same server. Make sure the accounts you created are added to the sysadmin fixed server role and are also set in the three Agent roles (SqlAgentUserRole, SqlAgentReaderRole, and SqlAgentOperatorRole).
    Regards,
    Sofiya Li
    TechNet Community Support

  • Windows Server 2012 R2 job scheduler failing

    Hi,
    Currently, my company's Windows Server 2012 R2 job scheduler is having a problem.
    I found a solution at http://support.microsoft.com/kb/2617046, so I downloaded the hotfix to fix the problem. However, I get another error, shown below.
    Please, what can I do?
    Thanks 
    Best regards,
    Vincent.

    Amy,
    Thanks for the tips; I followed the settings, checked the conditions, and troubleshot according to the list you provided.
    The task was configured to run only when a specified network is available. (Not set this)
    The configured expiration time for the task has passed. (no expired time set)
    The task is configured to ignore or queue a new task instance if a previous instance is still running. (not set this)
    The task is configured not to run when the computer is on battery power. (Yes, it will not run when the computer is on battery power; that will not happen because it is a server, always running on AC power.)
    The task is configured to run only if a specific user is logged on. (must select this, if not, the scheduler will not run at all)
    The task was disabled by a user. (is enabled)
    The previous task instance might have been running longer than expected because a component is busy processing data. If the task is normally expected to run for this length of time, consider modifying the task triggers to take this run-time length into consideration, or configure the task to be terminated after a preset time. (Already set: "Stop the task if it runs longer than 3 days", "If the running task does not end when requested, force it to stop", and "Do not start a new instance".)
    Frankly, for the moment it runs normally; I just don't know when the incident will happen again. It seems to happen randomly, not in any predictable way, which is why I really need the fix for this bug. I cannot trigger an email when this incident happens,
    and the schedule we run is highly critical.
    Thanks.

  • System exception while deleting the file from app server in background job

    Hi All,
    I have an issue while deleting a file from the application server.
    I am using the DELETE DATASET statement in my program to delete the file from the app server.
    I am able to delete the file when I run the program from the app server.
    When I run the same report as a background job, I get a message saying "System exception".
    Is there any security setting I need in order to resolve the issue?
    Thank You,
    Taragini

    Hi All,
    I have all the authorizations to delete the file from the application server.
    The thing is, I am able to run the program successfully in the foreground but not in the background.
    It is not giving any short dump either; the job is just cancelled with the exception 'Job cancelled after system exception ERROR_MESSAGE'.
    Can anybody please give me a suggestion?
    Thanks,
    Taragini
