TrackedMessages_Copy_BizTalkMsgBoxDb job is overrunning
Hi, we have an application running on BizTalk Server 2004 with SQL Server 2000. It worked fine for the past couple of years, but for the past 6 months we have noticed that the
TrackedMessages_Copy_BizTalkMsgBoxDb job overruns and fails. It happens whenever there is a sudden spike in the number of messages coming into BizTalk Server. The application also stops working, and we are forced to reboot all the servers
to get back to normal. This did not happen before.
TrackedMessages_Copy_BizTalkMsgBoxDb is scheduled in SQL Server Agent to run every minute. Nowadays, when the message volume increases, the job runs for around 4-5 hours without completing; we are forced to
cancel the job and run it again, and sometimes it fails.
Can someone please help me with this?
Thanks and Regards, Bala
Hi Bala,
Based on my research, the TrackedMessages_Copy_BizTalkMsgBoxDb job may fail with a deadlock error. To work around such issues, kill the blocked processes on the BizTalk database SQL Server. You may also need to apply the latest service pack to
BizTalk Server. For more information, I recommend you check the following articles:
BizTalk Job 'TrackedMessages_Copy_BizTalkMsgBoxDb' fails with deadlock message
FIX: The TrackedMessages_Copy_BizTalkMsgBoxDb SQL Server Agent job fails
TrackedMessages_Copy_BizTalkMsgBoxDb job fails in regular intervals
In addition, since the issue is more related to BizTalk Server, I recommend that you ask the question in the BizTalk Server forums at
https://social.technet.microsoft.com/Forums/en-US/home?category=biztalkserver , where more experts will be able to assist you.
Thanks,
Lydia Zhang
Similar Messages
-
How to identify all jobs that were running at a particular point in time?
TES 6.1.0.391
From time to time, we have a need to identify all jobs that were running at a particular moment in time on a particular agent (we have about 800 agents)...eg "what was running ("Active") at 09:03:42 a.m. two days ago on agent XYZ?"
I've used other job schedulers, and have written queries to extract that info, but I thought before I work on one for Tidal that I would ask the community....how are you getting this info?
Any help is greatly appreciated, thanks.
I had some time over the weekend and was able to come up with something of use.
Please note that our repository is MSSQL
select jobmst_prntname as ParentJobName,
a.jobmst_prntid as ParentJobId,
a.jobmst_id as JobId,
a.jobdtl_id as JobDetailID,
jobmst_name as JobName,
b.owner_name as JobOwnerName,
c.jobdtl_cmd as JobCommand,
c.jobdtl_params as JobParameters,
jobmst_lstchgtm as LastUpdateDate,
d.nodlstmst_name as AgentListName
,[jobrun_status]
,[jobrun_duration]
,[jobrun_time] as starttime
,DATEADD(ss,jobrun_duration, jobrun_time) as endtime
,f.nodmst_name as AgentName
,[jobrun_owner]
,[jobrun_cmd]
,[jobrun_rundt]
,[jobrun_batch]
,[jobrun_params]
,[jobrun_launchtm]
,[jobrun_fullpath]
from Admiral.dbo.jobmst a,
Admiral.dbo.[owner] b,
Admiral.dbo.jobdtl c,
Admiral.dbo.nodlstmst d,
Admiral.dbo.jobrun e,
Admiral.dbo.nodmst f
where a.jobmst_owner=b.owner_id
and a.jobdtl_id=c.jobdtl_id
and c.nodlstmst_id=d.nodlstmst_id
and e.jobmst_id=a.jobmst_id
and e.nodmst_id=f.nodmst_id
and jobmst_active='Y' --This condition shows only the active jobs
and jobrun_rundt ='2014-01-26' --This is the job run date. If the job finishes the next day, that is what is going to be used.
and f.nodmst_name = 'abc' --This is where you input your agent name
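For the point-in-time part of the question ("what was Active at 09:03:42?"), the underlying check is whether that moment falls inside [jobrun_time, jobrun_time + jobrun_duration], i.e. between starttime and endtime in the query. A minimal sketch of that interval test (Python here, purely illustrative; the job names and times are made up):

```python
from datetime import datetime, timedelta

def active_at(runs, t):
    """Return names of runs whose [start, start + duration] window contains
    the instant t -- the same predicate you would add to the query as:
    jobrun_time <= @t AND @t <= DATEADD(ss, jobrun_duration, jobrun_time)."""
    hits = []
    for name, start, duration_secs in runs:
        end = start + timedelta(seconds=duration_secs)
        if start <= t <= end:
            hits.append(name)
    return hits

# Hypothetical run history for one agent
runs = [
    ("JobA", datetime(2014, 1, 26, 9, 0, 0), 600),   # 09:00:00-09:10:00
    ("JobB", datetime(2014, 1, 26, 8, 0, 0), 1800),  # 08:00:00-08:30:00
]
print(active_at(runs, datetime(2014, 1, 26, 9, 3, 42)))  # ['JobA']
```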
Hope this helps! -
Eclipse - Application works via Run but not via Debug
Using Eclipse 3.2.0 on WinXP with jdk1.6.0_06, my project runs completely normally via Run. An attempt to Debug aborts.
In the Eclipse log:
!ENTRY org.eclipse.jdt.launching 4 120 2008-05-09 11:21:14.140
!MESSAGE Cannot establish connection to VM
!STACK 0
java.net.BindException: Address already in use: JVM_Bind
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.PlainSocketImpl.bind(Unknown Source)
at java.net.ServerSocket.bind(Unknown Source)
at java.net.ServerSocket.<init>(Unknown Source)
at java.net.ServerSocket.<init>(Unknown Source)
at org.eclipse.jdi.internal.connect.SocketTransportService.startListening(SocketTransportService.java:275)
at org.eclipse.jdi.internal.connect.SocketTransportImpl.startListening(SocketTransportImpl.java:47)
at org.eclipse.jdi.internal.connect.SocketListeningConnectorImpl.startListening(SocketListeningConnectorImpl.java:108)
at org.eclipse.jdt.internal.launching.StandardVMDebugger.run(StandardVMDebugger.java:202)
at org.eclipse.jdt.launching.JavaLaunchDelegate.launch(JavaLaunchDelegate.java:101)
at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:639)
at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:565)
at org.eclipse.debug.internal.ui.DebugUIPlugin.buildAndLaunch(DebugUIPlugin.java:754)
at org.eclipse.debug.internal.ui.DebugUIPlugin$6.run(DebugUIPlugin.java:944)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:58)
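The BindException above means the port the debugger tries to listen on is already owned by another process. The condition is easy to check outside Eclipse; here is a small illustrative sketch (Python, not part of the original post):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if binding to (host, port) fails, i.e. something already
    owns the port -- the same condition that makes the JDWP listener raise
    java.net.BindException: JVM_Bind."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True
```

If this returns True for the port the debug launch uses, find and end the process holding it (e.g. with netstat -ano on Windows) before starting the debug session.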
Because this happened again and again, I set up an external process monitor.
When the application runs:
C:\Programme\Java\jdk1.6.0_06\bin\javaw.exe uses port 1972 TCP
C:\Programme\Java\jdk1.6.0_06\bin\javaw.exe uses port 1974 TCP
C:\Programme\Java\jdk1.6.0_06\bin\javaw.exe uses port 1975 TCP
When the debug launch aborts:
C:\WINDOWS\system32\javaw.exe uses port 1954 TCP
The last process called, before the entry is written to the Eclipse log, is:
C:\Programme\Java\jdk1.6.0_06\bin\javaw.exe -Xms40m -Xmx256m -jar C:\Programme\eclipse\startup.jar -os win32 -ws win32 -arch x86 -launcher C:\Programme\eclipse\eclipse.exe -name Eclipse -showsplash 600 -exitdata f70_788 -vm C:\Programme\Java\jdk1.6.0_06\bin\java.exe
I tried calling and/or locating it directly beforehand, without success:
C:\Programme\Java\jdk1.6.0_06\bin\jre\javaw
C:\Programme\Java\jdk1.6.0_06\bin\javaw
Firewall blocks jnlp
-
Sql Server Agent: job hasn't run once today. Scheduling problem?
I created this job yesterday at about 4PM; the view history shows that it last ran successfully at 11:53PM. These are the settings I put:
Schedule Type: Recurring
Occurs: Daily
Recurs every: 1 days(s)
Occurs every: 5 minute(s)
Starting at: 05:00:00 PM
Ending at: 11:59:59 PM
Start Date: 10/30/2014
No End Day (selected)
The job is enabled, but it hasn't run once yet today. I don't want to start it manually because it should've started already. It is currently not running.
What can the problem be?
Thanks.
VM
The output is:
Microsoft SQL Server 2008 R2 (RTM) - 10.50.1617.0 (X64)
The length varies, but it's usually a bit over an hour to finish. It's set at 5 minutes so that, as soon as it completes, the job runs again. The job history yesterday was: 5:48P, 7:03P, 7:43P, 8:58P, 9:53P, 10:58P, 11:53P. The job downloads some files,
which is why its length varies.
VM
I would say there is no point in scheduling a job that runs for 1 hour to run every 5 minutes, although per SQL Server Agent logic, if the job is currently running and misses a schedule, it will only start again once the current run is finished. I would change the schedule to
run every 1 hour.
Also, I cannot find the support article, but I know there was a bug where an Agent job could miss its schedule. Can you please apply
SQL Server 2008 R2 SP3? There are 2 reasons:
1. It might fix your schedule-skipping issue.
2. You would come under the purview of extended support, which I think is very important.
You can easily open Job Activity Monitor and look at the "Next Run Date" column.
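To see why a 5-minute recurrence with a roughly hour-long job yields starts every 65-75 minutes (matching the 5:48P, 7:03P, ... history above), here is a small simulation of the skip behavior described: a tick that fires while the job is still running is missed, and the next tick at or after completion relaunches the job. This is an illustrative model only (Python), not how Agent is implemented.

```python
def simulate_starts(first_tick_min, interval_min, run_mins, horizon_min):
    """Simulate an agent that fires on a fixed tick grid but skips any
    tick that lands while the previous run is still in progress."""
    starts, t = [], first_tick_min
    while t < horizon_min:
        starts.append(t)
        done = t + run_mins
        # next tick at or after the completion time (ceiling division)
        ticks_past = -(-(done - first_tick_min) // interval_min)
        t = first_tick_min + ticks_past * interval_min
    return starts

# A 62-minute job on a 5-minute schedule: starts end up ~65 minutes apart
print(simulate_starts(0, 5, 62, 200))  # [0, 65, 130, 195]
```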
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
Scheduled jobs fail to run after reboot
A couple of months back we moved our CF 8 server to a VM (VMWare). We have noticed that after the server (Windows OS) is rebooted, all scheduled jobs do not run. There are no errors in the logs. One oddity is that after the reboot in the scheduler log there are a series of entries for all jobs with the ThreadID of "main", after that there are no other entries. Normally when the job runs the ThreadID will be something like “Scheduler-1”. Here is where it gets really strange. Simply logging into the console will “trigger” the jobs and they will run. I do not have to manually initiate on of the jobs. This can be repeated over and over simply by rebooting the server. Manually stopping and starting the service does not trigger this issue nor will it “kick start” the jobs to run.
Update:
I opened a case with Microsoft Support and resolved the issue. Apparently this is a known issue, and the bug will be addressed in CU6. Microsoft gave me a hotfix (QFE_MOMEsc_4724.msi), which I applied on all systems that have the SCOM console. I
am told that this issue occurs when SCOM 2007 R2 CU5 runs on SQL 2008 R2.
I hope it helps to others that run into same problem.
ZMR -
Report job is not run at scheduled time
OS:WindowsServer2008R2
Oracle BI 11g(11.1.1.6.0)
CT defined a report job
====
Frequency Daily
Start 2013/01/14 AM 06:00:00
End 2020/01/14 PM 11:59:00
timezone [GMT+09:00]Osaka,Tokyo
====
but he found the report did not run at 06:00:00.
At 2013/01/16 08:50 AM, he logged in to BIP and went to Report Job History.
He found the status of this report was Running and
start processing was 2013/01/16 08:51:32 AM.
It seems the report job runs only after login.
In CT's environment, the following work is done every day:
22:00 stop Oracle BI (opmnctl.bat → stopManagedWebLogic.cmd → stopWebLogic.cmd)
00:00 restart server
05:45 clear cache (nQCmd.exe)
05:45 start Oracle BI (BI_START.bat)
06:00 report job does not run (no mail received) --> ★
Could you tell me why the report job does not run at 06:00 AM?
Can anyone give me some advice on investigating this issue? What information is needed? Any suggestion on this issue?
-
DBMS_SCHEDULER.RUN_JOB causes scheduled job not to run again
Environment: 10.2.0.2 Linux 64-bit
Hi,
I have some DBMS_SCHEDULER chain jobs scheduled to run every hour which has been running fine with no issues for a while now.
I needed to run a couple of jobs manually in-between scheduled times so I ran the job using DBMS_SCHEDULER.RUN_JOB, and that worked as expected as it has done in the past.
The issue is that the jobs I ran manually haven't run at their scheduled times since then. The only way around it was to recreate the job completely.
Any idea if this is normal functionality, or should I be raising this as a possible issue through Oracle Metalink?
Any help or ideas would be much appreciated.
Thanks
Tim
Hi Tim,
This is a known bug which is tracked internally by bug #5705385. It will be fixed in the next patchsets for 10.2 and 11.1 (i.e. 10.2.0.5 and 11.1.0.7) . If you urgently need a fix, an official patch seems to be available for 10.2.0.3 .
There does seem to be a workaround - using run_job again after the first run_job has completed will not do anything but the chain job should again run on schedule.
Hope this helps,
Ravi. -
Job is not running in Source system.
Hi Experts,
One issue I have because of this I am not able to load data into Data sources.
I am in BI 7.0 environment.
When I execute the InfoPackage, the total and technical statuses are yellow.
I found in SM37 that the job is not running in R/3, based on the following:
Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 0 records
Result of customer enhancement: 0 records
IDOC: Info IDoc 2, IDoc No. 3136, Duration 00:00:00
IDoc: Start = 14.05.2009 07:23:39, End = 14.05.2009 07:23:39
Synchronized transmission of info IDoc 3 (0 parallel tasks)
IDOC: Info IDoc 3, IDoc No. 3137, Duration 00:00:00
IDoc: Start = 14.05.2009 07:23:39, End = 14.05.2009 07:23:39
Job finished
The job start time and end time are the same, so the job is not running in the source system. Am I right? Let me know if I am wrong.
It is a standard DataSource, but in the log above the result of the customer enhancement is 0 records.
Please help me resolve this issue so I can load data into BI.
1. What do I have to do to run the job on the source system side?
2. Should I take any help from Basis?
Regards
Vijay
Edited by: vijay anand on May 14, 2009 3:02 PM
Edited by: vijay anand on May 14, 2009 3:04 PM
Hi Rupesh,
in RSA3 the data is available,
but it is not coming to BI.
Messages from source system
see also Processing Steps Request
These messages are sent by IDoc from the source system. Both the extractor itself as well as the service API can send messages. When errors occur, several messages are usually sent together.
From the source system, there are several types of messages that can be differentiated by the so-called Info-IDoc-Status. The IDoc with status 2 plays a particular role here; it describes the number of records that have been extracted in a source system and sent to BI. The number of the records received in BI is checked against this information.
The above message is what I am getting in the Details tab.
Regards
Vijay -
Scheduled jobs are not running DPM 2012 R2
Hi,
I recently upgraded my DPM 2012 SP1 to 2012 R2. The upgrade went well, but I got 'Connection to the DPM service has been lost' (event ID 917, along with other event IDs in the event log, such as 999 and 997). A few DPM backups succeed, but most of the DPM consistency
checks fail.
After investigating the log files, I found two SQL Server services running on the DPM 2012 R2 server: the 'sql server 2010' and 'sql server 2012' services. I stopped the SQL 2010 service and started only the SQL Server 2012 service (using .\MICROSOFT$DPM$Acct).
Now the DPM console issue is gone (event ID 917), but a new issue has occurred: the scheduled jobs do not run, although I can run all backups manually without any issue. I am getting the event log errors below.
Log Name: Application
Source: SQLAgent$MSDPM2012
Date: 7/20/2014 4:00:01 AM
Event ID: 208
Task Category: Job Engine
Level: Warning
Keywords: Classic
User: N/A
Computer:
Description:
SQL Server Scheduled Job '7531f5a5-96a9-4f75-97fe-4008ad3c70a8' (0xD873C2CCAF984A4BB6C18484169007A6) - Status: Failed - Invoked on: 2014-07-20 04:00:00 - Message: The job failed. The Job was invoked by Schedule 443 (Schedule 1). The last step to
run was step 1 (Default JobStep).
Description:
Fault bucket , type 0
Event Name: DPMException
Response: Not available
Cab Id: 0
Problem signature:
P1: TriggerJob
P2: 4.2.1205.0
P3: TriggerJob.exe
P4: 4.2.1205.0
P5: System.UnauthorizedAccessException
P6: System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal
P7: 33431035
P8:
P9:
P10:
Log Name: Application
Source: MSDPM
Date: 7/20/2014 4:00:01 AM
Event ID: 976
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer:
Description:
The description for Event ID 976 from source MSDPM cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
The DPM job failed because it could not contact the DPM engine.
Problem Details:
<JobTriggerFailed><__System><ID>9</ID><Seq>0</Seq><TimeCreated>7/20/2014 8:00:01 AM</TimeCreated><Source>TriggerJob.cs</Source><Line>76</Line><HasError>True</HasError></__System><Tags><JobSchedule
/></Tags></JobTriggerFailed>
the message resource is present but the message is not found in the string/message table
Please help me resolve this error.
jacob
Hi,
i would try to reinstall DPM
Backup DB
uninstall DPM
Install DPM same Version like before
restore DPM DB
run dpmsync.exe -sync
finished
Seidl Michael | http://www.techguy.at |
twitter.com/techguyat | facebook.com/techguyat -
IDOC Monitoring issue - job BPM_DATA_COLLECTION* not running.
Hi all,
We are facing an issue with BPM "IDOC Monitoring" (under application monitoring), which we have setup to monitor Inbound and Outbound Idocs in 2 separate R/3 systems.
In one system it works fine, and measured values are returned each time the monitor is set to run according to the specified schedule.
However, for another R/3 system, the monitor has never run, even though the settings are identical in the monitoring setup.
From reading the Interfaces Monitoring Setup guide, I found that this monitor depends on a job called BPM_DATA_COLLECTION* which runs in the monitored system. In the system where monitoring is functioning correctly, I can see this job is completing successfully at the time the monitor is set to run. All I find is completed jobs - no scheduled or released jobs present.
However, for the system where the monitoring is not functioning, I found that this job is not running, but that the job is sitting in scheduled status instead.
When I tested manually running the job in this system, it ran successfully, and in Solution Manager the monitor brought back the measured value, so the monitor only ran successfully when I manually ran the job in the monitored system.
I read notes 1321015 & 1339657 relating to IDOC monitoring. 1321015 appears to be more relevant, yet it does not exactly describe my issue - it mentions the job BPM_DATA_COLLECTION* failing rather than just remaining in scheduled status which is what I see.
Anyone else see this issue before?
On a more general point - the standard BPM Setup guide doesn't really go into much detail on IDOC Monitoring, and makes no mention of what is happening in the background, i.e. the job BPM_DATA_COLLECTION* being created and run as per schedule. This info is found in a separate document "Interface Monitoring Setup Guide".
Is there any single document which describes fully what happens both in the Solution Manager and the Monitored systems when BPM is activated? For example, to describe which monitors require jobs to be run, which monitors require additional setup in monitored system, etc? A document such as this which describes exactly the process flow for each monitor would be very useful in troubleshooting issues going forward.
Thanks,
John
Hello John,
Most probably the user assigned to the corresponding RFC READ connection that connects SolMan with the backend system doesn't have the proper authorization to release a job. That's why the job is only created/scheduled but not released. Verify that the RFC user on the backend has the latest CSMREG profile assigned, according to SAP Note 455356.
You can also check whether the latest ST-PI support package is installed on your backend system, as the ST-PI packages usually contain the latest definition of CSMREG.
Best Regards
Volker -
Scheduled jobs do not run as expected after upgrading to 10.2.0.3 or 4
FYI, we had a ticket open for several days because our scheduled jobs (dbms_scheduler) would no longer run as scheduled after an upgrade to 10.2.0.4 on HP-UX. We couldn't find the solution by searching Metalink, nor did I find it here or via Google; obviously I wasn't searching correctly. There is a note that references a set of steps that appears to have resolved our problem. I am posting it here so that if you encounter the same difficulty, you may come across the note earlier in your troubleshooting rather than later. The full title of the note is 'Scheduled jobs do not run as expected after upgrading to either 10.2.0.3 or 10.2.0.4'; the Doc ID is 731678.1.
Thanks - our ticket should be getting closed out on it (our dba will be updating it), the scheduler has been running reliably since we took the steps in the doc mentioned.
-
I need a mail after a background job is over
hi gurus
can anyone suggest how to get a mail after a background job is over,
for either a report or a BDC job?
If the job does not complete, the errors should come by mail.
How do I solve this issue?
thank you
regards
kals.
This code works for all successful background jobs:
data: list type table of abaplist with header line.
data: htmllines type table of w3html with header line.
data: maildata like sodocchgi1.
data: mailtxt like solisti1 occurs 10 with header line.
data: mailrec like somlrec90 occurs 0 with header line.
start-of-selection.
* Produce a list
do 100 times.
write:/ sy-index, at 30 sy-index, at 50 sy-index.
enddo.
* Save the list
call function 'SAVE_LIST'
tables
listobject = list
exceptions
list_index_invalid = 1
others = 2.
* Convert the list
call function 'WWW_LIST_TO_HTML'
tables
html = htmllines.
* Send mail
maildata-obj_name = 'TEST'.
maildata-obj_descr = 'Test Subject'.
loop at htmllines.
mailtxt = htmllines.
append mailtxt.
endloop.
mailrec-receiver = 'you at yourcompany.com'.
mailrec-rec_type = 'U'.
append mailrec.
call function 'SO_NEW_DOCUMENT_SEND_API1'
exporting
document_data = maildata
document_type = 'HTM'
put_in_outbox = 'X'
tables
object_header = mailtxt
object_content = mailtxt
receivers = mailrec
exceptions
too_many_receivers = 1
document_not_sent = 2
document_type_not_exist = 3
operation_no_authorization = 4
parameter_error = 5
x_error = 6
enqueue_error = 7
others = 8.
if sy-subrc <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
* WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
endif. -
Wondering if anybody out there has resolved the buffer overrun problem with QuickTime on Vista systems. Somebody help. Can't play my digital videos.
I have Quicktime 7.0.4 installed on my new PC with a Intel 640 processor on a ASUS P5LD2-Deluxe.
If I want to watch a video from apple.com/trailers
often but not always the "buffer overrun" error appears.
Every time, I can hear a quiet crackling in the sound
if I play back the video while QT is loading the rest of it.
The crackling always disappears when the loading process comes to an end.
So I've updated everything updatable, of course including the Realtek HD audio driver (on the Intel HD Audio bus).
This made it worse than before.
Now the video can stop in the middle because of the buffer overrun.
ALL OTHER applications are working fine.
For example I can hear music or reports with winamp while downloading big files without any disturbing sound and ugly error-messages.
I think now the Apple developers are asked ! -
Oracle automatic statistics optimizer job is not running after full import
Hi All,
I did a full import into our QA database. The import was successful; however, GATHER_STATS_JOB has not run since Sep 18, 2010, even though it is enabled and scheduled. I queried LAST_ANALYZED to check, and it confirmed the job did not run after Sep 18, 2010.
Please refer below for the output.
OWNER JOB_NAME ENABL STATE START_DATE END_DATE LAST_START_DATE NEXT_RUN_D
SYS GATHER_STATS_JOB TRUE SCHEDULED 18-09-2010 06:00:02
Oracle defined automatic optimizer statistics collection job
=======
SQL> select OWNER,JOB_NAME,STATUS,REQ_START_DATE,
to_char(ACTUAL_START_DATE, 'dd-mm-yyyy HH24:MI:SS') ACTUAL_START_DATE,RUN_DURATION
from dba_scheduler_job_run_details where
job_name='GATHER_STATS_JOB' order by ACTUAL_START_DATE asc; 2 3 4
OWNER JOB_NAME STATUS REQ_START_DATE ACTUAL_START_DATE RUN_DURATION
SYS GATHER_STATS_JOB SUCCEEDED 16-09-2010 22:00:00 +000 00:00:22
SYS GATHER_STATS_JOB SUCCEEDED 17-09-2010 22:00:02 +000 00:00:18
SYS GATHER_STATS_JOB SUCCEEDED 18-09-2010 06:00:02 +000 00:00:26
What could be the reason for GATHER_STATS_JOB not running, although it is set to auto?
SQL> select dbms_stats.get_param('AUTOSTATS_TARGET') from dual;
DBMS_STATS.GET_PARAM('AUTOSTATS_TARGET')
AUTO
Does anybody have this kind of experience? Please share.
Appreciate your responses.
Regards
srh
So basically what you are saying is that if none of the tables have changed, then GATHER_STATS_JOB will not run. But I see tables are updated and still the job is not running. I did query dba_scheduler_jobs, and the job is enabled and scheduled. Please see my previous post for the output.
Am I missing anything here? Should I look at some parameter settings?
"So basically you are saying that if none of the tables have changed, then GATHER_STATS_JOB will not run": GATHER_STATS_JOB will run, and if there is any table with at least a 10 percent change in its data, it will gather statistics on that table. If no table's data has changed by 10 percent or more, it will not gather statistics.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i41282
Hope this helps.
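The 10-percent rule above can be sketched as a simple predicate (Python, illustrative only; the real decision uses the modification counts Oracle tracks in DBA_TAB_MODIFICATIONS):

```python
def is_stale(num_rows, modifications, threshold=0.10):
    """A table is considered stale when inserts + updates + deletes since
    the last statistics gather reach 10% of its row count."""
    if num_rows == 0:
        return modifications > 0
    return modifications / num_rows >= threshold

print(is_stale(1000, 50))   # False: only 5% changed
print(is_stale(1000, 120))  # True: 12% changed
```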
-Anantha -
Dbms_scheduler job does not run on a 2-node RAC when the 1st node fails
Hi,
I want to create a dbms_scheduler job on a 2-node RAC; the job should always run on node1, and if node1 is down it should run on node2. This is Oracle 10gR2 (10.2.0.3 on Windows). In order to do that, I did the following:
-- First Step
Using DBCA - Service Management - created a service (BATCH_SERVICE) with node1 as preferred and node2 as available. This created the following entry in tnsnames.ora on both nodes:
BATCH_SERVICE =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = BATCH_SERVICE)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
)
--- Step 2
-- Created BATCH job classes.
BEGIN
DBMS_SCHEDULER.create_job_class(
job_class_name => 'BATCH_JOB_CLASS',
service => 'BATCH_SERVICE');
END;
-- Step 3 -- created a job using job_class as BATCH_JOB_CLASS
begin
dbms_scheduler.create_job(
job_name => 'oltp_job_test'
,job_type => 'STORED_PROCEDURE'
,job_action => 'schema1.P1'
,start_date => systimestamp at time zone 'US/Central'
,repeat_interval => 'FREQ=DAILY;BYHOUR=11;BYMINUTE=30;'
,job_class => 'BATCH_JOB_CLASS'
,enabled => TRUE
,comments => 'New Job.');
end;
Now when I monitor this job, it runs on node1. Then I started testing failover: I manually shut down the 1st instance. As per my understanding, the job should then run on the 2nd node, but the job is not being picked up.
When I run the following command:
srvctl status service -d db -s BATCH_SERVICE
service BATCH_SERVICE is running on instance node2.
Any help is really appreciated.
It does not show whether the job is running or broken.