Standby MRP0 process - Wait for Log
Hi,
I have a standby Oracle 11.2.0.3 DB on an AIX server.
After configuring Data Guard, the log apply service fails on the standby DB. Following are the results from my standby and primary DBs.
On Primary
select process, status, sequence#, block# from v$managed_standby;
PROCESS STATUS SEQUENCE# BLOCK#
ARCH CLOSING 52 1
ARCH CLOSING 51 1
ARCH WRITING 2 38913
ARCH CLOSING 52 1
LNS WRITING 54 1003
On Standby
select process, status, sequence#, block# from v$managed_standby;
PROCESS STATUS SEQUENCE# BLOCK#
ARCH CONNECTED 0 0
ARCH CONNECTED 0 0
ARCH CONNECTED 0 0
ARCH CONNECTED 0 0
RFS RECEIVING 2 6145
RFS IDLE 54 1025
RFS IDLE 0 0
MRP0 WAIT_FOR_LOG 2 0
On the primary, the Data Guard status shows the output below:
select message from V$DATAGUARD_STATUS order by TIMESTAMP;
ARCH: Completed archiving thread 1 sequence 53 (1093671-1094088)
ARCH: Beginning to archive thread 1 sequence 53 (1093671-1094088)
LNS: Beginning to archive log 3 thread 1 sequence 54
MESSAGE
LNS: Completed archiving log 2 thread 1 sequence 53
On Standby DB
select message from V$DATAGUARD_STATUS order by TIMESTAMP;
MESSAGE
RFS[2]: Assigned to RFS process 659510
RFS[2]: No standby redo logfiles available for thread 1
RFS[3]: Assigned to RFS process 1110268
RFS[3]: No standby redo logfiles available for thread 1
Attempt to start background Managed Standby Recovery process
MRP0: Background Managed Standby Recovery process started
Managed Standby Recovery starting Real Time Apply
Media Recovery Waiting for thread 1 sequence 2 (in transit)
Please let me know what needs to change to start log apply on the physical standby.
Thanks,
Vaishali
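The "RFS[n]: No standby redo logfiles available for thread 1" messages above usually mean no standby redo logs were created on the standby, which blocks real-time apply. A hedged sketch of adding them on the standby follows; the paths, sizes, and group numbers are placeholders, and the size must match the primary's online redo logs exactly, with one more standby group per thread than there are online groups:

```sql
-- Hedged sketch: add standby redo logs on the standby database.
-- Paths, sizes, and group numbers below are placeholders.
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
  GROUP 10 ('/u01/oradata/IHISDR/srl10.log') SIZE 512M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
  GROUP 11 ('/u01/oradata/IHISDR/srl11.log') SIZE 512M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
  GROUP 12 ('/u01/oradata/IHISDR/srl12.log') SIZE 512M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
  GROUP 13 ('/u01/oradata/IHISDR/srl13.log') SIZE 512M;
```

Ideally the same standby redo logs are also created on the primary, so they are already in place after a switchover.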
Hi Shivananda,
Please find the output below.
I can tnsping both databases, and connecting with SQL*Plus succeeds from both sides.
SQL> select severity,error_code,message from v$dataguard_status where dest_id=2;
SEVERITY ERROR_CODE
MESSAGE
Error 1034
PING[ARC2]: Heartbeat failed to connect to standby 'IHISDR'. Error is 1034.
Error 1034
FAL[server, ARC2]: Error 1034 creating remote archivelog file 'IHISDR'
Error 1089
FAL[server, ARC2]: FAL archival, error 1089 closing archivelog file 'IHISDR'
Warning 1089
LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (1089)
Warning 1089
LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
Error 1089
Error 1089 for archive log file 1 to 'IHISDR'
Error 1089
FAL[server, ARC0]: FAL archival, error 1089 closing archivelog file 'IHISDR'
7 rows selected.
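Error 1034 here is ORA-01034 ("ORACLE not available"): the primary reached the standby's listener, but the standby instance was down or not mounted at that moment. A hedged first check from the primary, using the standard destination view for the same dest_id queried above:

```sql
-- Hedged check on the primary: current state of the standby destination.
SELECT dest_id, status, error
FROM   v$archive_dest
WHERE  dest_id = 2;
-- If the standby instance was down, mount it there first:
--   STARTUP MOUNT;
-- then re-enable shipping from the primary, e.g.:
--   ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;
```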
Thanks .
Similar Messages
-
How to reduce "Wait for Log Writer"
Hi,
in a production system using MaxDB 7.6.03.07, I checked the following log activity:
Log Pages Written: 32.039
Wait for Log Writer: 31.530
The docs explain that "Wait for Log Writer" indicates how often it was necessary to wait for a log entry to be written.
What steps should I follow to reduce this?
thanks for any help
Clóvis
Hi,
when the log I/O queue is full, all user tasks that want to insert entries into it have to wait until the queue's entries have been written to the log volume - they are waiting for the log writer.
First, check the size of the LOG_IO_QUEUE parameter and think about increasing its value.
Second, check the write I/O time to the log: use DB-Analyzer and activate time measurement via the x_cons <DBNAME> time enable command.
Then you will get the write I/O times for the log in the DB-Analyzer log files (expert).
You will find more information about MaxDb Logging and Performance Analysis on maxdb.sap.com -> [training material|http://maxdb.sap.com/training] chapter logging and performance analysis.
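The two checks above can be sketched from the command line. The command names are taken from the MaxDB dbmcli/x_cons tooling as I recall them, and the database name, credentials, and queue size are placeholders, so verify against your MaxDB version:

```
# Inspect the current log I/O queue size (in pages), then raise it.
# A database restart is needed for the new value to take effect.
dbmcli -d MYDB -u control,secret param_getvalue LOG_IO_QUEUE
dbmcli -d MYDB -u control,secret param_directput LOG_IO_QUEUE 200

# Activate time measurement so DB-Analyzer records write-I/O times:
x_cons MYDB time enable
```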
Regards, Christiane -
HT201263 in restoring process,"waiting for ipad" is written in itunes after that it get stuck
During the restore process, "waiting for iPad" is shown in iTunes, and after that it gets stuck.
iPad: Unable to update or restore
http://support.apple.com/kb/ht4097
iTunes: Specific update-and-restore error messages and advanced troubleshooting
http://support.apple.com/kb/TS3694
If you can’t update or restore your iOS device
http://support.apple.com/kb/ht1808
iPad Stuck in Recovery Mode after Update
http://www.transfer-iphone-recovery.com/ipad-stuck-in-recovery-mode-after-update .html
iOS: Apple logo with progress bar after updating or restoring from backup
http://support.apple.com/kb/TS3681
Cheers, Tom -
Standby media recovery waiting for inactive thread
Hi,
Please let me know any idea on this scenario. Thanks.
Environment:
Oracle 11.2.0.2
primary: 3 node RAC
standby: 3 node RAC
Problem:
there is a thread 5 (not a registered instance; it does not show in srvctl) that generates archivelogs, and log apply stopped when the thread 5 instance was shut down.
question: somehow an instance is registered in the cluster, but srvctl shows only 3 instances running; there should be 4, but 1 is not running. How can I remove thread 5 so that when someone starts up and then shuts down instance #4, it will not create archivelogs that stop the apply of archivelogs on the standby?
note: this is a perf environment, so other DBAs access it and I am not aware of what they are doing with the cluster.
Looking in the alert log file: it is waiting for thread 5 sequence 510. But that instance is down, so the log is not shipped to the standby database, and this resulted in lag in the other threads.
Sat Aug 03 18:54:47 2013
Media Recovery Log +FLASH/dgjmspl/archivelog/2013_08_01/thread_1_seq_13718.1544.822333555
Media Recovery Log +FLASH/dgjmspl/archivelog/2013_08_01/thread_2_seq_17665.22678.822315375
Media Recovery Log +FLASH/dgjmspl/archivelog/2013_08_01/thread_3_seq_15465.14138.822313997
Media Recovery Waiting for thread 5 sequence 510
THREAD# LAST_SEQ_RECEIVED LAST_SEQ_APPLIED
1 13745 13717
2 17728 17664
3 15527 15464
5 509 509
what I did:
1. primary: copy from ASM to the file system
2. scp from primary to standby
3. standby: copy from the file system into ASM
4. rman target / -> CATALOG ARCHIVELOG <thread 5 sequence 510>
5. then, looking into the alert log file, it performed media recovery:
Sat Aug 03 23:03:13 2013
Media Recovery Log +FLASH/dgjmspl/archivelog/2013_08_01/thread_1_seq_13718.1544.822333555
Media Recovery Log +FLASH/dgjmspl/archivelog/2013_08_01/thread_2_seq_17665.22678.822315375
Media Recovery Log +FLASH/dgjmspl/archivelog/2013_08_01/thread_3_seq_15465.14138.822313997
Media Recovery Waiting for thread 5 sequence 510
Sat Aug 03 23:15:21 2013
Media Recovery Log +FLASH/dgjmspl/archivelog/2013_08_01/thread_5_seq_510
Sat Aug 03 23:15:32 2013
Media Recovery Log +FLASH/dgjmspl/archivelog/2013_08_01/thread_3_seq_15466.10925.822316315
Sat Aug 03 23:17:18 2013
Media Recovery Log +FLASH/dgjmspl/archivelog/2013_08_01/thread_2_seq_17666.853.822333143
Sat Aug 03 23:18:39 2013
Media Recovery Log +FLASH/dgjmspl/archivelog/2013_08_01/thread_3_seq_15467.834.822333553
Sat Aug 03 23:20:54 2013
In the standby, threads 4 and 5 are both UNUSED, and the size is incorrect (not equal to the other redo logs). I want to recreate them but cannot drop the redo logs. I followed Doc ID 740675.1.
Any idea what the missing steps are? Thanks.
ORA-01624: needed for crash recovery of instance UNNAMED_INSTANCE_5 (thread 5)
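The ORA-01624 above means the group is still considered needed for crash recovery of the thread 5 instance. Assuming thread 5 really is unused and its instance is down, a hedged sequence on the primary is to disable the thread first, then drop its groups (13-15 per the v$log listing):

```sql
-- Hedged sketch, run on the primary while thread 5's instance is down:
ALTER DATABASE DISABLE THREAD 5;
-- With the thread disabled, its redo groups can normally be dropped:
ALTER DATABASE DROP LOGFILE GROUP 13;
ALTER DATABASE DROP LOGFILE GROUP 14;
ALTER DATABASE DROP LOGFILE GROUP 15;
```

If a group still reports CURRENT for the disabled thread, it may need ALTER DATABASE CLEAR LOGFILE GROUP n (or CLEAR UNARCHIVED LOGFILE, which discards that redo) before the drop succeeds.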
select group#,thread#,archived,status,bytes from v$log;
primary DB:
GROUP# THREAD# ARC STATUS BYTES
1 1 YES INACTIVE 1073741824
2 1 YES INACTIVE 1073741824
3 2 NO CURRENT 1073741824
4 2 YES INACTIVE 1073741824
5 3 YES INACTIVE 1073741824
6 3 YES INACTIVE 1073741824
7 2 YES INACTIVE 1073741824
8 1 NO CURRENT 1073741824
9 3 NO CURRENT 1073741824
10 4 YES INACTIVE 1073741824
11 4 NO CURRENT 1073741824
GROUP# THREAD# ARC STATUS BYTES
12 4 YES INACTIVE 1073741824
13 5 YES INACTIVE 1073741824
14 5 YES INACTIVE 1073741824
15 5 NO CURRENT 1073741824
standby DB:
GROUP# THREAD# ARC STATUS BYTES
1 1 YES INACTIVE 1073741824
2 1 YES INACTIVE 1073741824
3 2 NO CURRENT 1073741824
4 2 YES INACTIVE 1073741824
5 3 YES INACTIVE 1073741824
6 3 YES INACTIVE 1073741824
7 2 YES INACTIVE 1073741824
8 1 NO CURRENT 1073741824
9 3 NO CURRENT 1073741824
10 4 YES INACTIVE 1073741824
11 4 NO CURRENT 1073741824
GROUP# THREAD# ARC STATUS BYTES
12 4 YES INACTIVE 1073741824
13 5 YES INACTIVE 1073741824
14 5 YES INACTIVE 1073741824
15 5 NO CURRENT 1073741824
-
Sending step in Integration Process waiting for Acknowledgement infinitely
In process I had to send an MATMAS, CLFMAS and CNPMAS. The data for
this IDoc comes in one message from third party system. So, my
Integration Process has receive step (to collect a data), and three
send-steps (for MATMAS, for CLFMAS, for CNPMAS), one by one. The
receive-step catch an inbound message and then (without transforms)
send this message to each of this three send-steps in Asynchronous
mode. The inbound message transforms in Interface Determination to
IDoc. Three steps, one Interface Determination with three conditions,
that looks
like «ProcessStep=send_matmas», «ProcessStep=send_clfmas», «ProcessStep=send_cnpmas», and in this place I set a mapping to transform the inbound
message to IDoc. All send-steps has property Acknowledgement, which set
to Transport value. So in first send-step MATMAS goes to R3, then R3 in
response send ALEAUD IDoc (trans WE05 shows an incoming MATMAS and
outgoing ALEAUD); ALEAUD comes to XI (trans IDX5 shows inbound and
outbound messages), but the ALEAUD didn't transform to an XI Acknowledgement.
And there are no CLFMAS or CNPMAS IDocs, because all processes sleep in
their first send-steps (trans SWWL shows many STARTED processes).
Each send-step waits for the event 'SEND_OK_TRANSPORT'. Moreover, if I use
trans sxmb_moni to monitor this situation and if I press refresh (F5
button) every time, sxmb_moni every time requests a status of
Acknowledgements, XI transforms the ALEAUD to an Ack, the send-step in the
process catches this status, the process wakes up and moves to the next
send-step (which sends a CLFMAS and waits for an Ack). By continuously pressing refresh
(F5) in sxmb_moni all process becomes COMPLETED (trans SWWL), all
ALEAUD transforms to XI Ack and all IDocs goes to R3. But pressing F5
it's not a solution for integration.
Hi Igor,
I don't think your manual refresh changes the status; what changes is the status shown in the GUI. The system will take as long as it needs, regardless of your manual refresh.
Regards
joel -
Oracle 10g Source DB Capture Process : Waiting for dictionary redo?
Hi to all,
Situation: Source DB waiting for dictionary redo first SCN
First SCN = 314353677
Start SCN = 644568684
current SCN =779444676
I don't know how to troubleshoot this.
The previous error was "Paused for flow control".
Now it is reporting that it is waiting for dictionary redo at the first SCN.
How do I troubleshoot this, step by step?
Please, experts, I need help. These are all the files present in the archive folder:
1_2505_705753337.log 1_4218_705753337.log 1_5932_705753337.log 1_764_705753337.log 1_9360_705753337.log
1_2506_705753337.log 1_4219_705753337.log 1_5933_705753337.log 1_7647_705753337.log 1_9361_705753337.log
1_250_705753337.log 1_4220_705753337.log 1_5934_705753337.log 1_7648_705753337.log 1_9362_705753337.log
1_2507_705753337.log 1_4221_705753337.log 1_5935_705753337.log 1_7649_705753337.log 1_9363_705753337.log
1_2508_705753337.log 1_4222_705753337.log 1_5936_705753337.log 1_7650_705753337.log 1_9364_705753337.log
1_2509_705753337.log 1_4223_705753337.log 1_593_705753337.log 1_7651_705753337.log 1_9365_705753337.log
1_2510_705753337.log 1_4224_705753337.log 1_5937_705753337.log 1_7652_705753337.log 1_9366_705753337.log
1_2511_705753337.log 1_4225_705753337.log 1_5938_705753337.log 1_7653_705753337.log 1_936_705753337.log
1_2512_705753337.log 1_4226_705753337.log 1_5939_705753337.log 1_7654_705753337.log 1_9367_705753337.log
1_2513_705753337.log 1_422_705753337.log 1_5940_705753337.log 1_7655_705753337.log 1_9368_705753337.log
1_2514_705753337.log 1_4227_705753337.log 1_5941_705753337.log 1_7656_705753337.log 1_9369_705753337.log
1_2515_705753337.log 1_4228_705753337.log 1_5942_705753337.log 1_765_705753337.log 1_93_705753337.log
1_2516_705753337.log 1_4229_705753337.log 1_5943_705753337.log 1_7657_705753337.log 1_9370_705753337.log
1_251_705753337.log 1_4230_705753337.log 1_5944_705753337.log 1_7658_705753337.log 1_9371_705753337.log
1_2517_705753337.log 1_4231_705753337.log 1_5945_705753337.log 1_7659_705753337.log 1_9372_705753337.log
1_2518_705753337.log 1_4232_705753337.log 1_5946_705753337.log 1_7660_705753337.log 1_9373_705753337.log
1_2519_705753337.log 1_4233_705753337.log 1_594_705753337.log 1_7661_705753337.log 1_9374_705753337.log
1_2520_705753337.log 1_4234_705753337.log 1_5947_705753337.log 1_7662_705753337.log 1_9375_705753337.log
1_2521_705753337.log 1_4235_705753337.log 1_5948_705753337.log 1_7663_705753337.log 1_9376_705753337.log
1_2522_705753337.log 1_4236_705753337.log 1_5949_705753337.log 1_7664_705753337.log 1_937_705753337.log
1_2523_705753337.log 1_423_705753337.log 1_5950_705753337.log 1_7665_705753337.log 1_9377_705753337.log
1_2524_705753337.log 1_4237_705753337.log 1_5951_705753337.log 1_7666_705753337.log 1_9378_705753337.log
1_2525_705753337.log 1_4238_705753337.log 1_5952_705753337.log 1_766_705753337.log 1_9379_705753337.log
1_2526_705753337.log 1_4239_705753337.log 1_5953_705753337.log 1_7667_705753337.log 1_9380_705753337.log
1_252_705753337.log 1_4240_705753337.log 1_5954_705753337.log 1_7668_705753337.log 1_9381_705753337.log
1_2527_705753337.log 1_4241_705753337.log 1_5955_705753337.log 1_7669_705753337.log 1_9382_705753337.log
1_2528_705753337.log 1_4242_705753337.log 1_5956_705753337.log 1_76_705753337.log 1_9383_705753337.log
1_2529_705753337.log 1_4243_705753337.log 1_595_705753337.log 1_7670_705753337.log 1_9384_705753337.log
1_2530_705753337.log 1_4244_705753337.log 1_5957_705753337.log 1_7671_705753337.log 1_9385_705753337.log
1_2531_705753337.log 1_4245_705753337.log 1_5958_705753337.log 1_7672_705753337.log 1_9386_705753337.log
1_2532_705753337.log 1_4246_705753337.log 1_5959_705753337.log 1_7673_705753337.log 1_938_705753337.log
1_2533_705753337.log 1_424_705753337.log 1_5960_705753337.log 1_7674_705753337.log 1_9387_705753337.log
1_2534_705753337.log 1_4247_705753337.log 1_5961_705753337.log 1_7675_705753337.log 1_9388_705753337.log
1_2535_705753337.log 1_4248_705753337.log 1_5962_705753337.log 1_7676_705753337.log 1_9389_705753337.log
1_2536_705753337.log 1_4249_705753337.log 1_5963_705753337.log 1_767_705753337.log 1_9390_705753337.log
1_253_705753337.log 1_4250_705753337.log 1_5964_705753337.log 1_7677_705753337.log 1_9391_705753337.log
1_2537_705753337.log 1_4251_705753337.log 1_5965_705753337.log 1_7678_705753337.log 1_9392_705753337.log
1_2538_705753337.log 1_4252_705753337.log 1_5966_705753337.log 1_7679_705753337.log 1_9393_705753337.log
1_2539_705753337.log 1_4253_705753337.log 1_596_705753337.log 1_7680_705753337.log 1_9394_705753337.log
1_2540_705753337.log 1_4254_705753337.log 1_5967_705753337.log 1_7681_705753337.log 1_9395_705753337.log
1_2541_705753337.log 1_4255_705753337.log 1_5968_705753337.log 1_7682_705753337.log 1_9396_705753337.log
1_2542_705753337.log 1_4256_705753337.log 1_5969_705753337.log 1_7683_705753337.log 1_939_705753337.log
1_2543_705753337.log 1_425_705753337.log 1_59_705753337.log 1_7684_705753337.log 1_9397_705753337.log
1_2544_705753337.log 1_4257_705753337.log 1_5970_705753337.log 1_7685_705753337.log 1_9398_705753337.log
1_2545_705753337.log 1_4258_705753337.log 1_5971_705753337.log 1_7686_705753337.log 1_9399_705753337.log
1_2546_705753337.log 1_4259_705753337.log 1_5972_705753337.log 1_768_705753337.log 1_9400_705753337.log
1_254_705753337.log 1_4260_705753337.log 1_5973_705753337.log 1_7687_705753337.log 1_9401_705753337.log
1_2547_705753337.log 1_4261_705753337.log 1_5974_705753337.log 1_7688_705753337.log 1_9402_705753337.log
1_2548_705753337.log 1_4262_705753337.log 1_5975_705753337.log 1_7689_705753337.log 1_9403_705753337.log
1_2549_705753337.log 1_4263_705753337.log 1_5976_705753337.log 1_7690_705753337.log 1_9404_705753337.log
1_2550_705753337.log 1_4264_705753337.log 1_597_705753337.log 1_7691_705753337.log 1_9405_705753337.log
1_2551_705753337.log 1_4265_705753337.log 1_5977_705753337.log 1_7692_705753337.log 1_9406_705753337.log
1_2552_705753337.log 1_4266_705753337.log 1_5978_705753337.log 1_7693_705753337.log 1_940_705753337.log
1_2553_705753337.log 1_426_705753337.log 1_5979_705753337.log 1_7694_705753337.log 1_9407_705753337.log
1_2554_705753337.log 1_4267_705753337.log 1_5980_705753337.log 1_7695_705753337.log 1_9408_705753337.log
1_2555_705753337.log 1_4268_705753337.log 1_5981_705753337.log 1_7696_705753337.log 1_9409_705753337.log
1_2556_705753337.log 1_4269_705753337.log 1_5982_705753337.log 1_769_705753337.log 1_9410_705753337.log
1_255_705753337.log 1_42_705753337.log 1_5983_705753337.log 1_7697_705753337.log 1_9411_705753337.log
1_2557_705753337.log 1_4270_705753337.log 1_5984_705753337.log 1_7698_705753337.log 1_9412_705753337.log
1_2558_705753337.log 1_4271_705753337.log 1_5985_705753337.log 1_7699_705753337.log 1_9413_705753337.log
1_2559_705753337.log 1_4272_705753337.log 1_5986_705753337.log 1_7700_705753337.log 1_9414_705753337.log
1_2560_705753337.log 1_4273_705753337.log 1_598_705753337.log 1_7701_705753337.log 1_9415_705753337.log
1_2561_705753337.log 1_4274_705753337.log 1_5987_705753337.log 1_7702_705753337.log 1_9416_705753337.log
1_2562_705753337.log 1_4275_705753337.log 1_5988_705753337.log 1_7703_705753337.log 1_941_705753337.log
1_2563_705753337.log 1_4276_705753337.log 1_5989_705753337.log 1_7704_705753337.log 1_9417_705753337.log
1_2564_705753337.log 1_427_705753337.log 1_5990_705753337.log 1_7705_705753337.log 1_9418_705753337.log
1_2565_705753337.log 1_4277_705753337.log 1_5991_705753337.log 1_7_705753337.log 1_9419_705753337.log
1_2566_705753337.log 1_4278_705753337.log 1_5992_705753337.log 1_7706_705753337.log 1_9420_705753337.log
1_256_705753337.log 1_4279_705753337.log 1_5993_705753337.log 1_770_705753337.log 1_9421_705753337.log
1_2567_705753337.log 1_4280_705753337.log 1_5994_705753337.log 1_7707_705753337.log 1_9422_705753337.log
1_2568_705753337.log 1_4281_705753337.log 1_5995_705753337.log 1_7708_705753337.log 1_9423_705753337.log
1_2569_705753337.log 1_4282_705753337.log 1_5996_705753337.log 1_7709_705753337.log 1_9424_705753337.log
1_25_705753337.log 1_4283_705753337.log 1_599_705753337.log 1_7710_705753337.log 1_9425_705753337.log
1_2570_705753337.log 1_4284_705753337.log 1_5997_705753337.log 1_7711_705753337.log 1_9426_705753337.log
1_2571_705753337.log 1_4285_705753337.log 1_5998_705753337.log 1_7712_705753337.log 1_942_705753337.log
1_2572_705753337.log 1_4286_705753337.log 1_5999_705753337.log 1_7713_705753337.log 1_9427_705753337.log
1_2573_705753337.log 1_428_705753337.log 1_6000_705753337.log 1_7714_705753337.log 1_9428_705753337.log
1_2574_705753337.log 1_4287_705753337.log 1_6001_705753337.log 1_7715_705753337.log 1_9429_705753337.log
1_2575_705753337.log 1_4288_705753337.log 1_6002_705753337.log 1_7716_705753337.log 1_9430_705753337.log
1_2576_705753337.log 1_4289_705753337.log 1_6003_705753337.log 1_771_705753337.log 1_9431_705753337.log
1_257_705753337.log 1_4290_705753337.log 1_6004_705753337.log 1_7717_705753337.log 1_9432_705753337.log
1_2577_705753337.log 1_4291_705753337.log 1_6005_705753337.log 1_7718_705753337.log 1_9433_705753337.log
1_2578_705753337.log 1_4292_705753337.log 1_6006_705753337.log 1_7719_705753337.log 1_9434_705753337.log
1_2579_705753337.log 1_4293_705753337.log 1_600_705753337.log 1_7720_705753337.log 1_9435_705753337.log
1_2580_705753337.log 1_4294_705753337.log 1_6007_705753337.log 1_7721_705753337.log 1_9436_705753337.log
1_2581_705753337.log 1_4295_705753337.log 1_6008_705753337.log 1_7722_705753337.log 1_943_705753337.log
1_2582_705753337.log 1_4296_705753337.log 1_6009_705753337.log 1_7723_705753337.log 1_9437_705753337.log
1_2583_705753337.log 1_429_705753337.log 1_6010_705753337.log 1_7724_705753337.log 1_9438_705753337.log
1_2584_705753337.log 1_4297_705753337.log 1_6011_705753337.log 1_7725_705753337.log 1_9439_705753337.log
1_2585_705753337.log 1_4298_705753337.log 1_6012_705753337.log 1_7726_705753337.log 1_9440_705753337.log
1_2586_705753337.log 1_4299_705753337.log 1_6013_705753337.log 1_772_705753337.log 1_9441_705753337.log
1_258_705753337.log 1_4300_705753337.log 1_6014_705753337.log 1_7727_705753337.log 1_9442_705753337.log
1_2587_705753337.log 1_4301_705753337.log 1_6015_705753337.log 1_7728_705753337.log 1_9443_705753337.log
1_2588_705753337.log 1_4302_705753337.log 1_6016_705753337.log 1_7729_705753337.log 1_9444_705753337.log
1_2589_705753337.log 1_4303_705753337.log 1_601_705753337.log 1_7730_705753337.log 1_9445_705753337.log
1_2590_705753337.log 1_4304_705753337.log 1_6017_705753337.log 1_7731_705753337.log 1_9446_705753337.log
1_2591_705753337.log 1_4305_705753337.log 1_6018_705753337.log 1_7732_705753337.log 1_944_705753337.log
1_2592_705753337.log 1_4306_705753337.log 1_6019_705753337.log 1_7733_705753337.log 1_9447_705753337.log
1_2593_705753337.log 1_430_705753337.log 1_6020_705753337.log 1_7734_705753337.log 1_9448_705753337.log
1_2594_705753337.log 1_4307_705753337.log 1_6021_705753337.log 1_7735_705753337.log 1_9449_705753337.log
1_2595_705753337.log 1_4308_705753337.log 1_6022_705753337.log 1_7736_705753337.log 1_9450_705753337.log
1_2596_705753337.log 1_4309_705753337.log 1_6023_705753337.log 1_773_705753337.log 1_9451_705753337.log
1_259_705753337.log 1_4310_705753337.log 1_6024_705753337.log 1_7737_705753337.log 1_9452_705753337.log
1_2597_705753337.log 1_4311_705753337.log 1_6025_705753337.log 1_7738_705753337.log 1_9453_705753337.log
1_2598_705753337.log 1_4312_705753337.log 1_6026_705753337.log 1_7739_705753337.log 1_9454_705753337.log
1_2599_705753337.log 1_4313_705753337.log 1_602_705753337.log 1_7740_705753337.log 1_9455_705753337.log
1_2600_705753337.log 1_4314_705753337.log 1_6027_705753337.log 1_7741_705753337.log 1_9456_705753337.log
1_2601_705753337.log 1_4315_705753337.log 1_6028_705753337.log 1_7742_705753337.log 1_945_705753337.log
1_2602_705753337.log 1_4316_705753337.log 1_6029_705753337.log 1_7743_705753337.log 1_9457_705753337.log
1_2603_705753337.log 1_431_705753337.log 1_6030_705753337.log 1_7744_705753337.log 1_9458_705753337.log
1_2604_705753337.log 1_4317_705753337.log 1_6031_705753337.log 1_7745_705753337.log 1_9459_705753337.log
1_2605_705753337.log 1_4318_705753337.log 1_6032_705753337.log 1_7746_705753337.log 1_9460_705753337.log
1_2606_705753337.log 1_4319_705753337.log 1_6033_705753337.log 1_774_705753337.log 1_9461_705753337.log
1_260_705753337.log 1_4320_705753337.log 1_6034_705753337.log 1_7747_705753337.log 1_9462_705753337.log
1_2607_705753337.log 1_4321_705753337.log 1_6035_705753337.log 1_7748_705753337.log 1_9463_705753337.log
1_2608_705753337.log 1_4322_705753337.log 1_6036_705753337.log 1_7749_705753337.log 1_9464_705753337.log
1_2609_705753337.log 1_4323_705753337.log 1_603_705753337.log 1_7750_705753337.log 1_9465_705753337.log
1_2610_705753337.log 1_4324_705753337.log 1_6037_705753337.log 1_7751_705753337.log 1_9466_705753337.log
1_2611_705753337.log 1_4325_705753337.log 1_6038_705753337.log 1_7752_705753337.log 1_946_705753337.log
1_2612_705753337.log 1_4326_705753337.log 1_6039_705753337.log 1_7753_705753337.log 1_9467_705753337.log
1_2613_705753337.log 1_432_705753337.log 1_6040_705753337.log 1_7754_705753337.log 1_9468_705753337.log
1_2614_705753337.log 1_4327_705753337.log 1_6041_705753337.log 1_7755_705753337.log 1_9469_705753337.log
1_2615_705753337.log 1_4328_705753337.log 1_6042_705753337.log 1_7756_705753337.log 1_94_705753337.log
1_2616_705753337.log 1_4329_705753337.log 1_6043_705753337.log 1_775_705753337.log 1_9470_705753337.log
1_261_705753337.log 1_4330_705753337.log 1_6044_705753337.log 1_7757_705753337.log 1_9471_705753337.log
1_2617_705753337.log 1_4331_705753337.log 1_6045_705753337.log 1_7758_705753337.log 1_9472_705753337.log
1_2618_705753337.log 1_4332_705753337.log 1_6046_705753337.log 1_7759_705753337.log 1_9473_705753337.log
1_2619_705753337.log 1_4333_705753337.log 1_604_705753337.log 1_7760_705753337.log 1_9474_705753337.log
1_2620_705753337.log 1_4334_705753337.log 1_6047_705753337.log 1_7761_705753337.log 1_9475_705753337.log
1_2621_705753337.log 1_4335_705753337.log 1_6048_705753337.log 1_7762_705753337.log 1_9476_705753337.log
1_2622_705753337.log 1_4336_705753337.log 1_6049_705753337.log 1_7763_705753337.log 1_947_705753337.log
1_2623_705753337.log 1_433_705753337.log 1_6050_705753337.log 1_7764_705753337.log 1_9477_705753337.log
1_2624_705753337.log 1_4337_705753337.log 1_6051_705753337.log 1_7765_705753337.log 1_9478_705753337.log
1_2625_705753337.log 1_4338_705753337.log 1_6052_705753337.log 1_7766_705753337.log 1_9479_705753337.log
1_2626_705753337.log 1_4339_705753337.log 1_6053_705753337.log 1_776_705753337.log 1_9480_705753337.log
1_262_705753337.log 1_4340_705753337.log 1_6054_705753337.log 1_7767_705753337.log 1_9481_705753337.log
1_2627_705753337.log 1_4341_705753337.log 1_6055_705753337.log 1_7768_705753337.log 1_9482_705753337.log
1_2628_705753337.log 1_4342_705753337.log 1_6056_705753337.log 1_7769_705753337.log 1_9483_705753337.log
1_2629_705753337.log 1_4343_705753337.log 1_605_705753337.log 1_77_705753337.log 1_9484_705753337.log
1_2630_705753337.log 1_4344_705753337.log 1_6057_705753337.log 1_7770_705753337.log 1_9485_705753337.log
1_2631_705753337.log 1_4345_705753337.log 1_6058_705753337.log 1_7771_705753337.log 1_9486_705753337.log
1_2632_705753337.log 1_4346_705753337.log 1_6059_705753337.log 1_7772_705753337.log 1_948_705753337.log
1_2633_705753337.log 1_434_705753337.log 1_6060_705753337.log 1_7773_705753337.log 1_9487_705753337.log
1_2634_705753337.log 1_4347_705753337.log 1_6061_705753337.log 1_7774_705753337.log 1_9488_705753337.log
1_2635_705753337.log 1_4348_705753337.log 1_6062_705753337.log 1_7775_705753337.log 1_9489_705753337.log
1_2636_705753337.log 1_4349_705753337.log 1_6063_705753337.log 1_7776_705753337.log 1_9490_705753337.log
1_263_705753337.log 1_4350_705753337.log 1_6064_705753337.log 1_777_705753337.log 1_9491_705753337.log
1_2637_705753337.log 1_4351_705753337.log 1_6065_705753337.log 1_7777_705753337.log 1_9492_705753337.log
1_2638_705753337.log 1_4352_705753337.log 1_6066_705753337.log 1_7778_705753337.log 1_9493_705753337.log
1_2639_705753337.log 1_4353_705753337.log 1_606_705753337.log 1_7779_705753337.log 1_9494_705753337.log
1_2640_705753337.log 1_4354_705753337.log 1_6067_705753337.log 1_7780_705753337.log 1_9495_705753337.log
1_2641_705753337.log 1_4355_705753337.log 1_6068_705753337.log 1_7781_705753337.log 1_9496_705753337.log
1_2642_705753337.log 1_4356_705753337.log 1_6069_705753337.log 1_7782_705753337.log 1_949_705753337.log
1_2643_705753337.log 1_435_705753337.log 1_60_705753337.log 1_7783_705753337.log 1_9497_705753337.log
1_2644_705753337.log 1_4357_705753337.log 1_6070_705753337.log 1_7784_705753337.log 1_9498_705753337.log
1_2645_705753337.log 1_4358_705753337.log 1_6071_705753337.log 1_7785_705753337.log 1_9499_705753337.log
1_2646_705753337.log 1_4359_705753337.log 1_6072_705753337.log 1_7786_705753337.log 1_9500_705753337.log
1_264_705753337.log 1_4360_705753337.log 1_6073_705753337.log 1_778_705753337.log 1_9501_705753337.log
1_2647_705753337.log 1_4361_705753337.log 1_6074_705753337.log 1_7787_705753337.log 1_9502_705753337.log
1_2648_705753337.log 1_4362_705753337.log 1_6075_705753337.log 1_7788_705753337.log 1_9503_705753337.log
1_2649_705753337.log 1_4363_705753337.log 1_6076_705753337.log 1_7789_705753337.log 1_9504_705753337.log
1_2650_705753337.log 1_4364_705753337.log 1_607_705753337.log 1_7790_705753337.log 1_9505_705753337.log
1_2651_705753337.log 1_4365_705753337.log 1_6077_705753337.log 1_7791_705753337.log 1_9506_705753337.log
1_2652_705753337.log 1_4366_705753337.log 1_6078_705753337.log 1_7792_705753337.log 1_950_705753337.log
1_2653_705753337.log 1_436_705753337.log 1_6079_705753337.log 1_7793_705753337.log 1_9507_705753337.log
1_2654_705753337.log 1_4367_705753337.log 1_6080_705753337.log 1_7794_705753337.log 1_9508_705753337.log
1_2655_705753337.log 1_4368_705753337.log 1_6081_705753337.log 1_7795_705753337.log 1_9509_705753337.log
1_2656_705753337.log 1_4369_705753337.log 1_6082_705753337.log 1_7796_705753337.log 1_9510_705753337.log
1_265_705753337.log 1_43_705753337.log 1_6083_705753337.log 1_779_705753337.log 1_9511_705753337.log
1_2657_705753337.log 1_4370_705753337.log 1_6084_705753337.log 1_7797_705753337.log 1_9512_705753337.log
1_2658_705753337.log 1_4371_705753337.log 1_6085_705753337.log 1_7798_705753337.log 1_9513_705753337.log
1_2659_705753337.log 1_4372_705753337.log 1_6086_705753337.log 1_7799_705753337.log 1_9514_705753337.log
1_2660_705753337.log 1_4373_705753337.log 1_608_705753337.log 1_7800_705753337.log 1_9515_705753337.log
1_2661_705753337.log 1_4374_705753337.log 1_6087_705753337.log 1_7801_705753337.log 1_9516_705753337.log
1_2662_705753337.log 1_4375_705753337.log 1_6088_705753337.log 1_7802_705753337.log 1_951_705753337.log
1_2663_705753337.log 1_4376_705753337.log 1_6089_705753337.log 1_7803_705753337.log 1_9517_705753337.log
1_2664_705753337.log 1_437_705753337.log 1_6090_705753337.log 1_7804_705753337.log 1_9518_705753337.log
1_2665_705753337.log 1_4377_705753337.log 1_6091_705753337.log 1_7805_705753337.log 1_952_705753337.log
1_2666_705753337.log 1_4378_705753337.log 1_6092_705753337.log 1_7806_705753337.log 1_953_705753337.log
1_266_705753337.log 1_4379_705753337.log 1_6093_705753337.log 1_780_705753337.log 1_954_705753337.log
1_2667_705753337.log 1_4380_705753337.log 1_6094_705753337.log 1_7807_705753337.log 1_955_705753337.log
1_2668_705753337.log 1_4381_705753337.log 1_6095_705753337.log 1_7808_705753337.log 1_956_705753337.log
1_2669_705753337.log 1_4382_705753337.log 1_6096_705753337.log 1_7809_705753337.log 1_95_705753337.log
1_26_705753337.log 1_4383_705753337.log 1_609_705753337.log 1_7810_705753337.log 1_957_705753337.log
[directory listing of archived redo logs named 1_&lt;sequence&gt;_705753337.log, sequences roughly 2 through 7855, truncated]
Edited by: user13640691 on Feb 23, 2011 2:35 AM -
Can a BPEL process wait for a second web service call
Hi,
My BPEL process is an asynchronous process, so the first web service call kicks off an instance of the process.
What I want is to put a "receive" shape somewhere down in the process to pause it and wait for another web service call to come in; once the "receive" shape gets the second call, the process continues on.
Is that a valid thing to do in BPEL?
I don't seem to be able to get it working.
I could change the WSDL for the BPEL process to publish two operations, the default "initiate" and another one called "continue"; they both accept the same type of request message.
But when I test it, the BPEL process just can't accept the message at the second operation; it always creates a new instance to handle the web service call, even when the call targets the "continue" operation.
Any ideas?
Thanks in advance!
I was just about to give up after the last post, but then the "pick" shape caught my eye somehow ("thank God", that's all I can say),
and it did the trick, the "pick" shape can wait for an incoming call from a partner link.
So what I can achieve is this:
the first web service call sends something like
<Root><CorrelationId>1</CorrelationId><Content>first name</Content></Root>
on the "initiate" operation, just like calling an "initiate" method in java code
the BPEL instance gets initiated, reaches the "pick" shape and stops
then a second web service call comes in as
<Root><CorrelationId>1</CorrelationId><Content>last name</Content></Root>
on the "continue" operation, again just like calling a "continue" method
it works! -
IPlanet 4.1 Web Server - ns-cron process hangs for log rotation
Does anybody know if there is a known issue with iPlanet 4.1 Web Server where the ns-cron log rotation process hangs up?
Basically it won't rotate any logs, and using either the "Stop" or "Restart" buttons in the console won't shut it down, so you have to kill it manually, then start it again.
It seems like it will run for a couple days fine, then it will hang up.
Thanks.
Nope, I've not seen this before. But the 4.1 version is really old and no longer supported; you should consider upgrading to a supported version.
-
Should clients of long-running processes wait for a response?
I am a bit confused by the worklist tutorial where the client waits around for
a response from the worklist process that blocks on human input. This may be okay
for the sample application, but is it a realistic scenario? Have you designed
any practical worklist processes this way? I would have thought that the client
of a worklist process would simply trigger the process and go away. Eventually
when the human involved in the process would take some action, the process would
then, for example, send a message to some application. What do you think? I would
be interested in your opinions and perhaps some real examples of how you are using
worklists.
Thanks.
P.S. Is it even possible for a worklist client to trigger the process and go away?
Would the process die because the client has gone out of scope?
I agree with your comments. In being honest as well, we unfortunately do have SRs that get behind or missed. It's very frustrating on our side as well when these come up. We have implemented a number of technologies, and also a human touch, to help us catch these, and we continue to focus on ways to overcome these delays. I know it's frustrating for you as well.
Our customer satisfaction is very high and actually continues to increase each quarter. Our data shows our three key satisfaction areas are: speed of response, speed of resolution, and follow-up.
Chat support helps us in response time, but sometimes may hinder us in resolution times. Especially with more complex issues. Our goals were not to totally replace the phones with chats, but to use chat to allow us to respond more quickly, which it has done. We're now focusing our folks to realize it's still ok to call you when the Chat is hindering resolution and it would be quicker to talk live about the issue.
We resolve over half of the SRs on the first day. We continue to focus on speeding up the resolution of those that go beyond the first day. A couple of things we've implemented: we call you back at least every 2 days when the SR is in Novell's court and at least every 5 days when it's in your court. We've also implemented timed escalation for SRs and are now looking at escalating when progress is not being made after a certain number of days (rather than just time open). The goal is to get you into the hands of the right person ASAP to speed up resolution time.
We're also having Managers call you on more and more SRs before being closed to double check your satisfaction with the SR.
Our dream is to never have you need to ask us for attention on a SR. We continue to make progress on it.
I look forward to your thoughts and other ideas you may have to help us improve,
-Todd
Todd Abney
Technical Support Director
Novell -
Logical Standby: WAITING FOR DICTIONARY LOGS status.
Hi,
I have just configured a logical standby from a physical standby, following all the steps in section 4.2.1 of the Oracle Data Guard Concepts and Administration 10g Release 2 (10.2) documentation. But now my logical standby has stayed in WAITING FOR DICTIONARY LOGS for more than 3 days; all archives from the primary DB have been replicated and registered, and I don't understand what's wrong.
SQL> SELECT * FROM V$LOGSTDBY_STATE;
PRIMARY_DBID SESSION_ID REALTIME_APPLY STATE
   144528764          1 Y              WAITING FOR DICTIONARY LOGS
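A hedged SQL sketch of checks commonly suggested for this state: SQL Apply cannot leave WAITING FOR DICTIONARY LOGS until a shipped log contains the LogMiner dictionary, which must have been written into the redo stream on the primary.

```sql
-- On the logical standby: has any registered log captured the LogMiner
-- dictionary? Look for DICT_BEGIN = 'YES'.
SELECT sequence#, dict_begin, dict_end
  FROM dba_logstdby_log
 ORDER BY sequence#;

-- If no registered log contains the dictionary, build it on the PRIMARY
-- so it is written into the redo stream and shipped across:
EXECUTE DBMS_LOGSTDBY.BUILD;
```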
Fri Apr 8 18:05:47 2011
RFS LogMiner: Registered logfile [/archive/dbp/581440892_1_0000157573.arc] to LogMiner session id [1]
Fri Apr 8 18:10:55 2011
RFS LogMiner: Client enabled and ready for notification
Fri Apr 8 18:10:55 2011
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: Successfully opened standby log 4: '/redo2a/dbp/redo4a_stb.log'
Fri Apr 8 18:10:58 2011
RFS LogMiner: Registered logfile [/archive/dbp/581440892_1_0000157574.arc] to LogMiner session id [1]
Fri Apr 8 18:15:54 2011
RFS LogMiner: Client enabled and ready for notification
Fri Apr 8 18:15:54 2011
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: Successfully opened standby log 3: '/redo1a/dbp/redo3a_stb.log'
Fri Apr 8 18:15:57 2011
RFS LogMiner: Registered logfile [/archive/dbp/581440892_1_0000157575.arc] to LogMiner session id [1]
Thanks in advance.
My Oracle version is: 10.2.0.4
My Platform is: AIX 6.1
Nataly.
On the standby alert log, no error messages are found:
RFS LogMiner: Client enabled and ready for notification
Mon Apr 11 18:09:58 2011
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: Successfully opened standby log 4: '/redo2a/mefsf/redo4a_stb.log'
Mon Apr 11 18:10:01 2011
RFS LogMiner: Registered logfile [/archive/mefsf/581440892_1_0000157782.arc] to LogMiner session id [1]
Mon Apr 11 18:23:16 2011
RFS LogMiner: Client enabled and ready for notification
Mon Apr 11 18:23:16 2011
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: Successfully opened standby log 3: '/redo1a/mefsf/redo3a_stb.log'
Mon Apr 11 18:23:18 2011
RFS LogMiner: Registered logfile [/archive/mefsf/581440892_1_0000157783.arc] to LogMiner session id [1]
(continues)
On the primary alert log, there is a recurring message:
ORACLE Instance bdprod - Archival Error. Archiver continuing.
Mon Apr 11 18:22:57 2011
Errors in file /oracle/app/oracle/admin/bdprod/bdump/bdprod_arc4_2818414.trc:
ORA-00308: cannot open archived log '/archive/bdprod/581440892_1_0000157545.arc'
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 3
Mon Apr 11 18:22:57 2011
FAL[server, ARC4]: FAL archive failed, see trace file.
Mon Apr 11 18:22:57 2011
Errors in file /oracle/app/oracle/admin/bdprod/bdump/bdprod_arc4_2818414.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Mon Apr 11 18:22:57 2011
ORACLE Instance bdprod - Archival Error. Archiver continuing.
Mon Apr 11 18:22:57 2011
Errors in file /oracle/app/oracle/admin/bdprod/bdump/bdprod_arc1_7078038.trc:
ORA-00308: cannot open archived log '/archive/bdprod/581440892_1_0000157546.arc'
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 3
Mon Apr 11 18:22:57 2011
FAL[server, ARC1]: FAL archive failed, see trace file.
Mon Apr 11 18:22:57 2011
Errors in file /oracle/app/oracle/admin/bdprod/bdump/bdprod_arc1_7078038.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Mon Apr 11 18:22:57 2011
ORACLE Instance bdprod - Archival Error. Archiver continuing. -
Hi,
I have a SQL 2012 instance that hosts 3 databases that are publishers in (push) transactional replication to subscribers in other domains and are also primary in an Availability Group to other instances in the same domain. The AG is healthy (synchronized), and replication is OK for 2 of the databases, but the third continuously shows the Publisher-to-Distributor history status as 'Replicated transactions are waiting for next Log backup or for mirroring to catch up.'
I have weekly full, nightly differential, and hourly log backups. Those are running, and I've tested restoring the publisher DB from them into a test instance. The option for sync_with_backup is 0. I generated a new snapshot, with no change (I haven't reinitialized; that will be the last possible option, as this is production). When the log backups run, the status briefly changes to 'Approximately (x) log records have been scanned in pass # ..., 0 of which were marked for replication' and some data does replicate, but then the status immediately changes back to the message about waiting for a log backup. We are only replicating tables (no views, SPs, etc.). All subscribers and AG secondaries are SQL 2012. My DB is less than 200 GB of data.
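As a side note, a hedged T-SQL sketch for double-checking the sync_with_backup setting mentioned above (the database name is a placeholder, not from this environment):

```sql
-- 1 means replicated transactions wait for a log backup; 0 means they do not.
SELECT DATABASEPROPERTYEX(N'MyPublisherDB', 'IsSyncWithBackup') AS is_sync_with_backup;

-- If it were ever set, it could be cleared in the publisher database with:
-- EXEC sys.sp_replicationdboption @dbname  = N'MyPublisherDB',
--                                 @optname = N'sync with backup',
--                                 @value   = N'false';
```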
The jobs are continuously running and data updates are occurring on the primary. The distribution database is on the same instance as the publishers. Anyone have any ideas what could be happening?
Thanks greatly!
Update - I resolved this.
We had added a new asynchronous instance to the AG, but hadn't added the databases on the new node to the AG yet. Conceptually, it seems that adding the instance must have marked the logs and interfered with transactional replication on the primary. Once I added the databases to the AG on the secondary, the replication status cleared on the primary and has remained cleared. It is odd, though, that it only impacted one of the databases in the AG.
cheers. -
"Waiting for iPhone... " alwayz stuck at this stage...Already tried using DFU mode and many times...but always stuck at this stage. However in iphone APPLE LOGO and Status bar is coming but not processing ahead! Even my device is not jailbroken.
iTunes recognize this device and giving option for Restore. i go for it and in the continue process WAITING FOR iPHONE /// at this stage it stuck.. and keep serching like for something and in the iPhone APPLE LOGO and Status bar showing..but not processing further.Shaishel,
Did you resolve this? I am having the same problem! Let me know if you have any suggestions.
Thanks -
Waiting for Gap Email Notification
Hi all,
11.2.0.1
Aix 6.1
Is there an email system in 11g that will notify me if our standby DB is "waiting for gap"?
What other status values need to be watched for?
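There is no built-in e-mailer in the database itself; people usually schedule a script (cron, OEM notification rules, etc.) around queries like the following hedged sketch:

```sql
-- Any rows returned here indicate a redo gap on the standby:
SELECT thread#, low_sequence#, high_sequence#
  FROM v$archive_gap;

-- Transport and apply lag on an 11g standby:
SELECT name, value
  FROM v$dataguard_stats
 WHERE name IN ('transport lag', 'apply lag');
```

A wrapper that mails when either query returns unexpected rows covers the "waiting for gap" case and lag growth in one pass.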
Thanks,
zxy
Hi Mse,
We have 2 database on 1 server named PROD1, PROD2.
At PROD1, I got this:
DT MBPS_REQ_A_DAY MIN_MBPS_REQ_AN_HOUR MAX_MBPS_REQ_AN_HOUR AVG_MBPS_REQ_AN_HOUR
15-SEP-13 28.4012215 .640219932 2.57893228 2.02865868
16-SEP-13 68.943453 .147678891 20.4264564 3.44717265
17-SEP-13 69.4071543 .639978837 28.1937717 3.47035771
18-SEP-13 54.1598321 .614150599 17.484673 2.7079916
19-SEP-13 55.7515449 .370225948 15.4680441 2.78757725
20-SEP-13 25.9854426 .135556096 17.5167623 3.71220608
6 rows selected.
At PROD2, I got this:
DT MBPS_REQ_A_DAY MIN_MBPS_REQ_AN_HOUR MAX_MBPS_REQ_AN_HOUR AVG_MBPS_REQ_AN_HOUR
15-SEP-13 28.4012215 .640219932 2.57893228 2.02865868
16-SEP-13 68.943453 .147678891 20.4264564 3.44717265
17-SEP-13 69.4071543 .639978837 28.1937717 3.47035771
18-SEP-13 54.1598321 .614150599 17.484673 2.7079916
19-SEP-13 55.7515449 .370225948 15.4680441 2.78757725
20-SEP-13 25.9854426 .135556096 17.5167623 3.71220608
6 rows selected.
So what Mbps plan should we get from our network service provider to accommodate our Data Guard standby sync process?
Also can you teach me how to format the display above?
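On the formatting question: SQL*Plus COLUMN commands tidy up that kind of output. A hedged sketch (the headings are my own choice, and the column names are taken from your headings):

```sql
COLUMN dt                   FORMAT A10      HEADING 'DATE'
COLUMN mbps_req_a_day       FORMAT 9990.99  HEADING 'MBPS/DAY'
COLUMN min_mbps_req_an_hour FORMAT 990.999  HEADING 'MIN/HOUR'
COLUMN max_mbps_req_an_hour FORMAT 990.999  HEADING 'MAX/HOUR'
COLUMN avg_mbps_req_an_hour FORMAT 990.999  HEADING 'AVG/HOUR'
SET LINESIZE 120 PAGESIZE 50
```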
Thanks a lot,
zxy -
GUI Login to System "Waiting for Response"
Dear all,
I've installed a new SAP system with ABAP and Java.
The installation was successful, but when I try to log in to the system,
the GUI just shows "Waiting for Response". That's all.
Can anyone help me?
The dev_disp trace shows:
trc file: "dev_disp", trc level: 1, release: "700"
sysno 01
sid CCC
systemid 560 (PC with Windows NT)
relno 7000
patchlevel 0
patchno 111
intno 20050900
make: multithreaded, Unicode, optimized
pid 3732
Sat Aug 16 09:45:36 2008
kernel runs with dp version 229000(ext=109000) (@(#) DPLIB-INT-VERSION-229000-UC)
length of sys_adm_ext is 576 bytes
SWITCH TRC-HIDE on ***
***LOG Q00=> DpSapEnvInit, DPStart (01 3732) [dpxxdisp.c 1239]
shared lib "dw_xml.dll" version 111 successfully loaded
shared lib "dw_xtc.dll" version 111 successfully loaded
shared lib "dw_stl.dll" version 111 successfully loaded
shared lib "dw_gui.dll" version 111 successfully loaded
shared lib "dw_mdm.dll" version 111 successfully loaded
rdisp/softcancel_sequence : -> 0,5,-1
use internal message server connection to port 3901
Sat Aug 16 09:45:41 2008
WARNING => DpNetCheck: NiAddrToHost(1.0.0.0) took 4 seconds
***LOG GZZ=> 1 possible network problems detected - check tracefile and adjust the DNS settings [dpxxtool2.c 5361]
MtxInit: 30000 0 0
DpSysAdmExtInit: ABAP is active
DpSysAdmExtInit: VMC (JAVA VM in WP) is not active
DpIPCInit2: start server >solman_CCC_01 <
DpShMCreate: sizeof(wp_adm) 18672 (1436)
DpShMCreate: sizeof(tm_adm) 4232256 (21056)
DpShMCreate: sizeof(wp_ca_adm) 24000 (80)
DpShMCreate: sizeof(appc_ca_adm) 8000 (80)
DpCommTableSize: max/headSize/ftSize/tableSize=500/8/528056/528064
DpShMCreate: sizeof(comm_adm) 528064 (1048)
DpSlockTableSize: max/headSize/ftSize/fiSize/tableSize=0/0/0/0/0
DpShMCreate: sizeof(slock_adm) 0 (96)
DpFileTableSize: max/headSize/ftSize/tableSize=0/0/0/0
DpShMCreate: sizeof(file_adm) 0 (72)
DpShMCreate: sizeof(vmc_adm) 0 (1536)
DpShMCreate: sizeof(wall_adm) (38456/34360/64/184)
DpShMCreate: sizeof(gw_adm) 48
DpShMCreate: SHM_DP_ADM_KEY (addr: 06420040, size: 4892312)
DpShMCreate: allocated sys_adm at 06420040
DpShMCreate: allocated wp_adm at 06422090
DpShMCreate: allocated tm_adm_list at 06426980
DpShMCreate: allocated tm_adm at 064269B0
DpShMCreate: allocated wp_ca_adm at 0682FDF0
DpShMCreate: allocated appc_ca_adm at 06835BB0
DpShMCreate: allocated comm_adm at 06837AF0
DpShMCreate: system runs without slock table
DpShMCreate: system runs without file table
DpShMCreate: allocated vmc_adm_list at 068B89B0
DpShMCreate: allocated gw_adm at 068B89F0
DpShMCreate: system runs without vmc_adm
DpShMCreate: allocated ca_info at 068B8A20
DpShMCreate: allocated wall_adm at 068B8A28
MBUF state OFF
DpCommInitTable: init table for 500 entries
EmInit: MmSetImplementation( 2 ).
MM global diagnostic options set: 0
<ES> client 0 initializing ....
<ES> InitFreeList
<ES> block size is 1024 kByte.
Using implementation view
<EsNT> Using memory model view.
<EsNT> Memory Reset disabled as NT default
<ES> 511 blocks reserved for free list.
ES initialized.
J2EE server info
start = TRUE
state = STARTED
pid = 4056
argv[0] = C:\usr\sap\CCC\DVEBMGS01\exe\jcontrol.EXE
argv[1] = C:\usr\sap\CCC\DVEBMGS01\exe\jcontrol.EXE
argv[2] = pf=C:\usr\sap\CCC\SYS\profile\CCC_DVEBMGS01_solman
argv[3] = -DSAPSTART=1
argv[4] = -DCONNECT_PORT=1127
argv[5] = -DSAPSYSTEM=01
argv[6] = -DSAPSYSTEMNAME=CCC
argv[7] = -DSAPMYNAME=solman_CCC_01
argv[8] = -DSAPPROFILE=C:\usr\sap\CCC\SYS\profile\CCC_DVEBMGS01_solman
argv[9] = -DFRFC_FALLBACK=ON
argv[10] = -DFRFC_FALLBACK_HOST=localhost
start_lazy = 0
start_control = SAP J2EE startup framework
Sat Aug 16 09:45:42 2008
DpJ2eeStart: j2ee state = STARTED
Sat Aug 16 09:45:44 2008
rdisp/http_min_wait_dia_wp : 1 -> 1
***LOG CPS=> DpLoopInit, ICU ( 3.0 3.0 4.0.1) [dpxxdisp.c 1629]
***LOG Q0K=> DpMsAttach, mscon ( solman) [dpxxdisp.c 11753]
DpStartStopMsg: send start message (myname is >solman_CCC_01 <)
DpStartStopMsg: start msg sent
CCMS: AlInitGlobals : alert/use_sema_lock = TRUE.
CCMS: Initalizing shared memory of size 60000000 for monitoring segment.
CCMS: start to initalize 3.X shared alert area (first segment).
DpMsgAdmin: Set release to 7000, patchlevel 0
MBUF state PREPARED
MBUF component UP
DpMBufHwIdSet: set Hardware-ID
***LOG Q1C=> DpMBufHwIdSet [dpxxmbuf.c 1050]
DpMsgAdmin: Set patchno for this platform to 111
Release check o.K.
Sat Aug 16 09:45:46 2008
DpJ2eeLogin: j2ee state = CONNECTED
Sat Aug 16 09:46:08 2008
MBUF state ACTIVE
DpModState: change server state from STARTING to ACTIVE
Sat Aug 16 09:46:16 2008
WARNING => DpRqServiceQueue: timeout of HIGH PRIO msg, return DP_CANT_HANDLE_REQ
WARNING => DpRqServiceQueue: timeout of HIGH PRIO msg, return DP_CANT_HANDLE_REQ
Sat Aug 16 09:46:53 2008
WARNING => DpRqServiceQueue: timeout of HIGH PRIO msg, return DP_CANT_HANDLE_REQ
Sat Aug 16 09:47:13 2008
SoftCancel request for T14 U15 M0 received from IC_MAN
SoftCancel request for T13 U14 M0 received from IC_MAN
Sat Aug 16 09:52:24 2008
WARNING => DpEnvCheck: no answer from msg server since 20 secs, but dp_ms_keepalive_timeout(300 secs) not reached [dpxxdisp.c 7224]
Sat Aug 16 09:52:44 2008
WARNING => DpEnvCheck: no answer from msg server since 40 secs, but dp_ms_keepalive_timeout(300 secs) not reached [dpxxdisp.c 7224]
Sat Aug 16 09:53:04 2008
WARNING => DpEnvCheck: no answer from msg server since 60 secs, but dp_ms_keepalive_timeout(300 secs) not reached [dpxxdisp.c 7224]
Sat Aug 16 09:53:24 2008
WARNING => DpEnvCheck: no answer from msg server since 80 secs, but dp_ms_keepalive_timeout(300 secs) not reached [dpxxdisp.c 7224]
Sat Aug 16 09:53:44 2008
WARNING => DpEnvCheck: no answer from msg server since 100 secs, but dp_ms_keepalive_timeout(300 secs) not reached [dpxxdisp.c 7224]
Sat Aug 16 09:54:04 2008
WARNING => DpEnvCheck: no answer from msg server since 120 secs, but dp_ms_keepalive_timeout(300 secs) not reached [dpxxdisp.c 7224]
Sat Aug 16 09:54:24 2008
WARNING => DpEnvCheck: no answer from msg server since 140 secs, but dp_ms_keepalive_timeout(300 secs) not reached [dpxxdisp.c 7224]
Sat Aug 16 09:59:46 2008
J2EE server info
start = TRUE
state = ACTIVE
pid = 4056
http = 50100
https = 50101
load balance = 1
start_lazy = 0
start_control = SAP J2EE startup framework
Mon Aug 18 08:24:32 2008
DpSigQuit: caught signal 3
Mon Aug 18 08:24:45 2008
DpModState: change server state from ACTIVE to SHUTDOWN
Softshutdown of server...
send softshutdown to gateway
there are still entries in queues: 2
Mon Aug 18 08:25:05 2008
Softshutdown of server...
send softshutdown to gateway
DpHalt: shutdown server >solman_CCC_01 < (normal)
Mon Aug 18 08:25:06 2008
Stop work processes
Mon Aug 18 08:25:07 2008
Stop gateway
Stop icman
Terminate gui connections
wait for end of work processes
waiting for termination of work processes ...
[... "waiting for termination of work processes ..." repeated once per second ...]
Mon Aug 18 08:25:42 2008
wait for end of gateway
wait for end of icman
DpHalt: disconnect j2ee listener
DpHalt: wait for end of j2ee server
waiting for termination of J2EE server ...
[... "waiting for termination of J2EE server ..." repeated once per second ...]
Mon Aug 18 08:26:42 2008
ERROR => DpHalt: J2EE (pid 4056) still alive ... [dpxxdisp.c 10223]
DpIJ2eeShutdown: send SIGINT to SAP J2EE startup framework (pid=4056)
DpIJ2eeShutdown: j2ee state = SHUTDOWN
waiting for termination of J2EE server ...
Mon Aug 18 08:26:43 2008
waiting for termination of J2EE server (2nd chance) ...
[... "waiting for termination of J2EE server (2nd chance) ..." repeated once per second ...]
Mon Aug 18 08:27:42 2008
ERROR => DpHalt: J2EE (pid 4056) still alive [dpxxdisp.c 10255]
DpJ2eeEmergencyShutdown: j2ee state = SHUTDOWN
DpJ2eeEmergencyShutdown: try to kill SAP J2EE startup framework (pid=4056)
waiting for termination of J2EE server (2nd chance) ...
Mon Aug 18 08:27:43 2008
DpStartStopMsg: send stop message (myname is >solman_CCC_01 <)
DpStartStopMsg: stop msg sent
Mon Aug 18 08:27:44 2008
DpHalt: sync with message server o.k.
detach from message server
***LOG Q0M=> DpMsDetach, ms_detach () [dpxxdisp.c 12099]
MBUF state OFF
MBUF component DOWN
cleanup EM
Mon Aug 18 08:27:45 2008
***LOG Q05=> DpHalt, DPStop ( 3732) [dpxxdisp.c 10371]
Can anyone explain what this means?
-
File to Idoc with wait for 30 secs time
Hi Experts,
We have a requirement to send a file to an IDoc using the MDM adapter.
We need to process the messages arriving via FTP one by one to the target ECC system, with a 30-second interval between them.
After one message is processed, the interface must wait 30 seconds before processing the next message.
Please let us know of any user-defined function code for this.
Thanks in advance...
Soumya A

Check the threads below:
File adapter to pick a single file
CONFIGURE FILE ADAPTER, get files one by one
Edited by: phani kumar on Jan 6, 2012 12:50 PM
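If a user-defined function is still wanted, the pause itself is just a timed sleep before the payload is returned. Be aware that a sleeping UDF ties up a mapping thread, so the usual recommendation is to serialize via the file adapter's polling interval with EOIO quality of service instead. A minimal sketch, assuming a plain helper method (the class and method names here are illustrative, not from SAP's mapping API):

```java
// Sketch of a delay helper, as one might call from a PI message mapping UDF.
// DelaySketch and delayBeforeSend are illustrative names, not SAP API.
public class DelaySketch {

    // Pauses for the given number of milliseconds, then returns the
    // payload unchanged (for the forum requirement, pass 30000).
    public static String delayBeforeSend(String payload, long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            // Restore the interrupt flag so the caller can react.
            Thread.currentThread().interrupt();
        }
        return payload;
    }
}
```

In a real UDF the method would take the mapping `Container` argument and hard-code the 30 000 ms wait; the sketch parameterizes the delay only so it can be exercised quickly.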