LMS 3.2 Archive Update Job Failing
The scheduled archive update job is failing for all devices. Every one that I've checked is failing with the same message:
Execution Result:
Unable to get results of job execution for device. Retry the job after increasing the job result wait time using the option:Resource Manager Essentials -> Admin -> Config Mgmt -> Archive Mgmt ->Fetch Settings
This setting is at 120 seconds. I've tried adjusting it and get the same results.
Attaching the job logs from the most recent failure.
Thanks for any help.
Hi,
Archive purge can fail for many reasons. I can suggest a few things; if they do not work, you can open a TAC case for troubleshooting.
Try this :
Increase ConfigJobManager.heapsize to “1024m” in the following file:
NMSROOT/MDC/tomcat/webapps/rme/WEB-INF/classes/JobManager.properties (i.e., ConfigJobManager.heapsize=1024m)
Restart the daemon manager.
Once the daemon manager has started successfully, go to Admin > Network > Purge Settings > Config Archive Purge Settings, increase “Purge versions older than:” to 12 months (also configure a large value for the number of versions you would like to keep per device) and trigger the job.
Once the job completes, decrease the number of months gradually until the desired number of days and number of versions is reached. This exercise reduces the number of archives loaded into memory during a purge job, which is what causes the job to hang.
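The heap-size edit in the first step can be sketched as follows. This is a minimal illustration only, run against a scratch copy of the file; on a real LMS server the file is the JobManager.properties path given above, and the daemon manager still needs a restart afterwards.

```python
import re
import tempfile
from pathlib import Path

# Sketch only: set ConfigJobManager.heapsize=1024m in JobManager.properties.
# Demonstrated on a scratch copy; on a real LMS server edit the file under
# NMSROOT/MDC/tomcat/webapps/rme/WEB-INF/classes/ and then restart dmgtd.
KEY = "ConfigJobManager.heapsize"

def set_heapsize(path: Path, size: str = "1024m") -> None:
    text = path.read_text()
    line = f"{KEY}={size}"
    if re.search(rf"^{re.escape(KEY)}=", text, flags=re.M):
        # Property already present: rewrite its value in place.
        text = re.sub(rf"^{re.escape(KEY)}=.*$", line, text, flags=re.M)
    else:
        # Property missing: append it.
        text = text.rstrip("\n") + "\n" + line + "\n"
    path.write_text(text)

demo = Path(tempfile.mkdtemp()) / "JobManager.properties"
demo.write_text("ConfigJobManager.heapsize=512m\nSomeOtherKey=1\n")
set_heapsize(demo)
print(demo.read_text())
```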
Thanks-
Afroz
[Do rate the useful post]
Similar Messages
-
CO_COSTCTR Archiving Write Job Fails
Hello,
The CO_COSTCTR archiving write job fails with the error messages below.
Input or output error in archive file \\HOST\archive\SID\CO_COSTCTR_201209110858
Message no. BA024
Diagnosis
An error has occurred when writing the archive file \\HOST\archive\SID\CO_COSTCTR_201209110858 in the file system. This can occur, for example, as the result of temporary network problems or of a lack of space in the file system.
The job logs do not indicate other possible causes, and neither do the OS and system logs. When I ran it in test mode, it finished successfully after a long 8 hours. However, the error only happens in production mode, where the system actually generates the archive files. The weird thing is, I do not have this issue on our QAS system (a DB copy of our Prod). I was able to archive successfully in our QAS using the same path name and logical name (we transported the settings).
Considering the above, I suspect some system or OS-related parameter that is unique or differs from our QAS system: a parameter not saved in the database, since our QAS is a DB copy of our Prod system. Such a parameter could affect archiving write jobs (which read from and write to the file system).
I already checked the network session timeout settings (CMD > net server config) and the settings are the same between our QAS and Prod servers. No problems with disk space. The archive directory is a local shared folder \\HOST\archive\SID\<filename>. The HOST and SID are variables which are unique to each system. The difference is that our Prod server is HA configured (clustered) while our QAS is just standalone. It might have some other relevant settings I am not aware of. Has anyone encountered this before and was able to resolve it?
We're running SAP R3 4.7 by the way.
Thanks,
Tony
Hi Rod,
We tried a couple of times already; they all got cancelled due to the error above. As much as we wanted to trim down the variant, CO_COSTCTR only accepts an entire fiscal year. The data it has to go through is quite a lot, and the test run took us more than 8 hours to complete. I have executed the same in our QAS without errors, which is why I am a bit confused about why our production system gives this error. Even though our QAS is refreshed from our PRD using a DB copy, it can run the archive without any problems. This made me think there might be unique contributing factors or parameters, not saved in the database, that affect the archiving. Our PRD is configured for high availability; the hostname is not actually the physical host but rather a virtual host of two clustered servers. But this was no concern with the other archiving objects; only CO_COSTCTR gives us this error. QAS has archiving logs turned off, if that's relevant.
Archiving the 2007 fiscal year cancels after around 7200 seconds every time, while the 2008 fiscal year cancels earlier, around 2500 seconds. I think that while the write program loops through the data, by the time it needs to access the archive file again, the connection has been disconnected or has timed out. The reason it cancels almost consistently after the same amount of time is the variant: there is not much variety to trim down the data, so the program reads the same set of data objects and, when it reaches that one point of failure (after the expected time), it cancels. If this is true, I may need to find where to extend that timeout, or whatever it is that is causing the above error.
Thanks for all your help. This is the best way I can describe it. Sorry for the long reply.
Tony -
Company Address deleted when update job fails
Hey guys,
Last night we had an issue with a development system; because of this, our IdM wasn't able to run its nightly update job.
Before I noticed this in the morning, I had an urgent request to create a user, but I couldn't create the user because I couldn't find the company address.
I had a look in IdM, noticed the update job had gone wrong, ran it manually, and out of the blue (or out of the development system) the company address was back.
Only problem: every single user now does not have a company address assigned to him/her.
So my question is: how is it possible that a failing update job causes IdM to remove a company address?
We've had issues before where users didn't have a company address, and it caused all kinds of strange behavior, so I'm not looking forward to that type of situation again.
Hi Srinivas S,
Create new recurring instances of the reports that are having the issue, then check again.
That should resolve your issue.
Regards,
Anish -
LMS 3.2 Syslog purge job failed.
We installed LMS 3.2 on Windows Server 2003 Enterprise Edition. The RME version is 4.3.1. We have created a daily syslog purge job from RME > Admin > Syslog > Set Purge Policy. This job fails with the following error.
"Drop table failed:SQL Anywhere Error -210: User 'DBA' has the row in 'SYSLOG_20100516' locked 8405 42W18
[ Sat Jun 19 01:00:07 GMT+05:30 2010 ],ERROR,[main],Failed to purge syslogs"
After a server restart, the job again executes normally for a few days.
Please check attached log file of specified job.
Regards
The only reports of this problem in the past have been with two purge jobs being created during an upgrade; that would not be something you did yourself. I suggest you open a TAC service request so database locking analysis can be done to find what process is keeping that table locked.
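For what it's worth, the error text itself names the locked table. Assuming RME keeps naming the daily syslog tables SYSLOG_YYYYMMDD, as in the quoted message, a small sketch can pull out the table and the day it covers, so repeated failures can be compared against each other:

```python
import re
from datetime import datetime

# Sketch: extract the locked table (and the day it covers) from the purge
# error, assuming the SYSLOG_YYYYMMDD naming seen in the quoted message.
err = ("Drop table failed:SQL Anywhere Error -210: "
       "User 'DBA' has the row in 'SYSLOG_20100516' locked 8405 42W18")

m = re.search(r"row in '(SYSLOG_(\d{8}))' locked", err)
table = m.group(1)
day = datetime.strptime(m.group(2), "%Y%m%d").date()
print(table, day)  # SYSLOG_20100516 2010-05-16
```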
-
LMS 4.0 Archive Poller starts failing after some working cycles
Hi,
We have LMS 4.0 running in our network with around 650 nodes.
We are having problems with the Archive Poller.
The first few days it worked fine, but then it started to fail for some devices, and on the next day it failed to poll all devices. I restarted the daemon manager after the failure and it came back working fine for another few days. Since then, the Change Poller works for a few days, then degrades until polling fails completely; a restart of the daemon manager gets it working again, but only for a few days.
Here is some output of a failed node:
*** Device Details for zagora4-sw ***
Protocol ==> Unknown / Not Applicable
Selected Protocols with order ==> Telnet,SSH
Execution Result:
Unable to get results of job execution for device. Retry the job after increasing the job result wait time using the option:Admin > Collection Settings > Config > Config Job Timeout Settings
It seems that the poller is not even trying to poll nodes...?
Do you think the DB might have become corrupted? I restarted the machine without shutting down the processes some weeks ago...
Any hints or ideas would be appreciated!
Hi all,
I have the same problem as Ruslan. Does anybody have an idea how to solve it? After some successful runs of the ArchivePoll, it stops with the mentioned error...
*** Device Details for nunw-n30-05-005 ***
Protocol ==> Unknown / Not Applicable
Selected Protocols with order ==> Telnet,SSH,HTTPS
Execution Result:
Unable to get results of job execution for device. Retry the job after increasing the job result wait time using the option:Admin > Collection Settings > Config > Config Job Timeout Settings
I tried to increase the "wait time" but nothing happens...
HELP Please ...
Thanks
Regards, Mario -
SharePoint 2013 - Team Foundation Server Dashboard Update job failed
Hi
I integrated TFS 2012 with SharePoint 2013 on Windows Server 2012. The SharePoint 2013 farm has 3 WFE and 3 App servers.
Here is what I did:
Installed the TFS extension for SP 2013 on each SP server and successfully granted the TFS server access to the SP web application.
In CA, I deployed the TFS solutions (wsp) successfully for the wfe3 server:
microsoft.teamfoundation.sharepoint.dashboards.wsp
microsoft.teamfoundation.sharepoint.dashboards15.wsp
microsoft.teamfoundation.sharepoint.wsp
I have a number of site collections with TFS features activated, and the connection with the TFS server project sites is working, but I really don't know much about TFS.
What I see is there are 2 TFS timer jobs "Team Foundation Server Dashboard Update" for each of the web application (web1 and web2)
running every 30 minutes.
All jobs on web1 run and succeed (on wfe1 and app3),
but all jobs on web2 fail (on wfe2, wfe3, app1, and app2) with the following error: "An exception occurred while scanning dashboard sites. Please see the SharePoint
log for detailed exceptions"
I looked into the log file and it shows the same error, but nothing more.
If anyone experience this or have any advice on how to resolve this, please share
Thanks
Swanl
Hi Swanl,
It seems that the Dashboard Update timer job loops through all existing site collections, regardless of whether they are associated with a TFS site.
If one or more of these site collections is down or corrupted, that will cause the job to fail.
You can try the following steps to check whether the sites are healthy:
1. Go to Central Administration > Application Management > View all Site Collections. Click on each site collection and check the properties shown for it on the right-hand side.
If the properties do not show up, or error out, that site collection will need to be fixed.
2. Detach the SharePoint content database and reattach it to see if the issue still occurs.
Thanks,
Victoria
Forum Support
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
[email protected]
Victoria Xia
TechNet Community Support -
Archive Backup job failing with no error
Hi All,
Can anybody help me fix this backup issue? Please find the RMAN log below.
Script /opt/rman_script/st80_oracle_arch.sh
==== started on Fri Jun 28 11:05:11 SGT 2013 ====
RMAN: /OraBase/V10203/bin/rman
ORACLE_SID: ST801
ORACLE_USER: oracle
ORACLE_HOME: /OraBase/V10203
NB_ORA_SERV: zsswmasb
NB_ORA_POLICY: bsswst80_archlog_daily
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
You have new mail.
Script /opt/rman_script/st80_oracle_arch.sh
==== ended in error on Fri Jun 28 11:05:11 SGT 2013 ====
Thanks,
Nayab
Hi Sarat,
It is solved now; it was due to the archive log destination being full, which caused the backup to hang. It worked after my system admin manually moved a few logs.
Thanks,
Nayab -
Hello,
When I tried to update the schedule of the project, the job failed with the error message "AssignmentAlreadyExists". I looked further into the ULS logs, but they weren't that helpful.
This is the message I found in ULS logs "type = UpdateScheduledProject failed at Message 5 and is blocking the correlation Errors: AssignmentAlreadyExists,Leaving Monitored Scope (FillTypedDataSet -- MSP_WEB_SP_QRY_ReadTimeSheetAssignmentsAndCustomFieldData).
Execution Time=398.730006187937"
Please help me understand the root cause of this problem.
I have verified all the assignments in the plan and I did not find any duplicates.
Thanks in advance.
Sesha
This issue might need detailed troubleshooting; I would suggest you raise a support case with Microsoft.
Cheers! Happy troubleshooting!!! Dinesh S. Rai - MSFT Enterprise Project Management. Please click Mark As Answer if a post solves your problem, or Vote As Helpful if a post has been useful to you. This can be beneficial to other community members reading
the thread. -
Db13 both the jobs failed update stats and DB check
Dear Friends,
Please take a look at the following logs and suggest.
Thanks in advance.
BR0883I Table selected to collect statistics after check: SAPSR3.ALSLDCTRL (1/0:34:0)
BR0280I BRCONNECT time stamp: 2009-07-20 23.00.50
BR0881I Collecting statistics for table SAPSR3.ALSLDCTRL with method/sample C ...
BR0280I BRCONNECT time stamp: 2009-07-20 23.00.50
BR0884I Statistics collected for table: SAPSR3.ALSLDCTRL, rows old/new: 1/1
BR0280I BRCONNECT time stamp: 2009-07-20 23.00.50
BR0883I Table selected to collect statistics after check: SAPSR3.ALTSTLOD (118/0:6940:0)
BR0280I BRCONNECT time stamp: 2009-07-20 23.00.50
BR0881I Collecting statistics for table SAPSR3.ALTSTLOD with method/sample C ...
BR0280I BRCONNECT time stamp: 2009-07-20 23.00.50
BR0884I Statistics collected for table: SAPSR3.ALTSTLOD, rows old/new: 118/118
BR0280I BRCONNECT time stamp: 2009-07-20 23.00.50
BR0850I 9 of 102 objects processed - 0.105 of 9.323 units done
BR0204I Percentage done: 1.12%, estimated end time: 23:11
BR0001I *_________________________________________________
BR0280I BRCONNECT time stamp: 2009-07-20 23.00.50
BR0883I Table selected to collect statistics after check: SAPSR3.ARFCRSTATE (43/29218:7005:29195)
BR0280I BRCONNECT time stamp: 2009-07-20 23.00.50
BR0881I Collecting statistics for table SAPSR3.ARFCRSTATE with method/sample C ...
BR0280I BRCONNECT time stamp: 2009-07-20 23.00.53
BR0884I Statistics collected for table: SAPSR3.ARFCRSTATE, rows old/new: 43/66
BR0280I BRCONNECT time stamp: 2009-07-20 23.00.53
BR0883I Table selected to collect statistics after check: SAPSR3.ARFCSDATA (153830/419010:0:296422)
BR0280I BRCONNECT time stamp: 2009-07-20 23.00.53
BR0881I Collecting statistics for table SAPSR3.ARFCSDATA with method/sample E/P10 ...
BR0280I BRCONNECT time stamp: 2009-07-20 23.01.12
BR0884I Statistics collected for table: SAPSR3.ARFCSDATA, rows old/new: 153830/274700
BR0280I BRCONNECT time stamp: 2009-07-20 23.01.12
BR0850I 11 of 102 objects processed - 0.550 of 9.323 units done
BR0204I Percentage done: 5.90%, estimated end time: 23:08
BR0001I ***_______________________________________________
BR0280I BRCONNECT time stamp: 2009-07-20 23.01.12
BR0301W SQL error -1 at location brc_dblog_write-2, SQL statement:
'INSERT INTO SAP_SDBAD (BEG, FUNCT, SYSID, POS, LINE) VALUES ('20090720230019', 'sta', 'WIP', '0000', 'A 00000000 00000000 00000
ORA-00001: unique constraint (SAPSR3DB.SDBAD__0) violated
BR0325W Writing to database log failed
BR0280I BRCONNECT time stamp: 2009-07-20 23.01.12
BR0883I Table selected to collect statistics after check: SAPSR3.ARFCSSTATE (4029/146985:38657:32482)
BR0280I BRCONNECT time stamp: 2009-07-20 23.01.12
BR0881I Collecting statistics for table SAPSR3.ARFCSSTATE with method/sample E/P10 ...
BR0280I BRCONNECT time stamp: 2009-07-20 23.01.27
BR0884I Statistics collected for table: SAPSR3.ARFCSSTATE, rows old/new: 4029/119330
BR0280I BRCONNECT time stamp: 2009-07-20 23.01.27
BR0883I Table selected to collect statistics after check: SAPSR3.BAL_AMODAL (0/70:0:70)
BR0280I BRCONNECT time stamp: 2009-07-20 23.01.27
BR0881I Collecting statistics for table SAPSR3.BAL_AMODAL with method/sample C ...
BR2.16
20.07.2009 23:12:50 BR0884I Statistics collected for table: SAPSR3.TSP02, rows old/new: 23263/24250
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.16
20.07.2009 23:12:50 BR0883I Table selected to collect statistics after check: SAPSR3.TSP02FX (1511/9606:0:9600)
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.16
20.07.2009 23:12:50 BR0881I Collecting statistics for table SAPSR3.TSP02FX with method/sample C ...
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.16
20.07.2009 23:12:50 BR0884I Statistics collected for table: SAPSR3.TSP02FX, rows old/new: 1511/1517
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.16
20.07.2009 23:12:50 BR0883I Table selected to collect statistics after check: SAPSR3.TSP02W (1511/9606:0:9600)
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.16
20.07.2009 23:12:50 BR0881I Collecting statistics for table SAPSR3.TSP02W with method/sample C ...
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.16
20.07.2009 23:12:50 BR0884I Statistics collected for table: SAPSR3.TSP02W, rows old/new: 1511/1517
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.16
20.07.2009 23:12:50 BR0883I Table selected to collect statistics after check: SAPSR3.TSPOPTIONS (12/0:10970:0)
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.16
20.07.2009 23:12:50 BR0881I Collecting statistics for table SAPSR3.TSPOPTIONS with method/sample C ...
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.17
20.07.2009 23:12:50 BR0884I Statistics collected for table: SAPSR3.TSPOPTIONS, rows old/new: 12/12
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.17
20.07.2009 23:12:50 BR0883I Table selected to collect statistics after check: SAPSR3.TSPSVI (5/0:19210:0)
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.17
20.07.2009 23:12:50 BR0881I Collecting statistics for table SAPSR3.TSPSVI with method/sample C ...
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.17
20.07.2009 23:12:50 BR0884I Statistics collected for table: SAPSR3.TSPSVI, rows old/new: 5/5
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.17
20.07.2009 23:12:50 BR0883I Table selected to collect statistics after check: SAPSR3.TST01 (31843/15941:56883:15606)
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.17
20.07.2009 23:12:50 BR0881I Collecting statistics for table SAPSR3.TST01 with method/sample E/P30 ...
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.19
20.07.2009 23:12:50 BR0884I Statistics collected for table: SAPSR3.TST01, rows old/new: 31843/31640
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.19
20.07.2009 23:12:50 BR0883I Table selected to collect statistics after check: SAPSR3.TUCNTRAW (63/83:16833:91)
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.19
20.07.2009 23:12:50 BR0881I Collecting statistics for table SAPSR3.TUCNTRAW with method/sample C ...
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.19
20.07.2009 23:12:50 BR0884I Statistics collected for table: SAPSR3.TUCNTRAW, rows old/new: 63/55
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.19
20.07.200 for table: SAPSR3.ZWS_POSTING_DATE, rows old/new: 582110/578840
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.49
20.07.2009 23:12:50 BR0883I Table selected to collect statistics after check: SAPSR3DB.BC_UDDI_PARAM (82/320:4:320)
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.49
20.07.2009 23:12:50 BR0881I Collecting statistics for table SAPSR3DB.BC_UDDI_PARAM with method/sample C ...
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.49
20.07.2009 23:12:50 BR0884I Statistics collected for table: SAPSR3DB.BC_UDDI_PARAM, rows old/new: 82/82
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.49
20.07.2009 23:12:50 BR0883I Table selected to collect statistics after check: SAPSR3DB.J2EE_KEYSEQUENCE (1/0:8:0)
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.49
20.07.2009 23:12:50 BR0881I Collecting statistics for table SAPSR3DB.J2EE_KEYSEQUENCE with method/sample C ...
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.49
20.07.2009 23:12:50 BR0884I Statistics collected for table: SAPSR3DB.J2EE_KEYSEQUENCE, rows old/new: 1/1
20.07.2009 23:12:50
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.49
20.07.2009 23:12:50 BR0850I 102 of 102 objects processed - 9.323 of 9.323 units done
20.07.2009 23:12:50 BR0204I Percentage done: 100.00%, estimated end time: 23:12
20.07.2009 23:12:50 BR0001I **************************************************
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.49
20.07.2009 23:12:50 BR0301W SQL error -1 at location brc_dblog_write-2, SQL statement:
20.07.2009 23:12:50 'INSERT INTO SAP_SDBAD (BEG, FUNCT, SYSID, POS, LINE) VALUES ('20090720230019', 'sta', 'WIP', '0000', 'A 00000000 00000000 00000
20.07.2009 23:12:50 ORA-00001: unique constraint (SAPSR3DB.SDBAD__0) violated
20.07.2009 23:12:50 BR0325W Writing to database log failed
20.07.2009 23:12:50
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.49
20.07.2009 23:12:50 BR0879I Statistics checked for 68351 tables
20.07.2009 23:12:50 BR0878I Number of tables selected to collect statistics after check: 102
20.07.2009 23:12:50 BR0880I Statistics collected for 102/0 tables/indexes
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.49
20.07.2009 23:12:50 BR0301W SQL error -1 at location brc_dblog_write-2, SQL statement:
20.07.2009 23:12:50 'INSERT INTO SAP_SDBAD (BEG, FUNCT, SYSID, POS, LINE) VALUES ('20090720230019', 'sta', 'WIP', '0000', 'A 00000000 00000000 00000
20.07.2009 23:12:50 ORA-00001: unique constraint (SAPSR3DB.SDBAD__0) violated
20.07.2009 23:12:50 BR0325W Writing to database log failed
20.07.2009 23:12:50
20.07.2009 23:12:50 BR0806I End of BRCONNECT processing: cebbggbz.sta2009-07-20 23.12.49
20.07.2009 23:12:50 BR0280I BRCONNECT time stamp: 2009-07-20 23.12.49
20.07.2009 23:12:50 BR0803I BRCONNECT completed successfully with warnings
20.07.2009 23:12:50 External program terminated with exit code 1
20.07.2009 23:12:50 BRCONNECT returned error status E
20.07.2009 23:12:50 Job finished
Hi Jiggi,
Are you running 2 jobs at the same time from DB13? What do you mean by "Db13 both the jobs failed"?
>ORA-00001: unique constraint
can happen for a lot of reasons in SAP. Basically, it means you are inserting records that already exist.
This may also happen due to a bug at your brtools patch level. What is your brtools patch level?
You can also have a look at
https://service.sap.com/sap/support/notes/421697
https://service.sap.com/sap/support/notes/458872
Anindya -
BPM process archiving "Processing of archiving write command failed"
Can someone help me with the following problem? After archiving a BPM process, I get the following messages (summary):
ERROR Processing of archiving write command failed
ERROR Job "d5e2a9d9ea8111e081260000124596b3" could not be run as user"E61006".
LOG -> Processing of archiving write command failed
[EXCEPTION] com.sap.glx.arch.xml.XmlArchException: Cannot create archivable items from object
Caused by: java.lang.ClassCastException: ...
Configuration
I've completed the following steps based on a blog item.
1. created an archive user with the corresponding roles
2. updated the destination DASdefault with the created user -> destination ping = OK
3. created an archive store BPM_ARCH based on unix root folder
4. created home path synchronization with home path /<sisid>/bpm_proc/ and archive store BPM_ARCH
5. start process archiving from manage processes view.
Process Archiving
Manage Process -> Select a process from the table -> Archive button -> Start archiving by using the default settings.
Archiving Monitor
The following log is created, which describes that the write command failed.
Write phase log:
[2011.09.29 12:00:18 CEST] INFO Job bpm_proc_write (ID: d5e2a9d9ea8111e081260000124596b3, JMS ID: ID:124596B30000009D-000000000C08) started on Thu, 29 Sep 2011 12:00:18:133 CEST by scheduler: 5e11a5e0df3111decc2d00237d240438
[2011.09.29 12:00:18 CEST] INFO Start execution of job named: bpm_proc_write
[2011.09.29 12:00:18 CEST] INFO Job status: RUNNING
[2011.09.29 12:00:18 CEST] ERROR Processing of archiving write command failed
[2011.09.29 12:00:18 CEST] INFO Start processing of archiving write command ...
Verify Indexes ...
Archive XML schema ...
Resident Policy for object selection is instanceIds = [9ca38cb2343511e0849600269e82721e] , timePeriod = 1317290418551 , inError = false ,
[2011.09.29 12:00:18 CEST] ERROR Job "d5e2a9d9ea8111e081260000124596b3" could not be run as user"E61006".
[2011.09.29 12:00:18 CEST] INFO Job bpm_proc_write (ID: d5e2a9d9ea8111e081260000124596b3, JMS ID: ID:124596B30000009D-000000000C08) ended on Thu, 29 Sep 2011 12:00:18:984 CEST
Log viewer
The following message is created in the log viewer.
Processing of archiving write command failed
[EXCEPTION]
com.sap.glx.arch.xml.XmlArchException: Cannot create archivable items from object
at com.sap.engine.core.thread.execution.CentralExecutor$SingleThread.run(CentralExecutor.java:328)
Caused by: java.lang.ClassCastException: class com.sap.glx.arch.Archivable:sap.com/tcbpemarchear @[email protected]2@alive incompatible with interface com.sap.glx.util.id.UID:library:tcbpembaselib @[email protected]f@alive
at com.sap.glx.arch.him.xml.JaxbTaskExtension.createJaxbObjects(JaxbTaskExtension.java:69)
at com.sap.glx.arch.xml.JaxbSession.fillFromExtensions(JaxbSession.java:73)
at com.sap.glx.arch.pm.xml.ArchProcessExtension.fillHimObjects(ArchProcessExtension.java:113)
at com.sap.glx.arch.pm.xml.ArchProcessExtension.createArchObjectItem(ArchProcessExtension.java:60)
at com.sap.glx.arch.xml.JaxbSession.createArchObjectItems(JaxbSession.java:39)
at com.sap.glx.arch.xml.Marshaller.createItems(Marshaller.java:29)
... 61 more
Hi Martin,
I don't have a specific answer, sorry; however, I do recall seeing a number of OSS notes around BPM archiving whilst searching for a different issue last year. Have you checked there for anything relevant to your current version and SP level? There were quite a few notes, if memory serves me well!
Regards,
Gareth. -
Hi,
Below are details of an issue I am having with my 2900 routers, which I cannot back up (taken from dcmaservice.log).
All other devices are working fine.
Is there a bug fix?
Thanks
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-5],com.cisco.nm.rmeng.dcma.configmanager.ConfigManager,updateArchive,1943,Sync Archive for 1 devices - Sync Archive
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-5],com.cisco.nm.rmeng.dcma.configmanager.ConfigManager,updateArchive,1957,Number of devices in fetch Q = 0
[ Wed Jul 04 10:28:37 BST 2012 ],WARN ,[Thread-5],com.cisco.nm.rmeng.util.DCRWrapperAPIs,getResultFromQuery,3315,SQLException occurred as connection closed. Re-connecting to DB...
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-5],com.cisco.nm.rmeng.dcma.configmanager.CfgThreadManager,compareDeviceWithDevicesinRunningThreads,59,inside compareDeviceWithDevicesinRunningThreads method
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-5],com.cisco.nm.rmeng.dcma.configmanager.CfgThreadManager,compareDeviceWithDevicesinRunningThreads,60,Total running threads:5
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-5],com.cisco.nm.rmeng.dcma.configmanager.ConfigManager,updateArchiveIfRequired,2057,Compared the device with running thread devices.Adding to Fetch Q
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-5],com.cisco.nm.rmeng.dcma.configmanager.CfgThreadManager,triggerConfigFetch,52,#### Start of Sweep Wed Jul 04 10:28:37 BST 2012 ####
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-384],com.cisco.nm.xms.xdi.pkgs.SharedDcmaGeneric.transport.GenericConfigOperator,fetchConfig,70,I am here
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-384],com.cisco.nm.xms.xdi.pkgs.SharedDcmaGeneric.transport.GenericCliOperator,registerPlatform,95,Calling new GenericPlatform()
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-384],com.cisco.nm.xms.xdi.pkgs.SharedDcmaGeneric.transport.GenericPlatform,<init>,23,setting GP user and pass prompts
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-384],com.cisco.nm.xms.xdi.pkgs.SharedDcmaGeneric.transport.GenericPlatform,<init>,30,registering generic platform
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-384],com.cisco.nm.xms.xdi.pkgs.SharedDcmaGeneric.transport.GenericPlatform,<init>,32,registered generic platform
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-384],com.cisco.nm.rmeng.util.rmedaa.RMEDeviceContext,getCmdSvc,1537,Inside RMEDeviceContext's getCmdSvc ....
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-384],com.cisco.nm.rmeng.util.rmedaa.RMEDeviceContext,getCmdSvc,1547,Protocol and Platforms passed = SSH , GEN
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-384],com.cisco.nm.rmeng.util.rmedaa.RMEDeviceContext,getCmdSvc,1571,Iam inside ssh ....
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-384],com.cisco.nm.rmeng.util.rmedaa.RMEDeviceContext,getCmdSvc,1573,Initial time_out : 0
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-384],com.cisco.nm.rmeng.util.rmedaa.RMEDeviceContext,getCmdSvc,1583,Computed time_out : 36
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-384],com.cisco.nm.rmeng.util.rmedaa.RMEDeviceContext,getCmdSvc,1599,After computing time_out : 36
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-384],com.cisco.nm.rmeng.util.rmedaa.RMEDeviceContext,getSshCmdSvc,1637,inside getSshCmdSvc with timeout : 36000
[ Wed Jul 04 10:28:37 BST 2012 ],INFO ,[Thread-384],com.cisco.nm.rmeng.util.rmedaa.RMEDeviceContext,getSshProtocols,1743,Inside getsshprotocols with time out : 36000
[ Wed Jul 04 10:28:38 BST 2012 ],INFO ,[Thread-384],com.cisco.nm.rmeng.util.rmedaa.RMEDeviceContext,getSshCmdSvc,1651,SSH2 is running
[ Wed Jul 04 10:28:39 BST 2012 ],ERROR,[Thread-384],com.cisco.nm.xms.xdi.transport.cmdsvc.LogAdapter,error,19,Unknown authentication method: keyboard-interactive
[ Wed Jul 04 10:28:43 BST 2012 ],ERROR,[Thread-384],com.cisco.nm.rmeng.dcma.configmanager.ConfigManager,updateArchiveForDevice,1357,PRIMARY RUNNING Config fetch Failed for MIL-R-002
[ Wed Jul 04 10:28:43 BST 2012 ],INFO ,[Thread-384],com.cisco.nm.rmeng.dcma.configmanager.ConfigManager,writePerDeviceResultFile,2445,Serializing the Device Result = D:/CISCOW~2/files\/rme/jobs\ArchiveMgmt\1108/31.device
[ Wed Jul 04 10:28:53 BST 2012 ],ERROR,[Thread-384],com.cisco.nm.rmeng.dcma.utils.ArchiveUtils,getDeviceReachabilityStatus,461,Telnet/SSH may be disabled on 172.31.255.25
[ Wed Jul 04 10:28:53 BST 2012 ],INFO ,[Thread-2],com.cisco.nm.rmeng.dcma.configmanager.CfgThreadManager,run,99,#### End of Sweep Wed Jul 04 10:28:53 BST 2012 ####
Hi,
I am still looking at the issue, with no resolution.
I am trying Telnet, which is all correctly configured and working (with TACACS; I have also looked at the Taccas...ini file).
But my archive jobs still fail on LMS 3.2 for the 2900 routers.
SSH - fails because the SSH client used by LMS has the "keyboard-interactive" mode set in its client config; as it is a Java SSH client, this is not configurable and therefore needs a fix.
Telnet - seems to be failing with authentication or prompt issues when TACACS is enabled.
I have seen no resolutions to these issues.
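Since dcmaservice.log is the main evidence here, a quick scan of it can at least show how many devices hit the keyboard-interactive error versus other causes. Below is a sketch using sample lines copied from the log quoted above; on a real system, feed the function the full contents of dcmaservice.log instead.

```python
import re

# Sketch: count the SSH auth errors and list the devices whose config fetch
# failed. The sample text is copied from the dcmaservice.log quoted above.
def scan(log_text: str):
    failed_devices = re.findall(r"Config fetch Failed for (\S+)", log_text)
    auth_errors = log_text.count("Unknown authentication method: keyboard-interactive")
    return failed_devices, auth_errors

sample = (
    "[ Wed Jul 04 10:28:39 BST 2012 ],ERROR,[Thread-384],"
    "com.cisco.nm.xms.xdi.transport.cmdsvc.LogAdapter,error,19,"
    "Unknown authentication method: keyboard-interactive\n"
    "[ Wed Jul 04 10:28:43 BST 2012 ],ERROR,[Thread-384],"
    "com.cisco.nm.rmeng.dcma.configmanager.ConfigManager,updateArchiveForDevice,1357,"
    "PRIMARY RUNNING Config fetch Failed for MIL-R-002\n"
)

devices, errors = scan(sample)
print(devices, errors)  # ['MIL-R-002'] 1
```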
Any ideas
Thanks -
DPM 2012 R2 Backup job FAILED for some Hyper-v VMs and Some Hyper-v VMs are not appearing in the DPM
DPM 2012 R2 Backup job FAILED for some Hyper-v VMs
DPM encountered a retryable VSS error. (ID 30112 Details: VssError:The writer experienced a transient error. If the backup process is retried,
the error may not reoccur.
(0x800423F3))
All the VSS writers are in a stable state.
Also, some Hyper-V VMs are not appearing in the DPM 2012 R2 console when I try to create the Protection Group; please note that they are not part of a cluster.
The host is 2012 R2 and the VM is also 2012 R2.
Hi,
What update rollup are you running on the DPM 2012 R2 server? DPM 2012 R2 UR5 introduced a new refresh feature that will re-enumerate data sources on an individual protected server.
Check for VSS errors inside the guests that are having problems being backed up.
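Since error 0x800423F3 is explicitly marked retryable, simply re-running the job after a pause often clears it. The sketch below is a generic retry-with-backoff pattern, not DPM code; `TransientError` and `flaky_backup` are made-up stand-ins for the VSS writer's transient failure:

```python
import time

class TransientError(Exception):
    """Stand-in for a retryable failure such as VSS error 0x800423F3."""

def retry(func, attempts=3, base_delay=1.0):
    """Retry a flaky operation with exponential backoff between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except TransientError:
            if attempt == attempts:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * 2 ** (attempt - 1))

# Demo: an operation that fails once, then succeeds on the retry.
calls = {"n": 0}
def flaky_backup():
    calls["n"] += 1
    if calls["n"] < 2:
        raise TransientError("writer experienced a transient error")
    return "backup completed"

print(retry(flaky_backup, base_delay=0))  # prints: backup completed
```

DPM's own scheduler does the equivalent when you configure job retries, which is why transient writer errors usually disappear on the next run.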
Regards, Mike J. [MSFT]
This posting is provided "AS IS" with no warranties, and confers no rights. -
The Job Failed Due to MAX-Time at ODS Activation Step
Hi
I'm getting these errors "The Job Failed Due to MAX-Time at ODS Activation Step" and
"Max-time Failure"
How do I resolve this failure?

Hi,
You can check the ODS activation logs in the ODS batch monitor: click on that job, then the job log.
First, check in SM37 how many jobs are running. If there are many other long-running jobs, ODS activation will proceed very slowly because the system is overloaded; long-running jobs degrade overall system performance.
To check the performance of the system:
Check the lock waits in ST04, and whether they are progressing or not.
Check SM66 for the number of processes running on the system.
Check ST22 for short dumps.
Check OS07 for the CPU idle time; if it is less than 20%, the CPU is overloaded.
Check SM21 (the system log).
Check the table space available in ST04.
If the system is overloaded, the ODS will not get enough work processes to create its background jobs (the BI_BCTL* jobs); the update will still happen, but very slowly.
In this case you can kill a few long-running jobs which are not important, and kill a few ODS activations as well.
Don't run 23 ODS activations all at the same time; run some of them at a time.
As for the key checks when loading data: check ST22, check the job in R/3, check SM58 for tRFC, and check SM59 for RFC connections.
Regards,
Shikha -
Some jobs fail BackupExec, Ultrium 215 drive, NW6.5 SP6
The OS is Netware 6.5 SP6.
The server is a HP Proliant DL-380 G4.
The drive is a HP StorageWorks Ultrium LTO-1 215 100/200GB drive.
The drive is connected to a HP PCI-X Single Channel U320 SCSI HBA, which I recently installed, in order to solve slow transfer speeds, and to solve CPQRAID errors which stalled the server during bootup (it was complaining to have a non-disk drive on the internal controller).
Backup Exec Administrative Console is version 9.10 revision 1158, I am assuming that this means that BE itself has this version number.
Since our data is now more than the tape capacity I have recently started running two jobs interleaved, to backup (around) half of the data at night. One which runs Monday, Wednesday and Friday and one which runs Tuesday and Thursday.
My problem is that while the Tue/Thu job completes successfully every time, the Mon/Wed/Fri job fails every time.
The jobs have identical policies (except for the interleaved weekdays), but different file selections.
The job log of the Mon/Wed/Fri job fails with this error:
##ERR##Error on HA:1 ID:4 LUN:0 HP ULTRIUM 1-SCSI.
##ERR##A hardware error has been detected during this operation. This
##ERR##media should not be used for any additional backup operations.
##ERR##Data written to this media prior to the error may still be
##ERR##restored.
##ERR##SCSI bus timeouts can be caused by a media drive that needs
##ERR##cleaning, a SCSI bus that is too long, incorrect SCSI
##ERR##termination, or a faulty device. If the drive has been working
##ERR##properly, clean the drive or replace the media and retry the
##ERR##operation.
##ERR##Vendor: HP
##ERR##Product: ULTRIUM 1-SCSI
##ERR##ID:
##ERR##Firmware: N27D
##ERR##Function: Write(5)
##ERR##Error: A timeout has occurred on drive HA:1 ID:4 LUN:0 HP
##ERR##ULTRIUM 1-SCSI. Please retry the operation.(1)
##ERR##Sense Data:
##ERR##00 00 00 00 - 00 00 00 00 - 00 00 00 00 - 00 00 00 00
##ERR##00 00 00 00 - 00 00 00 00 - 00 00 00 00 - 00 00 00 00
##NML##
##NML##
##NML##
##NML## Total directories: 2864
##NML## Total files: 23275
##NML## Total bytes: 3,330,035,351 (3175.7 Megabytes)
##NML## Total time: 00:06:51
##NML## Throughput: 8,102,275 bytes/second (463.6 Megabytes/minute)
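Note that the sense data in the log is all zeros, which is consistent with a SCSI bus timeout rather than an error the drive itself reported. A quick sketch of decoding it (assuming the standard fixed-format sense layout, where the sense key is the low nibble of byte 2):

```python
# Standard SCSI sense keys (fixed-format sense data)
SENSE_KEYS = {0x0: "NO SENSE", 0x1: "RECOVERED ERROR", 0x2: "NOT READY",
              0x3: "MEDIUM ERROR", 0x4: "HARDWARE ERROR", 0x5: "ILLEGAL REQUEST",
              0x6: "UNIT ATTENTION", 0xB: "ABORTED COMMAND"}

def sense_key(sense_bytes):
    # The sense key is the low nibble of byte 2 in fixed-format sense data
    return SENSE_KEYS.get(sense_bytes[2] & 0x0F, "RESERVED/VENDOR")

data = bytes.fromhex("00" * 32)  # the all-zero sense data from the job log
print(sense_key(data))  # prints: NO SENSE
```

"NO SENSE" here means the drive never answered with a fault of its own, which points back toward the bus, cabling, termination, or the new HBA rather than the media.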
I am suspecting the new controller, or perhaps a broken drive?
I have run multiple cleaning jobs on the drive with new cleaning tapes. The cabling is secured in place.
I have looked for firmware updates, but even though there's a mention of new firmware on HP's site (see http://h20000.www2.hp.com/bizsupport...odTypeId=12169), I can't find the firmware for the NetWare HP LTT (the drive diagnosis / update tool).
I'm hoping someone can provide me some useful info towards solving this problem.
Regards,
Tor

My suggestion to you is to probably just give up on fixing this. I
have the same DL380, but a slightly newer drive(Ultrium 448). After
working with HP, Adaptec, & Symantec for over a year I gave up. I've
tried different cards (HP-LSI, Adaptec) , cables, and even swapped the
drive twice with HP but was never able to get it to work.
In the end I purchased a new server, moved the card and tape drive,
and cables all over to the new server and the hardware has been
working fine in the new box for the last year or so. Until I loaded
SP8 the other day.
My guess is that the PCI-X slot used for these cards isn't happy with
the server hardware. -
The last update failed on
my Windows 7 Home Premium PC, so I had to try downloading and installing it manually.
That also failed, leaving my iTunes not working. I tried restoring back to an
earlier date, but every time it tried to update, it failed. Eventually it told me
to delete and reinstall; this fails because the Apple Mobile Device Service
fails to start, asking me if I have sufficient privileges to start the system
service. I have tried all the suggestions on the iTunes page, i.e. delete everything to
do with Apple and start as administrator. Nothing works. Help please.

I too have tried the latest iTunes (12.1.1.4) in Win7. I did a search for latent drivers as you noted but found none. I am glad it worked for you - I'm still no-joy at this point. I'm able to install AMDS over and over again, without issue - it just doesn't start, then the iTunes launch fails with Error 7. I have to manually remove it.
do with apple, and start as administrator, nothing works, help pleaseI too have tried the latest iTunes (12.1.1.4) in Win7. I did a search for latent drivers as you noted but found none. I am glad it worked for you - I'm still no-joy at this point. I'm able to install AMDS over and over again, without issue - it just doesn't start then iTunes launch fails with Error 7. I have to manually remove it.
I am able to connect my iPhone via USB cable and access my photo storage on the phone. I am just not able to install iTunes anymore. I have attempted resolution to this issue for almost two months now. Until that time, I was content with what was installed. It was only when the proverbial update box appeared that I attempted to 'update' my iTunes. WHAT A MISTAKE putting blind faith into an Apple request to update. I SUGGEST NOT TO UPDATE YOUR ITUNES IF IT IS RUNNING PROPERLY!
I realize now, once again I might add, my reliance on software provided by others. It's like anything else, I shouldn't rely on just one method for anything. Time to search for a more pleasant alternative than Apple. Truly, when it worked, it was good at its job, but now that I am wasting time I am looking seriously at upgrading to another type of smartphone and media management software. Way too much trouble for anything as simple as this should be. I wonder, is this a result of another feud with Microsoft?