CO_COSTCTR Archiving Write Job Fails

Hello,
The CO_COSTCTR archiving write job fails with the error messages below. 
Input or output error in archive file \\HOST\archive\SID\CO_COSTCTR_201209110858
Message no. BA024
Diagnosis
An error has occurred when writing the archive file \\HOST\archive\SID\CO_COSTCTR_201209110858 in the file system. This can occur, for example, as the result of temporary network problems or of a lack of space in the file system.
The job logs do not indicate other possible causes, and the OS and system logs don't show anything either. When I ran it in test mode, it finished successfully after a long 8 hours. However, the error only happens in production mode, when the system actually generates the archive files. The strange thing is that I do not have this issue in our QAS system (a DB copy of our Prod). I was able to archive successfully in QAS using the same path name and logical name (we transport the settings).
Considering the above, I suspect some system- or OS-level parameter that is unique to or different from our QAS system, a parameter not saved in the database (our QAS is a DB copy of our Prod system) that could affect archiving write jobs (which read from and write to the file system).
I already checked the network session timeout settings (CMD > net config server) and they are the same on our QAS and Prod servers. There are no problems with disk space. The archive directory is a local shared folder \\HOST\archive\SID\<filename>, where HOST and SID are placeholders unique to each system. The difference is that our Prod server is HA-configured (clustered) while our QAS is standalone, so there may be other relevant settings I am not aware of. Has anyone encountered this before and managed to resolve it?
We're running SAP R/3 4.7, by the way.
Thanks,
Tony

Hi Rod,
We tried a couple of times already; they all got cancelled with the error above. As much as we wanted to trim down the variant, CO_COSTCTR only accepts an entire fiscal year. The data it has to go through is quite a lot, and the test run took us more than 8 hours to complete. I have run the same thing in our QAS without errors, which is why I am a bit confused about getting this error in our Production system. Even though our QAS is refreshed from our PRD via DB copy, it can run the archive without any problems. So I am led to think there are contributing factors or parameters, not saved in the database, that affect the archiving. Our PRD is configured for high availability; the hostname is not actually the physical host but a virtual host of two clustered servers. But this has not been a concern with the other archiving objects; only CO_COSTCTR gives us this error. QAS has archive logs turned off, if that's relevant.
Archiving fiscal year 2007 cancels after around 7,200 seconds each time, while fiscal year 2008 cancels earlier, at around 2,500 seconds. I think that while the write program loops through the data, by the time it needs to access the archive file again, the connection has already been dropped or timed out. The reason it cancels fairly consistently after the same amount of time is the variant: there is not much room to trim down the data, so the program reads the same set of data objects each run. When it reaches that one point of failure (after the expected time), it cancels. If this is true, I need to find where to extend that timeout, or whatever else is causing the error above.
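If that is the case, one candidate worth ruling out is the SMB idle-session disconnect on the file-server side of the cluster. A minimal check, assuming the archive share is served by a Windows host (run on that host with admin rights; the autodisconnect value is in minutes, -1 disables it):

REM show the current server settings, including "Idle session time (min)"
net config server
REM raise the idle disconnect well above the write job's loop time, or disable it entirely
net config server /autodisconnect:-1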
Thanks for all your help.  This is the best way I can describe it.  Sorry for the long reply.
Tony

Similar Messages

  • BPM process archiving "Processing of archiving write command failed"

    Can someone help me with the following problem? After archiving a BPM process, I get the following messages (summary):
    ERROR  Processing of archiving write command failed
    ERROR  Job "d5e2a9d9ea8111e081260000124596b3" could not be run as user"E61006".
    LOG -> Processing of archiving write command failed
    [EXCEPTION] com.sap.glx.arch.xml.XmlArchException: Cannot create archivable items from object
    Caused by: java.lang.ClassCastException: ...
    Configuration
    I've completed the following steps based on a blog post:
    1. created an archive user with the corresponding roles
    2. updated the destination DASdefault with the created user -> destination ping = OK
    3. created an archive store BPM_ARCH based on a Unix root folder
    4. created a home path synchronization with home path /<sisid>/bpm_proc/ and archive store BPM_ARCH
    5. started process archiving from the Manage Processes view.
    Process Archiving
    Manage Process -> Select a process from the table -> Archive button -> Start archiving by using the default settings.
    Archiving Monitor
    The following log is created, which describes that the write command failed.
    Write phase log:
    [2011.09.29 12:00:18 CEST] INFO   Job bpm_proc_write (ID: d5e2a9d9ea8111e081260000124596b3, JMS ID: ID:124596B30000009D-000000000C08) started on Thu, 29 Sep 2011 12:00:18:133 CEST by scheduler: 5e11a5e0df3111decc2d00237d240438
    [2011.09.29 12:00:18 CEST] INFO   Start execution of job named: bpm_proc_write
    [2011.09.29 12:00:18 CEST] INFO   Job status: RUNNING
    [2011.09.29 12:00:18 CEST] ERROR  Processing of archiving write command failed
    [2011.09.29 12:00:18 CEST] INFO   Start processing of archiving write command ...
    Verify Indexes ...
    Archive XML schema ...
    Resident Policy for object selection is  instanceIds = [9ca38cb2343511e0849600269e82721e] ,  timePeriod = 1317290418551 ,  inError = false ,
    [2011.09.29 12:00:18 CEST] ERROR  Job "d5e2a9d9ea8111e081260000124596b3" could not be run as user"E61006".
    [2011.09.29 12:00:18 CEST] INFO   Job bpm_proc_write (ID: d5e2a9d9ea8111e081260000124596b3, JMS ID: ID:124596B30000009D-000000000C08) ended on Thu, 29 Sep 2011 12:00:18:984 CEST
    Log viewer
    The following message is created in the log viewer.
    Processing of archiving write command failed
    [EXCEPTION]
    com.sap.glx.arch.xml.XmlArchException: Cannot create archivable items from object
    at com.sap.engine.core.thread.execution.CentralExecutor$SingleThread.run(CentralExecutor.java:328)
    Caused by: java.lang.ClassCastException: class com.sap.glx.arch.Archivable:sap.com/tcbpemarchear @[email protected]2@alive incompatible with interface com.sap.glx.util.id.UID:library:tcbpembaselib @[email protected]f@alive
    at com.sap.glx.arch.him.xml.JaxbTaskExtension.createJaxbObjects(JaxbTaskExtension.java:69)
    at com.sap.glx.arch.xml.JaxbSession.fillFromExtensions(JaxbSession.java:73)
    at com.sap.glx.arch.pm.xml.ArchProcessExtension.fillHimObjects(ArchProcessExtension.java:113)
    at com.sap.glx.arch.pm.xml.ArchProcessExtension.createArchObjectItem(ArchProcessExtension.java:60)
    at com.sap.glx.arch.xml.JaxbSession.createArchObjectItems(JaxbSession.java:39)
    at com.sap.glx.arch.xml.Marshaller.createItems(Marshaller.java:29)
    ... 61 more

    Hi Martin,
    I don't have a specific answer, sorry; however, I do recall seeing a number of OSS notes around BPM archiving whilst searching for a different issue last year - have you checked there for anything relevant to your current version and SP level? There were quite a few notes, if memory serves me well!
    Regards,
    Gareth.

  • Does MM_EKKO  archive write job lock the tables?

    Archive experts,
    We run MM_EKKO archiving twice a week. My understanding is that the write job just reads the data and writes it to archive files. But we run replenishment jobs which hit the EKPO table; those jobs run slowly, and I noticed that the archive write job is holding locks on this table. As soon as I cancelled the write job, the replenishment jobs moved faster. Why does this happen? Archive write jobs should not cause any performance issues; only the delete jobs should impact performance. Am I correct? Is anyone else experiencing similar issues?
    Sam

    Hi Sam,
    Interesting question! Your understanding is correct: the write job just reads the data from the tables and writes it into archive files... but the write job of MM_EKKO (and MM_EBAN) is a bit different. The MM_EKKO write job also takes care of setting the deletion indicator (depending on whether it is one-step or two-step archiving). So it is possible that it holds locks while setting the deletion indicator, as that is a change to the database.
    Please have a look at the following link for an explanation of one-step and two-step archiving:
    http://help.sap.com/saphelp_47x200/helpdata/en/9b/c0963457889b37e10000009b38f83b/frameset.htm
    Hope this explains the reason for the performance problem you are facing.
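    If you want to confirm this is what is happening, one rough check is to watch the deletion indicator being set while the write job runs. This is only a sketch, assuming direct database access on Oracle and an ABAP schema named SAPR3 (adjust to your installation); run it before and during the write job and watch the count climb:

    sqlplus -s "/ as sysdba" <<EOF
    -- purchasing document headers already flagged (LOEKZ) by the write job
    SELECT COUNT(*) FROM SAPR3.EKKO WHERE LOEKZ <> ' ';
    EXIT;
    EOF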
    Regards,
    Naveen

  • LMS 3.2 Archive Update Job Failing

    The scheduled archive update job is failing for all devices. Every one that I've checked is failing with the same message:
    Execution Result:
    Unable to get results of job execution for device. Retry the job after increasing the job result wait time using the option: Resource Manager Essentials -> Admin -> Config Mgmt -> Archive Mgmt -> Fetch Settings
    This setting is at 120 seconds. I've tried adjusting this setting and get the same results.
    Attaching job logs from most recent failure.
    Thanks for any help.

    Hi ,
    Archive purge can fail for many reasons. I can suggest a few things; if they do not work, you can open a TAC case for troubleshooting.
    Try this:
    1. Increase ConfigJobManager.heapsize to "1024m" in the following file: NMSROOT/MDC/tomcat/webapps/rme/WEB-INF/classes/JobManager.properties (i.e. ConfigJobManager.heapsize=1024m).
    2. Restart the daemon manager.
    3. Once the daemon manager has started successfully, go to Admin > Network > Purge Settings > Config Archive Purge Settings, increase "Purge versions older than:" to 12 months (also configure a large value for the number of versions you would like to keep per device) and trigger the job.
    4. Once the job completes, decrease the number of months gradually until you reach the desired number of days and versions. This exercise reduces the number of archives loaded into memory during a purge job, which is what causes the job to hang.
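    For reference, steps 1-2 on a Windows LMS server usually come down to the following (NMSROOT is your CiscoWorks install directory; crmdmgtd is the usual daemon manager service name, so adjust if yours differs):

    REM in NMSROOT\MDC\tomcat\webapps\rme\WEB-INF\classes\JobManager.properties set:
    REM   ConfigJobManager.heapsize=1024m
    REM then restart the daemon manager
    net stop crmdmgtd
    net start crmdmgtd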
    Thanks-
    Afroz

  • Archive Backup job failing with no error

    Hi All,
    Can anybody help me fix this backup issue? Please find the RMAN log below.
    Script /opt/rman_script/st80_oracle_arch.sh
    ==== started on Fri Jun 28 11:05:11 SGT 2013 ====
    RMAN: /OraBase/V10203/bin/rman
    ORACLE_SID: ST801
    ORACLE_USER: oracle
    ORACLE_HOME: /OraBase/V10203
    NB_ORA_SERV: zsswmasb
    NB_ORA_POLICY: bsswst80_archlog_daily
    Sun Microsystems Inc. SunOS 5.10 Generic January 2005
    You have new mail.
    Script /opt/rman_script/st80_oracle_arch.sh
    ==== ended in error on Fri Jun 28 11:05:11 SGT 2013 ====
    Thanks,
    Nayab

    Hi Sarat,
    I hope it is solved now. It was due to the archive log destination being full, which caused it to hang; it worked after my system admin moved a few logs manually.
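    For anyone who hits the same thing: rather than moving archive logs by hand, it is usually safer to let RMAN clear logs that have already been backed up. A rough sketch, assuming the logs have been backed up to the NetBackup SBT device at least once:

    rman target / <<EOF
    CROSSCHECK ARCHIVELOG ALL;
    DELETE NOPROMPT ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE sbt;
    EOF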
    Thanks,
    Nayab

  • Error in write Job of MM_EKKO archive

    Greetings,
    I have a problem. Unexpectedly, a large amount of data was generated during MM_EKKO archiving. The write job failed with an error because no more space was available. I have now deleted the files that showed as archived. One file that was still in write mode is not showing in the archive management section, so we cannot delete it or tell how much data is in it. As the job log was cleared, the session shows as complete.
    My question is: at which point is the deletion indicator set for POs?
    1. At the time of the write job, or
    2. At the time of the delete job?
    Regards,
    Vikram

    Hello,
    The deletion job will start only after the archive file has been created successfully.
    If archive management is not showing the archive session number, it means the archive files were not created and no deletion has happened.
    Before executing the write job, it is always better to make a note of the total number of entries in the main tables.
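    For example, the before-run counts for MM_EKKO's main tables can be captured with a quick query. This is only a sketch, assuming direct database access on Oracle; the ABAP schema name (SAPR3 here) varies by installation:

    sqlplus -s "/ as sysdba" <<EOF
    -- purchasing document headers and items before the archiving run
    SELECT COUNT(*) FROM SAPR3.EKKO;
    SELECT COUNT(*) FROM SAPR3.EKPO;
    EXIT;
    EOF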
    Thanks,
    Ajay

  • SAP Archiving's Write Job (SARA) -  Execution Target

    Dear All,
    As you know, whenever we trigger the write job via transaction SARA, a SUBMISSION job (ARV_FI_DOCUMNT_SUB20100527121927) is created. When the SUBMISSION job completes, the WRITE job (ARV_FI_DOCUMNT_WRI20100530070050) is triggered.
    However, I have realized that the WRITE job triggered by the SUBMISSION job is automatically assigned the DB server as its execution target.
    My question is: is it possible to avoid this?
    Our WRITE jobs always get delayed because there are only a limited number of background work processes on the DB server, and we don't plan to increase them because it would affect the DB server's performance.
    Kindly advise me on this.
    Thanks in advance.
        SAP Release : 640
    Best Regards,
    Ken

    Hi Ken,
    Have you tried configuring the server group for background processing in the cross-archive object customizing?
    Hope this feature will resolve your problem. Have a look in Customizing -> Cross-Archiving Object Customizing -> Server Group for Background Processing
    Link:[http://help.sap.com/saphelp_470/helpdata/en/6d/56a06a463411d189000000e8323d3a/frameset.htm]
    Thanks,
    Naveen

  • Archive Delete job taking too much time - STXH Sequential Read

    Hello,
    We have been running archive sessions in our production system for the last couple of months. We use SARA and select the appropriate variants for the WRITE, DELETE and STORAGE options.
    Currently we use the archiving object FI_DOCUMNT, and the write job finishes in its normal time (5 hours, based on the selection criteria). After that, the delete job has always completed in one to two hours (over the last 3 months).
    But in the last few days the delete job is taking far too long to complete (around 8-10 hours). When I monitor the system, I find that the sequential read on table STXH is taking a very long time, and this seems to be the cause.
    Could you please suggest a solution so that the job runs as fast as it did before?
    Thanks for your time
    Shyl

    Hi Juan,
    After the statistics run, the performance is quite good. Now the job finishes as expected.
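    For anyone else who lands here: on an SAP-on-Oracle system, refreshing optimizer statistics for a single table such as STXH is typically done with BRCONNECT, along these lines (a sketch; run as the <sid>adm OS user):

    brconnect -u / -c -f stats -t STXH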
    Thanks. Problem solved
    Shyl

  • DPM 2012 R2 Backup job FAILED for some Hyper-v VMs and Some Hyper-v VMs are not appearing in the DPM

    DPM 2012 R2 backup jobs FAILED for some Hyper-V VMs with:
    DPM encountered a retryable VSS error. (ID 30112 Details: VssError: The writer experienced a transient error. If the backup process is retried, the error may not reoccur. (0x800423F3))
    All the VSS writers are in a stable state.
    Also, some Hyper-V VMs are not appearing in the DPM 2012 R2 console when I try to create the protection group; please note that they are not part of a cluster.
    The host is 2012 R2 and the VMs are also 2012 R2.

    Hi,
    What update rollup are you running on the DPM 2012 R2 server? DPM 2012 R2 UR5 introduced a new refresh feature that will re-enumerate data sources on an individual protected server.
    Check for VSS errors inside the guests that are having problems being backed up.
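    The second check can be done quickly from an elevated command prompt inside each affected guest, using the standard Windows tooling:

    REM list all VSS writers with their current state and last error
    vssadmin list writers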
    Regards, Mike J. [MSFT]

  • Some jobs fail BackupExec, Ultrium 215 drive, NW6.5 SP6

    The OS is Netware 6.5 SP6.
    The server is a HP Proliant DL-380 G4.
    The drive is a HP StorageWorks Ultrium LTO-1 215 100/200GB drive.
    The drive is connected to an HP PCI-X Single Channel U320 SCSI HBA, which I recently installed in order to solve slow transfer speeds and CPQRAID errors that stalled the server during boot-up (it was complaining about a non-disk drive on the internal controller).
    The Backup Exec Administrative Console is version 9.10 revision 1158; I assume this means that BE itself has this version number.
    Since our data now exceeds the tape capacity, I have recently started running two interleaved jobs, each backing up (around) half of the data at night: one runs Monday, Wednesday and Friday, and one runs Tuesday and Thursday.
    My problem is that while the Tue/Thu job completes successfully every time, the Mon/Wed/Fri job fails every time.
    The jobs have identical policies (except for the interleaved weekdays), but different file selections.
    The job log of the Mon/Wed/Fri job shows this error:
    ##ERR##Error on HA:1 ID:4 LUN:0 HP ULTRIUM 1-SCSI.
    ##ERR##A hardware error has been detected during this operation. This
    ##ERR##media should not be used for any additional backup operations.
    ##ERR##Data written to this media prior to the error may still be
    ##ERR##restored.
    ##ERR##SCSI bus timeouts can be caused by a media drive that needs
    ##ERR##cleaning, a SCSI bus that is too long, incorrect SCSI
    ##ERR##termination, or a faulty device. If the drive has been working
    ##ERR##properly, clean the drive or replace the media and retry the
    ##ERR##operation.
    ##ERR##Vendor: HP
    ##ERR##Product: ULTRIUM 1-SCSI
    ##ERR##ID:
    ##ERR##Firmware: N27D
    ##ERR##Function: Write(5)
    ##ERR##Error: A timeout has occurred on drive HA:1 ID:4 LUN:0 HP
    ##ERR##ULTRIUM 1-SCSI. Please retry the operation.(1)
    ##ERR##Sense Data:
    ##ERR##00 00 00 00 - 00 00 00 00 - 00 00 00 00 - 00 00 00 00
    ##ERR##00 00 00 00 - 00 00 00 00 - 00 00 00 00 - 00 00 00 00
    ##NML##
    ##NML##
    ##NML##
    ##NML## Total directories: 2864
    ##NML## Total files: 23275
    ##NML## Total bytes: 3,330,035,351 (3175.7 Megabytes)
    ##NML## Total time: 00:06:51
    ##NML## Throughput: 8,102,275 bytes/second (463.6 Megabytes/minute)
    I am suspecting the new controller, or perhaps a broken drive?
    I have run multiple cleaning jobs on the drive with new cleaning tapes. The cabling is secured in place.
    I have looked for firmware updates, but even though there is a mention of new firmware on HP's site (see http://h20000.www2.hp.com/bizsupport...odTypeId=12169), I can't find the firmware for the NetWare version of HP LTT (the drive diagnosis/update tool).
    I'm hoping someone can provide some useful information towards solving this problem.
    Regards,
    Tor

    My suggestion to you is to probably just give up on fixing this. I have the same DL380, but a slightly newer drive (Ultrium 448). After working with HP, Adaptec and Symantec for over a year I gave up. I've tried different cards (HP-LSI, Adaptec), cables, and even swapped the drive twice with HP, but was never able to get it to work.
    In the end I purchased a new server, moved the card, tape drive and cables all over to the new server, and the hardware has been working fine in the new box for the last year or so. Until I loaded SP8 the other day.
    My guess is that the PCI-X slot used for these cards isn't happy with the server hardware.

  • Check Writer fails with HR_6990_HRPROC_CHQ_SRW2_FAILED

    Failed with
    HR_6990_HRPROC_CHQ_SRW2_FAILED
    APP-PAY-06990: Report Writer report failed with an error
    Cause: The report failed to complete and returned an error condition.
    Action: Check the report logfile in PAY_TOP/APPLOUT/xx.lg for reason for the failure.
    I don't see anything in $PAY_TOP/APPLOUT; the files are all empty. All of the worker processes completed except one, which failed.
    I would like to restart the Check Writer. Do I need to roll back the Check Writer alone before I restart it?

    Please see these docs:
    Checkwriter Fails With APP-PAY-06859, APP-PAY-06990,APP-FND-00500: AFPPRN and kgepop Errors in 11.5 [ID 227408.1]
    Archive Deposit Advice Error with APP-PAY-06990, HR_6859_HRPROC_OTHER_PROC_ERR and 'Check your installation manual' Messages [ID 564537.1]
    Oracle Payroll 'Check / Cheque Writer / Deposit Advice' Frequently Asked Questions (FAQ) [ID 1373891.1]
    Error running the cheque writer process PAY-06990 HR_6990_HRPROC_CHQ_SRW2_FAILED [ID 402550.1]
    Seeded Third Party CheckWriter Errors Out With: REP-1212, REP-0069, REP-57054 [ID 1466418.1]
    Canadian Chequewriter Fails with Rep-1212 and HR_6990_HRPROC_CHQ_SRW2_FAILED [ID 145826.1]
    Thanks,
    Hussein

  • Job failed with error tempdb is not available

    Hi All,
    Backup job failed with error 'Unable to determine if the owner () of job backup_all_user_db_full has server access (reason: The log for database 'tempdb' is not available. [SQLSTATE HY000] (Error 9001)).'
    I checked the error log:
    LogWriter: Operating system error 1784 (The supplied user buffer is not valid for the requested operation.) encountered.
    Write error during log flush. Shutting down server.
    2014-09-09 07:46:25.90 spid52    Error: 9001, Severity: 21, State: 1
    2014-09-09 07:46:25.90 spid52    The log for database 'tempdb' is not available..
    2014-09-09 07:46:25.92 spid51    Error: 9001, Severity: 21, State: 1
    Can anyone suggest why this is happening?

    From the error message, it looks like the drive on which the tempdb log file resides is having issues.
    Please involve your storage admin to look into the disk subsystem.
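    Once the instance is reachable again, it is also worth confirming exactly which drive the tempdb files sit on (and their state) before involving storage, for example via the standard catalog view:

    sqlcmd -Q "SELECT name, physical_name, state_desc FROM sys.master_files WHERE database_id = DB_ID('tempdb')"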

  • ERROR VNIC creation job failed

    Hello All,
    I have brought oracle VM X86 manager into ops center 12c control. When I try to create a new virtual machine it is throwing the ‘ERROR VNIC creation job failed’ error. Can anybody throw some light over this issue.
    Thanks in advance.
    Detailed log is
    44:20 PM IST ERROR Exception occurred while running the task
    44:20 PM IST ERROR java.io.IOException: VNIC creation job failed
    44:20 PM IST ERROR VNIC creation job failed
    44:20 PM IST ERROR com.sun.hss.services.virtualization.guestservice.impl.OvmCreateVnicsTask.doRun(OvmCreateVnicsTask.java:116)
    44:20 PM IST ERROR com.sun.hss.services.virtualization.guestservice.impl.OvmAbstractTask.run(OvmAbstractTask.java:560)
    44:20 PM IST ERROR sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    44:20 PM IST ERROR sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    44:20 PM IST ERROR sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    44:20 PM IST ERROR java.lang.reflect.Method.invoke(Method.java:597)
    44:20 PM IST ERROR com.sun.scn.jobmanager.common.impl.TaskExecutionThread.run(TaskExecutionThread.java:194)
    Regards,
    george

    Hi friends,
    I managed to find the answer. Internally it has some indexes at the database level, and it still maintains indexes in the shadow tables; those all need to be deleted. With our Basis team's help I successfully deleted them and recreated the indexes.
    As Soorejkv said, SAP Note 1283322 will help you understand the scenarios.
    Thank you all.
    Regards
    Ram

  • Hi Post Upgrade Job Failing

    Hi All,
    We have recently done the upgrade from BW 3.1 to BI 7.0, and there is an issue after the upgrade: one job is failing every day.
    The name of the job is BI_WRITE_PROT_TO_APPLLOG, and the job log is:
    06/18/2008 00:04:55 Job started                                                          00           516          S
    06/18/2008 00:04:55 Logon of user PALAPSJ in client 200 failed when starting a step      00           560          A
    06/18/2008 00:04:55 Job cancelled                                                      00           518          A
    This job actually runs the program RSBATCH_WRITE_PROT_TO_APPLLOG. When I try to run this program with my user ID it works, but under the other user ID with which it is scheduled it fails, giving the messages shown in the job log above.
    Kindly advise.
    Regards,
    Janardhan K

    Maybe it's a dialog user rather than a system or background user, so the job fails.
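    If that's the suspicion, the user type of PALAPSJ can be checked in SU01 (Logon Data tab), or, as a rough sketch assuming direct database access (the SAPR3 schema name is only an example), from USR02:

    sqlplus -s "/ as sysdba" <<EOF
    -- USTYP: A = dialog, B = system, C = communication, S = service, L = reference
    SELECT BNAME, USTYP FROM SAPR3.USR02 WHERE BNAME = 'PALAPSJ';
    EXIT;
    EOF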
    Regards,
    Juergen

  • Background job failing

    Experts,
    We have a background job running as part of the daily load. The account of the user who created and scheduled the job was disabled recently, and that makes the job fail. I need to keep the job running but cannot figure out how to change the user name in the job. For example, the 0EMPLOYEE_CHANGE_RUN job, based on the event 'zemployee', triggers at the end of the employee data load. Can you please give me a hint on how to change the user name or take ownership to keep this job running? Thanks.
    Regards,
    Nimesh

    Hello,
    Go to SM37, enter the job name "0EMPLOYEE_CHANGE_RUN" and user "*" to get all users.
    Now select the job in released status, then (menu) Job -> Change -> Steps -> select the first row -> Change (pencil icon); you will see the user name there.
    Change it to the user name used for all background jobs and save.
    Done.
    Happy Tony
