All Jobs Fail

I have been trying to export a database. Both the wizard and the command line failed, so I found that I should run catexp.sql or catalog.sql. I tried setting up those jobs, and they both came out as failed in the job history, which lists the submitted time as the same as the failed time. If anyone has any ideas why this might happen, please share; I don't get any errors, just "failed", so I didn't know what else to post.
Thanks all for your help - I've learned so much searching the archives!
Meg

Thanks Joel,
I don't know if it is easier to contact me through email; if so, my email is [email protected]
Thanks,
Meghan
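
For anyone hitting the same wall: a minimal sketch of how catalog.sql and catexp.sql are normally run before a command-line export - interactively as SYSDBA, not as a scheduled job (the paths and credentials here are illustrative):

    -- Run in SQL*Plus as SYSDBA; "?" expands to ORACLE_HOME.
    CONNECT / AS SYSDBA
    @?/rdbms/admin/catalog.sql
    @?/rdbms/admin/catexp.sql
    -- Then run the export from the OS shell, not inside SQL*Plus, e.g.:
    --   exp system/<password> FULL=Y FILE=full.dmp LOG=full.log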

Similar Messages

  • Job Failed // PLAAP5BULCOD_CONS_CHECK_CCR_MPOR  in APO

    Hello All,
    Job failed due to the reason below: PLAAP5BULCOD_CONS_CHECK_CCR_MPOR
    Iteration executed
    Spool request (number 0000029847) created without immediate output
    Results saved successfully
    Step 003 started (program /SAPAPO/CIF_DELTAREPORT3, variant PR4_MANUF_ORD2, user ID BTCH_ADM)
    Background job terminated due to communication error
    Job cancelled after system exception ERROR_MESSAGE
    Please help me to resolve the issue.
    Regards
    Mohsin M

    Hi ,
    Check this similar issue: background job PLAAP5BULCOD_CONS_CHECK_CCR_MPOR cancelling
    Thanks and Regards
    Purna

  • SharePoint 2013 - Team Foundation Server Dashboard Update job failed

    Hi
    I integrated TFS 2012 with SharePoint 2013 on Windows Server 2012. The SharePoint 2013 farm has 3 WFE and 3 App servers.
    Here is what I did:
    I installed the TFS extension for SP 2013 on each SP server and granted the SP web application access to the TFS server successfully.
    In CA, I deployed the TFS solutions (.wsp) successfully for the wfe3 server:
    microsoft.teamfoundation.sharepoint.dashboards.wsp
    microsoft.teamfoundation.sharepoint.dashboards15.wsp
    microsoft.teamfoundation.sharepoint.wsp
    I have a number of site collections with TFS features activated and connected to the TFS project site, which is working, but I really don't know much about TFS.
    What I see is that there are 2 TFS timer jobs, "Team Foundation Server Dashboard Update", for each web application (web1 and web2), running every 30 minutes.
    All jobs on web1 succeed; they ran on wfe1 and app3.
    But all jobs on web2 fail; they ran on wfe2, wfe3, app1 and app2 with the following error: "An exception occurred while scanning dashboard sites. Please see the SharePoint log for detailed exceptions"
    I looked into the log file and it shows the same error but nothing more.
    If anyone has experienced this or has any advice on how to resolve it, please share.
    Thanks
    Swanl

    Hi Swanl,
    It seems that the Dashboard Update timer job loops through the existing site collections, regardless of whether they are associated with a TFS site.
    If one or more of these site collections is down or corrupted, this will cause the job to fail.
    You can try the following steps to check if the sites are good:
    1. Go to Central Administration > Application Management > View all Site Collections. Click on each site collection and note the properties for the site on the right-hand side.
    If the properties do not show up or error out, that site will need to be fixed.
    2. Detach the SharePoint content database and reattach it to see if the issue still occurs.
    Thanks,
    Victoria
    Forum Support

  • ERROR VNIC creation job failed

    Hello All,
    I have brought Oracle VM x86 Manager under Ops Center 12c control. When I try to create a new virtual machine, it throws the 'ERROR VNIC creation job failed' error. Can anybody shed some light on this issue?
    Thanks in advance.
    Detailed log is
    44:20 PM IST ERROR Exception occurred while running the task
    44:20 PM IST ERROR java.io.IOException: VNIC creation job failed
    44:20 PM IST ERROR VNIC creation job failed
    44:20 PM IST ERROR com.sun.hss.services.virtualization.guestservice.impl.OvmCreateVnicsTask.doRun(OvmCreateVnicsTask.java:116)
    44:20 PM IST ERROR com.sun.hss.services.virtualization.guestservice.impl.OvmAbstractTask.run(OvmAbstractTask.java:560)
    44:20 PM IST ERROR sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    44:20 PM IST ERROR sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    44:20 PM IST ERROR sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    44:20 PM IST ERROR java.lang.reflect.Method.invoke(Method.java:597)
    44:20 PM IST ERROR com.sun.scn.jobmanager.common.impl.TaskExecutionThread.run(TaskExecutionThread.java:194)
    Regards,
    george

    Hi friends
    I managed to find the answer. Internally it has some indexes at the database level, and it still maintains the indexes in the shadow tables. Those all need to be deleted. With our Basis team's help I successfully deleted them and recreated the indexes.
    As Soorejkv said, SAP Note 1283322 will help you understand the scenarios.
    Thank you all.
    Regards
    Ram

  • Hi Post Upgrade Job Failing

    HI All,
    We have recently done the upgrade from BW 3.1 to BI 7.0.
    There is an issue regarding the post-upgrade:
    one job is failing every day.
    The name of the job is BI_WRITE_PROT_TO_APPLLOG.
    And the job log is:
    06/18/2008 00:04:55 Job started                                                          00           516          S
    06/18/2008 00:04:55 Logon of user PALAPSJ in client 200 failed when starting a step      00           560          A
    06/18/2008 00:04:55 Job cancelled                                                        00           518          A
    This Job is actually running the program
    RSBATCH_WRITE_PROT_TO_APPLLOG
    When I try to run this program with my user ID it works,
    but with the other user ID under which it is scheduled it fails, giving the messages mentioned in the job log.
    Kindly advise.
    REgards
    janardhan K

    Maybe the scheduling user is a dialog user rather than a system or background user, so the job fails.
    Regards,
    Juergen

  • Background job failing

    Experts,
    We have a background job running as part of the daily load. The account of the user who created and scheduled the job was disabled recently, and that makes the job fail. I need to keep that job running but cannot figure out how to change the user name in the job. For example, the 0EMPLOYEE_CHANGE_RUN job, based on the event 'zemployee', triggers at the end of the employee data load. Can you please provide any hint on what I should do to change the user name or take ownership to keep this job running? Thanks.
    Regards,
    Nimesh

    Hello,
    Go to SM37, enter job name "0EMPLOYEE_CHANGE_RUN" and user "*" to get all users.
    Select the job in released status, then (Menu) Job > Change > Steps, select the first row, and click Change (pencil icon); you will see the user name here.
    Change the user name to the user used for all background jobs and save.
    Done.
    Happy Tony

  • User, Role, Profile Synchronization Job Fails

    Hi Gurus,
    When I am scheduling a job, the User, Role, and Profile Sync job fails with the error
    "Cannot assign a java.lang.String object of length 53 to host variable 5 which has JDBC type VARCHAR(40)."
    This happens when the synchronization includes a portal system. We don't have a ruleset for the portal system, so if I put in a "*" it includes this system and results in the error; if I manually select every other system, it works fine. Is there any way to remove this error so that I can schedule the jobs without having to select every system manually?
    Regards,
    Chinmaya

    Hi,
    As per my knowledge, in the portal system you should perform only user sync. Role/profile sync will not work, since the portal has workset roles.
    Please refer to SAP Note 1168120, which may help you understand the limitations.
    Hope this helps!!
    Rgds,
    Raghu

  • Report Job failed when Bursting is used in BI Publisher 11.1.1.5

    The Report Job failed when Bursting is used.
    Error message:
    [INSTANCE_ID=aimedap1s.1347261753557] [OUTPUT_ID=1421][ReportProcessor]Error rendering documentoracle.xdo.servlet.scheduler.ProcessingException: [ReportProcessor]Error rendering document
    at oracle.xdo.enterpriseScheduler.bursting.BurstingReportProcessor.renderReport(BurstingReportProcessor.java:455)
    at oracle.xdo.enterpriseScheduler.bursting.BurstingReportProcessor.onMessage(BurstingReportProcessor.java:127)
    at oracle.xdo.enterpriseScheduler.util.CheckpointEnabl
    The steps to reproduce:
    1. Create a bursting query in the Data Model
    2. Create a Report Job with the option
    "Use Bursting Definition to Determine Output & Delivery Destination" enabled
    3. Schedule the report job
    4. Run the report; its status shows as a problem, and the error message above can be found.
    *Note: not all report jobs fail when bursting is used. In step 1, when OUTPUT_FORMAT is set to PDF, HTML, RTF, or PowerPoint 2007,
    the report runs successfully, but with the other OUTPUT_FORMAT values listed in the following document, the report does not run successfully.
    http://docs.oracle.com/cd/E21764_01/bi.1111/e18862/T527073T555155.htm
    Adding Bursting Definitions>>Defining the Query for the Delivery XML
    >>OUTPUT_FORMAT
    Can anyone give some advice on how to troubleshoot this?
    Looking forward to your reply.
    Regards

    Hello vma.
    I happened to find the solution on 11.1.1.3. With xdo-server.jar, you can use the DataProcessor class.
    For details and sample source code:
    http://blog-koichiro.blogspot.com/2011/11/bi-publisher-java-apigenerate-pdf-with.html
    * Not sure if it works on 11.1.1.5, though; I hope this is going to help you.
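
    For context, a minimal sketch of what a bursting delivery query in the Data Model can look like, assuming an EMAIL delivery channel (the source table and column names here are hypothetical; the KEY/TEMPLATE/OUTPUT_FORMAT/DEL_CHANNEL/PARAMETERn aliases follow the delivery XML described in the document linked above):

        -- Hypothetical bursting delivery query; departments/dept_id/
        -- manager_email are illustrative names, not from the report.
        SELECT d.dept_id               AS "KEY",           -- split key from the data query
               'EmployeeReport'        AS TEMPLATE,
               'en-US'                 AS LOCALE,
               'PDF'                   AS OUTPUT_FORMAT,   -- one of the formats listed in the doc
               'EMAIL'                 AS DEL_CHANNEL,
               d.manager_email         AS PARAMETER1,      -- to: address
               'bursting@example.com'  AS PARAMETER3,      -- from: address
               'Departmental report'   AS PARAMETER4       -- subject
        FROM   departments d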

  • SQL Server Agent Job Failing on Job Step

    Hi,
    Firstly, apologies if this post has been made in the wrong group.  Running SQL Server 2012.  I'm attempting to add a SQL Server Agent Job which calls a stored procedure that sends a Database Mail message.  The SQL Server and SQL Server Agent
    both run under the NT AUTHORITY\NETWORK SERVICE account.  The Database Mail service has been enabled, and a public profile created.  When running the stored procedure manually, I receive no errors - everything runs as expected and I do receive an
    email containing the expected information.
    I've created the job, job step, job schedule, attached the schedule to the job, and assigned the job to server, all using T-SQL without error.  I've also enabled the mail profile on the SQL Server Agent, and I know that part is working because when
    the job fails, I get an email notification of the failure.
    I've checked the command text in the job step and parsed it within the SQL Job Step Edit window to confirm, it shows as parsing correctly.  However, when I manually run the job itself, I get the following:
    Execution of job failed.  See the history log for details.
    I check the history log and it shows:
    [474] Unable to refresh Database Mail profile Database Mail Profile. (reason: ) (Not a typo, the history log shows no reason)
    [260] Unable to start mail session.
    [396] An idle CPU condition has not been defined - OnIdle job schedules will have no effect
    The command text on the failing job step is as follows:
    DECLARE @date [varchar](10)
    SET @date = CAST(GETDATE() AS [varchar](10))
    EXEC [dbo].[GetExceptions]
    @company = 'MyCompany',
    @checkDate = @date
    With regard to the date value being passed as varchar: This stored procedure is used to check for exceptions against multiple databases on this server (hence the company parameter) via dynamic SQL.  I'd much prefer to use proper data typing but this
    is the only way I could get it to work.
    Does anyone have any suggestions on anything else I could check, or insights into why this is failing?  Any help is greatly appreciated!
    Best Regards
    Brad
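
    For illustration only, a hypothetical sketch of the dynamic-SQL pattern described above; the procedure body, table, and column names are assumptions, not the actual GetExceptions code:

        -- Hypothetical sketch; not the poster's actual procedure.
        CREATE PROCEDURE [dbo].[GetExceptions]
            @company   [sysname],
            @checkDate [varchar](10)
        AS
        BEGIN
            -- The database name must be concatenated into the statement,
            -- but the date can still travel as a bound parameter.
            DECLARE @sql [nvarchar](max) =
                N'SELECT * FROM ' + QUOTENAME(@company)
                + N'.dbo.Exceptions WHERE ExceptionDate = @d;';
            EXEC sp_executesql @sql, N'@d [varchar](10)', @d = @checkDate;
        END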

    I am not sure if this really helps, but I would follow the steps below (see the T-SQL sketch after this list):
    1. Make sure SQL Server Agent has Database Mail enabled (right-click on the Agent > Properties > Alert System > enable Database Mail and choose the right profile) and RESTART THE SQL SERVER AGENT. (I know you said it is working, but sometimes restarting the
    SQL Server Agent fixes it.)
    2. Check the Agent error log for any error messages.
    3. Run the command text you mentioned in SQL Server and see if it works (I know you said it is working, but just to make sure).
    4. Make sure the SQL Server Agent service user has permissions to run Database Mail in msdb. Check this: https://msdn.microsoft.com/en-us/library/ms186358.aspx
    5. Check the output from select * from msdb.dbo.sysmail_log and see if it says anything.
    6. It does not look like the job log is getting truncated, but to make sure, send the job step output to a text file: edit the job step, and on the job step properties click Advanced, then enter a path to the output file. This will give the complete
    output for the step.
    Hope it helps!!
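
    For steps 1, 4, and 5, a minimal T-SQL diagnostic sketch (the profile name and recipient address below are placeholders, not from the original post):

        EXEC msdb.dbo.sysmail_help_status_sp;      -- is Database Mail started?
        EXEC msdb.dbo.sysmail_help_profile_sp;     -- list the available profiles
        SELECT * FROM msdb.dbo.sysmail_event_log;  -- recent Database Mail errors
        -- Send a test message outside the Agent context
        -- ('MailProfile' and the address are hypothetical):
        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = 'MailProfile',
            @recipients   = 'dba@example.com',
            @subject      = 'Database Mail test',
            @body         = 'Test message from msdb.';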

  • I have lost track of the number of websites that do not work properly on the iPad without Adobe Flash player which is unsupported. I cannot use retail sites, billing sites and most important of all job application sites. All are missing tabs, links, info

    I have lost track of the number of websites which do not work properly on my iPad. They include retail sites, billing sites and, most important of all, job application sites. They all seem to require Adobe Flash Player, which cannot be downloaded onto an iPad. Skyfire does not solve the problem. They all load without vital parts of the site, such as tabs, links and correct formatting. Any suggestions?

    Most such browser/service combinations have a difficult time working with Flash-based apps and often fail completely. Flash videos are usually the most successful content these browsers can handle. You can try the other apps - Puffin, iSwifter, etc. - but you may find that none of them work, in which case you will not be able to use your iPad with these sites other than by using one of the various remote-control solutions to take over a computer running the full Flash Player.
    IMHO, any developer that built a Flash application for a billing or job application site was an idiot, but I know that's out of the control of anyone but the relevant company.
    Regards.

  • Background jobs failing with ABAP/4 processor: RFC_CONVERSION_FIELD

    Hi Guru's,
    Below are the background jobs failing with the runtime error "ABAP/4 processor: RFC_CONVERSION_FIELD" in the Solution Manager system (Solution Manager 3.0, SR2). I am not sure why all these jobs are failing with this dump all of a sudden; I haven't made any changes to the system in the recent past.
    BTC_CMS_COLLECTOR
    SAP_APPLICATION_STAT_COLLECTOR
    SESS_Y000001806_COLL_TRANS
    The ABAP dump says:
    Conversion error between two character sets
    What happened?
    Conversion error "ab_rfccon" from character set 4103 to character set 1100.
    When executing a remote function call a conversion error occurred. This
    occurred when receiving or sending the data. The conversion error can
    only appear, when the data is transferred from a Unicode system to a
    non-Unicode system.
    Could someone please share some information if anyone has experienced this issue. Please note that all of our ECC systems are Unicode systems.
    Thanks & Regards,
    Vinod.

    Hi Vinod
    Just try to understand the purpose of these jobs; maybe these Notes will help you:
    Note 814707 - Troubleshooting for RFC connections Unicode/non-Unicode
    Note 647495 - RFC for Unicode ./. non-Unicode Connections
    Note 1361970 - Conversion problems during RFC communication

  • SSIS job fails on connection to target db - works in debug

    the ongoing saga...
    I have a web application developed through VS 2012 which has a button on a form that, when operated, starts a SQL Server Agent job on the server that runs an SSIS package. The website and the instance of SQL Server with the Agent and SSIS package are
    on the same Windows 2008 R2 server. When the button is operated, no exceptions are raised but the SSIS package does not execute. The SQL Server Agent job owner is sa; that seemed to resolve some issues, as originally the owner was another
    admin on the same box. Now the job seems to be failing on connecting to the target database on an IBM iSeries DB2 server. The job runs fine in debug, connecting to the db and doing its task successfully.
    The log indicates that the user specified in the connection manager is using an invalid password or is otherwise unable to connect, although it works fine in debug mode and Save Password is specified. I use BIDS remote-desktopped to the server
    when developing and debugging.
    Can anyone show me what's wrong now? Thanks much in advance for any help, Roscoe
    The log shows as follows...
    The job failed.  The Job was invoked by User DOMAINNAME\myuserid.  The last step to run was step 1 (Step1).,00:00:00,0,0,,,,0
    06/12/2014 13:34:50,runWebDevSmall,Error,1,NTSVR59,runWebDevSmall,Step1,,Executed as user: WINDOWSSERVRNAME\SYSTEM. Microsoft (R) SQL Server Execute Package Utility  Version 10.0.5500.0 for 32-bit  Copyright (C) Microsoft Corp 1984-2005. All rights
    reserved.    Started:  1:34:50 PM  Error: 2014-06-12 13:34:50.31     Code: 0xC0016016     Source:       Description: Failed to decrypt protected XML node "DTS:Password"
    with error 0x8009000B "Key not valid for use in specified state.". You may not be authorized to access this information. This error occurs when there is a cryptographic error. Verify that the correct key is available.  End Error  Error:
    2014-06-12 13:34:50.84     Code: 0xC0202009     Source: vbTestsmall Connection manager "iSeriesname.DBNAME.iSeriesuserid"     Description: SSIS Error Code DTS_E_OLEDBERROR.  An OLE
    DB error has occurred. Error code: 0x80004005.  An OLE DB record is available.  Source: "IBMDA400 Session"  Hresult: 0x80004005  Description: "CWBSY0002 - Password for user iSeriesuser on system iSeriesname is not
    correct ".  End Error  Error: 2014-06-12 13:34:50.86     Code: 0xC00291EC     Source: clear iSeriesname libraryname tablename Execute SQL Task     Description: Failed
    to acquire connection "iSeriesname.DBNAME.iSeriesuserid". Connection may not be configured correctly or you may not have the right permissions on this connection.  End Error  DTExec: The package execution returned DTSER_FAILURE (1). 
    Started:  1:34:50 PM  Finished: 1:34:50 PM  Elapsed:  0.703 seconds.  The package execution failed.  The step failed.,00:00:00,0,0,,,

    Hi Arthur, Thanks for the reply,
    BIDS would not let me save the package with ProtectionLevel of ServerStorage giving error...
    Failed to apply package protection with error 0xC0014061 "The protection level, ServerStorage, cannot be used when saving to this destination. The system could not verify that the destination supports secure storage capability.". This error occurs when saving
    to Xml.
     (vbTestsmall)
    ...but it would let me change the ProtectionLevel to EncryptSensitiveWithPassword, so I did that and provided the job step in the Agent with the password, and the Agent was able to run successfully.
    Thanks for all the help...I will follow your blog, Roscoe
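
    For reference, a minimal T-SQL sketch of supplying the package password to an Agent job step via the dtexec /DECRYPT option; the job and step names are taken from the log above, while the package path and password are hypothetical:

        -- For an SSIS-subsystem step, @command holds the dtexec arguments.
        -- The /DECRYPT value must match the EncryptSensitiveWithPassword
        -- password saved in BIDS; the path and password are placeholders.
        EXEC msdb.dbo.sp_add_jobstep
            @job_name  = N'runWebDevSmall',
            @step_name = N'Step1',
            @subsystem = N'SSIS',
            @command   = N'/FILE "C:\packages\vbTestsmall.dtsx" /DECRYPT packagePassword';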

  • DPM 2012 R2 Backup job FAILED for some Hyper-v VMs and Some Hyper-v VMs are not appearing in the DPM

    DPM 2012 R2  Backup job FAILED for some Hyper-v VMs
    DPM encountered a retryable VSS error. (ID 30112 Details: VssError:The writer experienced a transient error.  If the backup process is retried,
    the error may not reoccur.
     (0x800423F3))
    All the VSS writers are in a stable state.
    Also, some Hyper-V VMs are not appearing in the DPM 2012 R2 console when I try to create the Protection Group; please note that they are not part of a cluster.
    The host is 2012 R2 and the VMs are also 2012 R2.

    Hi,
    What update rollup are you running on the DPM 2012 R2 server? DPM 2012 R2 UR5 introduced a new refresh feature that will re-enumerate data sources on an individual protected server.
    Check for VSS errors inside the guests that are having problems being backed up.
    Regards, Mike J. [MSFT]

  • The Job Failed Due to  MAX-Time at ODS Activation Step

    Hi
    I'm getting the errors "The job failed due to MAX-Time at ODS activation step" and
    "Max-time failure".
    How do I resolve this failure?

    Hi ,
    You can check the ODS activation logs in the ODS batch monitor: click on that job, then on the job log.
    First check in SM37 how many jobs are running. If there are many other long-running jobs, the ODS activation will happen very slowly, as the system is overloaded; the long-running jobs cause the poor performance of the system.
    To check the performance of the system, first check the lock waits in ST04 and check whether they are progressing or not.
    Check SM66 for the number of processes running on the system.
    Check ST22 for short dumps.
    Check OS07 to check the CPU idle time; if it is less than 20%, the CPU is overloaded.
    Check SM21.
    Check the table space available in ST04.
    If the system is overloaded, the ODS won't get enough work processes to create the background jobs for that (the BI_BCTL* jobs); the update will happen, but very slowly.
    In this case you can kill a few long-running jobs which are not important, and kill a few ODS activations as well.
    Don't run 23 ODS activations all at a time; run some of them at a time.
    As for the key points to check for data loading: check ST22, check the job in R/3, check SM58 for tRFC, and check SM59 for RFC connections.
    Regards,
    Shikha

  • Some jobs fail BackupExec, Ultrium 215 drive, NW6.5 SP6

    The OS is Netware 6.5 SP6.
    The server is a HP Proliant DL-380 G4.
    The drive is a HP StorageWorks Ultrium LTO-1 215 100/200GB drive.
    The drive is connected to a HP PCI-X Single Channel U320 SCSI HBA, which I recently installed, in order to solve slow transfer speeds, and to solve CPQRAID errors which stalled the server during bootup (it was complaining to have a non-disk drive on the internal controller).
    Backup Exec Administrative Console is version 9.10 revision 1158; I am assuming that this means that BE itself has this version number.
    Since our data is now more than the tape capacity, I have recently started running two interleaved jobs to back up (around) half of the data each night: one runs Monday, Wednesday and Friday, and one runs Tuesday and Thursday.
    My problem is that while the Tue/Thu job completes successfully every time, the Mon/Wed/Fri job fails every time.
    The jobs have identical policies (except for the interleaved weekdays), but different file selections.
    The job log of the Mon/Wed/Fri job shows this error:
    ##ERR##Error on HA:1 ID:4 LUN:0 HP ULTRIUM 1-SCSI.
    ##ERR##A hardware error has been detected during this operation. This
    ##ERR##media should not be used for any additional backup operations.
    ##ERR##Data written to this media prior to the error may still be
    ##ERR##restored.
    ##ERR##SCSI bus timeouts can be caused by a media drive that needs
    ##ERR##cleaning, a SCSI bus that is too long, incorrect SCSI
    ##ERR##termination, or a faulty device. If the drive has been working
    ##ERR##properly, clean the drive or replace the media and retry the
    ##ERR##operation.
    ##ERR##Vendor: HP
    ##ERR##Product: ULTRIUM 1-SCSI
    ##ERR##ID:
    ##ERR##Firmware: N27D
    ##ERR##Function: Write(5)
    ##ERR##Error: A timeout has occurred on drive HA:1 ID:4 LUN:0 HP
    ##ERR##ULTRIUM 1-SCSI. Please retry the operation.(1)
    ##ERR##Sense Data:
    ##ERR##00 00 00 00 - 00 00 00 00 - 00 00 00 00 - 00 00 00 00
    ##ERR##00 00 00 00 - 00 00 00 00 - 00 00 00 00 - 00 00 00 00
    ##NML##
    ##NML##
    ##NML##
    ##NML## Total directories: 2864
    ##NML## Total files: 23275
    ##NML## Total bytes: 3,330,035,351 (3175.7 Megabytes)
    ##NML## Total time: 00:06:51
    ##NML## Throughput: 8,102,275 bytes/second (463.6 Megabytes/minute)
    I suspect the new controller, or perhaps a broken drive?
    I have run multiple cleaning jobs on the drive with new cleaning tapes. The cabling is secured in place.
    I have looked for firmware updates, but even though there's a mention of new firmware on HP's site (see http://h20000.www2.hp.com/bizsupport...odTypeId=12169), I can't find the firmware for the Netware HP LTT (the drive diagnosis / update tool).
    I'm hoping someone can provide me some useful info towards solving this problem.
    Regards,
    Tor

    My suggestion to you is to probably just give up on fixing this. I have the same DL380, but a slightly newer drive (Ultrium 448). After working with HP, Adaptec, and Symantec for over a year, I gave up. I've tried different cards (HP-LSI, Adaptec), cables, and even swapped the drive twice with HP, but was never able to get it to work.
    In the end I purchased a new server, moved the card, tape drive, and cables all over to the new server, and the hardware has been working fine in the new box for the last year or so - until I loaded SP8 the other day.
    My guess is that the PCI-X slot used for these cards isn't happy with the server hardware.

Maybe you are looking for

  • How to stop notification email of Completion: Purchase Order

    When a PO is rejected, the user gets the email "RE: Notif. of Completion:Purchase order 4500000695 rej" from the system. It is not part of the rejection task. Does anyone know how to stop it? Thanks in advance. Philip

  • My Browser cannot load web pages. I get "The URL is not valid and cannot be loaded"

    I can no longer use Firefox. EVERY PAGE I attempt to load produces the following message alert: "The URL is not valid and cannot be loaded" I have already cleared everything in my browser, rebooted my computer and absolutely nothing works. This just

  • Field symbols inside class

    Is it possible to declare field symbols inside classes? Thanks in advance. Hema Moderator message: please search for information and try yourself before asking. Edited by: Thomas Zloch on Dec 23, 2010 10:55 AM

  • File naming by time picture taken - 12h clock?

    I love the flexibility that Lightroom gives in naming exported pictures. Is there a way to have it use a 12h clock instead of 24h? In other words, how can I get Lightroom to generate file names like 20071107 IMG_0010 (1055a).jpg 20071107 IMG_0011 (01

  • Issues with exporting

    When I go to compress to mpeg-2 I get -Error trying to open source media file- But Everything is connected. I tried looking up this topic here and on the compressor discussion board but I can't seem to find it. I'm using Compressor 1.2.1