Timer job failed

Dear all,
I am running a timer job that sends some data to trainers, but the timer job is failing in Central Administration without much information. When I tried to debug the code, in one place the trainer FullName (the CopyFieldMask value shows as 'trainer.CopyField.Mask')
threw an exception of type 'System.ArgumentException'.
Can somebody throw some light on this?
CS file for sending mail:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Mail;
using System.Text;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

namespace TTJ
{
    public class TJInstructorSchedule : SPJobDefinition
    {
        public TJInstructorSchedule() : base() { }

        public TJInstructorSchedule(string jobName, SPService service, SPServer server, SPJobLockType targetType)
            : base(jobName, service, server, targetType) { }

        public TJInstructorSchedule(string jobName, SPWebApplication webApplication)
            : base(jobName, webApplication, null, SPJobLockType.ContentDatabase) { }

        public override void Execute(Guid targetInstanceId)
        {
            // Access the Trainers list
            SPWebApplication webApp = this.Parent as SPWebApplication;
            SPSite trainingSite = webApp.Sites["sites/training"];
            SPWeb rootWeb = trainingSite.RootWeb;
            SPList trainersList = rootWeb.Lists["SessionTrainer"];
            SPListItemCollection trainers = trainersList.Items;

            foreach (SPListItem trainer in trainers)
            {
                // Store the trainer's email address and full name.
                // Note: the indexer resolves these display names; if a display
                // name does not exactly match a field in the list, this line
                // throws an ArgumentException. Indexing by the field's internal
                // name is safer.
                string trainerEmail = trainer["E-mail Address"].ToString();
                string trainerFullName = trainer["Full Name"].ToString();

                // Access the sessions list and retrieve sessions for this trainer that occur in the future
                SPList sessionList = rootWeb.Lists["SessionList"];
                SPQuery getSessionsForTrainer = new SPQuery();
                getSessionsForTrainer.ViewFields = "<FieldRef Name='CourseTitle'/><FieldRef Name='Trainer'/><FieldRef Name='TrainingVenue'/><FieldRef Name='RegisterInfo'/><FieldRef Name='StartDate'/><FieldRef Name='EndDate'/>"; // CAML
                getSessionsForTrainer.Query = "<Where><And><Eq><FieldRef Name='Trainer'/><Value Type='Lookup'>" + trainerFullName + "</Value></Eq><Geq><FieldRef Name='StartDate'/><Value Type='DateTime'><Today/></Value></Geq></And></Where>";
                SPListItemCollection sessionsForTrainer = sessionList.GetItems(getSessionsForTrainer);

                // Iterate through the sessions and build an email to send to the trainer
                string emailSubject = "Instructor Schedule for " + trainerFullName;
                string emailBody = "";
                emailBody += "Hello " + trainerFullName + ",<br/><br/>";
                emailBody += "Here is your upcoming schedule. If you have any questions, please contact Judy Moore ([email protected]) or Amanda Stevenson ([email protected]).<br/><br/>";
                foreach (SPListItem scheduledSession in sessionsForTrainer)
                {
                    emailBody += scheduledSession["CourseTitle"].ToString().Remove(0, 3) + " at " + scheduledSession["TrainingVenue"].ToString() + " starting at " + scheduledSession["StartDate"].ToString() + " and ending at " + scheduledSession["EndDate"].ToString() + ", which has " + scheduledSession["RegisterInfo"].ToString() + " registrations.<br/>";
                }
                emailBody += "<br/>Thank you!<br/><br/>";
                emailBody += "Do not reply to this message; it is an automatically generated system message.";

                // Send the mail
                MailMessage instructorScheduleEmail = new MailMessage("[email protected]", trainerEmail, emailSubject, emailBody);
                instructorScheduleEmail.IsBodyHtml = true;
                SmtpClient smtpClient = new SmtpClient("x");
                smtpClient.Send(instructorScheduleEmail);
            }

            //base.Execute(targetInstanceId);
        }
    }
}
Event receiver file code:
using System;
using System.Runtime.InteropServices;
using System.Security.Permissions;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;
using Microsoft.SharePoint.Security;

namespace TTJ.Features.Feature_TTJ
{
    /// <summary>
    /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade.
    /// </summary>
    /// <remarks>
    /// The GUID attached to this class may be used during packaging and should not be modified.
    /// </remarks>
    [Guid("a3ddf329-29a8-4748-b004-1efe1cc096f2")]
    public class Feature_TTJEventReceiver : SPFeatureReceiver
    {
        // Handles the event raised after the feature has been activated:
        // delete any existing copy of the job, then register it with a weekly schedule.
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            SPWebApplication webApp = properties.Feature.Parent as SPWebApplication;
            if (webApp.Name == "x - 47")
            {
                foreach (SPJobDefinition job in webApp.JobDefinitions)
                {
                    if (job.Name == "Training Registration Portal - Instructor Schedules")
                    {
                        job.Delete();
                    }
                }

                TJInstructorSchedule tjSendSchedules = new TJInstructorSchedule("Training Registration Portal - Instructor Schedules", webApp);
                tjSendSchedules.Title = "Training Registration Portal - Instructor Schedules";

                SPWeeklySchedule weeklySchedule = new SPWeeklySchedule();
                weeklySchedule.BeginDayOfWeek = DayOfWeek.Friday;
                weeklySchedule.BeginHour = 16;
                weeklySchedule.EndDayOfWeek = DayOfWeek.Friday;
                weeklySchedule.EndHour = 17;
                tjSendSchedules.Schedule = weeklySchedule;
                tjSendSchedules.Update();
            }
        }

        // Handles the event raised before the feature is deactivated: remove the job.
        public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
        {
            SPWebApplication webApp = properties.Feature.Parent as SPWebApplication;
            if (webApp.Name == "x - 47")
            {
                foreach (SPJobDefinition job in webApp.JobDefinitions)
                {
                    if (job.Name == "Training Registration Portal - Instructor Schedules")
                    {
                        job.Delete();
                    }
                }
            }
        }

        // Uncomment the method below to handle the event raised after a feature has been installed.
        //public override void FeatureInstalled(SPFeatureReceiverProperties properties)

        // Uncomment the method below to handle the event raised before a feature is uninstalled.
        //public override void FeatureUninstalling(SPFeatureReceiverProperties properties)

        // Uncomment the method below to handle the event raised when a feature is upgrading.
        //public override void FeatureUpgrading(SPFeatureReceiverProperties properties, string upgradeActionName, System.Collections.Generic.IDictionary<string, string> parameters)
    }
}
Cheers
Sathya

Hi,
If you want to get an item's field value, you need to get it by its internal name; you can use the CAML Query Builder to find the real internal name.
As for the error message, it is related to the SmtpClient host and port configuration; you need to specify a valid hostname and port.
Here are some similar threads for your reference:
http://stackoverflow.com/questions/17497154/smtpexception-unable-to-read-data-from-the-transport-connection-net-io-connec
http://stackoverflow.com/questions/20228644/smtpexception-unable-to-read-data-from-the-transport-connection-net-io-connect
Thanks
Best Regards
TechNet Community Support
Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact
[email protected]

Similar Messages

  • Microsoft SharePoint Foundation Usage Data Import Timer Job Fails on two of six servers

    Starting about one week ago, this timer job started to fail on my two WFEs in a newly implemented SP2013 farm. The ULS error and verbose details are below. Thanks.
    03/19/2015 15:55:58.54 OWSTIMER.EXE (0x30D4) 0x7748 SharePoint Foundation Timer 6398 Critical The Execute method of job definition
    Microsoft.SharePoint.Administration.SPUsageImportJobDefinition (ID 676104fa-9a64-4936-a6cd-986223176db5) threw an exception. More information is included below. Index was outside the bounds of the array.
    825cf49c-2b6d-a0a5-c45a-88e7d3e4ed5d
    03/19/2015 15:55:58.54 OWSTIMER.EXE (0x30D4) 0x7748 SharePoint Foundation Timer 72ae Unexpected Exception stack trace: at
    Microsoft.SharePoint.Administration.SPAnalyticsUsageDefinition.ParseLogFileEntry(String line) at Microsoft.SharePoint.Administration.SPUsageLogImporter.ImportUsageLogFile(SPUsageProvider usageProvider,
    FileInfo logFileInfo) at Microsoft.SharePoint.Administration.SPUsageLogImporter.ImportUsageLogFiles(SPUsageProvider usageProvider, String logDirName, List`1 usageLogFileList) at
    Microsoft.SharePoint.Administration.SPUsageLogImporter.ImportUsageData(SPUsageManager usageManager, DirectoryInfo usageLogDirInfo, String fileFilter) at
    Microsoft.SharePoint.Administration.SPUsageLogImporter.ImportUsageData() at Microsoft.SharePoint.Administration.SPUsageImportJobDefinition.Execute(Guid targetInstanceId) at
    Microsoft.SharePoint.Administration.SPTimerJobInvokeInternal.Invoke(SPJobDefinition jd, Guid targetInstanceId, Boolean isTimerService, Int32& result) 825cf49c-2b6d-a0a5-c45a-88e7d3e4ed5d

    Hi,
    Could you please locate more relevant error information in the ULS log to help with troubleshooting?
    If you go to Central Administration > Monitoring > Job History and change the view to Failed Jobs, check whether there are entries for the Microsoft SharePoint Foundation Usage Data Import timer job. Also try restarting the timer job.
    Regards,
    Rebecca Tu
    TechNet Community Support

  • The Job Failed Due to  MAX-Time at ODS Activation Step

    Hi,
    I'm getting the errors "The Job Failed Due to MAX-Time at ODS Activation Step" and "Max-time Failure".
    How do I resolve this failure?

    Hi,
    You can check the ODS activation logs in the ODS batch monitor: click on that job, then on the job log.
    First, check in SM37 how many jobs are running. If there are many other long-running jobs, ODS activation will happen very slowly because the system is overloaded; the long-running jobs cause poor system performance.
    To check the performance of the system:
    - Check the lock waits in ST04, and whether they are progressing or not.
    - Check SM66 for the number of processes running on the system.
    - Check ST22 for short dumps.
    - Check OS07 for the CPU idle time; if it is less than 20%, the CPU is overloaded.
    - Check SM21.
    - Check the table space available in ST04.
    If the system is overloaded, the ODS won't get enough work processes to create its background jobs (the BI_BCTL* jobs); the update will still happen, but very slowly.
    In that case you can kill a few long-running jobs which are not important, and kill a few ODS activations as well. Don't run 23 ODS activations all at the same time; run some of them at a time.
    As for the key points to check for data loading: check ST22, check the job in R/3, check SM58 for tRFC, and check SM59 for RFC connections.
    Regards,
    Shikha

  • InfoPath Forms Services Form Upgrade timer job continuously failing

    Hello all,
    We've recently uploaded several InfoPath form templates to our production SharePoint 2010 environment (a 6-server farm). One of the forms was still stuck in 'upgrading' (even after the upload reported success), so we deleted that form and re-uploaded it successfully.
    Since then we've been seeing the following timer job failures every minute. The forms all work, but the timer job failures still keep coming.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    The Execute method of job definition Microsoft.Office.InfoPath.Server.Administration.FormsUpgradeJobDefinition (ID 6904b095-822d-4158-9572-f6b3fe8d4265)
    threw an exception. More information is included below.
    Object reference not set to an instance of an object.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    We saw http://gpkarnik.wordpress.com/, which looks hopeful, but we are reluctant to proceed after reading this article:
    http://blogs.technet.com/b/nishants/archive/2008/10/10/how-to-delete-orphan-configuration-objects-from-sharepoint-farm.aspx
    We have done the following:
    Reset the timer service on all servers (the same job just moves to a different server and fails)
    Removed the suspect InfoPath form and re-uploaded it (no change)
    Rebooted all servers in the farm (no change)
    Questions:
    Can we just get rid of the failing InfoPath Forms Services Form Upgrade job?
    Will a new one be created the next time we upgrade a form?
    Thank you,
    Jim

    Hi Jim,
    When you upload an administrator-approved InfoPath form template to SharePoint, SharePoint automatically creates a solution package for it, as well as a timer job. When the solution package is deployed in the farm, it is stored in the 14 hive.
    Timer jobs are scheduled for execution, but it can sometimes take a long while before a SharePoint timer job is executed. The timer job created by SharePoint has a title similar to Windows SharePoint Services Solution Deployment for "form-[your_uploaded_form_template_name].wsp". The schedule type for this timer job is One-time.
    If you want to hurry the execution of administrative timer jobs along, you can use the following stsadm command:
    stsadm -o execadmsvcjobs
    For more information, please refer to these links:
    http://www.bizsupportonline.net/blog/2008/12/infopath-form-uploaded-sharepoint-remains-stuck-installing-upgrading-status/
    http://johnliu.net/blog/2013/1/8/infopath-form-stuck-on-installing-upgrading-or-deleting.html
    I hope this helps.
    Thanks,
    Wendy
    Wendy Li
    TechNet Community Support

  • Failed timer jobs via powershell

    Hi Friends,
    Is it possible to get an output of the failed timer jobs in Central Admin via PowerShell? If so, could anybody please share the script?
    Shonilchi..

    Hi Shonilchi, 
    You haven't actually changed the script, other than to pipe the output of Format-Table to a text file. 
    The script (that I posted above) creates a collection of "jobs", which are stored in the $jobCollection variable. 
    The original code I posted above contained an example that filtered the collection based on a webapplication URL (which was http://devmy101), and then displayed the results using Format-Table.
    To create a CSV report using this collection, you would send the $jobCollection variable to the Export-CSV PowerShell cmdlet, like this:
    $jobCollection | Export-Csv -Path C:\temp\jobsreport.csv -NoTypeInformation
    If you wanted to create a CSV report of all the timer jobs on the farm, then this would be the complete code:
    $items = New-Object psobject
    $items | Add-Member -MemberType NoteProperty -Name "Title" -Value "";
    $items | Add-Member -MemberType NoteProperty -Name "Schedule" -Value "";
    $items | Add-Member -MemberType NoteProperty -Name "WebApplication" -Value "";
    $webapplications = Get-SPWebApplication
    $jobCollection = @();
    foreach ($wa in $webapplications)
    {
        $jd = $wa.JobDefinitions;
        foreach ($job in $jd)
        {
            $j = $items | Select-Object *;
            $j.Title = $job.Title;
            $j.WebApplication = $job.WebApplication.Url;
            $j.Schedule = $job.Schedule;
            $jobCollection += $j
        }
    }
    # Add the farm jobs
    $f = Get-SPFarm
    $ts = $f.TimerService
    $jd = $ts.JobDefinitions
    foreach ($job in $jd)
    {
        $j = $items | Select-Object *;
        $j.Title = $job.Title;
        $j.WebApplication = "Farm";
        $j.Schedule = $job.Schedule;
        $jobCollection += $j
    }
    # Create the report and save it as jobsreport.csv in the C:\temp directory
    $jobCollection | Export-Csv -Path C:\temp\jobsreport.csv -NoTypeInformation
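    The script above reports every job definition, whether or not it has ever failed. To report only failures, one option is to query the farm timer service's run history instead. This is a sketch, assuming SharePoint 2010 or later, where SPTimerService exposes its history through JobHistoryEntries (with Status, StartTime, and ErrorMessage properties); the 7-day window and output path are arbitrary choices:

```powershell
# Export recent failed timer job runs from the farm's job history.
# JobHistoryEntries can be very large on a busy farm, so the query
# is limited to the last 7 days.
$farm = Get-SPFarm
$failed = $farm.TimerService.JobHistoryEntries |
    Where-Object { $_.Status -eq "Failed" -and $_.StartTime -gt (Get-Date).AddDays(-7) } |
    Select-Object JobDefinitionTitle, WebApplicationName, ServerName, StartTime, ErrorMessage
$failed | Export-Csv -Path C:\temp\failedjobs.csv -NoTypeInformation
```

    This avoids walking every web application and gives you the error message for each failed run directly.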
    Regards, Matthew
    MCPD | MCITP
    My Blog
    Please remember to click "Mark As Answer" if a post solves your problem or "Vote As Helpful" if it was useful.

  • SQL Server job fails sometimes and succeeds sometimes, even when it should run successfully

    Hi Experts,
    I am stuck with an issue; please help me resolve it.
    1. I created a job that calls an SSIS package.
    2. In the SSIS package:
    (I) First, I have a zip file container.
    (II) Second, I have an Execute Process Task that does the unzip; inside On Query Cancel I fail the package if a corrupted file comes in.
    (III) Third, I load into a table using bulk insert.
    Problem:
    When I run the job it works fine most of the time. But sometimes the job is still executing the Execute Process Task when it gets cancelled with "The job was stopped prior to completion by (unknown)", even though I have a proper file (which unzips fine manually).
    Please help me resolve this issue.

    What is the SQL Server version you are on?
    Is this applicable to you?
    http://support.microsoft.com/kb/922527
    Satheesh
    My Blog |
    How to ask questions in a technical forum

  • SharePoint 2013 upgrade job failed

    Hello everybody,
    I would like some help with a little problem that occurred after I installed the November CU on all of my SharePoint 2013 servers (1 App and 2 WFE). I'm unable to get the Timer Job Service working on my App server, and that is a big problem. Every time I restart the Timer Job Service I get this error: Upgrade Job Appserver :00:00 Failed 12/16/2014 1:33 PM. And in Event Viewer:
    System
      Provider Name: Microsoft-SharePoint Products-SharePoint Foundation
      Provider Guid: {6FB7E0CD-52E7-47DD-997A-241563931FC2}
      EventID: 6398
      Version: 15
      Level: 1
      Task: 12
      Opcode: 0
      Keywords: 0x4000000000000000
      TimeCreated SystemTime: 2014-12-16T11:33:37.846840300Z
      EventRecordID: 41822872
      Correlation ActivityID: {CD50D69C-AE16-00AA-3D5F-ECACB6C25BFE}
      Execution ProcessID: 11368, ThreadID: 3216
      Channel: Application
      Computer: AppServername
      Security UserID: S-1-5-21-1708537768-1425521274-1417001333-40380
    EventData
      string0: Microsoft.SharePoint.Administration.SPUpgradeJobDefinition
      string1: 13096a33-a91b-4c73-ae6c-e0ba750c5d4e
      string2: Object reference not set to an instance of an object.
    And also, from the SharePoint log file:
    Updating SPPersistedObject SPUpgradeJobDefinition Name=job-upgrade. Version: -1 Ensure: False, HashCode: 16745860, Id: 13096a33-a91b-4c73-ae6c-e0ba750c5d4e, Stack:    at Microsoft.SharePoint.Administration.SPJobDefinition.Update()    
    at Microsoft.SharePoint.Administration.SPTimerStore.CheckEnterUpgradeMode(SPFarm farm, Object& jobDefinitions, Int32& timerMode)     at Microsoft.SharePoint.Administration.SPTimerStore.InitializeTimer(Int64& cacheVersion, Object&
    jobDefinitions, Int32& timerMode, Guid& serverId, Boolean& isServerBusy)     at Microsoft.SharePoint.Administration.SPNativeConfigurationProvider.InitializeTimer(Int64& cacheVersion, Object& jobDefinitions, Int32& timerMode,
    Guid& serverId, Boolean& isServerBusy)
    Updating SPPersistedObject SPUpgradeJobDefinition Name=job-upgrade. Version: 2925660 Ensure: False, HashCode: 16745860, Id: 13096a33-a91b-4c73-ae6c-e0ba750c5d4e, Stack:    at Microsoft.SharePoint.Administration.SPJobDefinition.Update()    
    at Microsoft.SharePoint.Administration.SPTimerStore.CheckEnterUpgradeMode(SPFarm farm, Object& jobDefinitions, Int32& timerMode)     at Microsoft.SharePoint.Administration.SPTimerStore.InitializeTimer(Int64& cacheVersion, Object&
    jobDefinitions, Int32& timerMode, Guid& serverId, Boolean& isServerBusy)     at Microsoft.SharePoint.Administration.SPNativeConfigurationProvider.InitializeTimer(Int64& cacheVersion, Object& jobDefinitions, Int32& timerMode,
    Guid& serverId, Boolean& isServerBusy)
    Created upgrade job definition id 13096a33-a91b-4c73-ae6c-e0ba750c5d4e, mode InPlace, entering timer upgrade mode
    Queued timer job Upgrade Job, id {13096A33-A91B-4C73-AE6C-E0BA750C5D4E}
    Updating SPPersistedObject SPUpgradeJobDefinition Name=job-upgrade. Version: 2925663 Ensure: False, HashCode: 6922919, Id: 13096a33-a91b-4c73-ae6c-e0ba750c5d4e, Stack:    at Microsoft.SharePoint.Administration.SPJobDefinition.Update()    
    at Microsoft.SharePoint.Administration.SPUpgradeJobDefinition.Execute(Guid targetInstanceId)     at Microsoft.SharePoint.Administration.SPAdministrationServiceJobDefinition.ExecuteAdminJob(Guid targetInstanceId)     at Microsoft.SharePoint.Administration.SPTimerJobInvokeInternal.Invoke(SPJobDefinition
    jd, Guid targetInstanceId, Boolean isTimerService, Int32& result)     at Microsoft.SharePoint.Administration.SPTimerJobInvoke.Invoke(TimerJobExecuteData& data, Int32& result)
    The Execute method of job definition Microsoft.SharePoint.Administration.SPUpgradeJobDefinition (ID 13096a33-a91b-4c73-ae6c-e0ba750c5d4e) threw an exception. More information is included below.  Object reference not set to an instance of an object.
    And after that it all fails and timer jobs aren't working on the App server; on the WFE servers everything is all right and working as it needs to be.
    I searched a lot of forums, but none of the offered solutions are working, and that is not good.
    PS: On the development server everything worked fine; I looked through the log files and this job ran without any errors. The dev server is an all-in-one installation, though.
    If there is somebody who can help me with this, I would appreciate your help.
    TY!

    TY for your response,
    I used the script to update the timer job configuration, but that isn't helping. Also, this isn't a timer job that is already in the timer job definitions; it's one that is created and then deleted after execution, because it's grayed out on the timer job history page and I can't access it. If you look closer at the log files I have pasted, you will see that the timer job that is created gets deleted almost immediately and isn't executed, and after that all timer jobs on my App server stop working. This happens every time I restart the Timer Service on the App server; the event ID is 6398, but the timer job configuration cache isn't the problem.
    And, as far as I can tell from the logs, I receive the "Object reference not set to an instance of an object" error because there is no timer job with the ID that is shown in the error, because of this:
    Deleting the SPPersistedObject, SPUpgradeJobDefinition Name=job-upgrade.
    Maybe this information can somehow help:
    The Execute method of job definition Microsoft.SharePoint.Administration.SPUpgradeJobDefinition (ID 910fbcdc-2691-490e-a9e9-563ae767612e) threw an exception. More information is included below.  Object reference not set to an instance of an object.
    After previous error:
    Exception stack trace:    at Microsoft.SharePoint.Administration.SPUpgradeJobDefinition.Execute(Guid targetInstanceId)     at Microsoft.SharePoint.Administration.SPAdministrationServiceJobDefinition.ExecuteAdminJob(Guid targetInstanceId)
        at Microsoft.SharePoint.Administration.SPTimerJobInvokeInternal.Invoke(SPJobDefinition jd, Guid targetInstanceId, Boolean isTimerService, Int32& result)
    And after that the upgrade timer job gets deleted and that's it; the App server isn't working like it needs to and no timer jobs are executed.
    I hope this can help to narrow the problem down to a specific component or help resolve this error.
    Rihards

  • SharePoint 2013 - Team Foundation Server Dashboard Update job failed

    Hi,
    I integrated TFS 2012 with SharePoint 2013 on Windows Server 2012. The SharePoint 2013 farm has 3 WFE and 3 App servers.
    Here is what I did:
    I installed the TFS extension for SP 2013 on each SP server and granted the TFS server access to the SP web applications successfully.
    In CA, I deployed the TFS solutions (WSPs) successfully for the wfe3 server:
    microsoft.teamfoundation.sharepoint.dashboards.wsp
    microsoft.teamfoundation.sharepoint.dashboards15.wsp
    microsoft.teamfoundation.sharepoint.wsp
    I have a number of site collections with TFS features activated and connected to TFS server project sites, which work, but I really don't know much about TFS.
    What I see is that there are two "Team Foundation Server Dashboard Update" TFS timer jobs, one for each web application (web1 and web2), running every 30 minutes.
    All jobs on web1 run and succeed (on wfe1 and app3), but all jobs on web2 fail (on wfe2, wfe3, app1 and app2) with the following error: "An exception occurred while scanning dashboard sites. Please see the SharePoint log for detailed exceptions".
    I looked into the log file and it shows the same error, but nothing more.
    If anyone has experienced this or has any advice on how to resolve it, please share.
    Thanks
    Swanl

    Hi Swanl,
    It seems that the Dashboard Update timer job loops through the existing site collections, regardless of whether they are associated with a TFS site.
    If one or more of these site collections is down or corrupted, this will cause the job to fail.
    You can try the following steps to check whether the sites are good:
    1. Go to Central Administration > Application Management > View all Site Collections. Click on each site collection and check the properties for the site on the right-hand side. If the properties do not show up or error out, this will need to be fixed.
    2. Detach the SharePoint content database and reattach it to see if the issue still occurs.
    Thanks,
    Victoria
    Forum Support
    Victoria Xia
    TechNet Community Support

  • Audit Log Trimming Timer Job stuck at "pausing" status

    Hi,
    We have a SharePoint 2010 farm and our audit table is growing rapidly. I checked our "Audit Log Trimming" timer job and it has been stuck at "pausing" status for more than a month. Any advice to resolve this issue would be great.
    Thanks,
    norasampang

    Hi Trevor,
    Do you think the reason the timer job is failing is that the audit log table is big and the audit timer job times out? I saw your reply at this post, where you mentioned:
    "It may be timing out. Have you executed it manually to see if it runs without errors?"
    Can you please explain in more detail what you meant by that? I was thinking of trying to trim the audit log using this script, in small batches. Can you please let me know if this script seems right?
    $site = Get-SPSite -Identity http://sharepointsite.com
    $date = Get-Date
    $date = $date.AddDays(-1021)
    $site.Audit.DeleteEntries($date)
    At first I would like to delete all data that is older than 1021 days, and eventually get rid of the other logs in smaller chunks. Any advice and suggestions would be highly appreciated.
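    A batched variant of the script above might look like this. This is a sketch: it relies on the SP2010 SPAudit.DeleteEntries(DateTime) API, which removes all entries older than the given date, and the 1500-day starting point is an assumed age of the oldest entries (adjust to your data):

```powershell
# Trim the audit log in chunks by moving the cutoff date forward in
# 100-day steps (oldest entries first), instead of deleting everything
# older than 1021 days in a single call.
$site = Get-SPSite -Identity http://sharepointsite.com
for ($days = 1500; $days -ge 1021; $days -= 100) {
    $cutoff = (Get-Date).AddDays(-$days)
    $site.Audit.DeleteEntries($cutoff)   # deletes entries older than $cutoff
}
$site.Dispose()
```

    Each iteration deletes a bounded slice of the table, which keeps the individual delete operations short and less likely to time out.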
    Thanks,
    norasampang

  • Data Collection Jobs fail to run

    Hello,
    I have installed three servers with SQL Server 2012 Standard Edition, which should be identically installed and configured.
    Part of the installation was setting up Data Collection, which worked without problems on two of the servers, but not on the last one, where 5 of the SQL Server Agent jobs fail.
    The jobs failing are:
    collection_set_1_noncached_collect_and_upload
    Message:
    Executed as user: NT Service\SQLSERVERAGENT. The step did not generate any output.  Process Exit Code -1073741515.  The step failed.
    collection_set_2_collection
    Step 1 message:
    Executed as user: NT Service\SQLSERVERAGENT. The step did not generate any output.  Process Exit Code -1073741515.  The step failed.
    collection_set_2_upload
    Step 2 message:
    Executed as user: NT Service\SQLSERVERAGENT. The step did not generate any output.  Process Exit Code -1073741515.  The step failed.
    collection_set_3_collection
    Step 1 message:
    Executed as user: NT Service\SQLSERVERAGENT. The step did not generate any output.  Process Exit Code -1073741515.  The step failed.
    collection_set_3_upload
    Step 2 message:
    Executed as user: NT Service\SQLSERVERAGENT. The step did not generate any output.  Process Exit Code -1073741515.  The step failed.
    When I try to execute one of the jobs, I get the following event in the System Log:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Application Popup" Guid="{47BFA2B7-BD54-4FAC-B70B-29021084CA8F}" />
        <EventID>26</EventID>
        <Version>0</Version>
        <Level>4</Level>
        <Task>0</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2013-06-04T11:23:10.924129800Z" />
        <EventRecordID>29716</EventRecordID>
        <Correlation />
        <Execution ProcessID="396" ThreadID="1336" />
        <Channel>System</Channel>
        <Computer>myServer</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="Caption">DCEXEC.EXE - System Error</Data>
        <Data Name="Message">The program can't start because SqlTDiagN.dll is missing from your computer. Try reinstalling the program to fix this problem.</Data>
      </EventData>
    </Event>
    I have tried removing and reconfiguring the Data Collection two times, using the stored procedure msdb.dbo.sp_syscollector_cleanup_collector and removing the underlying database. Both times with the same result.
    Below is the basic information about the setup of the server:
    Does anyone have any suggestions about what the problem could be and what the cure might be?
    Best regards,
    Michael Andy Poulsen

    I tried running a repair on the SQL Server installation.
    This solved the problem.
    The only thing is that now I have to figure out the meaning of these last lines in the log of the repair:
    "The following warnings were encountered while configuring settings on your SQL Server. These resources / settings were missing or invalid, so default values were used in recreating the missing resources. Please review to make sure they don't require further customization for your applications:
    Service SID support has been enabled on the service.
    Service SID support has been enabled on the service."
    /Michael Andy Poulsen

  • Some jobs fail BackupExec, Ultrium 215 drive, NW6.5 SP6

    The OS is Netware 6.5 SP6.
    The server is a HP Proliant DL-380 G4.
    The drive is a HP StorageWorks Ultrium LTO-1 215 100/200GB drive.
    The drive is connected to an HP PCI-X Single Channel U320 SCSI HBA, which I recently installed in order to solve slow transfer speeds and CPQRAID errors which stalled the server during bootup (it was complaining about having a non-disk drive on the internal controller).
    The Backup Exec Administrative Console is version 9.10 revision 1158; I am assuming this means that BE itself has this version number.
    Since our data is now more than the tape capacity, I have recently started running two interleaved jobs to back up (around) half of the data each night: one runs Monday, Wednesday and Friday, and the other runs Tuesday and Thursday.
    My problem is that while the Tue/Thu job completes successfully every time, the Mon/Wed/Fri job fails every time.
    The jobs have identical policies (except for the interleaved weekdays) but different file selections.
    The job log of the Mon/Wed/Fri job fails with this error:
    ##ERR##Error on HA:1 ID:4 LUN:0 HP ULTRIUM 1-SCSI.
    ##ERR##A hardware error has been detected during this operation. This
    ##ERR##media should not be used for any additional backup operations.
    ##ERR##Data written to this media prior to the error may still be
    ##ERR##restored.
    ##ERR##SCSI bus timeouts can be caused by a media drive that needs
    ##ERR##cleaning, a SCSI bus that is too long, incorrect SCSI
    ##ERR##termination, or a faulty device. If the drive has been working
    ##ERR##properly, clean the drive or replace the media and retry the
    ##ERR##operation.
    ##ERR##Vendor: HP
    ##ERR##Product: ULTRIUM 1-SCSI
    ##ERR##ID:
    ##ERR##Firmware: N27D
    ##ERR##Function: Write(5)
    ##ERR##Error: A timeout has occurred on drive HA:1 ID:4 LUN:0 HP
    ##ERR##ULTRIUM 1-SCSI. Please retry the operation.(1)
    ##ERR##Sense Data:
    ##ERR##00 00 00 00 - 00 00 00 00 - 00 00 00 00 - 00 00 00 00
    ##ERR##00 00 00 00 - 00 00 00 00 - 00 00 00 00 - 00 00 00 00
    ##NML##
    ##NML##
    ##NML##
    ##NML## Total directories: 2864
    ##NML## Total files: 23275
    ##NML## Total bytes: 3,330,035,351 (3175.7 Megabytes)
    ##NML## Total time: 00:06:51
    ##NML## Throughput: 8,102,275 bytes/second (463.6 Megabytes/minute)
    I suspect the new controller, or perhaps a faulty drive?
    I have run multiple cleaning jobs on the drive with new cleaning tapes. The cabling is secured in place.
    I have looked for firmware updates, but even though there's mention of new firmware on HP's site (see http://h20000.www2.hp.com/bizsupport...odTypeId=12169), I can't find the firmware for the NetWare version of HP LTT (the drive diagnosis / update tool).
    I'm hoping someone can provide some useful info toward solving this problem.
    Regards,
    Tor

    My suggestion to you is probably just to give up on fixing this. I
    have the same DL380, but a slightly newer drive (Ultrium 448). After
    working with HP, Adaptec, and Symantec for over a year I gave up. I've
    tried different cards (HP-LSI, Adaptec) and cables, and even swapped the
    drive twice with HP, but was never able to get it to work.
    In the end I purchased a new server, moved the card, tape drive,
    and cables over to the new server, and the hardware has been
    working fine in the new box for the last year or so, until I loaded
    SP8 the other day.
    My guess is that the PCI-X slot used for these cards isn't happy with
    the server hardware.
    On Tue, 27 Jan 2009 11:16:02 GMT, torcfh
    <[email protected]> wrote:

  • SQL Server agent job failing to back up database

    I have set up a SQL Server Agent job to back up a database on a regular basis. For some reason the backup fails, and I could not find any meaningful info in the logs. Transaction log backups are being created for the same DB, but the database backup is not.
    Below is my environment:
    MS SQL Server 2005 - 9.00.1399.06
    Windows 2003 Server SP2
    Any pointers would be helpful. I checked the event logs and the agent logs and did not see an obvious error.
    Thanks

    No answer can be provided unless you show us the error message; there can be numerous reasons for a backup job to fail. See the job history for details, and please post the complete information present in the job history.
    Check the errorlog and note what error message appears at the time the backup job failed; you may find something useful there. Please describe the problem fully when posting a question in the forum; just saying "I have a problem" will not help either of us.

  • CO_COSTCTR Archiving Write Job Fails

    Hello,
    The CO_COSTCTR archiving write job fails with the error messages below. 
    Input or output error in archive file \\HOST\archive\SID\CO_COSTCTR_201209110858
    Message no. BA024
    Diagnosis
    An error has occurred when writing the archive file \\HOST\archive\SID\CO_COSTCTR_201209110858 in the file system. This can occur, for example, as the result of temporary network problems or of a lack of space in the file system.
    The job logs do not indicate other possible causes, and the OS and system logs don't show anything either. When I ran it in test mode it finished successfully after a long 8 hours. However, the error only happens during production mode, where the system is generating the archive files. The weird thing is that I do not have this issue with our QAS system (a DB copy of our Prod). I was able to archive successfully in our QAS using the same path name and logical name (we transport the settings).
    Considering the above, I am thinking of some system or OS related parameter that is unique to or different from our QAS system, a parameter that is not saved in the database (as our QAS is a DB copy of our Prod system). Such a parameter could affect archiving write jobs (which read from and write to the file system).
    I already checked the network session timeout settings (CMD > net server config) and the settings are the same between our QAS and Prod servers. There are no problems with disk space. The archive directory is a local shared folder \\HOST\archive\SID\<filename>, where HOST and SID are variables unique to each system. The difference is that our Prod server is HA configured (clustered) while our QAS is just standalone; it might have some other relevant settings I am not aware of. Has anyone encountered this before and been able to resolve it?
    We're running SAP R3 4.7 by the way.
    Thanks,
    Tony

    Hi Rod,
    We tried a couple of times already; they all got cancelled due to the error above. As much as we wanted to trim down the variant, CO_COSTCTR only accepts an entire fiscal year, the data it has to go through is quite a lot, and the test run took us more than 8 hours to complete. I have executed the same in our QAS without errors, which is why I am a bit confused that I am getting this error in our Production system. Even though our QAS is refreshed from our PRD using a DB copy, it can run the archive without any problems. So I am led to think that there might be unique contributing factors or parameters, not saved in the database, that affect the archiving. Our PRD is configured for high availability; the hostname is not actually the physical host but rather a virtual host of two clustered servers. But this was no concern with the other archiving objects; only CO_COSTCTR is giving us this error. QAS has archive logging turned off, if that's relevant.
    Archiving the 2007 fiscal year cancels after around 7200 seconds every time, while the 2008 fiscal year cancels earlier, at around 2500 seconds. I think that while the write program is going through the data in loops, by the time it needs to access the archive file again, the connection has been disconnected or timed out. The reason it cancels almost consistently after a fixed amount of time is the variant: there is not much room to trim down the data, so the program reads the same set of data objects, and when it reaches that one point of failure (after the expected time), it cancels. If this is true, I may need to find where to extend that timeout, or whatever else is causing the error above.
    Thanks for all your help. This is the best way I can describe it; sorry for the long reply.
    Tony

  • File lock in real time job

    Hi All,
    I encountered a file lock error when creating a real-time job.
    After a dataflow, I have a script to move the processed file to an archive folder (e.g. move c:\source\order.xml c:\archive). When I test run it, I receive a 50306 error that says "The process cannot access the file because it is being used by another process. 0 file(s) moved.". However, the dataflow and script run OK in a batch job, and I can move the files manually after the job fails. Can anyone help me with that? Is it something to do with the settings of the real-time services?
    Many thanks!
    Knight

    hi,
    Not sure, but you can check in SM12 whether there is any lock entry; if so, delete it manually and then check again.
    Ray
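
    If the lock turns out to be transient (the real-time service, or something like a virus scanner, still holding the file open for a moment after the dataflow finishes), a retry loop around the move often works around it. Here is a minimal Python sketch of the idea; the function name and the retry parameters are illustrative, not part of Data Services:

    ```python
    import shutil
    import time

    def move_with_retry(src, dst, attempts=5, delay=2.0):
        """Move src to dst, retrying while the file is still held open.

        On Windows, moving a file that another process has open raises
        PermissionError (a subclass of OSError); waiting briefly and
        retrying usually succeeds once the writer closes its handle.
        """
        for attempt in range(1, attempts + 1):
            try:
                shutil.move(src, dst)
                return True
            except OSError:
                if attempt == attempts:
                    raise  # still locked after all attempts; give up
                time.sleep(delay)
    ```

    The same pattern can be expressed inside a Data Services script by looping around the move command with a short sleep between attempts, instead of failing on the first locked-file error.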

  • DBA: ANALYZETAB and DBA:CHECKOPT jobs failed.

    Hi Guys,
    Good day!
    I would like to seek an assistance on how can I deal with the issue below:
    For JOB: DBA: ANALYZETAB
    27.05.2010 04:00:20 Job started
    27.05.2010 04:00:28 Step 001 started (program RSDBAJOB, variant &0000000000113, use
    27.05.2010 04:00:28 Action unknown in SDBAP <<< Job failed due to this error.
    27.05.2010 04:00:28 Job cancelled after system exception ERROR_MESSAGE
    For JOB: DBA:CHECKOPT
    Date       Time     Message text
    26.05.2010 18:15:07 Job started
    26.05.2010 18:15:08 Step 001 started (program RSDBAJOB, variant &0000000000112)
    26.05.2010 18:15:09 Action ANA Unknown in database ORACLE <<<< Job failed due to this error.
    26.05.2010 18:15:09 Job cancelled after system exception ERROR_MESSAGE
    I also checked those DBA jobs but could not find any in DB13 or in table SDBAP.
    Appreciate your help on this matter.
    Cheers,
    Virgilio
    Edited by: Virgilio Padios on May 29, 2010 3:24 PM

    Hello,
    I may have the same scenario, because I have the same two jobs getting cancelled in DB13. I checked them, and it is because the jobs were created in a client which no longer exists: "Logon of user XXXX in client XXX failed when starting a step".
    DBA:ANALYZETAB
    DBA:CHECKOPT
    Can you please tell me the importance of these jobs, as they are not included in SAP Note 16083 - "Standard jobs, reorganization jobs" as standard jobs? If required, I need to reschedule these jobs in DB13, but I do not know how; DB13 does not have options that will create these types of jobs. So far, this is all I can see in my DB13:
    Whole database offline + r
    Whole database offline bac
    Whole database online + re
    Whole database online back
    Redo log backup          
    Partial database offline b
    Partial database online ba
    Check and update optimizer - this is for DBA:UDATESTATS
    Adapt next extents - this is for DBA:NEXTEXTENT
    Check database - this is for DBA:CHECKDB        
    Verify database  - this is for DBA:VERIFYDB       
    Cleanup logs  - this is for DBA:CLEANUPLOGS           
    So where should these two jobs be generated from? I would appreciate any help regarding this.
    Thanks,
    Tony
