Failed batch jobs

I have been trying to compress an MPEG-2 stream from an eSATA drive, and it comes up with a "Failed: 3x crash service down" message. What is happening? I have done this type of job in the past.

Jim,
If you look up posts by my user name you will find that I have been trying to get an answer to this problem since September 2006 for an intel machine. Also try looking up "3x crash service down" for more than just myself. I don't believe this is a problem due to mismatched RAM because I had stock Apple RAM in the intel computer and it still crashed. I have stock RAM in the G5 at work and no problems there. I'm really surprised that this is happening on a PPC processor with you. I did a totally clean install the other day from FCP 5.1.1 in order to bypass the 5.1.2 update (which is where I first encountered the problem) and go straight to the 5.1.3 update hoping the problem was corrected. I ran two short tests of approx 7 minute clips exported to mpeg2 multiplexed and it worked. I thought, **** Yeah it's fixed. Then I dropped a longer clip (17 minutes) on the timeline to export and I got the 3x crash service down error from compressor. I then trashed everything with FCP Rescue and restarted the computer and tried the short clip (7 min.) again and it crashed out. So, it is still broken. BTW, when you look in your compressor crash logs what is the error you get? Have you reported this as a bug to Apple through the ADC Bug Reporting Feature? If I'm the only one that has reported this then I wouldn't think there would be much of a rush/interest to get it fixed. However, if x-number of people are having the same problem and report it then I would think it would garner a higher priority from engineering. In the meantime I have been using MPEG Streamclip for a workaround. You ought to try it. It's creates an extra step or two but it may help you keep your sanity. I'm not sure about it's ability to do batch processing though - haven't tried to do that. Another theory that I have is that at work I use a static IP address through a LAN and I have a static DNS Server number (which is listed as optional in the Network settings of System Prefernces). 
At home my IP address changes every couple of days and I don't have an IP address in the DNS Server field. That is the only difference in the setup of the systems other than one being a PPC and the other an Intel. I'm just glad that this isn't happening to the G5 at work or else I would be totally screwed. Keep up the good work on your end and keep providing feedback about what you find. Maybe we'll narrow it down or someone with greater knowedge will be able to help us.
Update: Received a reply from apple that said they were investigating the problem and would let me know something when they know something. Hope that sheds some light on things.

Similar Messages

  • Function module to start failed batch job

    Hi all,
    Is there a function module in ABAP to restart a failed batch job run, which I can then use in a report?
    I am using SAP 4.7.
    Thanks
    Raj

    Hi Minish,
    Normally, we can call an RFC in a background task as below:
    CALL FUNCTION func IN BACKGROUND TASK
                         [DESTINATION dest]
                         parameter_list
                         [AS SEPARATE UNIT].
    But I am not sure whether your external application can call this RFC in the background. You can, however, create another RFC that internally calls this RFC in a background task.
    Then call the new RFC: it will trigger the required RFC in the background and return to the caller immediately.
    Regards,
    Selva K.
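The advice above is ABAP-specific, but the pattern (a wrapper that hands the real call off to a background worker and returns immediately) is language-neutral. A minimal Python sketch of that fire-and-forget wrapper; the names `slow_task` and `wrapper` are illustrative, not anything from the thread:

```python
import threading

results = []

def slow_task(payload):
    # Stand-in for the function module that must run in the background.
    results.append(payload.upper())

def wrapper(payload):
    # Analogue of the wrapper RFC: trigger the real call asynchronously
    # and return to the caller right away.
    t = threading.Thread(target=slow_task, args=(payload,))
    t.start()
    return t

handle = wrapper("invoice-42")
handle.join()  # demo only; a real caller would continue without waiting
print(results)  # → ['INVOICE-42']
```

In ABAP the `IN BACKGROUND TASK` addition plays the role of the thread: the wrapper returns as soon as the call is registered, not when the work finishes.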

  • Compressor failing after job completion, but only in batches?

    First, the setup is a stock MacPro 8-Core 2.8GHz. RAM has been upgraded to 6GB, there's 3TB of SATA 3.0Gbps drives inside. Qmaster is set up with a QuickCluster, and we've tried several variations on the number of virtual nodes. Currently, it's at 8, but we see this problem with 2, 4, 6 as well.
    We're working with Uncompressed material, running some scaling and cropping presets and creating a new Uncompressed SD resolution quicktime. By Uncompressed, I actually mean uncompressed (Blackmagic 10 Bit 4:2:2 in and out). The job is segmented and processed, and shows as 100% complete, but fails in every case, *ONLY IF IT'S A BATCH JOB* at the "Merging Distributed QuickTime Segments" stage. If we run these one at a time, they all work fine. The failures vary, but all have to do with Quicktime errors, sometimes it's just "error 2" other times it's more substantive, talking about file offsets being incorrect.
    A batch of small files will work fine, but these files are all 130GB to 150GB in size.
    I've watched the Batch Monitor status until the failure occurs, and the problem seems to be related to disk I/O. This Merging step takes forever - the final file takes almost as long to merge as it does to process. It seems that if there's a merge happening while other jobs are running everything is fine, but once another large merge begins, everything bogs down and they start failing. If we then re-run each one of those individually with the exact same presets, target locations, etc. they run perfectly.
    But the idea here is to create a workflow we can just throw jobs at and not have to worry about this. So, is there something we can do to prevent this from happening? Any ideas?
    -perry
    Message was edited by: perry paolantonio
    Message was edited by: perry paolantonio
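Compressor and Qmaster do not expose a hook for this, so the following is purely an illustration of the scheduling idea the symptoms suggest: let the encode stages run in parallel but allow only one I/O-heavy merge at a time, since the failures only appear when two large merges overlap. A hedged Python sketch using a lock:

```python
import threading

merge_lock = threading.Lock()   # allow only one heavy merge at a time
log = []

def process_job(name):
    log.append(f"{name}: encoding")    # stages that parallelize well
    with merge_lock:                   # serialize the I/O-bound merge stage
        log.append(f"{name}: merging")

threads = [threading.Thread(target=process_job, args=(f"job{i}",))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(log))  # → 6: every job encoded and merged, merges never overlapped
```

In practice the closest equivalents would be reducing the number of virtual cluster nodes or submitting the large jobs one at a time, which is exactly the workaround described above.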


  • Batch Jobs fail because User ID is either Locked or deleted from SAP System

    Business users release batch jobs under their own user IDs.
    When these user IDs are deleted or locked by the system administrator, the batch jobs fail because the user is locked or no longer exists.
    Is there any way these batch jobs can be prevented from being cancelled? Or is there a standard SAP report to check whether batch jobs are running under a specific user ID?

    Ajay,
    What you can do is this: if you still want the jobs to show up under the particular user's name (I know people crib about anything and everything) without worrying about them failing when the user is locked out, create a system user (e.g. bkgrjobs) and run the steps of the jobs under that system user. You set the step user when defining the step of the job (SM36, or by changing an existing job in SM37).
    This way the jobs will still be listed under the business user's name but will not fail if he/she is locked out. But make sure the system user has the necessary authorizations, or the job will fail. SAP_ALL would work, but whether that is acceptable depends on your company.
    Kunal

  • SAP BODS XI 3.2 - batch job, global variable, name includes apostrophe - why failing?

    A global variable in the batch job is used to define user access (add/update/remove); how do we handle the inclusion of an apostrophe in a name?
    Tested with a wildcard prior to the apostrophe; it is still failing.
    Any clues, please?
    Thank you.

    I believe you cannot use any special characters in global variable names.
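If the failure is actually happening where the variable's value reaches a SQL statement (an assumption; the reply above is about variable names), the usual fix is doubling the apostrophe so it survives inside a quoted literal. A minimal sketch of that escaping, with a hypothetical column name:

```python
def escape_apostrophes(value: str) -> str:
    # Double each single quote so it is treated as a literal character
    # inside a quoted SQL string, not as the end of the string.
    return value.replace("'", "''")

name = "O'Brien"
print(f"WHERE LAST_NAME = '{escape_apostrophes(name)}'")
# → WHERE LAST_NAME = 'O''Brien'
```

Parameterized queries are the safer option where the tool supports them; escaping is the fallback when the value must be spliced into generated SQL.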

  • Email notification if batch job fails

    Hi,
    My requirement is to send a notification to a user if the batch job fails.
    The batch job is a Z report program.
    Regards
    Suresh Kumar

    HI,
    Check this thread..
    When a Job in SM37 fails means a email should be send
    Regards,
    Omkar.

  • Setting up alarms when batch jobs fails

    Hi,
    I have a requirement to set up alarms whenever an SAP batch job fails. Can anyone help me in this regard?
    Thank you very much in advance.
    Kind Regards,
    Ahmed.

    check this document
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/8065591d-ea33-2b10-a0af-f17c923874c8
    see in page-2.
    кαятιк

  • KSV5 Batch Job fails on second day

    Hi All,
    We execute KSV5 in batch for the first four days (i.e. the 1st through the 4th) of each new month.
    On the first day the batch job runs successfully, but on the remaining 3 days the job fails with the following error:
    (program RKGALKSV5, variant XXXX DSB, user ID BATCH)
    The reversal has not been updated
    Message no. GA534
    Diagnosis
    Because CCA: Actual Distribution has already run, old document numbers are reversed before processing begins. Processing can only begin once reversals have been updated. If processing occurred earlier, debits and credits not yet reversed would be affected.
    To ensure this, the reversal sets a lock (ENQUEUE EK811D) which is only removed once updates have been made.
    The processing waits until the block has been removed. The maximum time processing will wait is determined by the profile parameter ENQUE/DELAY_MAX. The default for this parameter is 5 seconds. Normally this amount of time is sufficient; however, certain factors such as high system usage can delay updates.
    This is the case here.
    System Response
    Processing of CCA: Actual Distribution is not started.
    Procedure
    Wait until the reversal has been updated. You can monitor the update with the menu path Tools -> Administration. If the update is not made due to a system error, contact your system administrator. If the reversal is updated, restart the processing. Because reversal has been performed, processing starts immediately.
    If this error occurs repeatedly, increase the value of the ENQUE/DELAY_MAX profile parameter. Processing then will wait longer for the reversal to be updated. Consult your system administrator before you change the parameter.
    We are not sure where and how to update the wait time in the profile parameter.
    Can you please advise how to fix it?
    Appreciate your help.
    Regards,
    Dixon.
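For reference, the parameter named in the message text is an instance profile parameter (view it with RZ11, maintain it via RZ10; a restart is needed for it to take effect). A sketch of the profile line only; the value 15 is an example and should be agreed with your Basis team, not taken from this thread:

```
# Instance profile (RZ10) -- raises how long processing waits for the
# reversal update before giving up (default is 5 seconds per the message)
enque/delay_max = 15
```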

    Hello Dixon,
    Are you running the same cycle each time?
    So you run and reverse, then run and reverse again?
    Or is it 4 different cycles?

  • Batch job failing with INVALID_STARTDATE exception

    Hello Experts,
    Our batch job runs at the 17th minute of every hour, daily. It runs successfully from 00:17 to 20:17 and completes within 1 hour on every run.
    But when it starts at 21:17, it keeps running for the next 5-6 hours and then gets cancelled with the error "INVALID_STARTDATE".
    It is posting all the IDocs (IDoc status 53). The problem is that, when closing the job, function module JOB_CLOSE throws the exception INVALID_STARTDATE and the job gets cancelled.
    Could anyone please provide any help on this?

    Usually this message is raised because of:
    Unsupported combinations of specifications, such as periodic repetition of jobs that were scheduled to wait for a predecessor job
    Incomplete or incorrect specifications, such as an incomplete start date.
    But... is your SAP system in a time zone 3 hours different from yours?
    E.g. New York has already started a new day when it is 9 PM in LA.
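The date rollover the reply describes can be seen directly: the same instant falls on different calendar days in the two zones, so a start date computed against the wrong zone can already be in the past. A minimal Python sketch; the times and zones are illustrative only:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 21:17 in Los Angeles is already past midnight in New York, so a
# "today" start date computed in one zone is invalid in the other.
la = datetime(2024, 6, 1, 21, 17, tzinfo=ZoneInfo("America/Los_Angeles"))
ny = la.astimezone(ZoneInfo("America/New_York"))
print(la.date().isoformat(), ny.date().isoformat())
# → 2024-06-01 2024-06-02
```

That would fit the symptom here: every run before 21:17 local time still maps to the same day on the server, and the 21:17 run is the first one that crosses midnight in the server's zone.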

  • Regarding batch job "RISTRA20" failing

    Hi,
    There is one background job which has been active for the past three days with no errors, but it has not generated any output either; it is the standard report RISTRA20 for the PM module.
    Do you have any idea why the job has been active for such a long time? It has not finished, and the required output has not been generated.
    Please help me with a solution if you have ever experienced this problem.
    Regards,
    Vishvesh

    Hi
    Yes, you are right, the batch job is related to IP30.
    We have scheduled the background job (RISTRA20) to generate maintenance orders through maintenance plans. Every night it runs automatically and generates orders.
    This batch job now runs for the complete day without any output; maybe it is going into a loop because of some data issue or another.
    If you can please help me to investigate the same.
    Thanks in advance.
    Vishvesh

  • Batch job

    Hi, I am working on a batch job.
    My program prints invoices as well as downloading data, and I run this program in batch
    using the function modules JOB_OPEN, JOB_SUBMIT and JOB_CLOSE.
    It is failing in JOB_SUBMIT with sy-subrc = 1,
    giving the error "bad print parameter". I haven't hit this before;
    I think it is related to the print options.
    CALL FUNCTION 'JOB_SUBMIT'
        EXPORTING
          authcknam = sy-uname  "tbtcjob-authcknam
          jobcount  = tbtcjob-jobcount
          jobname   = p_jobnam
          language  = sy-langu
          report    = c_reprot
          variant   = pvariant
        EXCEPTIONS
          OTHERS    = 01.
    Could anybody please guide me on why this happens?
    Regards,
    Kusum.

    You can use the code below:
    OPEN DATASET gv_file FOR INPUT IN TEXT MODE ENCODING DEFAULT
                                WITH SMART LINEFEED.
    IF sy-subrc EQ 0.
      WHILE sy-subrc IS INITIAL.
        READ DATASET gv_file INTO gwa_header_file.
        IF sy-subrc NE 0.
          EXIT.
        ELSE.
          APPEND gwa_header_file TO gt_header_file.
        ENDIF.
      ENDWHILE.
      CLOSE DATASET gv_file.
    ENDIF.

  • How to find out why a batch job failed and take action

    Normally we monitor batch jobs through transaction SM37, giving a batch job name, date and time as input. As a first step we check the job log for the reason for the failure, or check the spool request of the batch job; both help in analyzing the error.
    From my experience, a batch job may fail for the reasons below:
    1. Data issues: e.g. an invalid character in the quantity (MEINS) field. We correct the corresponding document with the right value, or run the job manually / ask the team to rerun it after excluding the problematic documents from the job variant, so that the other documents can still be processed.
    2. Configuration issues: e.g. "Material XXXX is not extended for Plant". We contact the material master team or the business to correct the data, or raise a call with the support team to correct it. Once the data has been corrected, we ask the team to rerun the batch job.
    3. Performance issues: the volume of data processed by the job, or network problems. We typically see these during month-end processing, when many accounting transactions and documents are being posted: the job may fail because there is not enough memory to complete the program, or SELECT queries in the program time out because of the volume of records.
    4. Network issues: temporary connectivity problems with partner systems. An outage in a partner system such as APO or GTS causes the job to fail, since it cannot connect to that system to get the information it needs. Normally we check the RFC destination status with a custom program to see whether connectivity between the systems is up, then inform the partner system team; once the partner system is back online, we restart or manually resubmit the batch job.
    Sometimes we create a manual job via transaction SM36.
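The four categories above can be sketched as a rough keyword triage of a failed job's log message. This is an illustration only: the keywords are assumptions and would need to match the actual message texts in your system.

```python
def classify_failure(log_line: str) -> str:
    """Rough triage of a failed job's log message into the four
    categories described above (keywords are assumptions)."""
    line = log_line.lower()
    if "invalid character" in line:
        return "data issue"
    if "not extended for plant" in line:
        return "configuration issue"
    if "timeout" in line or "memory" in line:
        return "performance issue"
    if "rfc" in line or "connection" in line:
        return "network issue"
    return "unclassified"

print(classify_failure("Material XXXX is not extended for Plant 1000"))
# → configuration issue
```

A real monitoring setup would read the job log (e.g. via the tables and function module mentioned in the reply below this post) and feed each message through a mapping like this to route the failure to the right team.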

    I'm not sure what the question is among all that, but if you want to check on jobs that are viewable via SM37 and started via SM36, the tables are TBTCP (Background Job Step Overview) and TBTCO (Job Status Overview).
    You can use the function module GET_JOB_RUNTIME_INFO to read background job runtime data.

  • Launch batch job thru SOAP call : no execution, connection OK.

    Hello,
    I am experiencing problems launching batch jobs through Web Services. I have added the batch job "EPN_Test_Webservices" to the Web Services and enabled the job attributes.
    When I try to launch the job through a SOAP call, I get a reply, but the job is not executed.
    This is the SOAP envelope:
    <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
         <SOAP-ENV:Body>
              <m:EPN_Test_Webservices_Job xmlns:m="http://www.businessobjects.com/DataIntegrator/ServerX.xsd">
                   <job_parameters>
                        <job_system_profile>String</job_system_profile>
                        <sampling_rate>10</sampling_rate>
                        <auditing>true</auditing>
                        <recovery>true</recovery>
                        <job_server>JS_ict</job_server>
                        <trace>String</trace>
                   </job_parameters>
              </m:EPN_Test_Webservices_Job>
         </SOAP-ENV:Body>
    </SOAP-ENV:Envelope>
    This is the reply I get:
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
         <soapenv:Body>
              <BatchJobResponse>
                   <pid>3888</pid>
                   <cid>13</cid>
              </BatchJobResponse>
         </soapenv:Body>
    </soapenv:Envelope>
    When I use the "ping" operation I get the following reply:
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
         <soapenv:Body>
              <pingVersion>
                   <version>Business Objects Data Integrator Version 11.7.2.0</version>
              </pingVersion>
         </soapenv:Body>
    </soapenv:Envelope>
    ...which indicates the connection works. The "processed counter" also increases, but the job is not executed: there is nothing in the job log and there is no result (the job is supposed to write a record to a database table). What can be wrong?
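Not part of the original thread, but since the server does return a pid/cid in the BatchJobResponse, one practical step is to pull those IDs out of the reply and look the pid up in the Data Services job log, rather than assuming the job ran. A hedged Python sketch of parsing that response (the reply XML is taken from the post above):

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def parse_batch_response(reply_xml: str) -> dict:
    # Extract pid/cid from the BatchJobResponse so the client can
    # cross-check the job log for that pid on the Job Server.
    body = ET.fromstring(reply_xml).find(f"{{{SOAP_NS}}}Body")
    resp = body.find("BatchJobResponse")  # response element has no namespace
    return {child.tag: child.text for child in resp}

reply = (
    '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soapenv:Body><BatchJobResponse><pid>3888</pid><cid>13</cid>"
    "</BatchJobResponse></soapenv:Body></soapenv:Envelope>"
)
print(parse_batch_response(reply))  # → {'pid': '3888', 'cid': '13'}
```

If no log exists for the returned pid, the request was accepted but the launch itself failed, which points at the Job Server configuration (e.g. the `job_server` value in the request) rather than at connectivity.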

    If I run the following (on the machine on which I'm trying to set up the share):
    smbclient -L localhost -U%
    I get the following output:
    Connection to localhost failed (Error NT_STATUS_CONNECTION_REFUSED)
    so I thought something might be wrong on the iptables side of things; however, I haven't really touched that at all and it seems to look correct:
    iptables -nvL
    Chain INPUT (policy ACCEPT 667 packets, 79977 bytes)
    pkts bytes target prot opt in out source destination
    Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
    pkts bytes target prot opt in out source destination
    Chain OUTPUT (policy ACCEPT 157 packets, 20724 bytes)
    pkts bytes target prot opt in out source destination
    So from my (very little) knowledge this appears correct (I think)... However it appears that something is blocking access somewhere.

  • How to find out the Batch job selection screen values

    Dear Users,
    One of our users set up a batch job by manually entering values into the selection screen of a report instead of picking a variant. We would like to know the values entered on the selection screen, since the job has failed and the user doesn't remember them.
    Can anyone please advise whether there is a way to figure out the selection-screen values that were entered?
    Thanks,
    Vijay

    Hi,
    You can debug your failed job by going to SM37, typing 'JDBG' in the command line (no '/'), putting the cursor on the job and pressing Enter; this takes you to the job in debug mode.
    You can do this only after the job has finished execution. It simulates the exact background scenario with the same selection-screen values as used in the job.
    So type in 'JDBG' and place your cursor on the job after it has finished. You will land in an SAP program in debug mode; step through this program (it is about 10 lines), after which your own program is executed in debug mode.
    There you can check the selection-screen values.

  • Run_report_object in forms10g: Reports Server only accept batch jobs

    Hi all,
    We've migrated an old forms application to forms 10g R1. To run report objects from the form applications, I use the following code to run reports with a report server (10g):
    SET_REPORT_OBJECT_PROPERTY(l_rep_id,REPORT_FILENAME,'c:\path\to\report.rep');
    SET_REPORT_OBJECT_PROPERTY(l_rep_id,REPORT_EXECUTION_MODE,BATCH);
    SET_REPORT_OBJECT_PROPERTY(l_rep_id,REPORT_COMM_MODE,ASYNCHRONOUS);
    SET_REPORT_OBJECT_PROPERTY(l_rep_id,REPORT_DESTYPE,CACHE);
    SET_REPORT_OBJECT_PROPERTY(l_rep_id,REPORT_DESFORMAT,'PDF');
    SET_REPORT_OBJECT_PROPERTY(l_rep_id,REPORT_SERVER,'repserver');
    l_rep_return := RUN_REPORT_OBJECT(l_rep_id, pl_id);
    The parameter pl_id is a pre-filled parameter list, and the parameter l_rep_id is the report_object, which was added to the reports-node in the fmb.
    By running this code, the reportserver successfully executes the report and via the report server queue I can open the created PDF file.
    But, as the report_comm_mode is set to ASYNCHRONOUS, the code doesn't wait until the reportserver has finished the execution of the report, so it's impossible to show the report by web.show_document. So I've set the report_comm_mode to SYNCHRONOUS:
    SET_REPORT_OBJECT_PROPERTY(l_rep_id,REPORT_FILENAME,'c:\path\to\report.rep');
    SET_REPORT_OBJECT_PROPERTY(l_rep_id,REPORT_EXECUTION_MODE,BATCH);
    SET_REPORT_OBJECT_PROPERTY(l_rep_id,REPORT_COMM_MODE,SYNCHRONOUS);
    SET_REPORT_OBJECT_PROPERTY(l_rep_id,REPORT_DESTYPE,CACHE);
    SET_REPORT_OBJECT_PROPERTY(l_rep_id,REPORT_DESFORMAT,'PDF');
    SET_REPORT_OBJECT_PROPERTY(l_rep_id,REPORT_SERVER,'repserver');
    l_rep_return := RUN_REPORT_OBJECT(l_rep_id, pl_id);
    But when I execute this code, the execution of the report fails, and the following error is shown in the report server queue:
    Terminated with error: REP-188: Reports Server only accept batch jobs
    But as you can see, the execution mode is set to BATCH, so that should not be the problem.
    Is there anyone who can help me to fix this problem?
    Regards,
    Jeroen

    I just sent you an e-mail, I hope it helps!
