Not clearing the job queue

Hello everyone,
We've been experiencing some oddities in our database and we hope to find some help around here. The scenario is as follows:
Our job queue shows 13 pages of jobs updated in the last 5 days. Most of them - mainly from 3 days ago - show the progress "transcoding clip" and the status "RUN".
When we try to cancel any of those jobs we get an error message that says: "Error cancelling - this job is currently not running".
We have tried flushing the event queue from Terminal using the following commands:
fcsvr_client flushresponsequeue
fcsvr_run psql px pxdb -c "vacuum analyze verbose;"
but none of them solved our problem. We cannot clear the event tables in the db (pxevent and pxeventresponse). We have even tried a more desperate solution by accessing the db via Navicat and manually deleting each and every row (!!!), visually clearing the tables.
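(For reference, that manual cleanup can be scripted with the same psql wrapper used above instead of Navicat - a sketch only, assuming the stock px/pxdb names from the earlier commands; stop Final Cut Server and back up the database first:)

```shell
# Sketch: clear the event tables via the bundled psql wrapper instead of
# deleting rows by hand in Navicat. The px/pxdb names come from the
# commands above; run only with Final Cut Server stopped, after a backup.
SQL='TRUNCATE pxevent, pxeventresponse;'
if command -v fcsvr_run >/dev/null 2>&1; then
  fcsvr_run psql px pxdb -c "$SQL"
else
  # Not on a Final Cut Server machine: just show what would run.
  echo "would run: fcsvr_run psql px pxdb -c \"$SQL\""
fi
```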
BUT on the next day they did reappear and on the day after double in quantity (i.e. empty table --next day--> 4K rows --next day--> +4K rows and so on...).
So now we have 18,826 entries in our table which seems to grow indefinitely (and for no apparent reason).
We fear that our problem may get worse and cause us bigger problems. Any help would be very appreciated.
Details of the system: FCS V1.5.2 / Mac OS X (10.5.8)
Thank you in advance,

Hey guys, thanks for the replies.
The issues with the "pxevent" and "pxeventresponse" tables seem to have been solved, BUT the issue with the job queue remains the same.
Old jobs (1 day old or more) show the status RUN, yet when you try to delete one you get the error "error cancelling - this job is currently not running", despite it being shown as running. When you "get info" on any of those jobs, the logs show that it failed, retried, and then failed again on the retry. That condition should change the status from RUN to FAIL, but that is not what's happening.
All this ends up putting recently added jobs in WAIT status, which effectively locks my queue (preventing jobs from running, timing them out and, consequently, failing them). If I go to the "pxjobs" table and delete all entries it clears the symptom (as one of you well said), but the problem keeps recurring.
I can force the waiting jobs to run now, but they end up failing too.
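For what it's worth, the pxjobs cleanup described above can also be done from Terminal with the same psql wrapper rather than Navicat - a sketch, assuming the same px/pxdb names as the earlier commands. It deletes every row in pxjobs, so treat it as a last resort:

```shell
# Sketch: wipe the job queue table, as described in the post above.
# WARNING: this deletes every row in pxjobs; stop Final Cut Server and
# back up the database before running it.
SQL='DELETE FROM pxjobs;'
if command -v fcsvr_run >/dev/null 2>&1; then
  fcsvr_run psql px pxdb -c "$SQL"
else
  # Not on a Final Cut Server machine: just show what would run.
  echo "would run: fcsvr_run psql px pxdb -c \"$SQL\""
fi
```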
Any thoughts on that? I have some screenshots but don't know if I can post them here (newbie on this forum, sorry).
Thanks for your attention

Similar Messages

  • Background Job cancelling with error Data does not match the job definition

    Dear Team,
    A background job is getting cancelled when it runs on a periodic schedule, but the same job executes perfectly when I run it manually (repeat scheduling).
    Let me describe the problem clearly.
    We have a program which picks up files from an FTP server and posts the documents into SAP. We schedule this program as a daily background job. The job runs perfectly if the files contain no data, but if a file contains data the job is cancelled with the following messages.
    The same job also executes perfectly when repeat scheduling is done (even for files with data).
    Time     Message text                                                                       Message class Message no. Message type
    03:46:08 Job PREPAID_OCT_APPS2_11: Data does not match the job definition; job terminated        BD           078          E
    03:46:08 Job cancelled after system exception ERROR_MESSAGE                                      00           564          A
    Please help me in resolving this issue.
    Thanks in advance,
    Sai.

    Hi,
    If the program behind the job calls any GUI function modules, it cannot run in background mode.

  • [Guide] How To Clear the Play Queue

    Question: How do I clear the play queue?
    Answer: Please note this is an unofficial workaround, not an official solution.
    1 - Click New Playlist.
    2 - Add only one song to the Playlist.
    3 - Whenever you want to clear the Play Queue, double-click it.
    Once the Play Queue is flushed, only the short song you used will remain in it. The reason is that the last song played always stays in the Queue until something new is loaded into it other than manually added material.

    The MarkupBean uses its own broadcaster (MarkupBroadcaster).
    setBroadcast(boolean b) is available only in the VueEventBroadcaster.
    We may work on a better solution to fix the issue.
    The issue is probably not related to the number of notifications! Try this workaround to confirm:
    Workaround (I do not recommend doing this):
    - Save the current markup broadcaster: oldBroadcaster = getMarkupBean().getMarkupBroadcaster();
    - Set an empty broadcaster: getMarkupBean().setMarkupBroadcaster(new MarkupBroadcaster()). All the listeners are copied into the new broadcaster.
    - getMarkupBean().getMarkupBroadcaster().removeAllMarkupEventListeners(); // remove the copied listeners
    - Put back the old broadcaster (markup listeners).

  • Clearing the print queue

    While printing, the paper supply ran out, but after I reloaded it the printer still tells me it is out of paper. I've turned the printer off and tried to delete the current job, but I am still unable to print.
    Thank you.

    Hi - Have you rebooted the computer yet? I'd try that first; that should clear the print queue in case the previous document is still stuck in it. If that doesn't take care of it, does the printer attempt to pick up paper, or do you get any response from the printer when you send a file to print?
    Hope that helps.
    I am an HP employee.

  • How can I clear the print queue?

    How can I clear the print queue on an iPad 2?

    Double-click the Home button to bring up the "recent Apps" tray. Find the Print Center icon in the tray and touch it to switch to it. You can cancel print jobs from there.

  • Job Cancelled with an error "Data does not match the job def: Job terminat"

    Dear Friends,
    The following job is with respect to an inbound interface that transfers data into SAP.
    The file mist.txt is picked from the /FI/in directory of the application server and is moved to the /FI/work directory of application server for processing. Once the program ends up without any error, the file is moved to /FI/archive directory.
    Below are the steps listed in the job log. No spool is generated for this job, and it ended with the error "Data does not match the job definition; job terminated". Please see below for more info.
    1.Job   Started                                                                               
    2.Step 001 started (program Y_SAP_FI_POST, variant MIST, user ID K364646)                           
    3.File mist.txt copied from /data/sap/ARD/interface/FI/in/ to /data/sap/ARD/interface/FI/work/.
    4.File mist.txt deleted from /data/sap/ARD/interface/FI/in/.                                   
    5.File mist.txt read from /data/sap/ARD/interface/FI/work/.                                    
    6.PD-DKLY-Y_SAP_FI_POST: This job was started periodically or directly from SM36/SM37 (Message Class: BD, Message Number : 076)  
    7.Job PD-DKLY-Y_SAP_FI_POST: Data does not match the job definition; job terminated (Message Class : BD, Message No. 078)    
    8.Job cancelled after system exception
    ERROR_MESSAGE                                                
    Could you please analyse under what circumstances the above error is reported?
    I have also heard that the error can be raised because of customization issues in transaction BMV0.
    Also note that we can both define and schedule jobs from that transaction, and the corresponding data is stored in table TBICU.
    My trials:
    1. Tested uploading an empty file
    2. Tested uploading with wrong data
    3. Tested uploading improper data with a false file structure
    But I failed to reproduce the above scenario.
    Clarification required:
    Assume that I have defined a job using BMV0. Is it mandatory to use the same job in SM36/SM37 for scheduling?
    Is the above question valid?
    Edited by: dharmendra gali on Jan 28, 2008 6:06 AM


  • Regarding "Data does not match the job definition; job terminated"

    Dear Friends,
    The following job is with respect to an inbound interface that transfers data into SAP.
    The file mist.txt is picked from the /FI/in directory of the application server and is moved to the /FI/work directory of application server for processing. Once the program ends up without any error, the file is moved to /FI/archive directory.
    Below are the steps listed in the job log. No spool is generated for this job, and it ended with the error "Data does not match the job definition; job terminated". Please see below for more info.
    1.Job Started
    2.Step 001 started (program Y_SAP_FI_POST, variant MIST, user ID K364646)
    3.File mist.txt copied from /data/sap/ARD/interface/FI/in/ to /data/sap/ARD/interface/FI/work/.
    4.File mist.txt deleted from /data/sap/ARD/interface/FI/in/.
    5.File mist.txt read from /data/sap/ARD/interface/FI/work/.
    6.PD-DKLY-Y_SAP_FI_POST: This job was started periodically or directly from SM36/SM37 (Message Class: BD, Message Number : 076)
    7.Job PD-DKLY-Y_SAP_FI_POST: Data does not match the job definition; job terminated (Message Class : BD, Message No. 078)
    8.Job cancelled after system exception
    ERROR_MESSAGE
    Could you please analyse under what circumstances the above error is reported?
    I have also heard that the error can be raised because of customization issues in transaction BMV0.
    Also note that we can both define and schedule jobs from that transaction, and the corresponding data is stored in table TBICU.
    My trials:
    1. Tested uploading an empty file
    2. Tested uploading with wrong data
    3. Tested uploading improper data with a false file structure
    But I failed to reproduce the above scenario.
    Clarification required:
    Assume that I have defined a job using BMV0. Is it mandatory to use the same job in SM36/SM37 for scheduling?
    Is the above question valid?

    Hi Dharmendra,
    Good day. I am facing the same problem you posted. Have you by any chance found the solution? If so, please let me know.
    Thanks in advance.
    Cheers,
    Vallabhaneni

  • MR11 not clearing the GR/IR account done in foreign currency.

    Hi Experts,
    I have an issue with MR11. MR11 is not clearing the GR/IR account for purchase orders raised in foreign currency (with MIGO and MIRO consequently in foreign currency too), even though there is no quantity variance. There is no problem with local currency.
    Can you please let me know what could be the problem.
    Rgds,
    BABA

    Hi,
    Please maintain the GR/IR account with "Only balances in local currency"; then you will be able to clear the account.
    Go to FS00, enter the GR/IR account, enter the company code, click Change, and on the Control Data tab flag "Only balances in local currency".
    This needs to be flagged for the GR/IR account to avoid such problems.
    Thanks & Regards,
    Kishore

  • How do i clear the qRFC queue in a R/3 sandbox

    Hi All,
    I just built an R/3 4.7B sandbox and we are going to perform an upgrade on it. But before we do that, we need to clear the qRFC queue. Can anyone please let me know how to go about clearing it?
    Thanks
    Anil

    Open the InfoPackage that loads deltas from this InfoSource. Use the delta option and run the InfoPackage. Monitor it and check that it has gone GREEN and pulled records.
    Then go to transaction RSA7 in the source system, highlight the DataSource, and click the 'detail' (lens) button.
    In the next screen choose the 'delta update' radio button and check that it lists 0 records (shows as 0 from 0 records).
    It will probably also show 0 LUWs in the delta queue screen beside the DataSource technical name.
    This confirms that the queue is cleared.
    cheers,
    Vishvesh

  • DBMS_DATAPUMP.STOP_JOB procedure is not halting the job

    Dear All,
    We have created a .NET application with which a user can back up his schema (it fires the Data Pump procedure in the backend). But we have noticed that DBMS_DATAPUMP.STOP_JOB does not actually stop the job a few minutes after start time; it only works if it is fired within the first few minutes.
    This is how I called the procedure:
    DBMS_DATAPUMP.STOP_JOB(0,1,0)

    Post the code you used to start the job. It should be similar to the following, where the handle is assigned to a variable:
    declare
      h1 NUMBER;
    begin
      h1 := dbms_datapump.open(operation => 'EXPORT', job_mode => 'TABLE', job_name => 'TESTING_DP_JOB', version => 'COMPATIBLE');
      -- ... define and start the job, then later:
      -- dbms_datapump.stop_job(h1);
    end;
    In this case the h1 variable holds the handle. If the original handle is gone, you can re-attach to a running job by name with dbms_datapump.attach and pass the returned handle to stop_job; the DBA_DATAPUMP_JOBS view lists the jobs that are running.

  • Azure Compute Nodes not running the job

    I have an on-premise head node and have joined 10 Azure compute nodes via the cloud service and storage account. I uploaded the dlls directory to the storage account and synced all the compute nodes using hpcsync. The Azure compute nodes are still not running the job; in HPC Job Manager I see Cores In Use = 0. How should I resolve this?

    Hello,
    We are researching on the query and would get back to you soon on this.
    I apologize for the inconvenience and appreciate your time and patience in this matter.
    Regards,
    Azam khan

  • Could not execute the job

    Hi,
    When I execute the job, a window appears with the message "ERROR: could not execute the job. Error returned was 1.
         MESSAGE is: Could not open command file..."
    I can't find where it comes from; any suggestions?

    When I executed the job today I got this list of errors:
    13860    15384    REP-100109        27/05/2014 08:22:10       |Session TF_SGFA
    13860    15384    REP-100109        27/05/2014 08:22:10       Cannot save <History info> into the repository. Additional database information: <SQL submitted to ODBC data source
    13860    15384    REP-100109        27/05/2014 08:22:10       <SIGSIRDDB01\SQLSIRDBD> resulted in error <[Microsoft][ODBC SQL Server Driver][SQL Server]Could not allocate space for object
    13860    15384    REP-100109        27/05/2014 08:22:10       'dbo.AL_HISTORY_INFO' in database 'DS_REP' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded
    13860    15384    REP-100109        27/05/2014 08:22:10       files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files
    13860    15384    REP-100109        27/05/2014 08:22:10       in the filegroup.>. The SQL submitted is <INSERT INTO "AL_HISTORY_INFO" ("OBJECT_KEY", "NAME", "VALUE", "NORM_NAME",
    13860    15384    REP-100109        27/05/2014 08:22:10       "NORM_VALUE") VALUES (430, N'TRACE_LOG_INFO',
    13860    15384    REP-100109        27/05/2014 08:22:10                N'F:\SAPDS/log/JobServDS/sigsirddb01_sqlsirdbd_ds_rep_dsuser/trace_05_27_2014_08_22_09_10__3a4327b0_92c8_4abe_ac24_96d49123242a.
    13860    15384    REP-100109        27/05/2014 08:22:10       txt', N'TRACE_LOG_INFO',
    13860    15384    REP-100109        27/05/2014 08:22:10                N'F:\SAPDS/LOG/JOBSERVDS/SIGSIRDDB01_SQLSIRDBD_DS_REP_DSUSER/TRACE_05_27_2014_08_22_09_10__3A4327B0_92C8_4ABE_AC24_96D49123242A.
    13860    15384    REP-100109        27/05/2014 08:22:10       TXT') >.>.
    13860    15384    REP-100112        27/05/2014 08:22:10       |Session TF_SGFA
    13860    15384    REP-100112        27/05/2014 08:22:10       Cannot save <History info> for repository object <>. Additional database information: <Cannot save <History info> into the
    13860    15384    REP-100112        27/05/2014 08:22:10       repository. Additional database information: <SQL submitted to ODBC data source <SIGSIRDDB01\SQLSIRDBD> resulted in error
    13860    15384    REP-100112        27/05/2014 08:22:10       <[Microsoft][ODBC SQL Server Driver][SQL Server]Could not allocate space for object 'dbo.AL_HISTORY_INFO' in database 'DS_REP'
    13860    15384    REP-100112        27/05/2014 08:22:10       because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup,
    13860    15384    REP-100112        27/05/2014 08:22:10       adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.>. The SQL submitted is
    13860    15384    REP-100112        27/05/2014 08:22:10       <INSERT INTO "AL_HISTORY_INFO" ("OBJECT_KEY", "NAME", "VALUE", "NORM_NAME", "NORM_VALUE") VALUES (430, N'TRACE_LOG_INFO',
    13860    15384    REP-100112        27/05/2014 08:22:10                N'F:\SAPDS/log/JobServDS/sigsirddb01_sqlsirdbd_ds_rep_dsuser/trace_05_27_2014_08_22_09_10__3a4327b0_92c8_4abe_ac24_96d49123242a.
    13860    15384    REP-100112        27/05/2014 08:22:10       txt', N'TRACE_LOG_INFO',
    13860    15384    REP-100112        27/05/2014 08:22:10                N'F:\SAPDS/LOG/JOBSERVDS/SIGSIRDDB01_SQLSIRDBD_DS_REP_DSUSER/TRACE_05_27_2014_08_22_09_10__3A4327B0_92C8_4ABE_AC24_96D49123242A.
    13860    15384    REP-100112        27/05/2014 08:22:10       TXT') >.>.>.
    Thank you.
    Sincerely,
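    The root cause in the log above is explicit: the PRIMARY filegroup of the DS_REP repository database is full. A sketch of one common fix (enabling autogrowth on the data file) follows - the logical file name DS_REP_Data is an assumption, so check sys.database_files for the real name first; freeing disk space or adding a file are the alternatives the error message itself suggests:

```shell
# Sketch: grow the full PRIMARY filegroup of DS_REP on the server named
# in the log. The logical file name DS_REP_Data is an ASSUMPTION --
# query sys.database_files inside DS_REP to find the real one first.
SQL="ALTER DATABASE DS_REP MODIFY FILE (NAME = DS_REP_Data, FILEGROWTH = 256MB);"
if command -v sqlcmd >/dev/null 2>&1; then
  sqlcmd -S 'SIGSIRDDB01\SQLSIRDBD' -Q "$SQL"
else
  # sqlcmd not installed here: just show what would run.
  echo "would run: sqlcmd -S SIGSIRDDB01\\SQLSIRDBD -Q \"$SQL\""
fi
```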

  • HT4236 I successfully synched photos from my mac book iphoto to my new iphone 5 (icloud did not do the job).  Now these photos appear on my phone in three places--iphoto, last import and photo library.  Does that mean that they take up three times the mem

    I successfully synched photos from my mac book iphoto to my new iphone 5 (icloud did not do the job).  Now these photos appear on my phone in three places--iphoto app, last import, and photo library.  Does that mean that they take up three times the memory?  Will that affect iCloud?  Sorry for the end-user questions!!

    Hi, thank you for the reply. I have checked my iPad and iPhone and neither has iCloud Photo Library (Beta) enabled; it is turned off on both. Photo Stream is turned on.
    I tried to sort it out by dragging all the photos to Events on the Mac and then deleting them from iCloud (left-hand side of iPhoto, under the 'Shared' section). The photos now show up in Events. I did a force quit but the issue remains. The message reads: 'Photos are being imported to the library. Please wait for import to complete.'
    I can't empty the iPhoto trash either. The message reads: 'Delete error. Please wait for import to complete.'
    When I was moving the photos to Events I always got a message about duplicates, to the effect that the photos already existed; did I want to import them? I clicked Yes, import all duplicates. But when it showed the images (duplicates side by side), one showed the photo and the other was blank.
    I really don't know what to do, and I don't know how to handle my iOS devices. Is it to do with the large number of photos? Any help or advice appreciated.

  • Do we necessarily need to clear the BW queues in preprocessing steps of SAP

    Hello Experts,
    A quick question.
    I am doing EHP upgrade on BW system
    Do we necessarily need to clear the BW queues in preprocessing steps of SAPehpi?
    OR the same can be done just before downtime?
    A fast response is much appreciated.
    Best Regards
    Sachin Bhatt

    Hello Markus,
    Thanks for the info.
    We finished the EHP4 implementation successfully last Saturday.
    However, to my surprise, we didn't receive any message from EHPI to clear the delta queues.
    We cleared them just before downtime.
    Anyway, thanks for the answer.
    We will take a close look at the phase you mentioned when implementing next time.
    Regards,
    Sachin Bhatt

  • Copy opening does not clear the destination data - Just appends

    Hi All,
    Copy opening does not clear the destination data; it just appends, so if I run the copy opening package twice the signed data is doubled.
    We have SAP BPC NW 7.5 SP08 Patch 1001.
    I working on the Periodic Application.
    Copy opening Script logic:
    *FOR %TIME% = %TIME_SET%
    *RUN_PROGRAM COPYOPENING
    CATEGORY = %CATEGORY_SET%
    CURRENCY = %CURRENCY_SET%
    TID_RA = %TIME%
    OTHER = [B_ENTITY=%B_ENTITY_SET%]
    *ENDRUN_PROGRAM
    *NEXT
    Carry-forward business rules
    Source account : BALANCE SHEET ACCOUNTS
    Source flow  : F99
    Dest account :
    Dest flow  : F00
    Reverse sign : FALSE
    Data source type :  ALL
    Same period      :FALSE
    Apply to YTD : False
    Category dimension:
    ID               ACTUAL               BUDGET
    EVDESCRIPTION          Actual               BUDGET
    PARENTH1          
    YEAR               2008               2011
    COMPARISON          BUDGET               NBUDGET
    STARTMTH               1               1
    CATEGORY_FOR_OPE                    ACTUAL
    FX_SOURCE_CATEGORY          
    RATE_CATEGORY          
    RATE_YEAR          
    RATE_PERIOD          
    FX_DIFFERENCE_ONLY          
    OPENING_YEAR          
    OPENING_PERIOD          
    OWNER               [ADMIN]               [ADMIN]
    STORED_CALC          
    REVIEWER_CAT          
    Appreciate any help.
    Regards
    Mehul

    SAP Note 1620613 fixed the issue.
    Regards
    Mehul
