Collector job

Hi all,
A collector job runs periodically, every 2 hours. Since we upgraded our SAP version to SAP 7, we have observed that the same job runs for up to 4 hours, and that more than one job with the same job name is in released state at the same time. Actually, only one job should run at a time.
Of those jobs, one is in active state and the others stay in released state for some time. Only the one active job runs for a long time (up to 4 hours); all the other jobs move to finished state quickly.
When I tried to debug the active job from the process overview (SM50), the program was reading data from SYS.DBA_EXTENTS for a long time.
Thanks & regards,
Naveena.
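The overlap Naveena describes (a second periodic instance released while a long run of the same job is still active) can be checked programmatically from an SM37-style job list. A hedged Python sketch with made-up data; the job name and times are hypothetical, not taken from the system above:

```python
from datetime import datetime, timedelta

# Hypothetical SM37-style export: (job name, start time, duration).
# If a run's duration exceeds the 2-hour scheduling period, the next
# periodic instance starts while the previous one is still running.
runs = [
    ("COLLECTOR", datetime(2024, 1, 1, 0, 0), timedelta(hours=4)),   # long run
    ("COLLECTOR", datetime(2024, 1, 1, 2, 0), timedelta(minutes=5)),
    ("COLLECTOR", datetime(2024, 1, 1, 4, 0), timedelta(minutes=5)),
]

def overlapping_runs(runs):
    """Return pairs of start times of same-named runs whose windows overlap."""
    pairs = []
    for i, (name_a, start_a, dur_a) in enumerate(runs):
        for name_b, start_b, dur_b in runs[i + 1:]:
            same_job = name_a == name_b
            windows_overlap = start_a < start_b + dur_b and start_b < start_a + dur_a
            if same_job and windows_overlap:
                pairs.append((start_a, start_b))
    return pairs

print(overlapping_runs(runs))
```

On this sample data, only the 02:00 instance falls inside the 4-hour run's window, so one overlapping pair is reported; the 04:00 instance starts exactly as the long run ends.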


Similar Messages

  • DB6 - DB13 collector jobs fail

    Hallo,
    In DB13 the DB collector jobs are red.
    The CLEANUP... jobs are scheduled twice (repeating every 2 hours).
    One of the CLEANUP collector jobs is green and okay; the other one is red and cancelled.
    Any idea how to get rid of the duplicates?
    THX
    Raymond

    Hi,
    The additional (DB collector) entries cannot be deleted with DB13.
    When I deleted them with an SQL script, as described in the SAP note, everything was fine.
    Thank you
    Raymond

  • RSCOLL00 OS collector job is cancelled and getting TIME_OUT dump

    Hi...
    We are using ECC 5.0. In our PRD system, the OS collector job running program RSCOLL00 is getting a TIME_OUT dump.
    I have also seen that an SQL error 3997 occurred in the job log.
    These are the dump's search criteria:
    "TIME_OUT" C
    "SAPLSALC" or "LSALCU16"
    "SALC_MT_READ"
    Please help as soon as possible.
    thanks,
    Gopinath.

    Hi Gopinath,
    If your problem is still pending, please paste the complete dump message in this forum so that we can give a solution accordingly.
    If your problem has been solved, then please post the complete dump message and the solution you applied.
    Anil

  • VRF Collector Job Failing (LMS 4.0)

    My VRF Collector job has started failing.
    I have attached the contents of the vnmcollector.log file after setting debug level to DEBUG.
    For the life of me I cannot see from the debug log what the problem is; has anyone got any ideas?
    Many thanks
    Steve

    This may or may not be relevant.
    If I go to Monitor> Troubleshooting Tools> VRF Lite> Show Commands, click 'Select' against source device and then expand 'All Devices' nothing is listed.

  • Quality collector job fails despite extractor fetching the data successfully

    Hi,
    The Quality collector job fails on 'DVS' (managed system) despite the extractor fetching the data successfully.
    The Solution Manager currently in use is 'SOP', release 7.1 SP12.
    All the RTCCTOOL recommendations are applied on SOP & DVS. ST-PI is at 2008_1_700 SP11 on both systems. The ST-BCO component is at SP11.
    As part of troubleshooting, the extractor runs for ATC & ATC exemptions were monitored.
    The extractor run is successful. The master run for ATC also completed successfully.
    ATC monitoring for Custom Code (result) extractor result
    There is no ATC monitoring data for Custom Code (exemption), but the extractor run is successful.
    Successful ATC master run on the DVS managed system.
    The following notes are implemented:
    2127901 - CCLM: Quality fails to get Results and exemptions due to Incorrect format of date
    2067543 - Quality Collector : Missing Function name for Remote Solution
    Is anyone facing a similar issue?

    Hi Sylke,
    Note 2098187 is for ATC exemptions; the note is currently not implemented in the Solution Manager system SOP (nor in SOD, which is the development system of the Solution Manager landscape).
    But I have configured SOD (Solution Manager development system) against PDS (ECC sandbox), and the Quality collector job was successful (without note 2098187 implemented).
    As SOD and SOP are at the same SP level, I expect the Quality collector job on SOP for DVS to finish successfully as well, just like the Quality collector on SOD for PDS, which has already finished successfully and fetched the data. Can you please advise here?
    Based on note 2077995, I executed the report for DVS in the SOP system, and the status is green for '0SM_ATC'.
    When I check for data in 0SM_ATC, I find the data for the DVS system.
    I am able to successfully run 0SM_ATC_CCL_QUAL, but not 0SM_ATC_RUNDATE_LOOKUP.
    When I try to execute the query 0SM_ATC_RUNDATE_LOOKUP in RSRT on SOP, I get the notification below; it looks like I need to install/activate the query in transaction RSA1.
    0SM_ATC_RUNDATE_LOOKUP in RSA1 of SOP system
    0SM_ATC_RUNDATE_LOOKUP in RSA1 of SOD system (development solution manager where Quality collector job for PDS is successful)
    Sample output of 0SM_ATC_CCL_QUAL:
    Can you advise here as well? It looks like installing/activating 0SM_ATC_RUNDATE_LOOKUP in SOP should solve the problem.
    All the suggested notes are already implemented in SOD and SOP.
    thanks
    Sai

  • VRF Collector Job

    Hello,
    I work with Cisco LMS 4.0. In the Job Browser I see the job "VRF Collector Job" several times with the status Failed.
    Could you please tell me what this job is, why it starts every day, and how I can fix it?
    Thanks.

    This job is started after Topology Data Collection to collect VRF Lite information from devices. If you are not using VRF Lite in your network, then you can disable the jobs under
    Admin > Collection Settings > VRF Lite > VRF Lite Collector Schedule. Uncheck the "Run VRF Collector After Every Data Collection" box.

  • V3 Collector Jobs are failing

    Hi All,
    We have newly installed the Purchasing LIS (02) dataflow and enhanced the 2LIS_02_ITM datasource. We are able to fill the setup tables and perform the INIT load.
    But the V3 extractor job is failing. The ABAP dump says: SY-SUBRC = 2, changes made to structures.
    Please let me know your views.
    Regards and thanks,
    Vivek Das Gupta

    Hi,
    Is there another system with the same client, and have you already done some activity on that data source in that client?
    The error could be because you changed the data source while there were still delta records present for it.
    Try deleting the existing delta queue and any entries in LBWQ for this data source, from all clients if there are any.
    Then do a re-init and start the delta again.
    Hope it helps.
    Ajeet

  • OXM_CBO_STATISTIC COLLECTOR job getting failed

    Hello,
    1.) The job, which is scheduled daily in our SAP SRM EHP 3 system running on Linux and Oracle 11.2, is failing regularly and producing dumps.
    What is the need for this job?
    2.) Also, we get problems in the system when customers run some queries in the 111 (prod) client, like too many dumps (memory related: ASSERTION_FAILED, TIME_OUT, ...) and ORA-01555 errors regarding old snapshots exceeding the undo_retention time. We have to extend PSAPUNDO when such things happen.
    Our undo_retention value is 43200 seconds.
    Should we increase it?
    Undo statistics:
    UNDO_SIZE (bytes): 6.4466E+10
    UNDO_BLOCK_PER_SECOND: 364.346667
    DB_BLOCK_SIZE: 8192
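Assuming the standard Oracle undo-sizing estimate applies (required undo space ≈ UNDO_RETENTION × UNDO_BLOCK_PER_SECOND × DB_BLOCK_SIZE), the figures above are enough to check whether the configured 43200 seconds is actually achievable. A quick sketch using only the posted numbers:

```python
# Figures from the query output above.
undo_size = 6.4466e10             # UNDO_SIZE, in bytes
undo_blocks_per_sec = 364.346667  # UNDO_BLOCK_PER_SECOND
db_block_size = 8192              # DB_BLOCK_SIZE, bytes per block

# Bytes of undo generated per second at the observed rate.
undo_bytes_per_sec = undo_blocks_per_sec * db_block_size

# Seconds of undo the current tablespace can retain at that rate.
achievable_retention = undo_size / undo_bytes_per_sec

# Undo size needed to actually honor undo_retention = 43200 s.
required_size = 43200 * undo_bytes_per_sec

print(f"achievable retention: {achievable_retention:.0f} s "
      f"(~{achievable_retention / 3600:.1f} h)")
print(f"size needed for 43200 s: {required_size / 2**30:.1f} GiB")
```

On these figures the tablespace only sustains roughly 21,600 seconds (about 6 hours) of retention, well short of the configured 43200 seconds, which is consistent with the ORA-01555 errors: the tablespace size, not the parameter, appears to be the limit.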

    Hello SS,
    Please find the attached notepad with details for the OXM job.

  • Job SAP_PERIODIC_ORACLE_SNAPSHOT

    Hi All,
    We have found that the job SAP_PERIODIC_ORACLE_SNAPSHOT plays the main role in updating the current space statistics in DB02.
    But I cannot find how this job gets triggered. I expect some housekeeping job (perhaps PERFORMANCE_COLLECTOR) triggers it, but I am not sure. I also see that SAP_PERIODIC_ORACLE_SNAPSHOT runs hourly, yet the data in DB02 is 8 hours old, so why does every run of the job not update the DB02 statistics?
    So please let me know how the job SAP_PERIODIC_ORACLE_SNAPSHOT gets started and how it updates the DB02 space statistics.
    Regards,
    Shivam Mittal

    Hi Shivam,
    It is part of the RSCOLL00 program, which can be configured via transaction ST03. The scheduling data is stored in the TCOLL table in the database.
    ST03N -> Expert Mode -> Collector and Performance DB -> Performance Monitor Collector -> Execution Times
    As additional info, the performance statistics collector jobs are triggered automatically by the RSCOLL00 program.
    Best regards,
    Orkun Gedik
    Edited by: Orkun Gedik on May 27, 2011 1:50 PM

  • SECURITY Audit Jobs

    We just finished the hardware migration.
    The audit logs in SM20 are not showing any activities.
    The configuration in SM19 is complete.
    Now my question:
    Which jobs (collector jobs?) need to be released to get the SM20 log?
    Thanks

    Thank you Sir !
    I have been fiddling with this for a while now and just got it working today
    @George:
    Here is what I was trying to know.
    If some logs are generated, then it is probably the configured size of the logs that is incorrect. But the default log size is big enough to at least start the creation of logs, so your configuration is okay.
    If no logs are being generated at all, then it has to be either the profile parameters, the configuration of the filters, or the permissions on the file system.
    Also, I started with the dynamic configuration, as Julius suggested. It is much easier to work with, and you don't have to restart the system every now and then to see whether your changes took effect.
    Hope this helps
    Kunal

  • CCLM Background jobs failed

    Dear colleagues,
    I have some failed background jobs:
    SM_CCL:OBJ_<SID>_<Installation_Number>
    19.05.2014 10:05:57 Job started
    19.05.2014 10:05:57 Step 001 started (program RAGS_CC_GET_OBJECTS, variant &0000000000000, user ID SOLMAN_BTC)
    19.05.2014 10:05:59 Selected System: <SID>_<Installation_Number>
    19.05.2014 10:05:59 Variant
    19.05.2014 10:06:01 Solution: Local SAP Solution
    19.05.2014 10:06:42 No objects found.
    19.05.2014 10:06:42 Failed with sy-subrc: 2
    19.05.2014 10:06:42 Job cancelled after system exception ERROR_MESSAGE
    and
    SM_CCL:LUSG_<SID>_<Installation_Number>
    18.05.2014 13:47:30 Job started
    18.05.2014 13:47:31 Step 001 started (program RAGS_CC_GET_LAST_USAGE, variant &0000000000000, user ID SOLMAN_BTC)
    18.05.2014 13:47:33 Variant
    18.05.2014 13:47:35 CCLM Landscape not set up properly. Read long text for further details
    18.05.2014 13:47:35 CC Objects: 0
    18.05.2014 13:47:35 Last Usage Data has never been retrieved before
    18.05.2014 13:47:35 Merge data from AGS_CC_USAGE
    18.05.2014 13:47:35 Solution: Local SAP Solution
    18.05.2014 13:47:35 Regarding time range: 17.05.2014 - 17.05.2014
    18.05.2014 13:47:58 No usage data found
    18.05.2014 13:47:58 Failed with sy-subrc: 2
    18.05.2014 13:47:58 Job cancelled after system exception ERROR_MESSAGE
    Could you please help me to resolve it?
    Thanks a lot,
    Alexander

    We had a similar issue: SM_CCL:QUAL_XXX_0020275865 was aborted, and SAP suggested the following:
    The Quality collector job is failing because ATC is not properly configured for the Managed system.
    According to SE16 table e2e_efwk_log, there is no ATC data being extracted from the managed system.
    I also attempted to run the remote function call SATC_CI_GET_RESULT_CATALOG, and it failed to produce records.
    I've attached a link which outlines the method of ensuring ATC is properly configured (ATC_config).
    Once ATC has been properly configured, the QUALITY job should run to completion.
    Please refer to the link below for setup details:
    Getting Started with the ABAP Test Cockpit for QMs and Admins

  • URGENT : Please help: Purchasing and Inventory loads

    I am currently in BWQ. I did the initial deletion of the setup tables with LBWG and then ran the setup jobs through SBIW for Purchasing and Inventory.
    I also did the industry setting and process key settings.
    I ran the init load yesterday and it brought some records, and then the deltas brought 0 records. I felt there should not be 0 records, so I deleted the setup tables again and reran the setup jobs. I can see records in the setup tables and in RSA3.
    But when I run the load today, it brings 0 records. I thought maybe the delta had run this morning, so it brought 0 records; therefore I ran a full repair request, but there are still only 0 records.
    Can someone explain this and advise what needs to be done?
    Thanks

    Hi BI TCS,
    Initially, when you loaded the init data you got some records, and the delta brought 0 records. This is logical if no new records were created in the R/3 system or you have not executed the delta collector job (job control through LBWE for the particular application).
    Now carry out the following steps for the purchasing data loads:
    1. Delete the setup tables using transaction LBWG.
    2. Perform the setup for purchasing in background mode.
    3. Check the job log for step 2 in SM27.
    4. Find out the number of documents written to the setup tables for purchasing using transaction NPRT.
    5. Now run the init load to BW; this loads the data you have covered in the setup tables.
    6. Now create/change some data in R/3. This data must be within the init range used in your init InfoPackage. If you did the init load with no selection parameters, then no problem.
    7. Run the job control for purchasing in LBWE.
    8. Now run the delta load; it should load the correct delta data into BW.
    Hope this resolves it.
    Regards
    Pradip

  • 11.1.2- Need to find out what reports are being executed by a user

    Hi,
    We have financial reports executed from Workspace. In the Essbase session, only the user ID and the request type (MdxReport) are displayed. We also tried searching the log for the financial report but did not find anything relevant.
    Is there a way to find out which reports are being executed by a user?
    Thanks in advance.
    Regards,
    Saumya

    STAT will give you the information for a limited period of time, and it is limited to the server which you are logged onto yourself. Beyond 24 hours it is of even less use, even if you change the selection-screen values.
    If you want it for a period way back in the past, then you need to use ST03N.
    There are at least 2 dependencies and 1 confusion:
    Dependency 1) The length of the period is determined by the size (length) of the file. You can change this in ST03N (default 50 MB) via the menu settings.
    Dependency 2) The stat collector jobs need to be scheduled to write the information to ST03N (once per hour is a legal requirement in some non-banana-republic countries).
    Confusion 1) There is an obscure function which converts a report submit to a transaction name (there is not much difference anyway), and an even more obscure one which filters what ST03N will record and therefore whether you can read it. You can (un)filter these things away if you search the SAP Marketplace for the term "MONI".
    What is of particular value with this control is that you can even detect a submission of an ABAP which only existed temporarily.
    Also note that having this information is potentially very powerful with respect to the users (some of them are human too), so you should expose and use it responsibly.
    You should also ensure that only responsible users / auditors have access to S_TOOLS_EX.

  • Delta records are not getting extracted. Problem with delta queue

    Hello friends,
    Could you please help me with this scenario?
    It is related to the delta load from application 12, 2LIS_12_VCITM.
    Somehow data is not getting transferred from LBWQ to RSA7. There are more than 200,000 entries in LBWQ, but the job is not able to transfer any entries to RSA7.
    For all the other applications, data is flowing correctly.
    The job status shown is correct, like "XXXXX LUWs are transferred"; there is no error in the job log, and it looks technically correct.
    I have found one note in the SAP Service Marketplace, but it does not match my job status. My job status is NOTESID, whereas per the note it should be WKEDPI*** (something like that).
    I have taken the steps below to solve this problem, but it still does not work:
    1. Removed the job from the schedule (the job for LBWQ to RSA7) and executed the job manually.
    2. Executed program RMSBW12.
    3. Deleted the delta queue in RSA7 for application 12 and regenerated it by running the InfoPackage again (early delta initialization without data transfer). Once the delta queue was generated, ran the job once again.
    I hope I have tried all the correct ways, but I still could not transfer the data.
    With all the above steps, data is still not transferred. The daily InfoPackage runs as per the schedule and brings only 0 records. If it still does not work, then I will have to re-initialize, and that is a big cost for us, as we have to lock the system for users and so on.
    Could you please give me some tips on how I can transfer the data from LBWQ to the delta queue (RSA7)? If it works, it will definitely save me a lot of time.
    Please suggest how I can proceed. Please save me from this situation.
    Many thanks in advance.
    Regards,

    Hi Akshay,
    Go to SMQ1 and check the MCEX12 queue. If your delta records are piling up in SMQ1 and not being transferred to RSA7, you need to check your background job control options once again.
    The background collector job collects the records from SMQ1 and pushes them to RSA7.
    Try executing program RMBWV312 in SE38 and manually push all the records to RSA7. Once all the records are available in RSA7, you will find that the MCEX12 queue is cleared in SMQ1.
    Cheers
    Praveen
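As an aside, the drain pattern Praveen describes (a background collector moving records from the extraction queue in SMQ1/LBWQ into the delta queue in RSA7) can be sketched generically. This is only an illustrative model of the mechanism, not SAP code; every name in it is hypothetical:

```python
from collections import deque

# extraction_queue stands in for the MCEX12 queue in SMQ1/LBWQ;
# delta_queue stands in for the delta queue in RSA7 (hypothetical model).
extraction_queue = deque(f"rec{i}" for i in range(5))
delta_queue = []

def run_collector(extraction_queue, delta_queue):
    """Drain all pending records from the extraction queue into the
    delta queue, as the scheduled collector job (or a manual run of
    the push program) would, and return how many records were moved."""
    moved = 0
    while extraction_queue:
        delta_queue.append(extraction_queue.popleft())
        moved += 1
    return moved

moved = run_collector(extraction_queue, delta_queue)
```

If the collector job is never scheduled, or its job control is misconfigured, the drain step simply never runs and records pile up in the extraction queue, which matches the 200,000-entry backlog described above.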

  • SAP_COLLECTOR_FOR_PERFMONITOR dumping MEMORY_NO_MORE_PAGING

    Dear friends,
    In one of our BW production systems, 2 instances of the hourly job SAP_COLLECTOR_FOR_PERFMONITOR are dumping MEMORY_NO_MORE_PAGING.
    I have already checked this thread:
    MEMORY_NO_MORE_PAGING
    But I am unable to find out the reason, and I also do not find any MONI key in the ABAP dump. Please suggest.
    PART OF ABAP DUMP :
       40          SEGMT     = SEGMT.
       41 *     exceptions
       42 *          others       = 1.
       43
       44 EXPORT TS    TO MEMORY ID 'RSORAT2M'.
       45 EXPORT TD110 TO MEMORY ID 'RSORAT4M'.
    >>>>> EXPORT SEGMT TO MEMORY ID 'RSORAT6M'.
       47
       48 * INCLUDE rsorat0b.       T.S. 11/96
       49 * INCLUDE rsorat0m.       T.S. 11/96
       50 * INCLUDE rsorat0f.       T.S. 11/96
       51 *INCLUDE RSORAT2F.
    thanks
    ashish

    Hello friends,
    Of course MEMORY_NO_MORE_PAGING is a memory dump, and it can be avoided if I increase the paging sizes. We have a small value for this parameter, and I am not convinced that increasing it is required now; this was working nicely some time back.
    So we have the ABAP statement EXPORT which is causing MEMORY_NO_MORE_PAGING, but the same ABAP statement was working fine some time back and we did not have this problem. So I assume something like a large number of history records, as per SAP Note 713211, is troubling us.
    Now I want to find out what is causing so much memory and paging-file demand.
    We also get a similar dump when we manually try to perform DB checks & update histories with transaction DB02OLD. This is the same thing the collector job is doing when it dumps.
    Any new suggestions, please?
    thanks
    ashish
