Automatic stats gathering job log

Hi
we are using the auto stats gathering feature in Oracle 10g. I want to know the following information regarding this:
For the last one week, the start time and end time of this job, and any errors raised while gathering stats. Where can I find this info?
Thanks,
pramod

You can check the following data dictionary view:
SQL> desc DBA_SCHEDULER_JOBS;
Name                            Null?    Type
OWNER                           NOT NULL VARCHAR2(30)
JOB_NAME                        NOT NULL VARCHAR2(30)
JOB_SUBNAME                              VARCHAR2(30)
JOB_CREATOR                              VARCHAR2(30)
CLIENT_ID                                VARCHAR2(64)
GLOBAL_UID                               VARCHAR2(32)
PROGRAM_OWNER                            VARCHAR2(4000)
PROGRAM_NAME                             VARCHAR2(4000)
JOB_TYPE                                 VARCHAR2(16)
JOB_ACTION                               VARCHAR2(4000)
NUMBER_OF_ARGUMENTS                      NUMBER
SCHEDULE_OWNER                           VARCHAR2(4000)
SCHEDULE_NAME                            VARCHAR2(4000)
SCHEDULE_TYPE                            VARCHAR2(12)
START_DATE                               UNDEFINED
REPEAT_INTERVAL                          VARCHAR2(4000)
EVENT_QUEUE_OWNER                        VARCHAR2(30)
EVENT_QUEUE_NAME                         VARCHAR2(30)
EVENT_QUEUE_AGENT                        VARCHAR2(30)
EVENT_CONDITION                          VARCHAR2(4000)
EVENT_RULE                               VARCHAR2(65)
END_DATE                                 UNDEFINED
JOB_CLASS                                VARCHAR2(30)
ENABLED                                  VARCHAR2(5)
AUTO_DROP                                VARCHAR2(5)
RESTARTABLE                              VARCHAR2(5)
STATE                                    VARCHAR2(15)
JOB_PRIORITY                             NUMBER
RUN_COUNT                                NUMBER
MAX_RUNS                                 NUMBER
FAILURE_COUNT                            NUMBER
MAX_FAILURES                             NUMBER
RETRY_COUNT                              NUMBER
LAST_START_DATE                          UNDEFINED
LAST_RUN_DURATION                        UNDEFINED
NEXT_RUN_DATE                            UNDEFINED
SCHEDULE_LIMIT                           UNDEFINED
MAX_RUN_DURATION                         UNDEFINED
LOGGING_LEVEL                            VARCHAR2(4)
STOP_ON_WINDOW_CLOSE                     VARCHAR2(5)
INSTANCE_STICKINESS                      VARCHAR2(5)
RAISE_EVENTS                             VARCHAR2(4000)
SYSTEM                                   VARCHAR2(5)
JOB_WEIGHT                               NUMBER
NLS_ENV                                  VARCHAR2(4000)
SOURCE                                   VARCHAR2(128)
DESTINATION                              VARCHAR2(128)
COMMENTS                                 VARCHAR2(240)
FLAGS                                    NUMBER
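DBA_SCHEDULER_JOBS shows the job definition, but for the per-run start time, end time, and errors over the last week you would typically query the run-details view instead. A sketch (this assumes the 10g default job name GATHER_STATS_JOB; adjust if your site renamed it):

```sql
-- Start/end times and errors for the auto stats job over the last 7 days.
SELECT job_name,
       status,
       actual_start_date,
       actual_start_date + run_duration AS end_date,
       error#,
       additional_info
FROM   dba_scheduler_job_run_details
WHERE  job_name  = 'GATHER_STATS_JOB'
AND    log_date >= SYSTIMESTAMP - INTERVAL '7' DAY
ORDER  BY actual_start_date;
```

DBA_SCHEDULER_JOB_LOG holds the one-line-per-run history if the details rows have already been purged.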

Similar Messages

  • Automatic stats gathering

    Hi:
I am on 10.2.0.3 and use automatic stats gathering with statistics_level=TYPICAL, but some of my colleagues say that they "could not trust Oracle to do this job". We are running PeopleSoft Financials and HR (practically every module) and things seem fine.
I'd like to get a general feel on this issue. Do you use it, or do you run your own scripts? Does anybody know of significant limitations?
    TIA.

    Most 10g databases run fine with automatic stats gathering and this works well 99% of the time.
If you have very specific performance requirements (e.g. stock brokers can be fined if transactions don't complete in a certain time), then you cannot afford to have plans change just because the stats changed from an automated dbms_stats job that is not fully documented or 100% predictable. For these systems DBAs are generally very nervous of any change, and often lock the table stats, set an outline, or disable automatic stats and write their own job.
    http://www.contractoracle.com
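As one illustration of the "lock or disable" approach mentioned above (a sketch, not a recommendation; the job name is the 10g default, and the schema/table names are placeholders):

```sql
-- Freeze stats on a critical table so the nightly job cannot change its plans.
EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => 'APP', tabname => 'ORDERS');

-- Or switch the automatic job off entirely and schedule your own.
EXEC DBMS_SCHEDULER.DISABLE('SYS.GATHER_STATS_JOB');
```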

  • 11g automatic stats gathering

    hi *
could you please confirm whether 11g has automatic stats gathering enabled by default, as 10g does?
    My 2nd question is then, what is the best way to set such an automatic job to gather stats on a regular basis? If you have any script/code, please share.
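For reference, 11g does enable automatic optimizer stats collection by default, as one of the automated maintenance tasks. A sketch of how to check and (re)enable it, using the standard DBA_AUTOTASK views and DBMS_AUTO_TASK_ADMIN package (verify against your exact release):

```sql
-- Is the auto stats task enabled?
SELECT client_name, status
FROM   dba_autotask_client
WHERE  client_name = 'auto optimizer stats collection';

-- Re-enable it in all maintenance windows if it was switched off.
BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);
END;
/
```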

You're right, I missed the answer; thanks for your hints.
I know the consequences of disabling stats.
But one should know that the default isn't always the best, e.g. the open_cursors parameter.
P.S. Although you gave me the answer, I dislike the way you contributed to this thread. You see, you're not my boss and you're not obliged to respond in my thread, so cool down, dear friend. Don't shout with exclamation marks, please.
Perhaps I have no time to do deep research and need the solution fast, and I believe this forum is also for that reason.
So don't question people's questions; there are only wrong answers, not wrong questions. If you know the answer, just pass it on.
    Thanks to all of you,

  • Updating LAST_ANALYZED with automatic stats gathering

    I am looking at the last_analyzed column in dba_tables and it shows some tables have not been analyzed since early April last year. But the scheduler is running the stats gathering program nightly. What gives?

    That's probably why. Here is the excerpt from Oracle Documentation regarding the job:
The GATHER_STATS_JOB job gathers optimizer statistics by calling the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC procedure. The GATHER_DATABASE_STATS_JOB_PROC procedure collects statistics on database objects when the object has no previously gathered statistics or the existing statistics are stale because the underlying object has been modified significantly (more than 10% of the rows). The DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC is an internal procedure, but it operates in a very similar fashion to the DBMS_STATS.GATHER_DATABASE_STATS procedure using the GATHER AUTO option. The primary difference is that the GATHER_DATABASE_STATS_JOB_PROC procedure prioritizes the database objects that require statistics, so that those objects which most need updated statistics are processed first. This ensures that the most-needed statistics are gathered before the maintenance window closes.
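A hedged way to see which objects the job currently considers stale or missing (STALE_STATS is populated in DBA_TAB_STATISTICS when table monitoring is on, which is the 10g default; the schema name is a placeholder):

```sql
-- Tables with no stats, or stats marked stale (>10% of rows changed).
SELECT owner, table_name, last_analyzed, stale_stats
FROM   dba_tab_statistics
WHERE  owner = 'APP'
AND    (stale_stats = 'YES' OR last_analyzed IS NULL)
ORDER  BY last_analyzed NULLS FIRST;
```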

  • Stats gathering

    Hi All,
    oracle 11gr2
    Linux
I am new to databases. How do I gather statistics in Oracle 11g at the database, schema, and table level?
What is the impact of gathering stats on a daily basis? How can I automate stats gathering database-wide?
Can anyone please suggest and explain with an example, if possible?
    thanks,
    Sam.

AUTO_SAMPLE_SIZE lets Oracle Database determine the best sample size necessary for good statistics, based on the statistical properties of the object. Because each type of statistics has different requirements, the size of the actual sample taken may not be the same across the table, columns, or indexes (from the link posted before by Aman).
If you omit this clause, a full scan will be done (again from the document attached by Aman):
Gathering statistics without sampling requires full table scans and sorts of entire tables. Sampling minimizes the resources necessary to gather statistics.
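Putting that together, a minimal sketch of a manual schema-level gather using AUTO_SAMPLE_SIZE (the schema name is a placeholder; the other parameters fall back to Oracle's defaults):

```sql
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'SAM',                        -- placeholder schema
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- let Oracle pick the sample
    cascade          => TRUE);                        -- include indexes
END;
/
```

To automate it, the same call can be wrapped in a DBMS_SCHEDULER job, though on 10g/11g the built-in automatic task usually makes that unnecessary.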

  • Exclude TAO tables from stat gathering

    Hi,
    on FSCM 9.1, Tools 8.52. on Windows 2008. DB Oracle 11g R2.
Automatic stats gathering collects statistics for the temporary tables PS_XXX_TAO, which distorts the cardinality in explain plans. I can delete the statistics with:
dbms_stats.delete_table_stats(ownname=>'SYSADM',tabname=>'PS_XXX_TAO');
but this is not permanent and the stats are re-collected during the night.
-how can I run a definitive delete for these tables? I mean delete once, but with a permanent result?
or
-how can I exclude these tables from automatic stats gathering?
    Thanks.

Thank you, I applied:
exec dbms_stats.lock_table_stats('ADAM','B')
Is there any PeopleSoft recommendation on this in the documentation?
    Regards.
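For the TAO tables specifically, the usual pattern is to delete the unwanted stats once and then lock them, so the nightly job skips those tables from then on. A sketch using the owner and table name from the original post:

```sql
BEGIN
  DBMS_STATS.DELETE_TABLE_STATS(ownname => 'SYSADM', tabname => 'PS_XXX_TAO');
  DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SYSADM', tabname => 'PS_XXX_TAO');
END;
/
```

With locked, empty stats the optimizer typically falls back to dynamic sampling for these tables, which is often what you want for volatile working-storage tables.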

  • Partition Maintenance and Stats gathering running together

    Hi All,
    I have a scenario in my production database wherein Partition Maintenance and Stats Gathering jobs begin at the same time (both are run weekly on Sundays only).
    Partition Maintenance is a job which basically rebuilds indexes and creates new partitions on tables in the database. While Stats Gathering job gathers stats on database tables.
My question is based on this scenario: the maintenance job is rebuilding indexes on a table while, at the same time, the stats gathering job is trying to gather stats for that same table. Will this cause any issues?
I would like to know whether there is any problem with running both at the same time.
    Database version: Oracle 10 R2
    Environment: Unix AIX version 5.3
    Thanks in advance.

    Sandyboy036 wrote:
    Thanks for the reply.
    Could you elaborate what effect could it have on the table or some issues if I were to run both at the same time please ?
Thanks.
I would be concerned that the statistics would not reflect reality.
    A partition could be created and populated & the statistics would not reflect this recent activity.
    why are you regularly rebuilding indexes?

  • Job logs are automatically deleted..Why?

    Hi,
All job logs older than one week are deleted from my system. Why is that?
Is there a parameter setting for this?
How can I avoid it?
    Thanks in Advance.
    Regards,
    Jansi

    R/3 JOB OVERVIEW
    Job name : SAP_REORG_JOBS
    Program : RSBTCDEL2
    Variant : Yes
    Parameters variant : 7 days (good value, but check in company)
    Frequency : Daily
    Description : Delete batch job logs
WARNING: Do not schedule the job with program RSBTCDEL; many more jobs are deleted; see note 837691.
    Job name : SAP_REORG_SPOOL
    Program : RSPO1041
    Variant : Yes
    Parameters variant : 7 days (good value, but check in company)
    Frequency : Daily
    Description : Delete old spool requests.
    Job name : SAP_SPOOL_CONSISTENCY_CHECK
    Program : RSPO1043
    Variant : Yes
    Parameters variant : 7 days (good value, but check in company)
    Frequency : Daily
    Description : Spool data (TemSe) consistency check.
    NOTE: Job must be scheduled an hour after SAP_REORG_SPOOL.
    Job name : SAP_REORG_BATCHINPUT_<CLIENT>
    Program : RSBDCREO
    Variant : Yes
    Parameters variant : 5 days (good value, but check in company)
    Frequency : Daily
    Description : Delete successfully processed BIM sessions and their logs from the TEMSE table.
    WARNING: Job must be scheduled when no other jobs are running.
    NOTE: Job must be scheduled for all customer clients (meaning each individual client).
    Job name : SAP_REORG_ABAPDUMPS
    Program : RSSNAPDL
    Variant : Yes
    Parameters variant : 30000 / 500
    Frequency : Daily
    Description : Delete short dumps from SNAP table.
    Job name : SAP_REORG_JOBSTATISTIC
    Program : RSBPSTDE
    Variant : Yes
    Parameters variant : 30 days
    Frequency : Monthly
    Description : Delete runtime statistics.
Job name : SAP_COLLECTOR_FOR_JOBSTATISTIC
    Program : RSBPCOLL
    Variant : No
    Parameters variant :
    Frequency : Daily
    Description : CCMS: Collector for Background Job Run-time Statistics
    Job name : SAP_REORG_PRIPARAMS
    Program : RSBTCPRIDEL
    Variant : No
    Parameters variant :
    Frequency : Monthly
    Description : Reorganization of Print Parameters for Background Jobs (see note 307970).
    Job name : SAP_REORG_XMILOG
    Program : RSXMILOGREORG
    Variant : Yes
    Parameters variant : 7 days
    Frequency : Weekly
    Description : Reorganization of XMI interface loggings.
    Job name : SAP_CCMS_MONI_BATCH_DP
    Program : RSAL_BATCH_TOOL_DISPATCHING
    Variant : No
    Parameters variant :
    Frequency : Hourly
    Description : Starts CCMS Method Dispatching in the background.
    Job name : SAP_UPDATE_RECORDS
    Program : RSM13002
    Variant :
    Parameters variant :
    Frequency :
    Description :
WARNING: Do not schedule this job!
The following parameter is set: 'rdisp/vb_delete_after_execution = 1'.
Please check these settings; if they are there, remove them or change them according to your requirements.

  • Partitioned Incremental Table - no stats gathered on new partitions

    Dear Gurus
    Hoping that someone can point me in the right direction to trouble-shoot. Version Enterprise 11.1.0.7 AIX.
    Range partitioned table with hash sub-partitions.
    Automatic stats gather is on.
dba_tables shows global stats YES, analyzed 06/09/2011 (when first analyzed on migration of the data), and dba_tab_partitions shows most partitions analyzed on that date and most others up until 10/10/2011, done automatically by the weekend stats-gather scheduled job.
46 new partitions have been added in the last few months, but no stats have been gathered on them in dba_tab_partitions, and dba_tables last_analyzed still says 06/09/2011, the date it was first analyzed by gathering stats manually rather than using the auto stats gatherer.
    Checked dbms_stats.get_prefs set to incremental and all the default values recommended by Oracle are set including publish = TRUE.
    dba_tab_partitions has no values in num_rows, last_analyzed etc.
    dba_tab_modifications has no values next to the new partitions but shows inserts as being 8 million approx per partition - no deletes or updates.
    dba_tab_statistics has no values next to the new partitions. All other partitions are marked as NO in the stale column.
Checked the dbms_stats job history: it showed that stats gathering stopped within the automatically allowed window.
Looked at Grid Control: the stats gather for the table started at 6am Saturday morning and closed at 2am Monday morning.
Checked the recommended window: it stopped analyzing that table at exactly 2am, having tried to analyze it since 6am Saturday morning.
    Had expected that as the table was in incremental mode - it wouldn't have timed out and the new partitions would have been analyzed within the window.
    The job_queue_processes on the database = 1.
    Increased the job_queue_processes on the database = 2.
Had been told that the original stats had taken 3 days in total to gather, so via Grid I scheduled a dbms_scheduler job (10.2.0.4) to gather stats on that table over a bank holiday weekend, and asked management to start it 24 hours earlier to allow extra time.
The Oracle defaults were accepted (as recommended in various seminars and white papers), except CASCADE: although I wanted the indexes to be analyzed, I decided that was icing on the cake I couldn't afford.
Went to work 24 hours later and checked that the dba_scheduler job was running. Checked the stats in dba_tab_statistics: nothing had changed. I had expected to see partition stats appear for the ungathered partitions first, but a quick check of Grid showed it was doing a SELECT via full table scan, and still on the first datafile!! Some have suggested watching out for the DELETE taking a long time, but I only saw evidence of the SELECT, so I ran an AWR report, and sure enough there was a full table scan on the whole table. Although the weekend gather-stats job was also in operation, it wasn't doing my table, but it was definitely running against others.
    So I checked the last_analyzed on other tables - one of them is a partitioned table - and they were getting up-to-date stats. But the tables and partitions are ridiculously small in comparison to the table I was focussed on.
Next day I came in, checked the dba_scheduler job log, and my job had completed successfully within 24 hours.
    Horrors of horrors - none of the stats had changed one bit in any view I looked at.
I got my Excel spreadsheet out and worked out whether, because less than 10% had changed and I'd accepted the defaults, that was why there was nothing in dba_tables to reflect that it had been analyzed when I asked.
My rough calculations showed the changes were around the 20% mark, so gather_table_stats should have picked that up and gathered stats for the new partitions. There was nothing in evidence in any view at all.
I scheduled the job via Grid 10.2.0.4 for an Oracle database using incremental stats introduced in 11.1.0.7: is there a problem at that level?
I understand there are bugs with incremental tables and gathering statistics in 11.1.0.7 which are resolved in 11.2; however, we've applied all the CPUs up until April of last year. Is it possible that, being so far behind, we've missed something?
    Or that I really don't know how to gather stats on partitioned tables and it's all my fault - in which case - please let me know - and don't hold back!!!
    I'd rather find a solution than save my reputation!!
    Thanks for anyone who replies - I'm not online at work so can't always give you my exact commands done - but hopefully you'll give me a few pointers of where to look next?
    Thanks!!!!!!!!!!!!!

    Save the attitude for your friends and family - it isn't appropriate on the forum.
    >
    I did exactly what it said on the tin:
    >
    Maybe 'tin' has some meaning for you but I have never heard of it when discussing
    an Oracle issue or problem and I have been doing this for over 25 years.
    >
    but obviously cannot subscribe to individual names:
    >
    Same with this. No idea what 'subscribe to individual names' means.
    >
When I said defaults, I really did mean the defaults given by Oracle, not some defaults made up by me. I thought that by putting 'Oracle' in my text there, people would realise what the defaults were.
If you are suggesting that in all posts I should name the Oracle defaults because the gurus on the site do not know them, then please let me know, as I have wrongly assumed that I am asking questions of gurus who know this stuff inside out.
Clearly I have got this site wrong.
    >
    Yes - you have got this site wrong. Putting 'Oracle' in the text doesn't enable people to realize
    what the defaults in your specific environment are.
    There is not a guru that I know of,
    and that includes Tom Kyte, Jonathan Lewis and many others, that can tell
you, sight unseen, what default values are in play in your specific environment
given only the information you provided in your post.
What is, or isn't, a 'default' can often be changed at either the system or session level.
    Can we make an educated guess about what the default value for a parameter might be?
    Of course - but that IS NOT how you troubleshoot.
    The first rule of troubleshooting is DO NOT MAKE ANY ASSUMPTIONS.
    The second rule is to gather all of the facts possible about the reported problem, its symptoms
    and its possible causes.
    These facts include determining EXACTLY what steps and commands the user performed.
Next you post the prototype for gathering stats:
DBMS_STATS.GATHER_TABLE_STATS (
   ownname          VARCHAR2,
   tabname          VARCHAR2,
   partname         VARCHAR2 DEFAULT NULL,
   estimate_percent NUMBER   DEFAULT to_estimate_percent_type(get_param('ESTIMATE_PERCENT')),
   block_sample     BOOLEAN  DEFAULT FALSE,
   method_opt       VARCHAR2 DEFAULT get_param('METHOD_OPT'),
   degree           NUMBER   DEFAULT to_degree_type(get_param('DEGREE')),
   granularity      VARCHAR2 DEFAULT get_param('GRANULARITY'),
   cascade          BOOLEAN  DEFAULT to_cascade_type(get_param('CASCADE')),
   stattab          VARCHAR2 DEFAULT NULL,
   statid           VARCHAR2 DEFAULT NULL,
   statown          VARCHAR2 DEFAULT NULL,
   no_invalidate    BOOLEAN  DEFAULT to_no_invalidate_type(get_param('NO_INVALIDATE')),
So what exactly is the value for GRANULARITY? Do you know?
    Well it can make a big difference. If you don't know you need to find out.
    >
    As mentioned earlier - I accepted all the "defaults".
    >
    Saying 'I used the default' only helps WHEN YOU KNOW WHAT THE DEFAULT VALUES ARE!
    Now can we get back to the issue?
    If you had read the excerpt I provided you should have noticed that the values
    used for GRANULARITY and INCREMENTAL have a significant influence on the stats gathered.
    And you should have noticed that the excerpt mentions full table scans exactly like yours.
    So even though you said this
    >
    Had expected that as the table was in incremental mode
    >
    Why did you expect this? You said you used all default values. The excerpt I provided
    says the default value for INCREMENTAL is FALSE. That doesn't jibe with your expectation.
    So did you check to see what INCREMENTAL was set to? Why not? That is part of troubleshooting.
    You form a hypothesis. You gather the facts; one of which is that you are getting a full table
    scan. One of which is you used default settings; one of which is FALSE for INCREMENTAL which,
    according to the excerpt, causes full table scans which matches what you are getting.
    Conclusion? Your expectation is wrong. So now you need to check out why. The first step
    is to query to see what value of INCREMENTAL is being used.
    You also need to check what value of GRANULARITY is being used.
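Checking those two values is a one-liner with DBMS_STATS.GET_PREFS (available from 11.1; the owner and table names here are placeholders for the poster's actual objects):

```sql
SELECT dbms_stats.get_prefs('INCREMENTAL', 'APP', 'BIG_PART_TAB') AS incremental,
       dbms_stats.get_prefs('GRANULARITY', 'APP', 'BIG_PART_TAB') AS granularity
FROM   dual;
```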
    And you say this
    >
    Or that I really don't know how to gather stats on partitioned tables and it's all my fault - in which case - please let me know - and don't hold back!!!
    I'd rather find a solution than save my reputation!!
    >
    Yet when I provide an excerpt that seems to match your issue you cop an attitude.
    I gave you a few pointers of where to look next and you fault us for not knowing the default
    values for all parameters for all versions of Oracle for all OSs.
How disingenuous is that?

  • Writing in SM37 Job Log

    Hi Abappers,
I have a requirement to display all the error messages in an ALV when my program runs in the foreground. But when the program runs in the background, those error messages must be displayed in the job log in SM37.
I should not raise any error or information messages that end up in the log automatically, as that affects the foreground processing of my program.
Is there any FM to write to the job log? Or can I use the MESSAGE statement in such a way that it is not displayed in the foreground?
    Thanks in Advance,

    Hi,
Try this:
If you are using an ALV grid in the foreground, use the same grid without creating the container (basically, check SY-BATCH before creating a container). Then the same ALV grid will create a spool with all the messages during background execution.
    Regards,
    Sharin Varghese
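A minimal ABAP sketch of the SY-BATCH check described above (the variable and container names are illustrative, not from the original post):

```abap
* Only create the custom container in dialog mode; in a background job
* the grid output goes to the spool instead.
IF sy-batch IS INITIAL.
  CREATE OBJECT go_container
    EXPORTING
      container_name = 'ALV_CONT'.          " dialog: screen container
  CREATE OBJECT go_grid
    EXPORTING
      i_parent = go_container.
ELSE.
  CREATE OBJECT go_grid
    EXPORTING
      i_parent = cl_gui_container=>screen0. " background: full screen, spools
ENDIF.
```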

  • Message with IDOC number, created by LSMW, missing in job log in SM37

    Hi gurus,
    We have a temporary interface which uses LSMW to create IDOCs and update in SAP. It's used for materials, BOMs and document info records. In LSMW we have defined standard message types MATMAS_BAPI, BOMMAT and DOCUMENT_LOAD for the IDOCs. All these have the same problem.
    A background job runs and starts LSMW. In the job log in SM37 I want to see which IDOCs were created. For some reason this is different in my development system and my test system, and as far as I know all settings should be the same. In the test system LSMW creates more message lines in the job log, than it does in the dev system. Message number E0-097 is "IDOC XXXX added", and this is missing in the dev system.
    This is what it looks like in the dev system:
Data transfer started for object 'MATMAS' (project 'X', subobject 'Y')      /SAPDMC/LSMW  501  I
Import program executed successfully                                        /SAPDMC/LSMW  509  I
File 'XXX.lsmw.read' exists                                                 /SAPDMC/LSMW  502  I
Conversion program executed successfully                                    /SAPDMC/LSMW  513  I
Data transfer terminated for object 'MATMAS' (project 'X', subproject 'Y')  /SAPDMC/LSMW  516  I
    And this is what it looks like in the test system. More information, which is exactly what I want in dev system too:
Data transfer started for object 'MATMAS' (project 'X', subobject 'Y')      /SAPDMC/LSMW  501  I
Import program executed successfully                                        /SAPDMC/LSMW  509  I
File 'XXX.lsmw.read' exists                                                 /SAPDMC/LSMW  502  I
Conversion program executed successfully                                    /SAPDMC/LSMW  513  I
File 'XXX.lsmw.conv' exists                                                 /SAPDMC/LSMW  502  I
IDoc '0000000002489289' added                                               E0            097  S
File 'XXX.lsmw.conv' transferred for IDoc generation                        /SAPDMC/LSMW  812  I
Data transfer terminated for object 'MATMAS' (project 'X', subproject 'Y')  /SAPDMC/LSMW  516  I
In both cases the IDOC is created and the update works fine.
My only issue is that I can't see the IDOC number in the dev system. I know I can get the IDOC number in WE02, but in this case we have program logic which reads the job log to check the IDOC status before sending an OK message back to the other side of the interface.
I hope some of you have an idea of what I can change to get message E0-097 with the IDOC number into the log.
    Regards,
    Lisbeth

    Hi Arun,
If you want to show your messages in the job log, you have to use the MESSAGE statement. If you use WRITE statements, an output list is created which can be found in the spool (there is an icon to go to the spool directly).
    Regards,
    John.

  • Error in Source system (Job is not going in to job log of CRM system)

    Hi,
We replicated datasources from the CRM server in the BI 7 server; they replicated successfully and are in the Active version.
All transaction datasources which are 7.0 version execute successfully, but the datasources which are 3.5 version, i.e. the datasources for master data infoobjects, give an error while executing the infopackage.
    The error message is as follows:
    "Diagnosis
         In the source system, there is no transfer structure available for
         InfoSource 0CRM_MKTMETA_ATTR .
    System Response
         The data transfer is terminated.
    Procedure
         In the Administrator Workbench, regenerate from this source system the
         transfer structure for InfoSource 0CRM_MKTMETA_ATTR ."
Also, while executing the infopackage, the job log in the source system is not able to create the corresponding job.
The following actions have already been taken:
1. Datasource replication
2. Confirmed that the transfer structures are available and in ACTIVE state.

    Hi
It seems some changes have taken place at the source system for the master data datasource. Try to re-check it at the source system and activate/re-transport it.
On the BI side:
RSA13 -> Source System -> Your DS -> Replicate the DataSource -> try running RS_TRANSTRUC_ACTIVATE_ALL (SE38) or activate the transfer rules manually -> then Full/Re-Init -> Delta uploads.
Hope it helps.

  • Write Message in Job Log from FM

    Hi everyone,
I'm having an issue trying to find a way to write a message to the job log.
I've read a lot of solutions, but I can't find one that describes how to do it from a function module.
All the answers focus on reports, and I have to do this from a function module.
    If anyone can help me with that, I´ll appreciate it.
    Thanks everyone.
    Best regards.
    Pablo.

    Hi Thomas,
Thanks for replying. The MESSAGE statement does not work.
Regarding the last question: it can't be done like that.
    Regards.
    Pablo.

  • Regarding Messages to Job Log

    Hi,
What is a job log?
I have to write messages to the job log.
How do I do this?
How can I check whether the messages were written or not?
How will they be seen by the client/user?
    regards,
    kiran

    Hi Kiran
Use the MESSAGE statement to display a message in the job log, e.g.:
MESSAGE log_message TYPE 'I'.
Place this statement after the block of statements in your program where you want the message to appear in the log. (A WRITE statement goes to the spool list, not the job log.)
You can see the job log in transaction SM37.
    hope this helps you.
    regards
    Message was edited by:
            Sarah Bollavaram

  • How to write log information into SM37 batch job log

    Hi,
I have a report running in batch mode, and I would like to log the start time and end time of some parts of the code (different function modules). I need to write this log information into the batch job log so that I can check the time frame of my FMs.
After searching SDN, I can only find information on how to write to the application log displayed in SLG1, but that's not what I want. I want to write batch log information and check it in SM37.
    If you have some solution or code to share, please. Thanks a lot.
    Best Regards,
    Ben

    Hi Nitin
Thanks for the reply. Could you explain it with some code?
I tried to use the WRITE statement, but it did not work; I could not see the result in SM37.
write : 'start of the FM1 processing'.
FM1 code
write : 'end of the FM1 processing'.
Those two statements did not show in SM37.
1) How do I use an information message?
2) How do I use the NEW-PAGE PRINT ON and PRINT OFF commands?
    I would appreciate if you can write some code ,that I can use directly.
    Thanks a lot.
    Best Regards,
    Ben
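For the information-message question: in a background job, MESSAGE statements of type I or S are written to the SM37 job log, while WRITE output goes to the spool. A sketch using the generic message 001 of class 00 ("& & & &"); the texts are illustrative:

```abap
* Each of these appears as a line in the SM37 job log when run in background.
MESSAGE i001(00) WITH 'Start of FM1 processing' sy-uzeit.

* ... call the function module here ...

MESSAGE i001(00) WITH 'End of FM1 processing' sy-uzeit.
```

This works inside a function module as well as in a report, since the message is tied to the background work process, not to the program type.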
