Job is not executing

Hi,
I scheduled a package on the test server, but when I schedule the same package on the live server it does not execute.
The job shows up in the USER_JOBS table, but it never runs. Could anybody suggest what the problem is?
Regards
Gagan

Could you give the COMMIT a try? On my server the total time is 0 for my job and I know that it completed. Maybe your test server is just slower.
drop table testi;
create table testi (i int);

declare
  x number;
begin
  sys.dbms_job.submit
    ( job       => x
    , what      => 'insert into testi values (1); commit;'
    , next_date => sysdate + 10/86400
    , interval  => null
    , no_parse  => true
    );
  sys.dbms_output.put_line('Job Number is: ' || to_char(x));
  commit;  -- the job only becomes visible to the job queue once the submitting session commits
end;
/

select * from testi;
select total_time from user_jobs;
I                     
1                     
1 rows selected
TOTAL_TIME            
0                     
1 rows selected
Dave
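
If the job still sits in USER_JOBS without running after a COMMIT, a minimal diagnostic sketch (nothing here beyond the standard dictionary views and one instance parameter) is to confirm that the job queue is enabled on the live server and that the job has not gone broken:

select value from v$parameter where name = 'job_queue_processes';  -- 0 means DBMS_JOB jobs are never picked up

select job, broken, failures, last_date, next_date, what
from   user_jobs;

A BROKEN flag of 'Y' or a climbing FAILURES count points at the job itself; JOB_QUEUE_PROCESSES = 0 points at the instance configuration.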

Similar Messages

  • Scheduled GridControl jobs do not get executed

    Hello,
    I have a really strange behaviour lately with the GridControl jobs:
    We have several GridControl jobs which are scheduled (e.g. from Monday to Friday at 05:00 am). Usually they get executed at the scheduled time, but some of them
    stop getting executed from one day to the next (and no longer appear in the "Job Activity" tab). When looking up these jobs in the job library, the scheduled
    time is still correct. The agent on the target node hasn't stopped and didn't report any errors. It's just as if someone had deleted these jobs (which is not the case).
    That's really bad, since we rely on those jobs...
    Does anybody know this behaviour? It looks like a bug to me...
    Rgds
    JH

    Please refer to note 457792.1.

  • Crontab job does not execute Export script

    Hello Dear Oracle/Linux Gurus,
    I have limited knowledge of cron job scheduling. After reading a few articles and some documentation, I scheduled an export job with crontab.
    When I run the script manually (./exportuser.sh) it works fine, but it doesn't trigger through crontab. The log file exists but doesn't contain any entries; it's empty.
    I would be grateful if you could let me know where the mistake is.
    Thank you in advance.
    ----Crontab entries are below here----
    oracle@backup-oracle:~/oracle/product/10.2.0/db_1/admin/orcl/dpdump> crontab -l
    # DO NOT EDIT THIS FILE - edit the master and reinstall.
    # (/tmp/crontab.XXXXFcYToV installed on Mon Dec 5 10:26:28 2011)
    # (Cron version V5.0 -- $Id: crontab.c,v 1.12 2004/01/23 18:56:42 vixie Exp $)
    * 21 * * * /home/oracle/oracle/product/10.2.0/db_1/rdbms/scripts/exportuser.sh > /home/oracle/oracle/product/10.2.0/db_1/rdbms/scripts/log/exportuser.log
    ----exportuser.sh is below here-----
    cd /home/oracle/oracle/product/10.2.0/db_1/admin/orcl/dpdump
    find /home/oracle/oracle/product/10.2.0/db_1/admin/orcl/dpdump -ctime +7 -exec rm{} \;
    export DATE=$(date +"%d-%m-%Y")
    exp scott/tiger file=scott_md_bk_$DATE.dmp log=scott_md_bk_$DATE.log rows=yes
    gzip scot_md_bk_*

    I suggest putting the following line at the beginning of your exportuser.sh script:
    date >> /tmp/exportuser.started
    Then check /tmp/exportuser.started to see whether the script actually gets called. Errors are normally sent by email to the user running the cron task. Did you receive any? You can also check the cron log file for errors and verify that the cron daemon is actually running, e.g. "ps -ef | grep crond". Cron tasks can run as root or as any user who is allowed to use cron. I'm not that familiar with SuSE, which is quite different from other Linux distros.

  • OMS Jobs could not execute after server shutting down for days in holiday.

    Hi all,
    I found yesterday that none of our jobs would run after a long holiday (several days). The Oracle Management Server and the database server are installed on different machines, and both were shut down before the holiday. The jobs were scheduled to run day by day with no end date.
    However, when we came back to work and started up the database server and the Management Server, those jobs would not run during the following two days. I added a new job and found that it runs correctly. I also found there are many extra connections from the OS user DBSNMP to the database; why? Each time I stop the agent service on the database host and then restart it, I find a failure notice in the job history saying "job failed while running due to the agent stopped".
    Should I recreate all those jobs? There are about 20 different jobs, my God!
    Your reply is highly appreciated!

    Try editing the boot.properties file for both the Admin Server and bi_server1 in these two locations:
    domain/servers/AdminServer/security/boot.properties
    domain/servers/bi_server1/security/boot.properties
    Once that is done, try restarting and let us know the outcome.
    Thanks,

  • Execute SSIS Package from JOB which contains Execute Process Task calling a .bat file

    Hi All,
    I have an Excel macro that needs to be called from SSIS. We could not use a Script Task for internal reasons.
    So we have taken the approach of calling a .BAT file using an Execute Process Task. This .BAT file calls a .VBS file which executes the Excel macro.
    The SSIS package runs fine when I execute it from BIDS.
    But the real problem is scheduling this SSIS package via a SQL Server Agent job.
    If I execute this SSIS package from a SQL Server job, it executes the whole package successfully except for the Execute Process Task.
    So the overall issue is that the SQL Server job does not execute properly if I call any .BAT file from the SSIS package.
    Please give me suggestions to resolve the issue. Thanks in advance.

    Hi Sai.N,
    If you run the SQL Server Agent job manually from SSMS, does the package execute properly? If it does, the issue is most likely a permission problem. In that case, I suggest that you create a SQL Server
    Agent proxy based on the Windows account you use to log on to the operating system, and run the job step under that proxy account.
    If that is not the issue, please enable logging in the package as Visakh mentioned and post the warning/error messages for further analysis.
    Regards,
    Mike Yin
    TechNet Community Support
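
    A rough sketch of what creating such a proxy could look like; the credential name, Windows account, and proxy name below are placeholders, not taken from the original thread:

    -- credential for the Windows account that is allowed to run the .BAT file (placeholder account)
    CREATE CREDENTIAL SSISBatCredential
        WITH IDENTITY = 'DOMAIN\ssis_batch_user',
             SECRET   = 'password_goes_here';

    -- SQL Server Agent proxy on top of that credential
    EXEC msdb.dbo.sp_add_proxy
         @proxy_name      = 'SSISBatProxy',
         @credential_name = 'SSISBatCredential';

    -- allow the proxy to be used for SSIS package execution job steps
    EXEC msdb.dbo.sp_grant_proxy_to_subsystem
         @proxy_name     = 'SSISBatProxy',
         @subsystem_name = 'SSIS';

    In the job step properties, the proxy is then selected under "Run as" instead of the SQL Server Agent Service Account.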

  • Batch file not executing in BODS Job

    Hi friends,
    I have created a batch file to create a text file in a directory. I ran it manually and it worked fine, i.e. a txt file was created. However, when I used the batch file in a script in a BODS job, it was not executed.
    The script being used is as follows (I have tried both; both worked manually but not in the job, and the job ran successfully without the desired output):
    print(exec('C:\xxx\testdir.bat ','',0));
    exec('C:\xxx\testdir.bat ','',8);
    testdir.bat has the below txt code
    cd C:\xxx\yyy
    dir *.xml /b > dir.txt
    Thanks and Regards
    Anil

    What has changed between
    "However when I used the batch file in script in BODS job, It's not being executed"
    and
    "The Job was successfully executed but an empty dir.txt was created with no files"?

  • Could not execute the job

    Hi,
    When I execute the job, a window appears with the message "ERROR: could not execute the job. Error returned was 1.
         MESSAGE is: Could not open command file..."
    I can't find where it comes from; any suggestions?

    When I executed the job today I got this list of errors:
    13860    15384    REP-100109        27/05/2014 08:22:10       |Session TF_SGFA
    13860    15384    REP-100109        27/05/2014 08:22:10       Cannot save <History info> into the repository. Additional database information: <SQL submitted to ODBC data source
    13860    15384    REP-100109        27/05/2014 08:22:10       <SIGSIRDDB01\SQLSIRDBD> resulted in error <[Microsoft][ODBC SQL Server Driver][SQL Server]Could not allocate space for object
    13860    15384    REP-100109        27/05/2014 08:22:10       'dbo.AL_HISTORY_INFO' in database 'DS_REP' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded
    13860    15384    REP-100109        27/05/2014 08:22:10       files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files
    13860    15384    REP-100109        27/05/2014 08:22:10       in the filegroup.>. The SQL submitted is <INSERT INTO "AL_HISTORY_INFO" ("OBJECT_KEY", "NAME", "VALUE", "NORM_NAME",
    13860    15384    REP-100109        27/05/2014 08:22:10       "NORM_VALUE") VALUES (430, N'TRACE_LOG_INFO',
    13860    15384    REP-100109        27/05/2014 08:22:10                N'F:\SAPDS/log/JobServDS/sigsirddb01_sqlsirdbd_ds_rep_dsuser/trace_05_27_2014_08_22_09_10__3a4327b0_92c8_4abe_ac24_96d49123242a.
    13860    15384    REP-100109        27/05/2014 08:22:10       txt', N'TRACE_LOG_INFO',
    13860    15384    REP-100109        27/05/2014 08:22:10                N'F:\SAPDS/LOG/JOBSERVDS/SIGSIRDDB01_SQLSIRDBD_DS_REP_DSUSER/TRACE_05_27_2014_08_22_09_10__3A4327B0_92C8_4ABE_AC24_96D49123242A.
    13860    15384    REP-100109        27/05/2014 08:22:10       TXT') >.>.
    13860    15384    REP-100112        27/05/2014 08:22:10       |Session TF_SGFA
    13860    15384    REP-100112        27/05/2014 08:22:10       Cannot save <History info> for repository object <>. Additional database information: <Cannot save <History info> into the
    13860    15384    REP-100112        27/05/2014 08:22:10       repository. Additional database information: <SQL submitted to ODBC data source <SIGSIRDDB01\SQLSIRDBD> resulted in error
    13860    15384    REP-100112        27/05/2014 08:22:10       <[Microsoft][ODBC SQL Server Driver][SQL Server]Could not allocate space for object 'dbo.AL_HISTORY_INFO' in database 'DS_REP'
    13860    15384    REP-100112        27/05/2014 08:22:10       because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup,
    13860    15384    REP-100112        27/05/2014 08:22:10       adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.>. The SQL submitted is
    13860    15384    REP-100112        27/05/2014 08:22:10       <INSERT INTO "AL_HISTORY_INFO" ("OBJECT_KEY", "NAME", "VALUE", "NORM_NAME", "NORM_VALUE") VALUES (430, N'TRACE_LOG_INFO',
    13860    15384    REP-100112        27/05/2014 08:22:10                N'F:\SAPDS/log/JobServDS/sigsirddb01_sqlsirdbd_ds_rep_dsuser/trace_05_27_2014_08_22_09_10__3a4327b0_92c8_4abe_ac24_96d49123242a.
    13860    15384    REP-100112        27/05/2014 08:22:10       txt', N'TRACE_LOG_INFO',
    13860    15384    REP-100112        27/05/2014 08:22:10                N'F:\SAPDS/LOG/JOBSERVDS/SIGSIRDDB01_SQLSIRDBD_DS_REP_DSUSER/TRACE_05_27_2014_08_22_09_10__3A4327B0_92C8_4ABE_AC24_96D49123242A.
    13860    15384    REP-100112        27/05/2014 08:22:10       TXT') >.>.>.
    And thank you.
    Sincerely
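
    The error text itself points at possible fixes: the PRIMARY filegroup of the Data Services repository database DS_REP is full, so the job server cannot write its history rows. A hedged sketch of the "set autogrowth on for existing files" option the message mentions; the logical file name used below is a placeholder that has to be looked up first:

    -- find the logical name(s) of the data file(s) of the repository database
    SELECT name, physical_name, size, max_size, growth
    FROM   DS_REP.sys.database_files;

    -- enable autogrowth on that file (DS_REP_data is a placeholder logical name)
    ALTER DATABASE DS_REP
    MODIFY FILE (NAME = DS_REP_data, FILEGROWTH = 256MB, MAXSIZE = UNLIMITED);

    Adding another file to the PRIMARY filegroup or freeing disk space, as the error message suggests, are equally valid alternatives.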

  • Scheduled SAP Job not executed as per restrictions

    Hi All,
    A job was scheduled to execute every Saturday (job frequency: weekly). In the start-time restrictions, the checkbox "Execute only on Workdays" is also checked.
    Last Saturday, 27/08, was a holiday (displayed properly in the factory calendar), so the job should not have been executed.
    But it got executed, and I cannot find any reason for this.
    Are there any other conditions that need to be provided so that it does not get executed on non-working days?

    - Check in the job definition whether the factory calendar of the job is the right one (TBTCO-CALENDARID for the job that executed on Saturday); also check whether the job definition was changed (TBTCO-LASTCHDATE, LASTCHTIME and LASTCHNAME for the next scheduled job), or use "Job details" in SM37.
    - Check in the factory calendar whether the holiday was changed after Saturday (SCAL, Extras, Display change docs.).
    Regards,
    Raymond

  • ORA-27369: job of type EXECUTABLE failed with exit code: Operation not perm

    Hello!
    We have a procedure that creates a scheduled job which should execute a shell script on Linux.
    We use DB 11.2.0.2.
    The file /software/oracle/dba/scripts/bin/trans_asm has full permissions for oracle.
    externaljob.ora:
    run_user = oracle
    run_group = dba
    The script takes 3 more arguments that come from the procedure, and should be executed like:
    trans_asm -ssid tsdwh -tsid tudwh -tbsname DW_BILLING_TTS
    DBMS_SCHEDULER.create_job (
      job_name            => v_job_name,
      job_type            => 'EXECUTABLE',
      job_action          => '/software/oracle/dba/scripts/bin/trans_asm',
      start_date          => SYSTIMESTAMP,
      number_of_arguments => 3,
      enabled             => false,
      auto_drop           => false);
    dbms_scheduler.set_job_argument_value(job_name => v_job_name, argument_position => 1, argument_value => '-ssid '||v_source);
    dbms_scheduler.set_job_argument_value(job_name => v_job_name, argument_position => 2, argument_value => '-tsid '||v_target);
    dbms_scheduler.set_job_argument_value(job_name => v_job_name, argument_position => 3, argument_value => '-tsid '||v_target);
    dbms_scheduler.enable(name => v_job_name);
    end if;
    Please help!

    Hi!
    I have shell scripts with a number of mandatory parameters.
    How can I write this in a SQL procedure so that it builds the exact command line that should be executed on Linux?
    The procedure looks like:
    CREATE OR REPLACE procedure CRM.run_tts (p_tts_name in varchar2)
    is
      v_tts_name varchar2(2000);
      v_job_name varchar2(2000);
      v_source   varchar2(200);
      v_target   varchar2(200);
    Begin
      v_tts_name := upper(p_tts_name);
      --if p_instance_type='P' then
      v_source := 'psdwh';
      v_target := 'pudwh';
      --end if;
      if v_tts_name is not null then
        v_job_name := 'run_tts_'||v_tts_name;
        v_source   := '/-ssid/'||v_source;
        v_target   := '/-tsid/'||v_target;
        v_tts_name := '/-tbsname/'||v_tts_name;
        DBMS_SCHEDULER.create_job (
          job_name            => v_job_name,
          job_type            => 'EXECUTABLE',
          job_action          => '/tmp/trans_asm',
          start_date          => SYSTIMESTAMP,
          number_of_arguments => 3,
          enabled             => false,
          auto_drop           => false);
        --dbms_scheduler.set_job_argument_value(job_name=>v_job_name,argument_position=>1,argument_value=>'/software/oracle/dba/scripts/bin/trans_asm.sh'||' '||c_rec.id||' '||'"'||c_rec.view_name||'"'||' '||v_setnum||' '||'"'||V_MACHINE_TYPE||'"'||' '||'"'||p_mode||'"');
        dbms_scheduler.set_job_argument_value(job_name => v_job_name, argument_position => 1, argument_value => v_source);
        dbms_scheduler.set_job_argument_value(job_name => v_job_name, argument_position => 2, argument_value => v_target);
        dbms_scheduler.set_job_argument_value(job_name => v_job_name, argument_position => 3, argument_value => v_tts_name);
        -- dbms_scheduler.set_job_argument_value(job_name=>v_job_name,argument_position=>1,argument_value=>'/software/oracle/dba/scripts/bin/trans_asm_ps.sh');
        --dbms_scheduler.set_job_argument_value(job_name=>v_job_name,argument_position=>1,argument_value=>'/tmp/test.sh');
        -- trans_asm -ssid tsdwh -tsid tudwh -tbsname DW_BILLING_TTS
        dbms_scheduler.enable(name => v_job_name);
      end if;
    end;
    the command looks like :
    trans_asm -ssid psdwh -tsid pudwh -tbsname YOAV_TTS
    Thank you !
    Valerie
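
    One thing worth keeping in mind when reading this (a hedged sketch, not from the original post): a job of type EXECUTABLE passes each set_job_argument_value entry to the program as one separate argument, with no shell word-splitting. So '-ssid psdwh' arrives as a single argument rather than a flag plus a value, which many scripts will not parse; incidentally, in the first snippet argument 3 is set to '-tsid '||v_target where the command line shows -tbsname, which looks like a copy/paste slip. Passing each flag and its value as two separate arguments avoids the splitting problem; the names below follow the procedure above:

    DBMS_SCHEDULER.create_job (
      job_name            => v_job_name,
      job_type            => 'EXECUTABLE',
      job_action          => '/tmp/trans_asm',
      start_date          => SYSTIMESTAMP,
      number_of_arguments => 6,
      enabled             => false,
      auto_drop           => false);
    dbms_scheduler.set_job_argument_value(v_job_name, 1, '-ssid');
    dbms_scheduler.set_job_argument_value(v_job_name, 2, 'psdwh');
    dbms_scheduler.set_job_argument_value(v_job_name, 3, '-tsid');
    dbms_scheduler.set_job_argument_value(v_job_name, 4, 'pudwh');
    dbms_scheduler.set_job_argument_value(v_job_name, 5, '-tbsname');
    dbms_scheduler.set_job_argument_value(v_job_name, 6, v_tts_name);
    dbms_scheduler.enable(v_job_name);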

  • ORA-27369: job of type EXECUTABLE failed with exit code: Not owner

    Hi
    I created a job to back up a RAC database using DBMS_SCHEDULER under the RMANTEST schema (a DBA account), and I got the error in the subject.
    begin
      dbms_scheduler.create_job(
        job_name            => 'scheduler_backup',
        job_type            => 'EXECUTABLE',
        number_of_arguments => 2,
        job_action          => '/opt/oracle/admin/bin/rman_fullbackup_RAC_TEST_test.sh',
        comments            => 'backup via scheduler');
      dbms_scheduler.set_job_argument_value('scheduler_backup', 1, 'TEST');
      dbms_scheduler.set_job_argument_value('scheduler_backup', 2, 'TEST2');
      dbms_scheduler.enable('scheduler_backup');
    end;
    /
    Thanks,
    Kevin

    Hi Ravi
    Thanks for your input.
    "ORA-27369: job of type EXECUTABLE failed with exit code: Not owner" is what I copied from ADDITIONAL_INFO of USER_SCHEDULER_JOB_RUB_DETAILS.
    One thing I don't understand of your words is that
    "On 10gR1 and 10gR2 you can redirect the stdout/stderr within your script and take a look at those log files."
    In my script, I have log files but I cannot see it. I guess the job fails directly without hitting the redirection line in the script. Do you mean I shall write something like this
    dbms_scheduler.create_job (
    job_action => '/opt/oracle/admin/bin/backup.sh > backup.log'
    Another one is
    "make sure that the user that external jobs run as must be able to run your script"
    But OS user and database user are two different accounts at different level.
    I am using 10.2.0.2 RAC. The Unix script runs successfully every night. I just want to take advantage of DBMS_SCHEDULER to avoid host dependency.
    Thanks,
    Kevin
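
    Two notes that may help here, offered as a sketch rather than a definitive answer: for a job of type EXECUTABLE the action is run directly rather than through a shell, so a "> backup.log" redirection inside job_action would not behave as hoped; the redirection belongs inside the script itself. And since the "Not owner" text came out of ADDITIONAL_INFO, pulling the full run details for the job can show whether anything more specific was recorded:

    select log_date, status, error#, additional_info
    from   user_scheduler_job_run_details
    where  job_name = 'SCHEDULER_BACKUP'
    order  by log_date desc;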

  • Job scheduled in background but not executed

    Dear Expert, above is the section of code through which I am scheduling a program to run in the background. Through this a job is created and scheduled (it shows as scheduled in SM37), but it is not executed (it generates no output).

    See what status it is showing now.
    If it is cancelled, see why it got cancelled; it can be because the job was closed or the like.
    It can also be that there are more jobs in the queue, so give it a higher priority.
    Try with this.
    Rajendra

  • DS Job Monitor not showing executed .BAT job -did in DI 11.5

    When I kick off a job in Data Services 3.1 through a .BAT file, the job does not appear in the Monitor tab of Designer. The job does execute; however, the only way to tell is that al_engine appears in Task Manager and a log file is created. Now the only way to kill an executed job is to end the al_engine process.
    The exact same .BAT file and corresponding job worked fine in Data Integrator 11.5; only when we upgraded to DS did the monitor stop working. The monitor does show Designer-executed jobs, though.
    My question: does anyone know how I can view .BAT executions through the Monitor tab in DS Designer? Note: I do have "Open monitor on job execution" checked in Tools > Options > Designer > General.

    Hi Kevin,
    Thanks for the reply.
    Yes, there is table comparison and parallel processing, and the volume of the data is huge.
    I mean, some of the tables have 130 million rows. It occurs when processing large loads.
    The strange thing at this point is that with the old version there is no problem. The flows are the same.
    Thanks
    omer

  • Job not executing through database link

    hello all,
    We have a database server with Oracle 10g Enterprise Edition on RHEL 5.3 64-bit. We created a procedure which pulls data from a remote server; the procedure collects data in a cursor, and both servers are connected over RF connectivity.
    But every 2 or 3 days I see that the data is not pulled and the job is marked as broken, and when I mark that job as unbroken and try to re-execute the procedure it hangs for a long time.
    I don't understand what is going on. I have also seen a large number of sessions in my database; is it possible that a huge number of sessions are consuming resources and that's why Oracle is unable to execute the job? I have checked the network as well, and it is fine; I can access the remote server using VNC or TeamViewer, so the network is definitely not the issue.
    I also set SQLNET.EXPIRE_TIME=10 in sqlnet.ora and created a profile with IDLE_TIME to get rid of excess inactive sessions. What might the issue be? Any suggestion is appreciated.
    thanks and regards
    VD

    hello sir,
    Actually it is not about the job; I am sure the procedure is causing the problem, because when I try to execute the procedure directly it also hangs. I have checked all the requirements that needed to be checked. I think lots of inactive sessions are causing this issue, but even after adding the sqlnet setting and the profile it is not removing those sessions, and hence I think the procedure is not executing.
    The database I am connecting to already has many connections and is doing lots of transactions, so is it possible that this is the issue?
    Sometimes I also get a timeout error while connecting to that remote server, but that is not the issue, because at the same time I can connect to it from another machine.
    thanks and regards
    VD
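
    A small diagnostic sketch that may help separate the two suspects discussed here, the remote pull over the database link versus local session pressure; it uses only the standard dictionary and dynamic performance views, nothing specific to this system:

    -- is the refresh job marked broken, and how often has it failed?
    select job, broken, failures, last_date, next_date, what
    from   dba_jobs;

    -- how many sessions exist, and in what state?
    select status, count(*)
    from   v$session
    group  by status;

    If the hang reproduces when the procedure is run interactively, checking what that session is waiting on in v$session_wait (for example a "SQL*Net message from dblink" wait) would point at the remote side rather than at local resource exhaustion.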

  • Spawn jobs are not getting priority and target server given at selection screen

    Hi Abapers,
    I am scheduling the main program as a background job through the function modules JOB_OPEN, JOB_SUBMIT and JOB_CLOSE, with the priority and target server taken from the user on the selection screen via SE38. After executing, the job generates spawned (child) jobs. The spawned jobs should also be scheduled with the same priority and target server taken from the user. But after the main job completes, the spawned jobs are not being generated with the priority and target server taken from the selection screen; maybe the values are getting refreshed.
    Please give me an idea of how the spawned jobs can get the same priority and server as the main job.
    Please reply as soon as possible; I need it urgently.

    Hi,
    I have used the statements below for ADD EXTRACT and ADD REPLICAT.
    ------Extract
    ADD EXTRACT ext_1, TRANLOG, BEGIN NOW
    -------Data Pump
    ADD EXTRACT pump_1, EXTTRAILSOURCE /app/ggs/trail/local_trail_1/ta, BEGIN NOW
    -------Replicat
    ADD REPLICAT rep_1, EXTTRAIL /app/ggs/trail/remote_trail_1/tb, BEGIN NOW, CHECKPOINTTABLE ogg.tarun_chk
    Yes, I have tried the tutorial at the Oracle Learning Library.
    Thanks
    Tarun

  • Background Job is not running in KW

    Hi All,
    I have a problem with a background job.
    I am working on a Knowledge Warehouse server,
    and I am scheduling a background job to upload content.
    But the background job is not finishing at the proper time,
    and when displaying the trace it gives the following error:
    ERROR => BtcCleanUp: BtcLgAp-call failed (rc = 4) [btcjcntl.c   1251]
    Please help me resolve this.
    Regards,
    Payal patel

    Hi Juan,
    I have tried a process restart as well as an instance restart.
    It didn't help.
    Hi Santosh,
    The following is part of the trace file (level 2).
    The first line shows the error.
    Tell me if you need any other log file.
    L  *** ERROR => BtcCleanUp: BtcLgAp-call failed (rc = 4) [btcjcntl.c   1251]
    M  read msgserver-list from MBUF
    M  ThSemRq (4, 1, 0, 0)
    M  ThSemRel (4, 1)
    M  ThSetBtcName: found batch server sapbl4_KW7_00                          
    M  ThScheduler2: server name: sapbl4_KW7_00                          
    M  ThISendMsg: send message (5) to server (wp) with name >sapbl4_KW7_00                           <
    M  ThISndName: send to name: >sapbl4_KW7_00                           <
    M  ThISend: (tm/type/info = 10/0x2000/0x0, mode_deleted=0)
    M  ThRqOutCheck: o.k.
    M  abap strategy ROLL / O.K.
    M  ThNewWpStat (type=0x2000, task_switch=0, inline_hold=0, hand_shake=0, debug=0, ..)
    M  ThNewWpStat: new state of T10/M0 = 0x3c
    M  ThISend: new wp stat: 0x0
    M  Adresse   Offset  Message by name (one way)
    M  -
    M  06C60600  000000  00000000 05000000 255f4556 454e545f |........%_EVENT_|
    M  06C60610  000016  53434845 44554c45 52202020 20202020 |SCHEDULER       |
    M  -
    M  ThMkReq: send output to canceled mode
    M  make DISP owner of wp_ca_blk 180
    M  DpRqPutIntoQueue: put request into queue (reqtype 0, prio LOW, rq_id 19358)
    M  -OUT- sender_id WORK_PROCESS      tid  10    wp_ca_blk   180     wp_id 10
    M  -OUT- action    SEND_MSG_ONEWAY   uid  11    appc_ca_blk -1      type  NOWP
    M  -OUT- new_stat  NO_CHANGE         mode 0     len         268     rq_id 19358
    M  -OUT- forward   DIA              
    M  -OUT- req_info  CANCELMODE MSG_WITH_REQ_BUF MSG_WITH_OH
    M  nihsl-getHostAddr: got hostname 'localhost' from operation system
    M  nihsi-getHostAddr: hostname 'localhost' = addr 127.0.0.1
    M  nihsl-getServNo: got servicename 'sapdp00' from operation system
    M  nihsi-getServNo: servicename 'sapdp00' = port 0C.80/3200
    M  NiIInitSocket: set default settings for socket 1196
    M  NiCreateHandle: state hdl 1 / socket 1196 NI_INITIAL
    M  NiIDgSend: init datagram send hdl 1 / socket 1196
    M  CPU byte order: little endian, reverse network, low val .. high val
    M  NiIDgSend: connect dgram to: host 127.0.0.1, port 0C.80/3200, fam 2 (low adr..high adr)
    M  NiIDgsend: sending on connected datagram-handle
    M  LOCK WP ca_blk 181
    M  ThResFree: free resources of U11 M0 I2 (normal mode, complete free) at level 3, errno=23, db_action=TH_DB_NO_ACTION, pooling=1
    M  ThResFree: reset spa state for user T10/U11/M0
    M  ThCallHooks: call hook >RtmClearSession< for event BEFORE_SESSION_CANCEL
    M  ThCallHooks: call hook >HttpClearSession< for event BEFORE_SESSION_CANCEL
    M  ThCallHooks: call hook >SpoolHandleHook< for event BEFORE_SESSION_CANCEL
    M  SosSearchAnchor: search anchor for 2
    M  PfStatDisconnect: disconnect statistics
    M  ThDealComm: del 1 cpic conn(s) of T10/U11/M0 (errno/deal_r2/free_level = 23/1/3)
    M  ThCPICFree: send DEAL to U11/M0
    M  ThICMDEAL (14030609, ..)
    M  ThCPIC: execute cpic func DEALLOCATE
    M  ThCPIC: last_ftype/last_timeout/last_requested_length/last_receive_data 1/-1/-1/0
    M  ThCPIC: allowed rq_type of T10/M0 = TH_APPC_RC_RQ
    M  ThConnectToLocGw: connect to local gateway
    M  GwConnectSapWp: connect to gateway >localhost< >sapgw00<
    M  nihsl-getHostAddr: found hostname 'localhost' in cache
    M  nihsi-getHostAddr: hostname 'localhost' = addr 127.0.0.1
    M  nihsl-getServNo: got servicename 'sapgw00' from operation system
    M  nihsi-getServNo: servicename 'sapgw00' = port 0C.E4/3300
    M  NiCreateHandle: state hdl 2 / socket -1 NI_INITIAL
    M  NiIBlockMode: switch off block-mode for hdl 2 / socket -1
    M  NiLowLevCon: connect to: host 127.0.0.1, port 0C.E4/3300, fam 2 (low adr..high adr)
    M  NiIInitSocket: set default settings for socket 1140
    M  NiISocket: hdl 2 got socket 1140
    M  NiPConnect: connect in progress
    M  SiPeekPendConn: connection of socket 1140 established
    M  NiLowLevCon: took local port 0E.51/3665
    M  nilh-localCheck: using local address list
    M  NiSetStat: state hdl 2 NI_CONNECTED
    M  NiIWrite: write 64, 1 packs, MESG_IO, hdl 2, data complete
    M  NiIPeek: peek successful for hdl 2 / socket 1140 (r)
    M  NiIRead: read 64, 2 packs, MESG_IO, hdl 2, data complete
    M  GwConnectSapWp: connect to gateway localhost / sapgw00 (pid = 6184) o.k.
    M  ThISend: (tm/type/info = 10/0x80/0x0, mode_deleted=0)
    M  ThRqOutCheck: o.k.
    M  abap strategy ROLL / O.K.
    M  ThNewWpStat (type=0x80, task_switch=0, inline_hold=0, hand_shake=0, debug=0, ..)
    M  ThNewWpStat: new state of T10/M0 = 0x3c
    M  ThISend: new wp stat: 0x0
    Regards,
    Payal Patel
