Job scheduling tutorial - problem

Hi,
I'm exploring the job scheduling mechanism in NW 7.11. I have followed this tutorial:
https://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/90a95132-8785-2b10-bda5-90d82a76431e&overridelayout=true
And so far I cannot see the job definition in NWA. Any suggestions will be appreciated.
Best Regards
Maciej

I've found the reason: the tutorial suggests naming the file job-definitions.xml, but the correct name is job-definition.xml.
Regards
Maciej

Similar Messages

  • Background job scheduling problem in APO

    Hi fellow SDNers,
    I am running into a peculiar problem with background job scheduling.
    The scenario is: I have a CSV (Excel) file on the application server which should load data into the InfoSource. I have scheduled the load to run in background (in the InfoPackage), after an event is triggered (the option on the scheduling tab of the InfoPackage, THE SCHEDULING OPTIONS).
    Now everything seems to be fine, but the data is not getting loaded. Could you please help me out: how do I load data from the Excel file in background after an event gets triggered?
    thanks in advance,
    Rohan

    hi Alexander,
    I am triggering the event with the function module BP_EVENT_RAISE in APO by passing the event ID. This automatically raises the event, just like SM64.
    Thanks
    Rohan
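
    A minimal ABAP sketch of what Rohan describes (raising the background processing event programmatically, just as SM64 does). The event name ZBW_LOAD_TRIGGER is only illustrative; the event must already exist in SM62 and be the one entered in the InfoPackage start condition:
    * Raise the event the event-triggered InfoPackage load is waiting for.
    DATA lv_eventid TYPE tbtcjob-eventid VALUE 'ZBW_LOAD_TRIGGER'.
    CALL FUNCTION 'BP_EVENT_RAISE'
      EXPORTING
        eventid                = lv_eventid
      EXCEPTIONS
        bad_eventid            = 1
        eventid_does_not_exist = 2
        eventid_missing        = 3
        raise_failed           = 4
        OTHERS                 = 5.
    IF sy-subrc <> 0.
      WRITE: / 'Event could not be raised, sy-subrc =', sy-subrc.
    ENDIF.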

  • Reporting Agent Job Scheduling problem

    hi All,
    I am trying to schedule a Reporting Agent job. I have a few precalculated web templates in one Reporting Agent scheduling package. When I try to schedule it and go to "start condition", I want to use "After Event". I select the event and give the parameter name. But when I save and go out, the job is already scheduled (checked in SM37). So I tried to check the job condition again, and the event I selected is not there!
    I want to create an after-event condition and trigger that parameter via the mainframe, as all our jobs are scheduled through the mainframe only. Can someone tell me why it is not working with the Reporting Agent?
    I will definitely assign the points.

    dinesh and SB,
    thank you for your reply. understood your point. but what i m asking is when i put "after event" criteria in the start selection... wouldnt it show that always when ever i go to Reporting Agent scheduling Package --> right click --> schedule --> start condition (i meant the event name and parameter should be saved there)
    but once i save it and get out from the scheduling package i can see the job has been scheduled but its not showing that its even controlled job !!!
    is it possible to use "after event" option in Reproting agent's job ?
    I have few queries under one Reporting Agent scheduling package which is added in one process chain - the PC is running once a month - i added RA at the end of the process chain and the variant will be schedule by mainframe once the process chain has completed successfully. now 2nd thing is: i need to run Reporting Agent job every single day. so need to schedule it twice. we schedule everything by main frame. so if i can save "after event" criteria then i can schedule that parameter by mainframe. the problem is the start condition is not saving my after event condition entries or parameter names.
    I hope i m clear. pl. guide its kind a urgent.

  • Problem with background job schedule

    Hi friends,
    How can I schedule more than one data loading job in background?
    When I try to schedule the second job, the first scheduled job gets overwritten, and only this job is active.
    I tried in the InfoPackage scheduler...
    How can I overcome this?
    Regards
    sudhakar

    Hello Ragu,
    How are you?
    Use process chains for this multiple job scheduling.
    I think you are trying to schedule the same InfoPackage!
    Could you elaborate on your issue?
    Best Regards....
    Sankar Kumar
    +91 98403 47141

  • Job scheduling

    Hi Friends,
    I have a problem with job scheduling.
    two jobs need to be scheduled. (job1 and job2)
    i used
    job_open.
    submit job1 using parameters.
    job_close.
    then for second job.
    job_open.
    submit job2 using parameter2.
    job_close.
    When I go and look in SM37, I see that while job1 is running, job2 is in Released status.
    But what we need is that while job1 is running, job2 should be in Scheduled status; it should start only after job1 has completed.
    In JOB_CLOSE for job2 I used job1 as the PRED_JOBNAME.
    But when I run the program, job2 is in Released status, not in Scheduled mode.
    Is there any way we can keep job2 in Scheduled mode instead of Released?
    Can we use an event to control this? If yes, please let me know how we can do it.
    Thanks
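
    A rough sketch of the event-based variant asked about here. All names are illustrative; the event Z_JOB1_DONE must first be defined in SM62, and note that a job waiting for an event or a predecessor still shows as Released (not Scheduled) in SM37:
    * Schedule job2 so that it waits for an event instead of a fixed time.
    DATA: lv_jobname2  TYPE tbtcjob-jobname VALUE 'JOB2',
          lv_jobcount2 TYPE tbtcjob-jobcount.
    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        jobname  = lv_jobname2
      IMPORTING
        jobcount = lv_jobcount2.
    SUBMIT zreport2 USING SELECTION-SET 'PARAMETER2'
      VIA JOB lv_jobname2 NUMBER lv_jobcount2 AND RETURN.
    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        jobname  = lv_jobname2
        jobcount = lv_jobcount2
        event_id = 'Z_JOB1_DONE'.
    * As the last step of job1 (or right after it finishes), raise the event:
    CALL FUNCTION 'BP_EVENT_RAISE'
      EXPORTING
        eventid = 'Z_JOB1_DONE'.
    As an aside, when going the predecessor-job route instead, a job is identified by both PRED_JOBNAME and PRED_JOBCOUNT, so pass both to JOB_CLOSE rather than just the name.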

    Hi,
    As Appana wrote, I'm not sure there's a problem, but why don't you make one job with two steps? This way you will be perfectly sure that job2 (the 2nd step of the new job) runs only after job1 (the 1st step of the new job).
    Good luck
    Igal

  • Background Job Scheduling

    Hi,
      I am scheduling a report to run in background.
    This report automatically creates background jobs for different company codes.
    It submits the 1st background job and waits until it finishes.
    Then the 2nd job starts in background, and it continues with the other jobs.
    At the end it finishes all the jobs and closes.
    Now my problem is:
    1.       Is it possible for us to submit all the jobs at one time and execute them at the same time, i.e. the 1st and 2nd jobs start simultaneously?
    2.       If so, how can we do that?
    What I have written is
    loop at companycode.
    * create a job name for this company code
      call function 'JOB_OPEN'
        exporting
          jobname  = job_name
        importing
          jobcount = job_count.
      submit xxxx user sy-uname via job job_name number job_count
        to sap-spool
        spool parameters l_spool_parameter
        without spool dynpro
        with companycode
        with ......
        and return.
    endloop.
    Please help ASAP, urgent.
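
    A minimal sketch of one way to have every job start at the same time (report name, variants and spool handling elided as above): close each job inside the loop with STRTIMMED = 'X', so it is released immediately instead of the program waiting for the previous one. The jobs then run in parallel as far as free background work processes allow.
    loop at companycode.
    * open and submit the job as above, then release it right away
      call function 'JOB_CLOSE'
        exporting
          jobname   = job_name
          jobcount  = job_count
          strtimmed = 'X'.
    endloop.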

    hi praveen,
    Job Scheduling Explained
    Definition
    Before any background processing can actually begin, background jobs must be defined and scheduled. The scheduled time for when a job runs is one part of the job’s definition. There are several ways to schedule jobs:
    From Transaction SM36 (Define Background Job)
    With the "start program in the background" option of either Transaction SA38 (ABAP: Execute Program) or Transaction SE38 (the ABAP editor)
    Through the background processing system’s own programming interface. (Many SAP applications use the internal programming interface to schedule long-running reports for background processing.)
    Through an external interface.
    Scheduling Background Jobs   
    Use
    You can define and schedule background jobs in two ways from the Job Overview:
    · Directly from Transaction SM36. This is best for users already familiar with background job scheduling.
    · The Job Scheduling Wizard. This is best for users unfamiliar with SAP background job scheduling. To use the Job Wizard, start from Transaction SM36, and either select Goto → Wizard version or simply use the Job Wizard button.
    Procedure
    1. Call Transaction SM36 or choose CCMS → Jobs → Definition.
    2. Assign a job name. Decide on a name for the job you are defining and enter it in the Job Name field.
    3. Set the job's priority, or "Job Class":
    · High priority: Class A
    · Medium priority: Class B
    · Low priority: Class C
    4. In the Target server field, indicate whether to use system load balancing.
    · For the system to use system load balancing to automatically select the most efficient application server to use at the moment, leave this field empty.
    · To use a particular application server to run the job, enter a specific target server.
    5. If spool requests generated by this job are to be sent to someone as email, specify the email address. Choose the Spool list recipient button.
    6. Define when the job is to start by choosing Start Condition and completing the appropriate selections. If the job is to repeat, or be periodic, check the box at the bottom of this screen.
    7. Define the job's steps by choosing Step, then specify the ABAP program, external command, or external program to be used for each step.
    8. Save the fully defined job to submit it to the background processing system.
    9. When you need to modify, reschedule, or otherwise manipulate a job after you've scheduled it the first time, you'll manage jobs from the Job Overview.
    Note: Release the job so that it can run. No job, even those scheduled for immediate processing, can run without first being released.
    Specifying Job Start Conditions
    Use
    When scheduling a background job (either from Transaction SM36, Define Background Job, or CCMS → Jobs → Definition), you must specify conditions that will trigger the job to start.
    Procedure
    Choose the Start condition button at the top of the Define Background Job screen.
    Choose the button at the top of the Start Time screen for the type of start condition you want to use (Immediate, Date/Time, After job, After event, or At operation mode) and complete the start time definition in the screen that appears.
    For the job to repeat, check the Periodic job box at the bottom of the Start Time screen and choose the Period values button below it to define the frequency of repetition (hourly, daily, weekly, monthly, or another specific time-related period). Then choose the Save button in the Period values screen to accept the periodicity and return to the Start Time screen.
    Once you’ve completed specifying the job start conditions, choose the Save button at the bottom of the Start Time screen to return to the Define Background Job screen.
    No job can be started until it is released, including jobs scheduled to start immediately. Since releasing jobs can be done only by a system administrator from the job management screen (Transaction SM37) or by other users who have been granted the appropriate Authorizations for Background Processing, no unauthorized user can start a job without explicit permission.
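    For completeness, the same start conditions can be set when scheduling through the programming interface mentioned above: JOB_CLOSE takes one parameter group per start type. A hedged sketch, assuming the job name and count come from a preceding JOB_OPEN/SUBMIT:
    * Date/Time start, repeated daily (periodicity via the PRD* parameters):
    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        jobname   = lv_jobname
        jobcount  = lv_jobcount
        sdlstrtdt = sy-datum      "start date
        sdlstrttm = '220000'      "start time 22:00:00
        prddays   = 1.            "repeat every day
    * Other start types use EVENT_ID/EVENT_PARAM (after event),
    * PRED_JOBNAME/PRED_JOBCOUNT (after job) or STRTIMMED = 'X' (immediate).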
    Managing Jobs from the Job Overview
    Use
    The Job Overview, or Job Maintenance, screen is the single, central area for completing a wide range of tasks related to monitoring and managing jobs, including defining jobs; scheduling, rescheduling, and copying existing jobs; rescheduling and editing jobs and job steps; repeating a job; debugging an active job; reviewing information about a job; canceling a job's release status; canceling and deleting jobs; comparing the specifications of several jobs; checking the status of jobs; reviewing job logs; and releasing a job so it can run.
    Procedures
    To display the Job Overview screen, choose CCMS → Jobs → Maintenance or call Transaction SM37. Before entering the Job Overview screen, the system first displays the Select Background Jobs screen. You'll need to complete this Job Selection screen to define the criteria for the jobs you want to manage. Once you've selected jobs to manage, you can choose from a wide range of management tasks:
    To copy a single existing job, choose Job → Copy.
    To reschedule or edit job steps or attributes of a single job, choose Job → Change. A job step is an independent unit of work within a background job. Each job step can execute an ABAP or external program. Other variants or authorizations may be used for each job step. The system allows you to display ABAP programs and variants. You can scan a program for syntax errors. You can also display the authorizations for an authorized user of an ABAP job step.
    To repeat a single job, choose Job → Repeat scheduling.
    To debug an active job, choose Job → Capture: active job. Only a single selection is allowed. If an active job seems to be running incorrectly (e.g., running for an excessively long time), you can interrupt and analyze it in debugging mode in a background process, and then either release it again or stop it altogether.
    You will be able to capture a background job only if you are logged on to the SAP server on which the job is running. To find server information in the Job Overview, select and mark the job, then choose Job → Job details.
    To review information about a job, choose Job → Job details. Details displayed can include:
    current job status
    periodicity, or the repetition interval
    other jobs linked to the current job, either as previous or subsequent jobs
    defined job steps
    spool requests generated by the current job
    To cancel a job's "Released" status, select the job or jobs from the Job Overview list and choose Job → Release → Scheduled.
    To cancel a job from running but keep the job definition available, select the job or jobs from the Job Overview list and choose Job → Cancel active job.
    To delete a job entirely, select the job or jobs from the Job Overview list and choose Job → Delete. Jobs with the status of Ready or Running cannot be deleted.
    To compare the specifications of more than one job, select the jobs from the Job Overview list and choose Job → Compare jobs.
    To check the status of jobs, select the job or jobs from the Job Overview list and choose Job → Check status. This allows you to either change the job status back to Planned or cancel the job altogether. This is especially useful when a job has malfunctioned.
    To review job logs, select a job or jobs with the status Completed or Canceled from the Job Overview list and choose Job log.
    regards
    karthik
    reward me points if helpful

  • Error occured while posting the job schedule for JDBCAdapter

    Hi Experts,
    In Application Log in Path: "/usr/sap/<SID>/DVExxxx/j2ee/cluster/server1/log", I see the error:
    #/Applications/ExchangeInfrastructure/AdapterFramework/Services/Util
    ##com.sap.aii.af.service.scheduler.SchedulerManagerImpl.postJobScheduleOthers(String, int)
    #J2EE_GUEST#0##n/a##f7956e1f6b4711e0b851001e0b5d3ac8#SAPEngine_Application_Thread[impl:3]_23##0#0
    #Error#1#com.sap.aii.af.service.scheduler.SchedulerManagerImpl#Plain
    ###error occured while posting the job schedule for JDBCAdapter_9f0584b1bcb33b94b67ada456233bcb8 with 2#
    Locks are frequently created, and I need to remove them in Visual Admin.
    Any idea about this error?
    Tks in advance.

    Hi,
    After applying SP 23 Patch Level 08, the JDBC lock problems stopped.
    But now, when a network error or database error occurs, the JDBC sender communication channels that had an open connection become blocked.
    Even following the instructions in [SAP Note 1083488 - XI FTP_JDBC sender channel stop polling indefinitely (04_04S)|https://websmp230.sap-ag.de/sap(bD1wdCZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1083488] the problem continues.
    I've already set the parameter "taskTimeout" and selected the option "Disconnect from Database After Each Message Processing", but the problem is not resolved.
    Any ideas?
    Thanks in advance.

  • SSIS package compiles successfully and executes successfully in SQL Server Integration Services, but fails to run in the MS SQL job scheduler

    Hi Everyone,
    I am having a problem transferring data from MS SQL 2005 to an IBM AS400. Previously my SSIS package was running perfectly, but there are some changes I needed to make in order for the system to work well. My changes are minimal and just upgrades (though I did include DELETE statements to truncate the AS400 table before I insert fresh data from the MS SQL table into the same AS400 table). I compiled my SSIS package and it ran successfully; I then deployed it into SQL Server Integration Services as one of the packages and manually executed the package, and the result was the same: it was successful again. But when I try to run it in an MS SQL job scheduler, the job fails with the messages shown below, extracted from the job's View History.
    Date today
    Log Job History (MSSQLToAS400)
    Step ID 1
    Server MSSQLServer
    Job Name MSSQLToAS400
    Step Name pumptoAS400
    Duration 00:00:36
    Sql Severity 0
    Sql Message ID 0
    Operator Emailed
    Operator Net sent
    Operator Paged
    Retries Attempted 0
    Message
    Executed as user: MSSQLServer\SYSTEM. ... 9.00.4035.00 for 32-bit  Copyright (C) Microsoft Corp 1984-2005. All rights reserved.    
    Started:  today time  
    Error: on today time     
    Code: 0xC0202009     Source: SSISMSSQLToAS400 Connection manager "SourceToDestinationOLEDB"     
    Description: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. 
    Error code: 0x80004005.  An OLE DB record is available.  
    Source: "IBMDA400 Session"  
    Hresult: 0x80004005  
    Description: "CWBSY0002 - Password for user AS400ADMIN on system AS400SYSTEM is not correct ".  End Error  
    Error: today     
    Code: 0xC020801C     
    Source: Data Flow Task OLE DB Destination [5160]     
    Description: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER.  The AcquireConnection method call to the connection manager "DestinationClearData" failed with error code 0xC0202009.  There may be error messages posted before
    this with more information on why the AcquireConnection method ca...  The package execution fa...  The step failed.
    So I hope somebody can share some hints or tips for me to overcome this problem of mine. Thanks for your help in advance. I have scoured the MSDN forums thoroughly and found no solution for my problem yet.
    PS: In SQL Server Integration Services, when I deployed the package I set the security of the packages to Rely on server...
    Hope this will help.

    Hi Ironmaidenroxz,
    From the message "Executed as user: MSSQLServer\SYSTEM", we can see that the SQL Server Agent job ran under the Local System account. However, the Local System account does not natively have network rights; therefore, the job failed to communicate with the remote IBM AS400 server.
    To address this issue, you need to create a proxy account for SQL Server Agent to run the job. When creating the credentials for the proxy account, you can use the Windows domain account under which you executed the package manually.
    References:
    How to: Create a Credential
    How to: Create a Proxy
    Regards,
    Mike Yin
    TechNet Community Support

  • How to deactivate Filter and Job in Job scheduling?

    Hi,
    My scenario is:
    XI will poll a file from the 3rd-party system and will send it to R/3 via the IDoc adapter.
    Every Sunday, R/3 will be down for 3-4 hours for maintenance purposes.
    I need to set up job scheduling in such a way that XI collects all the messages coming in on Sunday during that period of time and, after the end time has been reached, processes those messages and sends them to the R/3 system.
    I have created Sender/Receiver ID ..also create Filter and a JOBID.
    while creating Job ID i specified the following
    1: Start Date    = 18.08.2009
    2:Start Time     = 11.00.00
    3:End Date      = 18.08.2009
    4:End Time      = 15.00.00
    5:Period          =  7
    6:Period Unit   = D (Days)
    The job should be scheduled at 11.00.00 and end at 15.00.00; both the filter and the job should deactivate, and after 15.00.00 it should process those messages.
    But the problem is that it doesn't deactivate the filter.
    I tried using SXMS_JOB_DEACTIVATE and SXMS_START_JOB_AT_ONCE
    Both deactivate the job but not the filter.
    Can anybody tell me how to go about it?
    Also if I have to write a report, what all things are required.
    regards,
    Mayank

    Hi Volker,
    The downtime for R/3 is longer, so I am not sure whether XI will keep retrying until then or not.
    And if XI is not able to, then all the incoming messages in XI during that period of time will have a red flag.
    Can we write a report in XI in such a way that all the queued messages are processed through job scheduling?

  • IR jobs scheduled error

    hi,
    I have 13 jobs scheduled in Workspace. The task is to import BQY documents. When the schedule runs, I check the folder and I only see 10 BQY files imported, not 13. When I look at the job output, it says unknown error. No problems appear when I try to run the jobs one by one.
    below is the error :
    Job: 'FlashReportCharts_1' executed by user 'admin'
    At 07/14/09 04:58 PM UTC+7 (server time)
    Using Interactive Reporting Job Service 11.1.1.1.0.797.
    Connecting to AI as 'sa' at 07/14/09 04:58 PM.
    Sending SQL to server:
    SELECT AL1.curr_month, AL1.curr_year FROM dbo.current_period AL1
    1 rows retrieved at 07/14/09 04:58 PM.
    Connecting to aijkt-hypdb as 'admin' at 07/14/09 04:58 PM.
    Sending SQL to server:
    <HYBRIDANALYSISOFF {SSFORMAT} {ROWREPEAT} {INDENTGEN -2} {SUPBRACKET} {SUPCOMMA} <QUOTEMBRNAMES {DECIMALS 0} <SUPSHARE {OUTALTNAMES} <OUTALTSELECT "Default" <PAGE ("Site" ,"Scale" ) "Site" "Input Value" {OUTALTNAMES} <OUTALTSELECT "Default" <ROW("Period") "Gen2,Period" "Gen3,Period" "Gen4,Period" <SYM {OUTALTNAMES} <OUTALTSELECT "Default" <COL("Year","Scenario","Measure") {SUPCOMMA} {TABDELIMIT} &CurrYear "Actual" "Budget" "Forecast" "Overburdened Removal" "Coal Sales" "Coal Barged" "Coal Produced" "Strip Ratio" "Coal Mined" "Coal Stock" "FOB Sales Price" !
    Connecting to aijkt-hypdb as 'admin' at 07/14/09 04:58 PM.
    Sending SQL to server:
    <HYBRIDANALYSISOFF {SSFORMAT} {ROWREPEAT} {INDENTGEN -2} {SUPBRACKET} {SUPCOMMA} <QUOTEMBRNAMES {DECIMALS 0} <SUPSHARE {OUTALTNAMES} <OUTALTSELECT "Default" <PAGE ("Site" ,"Scale" ) "Site" "Input Value" {OUTALTNAMES} <OUTALTSELECT "Default" <ROW("Scenario","Period") "Actual" "Year Total" <SYM {OUTALTNAMES} <OUTALTSELECT "Default" <COL("Year","Measure") {SUPCOMMA} {TABDELIMIT} &PrevYear "Overburdened Removal" "Coal Sales" "Coal Barged" "Coal Produced" "Strip Ratio" "Coal Mined" "Coal Stock" "FOB Sales Price" !
    61 rows retrieved at 07/14/09 04:58 PM.
    Saving 'C:\Hyperion\products\biplus\data\cache\IRJobCache\4211a25c451326567edb40f6121ece4edd179d0\0a06011600002604-0000-25ec-00000009\0000012278b2fb70-0000-16b0-0a060116\FlashReportCharts_1-20090714.bqy' (compresseed) at 07/14/09 04:58 PM.
    Save failed with unknown error.
    Job for document FlashReportCharts_1.bqy completed at 07/14/09 04:58 PM UTC+7 (server time) on @HOST:aijkt-hypapp.arutmin.net with errors.
    Edited by: r.senoputro on Jul 23, 2009 11:14 PM

    Put the IR job logs in TRACE:32 mode, which will give more information in the logs. Refer to this document for how to enable TRACE mode: http://docs.oracle.com/cd/E17236_01/epm.1112/epm_install_troubleshooting_1112200.pdf
    Thanks,
    KK

  • Help! job scheduled in DB13 cannot run successfully

    Hi! I am a Basis administrator at an automobile company.
    Jobs scheduled in DB13 in our PRD system cannot run successfully.
    We have the central instance (CI) and the database instance (DB) installed on separate hosts. We use Sun Cluster 3.1 technology.
    The problem occurred after my colleague accidentally restarted the whole system. Since then, all the jobs scheduled in DB13 report the same error; the job log looks like:
    Job started
    Step 001 started (program RSDBAJOB, variant &0000000000192, user ID ***)
    No application server found on database host - rsh will be used
    Execute logical command BRCONNECT On host orahost
    Parameters: -u / -c -f check
    BR801I BRCONNECT 6.20 (113)
    BR252E Function fopen() failed for '/oracle/PR1/sapcheck/cduuoiij.chk' at location main-8
    BR253E errno 2: No such file or directory
    I manually typed the 'rsh' command on the CI to invoke 'brconnect' on the DB host, and it works all right. I also checked the RFC destination 'SAPXPG_DBDEST_ORAHOST' in SM59; it is normal. There is no gateway instance on the database server.
    The parameter 'SAPDBHOST' = orahost (the database server is called orahost)
    The parameter 'SAPLOCALHOST' = cihost (the central instance is located on cihost)
    Any suggestion is very appreciated.
    Thx!

    Hi Joe,
    DB13 runs only on an application server, as does any SAP program for that matter. It then connects to the database at OS level. What I mean is that your normal reports also fetch data from the database, for which they have to connect to the DB, but they all run on the application server. The problem lies in the connection to the database through the application layer.
    Please review again the OSS notes sent by Sunil and me. I think a greater focus on Sunil's notes should help you out.
    Regards.
    Ruchit.

  • Errors in job scheduled SSIS package

    A job scheduled for an SSIS package failed with the errors below:
    Microsoft (R) SQL Server Execute Package Utility  Version 10.50.4321.0 for 64-bit  Copyright (C) Microsoft Corporation 2010. All rights reserved.    
    Started:  5:00:02 AM  
    Error: 2015-01-02 05:06:39.25     
    Code: 0xC0202009     
    Source: Data Flow Task OLE DB Destination [46]     
    Description: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. 
    Error code: 0x80004005.  An OLE DB record is available.  
    Source: "Microsoft SQL Server Native Client 10.0"  
    Hresult: 0x80004005  
    Description: "Could not allocate a new page for database because of insufficient disk space in filegroup 'PRIMARY'. Create the necessary
    space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.".  End Error  Error: 2015-01-02 05:06:39.42     
    Code: 0xC0209029     
    Source: Data Flow Task OLE DB Destination [46]     
    Description: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR.  The "input "OLE DB Destination Input" (59)"
    failed because error code 0xC020907B occurred, and the error row disposition on "input "OLE DB Destination Input" (59)" specifies failure on error. An error occurred on the specified object of the specified component.  There may be
    error messages posted before this with more information about the failure.  
    End Error  Error: 2015-01-02 05:06:39.44     
    Code: 0xC0047022     
    Source: Data Flow Task SSIS.Pipeline     
    Description: SSIS Error Code DTS_E_PROCESSINPUTFAILED.  The ProcessInput method on component "OLE DB Destination" (46) failed
    with error code 0xC0209029 while processing input "OLE DB Destination Input" (59). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow
    task to stop running.  There may be error messages posted before this with more information about the failure.  
    End Error  Error: 2015-01-02 05:06:39.48     
    Code: 0xC02020C4     
    Source: Data Flow Task Flat File Source [1]     
    Description: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.  
    End Error  Error: 2015-01-02 05:06:39.50     
    Code: 0xC0047038     
    Source: Data Flow Task SSIS.Pipeline     
    Description: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED.  The PrimeOutput method on component "Flat File Source" (1) returned
    error code 0xC02020C4.  The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.  There may be error
    messages posted before this with more information about the failure.  
    End Error  Error: 2015-01-02 05:16:23.49     
    Code: 0x00000000     
    Source: Execute SQL Task 1      
    Description: Could not allocate space for object 'bo.TLE'.'PK_new' in database because the 'PRIMARY' filegroup is full. Create disk space
    by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.  
    End Error  Error: 2015-01-02 05:16:23.70     
    Code: 0xC002F210     
    Source: Execute SQL Task 1 Execute SQL Task     
    Description: Executing the query "Sp_load" failed with the following error: "Warning: Null value is eliminated by an aggregate
    or other SET operation.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.  
    End Error  DTExec: The package execution returned DTSER_FAILURE (1).  
    Started:  5:00:02 AM  
    Finished: 5:16:27 AM  
    Elapsed:  984.928 seconds.  The package execution failed.  The step failed.
    Please help!!!!

    Hi,
    Based on the error message "Could not allocate a new page for database because of insufficient disk space in filegroup 'PRIMARY'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup", we can see that the issue is caused by insufficient disk space in the 'PRIMARY' filegroup of the database.
    To fix this issue, we can add additional files to the filegroup (add a new file to the PRIMARY filegroup on the Files page), or set Autogrowth on for the existing files in the filegroup to provide the necessary space.
    As for the job then executing successfully on its next run, I think the space must have been increased in the meantime by someone or something.
    The following document about Add Data or Log Files to a Database is for your reference:
    http://msdn.microsoft.com/en-us/library/ms189253.aspx
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Job Schedule Management

    Hi,
    We recently upgraded our Solution Manager system to EHP1 and we are in the process of setting up Job Schedule Management. We imported a few jobs from a managed system, and while monitoring we are getting the error "Business Process not created".
    Could anyone please tell me one thing: do we need to set up Business Process Monitoring prior to setting up Job Schedule Management? Please explain the high-level end-to-end Job Schedule Management steps.
    Thanks & Regards
    Srinivas.

    Hi Srinivas,
    please have a look under https://service.sap.com/jsm to find further links to important SAP notes that explain a lot. You can also find iTutors about JSM under https://service.sap.com/rkt-solman.
    Generally speaking, BPMon is NO prerequisite for JSM. But as you probably saw during the import, you have to assign your documented job either to a logical component (especially meaningful for system-related jobs like housekeeping of spool or dumps) or to a business process step. An assignment to a logical component is the minimum requirement!
    For a business process step assignment you need to maintain processes either in the project part or in the solution directory of SAP Solution Manager.
    An active BPMon is only needed if you want to monitor background jobs and want to make use of the integration between JSM and BPMon.
    You should also look for Blog and Forum entries of Martin Lauer in order to get further JSM insights.
    By the way: If you  should encounter some functional problems you can open an OSS message on SV-SMG-PSM.
    Best Regards
    Volker

  • ASE 15.7 Job Scheduler won't start again

    Hi,
    we encountered the following problem in our ECC 6.0 / EHP5 on ASE 15.7 PL 122 system:
    DBACockpit / Collector Configuration shows the warning "The ASE-Job Scheduler is not active". The log SID_JSAGENT shows the following entries:
    00:11704:12104:2014/09/01 08:00:35.78 jamain  Opening jsagent connection.
    00:11704:12104:2014/09/01 08:00:35.78 jamain  Agent will listen on <IP>
    00:11704:12104:2014/09/01 08:00:35.78 jamain  SYB_JSAGENT waiting for connection
    00:11704:12104:2014/09/01 08:00:36.72 jamain  Job Scheduler Agent connected with Job Scheduler Task on port 4903
    00:11704:12104:2014/09/01 08:00:36.72 jamain  Initializing SYB_JSAGENT
    00:11704:12104:2014/09/01 08:00:36.72 jamain  Allocating list resources.
    00:11704:12104:2014/09/01 08:00:36.72 jamain  Allocating queue resources.
    00:11704:12104:2014/09/01 08:00:36.72 jamain  Allocating thread resources.
    00:11704:12104:2014/09/01 08:00:36.72 jamain  Initializing connection pool.
    00:11704:12104:2014/09/01 08:00:36.99 jamain  Client message: ct_connect(): user api layer: external error: The connection failed because of invalid or missing external configuration data.
    00:11704:12104:2014/09/01 08:00:36.99 jamain  ct_connect() failed.
    00:11704:12104:2014/09/01 08:00:36.99 jamain  jsj_AddConxs: jsd_MakeConnection() failed for user jstask to server SID
    00:11704:12104:2014/09/01 08:00:36.99 jamain  jsj_CreateConxPool: jsj_AddConxs() failed
    00:11704:12104:2014/09/01 08:00:36.99 jamain  Initialization failed initializing connection pool
    00:11704:12104:2014/09/01 08:00:36.99 jamain  Jsagent failed to handle INIT message.
    00:11704:12104:2014/09/01 08:00:36.99 jamain  JS Agent aborting. Cancel all running jobs.
    00:11704:12104:2014/09/01 08:00:36.99 jamain  Job Processing failed.
    00:11704:12104:2014/09/01 08:00:36.99 jamain  JS Agent exiting.
    Restarting the scheduler via isql doesn't seem to work. Any ideas?
    Many thanks & greetings
    Vierengel Stefan

    Hi,
    thank you. The problem was solved by performing the following steps:
    1.     Delete the following file: DRIVE:\sybase\SID\OCS-15_0\ini\ocs.cfg
    2.     Stop / start the Job Scheduler:
             exec sybmgmtdb..sp_sjobcontrol '','stop_js'
            go
             exec sybmgmtdb..sp_sjobcontrol '','start_js'
            go
    3.     Refresh DBACockpit
    Greetings
    Vierengel Stefan

  • Simple threaded job scheduler memory issues

    Hi,
    I'm working on a very simple job scheduler that triggers some processes to be run (based on a config file) and will spawn a thread to run the command every X number of seconds.
    My initial version of the code showed an obvious memory leak, starting at about 20MB usage and increasing after running overnight to about 80MB. I have since stripped down the program to a very minimal bit of code that basically creates 100 job objects and triggers a thread on each of them to go off, run a Cygwin sleep.exe for 5 seconds and then return, once for each job every 15 seconds. After running the profiler on this within NetBeans, the VM memory utilisation graph showed a very similar curve and exhibited the same memory utilisation as my previous version.
    I've tried to tweak the code as much as I can with my level of knowledge, so now I'm hoping someone might be able to help out and see if there are any flaws in my code, or any improvements that I could make in order to resolve my problem.
    Below is the code for the 3 classes that I'm using:
    Main class:
    package javascheduler;
    import java.util.ArrayList;
    public class Main {
        private ArrayList<Job> allJobs = new ArrayList<Job>();
        public static Main instance;
        public static void main(String[] args) {
            if (instance == null) {
                instance = new Main();
            }
            instance.loop();
        }
        public Main() {
            // Create lots of jobs
            for (int i = 0; i < 100; i++) {
                Job j = new Job(i);
                allJobs.add(j);
            }
            System.out.println("Created 100 jobs");
        }
        private boolean loop() {
            // Main loop
            int i = 0;
            while (true) {
                try {
                    Thread.sleep(500);
                } catch (InterruptedException ex) { }
                i++;
                if (i >= 10) {
                    // explicit GC request every ~5 seconds (kept from the original post)
                    Runtime.getRuntime().gc();
                    i = 0;
                }
                // start any job that is due and not already running
                for (Job j : allJobs) {
                    long now = System.currentTimeMillis();
                    // Jobs run every 15 secs
                    if (now >= j.getLastRunTime() + 15000 && !j.isRunning()) {
                        j.runJob();
                    }
                }
            }
        }
    }
    Job class:
    package javascheduler;
    public class Job {
        private NativeJob runningJob = null;
        private String command = "c:/cygwin/bin/sleep.exe 5";
        private int jobId;
        private long lastRunTime = -1;
        public Job(int id) {
            jobId = id;
        }
        public void runJob() {
            System.out.println("runJob started " + jobId);
            lastRunTime = System.currentTimeMillis();
            runningJob = new NativeJob(command, jobId);
            runningJob.start();
            System.out.println("runJob returned " + jobId);
        }
        public long getLastRunTime() {
            return lastRunTime;
        }
        public boolean isRunning() {
            if (runningJob == null) {
                return false;
            } else if (runningJob.isRunning()) {
                return true;
            } else {
                // thread has finished: drop the reference so it can be collected
                runningJob = null;
                return false;
            }
        }
    }
    NativeJob class:
    package javascheduler;
    import java.io.IOException;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    public class NativeJob extends Thread {
        String command;
        int jobId;
        boolean running = false;
        Runtime r = Runtime.getRuntime();
        public NativeJob(String command, int i) {
            super();
            this.command = command;
            this.jobId = i;
        }
        @Override
        public void run() {
            running = true;
            try {
                System.out.println("Running command " + jobId);
                Process p = r.exec(command);
                int returnCode = p.waitFor();
                // close the process streams so the native handles are released
                p.getErrorStream().close();
                p.getInputStream().close();
                p.getOutputStream().close();
            } catch (IOException ex) {
                Logger.getLogger(NativeJob.class.getName()).log(Level.SEVERE, null, ex);
            } catch (InterruptedException ex) {
                Logger.getLogger(NativeJob.class.getName()).log(Level.SEVERE, null, ex);
            }
            System.out.println("Finished command " + jobId);
            running = false;
        }
        public boolean isRunning() {
            return running;
        }
    }
    Thanks
    Adam

    Thanks ejp and sabre.
    I've made some changes to the code following your suggestions, and am now trying out jconsole on Windows (rather than the NetBeans profiler). I'll post back after I've had it running a little while.
    If you could take a quick look at the updates I've made, that'd be very much appreciated.
    Main class:
    package javascheduler;
    import java.util.ArrayList;
    public class Main {
        private ArrayList<Job> allJobs = new ArrayList<Job>();
        public static Main instance;
        public static void main(String[] args) {
            instance = new Main();
            instance.loop();
        }
        public Main() {
            // Create lots of jobs
            for (int i = 0; i < 100; i++) {
                Job j = new Job(i);
                allJobs.add(j);
            }
            System.out.println("Created 100 jobs");
        }
        private boolean loop() {
            // Main loop
            int i = 0;
            while (true) {
                try {
                    Thread.sleep(500);
                } catch (InterruptedException ex) { }
                i++;
                if (i >= 10) {
                    Runtime.getRuntime().gc();
                    i = 0;
                }
                for (Job j : allJobs) {
                    long now = System.currentTimeMillis();
                    // Jobs run every 15 secs
                    // (note: isRunning is only set inside run(), so there is a small
                    // window in which the same job could be started twice)
                    if (now >= j.getLastRunTime() + 15000 && !j.isRunning()) {
                        new Thread(j).start();
                    }
                }
            }
        }
    }
    Job class:
    package javascheduler;
    import java.io.IOException;
    public class Job implements Runnable {
        private String[] command = { "c:/cygwin/bin/sleep.exe", "5" };
        private int jobId;
        private long lastRunTime = -1;
        Runtime r = Runtime.getRuntime();
        boolean isRunning = false;
        public Job(int id) {
            jobId = id;
        }
        public void run() {
            System.out.println("runJob started " + jobId);
            isRunning = true;
            lastRunTime = System.currentTimeMillis();
            try {
                Process p = r.exec(command);
                // StreamGobbler is a helper thread (not shown in the post) that
                // drains a process stream so the child cannot block on full buffers
                StreamGobbler errorGobbler = new StreamGobbler(p.getErrorStream(), "ERROR");
                StreamGobbler outputGobbler = new StreamGobbler(p.getInputStream(), "OUTPUT");
                errorGobbler.start();
                outputGobbler.start();
                int exitVal = p.waitFor();
            } catch (IOException ex) {
                ex.printStackTrace();
            } catch (InterruptedException ex) {
                ex.printStackTrace();
            }
            System.out.println("runJob returned " + jobId);
            isRunning = false;
        }
        public long getLastRunTime() {
            return lastRunTime;
        }
        public boolean isRunning() {
            return isRunning;
        }
    }
    Edited by: Adamski2000 on Aug 2, 2010 3:12 AM
