Issue in Job Scheduling

Hi Friends,
I developed a BDC program that is scheduled as a daily background job. In the program I use the function module GUI_DOWNLOAD to download the data in an internal table to a flat file. The issue is that the background job is failing because I am using GUI_DOWNLOAD.
Please help me: is there any way to download the internal table data to a flat file without using GUI_DOWNLOAD? Or please point me to any relevant links.
Regards,
Neetha
Moderator message: FAQ, please search for previous discussions of this topic.
Edited by: Thomas Zloch on Aug 4, 2011 12:03 PM

The issue was with the RFC user's authorization.

Similar Messages

  • External tools for monitoring job scheduling

    Hello All,
    Please give me information on the third-party tools used for monitoring process chains, handling issues and scheduling jobs.
    Also, why do we need third-party tools, and which third-party tools are available?
    Thanks
    Regards
    M.A

    Hi,
    Giving you a scenario:
    Data load from Oracle to BI. The flow is:
    Oracle 7.0 --> Oracle 10g: daily refresh
    Oracle 10g --> BI 7.0: daily load once the Oracle 10g refresh is complete
    How will BI 7.0 know that the 10g refresh is complete?
    You need a triggering/scheduling tool that raises an event once a step is complete.
    Such tools usually read a small file, typically kept on a server and transferred via FTP, once a step is complete. The tool checks whether the file is available (based on the conditions you provide, e.g. check between 2am and 3am whether the file has arrived) and, once it is available, it raises an event and the next process starts.
    MAESTRO, for example, is used in such cases.
    Regards,
    Akshay

  • Job Scheduler Timer issue in Cisco Prime Infrastructure 1.2 ?

    Has anyone run into this issue where the job scheduler in CPI 1.2 reports that the job being scheduled is before the current time, even though it isn't?
    This only started happening after our time change yesterday. The system is set up with the correct time (NTP); this is confirmed in the app and also via CLI console access (show clock).
    Anyway, we get the error message shown in the lower right corner of the attachment.
    It's not allowing jobs to be scheduled. We rebooted the system yesterday and thought that fixed it, but I tried another scheduled job today and it has the same issue.
    A TAC case is already open on this, but I thought I'd ask here as well.
    Regards,
    Tom W.

    I'm answering my own question: upgrade from 1.2 to 1.3 and the problem is resolved. Rebooting the VM on 1.2 did help for a day, but then the problem came back, so my advice is simply to upgrade to 1.3, which I would have done initially had I known it was available. Hope this helps. What causes the problem in 1.2 is unclear, because NTP, the app, and the underlying time in the console (CLI) are all correct; somewhere the scheduler may have gone off track after the daylight saving time shift this past week. Bottom line: upgrade to 1.3.

  • Simple threaded job scheduler memory issues

    Hi,
    I'm working on a very simple job scheduler that triggers processes to run (based on a config file) and spawns a thread to run each command every X seconds.
    My initial version of the code showed an obvious memory leak, starting at about 20 MB of usage and increasing to about 80 MB after running overnight. I have since stripped the program down to a minimal bit of code that creates 100 job objects and, once every 15 seconds for each job, spawns a thread to run Cygwin's sleep.exe for 5 seconds and return. After running the NetBeans profiler against this minimal version, the VM memory utilisation graph looked very similar and showed the same memory utilisation as my previous version.
    I've tweaked the code as much as I can with my level of knowledge, so I'm hoping someone can spot any flaws in my code, or any improvements I could make in order to resolve my problem.
    Below is the code for the 3 classes that I'm using:
    Main class:
    package javascheduler;

    import java.util.ArrayList;

    public class Main {

        private ArrayList<Job> allJobs = new ArrayList();
        public static Main instance;

        public static void main(String[] args) {
            if (instance == null) {
                instance = new Main();
            }
            instance.loop();
        }

        public Main() {
            // Create lots of jobs
            for (int i = 0; i < 100; i++) {
                Job j = new Job(i);
                allJobs.add(j);
            }
            System.out.println("Created 100 jobs");
        }

        private boolean loop() {
            // Main loop
            int i = 0;
            int x = 0;
            while (true) {
                x++;
                try {
                    Thread.sleep(500);
                } catch (InterruptedException ex) { }
                i++;
                if (i >= 10) {
                    Runtime.getRuntime().gc();
                    i = 0;
                }
                for (Job j : allJobs) {
                    long now = System.currentTimeMillis();
                    // Jobs run every 15 secs
                    if (now >= j.getLastRunTime() + 15000 && !j.isRunning()) {
                        j.runJob();
                    }
                }
            }
        }
    }

    Job class:
    package javascheduler;

    public class Job {

        private NativeJob runningJob = null;
        private String command = "c:/cygwin/bin/sleep.exe 5";
        private int jobId;
        private long lastRunTime = -1;

        public Job(int id) {
            jobId = id;
        }

        public void runJob() {
            System.out.println("runJob started" + jobId);
            lastRunTime = System.currentTimeMillis();
            runningJob = null;
            runningJob = new NativeJob(command, jobId);
            runningJob.start();
            System.out.println("runJob returned" + jobId);
        }

        public long getLastRunTime() {
            return lastRunTime;
        }

        public boolean isRunning() {
            if (runningJob == null) {
                return false;
            } else {
                if (runningJob.isRunning()) {
                    return true;
                } else {
                    runningJob = null;
                    return false;
                }
            }
        }
    }

    NativeJob class:
    package javascheduler;

    import java.io.IOException;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class NativeJob extends Thread {

        String command;
        int jobId;
        boolean running = false;
        Runtime r = Runtime.getRuntime();

        public NativeJob(String command, int i) {
            super();
            this.command = command;
            this.jobId = i;
        }

        @Override
        public void run() {
            running = true;
            try {
                System.out.println("Running command " + jobId);
                Process p;
                p = r.exec(command);
                int returnCode = p.waitFor();
                p.getErrorStream().close();
                p.getInputStream().close();
                p.getOutputStream().close();
            } catch (IOException ex) {
                Logger.getLogger(NativeJob.class.getName()).log(Level.SEVERE, null, ex);
            } catch (InterruptedException ex) {
                Logger.getLogger(NativeJob.class.getName()).log(Level.SEVERE, null, ex);
            }
            System.out.println("Finished command " + jobId);
            running = false;
        }

        public boolean isRunning() {
            return running;
        }
    }
    Thanks
    Adam
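    For comparison, the same 15-second per-job cadence can also be expressed with the JDK's ScheduledExecutorService, which reuses a small fixed pool of worker threads instead of constructing a new Thread object for every run. Below is a minimal sketch of that approach; the class name ExecutorMain and the pool size of 4 are illustrative assumptions, not taken from the posted code.

    package javascheduler;

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Sketch only: schedules 100 jobs at a fixed 15-second rate on a reused thread pool.
    public class ExecutorMain {

        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(4);
            for (int i = 0; i < 100; i++) {
                final int jobId = i;
                scheduler.scheduleAtFixedRate(new Runnable() {
                    public void run() {
                        // exec and drain the external process here
                        System.out.println("Job " + jobId + " tick");
                    }
                }, 0, 15, TimeUnit.SECONDS);
            }
        }
    }

    Note that scheduleAtFixedRate never runs the same task concurrently; if an execution overruns the 15-second period, the next run simply starts late.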

    Thanks ejp and sabre.
    I've made some changes to the code following your suggestions and am now trying out JConsole on Windows (rather than the NetBeans profiler). I'll post back after I've had it running for a little while.
    If you could take a quick look at the updates I've made, that would be very much appreciated.
    Main class:
    package javascheduler;

    import java.util.ArrayList;

    public class Main {

        private ArrayList<Job> allJobs = new ArrayList();
        public static Main instance;

        public static void main(String[] args) {
            instance = new Main();
            instance.loop();
        }

        public Main() {
            // Create lots of jobs
            for (int i = 0; i < 100; i++) {
                Job j = new Job(i);
                allJobs.add(j);
            }
            System.out.println("Created 100 jobs");
        }

        private boolean loop() {
            // Main loop
            int i = 0;
            int x = 0;
            while (true) {
                x++;
                try {
                    Thread.sleep(500);
                } catch (InterruptedException ex) { }
                i++;
                if (i >= 10) {
                    Runtime.getRuntime().gc();
                    i = 0;
                }
                for (Job j : allJobs) {
                    long now = System.currentTimeMillis();
                    // Jobs run every 15 secs
                    if (now >= j.getLastRunTime() + 15000 && !j.isRunning()) {
                        new Thread(j).start();
                    }
                }
            }
        }
    }

    Job class:
    package javascheduler;

    import java.io.IOException;

    public class Job implements Runnable {

        private String[] command = { "c:/cygwin/bin/sleep.exe", "5" };
        private int jobId;
        private long lastRunTime = -1;
        Runtime r = Runtime.getRuntime();
        boolean isRunning = false;

        public Job(int id) {
            jobId = id;
        }

        public void run() {
            System.out.println("runJob started" + jobId);
            isRunning = true;
            lastRunTime = System.currentTimeMillis();
            try {
                Process p = r.exec(command);
                StreamGobbler errorGobbler = new StreamGobbler(p.getErrorStream(), "ERROR");
                StreamGobbler outputGobbler = new StreamGobbler(p.getInputStream(), "OUTPUT");
                errorGobbler.start();
                outputGobbler.start();
                int exitVal = p.waitFor();
            } catch (IOException ex) {
                ex.printStackTrace();
            } catch (InterruptedException ex) {
                ex.printStackTrace();
            }
            System.out.println("runJob returned" + jobId);
            isRunning = false;
        }

        public long getLastRunTime() {
            return lastRunTime;
        }

        public boolean isRunning() {
            return isRunning;
        }
    }
    Edited by: Adamski2000 on Aug 2, 2010 3:12 AM
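    The updated Job class references a StreamGobbler class that is not included in the post. For anyone who wants to compile the example, here is a minimal sketch of such a stream-draining thread; the constructor signature (an InputStream plus a label) is inferred from the calls above, and the body is an assumption rather than the poster's actual class.

    package javascheduler;

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;

    // Assumed helper: drains a process stream on its own thread so the child
    // process cannot block on a full stdout/stderr pipe buffer.
    public class StreamGobbler extends Thread {

        private final InputStream is;
        private final String type;

        public StreamGobbler(InputStream is, String type) {
            this.is = is;
            this.type = type;
        }

        @Override
        public void run() {
            try {
                BufferedReader br = new BufferedReader(new InputStreamReader(is));
                String line;
                while ((line = br.readLine()) != null) {
                    System.out.println(type + "> " + line);
                }
                br.close();
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
    }

    Draining both streams like this matters because a child process can block once its pipe buffer fills up if nobody reads its output.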

  • Error in Backup job scheduling in DB13

    Hi All
    A backup job scheduled in DB13 throws an error. I am using Oracle as the database and ERP 6.0;
    the database and application are on different servers. It was working fine before, and I didn't change any passwords.
    I can run the backup job successfully directly from BRTOOLS on the database server. Please provide any hints.
    Job started
    Step 001 started (program RSDBAJOB, variant &0000000000060, user )
    No application server found on database host - rsh/gateway will be used
    Execute logical command BRBACKUP On host DLcSapOraG08
    Parameters:-u / -jid INLOG20090120204230 -c force -t online -m incr -p initerd.sap -w use_dbv -a -c force -p in
    iterd.sap -cds -w use_rmv
    BR0051I BRBACKUP 7.00 (31)
    BR0128I Option 'use_dbv' ignored for 'incr'
    BR0055I Start of database backup: bdztcorv.ind 2009-01-20 20.42.31
    BR0484I BRBACKUP log file: D:\oracle\ERD\sapbackup\bdztcorv.ind
    BR0280I BRBACKUP time stamp: 2009-01-20 20.42.32
    BR0301E SQL error -1017 at location BrDbConnect-2, SQL statement:
    'CONNECT /'
    ORA-01017: invalid username/password; logon denied
    BR0310E Connect to database instance ERD failed
    BR0280I BRBACKUP time stamp: 2009-01-20 20.42.32
    BR0301E SQL error -1017 at location BrDbConnect-2, SQL statement:
    'CONNECT /'
    ORA-01017: invalid username/password; logon denied
    BR0310E Connect to database instance ERD failed
    BR0056I End of database backup: bdztcorv.ind 2009-01-20 20.42.32
    BR0280I BRBACKUP time stamp: 2009-01-20 20.42.32
    BR0054I BRBACKUP terminated with errors
    BR0280I BRBACKUP time stamp: 2009-01-20 20.42.32
    BR0291I BRARCHIVE will be started with options '-U -jid INLOG20090120204230 -d disk -c force -p initerd.sap -cds -w use_rmv'
    BR0002I BRARCHIVE 7.00 (31)
    BR0181E Option '-cds' not supported for 'disk'
    BR0280I BRARCHIVE time stamp: 2009-01-20 20.42.33
    BR0301W SQL error -1017 at location BrDbConnect-2, SQL statement:
    'CONNECT /'
    ORA-01017: invalid username/password; logon denied
    BR0310W Connect to database instance ERD failed
    BR0007I End of offline redo log processing: adztcorw.log 2009-01-20 20.42.32
    BR0280I BRARCHIVE time stamp: 2009-01-20 20.42.33
    BR0005I BRARCHIVE terminated with errors
    BR0280I BRBACKUP time stamp: 2009-01-20 20.42.33
    BR0292I Execution of BRARCHIVE finished with return code 3
    External program terminated with exit code 3
    BRBACKUP returned error status E
    Job finished

    Hi,
    I am not sure the recommendations given will address this issue.
    You are getting this error:
    BR0301E SQL error -1017 at location BrDbConnect-2, SQL statement:
    'CONNECT /'
    ORA-01017: invalid username/password; logon denied
    The log file indicates:
    > No application server found on database host - rsh/gateway will be used
    This indicates that the user connecting from the application server to the DB server is not properly configured to perform the DB tasks on it.
    So, the first question is whether (and how) you have configured a gateway on the DB server, or whether you are using a remote shell.
    Second question, about being able to run backups on the DB server:
    > I can run backupjob sucessfully directly from BRtools on database server
    How exactly did you run the backup job (what is the exact command line, and which OS user executed it)?
    What is the OS of the DB server?
    I have reread your post: your OS is Windows, so you fall into the "typical" error on Windows.
    You executed your backup as <sid>adm and it works. Unfortunately, on Windows, SAP is executed by SAPService<SID>, and this is the user that should be connecting to your DB server and the user that cannot execute the backup.
    The fact that you can run the backup with <sid>adm on Windows does not mean that you have SAPService<SID> properly configured.
    For the error (see above) I think the OPS$ user for this account is not properly configured in the database. Take a look at the note mentioned by KT and pay attention to the SAPService<SID> configuration.
    Edited by: Fidel Vales on Jan 24, 2009 12:45 AM

  • Background job schedule and mail triggering

    Hi Experts,
    I scheduled a background job to run a custom program for project closure. The job runs successfully, but the mail I get from this job run is the same every time (it shows the same project closure again and again even though I am running the job for different projects). Is this a bug in our custom program, or are there parameters that need to be checked in the job scheduling?
    Kindly suggest.
    Thanks & Regards
    Saurabh

    Yes, that is the point I am missing. Just one 'date' is checked and the project is picked up by the custom program, and after a successful run the mail is sent to the users.
    And when I assign the same program in SM36, it actually runs the program correctly for the project(s), but it sends the same mail that it sent for the very first project earlier.
    Can you please guide me on the way to create these variants?
    > You will need to save different variants for different projects and then assign the variants to your job.
    Will I need to create variants again and again and assign different projects individually? We are not sure which projects will be created in the future, so I need guidance on how these variants can help me sort out the e-mail issue.
    Regards
    Saurabh

  • SSIS package compiled successfully and executed successfully in SQL Server Integration Services, but fails to run from the MS SQL job scheduler

    Hi Everyone,
    I am having a problem transferring data from MS SQL 2005 to an IBM AS/400. Previously my SSIS package was running perfectly, but some changes had to be made for the system to keep working well. My changes are minimal and just upgrades (although I did include DELETE statements to truncate the AS/400 table before I insert fresh data from the MS SQL table into the same AS/400 table), so I compiled my SSIS package and it ran successfully. I then deployed it to SQL Server Integration Services as one of the packages and manually executed it, with the same result: successful again. But when I try to run it from a MS SQL Agent job, the job fails with the messages shown below, extracted from the job's View History.
    Date today
    Log Job History (MSSQLToAS400)
    Step ID 1
    Server MSSQLServer
    Job Name MSSQLToAS400
    Step Name pumptoAS400
    Duration 00:00:36
    Sql Severity 0
    Sql Message ID 0
    Operator Emailed
    Operator Net sent
    Operator Paged
    Retries Attempted 0
    Message
    Executed as user: MSSQLServer\SYSTEM. ... 9.00.4035.00 for 32-bit  Copyright (C) Microsoft Corp 1984-2005. All rights reserved.    
    Started:  today time  
    Error: on today time     
    Code: 0xC0202009     Source: SSISMSSQLToAS400 Connection manager "SourceToDestinationOLEDB"     
    Description: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. 
    Error code: 0x80004005.  An OLE DB record is available.  
    Source: "IBMDA400 Session"  
    Hresult: 0x80004005  
    Description: "CWBSY0002 - Password for user AS400ADMIN on system AS400SYSTEM is not correct ".  End Error  
    Error: today     
    Code: 0xC020801C     
    Source: Data Flow Task OLE DB Destination [5160]     
    Description: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER.  The AcquireConnection method call to the connection manager "DestinationClearData" failed with error code 0xC0202009.  There may be error messages posted before
    this with more information on why the AcquireConnection method ca...  The package execution fa...  The step failed.
    I hope somebody can shed some hints or tips for me to overcome this problem of mine. Thanks for your help in advance. I have scoured the MSDN forums and found no solution for my problem yet.
    PS: In Integration Services, when I deployed the package I set the security of the packages to Rely on server...
    Hope this will help.

    Hi Ironmaidenroxz,
    From the message “Executed as user: MSSQLServer\SYSTEM”, we can see that the SQL Server Agent job ran under the Local System account. However, the Local System account does not natively have network rights; therefore, the job failed to communicate with the remote IBM AS/400 server.
    To address this issue, you need to create a proxy account for SQL Server Agent to run the job. When creating the credentials for the proxy account, you can use the Windows domain account under which you executed the package manually.
    References:
    How to: Create a Credential
    How to: Create a Proxy
    Regards,
    Mike Yin
    TechNet Community Support

  • Mail Client Job Scheduled

    Hi,
    I have configured a mail sender adapter to read emails from an Exchange server:
    imap://host:port/InBox
    In communication channel monitoring I don't see any error message, but I see the status message "Mail Client Job Scheduled" and there is nothing in MONI.
    Can anyone please help me with this issue?
    Thanks in Advance..
    Regards
    Sri
    Edited by: sriram_Kan on Mar 7, 2012 9:47 PM
    Edited by: sriram_Kan on Mar 7, 2012 10:01 PM
    Edited by: sriram_Kan on Mar 7, 2012 10:10 PM

    Hi,
    The error you are seeing occurs when the ports to the mail server are not correctly opened.
    Check with your Basis or IS team about opening the ports/access to the mail server.
    Once the connection is established after the ports are opened, activate the channel once again and it should work!
    Cheers,
    Souvik

  • Problem with background job schedule

    Hi friends,
    How can I schedule more than one data loading job in the background?
    When I try to schedule a second job, the first scheduled job gets overwritten, and only the second job is active.
    I tried this in the InfoPackage scheduler.
    How can I overcome this?
    Regards
    sudhakar

    Hello Ragu,
    How are you?
    Use process chains for this kind of multiple job scheduling.
    I think you are trying to schedule the same InfoPackage!
    Could you elaborate on your issue?
    Best Regards....
    Sankar Kumar
    +91 98403 47141

  • SSIS package works in development environment but fails when job scheduler executes, file path invalid

    The SSIS package works in the development environment but fails when the job scheduler executes it: the file path is reported as invalid.
    It is a relatively simple package that uses an OLE DB connection to a MS FoxPro 9.0 database.
    The failure in the job log states that the path is invalid. It is a network path (\\192.168.1.xxx\foldername); this has been run several ways, with the remote computer mapped as a network drive and with the \\ notation described above.
    Thinking it was a security issue between the SQL Agent account and my account, I tested by substituting myself as a proxy account for the agent when running this job; again, the same result, a failure on the network path.
    One issue I see is that the remote computer is running Server 2000 (legacy software incompatible with newer versions).
    Is it possible that this is a security issue, since, if I understand correctly, the current MS domain security model didn't exist until Server 2003?

    Hi REIData,
    Have you got the issue resolved? Based on your description, please make sure the target folder is shared properly. If the computer on which the SQL Server Agent job runs is not joined to the same domain as the server that hosts the shared folder, you have to share the folder with everyone by adding "Everyone" to the people list on the File Sharing page of the folder and assigning it "Read/Write" permission.
    Regards,
    Mike Yin
    TechNet Community Support

  • Job Scheduler DS consumes all connections in Pool.

    Hi all,
    I use Weblogic 12.1.3.
    My objective is to create a job scheduler, using the commonj Timer API, that runs in a cluster environment.
    For this, I created a cluster with two node servers (I created this in the integrated WebLogic server in JDeveloper, for simplicity).
    Then I created the data source that points to the schema where the WEBLOGIC_TIMERS and ACTIVE tables are (which are needed for persistence of the timers), targeted this data source at the nodes in the cluster, and then went to Cluster -> Configuration -> Scheduling and selected the respective data source as "Data Source For Job Scheduler".
    After I do this and the servers are up, all the connections in the data source's pool are consumed. It seems like connections are continuously being made from WebLogic to the database.
    The connection itself seems fine, since I can connect from SQL Developer and I also tested it when I created the data source.
    If I have a look in the logs of the two servers, I see errors like this:
    <BEA-000627> <Reached maximum capacity of pool "JDBC Data Source-2", making "0" new resource instances instead of "1".>
    Can you give me an idea of what the issue might be?
    Please let me know if I should provide more information.
    Thanks.
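    For reference, work is normally handed to the cluster-wide Job Scheduler by looking up the commonj TimerManager and registering a serializable TimerListener. The sketch below illustrates that pattern; the JNDI name "weblogic.JobScheduler" and the 30-second period are assumptions based on the WebLogic Job Scheduler documentation rather than details from this thread, so verify them against your release.

    import java.io.Serializable;

    import javax.naming.InitialContext;

    import commonj.timers.Timer;
    import commonj.timers.TimerListener;
    import commonj.timers.TimerManager;

    // Sketch only: registers a recurring cluster-wide job with the Job Scheduler.
    public class ReportJob implements TimerListener, Serializable {

        public void timerExpired(Timer timer) {
            // The actual work, run on whichever cluster member fires the timer.
            System.out.println("ReportJob fired at " + System.currentTimeMillis());
        }

        public static void schedule() throws Exception {
            InitialContext ic = new InitialContext();
            // JNDI name assumed from the WebLogic Job Scheduler docs.
            TimerManager jobScheduler = (TimerManager) ic.lookup("weblogic.JobScheduler");
            // Run every 30 seconds, starting immediately.
            jobScheduler.schedule(new ReportJob(), 0, 30 * 1000);
        }
    }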

    It's not an issue WebLogic can address. The thortech application is independently using
    the UCP connection pool product, not WebLogic connection pooling, so Thortech APIs
    would be the only way to debug/reconfigure the UCP.

  • Best SAP Security Practices: print, file, job scheduling, archiving

    Hello all, I would like to know, from your experience, what the best security practices are for the list below:
    - Printer security (especially check printing)
    - File path security for export/import
    - Best practices for job scheduling and spool files
    - Archiving process (I can't think of anything specific to security other than the Security Audit Log)
    Are there any special transactions/system settings/parameters that must be in place in order to harden SAP systems?
    Do you have any related documentation?
    I mean, for example, for jobs and spool: I think users should only be able to run their own jobs and see their own print output. Is there a parameter to restrict print output to its user, etc.?
    Please let me know your comments about those related issues.
    I appreciate your help.
    Thanks a lot.
    Ahmed

    Hi,
    PFCG_TIME_DEPENDENCY
    This job is best run once a day, ideally just after 12:01 AM, as it removes role assignments that are no longer valid for the current date. As role assignment is on a date basis, there is no advantage in running it hourly.
    /VIRSA/ZVFATBAK
    This is for GRC 5.3; the job collects Firefighter ID (FFID) logs from the backend into the GRC repository. If you have frequent FFID usage you can schedule it hourly, or even every 30 minutes if your server has enough bandwidth to fetch the latest log report; otherwise you can schedule it twice a day. It is purely based on your need.
    Hope this helps.
    BR,
    Mangesh

  • Obtain Job invoker for a SSIS job scheduled in SQL Server Agent

    Hello,
    I was asked to identify the invoker of a particular SSIS job scheduled in SQL Server Agent (in SQL Server Management Studio 2008 R2). I noticed that after running the job, a record can be found in msdb.dbo.sysjobhistory whose [message] column says that the job was invoked by 'Domain\User'. Is there any way I can capture that information and load it into an audit table by adding an additional script step to the job? I have heard about using tokens to get the job_id, but what about the actual user name of whoever runs the job?
    Thanks

    Just set the retry attempts to whatever number you want (2, as per your original explanation) in the job step properties, as below.
    Add logic to include a delay of 10 minutes; you can use the WAITFOR statement for that.
    see
    http://www.mssqltips.com/sqlservertip/1423/create-delays-in-sql-server-processes-to-mimic-user-input/
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • Errors in job scheduled SSIS package

    A job scheduled for an SSIS package failed with the errors below:
    Microsoft (R) SQL Server Execute Package Utility  Version 10.50.4321.0 for 64-bit  Copyright (C) Microsoft Corporation 2010. All rights reserved.    
    Started:  5:00:02 AM  
    Error: 2015-01-02 05:06:39.25     
    Code: 0xC0202009     
    Source: Data Flow Task OLE DB Destination [46]     
    Description: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. 
    Error code: 0x80004005.  An OLE DB record is available.  
    Source: "Microsoft SQL Server Native Client 10.0"  
    Hresult: 0x80004005  
    Description: "Could not allocate a new page for database because of insufficient disk space in filegroup 'PRIMARY'. Create the necessary
    space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.".  End Error  Error: 2015-01-02 05:06:39.42     
    Code: 0xC0209029     
    Source: Data Flow Task OLE DB Destination [46]     
    Description: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR.  The "input "OLE DB Destination Input" (59)"
    failed because error code 0xC020907B occurred, and the error row disposition on "input "OLE DB Destination Input" (59)" specifies failure on error. An error occurred on the specified object of the specified component.  There may be
    error messages posted before this with more information about the failure.  
    End Error  Error: 2015-01-02 05:06:39.44     
    Code: 0xC0047022     
    Source: Data Flow Task SSIS.Pipeline     
    Description: SSIS Error Code DTS_E_PROCESSINPUTFAILED.  The ProcessInput method on component "OLE DB Destination" (46) failed
    with error code 0xC0209029 while processing input "OLE DB Destination Input" (59). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow
    task to stop running.  There may be error messages posted before this with more information about the failure.  
    End Error  Error: 2015-01-02 05:06:39.48     
    Code: 0xC02020C4     
    Source: Data Flow Task Flat File Source [1]     
    Description: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.  
    End Error  Error: 2015-01-02 05:06:39.50     
    Code: 0xC0047038     
    Source: Data Flow Task SSIS.Pipeline     
    Description: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED.  The PrimeOutput method on component "Flat File Source" (1) returned
    error code 0xC02020C4.  The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.  There may be error
    messages posted before this with more information about the failure.  
    End Error  Error: 2015-01-02 05:16:23.49     
    Code: 0x00000000     
    Source: Execute SQL Task 1      
    Description: Could not allocate space for object 'bo.TLE'.'PK_new' in database because the 'PRIMARY' filegroup is full. Create disk space
    by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.  
    End Error  Error: 2015-01-02 05:16:23.70     
    Code: 0xC002F210     
    Source: Execute SQL Task 1 Execute SQL Task     
    Description: Executing the query "Sp_load" failed with the following error: "Warning: Null value is eliminated by an aggregate
    or other SET operation.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.  
    End Error  DTExec: The package execution returned DTSER_FAILURE (1).  
    Started:  5:00:02 AM  
    Finished: 5:16:27 AM  
    Elapsed:  984.928 seconds.  The package execution failed.  The step failed.
    Please help!!!!

    Hi,
    Based on the error message "Could not allocate a new page for database because of insufficient disk space in filegroup 'PRIMARY'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup", we can see that the issue is caused by insufficient disk space in the 'PRIMARY' filegroup of the database.
    To fix this issue, you can add additional files to the filegroup (add a new file to the PRIMARY filegroup on the Files page of the database properties) or enable autogrowth on the existing files in the filegroup to provide the necessary space.
    As for the job executing successfully on the next run, I think someone or something must have freed up or added space in the meantime.
    The following document about adding data or log files to a database is for your reference:
    http://msdn.microsoft.com/en-us/library/ms189253.aspx
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • How to do the job scheduling in BDC Call transaction

    Hi Experts,
    I have a query: how do I schedule a BDC call transaction program as a background job?
    If anybody knows the answer, please reply.
      Thanks.
       Regards,
        Rekha

    Hi,
    Any program can be scheduled, whether it is a BDC or a report, through transaction SM36.
    But remember that if your BDC uses the GUI_UPLOAD function module, it won't work, because GUI_UPLOAD and GUI_DOWNLOAD do not work in background jobs.
    If you use OPEN DATASET / READ DATASET instead, it can be scheduled; i.e. a BDC can work in the background if your program retrieves its data from the application server.
    Revert back if there are any issues.
    Reward with points if helpful.
    Regards,
    Naveen
