Automate a SQL*Loader job

Hi,
I need to automate a scheduled job that imports Excel files into Oracle tables using SQL*Loader.
Can anyone please tell me how to do this?
thanks
Jenny

Depends on a couple of factors.
If you are on a UNIX platform, you can create a shell script and schedule it with cron.
If you are on a Windows platform, you can create a batch file and schedule it with the Windows scheduler.
Or, if you are on Oracle 9i or 10g, you could use the external table feature instead of SQL*Loader. Then you could write a stored procedure to process the external table and schedule it with the database job queue (DBMS_JOB, or DBMS_SCHEDULER in 10g). This would probably be my preference.
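
For example, on UNIX the whole pipeline can be one small script on the server, scheduled with cron. A minimal sketch (paths and connect string are hypothetical; note SQL*Loader reads text files, so the Excel sheets would first need to be saved as CSV):
#!/usr/bin/env ksh
# Hypothetical nightly load: pick up the incoming CSV and load it.
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
sqlldr userid=... control=/data/ctl/import.ctl data=/data/incoming/import.csv \
      log=/data/log/import.log bad=/data/log/import.bad
# crontab entry to run it every night at 01:00:
# 0 1 * * * /usr/local/bin/load_import.sh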

Similar Messages

  • How Can I Run a SQL*Loader Job from the Scheduler

    How can I run a SQL*Loader job from the scheduler, so that it runs every day?

    (Same answer as above: a shell script with cron on UNIX, a batch file with the Windows scheduler, or an external table processed by a stored procedure scheduled through the database job queue.)

  • Running a SQL*Loader Job from the Scheduler

    How can I run a SQL*Loader job from the scheduler?
    Message was edited by:
    jus

    The trick is to create a wrapper script that is referred to by the PROGRAM object with arguments. The wrapper should do the work.
    /usr/local/bin/do_sqlldr.sh:
    #!/usr/bin/env ksh
    # setup actions: set ORACLE_HOME, PATH and whatever else the load needs
    sqlldr control=...
    Don't forget to check who is the owner of $ORACLE_HOME/bin/extjob and that the owner is available.
    My question is how this can be fitted into a DTAP environment where the Development and Test environments are on the same server and have the same job definitions. Development and Test should use different datafile and logfile locations.
    regards,
    Ronald
    http://ronr.nl/unix-dba
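
    One possible answer to the DTAP question (a sketch, not from the thread): let the wrapper derive the datafile and logfile locations from an environment argument, so Development and Test can share one job definition. Directory layout is hypothetical:
    #!/usr/bin/env ksh
    # Pick per-environment locations from a single argument (DEV or TST).
    ENV=${1:?usage: do_sqlldr.sh DEV|TST}
    DATA_DIR=/data/$ENV/incoming
    LOG_DIR=/data/$ENV/log
    # setup actions, then run the load with environment-specific paths
    sqlldr control=... data=$DATA_DIR/load.dat log=$LOG_DIR/load.log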

  • SQL*Loader job exits unexpectedly and causes table to be locked with NOWAIT

    I have a weekly report job that I run where I have to load about 48 logs with about 750k rows of data in each log. To facilitate this, we have been using a Java job that runs SQL*Loader as an external Process (using ProcessBuilder), one after the other. Recently however, this process has been terminating abnormally during the load which is causing a lock on the table and basically causes the process to grind to a halt until we can open a ticket with the DB team to kill the session that is hung. Is there maybe a better way to handle this upload process than using SQL*Loader or is there some change I could make in either the control file or command line to stop it from dying a horrible death?
    At the start of the process, I truncate the table that I'm loading to and then run this command line with the following control file:
    COMMAND LINE:
    C:\Oracle\ora92\BIN\SQLLDR.EXE userid=ID/PASS@DB_ID load=10000000 rows=100000 DIRECT=TRUE SKIP_INDEX_MAINTENANCE=TRUE control=ControlFile.ctl data=logfile.log
    CONTROL FILE:
    UNRECOVERABLE
    LOAD DATA
    INFILE *
    APPEND
    PRESERVE BLANKS
    INTO TABLE MY_REPORT_TABLE
    FIELDS TERMINATED BY ","
    (
         filler_field1          FILLER char(16),
         filler_field2          FILLER char(16),
         time               TIMESTAMP 'MMDDYYYY-HH24MISSFF3' ENCLOSED BY '"',
         partne          ENCLOSED BY '"',
         trans               ENCLOSED BY '"',
         vendor          ENCLOSED BY '"' "SUBSTR(:vendor, 1, 1)",
         filler_field4          FILLER ENCLOSED BY '"',
         cache_hit_count,
         cache_get_count,
         wiz_trans_count,
         wiz_req_size,
         wiz_res_size,
         wiz_trans_time,
         dc_trans_time,
         hostname          ENCLOSED BY '"',
         trans_list          CHAR(2048) ENCLOSED BY '"' "SUBSTR(:trans_list, 1, 256)",
         timeouts,
         success ENCLOSED BY '"'
    )
    Once all of the logs have finished loading, I rebuild the indexes on the table and then start the report process. It seems like it's just dying on random logs now; re-running the process, it fails at a different point each time.
    EDIT: The reasons for the UNRECOVERABLE and SKIP_INDEX_MAINTENANCE are to speed the load up. As it is, it still can take 7-12 minutes for each log to load, it's even worse without those on. Overall it's taking about 18 hours for this process to run from start to finish.
    Edited by: user6676140 on Jul 7, 2011 11:37 AM
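
    A wrapper that checks SQL*Loader's documented exit codes would at least make the failure visible immediately instead of leaving the table locked until someone notices. A minimal sketch, assuming a UNIX-style shell (the poster's job runs on Windows, where a batch file testing %ERRORLEVEL% would be the analogue); file names are hypothetical:
    #!/usr/bin/env ksh
    # Run the 48 logs serially and stop on the first failure.
    # sqlldr exit codes on UNIX: 0=success, 1=failure, 2=warning, 3=fatal.
    for f in logs/*.log; do
        sqlldr userid=... control=ControlFile.ctl data=$f direct=true
        rc=$?
        if [ $rc -ne 0 ]; then
            echo "sqlldr failed on $f with exit code $rc" >&2
            exit $rc
        fi
    done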

    Please note that my post stated:  "I have opened a ticket with Oracle support. after 6 days have not had the help that I need."
    I also agree that applying the latest PSU is a Best Practice, which Oracle defines as "a cumulative collection of high impact, low risk, and proven fixes for a specific product or component".
    With that statement I feel there should not be the drastic issues that we have seen.  Our policy is to always apply PSUs, no matter what the product or component, without issue. 
    Except for now.  We did our research, and only open an Oracle ticket when we need expert help.  That has not been forthcoming from them, but we are still working the ticket.
    Hence, I opened this forum because many times I have found help here, where others have faced the same issue and now have an insight.  When having a serious problem I like to use all of my resources, this forum being one of those.
    To restate the question:
    (1)     97% of our databases reside on RAC.  From the Search List for Databases, we do not see the columns Sessions:CPU, Sessions: I/O, Sessions: Other, Instance CPU%, and are told this is working as designed because you must monitor the instance, not the database, with RAC. 
         (a) After applying PSU2 the Oracle Load Map no longer showed any databases. 
    All of this in (1) is making the tool less useful for monitoring at the database level, which we do most of the time.
    (2) Within a few days of applying PSU2, we couldn't log into EM and got the error "Authentication failed. If problem persists, contact your system administrator."
        (b) searching through the emoms.trc files we found the errors listed above posting frantically. 
    After rolling back PSU we are back in business.
    However, there is still the need to remain current with the components of EM.
    I am looking for suggestions, insights, and experience. While I appreciate Akanksha answering so quickly, a recommendation to open an SR is not what I need.
    Sherrie

  • How do I automatically backup SQL Agent jobs and SSIS packages on the mirror daily?

    I have seen this question asked before but I could not find a satisfactory answer. What is the best solution to get your SQL Agent jobs/schedules/etc. and your SSIS packages on the mirror server? Here's the details:
    Server A is the principal with 2 DBs mirrored over to server B. Everything is fine and dandy, DBs are synched and all good. In Disaster Recovery testing, we need to bring up server B, which now will serve as the principal. Server A is inaccessible. Now,
    we need all our jobs that are setup in server A to be in server B, ready to go, but disabled. We also need all our SSIS packages properly stored. Yes, we store our packages in the MSDB in server A.
    Now, I can see a few answers coming my way.
    1- Backup the MSDB to server B. When you bring server B up as principal in DR, restore the MSDB. All your jobs,schedules,steps, SSIS packages will be there. Is this possible? Can this be done on server B without rendering it incapable of serving as the principal
    for the mirrored DBs? My fear with this option is that there may be information in the MSDB itself about the mirroring state of the DBs? Or is all that in the Master DB? Have you tried this?
    2- Right click each job, script them out, re-create them on server B... No, thank you very much. :) I am looking for an AUTOMATED, DAILY backup of the jobs and SSIS packages. Not a manual process. Yes, we do change jobs and packages quite often and doing
    the job twice on two servers is not an option.
    3- Use PowerShell.. Really? Are we going back to scripting at the command prompt like that, that fast?
    Since I fear option number 3 will be the only way to go, any hints on scripts you guys have used that you like that does what I need to do?
    Any other options?
    Any help GREATLY appreciated. :-) I can be sarcastic but I am a good guy..
    Raphael
    rferreira

    I would go with option number 3. Once you have a script, simply run it:
    param([string]$serverName,[string]$outputPath)
    function script-SQLJobs([string]$server,[string]$outputfile)
    {
        # Load the SMO assembly
        [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
        $srv = New-Object Microsoft.SqlServer.Management.Smo.Server("$server")
        $scrp = New-Object Microsoft.SqlServer.Management.Smo.Scripter($srv)
        $scrp.Options.ScriptDrops = $FALSE
        $scrp.Options.WithDependencies = $TRUE
        # Skip the built-in sys* jobs
        $jobs = $srv.JobServer.get_Jobs() | Where-Object {$_.Name -notlike "sys*"}
        foreach($job in $jobs)
        {
            $script = $job.Script()
            $script >> $outputfile
            "GO" >> $outputfile
        }
    }
    # Example: script-SQLJobs "SQLSRV12" "C:\Jobs\test.txt"
    Best Regards,Uri Dimant SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/

  • SQL*Loader not able to load more than 4.2 billion rows

    Hi,
    I am facing a problem with the SQL*Loader utility.
    The SQL*Loader process stops after loading 342 GB of data, i.e. it seems to load 4.294 billion rows out of 6.9 billion, and the loader process completes without throwing any error.
    Is there any limit on the volume of data SQL*Loader can load?
    Also, how can we identify that the SQL*Loader process has not loaded all the data from the file, given that it throws no error?
    Thanks in Advance.

    It may be a problem with the DB config, not with SQL*Loader...
    Check all the parameters.
    Maximizing SQL*Loader Performance
    SQL*Loader is flexible and offers many options that should be considered to maximize the speed of data loads. These include:
    1. Use Direct Path Loads - The conventional path loader essentially loads the data by using standard insert statements. The direct path loader (direct=true) loads directly into the Oracle data files and creates blocks in Oracle database block format. The fact that SQL is not being issued makes the entire process much less taxing on the database. There are certain cases, however, in which direct path loads cannot be used (clustered tables). To prepare the database for direct path loads, the script $ORACLE_HOME/rdbms/admin/catldr.sql must be executed.
    2. Disable Indexes and Constraints. For conventional data loads only, the disabling of indexes and constraints can greatly enhance the performance of SQL*Loader.
    3. Use a Larger Bind Array. For conventional data loads only, larger bind arrays limit the number of calls to the database and increase performance. The size of the bind array is specified using the bindsize parameter. The bind array's size is equivalent to the number of rows it contains (rows=) times the maximum length of each row.
    4. Use ROWS=n to Commit Less Frequently. For conventional data loads only, the rows parameter specifies the number of rows per commit. Issuing fewer commits will enhance performance.
    5. Use Parallel Loads. Available with direct path data loads only, this option allows multiple SQL*Loader jobs to execute concurrently.
    $ sqlldr control=first.ctl parallel=true direct=true
    $ sqlldr control=second.ctl parallel=true direct=true
    6. Use Fixed Width Data. Fixed width data format saves Oracle some processing when parsing the data. The savings can be tremendous, depending on the type of data and number of rows.
    7. Disable Archiving During Load. While this may not be feasible in certain environments, disabling database archiving can increase performance considerably.
    8. Use unrecoverable. The unrecoverable option (unrecoverable load data) disables the writing of the data to the redo logs. This option is available for direct path loads only.
    Edited by: 879090 on 18-Aug-2011 00:23
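
    For illustration (a sketch; file names hypothetical): a conventional-path invocation tuned with a larger bind array and fewer commits, and a direct-path invocation that bypasses the SQL layer. The two option sets don't mix, since bindsize applies to conventional loads only:
    $ sqlldr userid=... control=load.ctl bindsize=1048576 rows=50000
    $ sqlldr userid=... control=load.ctl direct=true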

  • Registering LOAD job with OEM fails

    I tried to register a Mapping from a flat file to a table (a SQL*Loader job) from OWB 9i (9.0.3) to OEM (9.2). This worked fine. However, the job failed to run because, as the output states, the control file is not passed. And I can see that it is not among the job parameters.
    How can it be that the standard job registering process does not include that essential parameter?
    Is there a way to get a data load job properly defined in OEM?
    Why is this job scheduled as a TCL job and not as a LOAD job?
    thanks very much for any input!
    Lucas

    Lucas,
    Your first issue is regarding the control file not being passed. In order to fix that you have to specify the control file in the mapping configuration. Beware that this control file is being read through utl_file, i.e. you will have to set the utl_file_dir in the init parameters of your database to the correct directory in order for the database to be able to read it. Basically, the control file is being read in order to fill the audit tables with the correct data.
    Your second issue is regarding the type of job. As you may know OWB 'integrates' with Oracle Workflow (OWF) as well. The OWF solution also relies on the use of OEM for job execution. All of that is currently managed by Workflow Queue Listener. In order for Workflow Queue listener to be able to correctly handle the OEM jobs (defined as external functions in OWF) the job has to be defined as is. Besides, with the current way of registering the jobs all jobs (PL/SQL, ABAP or SQL Loader) are registered in the same way and directed by parameters.
    Hope this helps,
    Mark.

  • Automate SQL*LOADER

    Hi,
    I am working on a Decision Support System project.
    I need to load flat files into Oracle tables through SQL*Loader, and the entire process should be invoked through a Java front end.
    How do I go about it?
    I will deeply appreciate any help.
    Raghu.

    Hi,
    In our previous project, we customized (automated) SQL*Loader. We were on UNIX, so we used shell scripts to create the control file (a script can create the control file automatically) and then call SQL*Loader to load the data into the tables.
    You can use the same logic here.
    If your flat file contents are always in the same format, you can use static control files (created once, at installation time) and invoke the load whenever you want. Don't go for dynamic control files.
    1. If you are using Java as the front end, you can use native methods to call the sqlldr executable. The problem is that you cannot invoke it from a client machine; you can do it only on the server side.
    2. You can also try the external procedure method: call a shared library, and let the shared library invoke SQL*Loader (write a small C shared-library program for invoking SQL*Loader). This way you can invoke SQL*Loader from a client machine as well.
    3. One more way is a listener: create a listener program and run it on the server side as a background process. Whenever there is a request, it invokes SQL*Loader.
    With regards,
    Boby Jose Thekkanath
    [email protected]
    Dharma Computers(p) Ltd.
    Bangalore-India.
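
    A sketch of the shell-script approach described above, with the control file generated on the fly (table, columns and paths are made up for illustration):
    #!/usr/bin/env ksh
    # Generate a control file, then call SQL*Loader with it.
    CTL=/tmp/load.ctl
    cat > $CTL <<'EOF'
    LOAD DATA
    INFILE '/data/in/load.dat'
    APPEND
    INTO TABLE staging_table
    FIELDS TERMINATED BY ','
    (col1, col2, col3)
    EOF
    sqlldr userid=... control=$CTL log=/tmp/load.log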

  • Can we schedule Loader Jobs (SQL*Loader) using Grid Control?

    Can we schedule SQL*Loader jobs/processes using Grid Control for an 11g database?
    Or
    Is it good to schedule them as external jobs using DBMS_SCHEDULER?
    OS is Linux. Database is 11g R1. Grid Control will be the latest, I believe 10g R3.
    Any other suggestions... (I know it can be done using OS utilities like cron and others, but that is not an option for now.)
    Thanks in advance.

    Try this:
    -> Create a shell script to execute sqlldr.
    -> On Grid Control, create an "OS COMMAND" job and pass this script as the command. You'll have options to schedule the job.
    Let us know how it works.
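
    A sketch of such a script (paths and connect string are hypothetical). Jobs started by the Grid Control agent typically start with a minimal environment, so set the Oracle variables explicitly:
    #!/bin/sh
    ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1; export ORACLE_HOME
    PATH=$ORACLE_HOME/bin:$PATH; export PATH
    sqlldr userid=... control=/u01/loads/load.ctl log=/u01/loads/load.log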

  • Automatic SQL*Loader export

    Hi,
    Does anybody know whether it is possible to export more than one table in SQL*Loader format from SQL Developer?
    I have no idea how it could be done (scripting, Java or a SQL extension...???)
    If somebody has already managed to perform this kind of stuff, can he help me? I would give him eternal thanks ;-)
    Thanks in advance,
    Regards,
    Raphael GEGOUE
    France

    At this stage it is not possible to export more than one table to SQL*Loader format at a time. Please post a request on the Exchange for this.
    Regards
    Sue

  • SQL Loader and Insert Into Performance Difference

    Hello All,
    I'm in a situation where I need to measure the performance difference between SQL*Loader and INSERT INTO. Say there are 10,000 records in a flat file and I want to load them into a staging table.
    I know that if I use PL/SQL UTL_FILE to do this job, performance will degrade (don't ask me why I'm going for UTL_FILE instead of SQL*Loader). But I don't know by how much. Can anybody tell me the performance difference in % (like a 20% decrease) for 10,000 records?
    Thanks,
    Kannan.

    Kannan B wrote:
    "Do not confuse the topic, as I told im not going to use External tables. This post is to speak the performance difference between SQL Loader and Simple Insert Statement."
    I don't think people are confusing the topic.
    External tables are a superior means of reading a file, as they don't require any command-line calls or external control files to be set up. All that is needed is a single external table definition, created in a similar way to any other table (just with the additional external table information, obviously). It also eliminates the need for a 'staging' table in the database, since the data can be queried as needed directly from the file; and if the file changes, so does the data seen through the external table, automatically, without re-running any SQL*Loader process.
    Who told you not to use External Tables? Do they know what they are talking about? Can they give a valid reason why external tables are not to be used?
    IMO, if you're considering SQL*Loader, you should be considering External tables as a better alternative.
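
    To make the comparison concrete, a minimal external-table sketch (directory, table and columns are hypothetical), driven from the shell:
    sqlplus -s ID/PASS@DB_ID <<'EOF'
    -- requires CREATE ANY DIRECTORY (or a DBA to create the directory)
    CREATE DIRECTORY ext_dir AS '/data/in';
    CREATE TABLE staging_ext (
      col1 NUMBER,
      col2 VARCHAR2(50)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('load.dat')
    );
    -- queried like any table; no staging load step needed
    SELECT COUNT(*) FROM staging_ext;
    EOF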

  • Management Studio Crashing When Adding SSIS Step to SQL Agent Job

    Hi
    I am running Windows Server 2012, SQL Server 2012 SP2 CU2, SQL Server Management Studio 2012 and Visual Studio 2012.
    I have existing SQL Agent Jobs that run SSIS packages and they all currently work. However, if I try to edit them or add a new job that uses an SSIS package, Management Studio will crash. I was having this problem a couple of weeks ago when I was still on SQL Server 2012 SP1 CU9, so I updated to SQL Server 2012 SP2 CU2; the problem went away for a couple of weeks and I was able to add new SQL Agent jobs that run SSIS packages. But now the issue is back and there are no new updates to try to fix it.
    Has anyone seen this before and found a reliable fix to it?
    Steps to cause crash:
    1) Open a new SQL Agent Job
    2) Select Steps and add a New Step
    3) Change the job type to SQL Server Integration Services Package
    4) SSMS crashes
    Error Message Below:
    ===================================
    The type initializer for '<Module>' threw an exception. (SqlManagerUI)
    Program Location:
       at Microsoft.SqlServer.Management.SqlManagerUI.DTSJobSubSystemDefinition.Microsoft.SqlServer.Management.SqlManagerUI.IJobStepPropertiesControl.Load(JobStepData data)
       at Microsoft.SqlServer.Management.SqlManagerUI.JobStepProperties.UpdateJobStep()
       at Microsoft.SqlServer.Management.SqlManagerUI.JobStepProperties.typeList_SelectedIndexChanged(Object sender, EventArgs e)
       at System.Windows.Forms.ComboBox.OnSelectedIndexChanged(EventArgs e)
       at System.Windows.Forms.ComboBox.WmReflectCommand(Message& m)
       at System.Windows.Forms.ComboBox.WndProc(Message& m)
       at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
       at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
       at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
       at System.Windows.Forms.UnsafeNativeMethods.SendMessage(HandleRef hWnd, Int32 msg, IntPtr wParam, IntPtr lParam)
       at System.Windows.Forms.Control.SendMessage(Int32 msg, IntPtr wparam, IntPtr lparam)
       at System.Windows.Forms.Control.ReflectMessageInternal(IntPtr hWnd, Message& m)
       at System.Windows.Forms.Control.WmCommand(Message& m)
       at System.Windows.Forms.Control.WndProc(Message& m)
       at System.Windows.Forms.ScrollableControl.WndProc(Message& m)
       at System.Windows.Forms.ContainerControl.WndProc(Message& m)
       at System.Windows.Forms.UserControl.WndProc(Message& m)
       at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
       at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
       at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
       at System.Windows.Forms.UnsafeNativeMethods.CallWindowProc(IntPtr wndProc, IntPtr hWnd, Int32 msg, IntPtr wParam, IntPtr lParam)
       at System.Windows.Forms.NativeWindow.DefWndProc(Message& m)
       at System.Windows.Forms.Control.DefWndProc(Message& m)
       at System.Windows.Forms.Control.WmCommand(Message& m)
       at System.Windows.Forms.Control.WndProc(Message& m)
       at System.Windows.Forms.ComboBox.WndProc(Message& m)
       at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
       at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
       at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
       at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
       at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData)
       at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
       at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
       at System.Windows.Forms.Application.RunDialog(Form form)
       at System.Windows.Forms.Form.ShowDialog(IWin32Window owner)
       at System.Windows.Forms.Form.ShowDialog()
       at Microsoft.SqlServer.Management.SqlManagerUI.JobSteps.newJobStep_Click(Object sender, EventArgs e)
       at System.Windows.Forms.Control.OnClick(EventArgs e)
       at System.Windows.Forms.Button.OnClick(EventArgs e)
       at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
       at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
       at System.Windows.Forms.Control.WndProc(Message& m)
       at System.Windows.Forms.ButtonBase.WndProc(Message& m)
       at System.Windows.Forms.Button.WndProc(Message& m)
       at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
       at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
       at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
       at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
       at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData)
       at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
       at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
       at System.Windows.Forms.Application.RunDialog(Form form)
       at System.Windows.Forms.Form.ShowDialog(IWin32Window owner)
       at System.Windows.Forms.Form.ShowDialog()
       at Microsoft.SqlServer.Management.SqlMgmt.RunningFormsTable.RunningFormsTableImpl.ThreadStarter.StartThread()
    ===================================
    The C++ module failed to load.
     (DTEParseMgd)
    Program Location:
       at <CrtImplementationDetails>.LanguageSupport.Initialize(LanguageSupport* )
       at .cctor()
    ===================================
    Index was outside the bounds of the array. (DTEParseMgd)
    Program Location:
       at _getFiberPtrId()
       at <CrtImplementationDetails>.LanguageSupport._Initialize(LanguageSupport* )
       at <CrtImplementationDetails>.LanguageSupport.Initialize(LanguageSupport* )
    Thanks,
    Anthony

    I ended up being able to work around this without having to recreate everything on a Windows Server 2008 R2 machine. I did this by setting up an Azure point-to-site VPN from my computer to the Windows Server 2012 VM hosted on Azure. I was then able to connect SSMS from my computer to the Azure VM, allowing me to create and edit new SQL Agent jobs that run SSIS packages. Here are the steps:
    1) Connect to the VPN from another machine besides the server.
    2) Open SSMS using the runas.exe command with the Domain\User Name of a Windows account on the server.
    - Ex. Runas.exe /netonly /user:DOMAIN\USER "C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\Ssms.exe"
    3) Connect to the VM's Database Engine using its internal IP address, the port that SQL Server is listening on (default is 1433), and a SQL login set up on the server.
    - Ex. 10.0.0.1,1433
    4) Create a new job with the type SQL Server Integration Services Package. Select the package source (SSIS Catalog). I could not get the server to be automatically discovered here, so enter the server's internal IP address and select the package you want to run.
    5) Right-click the new SQL Agent job and choose Script Job As, CREATE To, New Query Window. This is the script to create the SQL Agent job, and it needs to be edited to contain the name of the server at the internal IP address used earlier.
    6) Edit the @command= line of the script to use the server name. Change the highlighted part below to the server name.
    - Ex. @command=N'/ISSERVER "\"\SSISDB\PACKAGELOCATION\"" /SERVER "\"10.0.0.1\"" /Par "\"$ServerOption::LOGGING_LEVEL(Int16)\"";1 /Par "\"$ServerOption::SYNCHRONIZED(Boolean)\"";True /CALLERINFO SQLAGENT /REPORTING E',
    - Change to this:
    - @command=N'/ISSERVER "\"\SSISDB\PACKAGELOCATION\"" /SERVER SERVERNAME /Par "\"$ServerOption::LOGGING_LEVEL(Int16)\"";1 /Par "\"$ServerOption::SYNCHRONIZED(Boolean)\"";True /CALLERINFO SQLAGENT /REPORTING E',
    7) Now change the name of the package in the script and run it. The new job can then be edited in the GUI to change things like the schedule and alerts, and you can also edit any existing packages.

  • Problem specifying SQL Loader Log file destination using EM

    Good evening,
    I am following the example given in the 2 Day DBA document chapter 8 section 16.
    In step 5 of 7, EM does not allow me to specify the destination of the SQL Loader log file to be on a mapped network drive.
    The question: Does SQL Loader have a limitation that I am not aware of, that prevents placing the log file on a network share or am I getting this error because of something else I am inadvertently doing wrong ?
    Note: I have placed the DDL, load file data and steps I follow in EM at the bottom of this post to facilitate reproducing the problem *(drive Z is a mapped drive)*.
    Thank you for your help,
    John.
    DDL (generated using SQL developer, you may want to change the space allocated to be less)
    CREATE TABLE "NICK"."PURCHASE_ORDERS"
        "PO_NUMBER"      NUMBER NOT NULL ENABLE,
        "PO_DESCRIPTION" VARCHAR2(200 BYTE),
        "PO_DATE" DATE NOT NULL ENABLE,
        "PO_VENDOR" NUMBER NOT NULL ENABLE,
        "PO_DATE_RECEIVED" DATE,
        PRIMARY KEY ("PO_NUMBER") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOCOMPRESS LOGGING TABLESPACE "USERS" ENABLE
      SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
        INITIAL 67108864
      TABLESPACE "USERS" ;
    Load.dat file contents
    1, Office Equipment, 25-MAY-2006, 1201, 13-JUN-2006
    2, Computer System, 18-JUN-2006, 1201, 27-JUN-2006
    3, Travel Expense, 26-JUN-2006, 1340, 11-JUL-2006
    Steps I am carrying out in EM
    log in, select data movement -> Load Data from User Files
    Automatically generate control file
    (enter host credentials that work on your machine)
    continue
    Step 1 of 7 ->
      Data file is located on your browser machine
      "Z:\Documentation\Oracle\2DayDBA\Scripts\Load.dat"
       click next
    step 2 of 7 ->
      Table Name
      nick.purchase_orders
      click next
    step 3 of 7 ->
      click next
    step 4 of 7 ->
      click next
    step 5 of 7 ->
      Generate log file where logging information is to be stored
      Z:\Documentation\Oracle\2DayDBA\Scripts\Load.LOG
      Validation Error
      Examine and correct the following errors, then retry the operation:
      LogFile - The directory does not exist.

    Hi John,
    I didn't get any error when I did the same thing you did.
    My Oracle version is 10.2.0.1 on Windows XP. See what I did, and it worked:
    1.I created one table in scott schema :
    SCOTT@orcl> CREATE TABLE "PURCHASE_ORDERS"
      2  (
      3      "PO_NUMBER"      NUMBER NOT NULL ENABLE,
      4      "PO_DESCRIPTION" VARCHAR2(200 BYTE),
      5      "PO_DATE" DATE NOT NULL ENABLE,
      6      "PO_VENDOR" NUMBER NOT NULL ENABLE,
      7      "PO_DATE_RECEIVED" DATE,
      8      PRIMARY KEY ("PO_NUMBER") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOCOMPRESS LOGGING TABLESPACE "USERS" ENABLE
      9  )
    10  TABLESPACE "USERS";
    Table created.
    I logged into EM: Maintenance --> Data Movement --> Load Data from User Files --> My Host Credentials
    Here there are 3 text boxes in total:
    1. Server Data File : C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF
    2. Data File is Located on Your Browser Machine : z:\load.dat <--- Here z:\ means another machine's shared doc folder; I selected this option (as an option button click) and created the same load.dat as you mentioned.
    3. Temporary File Location : C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\ <--- I didn't enter anything.
    Step 2 of 7 Table Name : scott.PURCHASE_ORDERS
    Step 3 of 7 I just clicked Next
    Step 4 of 7 I just clicked Next
    Step 5 of 7 I just clicked Next
    Step 6 of 7 I just clicked Next
    Step 7 of 7 Here it is Control File Contents:
    LOAD DATA
    APPEND
    INTO TABLE scott.PURCHASE_ORDERS
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    PO_NUMBER INTEGER EXTERNAL,
    PO_DESCRIPTION CHAR,
    PO_DATE DATE,
    PO_VENDOR INTEGER EXTERNAL,
    PO_DATE_RECEIVED DATE
    And I just clicked Submit Job.
    Now I got all 3 rows in purchase_orders:
    SCOTT@orcl> select count(*) from purchase_orders;
      COUNT(*)
    ----------
             3
    So there is no bug; it worked. Please retry and post any error/issue you get.
    HTH
    Girish Sharma

  • "ORA-00054 Resource Busy Error"  when running SQL*Loader in Parallel

    Hi all,
    Please help me on an issue. We are using Datastage which uses sql*loader to load data into an Oracle Table. SQL*Loader invokes 8 parallel sessions for insert on the table. When doing so, we are facing the following error intermittently:
    SQL*Loader-951: Error calling once/load initialization
    ORA-00604: error occurred at recursive SQL level 1
    ORA-00054: resource busy and acquire with NOWAIT specified
    Since the control file is generated automatically by DataStage, we cannot modify/change the options and test. The control file is:
    OPTIONS(DIRECT=TRUE, PARALLEL=TRUE, SKIP_INDEX_MAINTENANCE=YES)
    LOAD DATA INFILE 'ora.2958.371909.fifo.1' "FIX 1358"
    APPEND INTO TABLE X
    (
    x1 POSITION(1:8) DECIMAL(15,0)  NULLIF  (1:8)  = X'0000000000000000',
    x2 POSITION(9:16) DECIMAL(15,0)  NULLIF  (9:16)  = X'0000000000000000',
    x3 POSITION(17:20) INTEGER  NULLIF  (17:20)  = X'80000000',
    IDNTFR  POSITION(21:40)   NULLIF  (21:40)  = BLANKS,
    IDNTFR_DTLS  POSITION(41:240)   NULLIF  (41:240)  = BLANKS,
    FROM_DATE  POSITION(241:259) DATE "YYYY-MM-DD HH24:MI:SS"  NULLIF  (241:259)  = BLANKS,
    TO_DATE  POSITION(260:278) DATE "YYYY-MM-DD HH24:MI:SS"  NULLIF  (260:278)  = BLANKS,
    DATA_SOURCE_LKPCD  POSITION(279:283)   NULLIF  (279:283)  = BLANKS,
    EFFECTIVE_DATE  POSITION(284:302) DATE "YYYY-MM-DD HH24:MI:SS"  NULLIF  (284:302)  = BLANKS,
    REMARK  POSITION(303:1302)   NULLIF  (303:1302)  = BLANKS,
    OPRTNL_FLAG  POSITION(1303:1303)   NULLIF  (1303:1303)  = BLANKS,
    CREATED_BY  POSITION(1304:1311) DECIMAL(15,0)  NULLIF  (1304:1311)  = X'0000000000000000',
    CREATED_DATE  POSITION(1312:1330) DATE "YYYY-MM-DD HH24:MI:SS"  NULLIF  (1312:1330)  = BLANKS,
    MODIFIED_BY  POSITION(1331:1338) DECIMAL(15,0)  NULLIF  (1331:1338)  = X'0000000000000000',
    MODIFIED_DATE  POSITION(1339:1357) DATE "YYYY-MM-DD HH24:MI:SS"  NULLIF  (1339:1357)  = BLANKS
    )
    - It occurs intermittently. When this job runs, no one else is accessing the database or the tables.
    - When we do not run in parallel, we do not face the error, but it is very slow (obviously).

    Just in case, I am also attaching the Datastage Logs:
       Item #: 466
       Event ID: 1467
       Timestamp: 2009-06-02 23:03:19
       Type: Info
       User Name: dsadm
       Message: main_program: APT configuration file: /clu01/datastage/Ascential/DataStage/Configurations/default.apt
         node "node1"
              fastname "machine_name"
              pools ""
              resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
              resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
         node "node2"
              fastname "machine_name"
              pools ""
              resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
              resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
         node "node3"
              fastname "machine_name"
              pools ""
              resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
              resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
         node "node4"
              fastname "machine_name"
              pools ""
              resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
              resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
          node "node5"
              fastname "machine_name"
              pools ""
              resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
              resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
         node "node6"
              fastname "machine_name"
              pools ""
              resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
              resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
        node "node7"
              fastname "machine_name"
              pools ""
              resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
              resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
       node "node8"
              fastname "machine_name"
              pools ""
              resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
              resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
       Item #: 467
       Event ID: 1468
       Timestamp: 2009-06-02 23:03:20
       Type: Warning
       User Name: dsadm
       Message: main_program: Warning: the value of the PWD environment variable (/clu01/datastage/Ascential/DataStage/DSEngine) does not appear to be a synonym for the current working directory (/clu01/datastage/Ascential/DataStage/Projects/Production).  The current working directory will be used, but if your ORCHESTRATE job does not start up correctly, you should set your PWD environment variable to a value that will work on all nodes of your system.
       Item #: 468
       Event ID: 1469
       Timestamp: 2009-06-02 23:03:32
       Type: Warning
       User Name: dsadm
       Message: Lkp_1: Input dataset 1 has a partitioning method other than entire specified; disabling memory sharing.
       Item #: 469
       Event ID: 1470
       Timestamp: 2009-06-02 23:04:22
       Type: Warning
       User Name: dsadm
       Message: Lkp_2: Input dataset 1 has a partitioning method other than entire specified; disabling memory sharing.
       Item #: 470
       Event ID: 1471
       Timestamp: 2009-06-02 23:04:30
       Type: Warning
       User Name: dsadm
       Message: Xfmer1: Input dataset 0 has a partitioning method other than entire specified; disabling memory sharing.
       Item #: 471
       Event ID: 1472
       Timestamp: 2009-06-02 23:04:30
       Type: Warning
       User Name: dsadm
       Message: Lkp_2: When checking operator: Operator of type "APT_LUTProcessOp": will partition despite the
    preserve-partitioning flag on the data set on input port 0.
       Item #: 472
       Event ID: 1473
       Timestamp: 2009-06-02 23:04:30
       Type: Warning
       User Name: dsadm
       Message: SKey_1: When checking operator: A sequential operator cannot preserve the partitioning
    of the parallel data set on input port 0.
       Item #: 473
       Event ID: 1474
       Timestamp: 2009-06-02 23:04:30
       Type: Warning
       User Name: dsadm
       Message: SKey_2: When checking operator: Operator of type "APT_GeneratorOperator": will partition despite the
    preserve-partitioning flag on the data set on input port 0.
       Item #: 474
       Event ID: 1475
       Timestamp: 2009-06-02 23:04:30
       Type: Warning
       User Name: dsadm
       Message: buffer(1): When checking operator: Operator of type "APT_BufferOperator": will partition despite the
    preserve-partitioning flag on the data set on input port 0.
       Item #: 475
       Event ID: 1476
       Timestamp: 2009-06-02 23:04:30
       Type: Info
       User Name: dsadm
       Message: Tgt_member: When checking operator: The -index rebuild option has been included; in order for this option to be
    applicable and to work properly, the environment variable APT_ORACLE_LOAD_OPTIONS should contain the options
    DIRECT and PARALLEL set to TRUE, and the option SKIP_INDEX_MAINTENANCE set to YES;
    this variable has been set by the user to `OPTIONS(DIRECT=TRUE, PARALLEL=TRUE, SKIP_INDEX_MAINTENANCE=YES)'.
       Item #: 476
       Event ID: 1477
       Timestamp: 2009-06-02 23:04:35
       Type: Info
       User Name: dsadm
       Message: Tgt_member_idtfr: When checking operator: The -index rebuild option has been included; in order for this option to be
    applicable and to work properly, the environment variable APT_ORACLE_LOAD_OPTIONS should contain the options
    DIRECT and PARALLEL set to TRUE, and the option SKIP_INDEX_MAINTENANCE set to YES;
    this variable has been set by the user to `OPTIONS(DIRECT=TRUE, PARALLEL=TRUE, SKIP_INDEX_MAINTENANCE=YES)'.
       Item #: 477
       Event ID: 1478
       Timestamp: 2009-06-02 23:04:41
       Type: Warning
       User Name: dsadm
       Message: Lkp_2,6: Ignoring duplicate entry at table record 1; no further warnings will be issued for this table
       Item #: 478
       Event ID: 1479
       Timestamp: 2009-06-02 23:04:41
       Type: Warning
       User Name: dsadm
       Message: Tgt_member_idtfr,0: SQL*Loader-951: Error calling once/load initialization
       Item #: 479
       Event ID: 1480
       Timestamp: 2009-06-02 23:04:41
       Type: Warning
       User Name: dsadm
       Message: Tgt_member_idtfr,0: ORA-00604: error occurred at recursive SQL level 1
       Item #: 480
       Event ID: 1481
       Timestamp: 2009-06-02 23:04:41
       Type: Warning
       User Name: dsadm
       Message: Tgt_member_idtfr,0: ORA-00054: resource busy and acquire with NOWAIT specified
       Item #: 481
       Event ID: 1482
       Timestamp: 2009-06-02 23:04:41
       Type: Warning
       User Name: dsadm
       Message: Tgt_member_idtfr,6: SQL*Loader-951: Error calling once/load initialization
       Item #: 482
       Event ID: 1483
       Timestamp: 2009-06-02 23:04:41
       Type: Warning
       User Name: dsadm
       Message: Tgt_member_idtfr,6: ORA-00604: error occurred at recursive SQL level 1
       Item #: 483
       Event ID: 1484
       Timestamp: 2009-06-02 23:04:41
       Type: Warning
       User Name: dsadm
       Message: Tgt_member_idtfr,6: ORA-00054: resource busy and acquire with NOWAIT specified
       Item #: 484
       Event ID: 1485
       Timestamp: 2009-06-02 23:04:41
       Type: Fatal
       User Name: dsadm
       Message: Tgt_member_idtfr,6: The call to sqlldr failed; the return code = 256;
    please see the loader logfile: /clu01/datastage/Ascential/DataStage/Scratch/ora.23335.478434.6.log for details.
       Item #: 485
       Event ID: 1486
       Timestamp: 2009-06-02 23:04:41
       Type: Fatal
       User Name: dsadm
       Message: Tgt_member_idtfr,0: The call to sqlldr failed; the return code = 256;
    please see the loader logfile: /clu01/datastage/Ascential/DataStage/Scratch/ora.23335.478434.0.log for details.

  • Is there a way to get long-running SQL Agent job information using PowerShell?

    Hi All,
    Is there a way to get long-running SQL Agent job information using PowerShell for multiple SQL servers in the environment?
    Thanks in Advance.
    --Hunt

    I'm running SQL queries to fetch the required details and store them in a centralized table.
    foreach ($svr in get-content "f:\PowerSQL\Input\LongRunningJobsPowerSQLServers.txt"){
    $dt = new-object "System.Data.DataTable"
    $cn = new-object System.Data.SqlClient.SqlConnection "server=$svr;database=master;Integrated Security=sspi"
    $cn.Open()
    $sql = $cn.CreateCommand()
    $sql.CommandText = "SELECT
    @@SERVERNAME servername,
    j.job_id AS 'JobId',
    name AS 'JobName',
    max(start_execution_date) AS 'StartTime',
    max(stop_execution_date)AS 'StopTime',
    max(avgruntimeonsucceed),
    max(DATEDIFF(s,start_execution_date,GETDATE())) AS 'CurrentRunTime',
    max(CASE WHEN stop_execution_date IS NULL THEN
    DATEDIFF(ss,start_execution_date,stop_execution_date) ELSE 0 END) 'ActualRunTime',
    max(CASE
    WHEN stop_execution_date IS NULL THEN 'JobRunning'
    WHEN DATEDIFF(ss,start_execution_date,stop_execution_date)
    > (AvgRunTimeOnSucceed + AvgRunTimeOnSucceed * .05) THEN 'LongRunning-History'
    ELSE 'NormalRunning-History'
    END) 'JobRun',
    max(CASE
    WHEN stop_execution_date IS NULL THEN
    CASE WHEN DATEDIFF(ss,start_execution_date,GETDATE())
    > (AvgRunTimeOnSucceed + AvgRunTimeOnSucceed * .05) THEN 'LongRunning-NOW'
    ELSE 'NormalRunning-NOW'
    END
    ELSE 'JobAlreadyDone'
    END)AS 'JobRunning'
    FROM msdb.dbo.sysjobactivity ja
    INNER JOIN msdb.dbo.sysjobs j ON ja.job_id = j.job_id
    INNER JOIN (
    SELECT job_id,
    AVG
    ((run_duration/10000 * 3600) + ((run_duration%10000)/100*60) + (run_duration%10000)%100)
    +
    STDEV
    ((run_duration/10000 * 3600) + ((run_duration%10000)/100*60) + (run_duration%10000)%100) AS 'AvgRuntimeOnSucceed'
    FROM msdb.dbo.sysjobhistory
    WHERE step_id = 0 AND run_status = 1
    GROUP BY job_id) art
    ON j.job_id = art.job_id
    WHERE
    (stop_execution_date IS NULL and start_execution_date is NOT NULL) OR
    (DATEDIFF(ss,start_execution_date,stop_execution_date) > 60 and DATEDIFF(MINUTE,start_execution_date,GETDATE())>60
    AND
    CAST(LEFT(start_execution_date,11) AS DATETIME) = CAST(LEFT(GETDATE(),11) AS DATETIME) )
    --ORDER BY start_execution_date DESC
    group by j.job_id,name"
    $rdr = $sql.ExecuteReader()
    $dt.Load($rdr)
    $cn.Close()
    $dt|out-Datatable
    Write-DataTable -ServerInstance 'test124' -Database "PowerSQL" -TableName "TLOG_JobLongRunning" -Data $dt}
    You can refer to the link below for the out-DataTable and Write-DataTable functions:
    http://blogs.technet.com/b/heyscriptingguy/archive/2010/11/01/use-powershell-to-collect-server-data-and-write-to-sql.aspx
    Once we have the table details, I send one consolidated email automatically.
    --Prashanth

    Hello all, The PDF output looks excellent. When I use the same report with desformat=SPREADSHEET the output is not good. Each row is occuping two lines (by merging them) and also each report column merged 4 excel columns. The excel columns are merged