A function to insert SQL which creates insert SQL

I realise this is a bit recursive, but my users are a demanding lot! We want to store in a table the instruction to check if AUDIT_SYS_OPERATIONS is TRUE and to store the result of that check in some sort of an audit table.
In other words, the basic SQL you'd run directly might be:
select case when value='TRUE' then 1 else 0 end as passfail
from v$parameter where upper(name)='AUDIT_SYS_OPERATIONS';

But we want to convert this into something which will generate an INSERT statement and which itself can be inserted into a table full of these sorts of audit checks:
insert into myaudit_checks values ('select ''insert into myaudit_table values (''||case when value=''TRUE'' then 1 else 0 end ||'');'' as passfail
from v$parameter where upper(name)=''AUDIT_SYS_OPERATIONS'';');

I find the rules for doing this to be quite easy:
1. Add 'insert into myaudit_table values ('|| before the original CASE keyword
2. Add ||');' before the original 'AS PASSFAIL' keywords
3. Wrap the whole lot in single quotes
4. Double up any quotes inside those new ones
5. Wrap the new statement inside a standard INSERT statement.
My users don't think that's easy, though. So I thought I'd write them a function which would return the double-quoted insert statement when fed the non-quoted basic select. It works OK, on the whole, except that calling it involves... doubling up the quotes. That is:
select create_insert_stmt('select case when value=''TRUE'' then 1 else 0 end as passfail
from v$parameter where upper(name)=''AUDIT_SYS_OPERATIONS'';') from dual;

...works OK, but the parameter being passed to the CREATE_INSERT_STMT function requires the 'TRUE' and 'AUDIT_SYS_OPERATIONS' strings to be double-single-quoted up-front, which defeats the purpose a bit. I want them to be able to feed the completely original select statement, with single-quoted strings, into the function... but, syntactically, this looks impossible.
Anyone got any ideas on how to spit out the final 'select 'insert...'' given the original, single-quoted select statement, without requiring too much from my users by way of thought or effort?
The only thing I thought of was to have them store the original select in a text file and read it via UTL_FILE... but if anyone's got any better ideas, I'd be pleased to hear them.

> Anyone got any ideas on how to spit out the final 'select 'insert...'' given the original, single-quoted select statement, without requiring too much from my users by way of thought or effort?
For anything that might include quotes, use alternative quoting. See Text Literals in the SQL Language doc (excerpt below; an example call follows it):
http://docs.oracle.com/cd/B28359_01/server.111/b28286/sql_elements003.htm
In the bottom branch of the syntax:
•Q or q indicates that the alternative quoting mechanism will be used. This mechanism allows a wide range of delimiters for the text string.
•The outermost ' ' are two single quotation marks that precede and follow, respectively, the opening and closing quote_delimiter.
•c is any member of the user's character set. You can include quotation marks (") in the text literal made up of c characters. You can also include the quote_delimiter, as long as it is not immediately followed by a single quotation mark.
•quote_delimiter is any single- or multibyte character except space, tab, and return. The quote_delimiter can be a single quotation mark. However, if the quote_delimiter appears in the text literal itself, ensure that it is not immediately followed by a single quotation mark.
If the opening quote_delimiter is one of [, {, <, or (, then the closing quote_delimiter must be the corresponding ], }, >, or ). In all other cases, the opening and closing quote_delimiter must be the same character.
Here are some valid text literals:
'Hello'
'ORACLE.dbs'
'Jackie''s raincoat'
'09-MAR-98'
N'nchar literal'
Here are some valid text literals using the alternative quoting mechanism:
q'!name LIKE '%DBMS_%%'!'
q'<'So,' she said, 'It's finished.'>'
q'{SELECT * FROM employees WHERE last_name = 'Smith';}'
nq'ï Ÿ1234 ï'
q'"name like '['"'
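
Using that, the users can pass in the original statement exactly as written; for example, a call like this (a sketch reusing the CREATE_INSERT_STMT function from the question, with [ ] as the quote delimiter) needs no doubled-up quotes:

select create_insert_stmt(q'[select case when value='TRUE' then 1 else 0 end as passfail
from v$parameter where upper(name)='AUDIT_SYS_OPERATIONS';]') from dual;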

Similar Messages

  • How to create an INSERT trigger which creates a "sequence number"

    Imagine a table T_Q with columns C_1, C_2, C_3 and UN_123.
    C_1, C_2 and C_3 are all numeric and given by the user.
    UN_123 has to be a calculated sequence number starting by 1 and incremented by 1 for each combination of C_1, C_2 and C_3, i.e., the sequence number depends on the key values C_1, C_2 and C_3.
    Could anybody provide a code sample on how to create a BEFORE INSERT trigger which calculates the value of the column UN_123 based on the values of C_1, C_2 and C_3?
    Premise: Rather than using any sequence, the trigger code should only be based on the table T_Q

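    For illustration, a minimal sketch of such a trigger, assuming single-row INSERT ... VALUES statements (a row-level trigger that reads T_Q will hit the mutating-table error ORA-04091 for multi-row INSERT ... SELECT statements, and this approach is not safe when several sessions insert the same C_1/C_2/C_3 combination concurrently):

    CREATE OR REPLACE TRIGGER trg_t_q_bi
    BEFORE INSERT ON t_q
    FOR EACH ROW
    BEGIN
      -- next sequence number for this combination of C_1, C_2 and C_3
      SELECT NVL(MAX(un_123), 0) + 1
        INTO :new.un_123
        FROM t_q
       WHERE c_1 = :new.c_1
         AND c_2 = :new.c_2
         AND c_3 = :new.c_3;
    END;
    /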

  • How to insert data which contains '&' from SQL*Plus without asking prompt

    Hi,
    I want to insert data into a table from SQL*Plus, but the data values contain '&' as given in the example (insert script) below.
    There are 10000 rows. When I load them from SQL*Plus it asks for a 'value for :P'.
    I don't want to replace '&' with 'and', and there should not be a prompt asking for the value of :P.
    Example
    Insert into CS_Tracker (TrackId,FeedBack) values ('ARARGE034678','S&P');
    Insert into CS_Tracker (TrackId,FeedBack) values ('ARARGE034676','S&F');
    Insert into CS_Tracker (TrackId,FeedBack) values ('ARARGE034677','A&P');
    Can anyone help me with the above issue?
    Thanking in Advance
    Sanjeev

    Use
    set define off
    beforehand.
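
    With that in place, the script runs without prompting; a quick sketch using the statements from the question:

    SET DEFINE OFF
    Insert into CS_Tracker (TrackId,FeedBack) values ('ARARGE034678','S&P');
    Insert into CS_Tracker (TrackId,FeedBack) values ('ARARGE034676','S&F');
    Insert into CS_Tracker (TrackId,FeedBack) values ('ARARGE034677','A&P');
    SET DEFINE ON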

  • SQL Developer create wrong SQL for creation of Database link

    Hi,
    I tried to create a database link to another machine through SQL Developer and got an error message saying "SQL not properly ended" or similar.
    I copy-pasted the SQL into a SQL*Plus shell and found that the problem came from the password part of the database link. The generated statement is something like:
    SQL> CREATE DATABASE LINK mytest CONNECT TO anotherDB IDENTIFIED BY 1234 USING 'OtherServiceName';
    but it should be
    SQL> CREATE DATABASE LINK mytest CONNECT TO anotherDB IDENTIFIED BY "1234" USING 'OtherServiceName';
    So if I type "1234" in the password field (which shows as ******, 6 chars), the database link is created correctly.
    Could you check and fix this in next release of the SQL Developer?
    Otherwise the Application is very good and far ahead from TORA :)

    Thanks, I didn't notice that for passwords. So it's most definitely my problem rather than SQL Developer's. :)
    Thank you for clarification!
    Off-topic: I am searching for a DB comparison tool which could compare 2 schemas and return the differences and a sync script. Do you know any (free if possible)? There is such a thing in TOAD, but I am on Linux and I prefer to use something else. :)
    Thank you in advance

  • How to get the SQL file name in SQL*plus

    hi all,
    I have created two sql files on the C drive, "c:\Createtable.sql" and "c:\Deletetable.sql".
    Afterwards I open:
    C:\>sqlplus
    SQL*Plus: Release 10.2.0.1.0 - Production on Wed Jan 30 11:37:10 2008
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Enter user-name: scott/tiger
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> @'C:\Createtable.sql'
    Table created.
    SQL> @'C:\Deletetable.sql'
    Table dropped.
    SQL>

    My problem is to get the name of the file ("c:\Createtable.sql" or "C:\Deletetable.sql") from within the SQL*Plus environment.
    Thanks & Regards
    Singh

    Dear Damorgan,
         >>your version number to three decimal places
         My Oracle DB version, as I have already stated in my previous post, is 10.2.0.1.0.
    Actually my problem is to get the name of the sql file we run in the SQL*Plus environment with the @ symbol. For example,
    i have created one sql file in c drive as
    "C:\Createtable.sql"
    afterwards I connected to sqlplus as
    sql> conn scott/tiger
    sql>@c:\createtable.sql
    Now I want some query to get the name of the file which was run.
    My actual problem is this:
    I have, say, 10 or more SQL files in some folder (sql1.sql, sql2.sql, sql3.sql, ...).
    I created one file (main.sql) to call all 10 sql files.
    I also have a track_table which keeps track of which sql file has been run.
    I want some automated script which will insert a record into that track_table; for that I need the name of the sql file which has been run.
    Hope this helps.
    Thanks & Regards
    Singh
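
    If there is no built-in way to read the current script's name, one low-tech workaround is to have main.sql log each script explicitly before calling it. A rough sketch, assuming a track_table with script_name and run_date columns (both names are illustrative, not from the thread):

    -- main.sql
    INSERT INTO track_table (script_name, run_date) VALUES ('sql1.sql', SYSDATE);
    @sql1.sql
    INSERT INTO track_table (script_name, run_date) VALUES ('sql2.sql', SYSDATE);
    @sql2.sql
    -- ...one pair of lines per script...
    COMMIT;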

  • PL/SQL cursors vs. SQL*plus Select statement

    Hi folks, hope you're doing well,
    Here is a question that kept me wondering:
    Why would I use cursors when I can achieve the same thing with a SQL*Plus SELECT statement, which is much easier to formulate than a cursor (e.g. you need no declarations, loops, etc.)?
    Thanks so much,
    -a

    There is no such thing as a SQL*Plus SELECT statement. The SELECT command is part of the SQL Language - not part of the SQL*Plus (very limited small vocabulary) macro language.
    All SQL SELECTs (from client languages) wind up in the SQL Engine as SQL cursors. A SQL cursor is basically the:
    - SQL source code
    - SQL "compiled" code (instructions on how to fetch the rows)
    On the client side, client cursors (not to be confused with SQL cursors) are used. A client cursor is created in the client language when it makes SQL calls via the database client driver (called the OCI/Oracle Call Interface for Oracle clients).
    Typically this is what a client does. It makes a connection to the database and gets a database handler in return. The database handler is the "communication channel" from the client to the db. In Oracle, the database handler in the client refers to the Oracle session (for that client) on db server.
    A SQL statement (source code) is used by the client. This can be a SELECT statement you type in at the SQL*Plus command line. It can be a SELECT statement for the PL/SQL cursor command in a stored procedure.
    The client creates a SQL handle for this SQL statement, by calling the Oracle client driver. Note that this SQL handle is a client handle - a client cursor for that SQL statement.
    E.g.
    a) sqlHandle = CreateSQL( databaseHandle, 'SELECT ... FROM ...')
    b) sqlHandle.Parse
    c) sqlHandle.Execute
    After the SQL handle (client cursor) has been executed, the client can fetch rows from it.
    This is what SQL*Plus does automatically for you, without you having to write the code to do it. The SQL*Plus CONNECT command creates a database connection handle. You enter a SELECT statement and SQL*Plus creates a SQL handle (client cursor), executes it, fetches from it, displays the rows, and closes the SQL handle when done.
    The same applies to PL/SQL. You can use a SELECT statement just like that in PL/SQL. E.g.
    declare
      i integer;
    begin
      select count(*) into i from emp where deptid = 123;
    end;

    This is called an implicit cursor. PL/SQL creates (just like SQL*Plus) an implicit client cursor. It creates and disposes of that client SQL handle for you - you do not need to do it.
    Or, you can create an explicit cursor. E.g.

    declare
      cursor c is select count(*) from emp where deptid = 123;
      i integer;
    begin
      open c;
      fetch c into i;
      close c;
    end;

    The question as to when to use implicit (client) cursors versus explicit (client) cursors depends on your requirements. Do you need to cycle through the results of the SQL? Etc.
    And keep in mind that in either case, the SQL Engine creates a SQL cursor anyway on its side.

  • Creating PL/SQL function in SQL plus which will display desired output

    Hello PL/SQL experts,
    I am not skilled on PL/SQL so want you help for one of the PL/SQL requirement.
    Requirement: We have a database purge job which deletes records from our application database on a daily basis. Our client wants a verification PL/SQL block to be created which they will run before and after the purge job to identify the number of records deleted/processed by the purge job. It should do a SELECT, find out the number of records in each table based on a where clause, and give the output to the user.
    The result of PL/SQL should be something like this (considering the PL/SQL is processing multiple select statements):
    ****Verification SQL Start*****
    30 records from table S_SRV_REQUEST selected
    45 records from table S_SRV_REQUEST_X selected
    15 records from table S_SRV_REQUEST_XM selected
    *****Verification SQL complete*****
    For this I am thinking of a simple PL/SQL block which will display the count of records from each table, but I am not sure how to display the count on screen as output. Can I request the PL/SQL experts to shed some light on this?
    Regards
    Sumit

    PL/SQL is a server side process running on your database server.  It is not connected to a display or a keyboard, so cannot output any results from within the code itself.
    What you need is a client application that the user can run which will query the database, or call procedure(s) to get results and then the client application will display those results appropriately.
    One of the simplest client tools you can use to build such a user-friendly application is Oracle Application Express (APEX).
    Bear in mind that if you're going to 'identify' records for deletion prior to actually deleting them, then you may also need to store the keys of those identified records somewhere so that only those records are deleted.

    Consider the situation where the user requests the information and the application queries the records to say there are 30 records from S_SRV_REQUEST with the correct criteria to be deleted. By the time the user confirms they are happy and want the records deleted, some other record(s) may have been updated and also meet the criteria, so just doing a delete based on the criteria itself could result in more than 30 records being deleted. However, if you've identified the records and marked them in some way for deletion, or stored the keys of those records on a queue somewhere to indicate the ones to be deleted, then your actual deletion process can just deal with those records and ignore any that have met the criteria since that time. Of course that depends on your individual requirements, but it's something to bear in mind.
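
    For what it's worth, if the check is simply run from SQL*Plus, DBMS_OUTPUT (whose buffer the client fetches and prints after the block finishes) is often enough to produce the display the poster describes. A rough sketch, assuming SERVEROUTPUT is enabled in the client; the table list and the missing purge WHERE clause are placeholders, not taken from the thread:

    SET SERVEROUTPUT ON
    DECLARE
      TYPE name_list IS TABLE OF VARCHAR2(30);
      v_tables name_list := name_list('S_SRV_REQUEST', 'S_SRV_REQUEST_X', 'S_SRV_REQUEST_XM');
      v_count  PLS_INTEGER;
    BEGIN
      DBMS_OUTPUT.PUT_LINE('****Verification SQL Start*****');
      FOR i IN 1 .. v_tables.COUNT LOOP
        -- append the purge job's WHERE clause here to count only the affected rows
        EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ' || v_tables(i) INTO v_count;
        DBMS_OUTPUT.PUT_LINE(v_count || ' records from table ' || v_tables(i) || ' selected');
      END LOOP;
      DBMS_OUTPUT.PUT_LINE('*****Verification SQL complete*****');
    END;
    /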

  • SQL Loader and INSERT Trigger

    I have problem and your help to solve it would be very much appreciated.
    I am uploading a text file with SQL Loader into a table. Since I used APPEND option in the Loader, I don't want records to be duplicated. So, I wrote a "BEFORE INSERT .. FOR EACH ROW" trigger to check whether that row already exists or not.
    For example, let us consider a table TEST as follows.
    Fld1     NUMBER(2);
    Fld2     VARCHAR2(10);
    Fld3     VARCHAR2(10);
    I have a trigger on this table.
    CREATE OR REPLACE TRIGGER Trg_Bef_Insert_Test
    BEFORE INSERT ON Test FOR EACH ROW
    DECLARE
    vCount NUMBER(2);
    DuplicateRow EXCEPTION;
    BEGIN
    SELECT Count(*) INTO vCount FROM Test
         WHERE fld1 || fld2 || fld3 = :new.fld1 || :new.fld2 || :new.fld3;
    IF vCount > 0 THEN
         RAISE DuplicateRow;
    END IF;
    EXCEPTION
    WHEN DuplicateRow THEN
         Raise_Application_Error (-20001,'Record already exists');
    WHEN OTHERS THEN
         DBMS_OUTPUT.PUT_LINE('ERROR : ' || SQLCODE || '; ' || SUBSTR(SQLERRM, 1, 150));
    END;
    Please refer to the following SQL statements which I executed in the SQL Plus.
    SQL> insert into test values (1,'one','first');
    1 row created.
    SQL> insert into test values (1,'one','first');
    insert into test values (1,'one','first')
    ERROR at line 1:
    ORA-20001: Record already exists
    ORA-06512: at "CAMELLIA.TRG_TEST", line 13
    ORA-04088: error during execution of trigger 'CAMELLIA.TRG_TEST'
    Would anyone tell me why errors -6512 and -4088 occur?
    Also, if you have any other suggestion to handle this situation, please let me know.
    By the way, I am using Oracle 8.1.7.
    Thank you.

    There are a few things wrong here, but you should really use a unique constraint for this.
    SQL> create table t (a number, b number, c number,
      2      constraint uk unique (a, b, c));
    Table created.

    Here's an example data file with 12 records, three of which are duplicates.
    1,2,3
    3,4,5
    6,7,8
    3,2,1
    5,5,5
    3,4,5
    3,2,1
    1,1,1
    2,2,2
    6,7,8
    8,8,8
    9,9,9

    And a control file:
    load data
    infile 'in.dat'
    append
    into table t
    fields terminated by ',' optionally enclosed by '"'
    (a, b, c)

    Running it with SQL*Loader inserts the nine records, outputs the three duplicates to a .bad file, and logs all the errors in the .log file. No need for triggers or any code.
    $ sqlldr control=in.ctl
    Username:xxx
    Password:
    SQL*Loader: Release 9.2.0.1.0 - Production on Mon Apr 21 23:16:44 2003
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Commit point reached - logical record count 12
    $ cat in.bad
    3,4,5
    3,2,1
    6,7,8
    SQL> select * from t;
             A          B          C
             1          2          3
             3          4          5
             6          7          8
             3          2          1
             5          5          5
             1          1          1
             2          2          2
             8          8          8
             9          9          9
    9 rows selected.

  • Sql Script containing INSERT INTO TABLE_NAME taking very long time

    Version:11g
    I have a .sql file which contains insert statements for the table ZIP_CODES, like:
    INSERT INTO ZIP_CODES (ZIP_CODE, CITY, PROV, COUNTRY_CODE, LONGITUDE, LATITUDE)
    VALUES (..........);

    This sql file contains over 800,000 INSERT statements like these! Execution of this file takes around 20 minutes.
    Our client insists that they need a script to create this table and not a dump file (an export dump of just this table).
    Is there any way I could speed up the INSERTs in this script? I have added a commit halfway through the file because I was worried about the UNDO tablespace.
    This table (ZIP_CODES) is not dependent on any other table (no FKs, no FK references, ...).
    Edited by: Steve_74 on 03-Sep-2009 05:53

    One possible option is to use external tables (a rough sketch follows these steps):
    1. Create a CSV file with the values to be stored in the table.
    2. Create a directory object (the location where the CSV file will be stored).
    3. Create an external table pointing to the CSV file.
    4. Just do an INSERT INTO ZIP_CODES SELECT * FROM <external table> (maybe with an APPEND hint).
    5. Drop the directory object and external table.
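
    A rough sketch of those steps, assuming a server path of /u01/app/loads, a file name of zip_codes.csv, and illustrative column datatypes (none of these are from the thread):

    CREATE DIRECTORY zip_dir AS '/u01/app/loads';

    CREATE TABLE zip_codes_ext (
      zip_code     VARCHAR2(10),
      city         VARCHAR2(50),
      prov         VARCHAR2(50),
      country_code VARCHAR2(5),
      longitude    NUMBER,
      latitude     NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY zip_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
      )
      LOCATION ('zip_codes.csv')
    );

    -- direct-path insert into the real table, then clean up
    INSERT /*+ APPEND */ INTO zip_codes SELECT * FROM zip_codes_ext;
    COMMIT;
    DROP TABLE zip_codes_ext;
    DROP DIRECTORY zip_dir;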

  • PL SQL with multiple inserts, how to continue after exception is raised

    Hi,
    I have a simple PL SQL function with various inserts. ex:
    Insert Into "DI01"."DUA_DIM_UNITE_ADMIN" (Ide_Unite_Admin_Sk, Num_Unite_Admin, Des_Unite_Admin) Values ('-1', '00000000', 'Défaut');
    INSERT INTO "DI01"."DUA_DIM_UNITE_ADMIN" (IDE_UNITE_ADMIN_SK, NUM_UNITE_ADMIN, DES_UNITE_ADMIN) VALUES ('-2', 'S. O.', 'Sans Objet');
    Insert Into "DI01"."DCU_DIM_CATGR_UNSPS" (Ide_Catgr_Sk, Num_Code, Des_Code, Num_Catgr, Des_C.........
    I want to be able to run the function multiple times and have all the inserts executed every time, even if I get an ORA-00001 unique constraint (string.string) violated error in one of the inserts. That means that if I get an error in the first insert, I want the function to continue running and execute the subsequent inserts.
    I though of including each insert in a different block, like:
    BEGIN
    Insert......
    EXCEPTION WHEN OTHERS THEN NULL;
    END;
    BEGIN
    Insert......
    EXCEPTION WHEN OTHERS THEN NULL;
    END;
    But I have at least 50 inserts, so the final code becomes huge.
    Another solution is to use MERGE instead of INSERT, but the code seems too complex for such a simple task.
    Is there any other solution for this that I am not seeing?
    Thank you for your time,
    Joao Moreira

    You can use the DML error logging approach or FORALL ... SAVE EXCEPTIONS (a sketch of the latter follows at the end of this reply).
    Since you didn't mention the version I assume you are using 11g.
    Below is the sample code for DML error logging mechanism
    SQL> select * from v$version;
    BANNER                                                                         
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production   
    PL/SQL Release 11.2.0.3.0 - Production                                         
    CORE 11.2.0.3.0 Production                                                     
    TNS for Linux: Version 11.2.0.3.0 - Production                                 
    NLSRTL Version 11.2.0.3.0 - Production                                        
    SQL> DROP TABLE tableA;
    Table dropped.
    SQL> DROP TABLE Err$_tableA;
    Table dropped.
    SQL>
    SQL> CREATE TABLE tableA
      2  (
      3     col1   NUMBER PRIMARY KEY,
      4     col2   NUMBER,
      5     col3   VARCHAR2 (10)
      6  );
    Table created.
    SQL>
    SQL> -- Create error log table
    SQL>
    SQL> BEGIN
      2     DBMS_ERRLOG.create_error_log (dml_table_name => 'TABLEA');
      3  END;
      4  /
    PL/SQL procedure successfully completed.
    SQL>
    SQL> BEGIN
      2     FOR i IN (SELECT 1 AS col1 FROM DUAL
      3               UNION ALL
      4               SELECT 1 FROM DUAL)
      5     LOOP
      6        INSERT INTO tableA (col1)
      7             VALUES (i.col1)
      8                LOG ERRORS INTO Err$_tableA REJECT LIMIT UNLIMITED;
      9     END LOOP;
    10 
    11     COMMIT;
    12  END;
    13  /
    PL/SQL procedure successfully completed.
    SQL> column column_name format a30
    SQL> set linesize 300
    SQL> select * from tableA;
    SQL> select * from err$_tablea;
    Thanks,
    GPU
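
    A rough sketch of the FORALL ... SAVE EXCEPTIONS alternative mentioned above (the collection contents are illustrative; it reuses the tableA demo table from this reply):

    DECLARE
      TYPE num_tab IS TABLE OF tableA.col1%TYPE;
      v_ids    num_tab := num_tab(1, 2, 2, 3);   -- contains a duplicate on purpose
      bulk_err EXCEPTION;
      PRAGMA EXCEPTION_INIT(bulk_err, -24381);   -- ORA-24381: error(s) in array DML
    BEGIN
      FORALL i IN 1 .. v_ids.COUNT SAVE EXCEPTIONS
        INSERT INTO tableA (col1) VALUES (v_ids(i));
    EXCEPTION
      WHEN bulk_err THEN
        -- report the failed rows, but keep the successful inserts
        FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
          DBMS_OUTPUT.PUT_LINE('Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
                               ' failed: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
        END LOOP;
    END;
    /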

  • |Error SQL: ORA-14400: inserted partition key does not map to any partition

    I have an installation of ECC6 with BI7 in which we consolidate through SEM-BCS with the cube 0BCS_C10, in which I have version 100 for fiscal data, version 101 for IFRS data, and a version 301 which is a copy of IFRS.
    The client has asked me to empty version 301 in the cube 0BCS_C10.
    Before emptying, I switch the cube to "Real-Time Data Target Can Be Loaded With Data; Planning Not Allowed".
    To empty the cube I use transaction RSA1 / Manage / Contents / Delete Selection.
    However, the data is not erased, and reviewing the job gives me this error log:
    Job log summary for job BI_INDXD51A1NTCIYWU06OKZZESN1R94 / 07520600 (all entries on 04.02.2009; time and uncoded message text):
    07:52:06  The job has been started.
    07:52:06  Step 001 started (program RSINDEX1, variant &0000000000108, user ACHUY)
    07:52:10  SQL: 04.02.2009 07:52:10 ACHUY
    07:52:10  DELETE FROM DDSTORAGE WHERE DBSYSABBR = 'ORA' AND INDEXNAME = ' ' AND TABNAME = '/BI0/D0BCS_C101
    07:52:10  SQL-END: 04.02.2009 07:52:10 00:00:00
    07:52:10  SQL: 04.02.2009 07:52:10 ACHUY
    07:52:10  DELETE FROM DDSTORAGE WHERE DBSYSABBR = 'ORA' AND INDEXNAME = ' ' AND TABNAME = '/BI0/D0BCS_C101
    07:52:10  SQL-END: 04.02.2009 07:52:10 00:00:00
    07:52:10  SQL: 04.02.2009 07:52:10 ACHUY
    07:52:10  CREATE TABLE "/BI0/0100000076" PCTFREE 00 PCTUSED 00 INITRANS 001 TABLESPACE PSAPSR3
              STORAGE (INITIAL 0000000016 K NEXT 0000000016 K MINEXTENTS 0000000001 MAXEXTENTS UNLIMITED
              PCTINCREASE 0000 FREELISTS 001 FREELIST GROUPS 01)
              AS SELECT DISTINCT DIMID FROM "/BI0/D0BCS_C101" "DIM", "/BI0/SCS_VERSION" "MD1"
              WHERE "DIM"."SID_0CS_VERSION" = "MD1"."SID" AND ( "MD1"."CS_VERSION" BETWEEN '301' AND '301'
    07:52:10  SQL-END: 04.02.2009 07:52:10 00:00:00
    07:52:25  SQL: 04.02.2009 07:52:25 ACHUY
    07:52:25  DELETE FROM DDSTORAGE WHERE DBSYSABBR = 'ORA' AND INDEXNAME = ' ' AND TABNAME = '/BI0/F0BCS_C10
    07:52:25  SQL-END: 04.02.2009 07:52:25 00:00:00
    07:52:25  SQL: 04.02.2009 07:52:25 ACHUY
    07:52:25  DELETE FROM DDSTORAGE WHERE DBSYSABBR = 'ORA' AND INDEXNAME = ' ' AND TABNAME = '/BI0/F0BCS_C10
    07:52:25  SQL-END: 04.02.2009 07:52:25 00:00:00
    07:52:25  SQL: 04.02.2009 07:52:25 ACHUY
    07:52:25  CREATE TABLE "/BI0/0100000030" PCTFREE 10 PCTUSED 00 INITRANS 001 TABLESPACE PSAPSR3
              STORAGE (INITIAL 0000000016 K NEXT 0000000000 K MINEXTENTS 0000000001 MAXEXTENTS 2147483645
              PCTINCREASE 0000 FREELISTS 001 FREELIST GROUPS 01)
              PARTITION BY RANGE ("KEY_0BCS_C10P") (
                PARTITION "/BI0/F0BCS_C100" VALUES LESS THAN (0) TABLESPACE "PSAPSR3",
                PARTITION "/BI0/F0BCS_C100000000019" VALUES LESS THAN (0000000019) TABLESPACE "PSAPSR3",
                PARTITION "/BI0/F0BCS_C100000000276" VALUES LESS THAN (0000000276) TABLESPACE "PSAPSR3",
                PARTITION "/BI0/F0BCS_C100000000277" VALUES LESS THAN (0000000277) TABLESPACE "PSAPSR3")
              AS SELECT * FROM "/BI0/F0BCS_C10" WHERE "KEY_0BCS_C101" NOT IN ( SELECT "DIMID" FROM "/BI0/0100000076"
    07:52:29  SQL-END: 04.02.2009 07:52:29 00:00:04
    07:52:29  SQL error: ORA-14400: inserted partition key does not map to any partition
    07:52:29  System error: CREATE_TABLE_AS_SELECT/RSDU_EXEC_SQL /BI0/0100000030 14400
    07:52:29  The job was cancelled after system exception ERROR_MESSAGE.
    Does anyone have an idea how to fix this?

    This is the SQL of the insert
    The partition condition of the table is the column ("KEY_0BCS_C10P")
    SELECT * FROM "/BI0/F0BCS_C10" WHERE "KEY_0BCS_C101" NOT IN ( SELECT "DIMID" FROM "/BI0/0100000076" )
    This is the info of the /BIO/0100000076 TABLE
    SQL> SELECT "DIMID" FROM sapsr3."/BI0/0100000076";
         DIMID
             6
    SQL>
    And these are the rows that I think are not included in the partitions of the table:
      1  SELECT DISTINCT KEY_0BCS_C10P, KEY_0BCS_C101 FROM SAPSR3."/BI0/F0BCS_C10"
      2  WHERE KEY_0BCS_C10P > 277
      3* AND "KEY_0BCS_C101" NOT IN ( SELECT "DIMID" FROM SAPSR3."/BI0/0100000076" )
    SQL> /
    KEY_0BCS_C10P KEY_0BCS_C101
              284             4
              285             4
              293             3
              292             4
              293             4
              293             5
              285             3
              290             4
              292             5
              283             4
              285             5
    KEY_0BCS_C10P KEY_0BCS_C101
              291             5
              292             3
              291             4
    14 rows selected.
    SQL>

  • Inserting Data from Oracle to SQL Server on the Real Time Basis.

    Hi Everyone,
    I need to insert data from Oracle into SQL Server on a real-time basis. We have to fetch data from approximately 20 Oracle tables, and each table has more than 30 fields. I need to fetch the data every 15 minutes.
    I have created a job using SQL Server Agent by writing insert queries for all the tables, with conditions so that rows already present in SQL Server are not inserted again. Note that this job takes only 1 minute to execute.
    But this way our SQL Server hangs, and it causes problems for other applications running on the SQL Server.
    So I am asking all of you: what is the best way to insert a huge amount of data on a real-time basis?
    Thanx in Advance.

    1) Create a linked server
    2) Insert the data using OPENQUERY, and set up a job in SQL Server Agent (a rough sketch is below)
    3) Run the job every 15 minutes
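
    A rough T-SQL sketch of steps 1-2; the linked-server name (ORA_LINK) and all table and column names are placeholders, not from the thread:

    -- runs on the SQL Server side, e.g. as the SQL Server Agent job step
    INSERT INTO dbo.staging_table (id_col, data_col)
    SELECT src.id_col, src.data_col
    FROM OPENQUERY(ORA_LINK, 'SELECT id_col, data_col FROM app_owner.source_table') AS src
    WHERE NOT EXISTS (SELECT 1 FROM dbo.staging_table t WHERE t.id_col = src.id_col);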

  • Create Insert command joining variables

    Hi, I'm having a problem. I want to create an EXECUTE IMMEDIATE with the following code.
    The variable v_values has the values that I want to insert in the table, but I need to add the ' at the beginning and the ','' at the end, because the value of the variable looks like this:
    1','1','','10-11-2004
    QRY_INS := ('INSERT INTO TABOWN.GFNT02PD_PGMNT VALUES(''' || v_values || ''', '''')');
    EXECUTE IMMEDIATE QRY_INS;
    When I execute the command, the result of the qry_ins variable is like this:
    INSERT INTO TABOWN.GFNT02PD_PGMNT VALUES('1','1',,TO_DATE('10-11-2004', 'DD-MM-YYYY'),'
    The problem is that the ') comes down one line and the value for the last column becomes too long; I already tried rtrim but it didn't work.
    thanks

    Yep, you don't need dynamic SQL here at all. You're trying to take a shortcut and what you've done is given yourself a massive headache because of single-quote issues, and given your database a headache because of all those literals you're trying to embed in an insert statement.
    The right way to handle this is to parse the values into separate variables (or array elements) and supply them as part of a static insert statement.
    Here's some code that parses a delimited string into a string array and then uses the values in a static insert statement.
    sql>CREATE OR REPLACE package pkg_split
      2  is
      3    type string_array is table of varchar2(32) index by pls_integer;
      4 
      5    function f_string_table(p_list in varchar2) return string_array;
      6  end;
      7  /
    Package created.
    sql>CREATE OR REPLACE package body pkg_split
      2  is
      3 
      4    function f_string_table(
      5      p_list in varchar2)
      6      return string_array
      7    is
      8      v_string  varchar2(256) := p_list || ',';
      9      v_pos     pls_integer;
    10      v_data    string_array;
    11    begin
    12      loop
    13        v_pos := instr(v_string, ',');
    14        exit when (nvl(v_pos, 0) = 0);
    15        v_data(v_data.count + 1) := trim(substr(v_string, 1, v_pos - 1));
    16        v_string := substr(v_string, v_pos + 1);
    17      end loop;
    18      return (v_data);
    19    end f_string_table;
    20   
    21  end;
    22  /
    Package body created.
    sql>declare
      2    v_list varchar2(256) := 'horse,pig,cow,dog';
      3    v_arr  pkg_split.string_array;
      4  begin
      5    v_arr := pkg_split.f_string_table(v_list);
      6    insert into t (a, b, c, d) values (v_arr(1), v_arr(2), v_arr(3), v_arr(4));
      7  end;
      8  /
    PL/SQL procedure successfully completed.
    sql>select * from t;
    A        B        C        D
    horse    pig      cow      dog
    1 row selected.

  • Custom PL/SQL API that inserts the data into a custom interface table.

    We are developing a custom Web ADI integrator for importing suppliers into Oracle.
    The Web ADI interface is a custom PL/SQL API that inserts the data into a custom interface table. We have defined the content, uploader and an importer. The importer is again a custom PL/SQL API that will process the records inserted into the custom table and updates the STATUS column of the custom interface table. We want to show the status column back on the spreadsheet.
    Defined the 'Document Row' import rule and added the rows that would identify the unique record.
    Errored row import rule, we are using a SELECT * from custom_table where status<>'Success' and vendor_name=$param$.vendor_name
    The source of this parameter is import.vendor_name
    We have also defined an Error lookup.
    After the above setup is completed, we invoke the create document and click on Oracle->Upload.
    The records are getting imported, but the importer program is failing with 'An error has occurred while running an API import. The ERRORED_ROWS step 20003:ER_500141, parameter number 1 must contain the value BIND in attribute 1.'

    The same issue.
    Need help.
    Also checked bne.log, no additional information.
    <bne:document xmlns:bne="http://www.oracle.com/bne">
    <bne:message bne:type="DATA" bne:text="BNE_VALID_ROW_COUNT" bne:value="11" />
    <bne:message bne:type="DATA" bne:text="BNE_INVALID_ROW_COUNT" bne:value="0" />
    <bne:message bne:type="ERROR" bne:text="An error has occurred while running an API import"
    bne:cause="The ERRORED_ROWS step 20003:ER_500165, parameter number 1 must contain the value BIND in attribute 1."
    bne:action="" bne:source="BneAPIImporter" >
    <bne:context bne:collection="collection_1" />
    </bne:message><bne:message bne:type="STATUS"
    bne:text="No rows uploaded" bne:value="" >
    <bne:context bne:collection="collection_1" /></bne:message>
    <bne:message bne:type="STATUS" bne:text="0 rows were invalid" bne:value="" >
    <bne:context bne:collection="collection_1" /></bne:message></bne:document>

  • SQL Loader and Insert Into Performance Difference

    Hello All,
    I'm in a situation where I need to measure the performance difference between SQL*Loader and INSERT INTO. Say there are 10000 records in a flat file and I want to load them into a staging table.
    I know that if I use PL/SQL UTL_FILE to do this job, performance will degrade (don't ask me why I'm going for UTL_FILE instead of SQL*Loader), but I don't know by how much. Can anybody tell me the performance difference in % (like a 20% decrease) for 10000 records?
    Thanks,
    Kannan.

    Kannan B wrote:
    Do not confuse the topic; as I said, I'm not going to use external tables. This post is about the performance difference between SQL*Loader and a simple insert statement.

    I don't think people are confusing the topic.
    External tables are a superior means of reading a file as it doesn't require any command line calls or external control files to be set up. All that is needed is a single external table definition created in a similar way to creating any other table (just with the additional external table information obviously). It also eliminates the need to have a 'staging' table on the database to load the data into as the data can just be queried as needed directly from the file, and if the file changes, so does the data seen through the external table automatically without the need to re-run any SQL*Loader process again.
    Who told you not to use External Tables? Do they know what they are talking about? Can they give a valid reason why external tables are not to be used?
    IMO, if you're considering SQL*Loader, you should be considering External tables as a better alternative.
