Commit & Nohup

Hi All,
I have a PL/SQL block (run from SQL*Plus) that inserts records into my test tables. It takes a long time because it inserts about 3M records. How can I commit after every 10,000 records? And is there a way to run it in the background (nohup) from Unix?
Here is the query:
DECLARE
   test number := 912000000;
   ttno number := 1;
BEGIN
   LOOP
      INSERT INTO test_let
        (interface_id, transaction_id, bus_cmd_id, command_order, command_order_argument,
         command_market, command_priority, command_status, entry_datetime, last_change_datetime,
         area, msisdn, business_command, transaction_status, return_code, response_message,
         param_no, param_a4ki)
      VALUES
        (ttno, test||':'||'I001'||':'||ttno, ttno, 'AUTHENTICATE', 'AUC2_SLI', 'SET', '10', '10',
         SYSDATE, SYSDATE, '0', test,
         'AUTHENTICATE AUC2_SLI FOR '||test||' USING (SLINO=634011100938964;EKI=9DFD024BC1607A33960E99845EB18F05)',
         40, 10, 'Success', '634011100938964', '9DFD024BC1607A33960E99845EB18F05');
      INSERT INTO test_let2
        (command_id, interface_id, internal_code, ne_id, protocol_id, response_id, cmd_queue,
         cmd_status, entry_datetime, last_change_datetime, transaction_id, bus_cmd_id, command_text)
      VALUES
        (ttno, ttno, '0', 1, 1, 14, 0, 30, SYSDATE, SYSDATE, test||':'||'I001'||':'||ttno, ttno,
         'agsui:isi=634011100938964,eki=9DFD024BC1607A33960E99845EB18F05,kind=5;');
      ttno := ttno + 1;
      test := test + 1;
      EXIT WHEN test > 913000000;
   END LOOP;
END;
/
Please advise.
Thanks & Regards,
MB

Assuming you are on 9i or later, why not one INSERT ALL ... SELECT statement?
(untested:)
INSERT ALL
INTO test_let
       ( interface_id
       , transaction_id
       , bus_cmd_id
       , command_order
       , command_order_argument
       , command_market
       , command_priority
       , command_status
       , entry_datetime
       , last_change_datetime
       , area
       , msisdn
       , business_command
       , transaction_status
       , return_code
       , response_message
       , param_no
       , param_a4ki)
VALUES ( ttno
       , TEST||':'||'I001'||':'||ttno
       , ttno
       , 'AUTHENTICATE'
       , 'AUC2_SLI'
       , 'SET'
       , '10'
       , '10'
       , sysdate
       , sysdate
       , '0'
       , test
       , 'AUTHENTICATE AUC2_SLI FOR ' || TEST || ' USING (SLINO=634011100938964;EKI=9DFD024BC1607A33960E99845EB18F05)'
       , 40
       , 10
       , 'Success'
       , '634011100938964'
       , '9DFD024BC1607A33960E99845EB18F05' )
INTO test_let2
       ( command_id
       , interface_id
       , internal_code
       , ne_id
       , protocol_id
       , response_id
       , cmd_queue
       , cmd_status
       , entry_datetime
       , last_change_datetime
       , transaction_id
       , bus_cmd_id
       , command_text )
VALUES ( ttno
       , ttno
       , '0'
       , 1
       , 1
       , 14
       , 0
       , 30
       , sysdate
       , sysdate
       , TEST||':'||'I001'||':'||ttno
       , ttno
       , 'agsui:isi=634011100938964,eki=9DFD024BC1607A33960E99845EB18F05,kind=5;' )
SELECT rownum AS ttno
     , rownum + 911999999 AS test
FROM   dual
CONNECT BY LEVEL <= 1000001;
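The CONNECT BY bound generates 1,000,001 rows, one per TEST value from 912000000 through 913000000, matching your loop.
If you do want to keep the row-by-row loop, here is a minimal sketch of committing every 10,000 rows with the same variables as your block (untested; note that the single INSERT ALL above will almost always be faster, and frequent commits can expose concurrent readers to ORA-01555):

DECLARE
   test number := 912000000;
   ttno number := 1;
BEGIN
   LOOP
      -- ... the two INSERTs from the original block go here, unchanged ...
      IF MOD(ttno, 10000) = 0 THEN
         COMMIT;                 -- commit every 10,000 rows
      END IF;
      ttno := ttno + 1;
      test := test + 1;
      EXIT WHEN test > 913000000;
   END LOOP;
   COMMIT;                       -- commit the final partial batch
END;
/

As for nohup: save the block in a script file (the name ins_test.sql is just an example) with the trailing slash and an EXIT at the end, then run SQL*Plus detached from the terminal:

nohup sqlplus -s scott/tiger @ins_test.sql > ins_test.log 2>&1 &

Replace scott/tiger with your own credentials; the redirection keeps the output in ins_test.log instead of nohup.out.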

Similar Messages

  • Multiple nohup commands from same path

I have an urgent question, as my live migration is going on. I ran two import commands from $ORACLE_HOME/bin to import two schemas, like:
    nohup imp JISPBILCORBILLINGPRD501/JISPBILCORBILLINGPRD501 fromuser=JISPBILCORBILLINGPRD501 touser=JISPBILCORBILLINGPRD501 file=JISPBILCORBILLINGPRD501.dmp log=JISPBILCORBILLINGPRD501.log commit=y feedback=10000 ignore=Y &
    nohup imp JISPRATCORBILLINGPRD501/JISPRATCORBILLINGPRD501 fromuser=JISPRATCORBILLINGPRD501 touser=JISPRATCORBILLINGPRD501 file=JISPRATCORBILLINGPRD501.dmp log=JISPRATCORBILLINGPRD501.log commit=y feedback=10000 ignore=Y &
Normally I run one import from $ORACLE_BASE and one from $ORACLE_HOME/bin so that I get two different nohup.out files to watch the output. But in this case, as the two imports are run from the same path, will nohup.out contain the output of both files?
So my question is whether there is any problem in running both nohup commands from the same path, with nohup.out being populated with the output of both imports.
I hope my question is clear.
Please help in resolving this.
    regards

>I had a doubt which is very urgent
See "how to configure application server 10g for forms 10g" about urgency....
That said...
>will the nohup.out have output from both the files?
Yes, and the outputs will be mixed... but you have separate logs, haven't you?
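If you want each import's console output in its own file instead of a shared nohup.out, redirect explicitly (a sketch only; keep the same imp parameters as above - the .out file names here are just examples):

nohup imp JISPBILCORBILLINGPRD501/... > imp_bil.out 2>&1 &
nohup imp JISPRATCORBILLINGPRD501/... > imp_rat.out 2>&1 &

When stdout and stderr are redirected like this, nohup does not write to nohup.out at all, so nothing gets mixed.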

  • How can we remove the commas from the Formula value in SAP BW BEx query

    Hi All,
How can we remove the commas from the formula value in a SAP BW BEx query?
We are replacing the formula with a characteristic. The characteristic value needs to be displayed as a number without commas.
    Regards
    Venkat.

    Do you want to remove the commas when you run the query on Bex Web or in RSRT?
    Regards

  • In import commit 100000 records

    Dear Friends,
Please let me know whether import has any method to commit at an interval of, say, 100,000 records. If we use commit=y and each table has 5M records, it takes multiple days to complete.
And if we don't give commit=y, it requires large memory buffers. Please let me know a good solution.
    Thanks
    VK

Please use the BUFFER parameter for this.
    For more details:- http://www.oracleutilities.com/OSUtil/import.html
    Thanks
    Dattatraya Walgude
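For reference, BUFFER is given in bytes, and with commit=y the import commits after each buffer-sized array insert instead of once per table. A sketch (credentials, file names and the 10 MB value are only illustrative):

imp scott/tiger file=bigtables.dmp log=bigtables.log commit=y buffer=10485760 feedback=100000 ignore=y

A larger buffer means fewer, larger commits, so tune it against your rollback segment size.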

  • Comma delimited in Sql query decode function errors out

    Hi All,
    DB: 11.2.0.3.0
    I am using the below query to generate the comma delimited output in a spool file but it errors out with the message below:
    SQL> set lines 100 pages 50
    SQL> col "USER_CONCURRENT_QUEUE_NAME" format a40;
    SQL> set head off
    SQL> spool /home/xyz/cmrequests.csv
    SQL> SELECT
    2 a.USER_CONCURRENT_QUEUE_NAME || ','
    3 || a.MAX_PROCESSES || ','
    4 || sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'Q',1,0),0)) Pending_Standby ||','
    5 ||sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'I',1,0),0)) Pending_Normal ||','
    6 ||sum(decode(b.PHASE_CODE,'R',decode(b.STATUS_CODE,'R',1,0),0)) Running_Normal
    7 from FND_CONCURRENT_QUEUES_VL a, FND_CONCURRENT_WORKER_REQUESTS b
    where a.concurrent_queue_id = b.concurrent_queue_id AND b.Requested_Start_Date <= SYSDATE
    8 9 GROUP BY a.USER_CONCURRENT_QUEUE_NAME,a.MAX_PROCESSES;
    || sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'Q',1,0),0)) Pending_Standby ||','
    ERROR at line 4:
    ORA-00923: FROM keyword not found where expected
    SQL> spool off;
    SQL>
    Expected output in the spool /home/xyz/cmrequests.csv
    Standard Manager,10,0,1,0
    Thanks for your time!
    Regards,

    Get to work immediately on marking your previous questions ANSWERED if they have been!
    >
    I am using the below query to generate the comma delimited output in a spool file but it errors out with the message below:
    SQL> set lines 100 pages 50
    SQL> col "USER_CONCURRENT_QUEUE_NAME" format a40;
    SQL> set head off
    SQL> spool /home/xyz/cmrequests.csv
    SQL> SELECT
    2 a.USER_CONCURRENT_QUEUE_NAME || ','
    3 || a.MAX_PROCESSES || ','
    4 || sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'Q',1,0),0)) Pending_Standby ||','
    5 ||sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'I',1,0),0)) Pending_Normal ||','
    6 ||sum(decode(b.PHASE_CODE,'R',decode(b.STATUS_CODE,'R',1,0),0)) Running_Normal
    7 from FND_CONCURRENT_QUEUES_VL a, FND_CONCURRENT_WORKER_REQUESTS b
    where a.concurrent_queue_id = b.concurrent_queue_id AND b.Requested_Start_Date <= SYSDATE
    8 9 GROUP BY a.USER_CONCURRENT_QUEUE_NAME,a.MAX_PROCESSES;
    || sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'Q',1,0),0)) Pending_Standby ||','
    >
Well, if you want to spool query results to a file, the first thing you need to do is write a query that actually works.
Why do you think a query like this is valid?
SELECT 'this, is, my, giant, string, of, columns, with, commas, in, between, each, word'
GROUP BY this, is, my, giant, string
You only have one column in the result set, but you are trying to group by several columns, none of which are select-list expressions at all.
What's up with that?
Remember the rule: every non-aggregate expression in the select list must appear in the GROUP BY clause.
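That said, the immediate cause of the ORA-00923 in the posted query is the inline aliases (Pending_Standby, Pending_Normal) sitting in the middle of one long concatenation: an alias may only follow a complete select-list expression. Dropping the inline aliases and aliasing the whole expression once should compile (untested sketch):

SELECT a.user_concurrent_queue_name || ','
    || a.max_processes || ','
    || SUM(DECODE(b.phase_code, 'P', DECODE(b.status_code, 'Q', 1, 0), 0)) || ','
    || SUM(DECODE(b.phase_code, 'P', DECODE(b.status_code, 'I', 1, 0), 0)) || ','
    || SUM(DECODE(b.phase_code, 'R', DECODE(b.status_code, 'R', 1, 0), 0)) AS csv_line
FROM   fnd_concurrent_queues_vl a, fnd_concurrent_worker_requests b
WHERE  a.concurrent_queue_id = b.concurrent_queue_id
AND    b.requested_start_date <= SYSDATE
GROUP BY a.user_concurrent_queue_name, a.max_processes;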

  • Splitting comma separated column data into multiple rows

    Hi Gurus,
    Please help me with the scenario below. I have multiple comma-separated values in a single column, and my requirement is to load that data into multiple rows.
    Below is the example:
    Source data:
    Product   Size             Stock
    ABC       X,XL,XXL,M,L,S   1,2,3,4,5,6
    Target data:
    Product   Size   Stock
    ABC       X      1
    ABC       XL     2
    ABC       XXL    3
    ABC       M      4
    ABC       L      5
    ABC       S      6
    Which transformation do we need to use to get this output?
    Thanks in advance !

    Hello,
    Do you need to do this transformation through an OWB mapping only? And can you please tell us what type of source you are using? Is it a flat file or a table?
    Thanks
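    If plain Oracle SQL is an option (10g or later for the regular-expression functions; the table name src and the column names size_list and stock_list are invented stand-ins for the real source), one common sketch is:

    SELECT product,
           REGEXP_SUBSTR(size_list,  '[^,]+', 1, LEVEL) AS size_value,
           REGEXP_SUBSTR(stock_list, '[^,]+', 1, LEVEL) AS stock_value
    FROM   src
    CONNECT BY LEVEL <= LENGTH(size_list) - LENGTH(REPLACE(size_list, ',')) + 1
           AND PRIOR product = product          -- restart the counter for each source row
           AND PRIOR SYS_GUID() IS NOT NULL;    -- defeat CONNECT BY cycle detection

    In OWB the usual trick is to base the mapping on such a query, for example through a view.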

  • DS sends updates to DB only in commit (can't find modified data in same TX)

    Hello experts!
    We have a physical data service mapping a simple Oracle database table. When we update one record in the database (invoking submit on the DS) and then use a function in that same data service to get the refreshed record, the updated column comes back with the OLD value. But after the transaction ends (we isolated this in a simple JWS), the database gets updated.
    The most strange fact: we also did a test using another Data Service to do the update, now mapping a stored procedure to do the updates (without commits in body). Then the test works fine.
    The conclusion I can reach is that DSP is holding the instance somewhere after the submit() and is not sending it to the database connection. I understand that the commit is not performed, but if I do a query in the same TX, I should see my changes, shouldn't I?
    We use WL 8.1.6 with DSP 2.5.
    We'd appreciate it very much if you guys could help.
    Thanks,
    Zica.

    Let me get your scenario straight
    client starts a transaction
    client calls submit to update a data services
    client calls read to re-read the update value (does not see update)
    client ends the transaction
    If you read now, you will see the update
    And you're wondering why the first read doesn't see the updated values?
    By default, ALDSP 2.5 reads go through an EJB that has trans-attribute=NotSupported - which means if you do a read within a transaction, that transaction is suspended, the call is made without any transaction, then the transaction is resumed. So this is one reason you don't see the update. The fix is to do the read via an EJB method with trans-attribute=Required. See TRANSACTION SUPPORT at http://e-docs.bea.com/aldsp/docs25/javadoc/com/bea/dsp/dsmediator/client/DataServiceFactory.html
    If you are using the control, the control will need to specify
    @jc:LiquidData ReadTransactionAttribute="Required"
    That is only part of the solution. You will also need to configure your connection pool with the property KeepXAConnTillTxComplete="true" to ensure that your read is on the same connection as your write.
    <JDBCConnectionPool CapacityIncrement="2"
    KeepXAConnTillTxComplete="true" />
    Then I have to ask - if your client already has the modified DataObject in memory, what's the purpose of re-reading it? If all you need is a clean ChangeSummary so you can do more changes (the ChangeSummary is not cleared when you call submit), you can simply call myDataObject.getDataGraph().getChangeSummary().beginLogging()

  • SQL Developer can't commit edited data in Table Data pane

    When I try to commit changes in "Data" pane for selected table SQL Developer gives me a strange error:
    One error saving changes to table "TABLENAME".:
    Row XXX: Data got commited in another/same session, cannot update row.
    I can see in the log that SQL Developer tries to do something like:
    UPDATE "TABLENAME" set "COLUMN"="value1" where ROWNUM="xxxx1" and ROW_SCN=nnn1;
    UPDATE "TABLENAME" set "COLUMN"="value2" where ROWNUM="xxxx2" and ROW_SCN=nnn1;
    UPDATE "TABLENAME" set "COLUMN"="value3" where ROWNUM="xxxx3" and ROW_SCN=nnn2;
    If I update the same rows in a SQL window using another condition and commit, all is OK. Why such strange behaviour?
    My table has no primary key, and no other users are trying to change it. SQL Developer version 3.0.04 and Oracle 10.2.0.4 on Linux.
    Best regards,
    Sergey Logichev

    That's because of the inaccuracy of ORA_ROWSCN.
    I suggest you turn off Preferences - Database - ObjectViewer - Use ORA_ROWSCN (as I did the very moment we got the option).
    Have fun,
    K.

  • Materialized view - REFRESH FAST ON COMMIT - UNION ALL

    In a materialized view which uses REFRESH FAST ON COMMIT to update the data, how many UNION ALL can be used.
    I am asking this question because my materialized view works fine when there are only 2 SELECT statement (1 UNION ALL).
    Thank you.

    >In a materialized view which uses REFRESH FAST ON COMMIT to update the data, how many UNION ALL can be used.
    As far as I remember you can have 64K UNIONized selects.
    >I am asking this question because my materialized view works fine when there are only 2 SELECT statements (1 UNION ALL).
    Post the SQL that does not work.
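    For reference, fast refresh on commit over UNION ALL also requires each branch to select a distinct constant "UNION ALL marker" column plus the rowid, and each base table needs a materialized view log created WITH ROWID. A minimal sketch (table and column names invented):

    CREATE MATERIALIZED VIEW LOG ON orders_eu WITH ROWID;
    CREATE MATERIALIZED VIEW LOG ON orders_us WITH ROWID;

    CREATE MATERIALIZED VIEW mv_orders
    REFRESH FAST ON COMMIT AS
    SELECT 'EU' AS marker, rowid AS rid, order_id, amount FROM orders_eu
    UNION ALL
    SELECT 'US' AS marker, rowid AS rid, order_id, amount FROM orders_us;

    If adding a third branch breaks the refresh, DBMS_MVIEW.EXPLAIN_MVIEW will report which restriction was violated.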

  • After call commit sql , data can not flush to disk

    I use Berkeley DB with SQL support; the version is db-5.1.19.
    1, Open a database.
    2. Create a table.
    3. exec "begin;" sql
    4. exec sql which is insert record into table
    5. exec "commit;" sql
    6. Copy the database files (SourceDB_912_1.db and SourceDB_912_1.db-journal) to local disk D:, then use the dbsql tool to open the database.
    7. Run a SELECT to check the data: there are no records in the table.
    1.
    sqlite3 *m_pDB;
    int nRet = sqlite3_open_v2(strDBName.c_str(), &m_pDB, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL);
    2.
    string strSQL = "CREATE TABLE [TBLClientAccount] ( [ClientId] CHAR (36), [AccountId] CHAR (36) );";
    char *errors;
    nRet = sqlite3_exec(m_pDB, strSQL.c_str(), NULL, NULL, &errors);
    3.
    nRet = sqlite3_exec(m_pDB, "begin;", NULL, NULL, &errors);
    4.
    nRet = sqlite3_exec(m_pDB, "INSERT INTO TBLClientAccount (ClientId,AccountId) VALUES('dd','ddd');", NULL, NULL, &errors);
    5.
    nRet = sqlite3_exec(m_pDB, "commit;", NULL, NULL, &errors);
    Edited by: 887973 on Sep 27, 2011 11:15 PM

    Hi,
    Here is a simple test case program I used based on your description:
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "sqlite3.h"
    int error_handler(sqlite3*);
    int main()
    {
         sqlite3 *m_pDB;
         const char *strDBName = "C:/SRs/OTN Core 2290838 - after call commit sql , data can not flush to disk/SourceDB_912_1.db";
         char *errors;
         sqlite3_open_v2(strDBName, &m_pDB, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL);
         error_handler(m_pDB);
         sqlite3_exec(m_pDB, "CREATE TABLE [TBLClientAccount] ( [ClientId] CHAR (36), [AccountId] CHAR (36) );", NULL, NULL, &errors);
         error_handler(m_pDB);
         sqlite3_exec(m_pDB, "begin;", NULL, NULL, &errors);
         error_handler(m_pDB);
         sqlite3_exec(m_pDB, "INSERT INTO TBLClientAccount (ClientId,AccountId) VALUES('dd','ddd');", NULL, NULL, &errors);
         error_handler(m_pDB);
         sqlite3_exec(m_pDB, "commit;", NULL, NULL, &errors);
         error_handler(m_pDB);
         //sqlite3_close(m_pDB);
         //error_handler(m_pDB);
         return 0;
    }
    int error_handler(sqlite3 *db)
    {
         int err_code = sqlite3_errcode(db);
         switch (err_code) {
         case SQLITE_OK:
         case SQLITE_DONE:
         case SQLITE_ROW:
              break;
         default:
              fprintf(stderr, "ERROR: %s. ERRCODE: %d.\n", sqlite3_errmsg(db), err_code);
              exit(err_code);
         }
         return err_code;
    }
    Then I copied the SourceDB_912_1.db database and the SourceDB_912_1.db-journal directory containing the environment files (region files, log files) to D:\, opened the database using the "dbsql" command line tool, and queried the table; the data is there:
    D:\bdbsql-dir>ls -al
    -rw-rw-rw-   1 acostach 0 32768 2011-10-12 12:51 SourceDB_912_1.db
    drw-rw-rw-   2 acostach 0     0 2011-10-12 12:51 SourceDB_912_1.db-journal
    D:\bdbsql-dir>C:\BerkeleyDB\db-5.1.19\build_windows\Win32\Debug\dbsql SourceDB_912_1.db
    Berkeley DB 11g Release 2, library version 11.2.5.1.19: (August 27, 2010)
    Enter ".help" for instructions
    Enter SQL statements terminated with a ";"
    dbsql> .tables
    TBLClientAccount
    dbsql> .schema TBLClientAccount
    CREATE TABLE [TBLClientAccount] ( [ClientId] CHAR (36), [AccountId] CHAR (36) );
    dbsql> select * from TBLClientAccount;
    dd|ddd
    I do not see where the issue is. The data can be successfully retrieved; it is present in the database.
    Could you try putting in the sqlite3_close() call and see if you still get the error?
    Did you remove the __db.* files from the SourceDB_912_1.db-journal directory?
    Did you use PRAGMA synchronous, and if so, what is the value you set?
    If this is still an issue for you, please describe in more detail the exact steps needed to get this reproduced and provide a simple stand-alone test case program that reproduces it.
    Regards,
    Andrei

  • How to Capture Commit Point in Forms

    Dear Members,
    We are on E-Business Suite 11.5.10.2.
    We are trying to change the behavior of the AP Invoice Workbench form through CUSTOM.pll.
    When you try to reverse an existing distribution line, Oracle does a lot of validation and many triggers fire before the commit occurs.
    I turned on custom events and found that the WHEN-VALIDATE-RECORD trigger fires 14 times before the commit occurs at the end.
    My question is: how can we know that the commit has occurred? In CUSTOM.pll I need to write some logic that captures some field values in a block at the very end, when the commit occurs. The field values I am talking about keep changing from start to end, and I want to capture them at the point the commit occurs.
    Is there any means of knowing that the commit occurred, so that I can retrieve the values at that moment?
    Thanks
    Sandeep

    Try debugging the block status for the line; it changes from CHANGED back to QUERY once the commit completes.

  • CO54: Message is not sent to any destination: Commit Work is getting failed

    Hello,
    While processing messages from XFP to SAP, the update is terminated due to the use of COMMIT WORK. We referred to SAP Note 147467 (Update termination when sending process messages), but it is for 4.6C and we are on ECC 6.0.
    When the message is processed in a second iteration, COMMIT WORK triggers and the update happens in the database.
    The following issue only concerns nested HUs.
    T-code used: CO54
    Our analysis so far:
    It looks like "Create and Post a Physical Inventory Doc" is failing in the initial processing because the inventory doc is created when the Transfer Order to the PSA (or the TO from the PSA) is not yet completed (the quant is still locked somehow).
    CO54, process message category = ZHU_CONS: the HU to be consumed is a nested pallet (unpacking/repacking and TOs to/from) and the quantity to consume is greater than the HU quantity in SAP (creation of a physical inventory doc is required).
    1. The error occurs while clearing the inventory posting for the physical inventory document.
    2. The surprising factor is that the "Process Message" is not processed correctly the first time.
    3. Indeed, it is successfully processed without any error if you process it a second time.
    Error Log from C054 T-Code.
    02.08.2010                                         Dynamic List Display                                                1
    Type
    Message text
    LTxt
    Message category: ZHU_CONS ---    Process message: 100000000000000621   "Send to All Destinations" Is Active
    Message to be sent to destination:
    ZHGI ZPP_0285_XFP_GOODSISSUE Individual Processing Is Active
    => Message will be sent to destination (check log for destination)
    Message category: ZHU_CONS ---    Process message: 100000000000000621   "Send to All Destinations" Is Active
    Message destination ZPP_0285_XFP_GOODSISSUE triggered COMMIT WORK
    Input parameters OK, passed to source field structure
    Step 0: Now checking if scenario with HU
    Step 2: Scenario with HU, Now checking if HU nested
    Step 3: HU nested, checking if HU in repack area
    Step 4: HU not in repack area, moving it to repack area
    HU moved to repack area, TO number 0000000873
    Step 5: Depacking nested HU...
    Nested HU depacked, HU pallet n°: 00176127111000461994
    Steps 6-7: Moving back HUs to supply storage type / bin...
    HU 00376127111000462001 moved back to original area with TO number 0000000875
    Steps 6-7: Moving back HUs to supply storage type / bin...
    No need to move back HU 00176127111000461994 (not in table LEIN)
    Step 8: Checking if HU fully used and quantity matches HU system quantity
    HU not fully used but picked quantity > HU quantity: inventory necessary
    Step 9: Inventory needed, creating physical inventory document...
    Physical inventory document 0000000126 created
    Step 10: Adding weighted quantity on inventory document...
    Weighted quantity entered in document 0000000126
    Step 11: Posting rectification in inventory document 0000000126...
    Physical inventory document 0000000126 rectified
    Error: rectification for doc 0000000126 not updated in DB!
    => Destination ZHGI ZPP_0285_XFP_GOODSISSUE can currently not process the message
    => Message is not sent to any destination
    The errors in SM13 for this contain the program SAPLZPP_0285_HUINV_ENH (creating/changing the HUM physical inventory doc). WM Function module L_LK01_VERARBEITEN is also involved.
    From SM13, it displays the following piece of code in include LL03TF2M (read the LEIN table=Storage Units table):
    WHEN CON_LK01_NACH.
      IF (LEIN-LGTYP = LK01-NLTYP AND
          LEIN-LGPLA = LK01-NLPLA) OR
         NOT P_LEDUM IS INITIAL.
      ELSE.
    *   Das ist der Fall einer TA-Quittierung, wo Von-HU = Nach-HU ist und sofort die WA-Buchung
    *   erfolgt. Dann steht die HU noch auf dem Von-Platz, daher darf hier kein Fehler kommen,
    *   sondern es wird ein Flag gesetzt, das verhindert, daß die LE fortgeschrieben wird.
        FLG_NO_LE_UPDATE = CON_X.
        MESSAGE A558 WITH P_LENUM.
      ENDIF.
    English translation of the German comment:
    "This is the case of a TO confirmation where source HU = destination HU and the goods issue posting happens immediately. The HU is then still in the source bin, so no error may be raised here; instead a flag is set that prevents the storage unit (LE) from being updated."
    Thanks and Regards,
    Prabhjot Singh
    Edited by: Prabhjot  Singh on Aug 2, 2010 4:39 PM

    Check that you have carried out the following things in production (please refer to SAP Help before actually doing them in production):
    1. Transport the predefined characteristics from the SAP reference client (000) to your logon client.
    2. Adopt Predefined Message Categories - In this step, you copy the process message categories supplied by SAP from internal tables as Customizing settings in your plants.

  • Failed to commit objects to server. Error while publishing reports from BW

    Hi,
    I am getting the below error while publishing reports from BW to BO:
    "0000000001 Unable to commit the changes to Enterprise. Reason: Failed to commit objects to server : #Duplicate object name in the same folder."
    Does anyone have a solution for this? Thanks in advance.

    Hi Amit
    It would be great if you could add a little info about how you solved this issue. Others might run into similar situations - I just did:-(
    Thank you:-)

  • Need to Convert Comma separated data in a column into individual rows from

    Hi,
    I need to convert comma-separated data in a column into individual rows from a table.
    E.g.: JOB1 SMITH,ALLEN,WARD,JONES
    Output required:
    JOB1 SMITH
    JOB1 ALLEN
    JOB1 WARD
    JOB1 JONES
    I got a solution using Oracle's regexp_substr function, which comes in handy for this scenario.
    But I need a database-independent solution.
    Thanks in advance for your valuable inputs.
    George

    Go for an ETL solution. There are a couple of ways to implement it.
    Mark as helpful if this helps.
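    If recursive SQL is available, an ANSI-style recursive CTE stays reasonably portable (a sketch only: exact syntax varies by engine - Oracle 11gR2+ omits the RECURSIVE keyword, PostgreSQL requires it - and the table jobs with columns job and names is an invented stand-in):

    WITH RECURSIVE split (job, name, rest) AS (
        SELECT job,
               SUBSTRING(names FROM 1 FOR POSITION(',' IN names || ',') - 1),
               SUBSTRING(names FROM POSITION(',' IN names || ',') + 1)
        FROM   jobs
        UNION ALL
        SELECT job,
               SUBSTRING(rest FROM 1 FOR POSITION(',' IN rest || ',') - 1),
               SUBSTRING(rest FROM POSITION(',' IN rest || ',') + 1)
        FROM   split
        WHERE  rest <> ''
    )
    SELECT job, name FROM split;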

  • Difference between Implicit and Explicit Commit

    Hi All ,
    Can someone please let me know the difference between an implicit commit and an explicit commit?
    Thanks in Advance..

    >
    Kalyan wrote:
    > Hi,
    >
    > An explicit commit happens when we execute an SQL "commit" command.
    >
    > Implicit commits occur without running a commit command and occur only when certain SQL (DDL) statements are executed
    >
    > (Ie, INSERT,UPDATE OR DELETE Statements)
    > >
    > Hope this is clear & Helpful Short and Simple.
    >
    > Thanks
    > Kalyan
    The highlighted bit is incorrect: INSERT, UPDATE and DELETE are DML statements and do not cause an implicit commit; DDL statements (CREATE, ALTER, DROP, TRUNCATE, etc.) do.
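    A concrete illustration of the difference in SQL*Plus (the table t with column n is just an example):

    INSERT INTO t VALUES (1);
    COMMIT;                        -- explicit commit

    INSERT INTO t VALUES (2);
    CREATE INDEX t_ix ON t (n);    -- DDL: Oracle implicitly commits the pending INSERT before (and after) executing it

    Exiting SQL*Plus normally also triggers an implicit commit.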
