Bulk Load question for an insert statement.

I'm looking to put the following statement into a FORALL statement using BULK COLLECT and I need some guidance.
Do I put the SELECT statement into a cursor and then load the cursor values into a variable of a defined nested table type?
INSERT INTO TEMP_ASSOC_CURRENT_WEEK_IDS
SELECT aor.associate_office_record_id ,
sched.get_assoc_sched_rotation_week(aor.associate_office_record_id, v_weekType.start_date) week_id
FROM ASSOCIATE_OFFICE_RECORDS aor
WHERE aor.OFFICE_ID = v_office_id
AND (
(aor.lt_assoc_stage_result_id in (4,8)
AND v_officeWeekType.start_date >= trunc(aor.schedule_start_date)
OR aor.lt_assoc_stage_result_id in (1, 2)
));

I see people are reading this, so for the insanely curious, here's how I did it.
Type AOR_REC is RECORD(
  associate_office_record_id dbms_sql.number_table,
  week_id dbms_sql.number_table); --RJS. *** Type of collections for use with BULK COLLECT / FORALL statements.
v_a_rec AOR_REC; -- RJS. *** Variable of the defined type to use with BULK COLLECT / FORALL statements.
CURSOR cur_aor_ids -- RJS. *** Cursor for BULK COLLECT.
IS
SELECT aor.associate_office_record_id associate_office_record_id,
       sched.get_assoc_sched_rotation_week(aor.associate_office_record_id, v_weekType.start_date) week_id
FROM ASSOCIATE_OFFICE_RECORDS aor
WHERE aor.OFFICE_ID = v_office_id
AND (
      (aor.lt_assoc_stage_result_id in (4,8)
       AND v_officeWeekType.start_date >= trunc(aor.schedule_start_date))
      OR aor.lt_assoc_stage_result_id in (1, 2)
    )
FOR UPDATE NOWAIT;
BEGIN
  OPEN cur_aor_ids;
  LOOP
    FETCH cur_aor_ids BULK COLLECT INTO
      v_a_rec.associate_office_record_id, v_a_rec.week_id; --RJS. *** Bulk load the cursor data into the collections so the insert can be done in one shot.
    FORALL i IN 1..v_a_rec.associate_office_record_id.COUNT SAVE EXCEPTIONS
      INSERT INTO TEMP_ASSOC_CURRENT_WEEK_IDS
        (associate_office_record_id, week_id)
      VALUES
        (v_a_rec.associate_office_record_id(i), v_a_rec.week_id(i)); --RJS. *** Single FORALL bulk INSERT statement.
    EXIT WHEN cur_aor_ids%NOTFOUND;
  END LOOP;
  CLOSE cur_aor_ids;
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line('ERROR ENCOUNTERED IS SQLCODE = '|| SQLCODE ||' AND SQLERRM = ' || SQLERRM);
    dbms_output.put_line('Number of INSERT statements that failed: ' || SQL%BULK_EXCEPTIONS.COUNT);
END;
Easy right?
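For larger result sets it's worth bounding each fetch with a LIMIT clause so the collections don't hold the whole result in memory at once. A minimal sketch of the same loop with a LIMIT added (same cursor, record and table names as above; the batch size of 1000 is an arbitrary choice):

OPEN cur_aor_ids;
LOOP
  FETCH cur_aor_ids BULK COLLECT INTO
    v_a_rec.associate_office_record_id, v_a_rec.week_id
    LIMIT 1000;  -- fetch at most 1000 rows per round trip
  EXIT WHEN v_a_rec.associate_office_record_id.COUNT = 0;  -- test the collection, not %NOTFOUND, so a partial last batch still gets inserted
  FORALL i IN 1..v_a_rec.associate_office_record_id.COUNT SAVE EXCEPTIONS
    INSERT INTO TEMP_ASSOC_CURRENT_WEEK_IDS (associate_office_record_id, week_id)
    VALUES (v_a_rec.associate_office_record_id(i), v_a_rec.week_id(i));
END LOOP;
CLOSE cur_aor_ids;

And if the row-by-row function call is the only reason for PL/SQL here, the plain INSERT INTO ... SELECT from the original post does the whole job in one statement with no collections at all.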

Similar Messages

  • SQL*Loader-930: Error parsing insert statement for column

    We upload data daily in the application through the apps user, and these tables are involved:
    1. DEV_RA_INTERFACE_LINES_ALL (owner is apps)
    2. RA_INTERFACE_LINES_ALL (owner is AR)
    The steps are:
    1. Delete records from the DEV_RA_INTERFACE_LINES_ALL table
    2. Delete records from the RA_INTERFACE_LINES_ALL table
    3. Load data using SQL*Loader with the apps user
    4. Insert into the RA_INTERFACE_LINES_ALL table
    We want these steps to be done by the dataupload user instead of apps. We granted the dataupload user the appropriate select, delete and insert rights on these tables, but when loading data through SQL*Loader we receive this error:
    SQL*Loader-930: Error parsing insert statement for column APPS.DEV_RA_INTERFACE_
    LINES_ALL.ORIG_SYSTEM_BILL_ADDRESS_ID.
    ORA-00904: "F_ORIG_SYSTEM_BILL_ADDR_REF": invalid identifier
    If we insert data through apps, it works.

    Make sure that you have no stray spaces left between lines in the control file,
    and give the path of the control file correctly.

  • SQL*Loader-929: Error parsing insert statement for table

    Hi,
    I get the following error with SQL*Loader:
    Table MYTABLE loaded from every logical record.
    Insert option in effect for this table: INSERT
    Column Name Position Len Term Encl Datatype
    IDE FIRST * ; CHARACTER
    SQL string for column : "mysequence.NEXTVAL"
    CSI_NBR 1:10 10 ; CHARACTER
    POLICY_NBR 11:22 12 ; CHARACTER
    CURRENCY_COD 23:25 3 ; CHARACTER
    POLICY_STAT 26:27 2 ; CHARACTER
    PRODUCT_COD 28:35 8 ; CHARACTER
    END_DAT 44:53 10 ; CHARACTER
    FISCAL_COD 83:83 1 ; CHARACTER
    TOT_VAL 92:112 21 ; CHARACTER
    SQL*Loader-929: Error parsing insert statement for table MYTABLE.
    ORA-01031: insufficient privileges
    I am positive that I can SELECT the sequence and INSERT into the table with the user invoking sql*loader.
    Where does that "ORA-01031" come from?
    Regards
    ...

    Options:
    1) you are wrong about privileges OR
    2) you have the privilege only when you connect via SQL*Plus (or whichever other tool you used to test the insert).
    Is it possible that during your test you enabled the role which granted you the INSERT privilege - and that SQL*Loader doesn't do this?
    Can you see the table in this list?
    select *
    from user_tab_privs_recd
    where table_name='MY_TABLE'
    and owner='table owner whoever';
    select *
    from user_role_privs;
    Any roles where DEFAULT_ROLE is not YES?
    HTH
    Regards Nigel
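    If those queries show the INSERT privilege arriving only through a non-default role, two possible fixes (a sketch only; the owner and user names are placeholders, while MYTABLE and mysequence come from the log above) are a direct grant or making the role a default role:

    GRANT INSERT ON table_owner.mytable TO loader_user;
    GRANT SELECT ON table_owner.mysequence TO loader_user;  -- needed for the "mysequence.NEXTVAL" SQL string
    -- or, keeping the role-based setup:
    ALTER USER loader_user DEFAULT ROLE ALL;

    Direct grants are the safer bet for anything that runs outside an interactive session.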

  • Error while running bulk load utility for account data with CSV file

    Hi All,
    I'm trying to run the bulk load utility for account data using CSV, but I'm getting the following error:
    ERROR ==> The number of CSV files provided as input does not match with the number of account tables.
    Thanks in advance........

    Please check your child table.
    http://docs.oracle.com/cd/E28389_01/doc.1111/e14309/bulkload.htm#CHDCGGDA
    -kuldeep

  • Issue with running an insert statement in an Oracle procedure

    Hi expert,
    I ran an Oracle procedure with an insert statement inside, like:
    insert into table1 select....
    but I got an error message related to this insert statement: "SQLERRM = ORA-08103: object no longer exists".
    I ran this statement separately in Toad; there was no error message, but also no data resulted from the execution.
    Please tell me how to fix this issue.
    Many Thanks,
    Edited by: 918440 on 27-Jun-2012 8:04 AM

    Hi friend,
    my insert statement is as follows:
            INSERT INTO HIROC_RU_FACT_S   
            select   
                    pp.policy_fk,  
                    pp.transaction_log_fk,  
                    p.policy_no,  
                    p.policy_type_code,   
                    hiroc_rpt_user.hiroc_get_entity_name(pp.policy_fk,'POLHOLDER')  policy_holder,  
                    pp.risk_fk,   
                    r.risk_base_record_fk,   
                    r.entity_fk,  
                    hiroc_sel_entity_risk_name2 (pp.risk_fk,r.entity_fk)  risk_name,   
                    substr(trim(nvl(r.county_code_used_to_rate,pth.issue_state_code)),1,2) rating_state_code,  
                    hiroc_get_province_name(substr(trim(nvl(r.county_code_used_to_rate,pth.issue_state_code)),1,2), 'PROVINCE_CODE', 'L') rating_state_name,  
                    hiroc_get_provicne_pol_prefix(substr(trim(nvl(r.county_code_used_to_rate,pth.issue_state_code)),1,2),p.policy_type_code) rating_prov_pol_prefix,   
                    nvl(r.risk_cls_used_to_rate,pth.peer_groups_code) rating_peer_group_code,  
                    hiroc_get_lookup_desc('PEER_GROUP',nvl(r.risk_cls_used_to_rate,pth.peer_groups_code),'L')  rating_peer_group_name,   
                    pth.policy_term_history_pk,   
                    pth.term_base_record_fk,   
                    to_char(pth.effective_from_date,'yyyy') term_effective_year,   
                    c.coverage_pk,   
                    c.coverage_base_record_fk,   
                    pc.coverage_code,   
                    c.product_coverage_code,   
                    pc.long_description,   
                    pp.coverage_component_code,  
                    c.effective_from_date,   
                    c.effective_to_date,  
                    cls.coverage_code coverage_class_code,   
                    cls.coverage_long_desc coverage_class_long_desc,   
                    decode(pp.coverage_component_code ,'GROSS',cls.exposure_unit,null) exposure_unit, --hiroc_get_expos_units_by_cov(c.coverage_pk,pc.coverage_code,c.effective_from_date,c.effective_to_date) exposure_unit,   
                    decode(pp.coverage_component_code ,'GROSS',cls.number_of_patient_day,null) number_of_patient_day,   
                    pth.effective_from_date  term_eff_from_date,   
                    pth.effective_to_date term_eff_to_date,    
                    pp.premium_amount premium_amount,    
                    (case when (pc.coverage_code in ('CP','MC1','MC2','MC3','MC4','HR','F') or pc.coverage_code like 'ST%') and  
                                  pp.coverage_component_code != 'RISKMGMT' then     
                            (nvl(pp.premium_amount,0))  
                        else  
                            0  
                    end) primary_premium,   
                    (hiroc_get_risk_units(hiroc_get_provicne_pol_prefix(substr(trim(nvl(r.county_code_used_to_rate,pth.issue_state_code)),1,2),p.policy_type_code)-- rating_prov_pol_prefix  
                                        ,nvl(r.risk_cls_used_to_rate,pth.peer_groups_code) -- rating_peer_group_code  
                                        ,cls.coverage_code --coverage_class_code  
                                        ,decode(pp.coverage_component_code ,'GROSS',cls.exposure_unit,null)  
                                        ,pp.premium_amount  
                                        ,(case when (pc.coverage_code in ('CP','MC1','MC2','MC3','MC4','HR','F') or pc.coverage_code like 'ST%') and  
                                                      pp.coverage_component_code != 'RISKMGMT' then     
                                                (nvl(pp.premium_amount,0))  
                                            else  
                                                0  
                                         end)  -- primary_premium  
                                        ,p.policy_type_code           
                                        ,trunc(pth.effective_to_date))) risk_units  
             from     proddw_mart.rmv_territory_makeup tm,  
                      proddw_mart.rmv_premium_class_makeup pcm,  
                      proddw_mart.rmv_product_coverage pc,  
                      proddw_mart.rmv_coverage c,  
                      proddw_mart.rmv_risk r,  
                      proddw_mart.rmv_policy_term_history pth,  
                      proddw_mart.rmv_policy p,  
                      proddw_mart.rmv_transaction_log tl,  
                      proddw_mart.rmv_policy_premium pp,  
                      (select  /* +rule */  
                               p.policy_no,  
                               p.policy_start_date,  
                               p.policy_end_date,   
                               r.risk_pk,  
                               r.risk_description,  
                               c.coverage_pk,  
                               c.parent_coverage_base_record_fk,  
                               pc.parent_product_covg_code,  
                               pc.coverage_code,  
                               pc.short_description coverage_short_desc,   
                               pc.long_description coverage_long_desc,  
                               c.exposure_unit,  
                               pc.exposure_basis_code,  
                               c.number_of_patient_day,  
                               p.policy_start_date policy_effective_date,  
                               p.policy_end_date policy_expiry_date,  
                               c.effective_from_date,  
                               c.effective_to_date,  
                               to_char(c.effective_from_date,'YYYY') class_eff_year  
                        from   proddw_mart.odwr_coverage_only      c  
                              ,proddw_mart.odwr_product_coverage   pc  
                              ,proddw_mart.odwr_risk               r  
                              ,proddw_mart.odwr_policy             p  
                        where  pc.code                 = c.product_coverage_code  
                          and  pc.parent_product_covg_code is not null                 -- coverage classes only  
                          and  r.risk_pk = c.risk_base_record_fk  
                          and  c.accounting_to_date = to_date('1/1/3000','mm/dd/yyyy') -- only open records  
                          and  c.base_record_b = 'N'  
                          and  p.base_record_b = 'N'  
                          and  p.policy_pk = r.policy_fk  
                          and  p.accounting_to_date = to_date('1/1/3000','mm/dd/yyyy')  -- only open records  
                       group by p.policy_no,  
                               p.policy_start_date,  
                               p.policy_end_date,   
                               r.risk_pk,  
                               r.risk_description,  
                               c.coverage_pk,  
                               c.parent_coverage_base_record_fk,  
                               pc.parent_product_covg_code,  
                               pc.coverage_code,  
                               pc.short_description, -- coverage_short_desc,   
                               pc.long_description, -- coverage_long_desc,  
                               c.exposure_unit,  
                               pc.exposure_basis_code,  
                               c.number_of_patient_day,  
                               p.policy_start_date, -- policy_effective_date,  
                               p.policy_end_date, -- policy_expiry_date,  
                               c.effective_from_date,  
                               c.effective_to_date,  
                               to_char(c.effective_from_date,'YYYY')-- class_eff_year  
                      ) cls  
                where    tm.risk_type_code = r.risk_type_code  
                and        tm.county_code = r.county_code_used_to_rate  
                and        tm.effective_from_date <= pp.rate_period_from_date  
                and        tm.effective_to_date   >  pp.rate_period_from_date  
                and        pcm.practice_state_code (+) = r.practice_state_code  
                and        pcm.risk_class_code (+) = r.risk_cls_used_to_rate  
                and        nvl(pcm.effective_from_date, pp.rate_period_from_date) <= pp.rate_period_from_date  
                and        nvl(pcm.effective_to_date, to_date('01/01/3000','mm/dd/yyyy')) > pp.rate_period_from_date  
                and        pc.code = c.product_coverage_code  
                and        c.base_record_b = 'N'  
                and        ( c.record_mode_code = 'OFFICIAL'  
                         and (c.closing_trans_log_fk is null or  
                              c.closing_trans_log_fk != tl.transaction_log_pk)  
                         or c.record_mode_code = 'TEMP'  
                         and c.transaction_log_fk = tl.transaction_log_pk )  
                and   c.parent_coverage_base_record_fk is null  
                and        c.effective_from_date  <  c.effective_to_date  
                and        c.effective_from_date  <= pp.rate_period_from_date  
                and        c.effective_to_date    >  pp.rate_period_from_date  
                and   c.accounting_from_date <= tl.accounting_date  
                and   c.accounting_to_date   >  tl.accounting_date  
                and        c.coverage_base_record_fk=pp.coverage_fk  
                and        r.base_record_b = 'N'  
                and        ( r.record_mode_code = 'OFFICIAL'  
                        and (r.closing_trans_log_fk is null or  
                             r.closing_trans_log_fk != tl.transaction_log_pk)  
                        or r.record_mode_code = 'TEMP'  
                        and r.transaction_log_fk = tl.transaction_log_pk )  
                and        r.effective_from_date  <  r.effective_to_date  
                and        r.effective_from_date  <= pp.rate_period_from_date  
                and        r.effective_to_date    >  pp.rate_period_from_date  
                and   r.accounting_from_date <= tl.accounting_date  
                and   r.accounting_to_date   >  tl.accounting_date  
                and         r.risk_base_record_fk = pp.risk_fk  
                and        pth.base_record_b = 'N'  
                and        ( pth.record_mode_code = 'OFFICIAL'  
                        and (pth.closing_trans_log_fk is null or  
                             pth.closing_trans_log_fk != tl.transaction_log_pk)  
                        or pth.record_mode_code = 'TEMP'  
                        and pth.transaction_log_fk = tl.transaction_log_pk )  
                and        pth.accounting_from_date <= tl.accounting_date  
                and        pth.accounting_to_date   >  tl.accounting_date  
                and        pth.term_base_record_fk = pp.policy_term_fk  
                and   p.policy_pk = pp.policy_fk  
                and        tl.transaction_log_pk  =  pp.transaction_log_fk  
                and   pp.active_premium_b = 'Y'  
                and        pp.rate_period_type_code in ('CS_PERIOD','SR_PERIOD')  
                and        pp.rate_period_to_date > pp.rate_period_from_date  
                and tl.accounting_date <= sysdate   
                and p.policy_cycle_code = 'POLICY'  
                and substr(p.policy_no,1,1) <> 'Q'  
                and tl.transaction_log_pk = (select max(pp.transaction_log_fk)  
                                               from proddw_mart.rmv_policy_premium pp,proddw_mart.rmv_transaction_log tl2  
                                              where pth.term_base_record_fk = pp.policy_term_fk  
                                                and pp.transaction_log_fk = tl2.transaction_log_pk  
                                                and tl2.accounting_date <= sysdate )    
                 and p.policy_type_code in ('LIABCRIME','MIDWIFE')    
                 and pth.accounting_to_date =  to_date('01/01/3000','mm/dd/yyyy') --<<<*******  eliminates duplicates  
                 and p.policy_no = cls.policy_no  
            --     and r.risk_pk = cls.risk_pk  
                 and c.coverage_base_record_fk = cls.parent_coverage_base_record_fk(+)  
                 and  cls.effective_from_date < pth.effective_to_date -- from date less than period end date  
                 and  cls.effective_to_date   > pth.effective_from_date -- to date greater than period start date  
                 and  cls.policy_effective_date   < pth.effective_to_date -- from date less than period end date  
                 and  cls.policy_expiry_date     > pth.effective_from_date -- to date greater than period start date  
           group by pp.policy_fk,  
                    pp.transaction_log_fk,  
                    p.policy_no,  
                    p.policy_type_code,   
                    pp.risk_fk,   
                    r.risk_base_record_fk,   
                    r.entity_fk,  
                    substr(trim(nvl(r.county_code_used_to_rate,pth.issue_state_code)),1,2), -- rating_state_code,  
                    r.county_code_used_to_rate,  
                    pth.issue_state_code,  
                    nvl(r.risk_cls_used_to_rate,pth.peer_groups_code) , --  rating_peer_group_code,  
                    r.risk_cls_used_to_rate,  
                    pth.peer_groups_code,  
                    pth.policy_term_history_pk,   
                    pth.term_base_record_fk,   
                    to_char(pth.effective_from_date,'yyyy'), --term_effective_year,   
                    c.coverage_pk,   
                    c.coverage_base_record_fk,   
                    pc.coverage_code,   
                    c.product_coverage_code,   
                    pc.long_description,   
                    pp.coverage_component_code,  
                    c.effective_from_date,   
                    c.effective_to_date,  
                    cls.coverage_code, -- coverage_class_code,   
                    cls.coverage_long_desc, -- coverage_class_long_desc,   
                    decode(pp.coverage_component_code ,'GROSS',cls.exposure_unit,null),-- exposure_unit,   
                    decode(pp.coverage_component_code ,'GROSS',cls.number_of_patient_day,null), -- number_of_patient_day,   
                    pth.effective_from_date, --term_eff_from_date,   
                    pth.effective_to_date, --, --term_eff_to_date,    
                    pp.premium_amount;
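    ORA-08103 generally means an object the statement was reading was dropped, truncated, moved or rebuilt while the INSERT was still running (a truncate, partition maintenance or a materialized view refresh on one of the source tables would all do it). A quick check, a sketch only with a placeholder object list, is to look for DDL on the referenced objects around the time of the failure:

    select owner, object_name, object_type, last_ddl_time
    from   all_objects
    where  owner = 'PRODDW_MART'
    and    object_name in ('RMV_POLICY_PREMIUM', 'RMV_COVERAGE', 'RMV_RISK')  -- add the other referenced tables/views
    order  by last_ddl_time desc;

    If LAST_DDL_TIME falls inside the run window, schedule the insert around that maintenance or re-run it afterwards.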

  • Script for generating insert statements

    I have different tables with data, and I would like to have INSERT statements generated from the data in those tables. Does someone have a generic script to do the job?

    Hi trafoc,
    You can also check out SQL Developer, which has a function to output table DDL and/or data under the Tools selection. The link is "http://www.oracle.com/technology/software/products/sql/index.html".
    Below is a sample output:
    --   DATA FOR TABLE PEOPLE
    --   FILTER = none used
    REM INSERTING into PEOPLE
    Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('SMITH','CLERK',800,20);
    Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('ALLEN','SALESMAN',1600,30);
    Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('WARD','SALESMAN',1250,30);
    Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('JONES','MANAGER',2975,20);
    Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('MARTIN','SALESMAN',1250,30);
    Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('BLAKE','MANAGER',2850,30);
    Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('CLARK','MANAGER',2450,10);
    Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('KING','PRESIDENT',5000,10);
    Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('TURNER','SALESMAN',1500,30);
    Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('GRIZZLY','CLERK',1100,20);
    Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('JAMES','CLERK',950,30);
    Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('FORD','ANALYST',3000,20);
    Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('MILLER','CLERK',1300,10);
    Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('SCOTT','Clerk',9999,20);
    Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('TEST_VPD','Clerk',1500,20);
    --   END DATA FOR TABLE PEOPLE
    HTH
    Zack
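    If a tool isn't available, a small PL/SQL block can generate the statements itself. A rough sketch, assuming a simple table shaped like the PEOPLE example above (it would need extending to handle dates, NULLs, LOBs and quotes embedded in the strings):

    BEGIN
      FOR r IN (SELECT username, job, salary, deptno FROM people) LOOP
        dbms_output.put_line(
          'Insert into PEOPLE (USERNAME,JOB,SALARY,DEPTNO) values ('''
          || r.username || ''',''' || r.job || ''','
          || r.salary   || ','    || r.deptno || ');');
      END LOOP;
    END;
    /

    Run it with SET SERVEROUTPUT ON in SQL*Plus and spool the output to a file.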

  • Question related to INSERT statement

    Consider this scenario:
    - table_a has 5,000,000 records at time t1
    - table_b is empty at time t1
    - At time t1, I execute the following statement:
              insert into table_b
                select * from table_a;
        - The insert statement finishes at time t2
    - Between t1 and t2, say 3,000 new records were added to table_a (by OLTP processes).
    So at time t2, table_a has 5,003,000 records.
    My question is, at time t2, how many records will be in table_b? Will it be 5,000,000 or
    5,003,000 or somewhere between these two counts?
    I am guessing it will be 5,000,000 since that was the number when the insert statement
    started to execute.
    Any feedback is much appreciated.

    Replying to Syed Ullah's scenario above:
    At time t2 it's possible the answer is 0. Once the insert operation finishes (which could be any time after t1 in your example, possibly before t2, possibly after), and assuming you checked within the same session where you ran the insert, you'd see 5,000,000. In other sessions you would continue to see 0 until the session that performed the insert did a COMMIT.
    http://docs.oracle.com/cd/E11882_01/server.112/e25789/consist.htm#CNCPT121
    That documentation is something you should be reading. It outlines these fundamental concepts much better than you're likely to find in a few forum posts.

  • OIM 11g - Issue with Bulk Load Utility for Account Data

    Hi,
    We are trying to load the account data for users in OIM 11g using the bulk load utility.
    We are trying to load the account data for the resource "iPlanet". For testing purposes, we made one account entry in the CSV file and ran the bulk load utility. After the bulk load process completes, we have noticed that the resource is provisioned to the user multiple times and multiple entries have been created in the process form table.
    We have tried to run the utility multiple times with a different user record each time.
    The output of the SQL query below:
    SELECT MSG FROM OIM_BLKLD_LOG
    WHERE MODULE = 'ACCOUNT' AND LOG_LEVEL = 'PROGRESS_MSG'
    ORDER BY MSG_SEQ_NO;
    is coming as follows:
    MSG
    Number of Records Loaded: 126
    Number of Records Loaded: 252
    Number of Records Loaded: 504
    Number of Records Loaded: 1008
    Number of Records Loaded: 2016
    Number of Records Loaded: 4032
    We have noticed that each time, the number of records loaded doubles the number loaded in the previous run, even though the CSV file contains only one record.
    Provided below are the parent and child csv file entries.
    Parent file:
    UD_IPNT_USR_USERID,UD_IPNT_USR_FIRST_NAME,UD_IPNT_USR_LAST_NAME,UD_IPNT_USR_COMMON_NAME,UD_IPNT_USR_NSUNIQUEID
    KPETER,Peter,Kevin,Peter Kevin,
    Child file 1:
    UD_IPNT_USR_USERID,UD_IPNT_GRP_GROUP_NAME
    KPETER,group1
    Child file 2:
    UD_IPNT_USR_USERID,UD_IPNT_ROL_ROLE_NAME
    KPETER,role1
    Can you please throw some insight on what could be the potential cause for this issue and how it could be resolved?
    Thanks
    Deepa
    Edited by: user10955790 on Jun 25, 2012 6:45 AM

    Hi Deepa,
    I know from the 'User load' perspective that it is required to restart Oracle Identity Manager when we need to reload data that was not loaded during the first run.
    So my suggestion is to restart it before reloading.
    Reference: http://docs.oracle.com/cd/E21764_01/doc.1111/e14309/bulkload.htm#CHDEICEH
    I hope this helps,
    Thiago Leoncio.

  • How to return the newly generated sequence id for an INSERT statement

    A record is to be inserted into a table with a sequence for the primary key. The newly inserted sequence value is to returned on successful insertion. Is it possible to do all this in a single statement (say executeUpdate or any other) using java.sql.* ?
    E.g.: - A student record is to be inserted into the STUDENT table. There is a sequence (by name Student_ID_SEQ) on the primary key Student_ID. Student_ID_SEQ.nextval will generate the new sequence id which will be provided as input to the SQL statement (say statement.executeUpdate) along with other student attribute values. On insertion the created sequence id should be returned. And all this should happen in a single statement (single call to database). Stored Procedures can accomplish this. But is this feasible without the use of Stored Procedures?
    Thanks.

    a better approach is to generate the auto key on the database side, not on the application side.
    That's his problem - since the database is supplying the key for the new record, his application which executed the SQL has no way to identify the record that was just added. I just create the key on the app server and accept the likelihood of overlap (which is extremely small).
    Here is a more technical explanation:
    Table Person {
       ID,
       Name,
       Phone Number,
       Age
    }
    The field ID is an autonumber, and all other fields are not unique.
    Now, when this code executes:
    PreparedStatement pst = conn.prepareStatement("Insert Into Person (Name, Phone Number, Age) Values (?, ?, ?)");
    pst.setString(1, "John");
    pst.setString(2, "405-444-5555");
    pst.setInt(3, 44);
    pst.executeUpdate();
    How can the app determine the ID of the person just added, since no query is possible which is guaranteed to select just the record that was inserted?
    Since I am generally against Stored Procedures, I would develop a way to ensure that your keys were unique and generate them inside the app server.
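    On the Oracle side, the RETURNING clause hands the generated key back from the same INSERT, without a stored procedure. A minimal PL/SQL sketch, assuming the STUDENT table and Student_ID_SEQ sequence from the question (the NAME column is just a placeholder for the other student attributes):

    DECLARE
      v_new_id STUDENT.Student_ID%TYPE;
    BEGIN
      INSERT INTO STUDENT (Student_ID, Name)
      VALUES (Student_ID_SEQ.NEXTVAL, 'John')
      RETURNING Student_ID INTO v_new_id;  -- the new sequence value comes back here
      dbms_output.put_line('New Student_ID = ' || v_new_id);
    END;
    /

    From JDBC the same effect is commonly had by executing the INSERT through a statement that asks the driver for generated keys, but the details vary by driver version.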

  • Looking for an insert statement to insert around 500 rows into a table

    Hi Gurus,
    Your help is greatly appreciated!
    I need a query to insert into the mer_spec_feature table as described below. Please suggest something other than the UTL_FILE input approach, as the data is different in the regions, and I am looking to see whether it works out with an INSERT and SELECT query.
    1) I have to create a script for inserting records into mer_spec_feature for appl_id 'VXRR' with feature ID 786, and I have to exclude TIDs which already have the 786 feature added for this appl_id, as in the below select query.
    2) There are 509 such TIDs that do not have the 786 feature with appl_id 'VXRR' in the mer_spec_feature table.
    3)
    select distinct terminal_id, appl_id from mer_spec_feature
    where appl_id = 'VXRR'
    minus
    select distinct terminal_id, appl_id from mer_spec_feature
    where appl_id = 'VXRR'
    and terminal_feature_id = 786
    desc mer_spec_feature:
    terminal_id          varchar2(9)
    appl_id              varchar2(10)
    terminal_feature_id  number(8)

    Sample rows:
    TERMINAL_FEATURE_ID  TERMINAL_APPL_ID  TERMINAL_ID
    299                  405T330a          1004665
    786                  VXRR              1004665

    @Frank Kulash
    Thanks as always for your replies!
    I have tried the above SQL, and I am having an issue with the PK constraint after inserting 232 rows.
    A terminal_id will have 100 feature_ids for one appl_id, because it has a PK constraint on TERMINAL_FEATURE_ID, APPL_ID, TERMINAL_ID:

    TERMINAL_FEATURE_ID  APPL_ID   TERMINAL_ID
    299                  405T330a  1004665
    786                  VXRR      1004665

    Can you please suggest any other option?
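    One way to write the insert so that terminals already carrying feature 786 are skipped, which is also what keeps the primary key on (TERMINAL_FEATURE_ID, APPL_ID, TERMINAL_ID) happy, is a single INSERT ... SELECT built from the MINUS query above. A sketch only; adjust the column names if the real table uses terminal_appl_id rather than appl_id:

    INSERT INTO mer_spec_feature (terminal_feature_id, appl_id, terminal_id)
    SELECT 786, 'VXRR', t.terminal_id
    FROM  (SELECT DISTINCT terminal_id
           FROM   mer_spec_feature
           WHERE  appl_id = 'VXRR') t
    WHERE NOT EXISTS (SELECT 1
                      FROM   mer_spec_feature f
                      WHERE  f.appl_id             = 'VXRR'
                      AND    f.terminal_id         = t.terminal_id
                      AND    f.terminal_feature_id = 786);

    The DISTINCT in the inline view avoids inserting the same terminal more than once even though a terminal appears once per existing feature row.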

  • Rowcount for multiple insert statements

    Hi all,
    Is there a way I can get separate rowcounts in a multi-table insert?
    My sql is written as below:
        INSERT ALL
          INTO TAB1 (COL1, COL2....)  VALUES          (VAL1, VAL2....)
          INTO TAB2 (COL1, COL2....)  VALUES          (VAL1, VAL2....)
            (SELECT * FROM TABLE);
    rowcount of tab1?
    rowcount of tab2?
    Thank you,
    Niroop

    Here is a workaround to get the multi-table insert counts.
    SQL>
    SQL> select * from v$version;
    BANNER                                                                         
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production   
    PL/SQL Release 11.2.0.1.0 - Production                                         
    CORE    11.2.0.1.0    Production                                                     
    TNS for 64-bit Windows: Version 11.2.0.1.0 - Production                        
    NLSRTL Version 11.2.0.1.0 - Production                                         
    SQL>
    SQL> DROP PACKAGE utils_pkg;
    Package dropped.
    SQL> DROP TABLE src_tab;
    Table dropped.
    SQL> DROP TABLE tab1;
    Table dropped.
    SQL> DROP TABLE tab2;
    Table dropped.
    SQL>
    SQL> CREATE OR REPLACE
      2  PACKAGE utils_pkg
      3  IS
      4    v_tab1_rec_count pls_integer :=0;
      5    v_tab2_rec_count pls_integer :=0;
      6    FUNCTION fCounter(
      7        pTab_Name IN VARCHAR2)
      8      RETURN NUMBER;
      9    FUNCTION fGetInsCounts(
    10        pTab_Name IN VARCHAR2)
    11      RETURN NUMBER;
    12    FUNCTION fResetCounts
    13      RETURN NUMBER;
    14  END utils_pkg;
    15  /
    Package created.
    SQL> CREATE OR REPLACE
      2  PACKAGE body utils_pkg
      3  IS
      4  FUNCTION fcounter(
      5      pTab_Name IN VARCHAR2)
      6    RETURN NUMBER
      7  IS
      8  BEGIN
      9    IF pTab_Name        = 'TAB1' THEN
    10      v_tab1_rec_count := v_tab1_rec_count +1;
    11    ELSE
    12      v_tab2_rec_count := v_tab2_rec_count +1;
    13    END IF;
    14    RETURN 0;
    15  END fcounter;
    16  FUNCTION fGetInsCounts(
    17      pTab_Name IN VARCHAR2)
    18    RETURN NUMBER
    19  IS
    20  BEGIN
    21    IF pTab_Name = 'TAB1' THEN
    22      RETURN v_tab1_rec_count;
    23    ELSE
    24      RETURN v_tab2_rec_count;
    25    END IF;
    26  END fGetInsCounts;
    27  FUNCTION fResetCounts
    28    RETURN NUMBER
    29  IS
    30  BEGIN
    31    v_tab1_rec_count :=0;
    32    v_tab2_rec_count :=0;
    33    RETURN NULL;
    34  END fResetCounts;
    35  END utils_pkg;
    36  /
    Package body created.
    SQL> CREATE TABLE src_tab AS
      2  SELECT *
      3  FROM
      4    (SELECT 1 col1, 'TEST' col2, 'TAB2_DATA' col3 FROM dual
      5    UNION ALL
      6    SELECT 2, 'TESTING', 'NO_TAB2_DATA' FROM dual
      7    )temp;
    Table created.
    SQL>
    SQL> CREATE TABLE tab1 AS
      2  SELECT col1, col2 FROM src_tab WHERE 1=2;
    Table created.
    SQL>
    SQL> CREATE TABLE tab2 AS
      2  SELECT col3 FROM src_tab WHERE 1=2;
    Table created.
    SQL>
    SQL> SELECT utils_pkg.fGetInsCounts('TAB1') AS tab1_inserts,
      2    utils_pkg.fGetInsCounts('TAB2')      AS tab2_inserts,
      3    utils_pkg.fresetcounts    AS reset_done
      4  FROM dual;
    TAB1_INSERTS TAB2_INSERTS RESET_DONE                                           
               0            0                                                      
    SQL>
    SQL> INSERT ALL
      2  INTO tab1
      3    (
      4      col1,
      5      col2
      6    )
      7    VALUES
      8    (
      9      DECODE(utils_pkg.fcounter('TAB1'),0,col1),
    10      col2
    11    )
    12  INTO tab2
    13    (
    14      col3
    15    )
    16    VALUES
    17    (
    18      DECODE(utils_pkg.fcounter('TAB2'),0,col3)
    19    )
    20  SELECT col1, col2, col3 FROM src_tab;
    4 rows created.
    SQL>
    SQL>
    SQL> SELECT utils_pkg.fGetInsCounts('TAB1') AS tab1_inserts,
      2    utils_pkg.fGetInsCounts('TAB2')      AS tab2_inserts,
      3    utils_pkg.fresetcounts    AS reset_done
      4  FROM dual;
    TAB1_INSERTS TAB2_INSERTS RESET_DONE                                           
               2            2                                                      
    SQL>
    SQL> rollback;
    Rollback complete.
    SQL>
    SQL> INSERT  WHEN 1=1 THEN
      2  INTO tab1
      3    (
      4      col1,
      5      col2
      6    )
      7    VALUES
      8    (
      9      DECODE(utils_pkg.fcounter('TAB1'),0,col1),
    10      col2
    11    )
    12    WHEN col3 = 'TAB2_DATA' THEN
    13  INTO tab2
    14    (
    15      col3
    16    )
    17    VALUES
    18    (
    19      DECODE(utils_pkg.fcounter('TAB2'),0,col3)
    20    )
    21  SELECT col1, col2, col3 FROM src_tab;
    3 rows created.
    SQL>
    SQL>
    SQL> SELECT utils_pkg.fGetInsCounts('TAB1') AS tab1_inserts,
      2    utils_pkg.fGetInsCounts('TAB2')      AS tab2_inserts,
      3    utils_pkg.fresetcounts    AS reset_done
      4  FROM dual;
    TAB1_INSERTS TAB2_INSERTS RESET_DONE                                           
               2            1                                                      
    SQL>
    SQL> rollback;
    Rollback complete.
    Thanks,
    GPU

  • Short  Dump in Standard program for the INSERT Statement

    Hi All,
    In transaction FPCJ, when I click on the button Close Cash Desk, I am getting a short dump.
    The problem is in the standard SAP program 'SAPLSUU1', and the include program is 'LSUU1F01'.
    The reason is that the code is trying to insert duplicate entries into a database table, and SAP is suggesting an SAP Note.
    The problem is that when I search with the suggested search terms I get a very large number of notes, and I am unable to narrow it down to any single note.
    If someone has worked on this or has any idea of the Note number, please share it with me.
    Thanks in advance.
    Thanks & regards,
    Y Gautham

    Hi Nicole,
    Thanks a lot! It actually worked.
    Thanks & regards,
    Y Gautham

  • Delta and Full Load question for cube and ODS

    Hi all,
    I need to push a full load from a delta ODS.
    I have a process chain in which the steps are as below:
    1. R/3 extractor for ODS1 (delta)
    2. ODS1 to ODS2 (delta)
    3. ODS2 to Cube ---> needs to be full load
    Now when I run the process chain, further processing means ODS2 automatically does an init/delta load to the cube.
    How can I make it a full load?
    Can anyone guide me on this?
    Thanks,
    KS

    Hi,
    1. R/3 extractor for ODS1 (delta): This is OK; normally you can put the delta InfoPackage in the process chain.
    2. ODS1 to ODS2 (delta): It automatically flows from ODS1 to ODS2 (you need to select "Update Data Automatically" in the targets at the time of ODS creation).
    3. ODS2 to Cube ---> needs to be a full load:
    For this you create update rules from ODS1 to the cube, then create an InfoPackage between ODS2 and the cube and do full loads. You can delete the data in the cube before the load and then do a full load to the cube.
    Note: In ODS2, don't select "Update Data Automatically to Data Targets".
    Thanks
    Reddy
    Edited by: Surendra Reddy on Nov 21, 2008 1:57 PM

  • I have a question for you: Inserting Word document in BLOB column

    Hey Experts,
    I have found good info and a sample on how to achieve this at
    http://www.sys-con.com/java/source/5-6/code.cfm?Page=76.
    declare
    f_lob bfile;
    b_lob blob;
    begin
    insert into sam_emp(empno,ename,resume)
    values ( 9001, 'Samir', empty_blob() )
    returning resume into b_lob;
    f_lob := bfilename( 'MY_FILES', 'MyResume.doc' );
    dbms_lob.fileopen(f_lob, dbms_lob.file_readonly);
    dbms_lob.loadfromfile
    ( b_lob, f_lob, dbms_lob.getlength(f_lob) );
    dbms_lob.fileclose(f_lob);
    commit;
    end;
    I have a JSP project, and the users (on the client side) must be able to create a Word document and send it to the server with an upload servlet. With another servlet or JSP I want to put this Word document into the BLOB column using Java. The sample above uses PL/SQL to achieve this. Is there a way I can do the same thing in my servlet/JSP?
    Any hints are welcome!

    The option should be visible here: http://support.mozilla.com/en-US/kb/Printing%20a%20web%20page#w_print-window-settings
    Print range section - Lets you specify which pages of the current web page are printed:
    * Select '''All''' to print everything.
    * Select '''Pages''' and enter the range of pages you want to print. For example, selecting "from 1 to 1" prints the first page only.
    * Select '''Selection''' to print only the part the page you've highlighted.
    Does it work for you?

  • Unicode issue for the INSERT statement...

    Hi,
    In ECC 6.0, the following INSERT statement gives a short dump:
    FIELD-SYMBOLS:
      <fs_tabname>   TYPE ANY. "Dynamic table name
    DATA: t_bdi_entry  TYPE STANDARD TABLE OF bdi_entry.
    INSERT (<fs_tabname>) FROM TABLE t_bdi_entry.
    If anyone knows the reason, please let me know.
    Thanks!
    Puneet.

    Hi,
    Please try this.
    FIELD-SYMBOLS:
    <fs_tabname> TYPE ANY.    "Dynamic table name
    DATA: t_bdi_entry TYPE STANDARD TABLE OF bdi_entry.
    INSERT <fs_tabname> INTO TABLE t_bdi_entry.
    Regards,
    Ferry Lianto
