Send some example of BULK COLLECT option in a loop
Hi,
I have three collection type parameters which are bulk collected from the same table.
I want to use two of the parameters to verify the data in another table,
and if the data is not found, use the 3rd bulk-collected parameter to update a 3rd table.
Help is appreciated.
http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10472/composites.htm
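A minimal sketch of the pattern being asked about, assuming hypothetical tables src_tab, chk_tab and upd_tab (all table and column names here are placeholders, not from the original post):

```sql
-- Sketch: BULK COLLECT with LIMIT in a loop; verify each row against a
-- second table and update a third table when no match is found.
DECLARE
  TYPE id_tab_t  IS TABLE OF src_tab.id%TYPE;
  TYPE val_tab_t IS TABLE OF src_tab.val%TYPE;
  l_ids  id_tab_t;
  l_vals val_tab_t;
  l_cnt  PLS_INTEGER;
  CURSOR c_src IS SELECT id, val FROM src_tab;
BEGIN
  OPEN c_src;
  LOOP
    FETCH c_src BULK COLLECT INTO l_ids, l_vals LIMIT 1000;
    FOR i IN 1 .. l_ids.COUNT LOOP
      -- verify against the second table
      SELECT COUNT(*) INTO l_cnt FROM chk_tab WHERE id = l_ids(i);
      IF l_cnt = 0 THEN
        -- no match found: update the third table with the third value
        UPDATE upd_tab SET val = l_vals(i) WHERE id = l_ids(i);
      END IF;
    END LOOP;
    EXIT WHEN c_src%NOTFOUND;  -- exit after processing the last batch
  END LOOP;
  CLOSE c_src;
  COMMIT;
END;
/
```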
Similar Messages
-
Bulk collect with nested loops
Hi, I have a requirement like this:
I need to pull request nos from table a (master table).
For every request no I need to pull request details from table b (detail table).
For every request no I need to pull contact details from table c.
For every request no I need to pull customer data from table d, and I need to create a flat file with that data, so I'm using UTL_FILE. With the normal query approach the nested loops take a lot of time, so I want to use BULK COLLECT with the dynamic query option:
Sample code
=======
create or replace procedure test(region varchar2) as
type tablea_request_typ is table of varchar2(10);
tablea_data tablea_request_typ;
type tableb_request_typ is table of varchar2(1000);
tableb_data tableb_request_typ;
type tablec_request_typ is table of varchar2(1000);
tablec_data tablec_request_typ;
type tabled_request_typ is table of varchar2(1000);
tabled_data tabled_request_typ;
stmta varchar2(32000);
stmtb varchar2(32000);
stmtc varchar2(32000);
stmtd varchar2(32000);
rcura SYS_REFCURSOR;
rcurb SYS_REFCURSOR;
rcurc SYS_REFCURSOR;
rcurd SYS_REFCURSOR;
begin
stmta:='select request_no from tablea where region = :region';
stmtb:='select request_no||request_detail1||request_detail2 stringb from tableb where request_num = :req';
stmtc:='select contact1||contact2||contact3||contact4 stringc from tablec where request_num = :req';
stmtd:='select customer1||customer2||customer3||customer4 stringd from tabled where request_num = :req';
OPEN rcura for stmta USING region;
LOOP
FETCH rcura BULK COLLECT INTO tablea_data LIMIT 1000;
FOR f in 1..tablea_data.count
LOOP
--Tableb
OPEN rcurb for stmtb USING substr(tablea_data(f),1,14);
LOOP
FETCH rcurb BULK COLLECT INTO tableb_data LIMIT 1000;
for i in 1..tableb_data.count
LOOP
utl_file(...,tableb_data(i));
END LOOP;
EXIT WHEN rcurb%NOTFOUND;
END LOOP;
CLOSE rcurb;
-- Tablec
OPEN rcurc for stmtc USING substr(tablea_data(f),1,14);
LOOP
FETCH rcurc BULK COLLECT INTO tablec_data LIMIT 1000;
for i in 1..tablec_data.count
LOOP
utl_file(...,tablec_data(i));
END LOOP;
EXIT WHEN rcurc%NOTFOUND;
END LOOP;
CLOSE rcurc;
-- Tabled
OPEN rcurd for stmtd USING substr(tablea_data(f),1,14);
LOOP
FETCH rcurd BULK COLLECT INTO tabled_data LIMIT 1000;
for i in 1..tabled_data.count
LOOP
utl_file(...,tabled_data(i));
END LOOP;
EXIT WHEN rcurd%NOTFOUND;
END LOOP;
CLOSE rcurd;
END LOOP;
EXIT WHEN rcura%NOTFOUND;
END LOOP;
CLOSE rcura;
exception
when others then
dbms_output.put_line(sqlerrm);
end;
I'm using the code mentioned above but request nos are repeating as if it's an infinite loop. For example, if request no is 222 it should run once, but here it's running more than once.
How do I pass bind parameters, say in my case region?
Are there any alternate solutions to run it faster, apart from using bulk collect?
Right now I'm using an explicit cursor with a for loop, which is taking a lot of time, so is this a better solution?
Thanks,
Mahender
Edited by: BluShadow on 24-Aug-2011 08:52
added {noformat}{noformat} tags. Please read {message:id=9360002} to learn to format your code/data yourself.
Use a parameterized cursor:
CREATE OR REPLACE PROCEDURE test(region varchar2)
AS
type tablea_request_typ is table of varchar2(10);
type tableb_request_typ is table of varchar2(1000);
type tablec_request_typ is table of varchar2(1000);
type tabled_request_typ is table of varchar2(1000);
tablea_data tablea_request_typ;
tableb_data tableb_request_typ;
tablec_data tablec_request_typ;
tabled_data tabled_request_typ;
CURSOR rcura(v_region VARCHAR2)
IS
select request_no from tablea where region = v_region;
CURSOR rcurb(v_input VARCHAR2)
IS
select request_no||request_detail1||request_detail2 stringb from tableb where request_num = v_input;
CURSOR rcurc(v_input VARCHAR2)
IS
select contact1||contact2||contact3||contact4 stringc from tablec where request_num = v_input;
CURSOR rcurd(v_input VARCHAR2)
IS
select customer1||customer2||customer3||customer4 stringd from tabled where request_num = v_input;
BEGIN
OPEN rcura('NE');
LOOP
FETCH rcura BULK COLLECT INTO tablea_data LIMIT 1000;
FOR f in 1..tablea_data.count
LOOP
--Tableb
OPEN rcurb(substr(tablea_data(f),1,14));
LOOP
FETCH rcurb BULK COLLECT INTO tableb_data LIMIT 1000;
for i in 1..tableb_data.count
LOOP
utl_file(...,tableb_data(i));
END LOOP;
EXIT WHEN rcurb%NOTFOUND;
END LOOP;
CLOSE rcurb;
-- Tablec
OPEN rcurc (substr(tablea_data(f),1,14));
LOOP
FETCH rcurc BULK COLLECT INTO tablec_data LIMIT 1000;
for i in 1..tablec_data.count
LOOP
utl_file(...,tablec_data(i));
END LOOP;
EXIT WHEN rcurc%NOTFOUND;
END LOOP;
CLOSE rcurc;
-- Tabled
OPEN rcurd ( substr(tablea_data(f),1,14) );
LOOP
FETCH rcurd BULK COLLECT INTO tabled_data LIMIT 1000;
for i in 1..tabled_data.count
LOOP
utl_file(...,tabled_data(i));
END LOOP;
EXIT WHEN rcurd%NOTFOUND;
END LOOP;
CLOSE rcurd;
END LOOP;
EXIT WHEN rcura%NOTFOUND;
END LOOP;
CLOSE rcura;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line(dbms_utility.format_error_backtrace);
END;
/
Hope this helps. If not, post your table structures... -
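An alternative the original poster asks about ("any alternate solutions to run it faster"): replace the nested per-request cursors with one joined query and a single BULK COLLECT loop, so the database does the correlation instead of PL/SQL. A sketch only, assuming the same hypothetical tables and a request_num join key as used in the thread:

```sql
-- Sketch: one outer-joined query instead of four nested cursor loops.
-- Table/column names follow the thread's examples and are assumptions.
DECLARE
  TYPE line_tab_t IS TABLE OF VARCHAR2(4000);
  l_lines line_tab_t;
  CURSOR c_all (p_region VARCHAR2) IS
    SELECT a.request_no || '|' ||
           b.request_detail1 || b.request_detail2 || '|' ||
           c.contact1 || c.contact2 AS line
    FROM   tablea a
    LEFT JOIN tableb b ON b.request_num = substr(a.request_no, 1, 14)
    LEFT JOIN tablec c ON c.request_num = substr(a.request_no, 1, 14)
    WHERE  a.region = p_region;
BEGIN
  OPEN c_all('NE');
  LOOP
    FETCH c_all BULK COLLECT INTO l_lines LIMIT 1000;
    FOR i IN 1 .. l_lines.COUNT LOOP
      NULL; -- write l_lines(i) to the flat file with UTL_FILE.PUT_LINE here
    END LOOP;
    EXIT WHEN c_all%NOTFOUND;
  END LOOP;
  CLOSE c_all;
END;
/
```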
Hi, what is the use of F4_FILENAME? Please send some example code.
Hi, I am Shabeer, an ABAP fresher. I want to select a file which I have saved in the C drive of the hard disk through a SAP report. I am confused about what values to give for import and export. Will you please help me?
hi check this...
TABLES : MARA.
* INTERNAL TABLE DECLARATION
TYPES : BEGIN OF ITAB,
MATNR TYPE MARA-MATNR,
MEINS TYPE MARA-MEINS,
ERNAM LIKE MARA-ERNAM,
AENAM LIKE MARA-AENAM,
MTART LIKE MARA-MTART,
END OF ITAB.
DATA : IG_ITAB type ITAB OCCURS 0.
DATA : V_FILE TYPE STRING.
DATA : T_FILE TYPE RLGRAP-FILENAME.
* RETRIEVE DATA FROM DATABASE
SELECT
MATNR
MEINS ERNAM AENAM MTART FROM MARA
INTO CORRESPONDING FIELDS OF TABLE
IG_ITAB
up to 10 rows.
* CALLING FUNCTION MODULES
CALL FUNCTION 'F4_FILENAME' "PASS THE FILE NAME AS U NEED.
EXPORTING
FIELD_NAME = 'T_FILE'
IMPORTING
FILE_NAME = T_FILE.
V_FILE = T_FILE. "STRING CONVERSION
* CALLING GUI_DOWNLOAD TO EXTRACT
CALL FUNCTION 'GUI_DOWNLOAD'
EXPORTING
filename = V_FILE
FILETYPE = 'ASC'
WRITE_FIELD_SEPARATOR = 'X'
WRITE_LF = 'X'
tables
data_tab = IG_ITAB .
regards,
venkat appikonda -
Can I bulk collect based on a date variable?
I want to use some sort of bulk collect to speed up this query inside a PL/SQL block (we're in 10g). I have seen examples of using a FORALL statement but I don't know how to accommodate the variable v_cal_date. My idea is to be able to run the script to update a portion of the table "tbl_status_jjs" based on a date range that I provide. tbl_status_jjs contains a list of dates by minute of the day for an entire year and a blank column to be filled in.
I thought of using something like FORALL v_cal_date in '01-apr-2009 00:00:00'..'01-jun-2009 00:00:00' -- somehow need to increment by minute!? ... but that doesn't seem right and I can't find any examples of a bulk collect based on a date. How do I bulk collect on a date variable? Can I use the date/time from a subset of records from the final table in some sort of cursor?
Thanks
Jason
-- loop through one day by minute of the day and update counts into table
v_cal_date Date :=TO_DATE('01-apr-2005 00:00:00','dd-mm-yyyy hh24:mi:ss');
intX := 1;
WHILE intX <= 1440 LOOP
UPDATE tbl_status_jjs
SET (cal_date, my_count) =
(SELECT v_cal_date,
NVL(SUM(CASE WHEN v_cal_date >= E.START_DT AND v_cal_date < E.END_DT THEN 1 END),0) AS my_count
FROM tbl_data_jjs E)
WHERE cal_date = v_cal_date;
v_cal_date := v_cal_date + (1/1440);
intX := intX + 1;
COMMIT;
END LOOP;
Here are the two tables. The goal is to find an efficient way to count how many records in tbl_data have a start_dt before and an end_dt after each cal_date in tbl_status.
01-apr-2005 00:05:00 ==> 3
01-apr-2005 00:25:00 ==> 1
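A set-based alternative (a sketch, not something from the thread): the whole minute-by-minute loop can collapse into one correlated UPDATE over the date range, so no bulk collect, FORALL, or PL/SQL loop is needed at all. This assumes the tbl_status/tbl_data definitions given in the post:

```sql
-- Sketch: update every minute's count in a single correlated statement.
-- Uses the tbl_status / tbl_data tables defined in the post; the date
-- range bounds are illustrative.
UPDATE tbl_status s
SET    s.my_count = (SELECT COUNT(*)
                     FROM   tbl_data d
                     WHERE  s.cal_date >= d.start_dt
                     AND    s.cal_date <  d.end_dt)
WHERE  s.cal_date BETWEEN TO_DATE('2006-01-01 00:00:00','yyyy-mm-dd hh24:mi:ss')
                      AND TO_DATE('2006-01-01 01:00:00','yyyy-mm-dd hh24:mi:ss');
COMMIT;
```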
DROP TABLE tbl_status;
CREATE TABLE tbl_status
( CAL_DATE DATE,
MY_COUNT NUMBER);
DECLARE
start_date Date :=TO_DATE('01-jan-2006 00:00:00','dd-mm-yyyy hh24:mi:ss');
end_date Date :=TO_DATE('01-jan-2006 01:00:00','dd-mm-yyyy hh24:mi:ss');
BEGIN
INSERT INTO tbl_status (CAL_DATE)
SELECT start_date + ( (LEVEL - 1) / (24 * 60))
FROM dual
CONNECT BY LEVEL <= 1 + ( (end_date - start_date) * (24 * 60) );
END;
DROP TABLE tbl_data;
CREATE TABLE tbl_data
( START_DT DATE,
END_DT DATE);
INSERT INTO tbl_data VALUES (TO_DATE('2006-01-01 00:05:00', 'yyyy-mm-dd hh24:mi:ss'), TO_DATE('2006-01-01 00:15:00', 'yyyy-mm-dd hh24:mi:ss'));
INSERT INTO tbl_data VALUES (TO_DATE('2006-01-01 00:05:00', 'yyyy-mm-dd hh24:mi:ss'), TO_DATE('2006-01-01 00:20:00', 'yyyy-mm-dd hh24:mi:ss'));
INSERT INTO tbl_data VALUES (TO_DATE('2006-01-01 00:05:00', 'yyyy-mm-dd hh24:mi:ss'), TO_DATE('2006-01-01 00:30:00', 'yyyy-mm-dd hh24:mi:ss'));
INSERT INTO tbl_data VALUES (TO_DATE('2006-01-01 00:35:00', 'yyyy-mm-dd hh24:mi:ss'), TO_DATE('2006-01-01 00:40:00', 'yyyy-mm-dd hh24:mi:ss'));
DECLARE
v_cal_date Date :=TO_DATE('01-jan-2006 00:00:00','dd-mm-yyyy hh24:mi:ss');
intX Integer :=1;
BEGIN
WHILE intX <= 60 LOOP
UPDATE tbl_status
SET (cal_date, my_count) =
(SELECT v_cal_date,
NVL(SUM(CASE WHEN v_cal_date >= E.START_DT AND v_cal_date < E.END_DT THEN 1 END),0) AS my_count
FROM tbl_data E)
WHERE cal_date = v_cal_date;
v_cal_date := v_cal_date + (1/1440);
intX := intX + 1;
COMMIT;
END LOOP;
END;
Edited by: Jason_S on Oct 21, 2009 9:00 AM -- i messed up the years/months .. fixed now
Edited by: Jason_S on Oct 21, 2009 9:13 AM -
Why do some of the e-mails a friend sends me go into bulk mail?
Why do some of the e-mails a friend sends me go into bulk mail?
It has to do with how your mail client sees the mails. Some addresses get flagged; other times they trigger other alerts, such as sending out too many at once, or having sent out spam in the past (perhaps innocently, with their addresses spoofed). Other times topics or key words can trigger the spam filters.
-
How to use the BULK COLLECT INTO clause
hi all,
I have a requirement like this: I want to change the query by passing different table names, in Oracle Database 10gR2.
For example, first I pass the scott.emp table name: select * from scott.emp;
then I want to pass the scott.dept table name: select * from scott.dept;
using the select * option in the select list.
How can I execute it?
Give me any solution.
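A common way to run the same statement against different tables (a sketch, separate from the reply that follows) is native dynamic SQL with a weak ref cursor. The table name here is a caller-supplied placeholder and should be validated, e.g. with DBMS_ASSERT, before being concatenated into the statement:

```sql
-- Sketch: open the same SELECT against a caller-supplied table name.
-- p_table is a placeholder; DBMS_ASSERT.SQL_OBJECT_NAME guards against
-- SQL injection by verifying the name refers to an existing object.
CREATE OR REPLACE PROCEDURE show_count (p_table IN VARCHAR2)
AS
  l_cur SYS_REFCURSOR;
  l_cnt NUMBER;
BEGIN
  OPEN l_cur FOR 'SELECT COUNT(*) FROM '
                 || DBMS_ASSERT.SQL_OBJECT_NAME(p_table);
  FETCH l_cur INTO l_cnt;
  CLOSE l_cur;
  DBMS_OUTPUT.PUT_LINE(p_table || ': ' || l_cnt || ' rows');
END;
/
```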
please reply....
Hi,
i recently also ran into some serious trouble to make dynamic sql work.
It was about a parallel pipelined function with a strongly typed cursor (for "partition ... by hash(...)").
But in addition requiring dynamic SQL for the input to this cursor.
I couldn't make it work with execute immediate or something else.
So i used another way - I translated dynamic SQL into dynamically created static SQL:
1. create a base SQL data object type with abstract interface for code (e.g. some Execute() member function).
2. dynamically create a derived SQL data object type with concrete implementation of Execute() holding your "dynamic SQL" "in a static way"
3. delegate to Execute().
Let me quote my example from comp.databases.oracle.server, thread "dynamically created cursor doesn't work for parallel pipelined functions"
- pls. see below (it's an old one - with likely some odd stuff inside).
It might sound some kind of strange for DB programmer.
(I come from C++. Though this is not an excuse. :-) )
Maybe i just missed another possible option to handle the problem.
And it's definitely verbose.
But i think it's at least a (last) option.
Actually i would be interested to hear, what others think about it.
best regards,
Frank
--------------- snip -------------------------
drop table parallel_test;
drop type MyDoit;
drop type BaseDoit;
drop type ton;
create type ton as table of number;
CREATE TABLE parallel_test (
id NUMBER(10),
description VARCHAR2(50)
);
BEGIN
FOR i IN 1 .. 100000 LOOP
INSERT INTO parallel_test (id, description)
VALUES (i, 'Description or ' || i);
END LOOP;
COMMIT;
END;
create or replace type BaseDoit as object (
id number,
static function make(p_id in number)
return BaseDoit,
member procedure doit(
p_sids in out nocopy ton,
p_counts in out nocopy ton)
) not final;
create or replace type body BaseDoit as
static function make(p_id in number)
return BaseDoit
is
begin
return new BaseDoit(p_id);
end;
member procedure doit(
p_sids in out nocopy ton,
p_counts in out nocopy ton)
is
begin
dbms_output.put_line('BaseDoit.doit(' || id || ') invoked...');
end;
end;
-- Define a strongly typed REF CURSOR type.
CREATE OR REPLACE PACKAGE parallel_ptf_api AS
TYPE t_parallel_test_row IS RECORD (
id1 NUMBER(10),
desc1 VARCHAR2(50),
id2 NUMBER(10),
desc2 VARCHAR2(50),
sid NUMBER
);
TYPE t_parallel_test_tab IS TABLE OF t_parallel_test_row;
TYPE t_parallel_test_ref_cursor IS REF CURSOR RETURN
t_parallel_test_row;
FUNCTION test_ptf (p_cursor IN t_parallel_test_ref_cursor)
RETURN t_parallel_test_tab PIPELINED
PARALLEL_ENABLE(PARTITION p_cursor BY any);
END parallel_ptf_api;
SHOW ERRORS
CREATE OR REPLACE PACKAGE BODY parallel_ptf_api AS
FUNCTION test_ptf (p_cursor IN t_parallel_test_ref_cursor)
RETURN t_parallel_test_tab PIPELINED
PARALLEL_ENABLE(PARTITION p_cursor BY any)
IS
l_row t_parallel_test_row;
BEGIN
LOOP
FETCH p_cursor
INTO l_row;
EXIT WHEN p_cursor%NOTFOUND;
select userenv('SID') into l_row.sid from dual;
PIPE ROW (l_row);
END LOOP;
RETURN;
END test_ptf;
END parallel_ptf_api;
SHOW ERRORS
PROMPT
PROMPT Serial Execution
PROMPT ================
SELECT sid, count(*)
FROM TABLE(parallel_ptf_api.test_ptf(CURSOR(SELECT t1.id,
t1.description, t2.id, t2.description, null
FROM parallel_test t1,
parallel_test t2
where t1.id = t2.id
))) t2
GROUP BY sid;
PROMPT
PROMPT Parallel Execution
PROMPT ==================
SELECT sid, count(*)
FROM TABLE(parallel_ptf_api.test_ptf(CURSOR(SELECT /*+
parallel(t1,5) */ t1.id, t1.description, t2.id, t2.description, null
FROM parallel_test t1,
parallel_test t2
where t1.id = t2.id
))) t2
GROUP BY sid;
PROMPT
PROMPT Parallel Execution 2
PROMPT ==================
set serveroutput on;
declare
v_sids ton := ton();
v_counts ton := ton();
-- v_cur parallel_ptf_api.t_parallel_test_ref_cursor;
v_cur sys_refcursor;
procedure OpenCursor(p_refCursor out sys_refcursor)
is
begin
open p_refCursor for 'SELECT /*+ parallel(t1,5) */ t1.id,
t1.description, t2.id, t2.description, null
FROM parallel_test t1,
parallel_test t2
where t1.id = t2.id';
end;
begin
OpenCursor(v_cur);
SELECT sid, count(*) bulk collect into v_sids, v_counts
FROM TABLE(parallel_ptf_api.test_ptf(v_cur)) t2
GROUP BY sid;
for i in v_sids.FIRST.. v_sids.LAST loop
dbms_output.put_line (v_sids(i) || ', ' || v_counts(i));
end loop;
end;
PROMPT
PROMPT Parallel Execution 3
PROMPT ==================
set serveroutput on;
declare
instance BaseDoit;
v_sids ton := ton();
v_counts ton := ton();
procedure CreateMyDoit
is
cmd varchar2(4096 char);
begin
cmd := 'create or replace type MyDoit under BaseDoit ( ' ||
' static function make(p_id in number) ' ||
' return MyDoit, ' ||
' overriding member procedure doit( ' ||
' p_sids in out nocopy ton, ' ||
' p_counts in out nocopy ton) ' ||
' ) ';
execute immediate cmd;
cmd := 'create or replace type body MyDoit as ' ||
' static function make(p_id in number) ' ||
' return MyDoit ' ||
' is ' ||
' begin ' ||
' return new MyDoit(p_id); ' ||
' end; ' ||
' ' ||
' overriding member procedure doit( ' ||
' p_sids in out nocopy ton, ' ||
' p_counts in out nocopy ton) ' ||
' is ' ||
' begin ' ||
' dbms_output.put_line(''MyDoit.doit('' || id || '')
invoked...''); ' ||
' SELECT sid, count(*) bulk collect into p_sids, p_counts ' ||
' FROM TABLE(parallel_ptf_api.test_ptf(CURSOR( ' ||
' SELECT /*+ parallel(t1,5) */ t1.id, t1.description,
t2.id, t2.description, null ' ||
' FROM parallel_test t1, parallel_test t2 ' ||
' where t1.id = t2.id ' ||
' ))) ' ||
' GROUP BY sid; ' ||
' end; ' ||
' end; ';
execute immediate cmd;
end;
begin
CreateMyDoit;
execute immediate 'select MyDoit.Make(11) from dual' into instance;
instance.doit(v_sids, v_counts);
if v_sids.COUNT > 0 then
for i in v_sids.FIRST.. v_sids.LAST loop
dbms_output.put_line (v_sids(i) || ', ' || v_counts(i));
end loop;
end if;
end;
--------------- snap ------------------------- -
Problem with BULK COLLECT with million rows - Oracle 9.0.1.4
We have a requirement where we are supposed to load 58 million rows into a FACT table in our data warehouse. We initially planned to use Oracle Warehouse Builder but, due to performance reasons, decided to write custom code. We wrote a custom procedure which opens a simple cursor, reads all the 58 million rows from the SOURCE table, and in a loop processes the rows and inserts the records into a TARGET table. The logic works fine but it took 20 hrs to complete the load.
We then tried to leverage the BULK COLLECT and FORALL and PARALLEL options and modified our PL/SQL code completely to reflect these. Our code looks very simple.
1. We declared PL/SQL tables (INDEX BY BINARY_INTEGER) to store the data in memory.
2. We used BULK COLLECT to FETCH the data.
3. We used FORALL statement while inserting the data.
We did not introduce any of our transformation logic yet.
We tried with the 600,000 records first and it completed in 1 min and 29 sec with no problems. We then doubled the no. of rows to 1.2 million and the program crashed with the following error:
ERROR at line 1:
ORA-04030: out of process memory when trying to allocate 16408 bytes (koh-kghu
call ,pmucalm coll)
ORA-06512: at "VVA.BULKLOAD", line 66
ORA-06512: at line 1
We got the same error even with 1 million rows.
We do have the following configuration:
SGA - 8.2 GB
PGA
- Aggregate Target - 3GB
- Current Allocated - 439444KB (439 MB)
- Maximum allocated - 2695753 KB (2.6 GB)
Temp Table Space - 60.9 GB (Total)
- 20 GB (Available approximately)
I think we do have more than enough memory to process the 1 million rows!!
Also, some times the same program results in the following error:
SQL> exec bulkload
BEGIN bulkload; END;
ERROR at line 1:
ORA-03113: end-of-file on communication channel
We did not even attempt the full load. Also, we are not using the PARALLEL option yet.
Are we hitting any bug here? Or PL/SQL is not capable of mass loads? I would appreciate any thoughts on this?
Thanks,
Haranadh
Following is the code:
set echo off
set timing on
create or replace procedure bulkload as
-- SOURCE --
TYPE src_cpd_dt IS TABLE OF ima_ama_acct.cpd_dt%TYPE;
TYPE src_acqr_ctry_cd IS TABLE OF ima_ama_acct.acqr_ctry_cd%TYPE;
TYPE src_acqr_pcr_ctry_cd IS TABLE OF ima_ama_acct.acqr_pcr_ctry_cd%TYPE;
TYPE src_issr_bin IS TABLE OF ima_ama_acct.issr_bin%TYPE;
TYPE src_mrch_locn_ref_id IS TABLE OF ima_ama_acct.mrch_locn_ref_id%TYPE;
TYPE src_ntwrk_id IS TABLE OF ima_ama_acct.ntwrk_id%TYPE;
TYPE src_stip_advc_cd IS TABLE OF ima_ama_acct.stip_advc_cd%TYPE;
TYPE src_authn_resp_cd IS TABLE OF ima_ama_acct.authn_resp_cd%TYPE;
TYPE src_authn_actvy_cd IS TABLE OF ima_ama_acct.authn_actvy_cd%TYPE;
TYPE src_resp_tm_id IS TABLE OF ima_ama_acct.resp_tm_id%TYPE;
TYPE src_mrch_ref_id IS TABLE OF ima_ama_acct.mrch_ref_id%TYPE;
TYPE src_issr_pcr IS TABLE OF ima_ama_acct.issr_pcr%TYPE;
TYPE src_issr_ctry_cd IS TABLE OF ima_ama_acct.issr_ctry_cd%TYPE;
TYPE src_acct_num IS TABLE OF ima_ama_acct.acct_num%TYPE;
TYPE src_tran_cnt IS TABLE OF ima_ama_acct.tran_cnt%TYPE;
TYPE src_usd_tran_amt IS TABLE OF ima_ama_acct.usd_tran_amt%TYPE;
src_cpd_dt_array src_cpd_dt;
src_acqr_ctry_cd_array src_acqr_ctry_cd;
src_acqr_pcr_ctry_cd_array src_acqr_pcr_ctry_cd;
src_issr_bin_array src_issr_bin;
src_mrch_locn_ref_id_array src_mrch_locn_ref_id;
src_ntwrk_id_array src_ntwrk_id;
src_stip_advc_cd_array src_stip_advc_cd;
src_authn_resp_cd_array src_authn_resp_cd;
src_authn_actvy_cd_array src_authn_actvy_cd;
src_resp_tm_id_array src_resp_tm_id;
src_mrch_ref_id_array src_mrch_ref_id;
src_issr_pcr_array src_issr_pcr;
src_issr_ctry_cd_array src_issr_ctry_cd;
src_acct_num_array src_acct_num;
src_tran_cnt_array src_tran_cnt;
src_usd_tran_amt_array src_usd_tran_amt;
j number := 1;
CURSOR c1 IS
SELECT
cpd_dt,
acqr_ctry_cd ,
acqr_pcr_ctry_cd,
issr_bin,
mrch_locn_ref_id,
ntwrk_id,
stip_advc_cd,
authn_resp_cd,
authn_actvy_cd,
resp_tm_id,
mrch_ref_id,
issr_pcr,
issr_ctry_cd,
acct_num,
tran_cnt,
usd_tran_amt
FROM ima_ama_acct ima_ama_acct
ORDER BY issr_bin;
BEGIN
OPEN c1;
FETCH c1 bulk collect into
src_cpd_dt_array ,
src_acqr_ctry_cd_array ,
src_acqr_pcr_ctry_cd_array,
src_issr_bin_array ,
src_mrch_locn_ref_id_array,
src_ntwrk_id_array ,
src_stip_advc_cd_array ,
src_authn_resp_cd_array ,
src_authn_actvy_cd_array ,
src_resp_tm_id_array ,
src_mrch_ref_id_array ,
src_issr_pcr_array ,
src_issr_ctry_cd_array ,
src_acct_num_array ,
src_tran_cnt_array ,
src_usd_tran_amt_array ;
CLOSE C1;
FORALL j in 1 .. src_cpd_dt_array.count
INSERT INTO ima_dly_acct (
CPD_DT,
ACQR_CTRY_CD,
ACQR_TIER_CD,
ACQR_PCR_CTRY_CD,
ACQR_PCR_TIER_CD,
ISSR_BIN,
OWNR_BUS_ID,
USER_BUS_ID,
MRCH_LOCN_REF_ID,
NTWRK_ID,
STIP_ADVC_CD,
AUTHN_RESP_CD,
AUTHN_ACTVY_CD,
RESP_TM_ID,
PROD_REF_ID,
MRCH_REF_ID,
ISSR_PCR,
ISSR_CTRY_CD,
ACCT_NUM,
TRAN_CNT,
USD_TRAN_AMT)
VALUES (
src_cpd_dt_array(j),
src_acqr_ctry_cd_array(j),
null,
src_acqr_pcr_ctry_cd_array(j),
null,
src_issr_bin_array(j),
null,
null,
src_mrch_locn_ref_id_array(j),
src_ntwrk_id_array(j),
src_stip_advc_cd_array(j),
src_authn_resp_cd_array(j),
src_authn_actvy_cd_array(j),
src_resp_tm_id_array(j),
null,
src_mrch_ref_id_array(j),
src_issr_pcr_array(j),
src_issr_ctry_cd_array(j),
src_acct_num_array(j),
src_tran_cnt_array(j),
src_usd_tran_amt_array(j));
COMMIT;
END bulkload;
SHOW ERRORS
-----------------------------------------------------------------------------
Do you have a unique key available in the rows you are fetching?
It seems a cursor with 20 million rows that is as wide as all the columns you want to work with is a lot of memory for the server to use at once. You may be able to do this with parallel processing (DOP over 8) and a lot of memory for the warehouse box (and the box you are extracting data from)... but is this the most efficient (and thereby fastest) way to do it?
What if you used a cursor to select a unique key only, and then during the cursor loop fetch each record, transform it, and insert it into the target?
Its a different way to do a lot at once, but it cuts down on the overall memory overhead for the process.
I know this isn't as elegant as a single insert to do it all at once, but sometimes trimming a process down so it takes fewer resources at any given moment is much faster than trying to do the whole thing at once.
My solution is probably biased by transaction systems, so I would be interested in what the data warehouse community thinks of this.
For example:
source table my_transactions (tx_seq_id number, tx_fact1 varchar2(10), tx_fact2 varchar2(20), tx_fact3 number, ...)
select a cursor of tx_seq_id only (even at 20 million rows this is not much)
you could then either use a for loop or even bulk collect into a plsql collection or table
then process individually like this:
procedure process_a_tx(p_tx_seq_id in number)
is
rTX my_transactions%rowtype;
begin
select * into rTX from my_transactions where tx_seq_id = p_tx_seq_id;
--modify values as needed
insert into my_target(a, b, c) values (rtx.fact_1, rtx.fact2, rtx.fact3);
commit;
exception
when others then
rollback;
--write to a log or raise an exception
end process_a_tx;
procedure collect_tx
is
cursor cTx is
select tx_seq_id from my_transactions;
begin
for rTx in cTx loop
process_a_tx(rTx.tx_seq_id);
end loop;
end collect_tx; -
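The ORA-04030 itself comes from fetching every row in one unbounded BULK COLLECT; adding a LIMIT clause keeps the collections bounded regardless of table size. A sketch of how the poster's fetch loop could be restructured (only two of the sixteen columns are shown for brevity; ima_ama_acct and ima_dly_acct are the poster's tables):

```sql
-- Sketch: bounded bulk fetch + FORALL insert, capping PGA memory use.
-- Column list abbreviated; extend with the remaining columns as needed.
DECLARE
  TYPE dt_tab_t  IS TABLE OF ima_ama_acct.cpd_dt%TYPE   INDEX BY BINARY_INTEGER;
  TYPE bin_tab_t IS TABLE OF ima_ama_acct.issr_bin%TYPE INDEX BY BINARY_INTEGER;
  l_dts  dt_tab_t;
  l_bins bin_tab_t;
  CURSOR c1 IS SELECT cpd_dt, issr_bin FROM ima_ama_acct;
BEGIN
  OPEN c1;
  LOOP
    FETCH c1 BULK COLLECT INTO l_dts, l_bins LIMIT 10000;  -- bounded batch
    FORALL j IN 1 .. l_dts.COUNT
      INSERT INTO ima_dly_acct (cpd_dt, issr_bin)
      VALUES (l_dts(j), l_bins(j));
    EXIT WHEN c1%NOTFOUND;  -- exit after inserting the last partial batch
  END LOOP;
  CLOSE c1;
  COMMIT;
END;
/
```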
Using bulk collect and for all to solve a problem
Hi All
I have a following problem.
Please forgive me if its a stupid question :-) im learning.
1: Data in a staging table xx_staging_table
2: two Target table t1, t2 where some columns from xx_staging_table are inserted into
Some of the columns from the staging table data are checked for valid entries and then some columns from that row will be loaded into the two target tables.
The two target tables use different set of columns from the staging table
When I had a thousand records there was no problem with a direct insert but it seems we will now have half a million records.
This has slowed down the process considerably.
My question is
Can I use the bulk collect and for all functionality to get specific columns from a staging table, then validate the row using those columns
and then use a bulk insert to load the data into a specific table.?
So code would be like
get_staging_data cursor will have all the columns I need from the staging table:
cursor get_staging_data
is select * from xx_staging_table; -- about 500,000 records
Use bulk collect to load about 10000 or so records into a PL/SQL table
and then do a bulk insert like this:
CREATE TABLE t1 AS SELECT * FROM all_objects WHERE 1 = 2;
CREATE OR REPLACE PROCEDURE test_proc (p_array_size IN PLS_INTEGER DEFAULT 100)
IS
TYPE ARRAY IS TABLE OF all_objects%ROWTYPE;
l_data ARRAY;
CURSOR c IS SELECT * FROM all_objects;
BEGIN
OPEN c;
LOOP
FETCH c BULK COLLECT INTO l_data LIMIT p_array_size;
FORALL i IN 1..l_data.COUNT
INSERT INTO t1 VALUES l_data(i);
EXIT WHEN c%NOTFOUND;
END LOOP;
CLOSE c;
END test_proc;
In the above example t1 and the cursor have the same number of columns.
In my case the columns in the cursor loop are a small subset of the columns of table t1,
so can I use a FORALL to load that subset into the table t1? How does that work?
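To answer the subset question directly (a sketch, not from the thread): list the target columns explicitly in the FORALL insert. Note that referencing individual fields of a collection of records inside FORALL raises PLS-00436 before 11g, so on older releases it is safer to bulk collect into separate scalar collections. Staging and target names (xx_staging_table, t1, col_a, col_b) are placeholders:

```sql
-- Sketch: FORALL insert of a column subset via separate scalar collections.
DECLARE
  TYPE a_tab_t IS TABLE OF xx_staging_table.col_a%TYPE;
  TYPE b_tab_t IS TABLE OF xx_staging_table.col_b%TYPE;
  l_a a_tab_t;
  l_b b_tab_t;
  CURSOR c IS SELECT col_a, col_b FROM xx_staging_table;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_a, l_b LIMIT 10000;
    -- insert only the subset of t1's columns that the cursor supplies
    FORALL i IN 1 .. l_a.COUNT
      INSERT INTO t1 (col_a, col_b) VALUES (l_a(i), l_b(i));
    EXIT WHEN c%NOTFOUND;
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/
```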
Thanks
Juser7348303 wrote:
checking if the value is valid and theres also some conditional processing rules ( such as if the value is a certain value no inserts are needed)
which are a little more complex than I can put in a simple...
Well, if the processing is too complex (and conditional) to be done in SQL, then doing that in PL/SQL is justified... but it will be slower as you are now introducing an additional layer. Data now needs to travel between the SQL layer and the PL/SQL layer. This is slower.
PL/SQL is inherently serialised - and this also affects performance and scalability. PL/SQL cannot be parallelised by Oracle in an automated fashion. SQL processes can.
To put it in simple terms: you create PL/SQL procedure Foo that processes a SQL cursor and you execute that proc. Oracle cannot run multiple parallel copies of Foo. It perhaps can parallelise the SQL cursor that Foo uses - but not Foo itself.
However, if Foo is called by the SQL engine it can run in parallel - as the SQL process calling Foo is running in parallel. So if you make Foo a pipelined table function (written in PL/SQL), and you design and code it as a thread-safe/parallel-enabled function, it can be called and used and executed in parallel, by the SQL engine.
So moving your PL/SQL code into a parallel-enabled pipelined function written in PL/SQL, and using that function via parallel SQL, can increase performance over running that same basic PL/SQL processing as a serialised process.
This is of course assuming that the processing that needs to be done using PL/SQL code, can be designed and coded for parallel processing in this fashion. -
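A minimal skeleton of such a parallel-enabled pipelined function (all names are placeholders from this thread; the transformation body is elided):

```sql
-- Sketch: parallel-enabled pipelined function the SQL engine can scale out.
-- xx_staging_table and its columns are placeholders.
CREATE OR REPLACE PACKAGE stage_etl AS
  TYPE row_rec IS RECORD (col_a VARCHAR2(100), col_b VARCHAR2(100));
  TYPE row_tab IS TABLE OF row_rec;
  TYPE row_cur IS REF CURSOR RETURN row_rec;
  FUNCTION transform (p_cur row_cur) RETURN row_tab
    PIPELINED PARALLEL_ENABLE (PARTITION p_cur BY ANY);
END stage_etl;
/
CREATE OR REPLACE PACKAGE BODY stage_etl AS
  FUNCTION transform (p_cur row_cur) RETURN row_tab
    PIPELINED PARALLEL_ENABLE (PARTITION p_cur BY ANY)
  IS
    l_row row_rec;
  BEGIN
    LOOP
      FETCH p_cur INTO l_row;
      EXIT WHEN p_cur%NOTFOUND;
      -- conditional validation/transformation logic goes here
      PIPE ROW (l_row);
    END LOOP;
    RETURN;
  END transform;
END stage_etl;
/
```

It would then be driven from parallel SQL, e.g. `INSERT /*+ append */ INTO t1 SELECT * FROM TABLE(stage_etl.transform(CURSOR(SELECT /*+ parallel(s,4) */ col_a, col_b FROM xx_staging_table s)))`, letting the SQL engine run multiple copies of the function.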
I understand that Oracle 10g will rewrite your CURSOR FOR LOOP statements into a BULK COLLECT statement.
I am contemplating no longer explicitly writing the BULK COLLECT from now on as it reduces the number of lines of code and greatly simplifies code.
Can anyone see any serious flaws in this strategy?
Kind Regards
Chris
> I also think it is a good idea if people do take the
time to decide their strategy. You seem to be
suggesting that it is a bad idea to stop and think
about what you require from your loop.
Well, that depends on the type of programmer. When one deals with programmers that are not true PL/SQL developers and view PL/SQL.. well, as some kind of inferior database language (compared to something like Java for example).. you want to have templates and stuff to enforce best practises.
> I also don't agree with the 'package tuning knob'.
Each query may have different requirements and, as
with most things in programming, fixing one thing can
have a negative effect on another. It is about the
only place where would not advocate constants.
You have a point - but even so, defining these as constants (even if it has to be inside the actual proc doing the bulk fetch, one per bulk fetch) make it a lot easier to maintain than having to search out the actual bulk fetch statements in the code.
> But i would suggest that analysis is performed on a
close-to-live environment with a production level
server, large body of test data and multiple users. I
think we agree on that point.
Yeah.. but the problem there is that I have never really seen such an environment. Usually due to costs. How do you, for example, duplicate a large RAC, terabytes of SAN space, 1000's of users, for use as a close-to-live environment?
The usual approach (by management) is to spend as little as possible on development and Q&A platforms. Which at times means that the performance of dev vs. production can vary a lot.
So we have to play the hand we're dealt with unfortunately.
> Hmmm cmegar and I have never once said "don't worry
about, PL/SQL does it for you".
Yes - of course not. I'm just rambling on in general describing the usual attitudes I see when it comes to features like this.
It is manageable in a small dev team, but larger ones.. not really. There this attitude is often prevalent in my experience. The "silver bullet" syndrome.
> Would you use the same argument with regard to unit
testing. There are not many pl/sql developers who
unit test but should that prevent me using the
technique?
Well.. to be honest, I do not think that a developer who does not write at least some basic unit tests for his/her code can be called a developer.
> >[i]Relying on implicit features to "fix" code for
you negates a deeper and better understanding of the
language and what you write
I don't see how using an implicit bulk collect is
fixing code.
Which is why I put it in brackets - "fixing" in terms of making it more performant, or "fixing" it as a FOR loop contains DMLs that can be changed to FORALLs.
> Don't tell me you've never
updated older code to take advangate of a new
feature.
I can never stop the urge to refactor old code I'm working with. :-)
> >[i]I think of features as an implicit bulk collect
behind the scenes, as crutches for mediocre
programmers.
I take it that statement is suggesting that cmedgar
and I are mediocre programmers? Not a nice way to end
an otherwise constructive argument.
How does that saying go? You claim the cloth that I cut? :-)
My sincere apologies to both you guys - I did not intend that statement to be personal at all.
Besides, I'm usually more blunt than that when it comes to throwing personal insults around. ;-)
This statement was just a general observation going back to my early days of writing Cobol and Natural. Programmers at times do not seem to care about grokking the features and applying them correctly. Actually I want to say "most programmers" and "a lot of times", but then I would be accused of generalisation. ;-)
I simply find it very frustrating dealing with programmers that do not simply love to write code. Programmers that see it as a mere job.
Someone once said that he never starts out to write beautiful code. But when he is done and the code is not beautiful and elegant (and simple), he knows he has screwed up.
In my experience.. many programmers will not understand this. -
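For readers weighing the trade-off debated above: in 10g the cursor FOR loop is silently array-fetched (100 rows at a time at the default PLSQL_OPTIMIZE_LEVEL of 2), while an explicit BULK COLLECT leaves the fetch size as your tuning knob. A sketch on a placeholder emp table:

```sql
-- Sketch: the two loop styles under discussion, on a placeholder table emp.
DECLARE
  TYPE name_tab_t IS TABLE OF emp.ename%TYPE;
  l_names name_tab_t;
  CURSOR c IS SELECT ename FROM emp;
BEGIN
  -- Style 1: cursor FOR loop. At PLSQL_OPTIMIZE_LEVEL >= 2 (the 10g
  -- default) the compiler array-fetches this 100 rows at a time.
  FOR r IN (SELECT ename FROM emp) LOOP
    NULL; -- process r.ename
  END LOOP;

  -- Style 2: explicit BULK COLLECT, where the fetch size is yours to tune.
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_names LIMIT 500;
    FOR i IN 1 .. l_names.COUNT LOOP
      NULL; -- process l_names(i)
    END LOOP;
    EXIT WHEN c%NOTFOUND;
  END LOOP;
  CLOSE c;
END;
/
```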
How to use BULK COLLECT, FORALL and TREAT
There is a need to read, match and update data from and into a custom table. The table would have about 3 million rows and holds key numbers. Based on a field value of this custom table, relevant data needs to be fetched from joins of other tables and updated in the custom table. I plan to use BULK COLLECT and FORALL.
All examples I have seen, do an insert into a table. How do I go about reading all values of a given field and fetching other relevant data and then updating the custom table with data fetched.
Defined an object with specifics like this
CREATE OR REPLACE TYPE imei_ot AS OBJECT (
recid NUMBER,
imei VARCHAR2(30),
STORE VARCHAR2(100),
status VARCHAR2(1),
TIMESTAMP DATE,
order_number VARCHAR2(30),
order_type VARCHAR2(30),
sku VARCHAR2(30),
order_date DATE,
attribute1 VARCHAR2(240),
market VARCHAR2(240),
processed_flag VARCHAR2(1),
last_update_date DATE
);
Now within a package procedure I have defined this:
type imei_ott is table of imei_ot;
imei_ntt imei_ott;
begin
SELECT imei_ot (recid,
imei,
STORE,
status,
TIMESTAMP,
order_number,
order_type,
sku,
order_date,
attribute1,
market,
processed_flag,
last_update_date
)
BULK COLLECT INTO imei_ntt
FROM (SELECT stg.recid, stg.imei, cip.store_location, 'S',
co.rtl_txn_timestamp, co.rtl_order_number, 'CUST',
msi.segment1 || '.' || msi.segment3,
TRUNC (co.txn_timestamp), col.part_number, 'ZZ',
stg.processed_flag, SYSDATE
FROM custom_orders co,
custom_order_lines col,
custom_stg stg,
mtl_system_items_b msi
WHERE co.header_id = col.header_id
AND msi.inventory_item_id = col.inventory_item_id
AND msi.organization_id =
(SELECT organization_id
FROM hr_all_organization_units_tl
WHERE NAME = 'Item Master'
AND source_lang = USERENV ('LANG'))
AND stg.imei = col.serial_number
AND stg.processed_flag = 'U');
/* Update staging table in one go for COR order data */
FORALL indx IN 1 .. imei_ntt.COUNT
UPDATE custom_stg
SET STORE = TREAT (imei_ntt (indx) AS imei_ot).STORE,
status = TREAT (imei_ntt (indx) AS imei_ot).status,
TIMESTAMP = TREAT (imei_ntt (indx) AS imei_ot).TIMESTAMP,
order_number = TREAT (imei_ntt (indx) AS imei_ot).order_number,
order_type = TREAT (imei_ntt (indx) AS imei_ot).order_type,
sku = TREAT (imei_ntt (indx) AS imei_ot).sku,
order_date = TREAT (imei_ntt (indx) AS imei_ot).order_date,
attribute1 = TREAT (imei_ntt (indx) AS imei_ot).attribute1,
market = TREAT (imei_ntt (indx) AS imei_ot).market,
processed_flag =
TREAT (imei_ntt (indx) AS imei_ot).processed_flag,
last_update_date =
TREAT (imei_ntt (indx) AS imei_ot).last_update_date
WHERE recid = TREAT (imei_ntt (indx) AS imei_ot).recid
AND imei = TREAT (imei_ntt (indx) AS imei_ot).imei;
DBMS_OUTPUT.put_line ( TO_CHAR (SQL%ROWCOUNT)
|| ' rows updated using Bulk Collect / For All.');
EXCEPTION
WHEN NO_DATA_FOUND
THEN
DBMS_OUTPUT.put_line ('No Data: ' || SQLERRM);
WHEN OTHERS
THEN
DBMS_OUTPUT.put_line ('Other Error: ' || SQLERRM);
END;
Now for the unfortunate part. When I compile the pkg, I face an error
PL/SQL: ORA-00904: "LAST_UPDATE_DATE": invalid identifier
I am not sure where I am wrong. Object type has the last update date field and the custom table also has the same field.
Could someone please throw some light and suggestion?
Thanks
uds
I suspect your error comes from the »bulk collect into« and not from the »forall loop«.
From a first glance, you need to alias SYSDATE as last_update_date, and some of the other select items need to be aliased as well.
But a simplified version would be
select imei_ot (stg.recid,
stg.imei,
cip.store_location,
'S',
co.rtl_txn_timestamp,
co.rtl_order_number,
'CUST',
msi.segment1 || '.' || msi.segment3,
trunc (co.txn_timestamp),
col.part_number,
'ZZ',
stg.processed_flag,
sysdate)
bulk collect into imei_ntt
from custom_orders co,
custom_order_lines col,
custom_stg stg,
mtl_system_items_b msi
where co.header_id = col.header_id
and msi.inventory_item_id = col.inventory_item_id
and msi.organization_id =
(select organization_id
from hr_all_organization_units_tl
where name = 'Item Master' and source_lang = userenv ('LANG'))
and stg.imei = col.serial_number
and stg.processed_flag = 'U';
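An alternative to the repeated TREAT casts, for anyone on a version where FORALL cannot reference individual record fields, is to bulk collect into parallel scalar collections and bind those directly. A trimmed sketch reusing the tables above - only a few of the columns are shown, and the literal values ('S', SYSDATE) mirror the original select:

```sql
DECLARE
   -- One scalar collection per column; this avoids the object type
   -- and the repeated TREAT casts in the FORALL binds.
   TYPE num_tt IS TABLE OF NUMBER;
   TYPE vc_tt  IS TABLE OF VARCHAR2(240);
   l_recid num_tt;
   l_imei  vc_tt;
   l_ordno vc_tt;
BEGIN
   SELECT stg.recid, stg.imei, co.rtl_order_number
     BULK COLLECT INTO l_recid, l_imei, l_ordno
     FROM custom_orders co, custom_order_lines col, custom_stg stg
    WHERE co.header_id = col.header_id
      AND stg.imei = col.serial_number
      AND stg.processed_flag = 'U';

   -- Each collection element is a plain scalar, so it can be
   -- referenced directly in the FORALL DML.
   FORALL i IN 1 .. l_recid.COUNT
      UPDATE custom_stg
         SET order_number     = l_ordno(i),
             status           = 'S',
             last_update_date = SYSDATE
       WHERE recid = l_recid(i)
         AND imei  = l_imei(i);
END;
/
```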
... -
Doubt about Bulk Collect with LIMIT
Hi
I have a Doubt about Bulk collect , When is done Commit
I Get a example in PSOUG
http://psoug.org/reference/array_processing.html
CREATE TABLE servers2 AS
SELECT *
FROM servers
WHERE 1=2;
DECLARE
CURSOR s_cur IS
SELECT *
FROM servers;
TYPE fetch_array IS TABLE OF s_cur%ROWTYPE;
s_array fetch_array;
BEGIN
OPEN s_cur;
LOOP
FETCH s_cur BULK COLLECT INTO s_array LIMIT 1000;
FORALL i IN 1..s_array.COUNT
INSERT INTO servers2 VALUES s_array(i);
EXIT WHEN s_cur%NOTFOUND;
END LOOP;
CLOSE s_cur;
COMMIT;
END;
If my table Servers has 3 000 000 records, when is the commit done? When all records are inserted?
Could it overwhelm the redo log?
Using 9.2.0.8
muttleychess wrote:
>If my table Servers have 3 000 000 records , when is done commit ?
The commit point has nothing to do with how many rows you process. It is purely business driven. Your code implements some business transaction, right? So if you commit before the whole transaction (from a business standpoint) is complete, other sessions will already see changes that are (from a business standpoint) incomplete. Also, what if the rest of the transaction (from a business standpoint) fails?
SY. -
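To make the commit point concrete: in the servers2 example above there is exactly one COMMIT, after the loop, so all 3 000 000 rows go in as one transaction regardless of the LIMIT. A sketch of the same loop, exiting on COUNT so the final partial batch is not skipped:

```sql
DECLARE
   CURSOR s_cur IS SELECT * FROM servers;
   TYPE fetch_array IS TABLE OF s_cur%ROWTYPE;
   s_array fetch_array;
BEGIN
   OPEN s_cur;
   LOOP
      FETCH s_cur BULK COLLECT INTO s_array LIMIT 1000;
      -- Exit on the collection count, not %NOTFOUND, so a final
      -- batch of fewer than 1000 rows is still processed.
      EXIT WHEN s_array.COUNT = 0;
      FORALL i IN 1 .. s_array.COUNT
         INSERT INTO servers2 VALUES s_array(i);
   END LOOP;
   CLOSE s_cur;
   COMMIT;  -- single commit: the whole load is one transaction
END;
/
```

Note that a large uncommitted insert mainly pressures undo, not the redo log; committing per batch inside the loop would reduce undo usage, but at the cost of the all-or-nothing transaction described above.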
Where I might be doing wrong in Bulk collect / forall
DECLARE
CURSOR Cur_sub_rp IS
SELECT A.SUB_ACCOUNT, B.RATE_PLAN,B.SUB_LAST_NAME,A.SUB_SSN
FROM STG_SUB_MASTER_MONTH_HISTORY A, SUB_PHONE_RATEPLAN B
WHERE A.SUB_ACCOUNT = B.SUB_ACCOUNT (+)
AND A.MONTH_ID = B.MONTH_ID ;
TYPE t_values_tab IS TABLE OF cur_sub_rp%rowtype index by binary_integer;
values_tab t_values_tab ;
BEGIN
OPEN Cur_sub_rp ;
LOOP
FETCH Cur_sub_rp BULK COLLECT INTO Values_tab
LIMIT 1000;
EXIT WHEN Cur_sub_rp%NOTFOUND ;
END LOOP ;
CLOSE Cur_sub_rp;
FORALL i IN VALUES_TAB.first..values_tab.last
INSERT INTO SUB_PHN_1 VALUES VALUES_TAB(i);
END;
when I select records from sub_phn_1 it shows no rows selected.
I have a working example which is close to your script (it was already posted before):
--Create some data
DROP TABLE emp;
DROP TABLE dept;
CREATE TABLE emp
(empno NUMBER(4) NOT NULL,
ename VARCHAR(10),
job VARCHAR(9),
mgr NUMBER(4),
hiredate DATE,
sal NUMBER(7, 2),
comm NUMBER(7, 2),
deptno NUMBER(2));
INSERT INTO emp
VALUES (7369, 'SMITH', 'CLERK', 7902, '17-DEC-1980', 800, NULL, 20);
INSERT INTO emp
VALUES (7499, 'ALLEN', 'SALESMAN', 7698, '20-FEB-1981', 1600, 300, 30);
INSERT INTO emp
VALUES (7521, 'WARD', 'SALESMAN', 7698, '22-FEB-1981', 1250, 500, 30);
INSERT INTO emp
VALUES (7566, 'JONES', 'MANAGER', 7839, '2-APR-1981', 2975, NULL, 20);
CREATE TABLE dept
(deptno NUMBER(2),
dname VARCHAR(14),
loc VARCHAR(13) );
INSERT INTO dept
VALUES (20, 'RESEARCH', 'DALLAS');
INSERT INTO dept
VALUES (30, 'SALES', 'CHICAGO');
COMMIT;
DECLARE
CURSOR c1
IS
(SELECT e.*, d.dname
FROM emp e JOIN dept d ON d.deptno = e.deptno);
TYPE c1_tbl_typ IS TABLE OF c1%ROWTYPE
INDEX BY PLS_INTEGER;
emp_tbl c1_tbl_typ;
BEGIN
OPEN c1;
FETCH c1
BULK COLLECT INTO emp_tbl;
CLOSE c1;
--Test emp_tbl
FOR i IN 1 .. emp_tbl.COUNT
LOOP
DBMS_OUTPUT.put_line (emp_tbl (i).empno || ', ' || emp_tbl (i).dname);
NULL;
END LOOP;
END;
--or :
DECLARE
CURSOR c1
IS
(SELECT *
FROM emp
WHERE deptno = 20);
TYPE c1_tbl_typ IS TABLE OF c1%ROWTYPE
INDEX BY PLS_INTEGER;
emp_tbl c1_tbl_typ;
BEGIN
OPEN c1;
LOOP
FETCH c1
BULK COLLECT INTO emp_tbl LIMIT 100;
FORALL i IN 1 .. emp_tbl.COUNT
INSERT INTO emp_1
VALUES emp_tbl (i);
EXIT WHEN c1%NOTFOUND;
END LOOP;
CLOSE c1;
END;
SELECT *
FROM emp_1;
--Additionally, what if your select does not return records? -
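On that closing question: a BULK COLLECT that finds no rows does not raise NO_DATA_FOUND - it simply leaves the collection empty, and a FORALL over 1..COUNT is then a no-op. A small sketch against the emp table above (deptno 99 is a deliberately non-matching value):

```sql
DECLARE
   CURSOR c1 IS SELECT * FROM emp WHERE deptno = 99;  -- matches no rows
   TYPE c1_tbl_typ IS TABLE OF c1%ROWTYPE INDEX BY PLS_INTEGER;
   emp_tbl c1_tbl_typ;
BEGIN
   OPEN c1;
   FETCH c1 BULK COLLECT INTO emp_tbl;  -- no NO_DATA_FOUND here
   CLOSE c1;
   -- emp_tbl.COUNT is 0, so the FORALL range 1..0 is empty
   -- and the insert simply does nothing.
   FORALL i IN 1 .. emp_tbl.COUNT
      INSERT INTO emp_1 VALUES emp_tbl (i);
   DBMS_OUTPUT.put_line (emp_tbl.COUNT || ' rows fetched');
END;
/
```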
Bulk collect forall vs single merge statement
I understand that a single DML statement is better than using bulk collect + forall with intermediate commits. My only concern is that if I'm loading a large amount of data, like 100 million records into an 800-million-record table with foreign keys and indexes, and the session gets killed, the rollback might take a long time, which is not acceptable. Using bulk collect + forall with interval commits is slower than a single straight merge statement, but in the case of a dead session the rollback time won't be as bad, and a reload of the not-yet-committed data will not be as bad either. To design a data load that recovers more gracefully, is bulk collect + forall the right approach?
1. specifics about the actual data available
2. the location/source of the data
3. whether NOLOGGING is appropriate
4. whether PARALLEL is an option
1. I need to transform the data before, so I can build the staging tables to match to be the same structure as the tables I'm loading to.
2. It's in the same database (11.2)
3. Cannot use NOLOGGING or APPEND because I need to allow DML in the target table and I can't use NOLOGGING because I cannot afford to lose the data in case of failure.
4. PARALLEL is an option. I've done some research on DBMS_PARALLEL_EXECUTE and it sounds very cool. Can this be used to load two tables? I have parent and child tables. I can chunk the data and load these two tables separately, but the requirement is that I need to commit them together. I cannot load a chunk into the parent table and commit before I load the corresponding chunk into its child table. Can this be done using DBMS_PARALLEL_EXECUTE? If so, I think this would be the perfect solution, since it looks like exactly what I'm looking for. However, if this doesn't work, is bulk collect + forall the best option I am left with?
What is the underlying technology of DBMS_PARALLEL_EXECUTE? -
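On the underlying technology: DBMS_PARALLEL_EXECUTE (11.2) chunks the source table and runs your statement once per chunk via DBMS_SCHEDULER jobs, binding :start_id and :end_id for each chunk. Each chunk runs as its own transaction, which is why the parent and child inserts for one chunk would need to sit inside the same chunk statement to commit together. A hedged sketch - the table names are placeholders and the column lists are elided:

```sql
BEGIN
   DBMS_PARALLEL_EXECUTE.create_task (task_name => 'load_task');

   -- Split the staging table into rowid ranges of ~10000 rows each.
   DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid (
      task_name   => 'load_task',
      table_owner => USER,
      table_name  => 'STG_PARENT',
      by_row      => TRUE,
      chunk_size  => 10000);

   -- Each chunk executes this block in its own transaction, so the
   -- parent and child inserts for that chunk commit together.
   DBMS_PARALLEL_EXECUTE.run_task (
      task_name      => 'load_task',
      sql_stmt       => 'BEGIN
                            INSERT INTO parent_tab
                               SELECT ... FROM stg_parent
                                WHERE rowid BETWEEN :start_id AND :end_id;
                            INSERT INTO child_tab
                               SELECT ... FROM stg_child;
                            COMMIT;
                         END;',
      language_flag  => DBMS_SQL.native,
      parallel_level => 4);

   DBMS_PARALLEL_EXECUTE.drop_task ('load_task');
END;
/
```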
Clever bulk collect working with 9.2 and up for snaps .
Hi,
I'm doing snaps of v$sysstat with a 1-second delay to compute some metrics, like this:
select name,value from v$sysstat
where
name in ('execute count', 'parse count (hard)', 'parse count (total)',
'physical read total IO requests', 'physical read total bytes',
'physical write total IO requests', 'physical write total bytes',
'redo size', 'redo writes', 'session cursor cache hits',
'session logical reads', 'user calls', 'user commits') ;
dbms_lock.sleep(1);
select name,value from v$sysstat
where
name in ('execute count', 'parse count (hard)', 'parse count (total)',
'physical read total IO requests', 'physical read total bytes',
'physical write total IO requests', 'physical write total bytes',
'redo size', 'redo writes', 'session cursor cache hits',
'session logical reads', 'user calls', 'user commits') ;But Im really interest in deltas so think, maybe bulk collect into is the obvious solution .
I can work around that by doing a pivot and self-join to get only one row per query, and then just do the math.
Wondering how to construct collection to support my query ?
Code should be working on 9.2 and above .
Regards
GregG
Something like this:
SQL> set lines 120 pages 999
SQL> col name for a50
SQL> var r refcursor
SQL> set autoprint on
SQL> declare
2 beg_stats_type_tt stats_type_tt := stats_type_tt();
3 end_stats_type_tt stats_type_tt := stats_type_tt();
4 srt_stats_type_tt stats_type_tt := stats_type_tt();
5 begin
6 select stats_type(name, value) bulk collect into beg_stats_type_tt
7 from v$sysstat
8 where
9 name in ('execute count', 'parse count (hard)', 'parse count (total)',
10 'physical read total IO requests', 'physical read total bytes',
11 'physical write total IO requests', 'physical write total bytes',
12 'redo size', 'redo writes', 'session cursor cache hits',
13 'session logical reads', 'user calls', 'user commits');
14 dbms_lock.sleep(1);
15 select stats_type(name, value) bulk collect into end_stats_type_tt
16 from v$sysstat
17 where
18 name in ('execute count', 'parse count (hard)', 'parse count (total)',
19 'physical read total IO requests', 'physical read total bytes',
20 'physical write total IO requests', 'physical write total bytes',
21 'redo size', 'redo writes', 'session cursor cache hits',
22 'session logical reads', 'user calls', 'user commits');
23 open :r for
24 select b.name, e.value - b.value
25 from table(cast(beg_stats_type_tt as stats_type_tt)) b
26 , table(cast(end_stats_type_tt as stats_type_tt)) e
27 where e.name = b.name;
28 end;
29 /
PL/SQL procedure successfully completed.
NAME E.VALUE-B.VALUE
user commits 0
user calls 39
session logical reads 0
physical read total IO requests 0
physical read total bytes 0
physical write total IO requests 0
physical write total bytes 0
redo size 0
redo writes 0
session cursor cache hits 1
parse count (total) 0
parse count (hard) 0
execute count 3
13 rows selected.
SQL>
A pipelined table function is another option - then you can just issue a select.
That will require permanent object types as well though (for 9.2. compatibility at least).
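For reference, the block above assumes SQL-level object types roughly like these (not shown in the original post; names inferred from the code):

```sql
-- Types created at SQL level, not in PL/SQL, so they can be used
-- in TABLE(CAST(...)) - a requirement on 9.2, where PL/SQL-only
-- collection types cannot be queried in SQL.
CREATE OR REPLACE TYPE stats_type AS OBJECT (
   name  VARCHAR2(64),
   value NUMBER
);
/
CREATE OR REPLACE TYPE stats_type_tt AS TABLE OF stats_type;
/
```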
Edited by: Dom Brooks on May 22, 2012 11:10 AM -
EAI SQL Adapter BusinessService some examples?
Hi guys,
I am trying to find some examples or detailed documentation on using the EAI SQL Adapter Business Service.
If somebody knows something, please help!
Regards,
Slavi
Hi,
There is an Enhancement Request 12-I2PVIT asking for documentation on the "EAI SQL Adapter" Business Service (BS). However the enhancement was declined because this BS must not be called directly as it was developed and designed to be only used internally by the Oracle and Peoplesoft connectors.
That is why there are no details on the “EAI SQL Adapter” BS on the Bookshelf.
To access an external database, Siebel's External Business Component feature can be used. References to the EBC feature are in the Bookshelf entry below:
* Integration Platform Technologies: Siebel Enterprise Application Integration > External Business Components.
Other options consist of developing custom integrations using COM Interfaces or a Java Business Service, so that you can get the data from Siebel and send it to the database through a custom solution.
Details can be found on the Bookshelves below:
* Transports and Interfaces: Siebel eBusiness Application Integration Volume III > Java Business Service
* Siebel Object Interfaces Reference > Programming > Siebel Object Interfaces
The advantage of using EBCs is that inside Siebel it will work as any other Business Component, except for the special cases explained on the Bookshelf.
In summary, calling the “EAI SQL Adapter” BS directly is not supported and some possible options are EBC, COM Interfaces and Java Business Services.
Regards,
Joseph Arul Dass