Do I need to use a Collection/VArray or ...?
Hi, all. Here's what I'm trying to do on an Oracle 9i database: I've got four dates coming in from a table; I need to verify that they're valid by checking them against the person's birthdate, and then use only the valid dates in my package. So the dates need to be available at the package level to all functions and procedures within it.
I've looked at VARRAY and other collection types, but wasn't sure how to make them available to the entire session and have them automatically cleaned up at session end. Whatever I use, it needs to hold at least 3 columns of data for the duration of the session. Here are a few questions for the PL/SQL gurus:
- Is this possible?
- Should I be using a global temporary table? Like this, defined in one function:
execute immediate 'create global temporary table...';
- Are the values in that table available to other functions/procedures within the same session?
Your help is deeply appreciated.
In short, you want to create such a table once as part of setting up your application.
SQL> create global temporary table dates(
2 d1 date,
3 d2 date,
4 d3 date )
5 on commit preserve rows;
Table created.
Then in one session you can put some data into it:
SQL> insert into dates values(sysdate-30, sysdate-60, sysdate-90);
1 row created.
SQL> insert into dates values(sysdate-45, sysdate-45, sysdate-45);
1 row created.
SQL> select * from dates;
D1 D2 D3
17-AUG-05 18-JUL-05 18-JUN-05
02-AUG-05 02-AUG-05 02-AUG-05
At the same time, in a second session you can put entirely different data into it:
SQL> insert into dates values(sysdate, sysdate+1, sysdate+2);
1 row created.
SQL> select * from dates;
D1 D2 D3
16-SEP-05 17-SEP-05 18-SEP-05
Now both sessions have their own separate data in this global temporary table. When the sessions end, their data simply goes away.
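Tying this back to the original question, every procedure and function in a package then sees only the current session's rows of the global temporary table. A minimal sketch (the "valid date" rule here is an assumption, not from the thread):

```sql
create or replace package date_check as
  function count_valid_dates( p_birthdate date ) return number;
end date_check;
/
create or replace package body date_check as
  -- Any subprogram in this package reads the session-private rows of DATES.
  function count_valid_dates( p_birthdate date ) return number is
    v_cnt number;
  begin
    select count(*)
      into v_cnt
      from dates
     where d1 > p_birthdate   -- assumption: a date is "valid" only if it
       and d2 > p_birthdate   -- falls after the person's birthdate
       and d3 > p_birthdate;
    return v_cnt;
  end count_valid_dates;
end date_check;
/
```

No explicit cleanup is needed: the session's rows vanish when the session ends.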
Similar Messages
-
Needed help in bulk collect using collections
Hi,
I have created a schema-level collection like "CREATE OR REPLACE TYPE T_EMP_NO IS TABLE OF NUMBER;"
Will I be able to use this in a where clause which involves bulk collect?
Please share your thoughts.
My oracle version is 10g.
user13710379 wrote:
Will I be able to do a bulk collect into a table using this collection of my SQL type?
Bulk fetch collects into an array-like structure - not into a SQL table-like structure. So calling a collection variable in PL/SQL a "PL/SQL table" does not make much sense, as this array structure is nothing like a table. For the same reason, one needs to question running SQL select statements against PL/SQL arrays.
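As a minimal sketch of bulk fetching into such a schema-level type (assuming a scott-style emp table with a numeric empno column):

```sql
declare
  v_ids T_EMP_NO;
begin
  -- bulk fetch a numeric column into the SQL collection type
  select empno bulk collect into v_ids from emp;
  -- because T_EMP_NO is a SQL (schema-level) type, it can also be used
  -- back in a where clause via the TABLE() operator
  for r in ( select ename
               from emp
              where empno in ( select column_value from table( v_ids ) ) ) loop
    dbms_output.put_line( r.ename );
  end loop;
end;
```

The TABLE() usage in the where clause works precisely because the type was created at schema level, not declared inside PL/SQL.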
As for the SQL type you defined - it is a collection (array) of numbers. Thus it can be used to bulk fetch a numeric column. -
Hello guys, does anybody know how to install and use Adobe Master Collection with the new Lion?
I need to use Flash and Illustrator, but apparently those programs are incompatible with the new operating system...
I am a new Mac user and I'd like to know if there are other similar programs I can use with Lion!
Lab79 wrote:
Are you on Apple's payroll?
Well, dude, I can only let you know that as I work with those programs I don't have to pay - it is my company that pays for the programs I want to use (that's why I was asking if there were other programs I could use with Lion instead of Illustrator and Flash!). I have known Adobe since 2005 and I can say that Adobe's products are very good... I think that if it's an Adobe problem or fault, they will solve it very soon... but unfortunately I have the impression that after Jobs passed away Apple decided to change politics, and everything started to go very bad! (see FCP X)..
good luck with apple dude..
Where is the Apple problem? I have CS4 and CS5 running perfectly fine on my MacBook Pro. Installed 5 after the Lion upgrade. Worth every cent. Adobe did have some catching up to do with Lion, but with the CS5.5 update all runs fine. But not yours. So it is a problem with the Lion OS? You say you have been with Adobe since 2005, so you would be aware of all the other issues that Adobe had catching up with past OSs on Mac and Windows. They get it right, but it is up to them. It is not up to Apple, nor Microsoft for that matter, to run around and check that every software developer in the world is running their business properly.
And what has politics got to do with anything? Some people just have to blame software for their poor hardware maintenance, or failure of the same.
<The only thing I can really do is to go back to my old Windows... give back this horrible laptop and ask for my money back!>
Great suggestion. You should go with that one, but good luck getting a refund.
Bye -
Can I use Bulk Collect results as input parameter for another cursor
MUSIC ==> remote MUSIC_DB database, MUSIC table has 60 million rows
PRICE_DATA ==> remote PRICING_DB database, PRICE_DATA table has 1 billion rows
These two tables once existed in the same database, but the size of the database exceeded the available hardware size and hardware budget, so the PRICE_DATA table was moved to another Oracle database. I need to create a single report that combines data from both of these tables, and a distributed join with a DRIVING_SITE hint will not work because both tables are too large to push to one DRIVING_SITE location, so I wrote this PL/SQL block to process in small batches.
QUESTION: how can I use bulk collect from one cursor and pass the bulk-collected information as input to a second cursor without specifically listing each cell of the PL/SQL bulk collection? See the sample pseudo-code below; I am trying to determine a more efficient way to code this than hard-coding 100 parameter names into the 2nd cursor.
NOTE: below is truly pseudo-code; I had to change the names of everything to adhere to an NDA. It works and is fast enough for my purposes, but if I want to change from 100 input parameters to 200, I have to add more hard-coded values. There has got to be a better way.
DECLARE
-- define cursor that retrieves distinct SONG_IDs from MUSIC table in remote music database
CURSOR C_CURRENT_MUSIC
IS
select distinct SONG_ID
from MUSIC@MUSIC_DB
where PRODUCTION_RELEASE=1;
/* define a parameterized cursor that accepts 100 SONG_IDs and retrieves
required pricing information */
CURSOR C_get_music_price_data (
P_SONG_ID_001 NUMBER, P_SONG_ID_002 NUMBER, P_SONG_ID_003 NUMBER, P_SONG_ID_004 NUMBER, P_SONG_ID_005 NUMBER, P_SONG_ID_006 NUMBER, P_SONG_ID_007 NUMBER, P_SONG_ID_008 NUMBER, P_SONG_ID_009 NUMBER, P_SONG_ID_010 NUMBER,
P_SONG_ID_011 NUMBER, P_SONG_ID_012 NUMBER, P_SONG_ID_013 NUMBER, P_SONG_ID_014 NUMBER, P_SONG_ID_015 NUMBER, P_SONG_ID_016 NUMBER, P_SONG_ID_017 NUMBER, P_SONG_ID_018 NUMBER, P_SONG_ID_019 NUMBER, P_SONG_ID_020 NUMBER,
P_SONG_ID_021 NUMBER, P_SONG_ID_022 NUMBER, P_SONG_ID_023 NUMBER, P_SONG_ID_024 NUMBER, P_SONG_ID_025 NUMBER, P_SONG_ID_026 NUMBER, P_SONG_ID_027 NUMBER, P_SONG_ID_028 NUMBER, P_SONG_ID_029 NUMBER, P_SONG_ID_030 NUMBER,
P_SONG_ID_031 NUMBER, P_SONG_ID_032 NUMBER, P_SONG_ID_033 NUMBER, P_SONG_ID_034 NUMBER, P_SONG_ID_035 NUMBER, P_SONG_ID_036 NUMBER, P_SONG_ID_037 NUMBER, P_SONG_ID_038 NUMBER, P_SONG_ID_039 NUMBER, P_SONG_ID_040 NUMBER,
P_SONG_ID_041 NUMBER, P_SONG_ID_042 NUMBER, P_SONG_ID_043 NUMBER, P_SONG_ID_044 NUMBER, P_SONG_ID_045 NUMBER, P_SONG_ID_046 NUMBER, P_SONG_ID_047 NUMBER, P_SONG_ID_048 NUMBER, P_SONG_ID_049 NUMBER, P_SONG_ID_050 NUMBER,
P_SONG_ID_051 NUMBER, P_SONG_ID_052 NUMBER, P_SONG_ID_053 NUMBER, P_SONG_ID_054 NUMBER, P_SONG_ID_055 NUMBER, P_SONG_ID_056 NUMBER, P_SONG_ID_057 NUMBER, P_SONG_ID_058 NUMBER, P_SONG_ID_059 NUMBER, P_SONG_ID_060 NUMBER,
P_SONG_ID_061 NUMBER, P_SONG_ID_062 NUMBER, P_SONG_ID_063 NUMBER, P_SONG_ID_064 NUMBER, P_SONG_ID_065 NUMBER, P_SONG_ID_066 NUMBER, P_SONG_ID_067 NUMBER, P_SONG_ID_068 NUMBER, P_SONG_ID_069 NUMBER, P_SONG_ID_070 NUMBER,
P_SONG_ID_071 NUMBER, P_SONG_ID_072 NUMBER, P_SONG_ID_073 NUMBER, P_SONG_ID_074 NUMBER, P_SONG_ID_075 NUMBER, P_SONG_ID_076 NUMBER, P_SONG_ID_077 NUMBER, P_SONG_ID_078 NUMBER, P_SONG_ID_079 NUMBER, P_SONG_ID_080 NUMBER,
P_SONG_ID_081 NUMBER, P_SONG_ID_082 NUMBER, P_SONG_ID_083 NUMBER, P_SONG_ID_084 NUMBER, P_SONG_ID_085 NUMBER, P_SONG_ID_086 NUMBER, P_SONG_ID_087 NUMBER, P_SONG_ID_088 NUMBER, P_SONG_ID_089 NUMBER, P_SONG_ID_090 NUMBER,
P_SONG_ID_091 NUMBER, P_SONG_ID_092 NUMBER, P_SONG_ID_093 NUMBER, P_SONG_ID_094 NUMBER, P_SONG_ID_095 NUMBER, P_SONG_ID_096 NUMBER, P_SONG_ID_097 NUMBER, P_SONG_ID_098 NUMBER, P_SONG_ID_099 NUMBER, P_SONG_ID_100 NUMBER )
IS
select
from PRICE_DATA@PRICING_DB vpc
where COUNTRY = 'USA'
and START_DATE <= sysdate
and END_DATE > sysdate
and vpc.SONG_ID IN (
P_SONG_ID_001 ,P_SONG_ID_002 ,P_SONG_ID_003 ,P_SONG_ID_004 ,P_SONG_ID_005 ,P_SONG_ID_006 ,P_SONG_ID_007 ,P_SONG_ID_008 ,P_SONG_ID_009 ,P_SONG_ID_010,
P_SONG_ID_011 ,P_SONG_ID_012 ,P_SONG_ID_013 ,P_SONG_ID_014 ,P_SONG_ID_015 ,P_SONG_ID_016 ,P_SONG_ID_017 ,P_SONG_ID_018 ,P_SONG_ID_019 ,P_SONG_ID_020,
P_SONG_ID_021 ,P_SONG_ID_022 ,P_SONG_ID_023 ,P_SONG_ID_024 ,P_SONG_ID_025 ,P_SONG_ID_026 ,P_SONG_ID_027 ,P_SONG_ID_028 ,P_SONG_ID_029 ,P_SONG_ID_030,
P_SONG_ID_031 ,P_SONG_ID_032 ,P_SONG_ID_033 ,P_SONG_ID_034 ,P_SONG_ID_035 ,P_SONG_ID_036 ,P_SONG_ID_037 ,P_SONG_ID_038 ,P_SONG_ID_039 ,P_SONG_ID_040,
P_SONG_ID_041 ,P_SONG_ID_042 ,P_SONG_ID_043 ,P_SONG_ID_044 ,P_SONG_ID_045 ,P_SONG_ID_046 ,P_SONG_ID_047 ,P_SONG_ID_048 ,P_SONG_ID_049 ,P_SONG_ID_050,
P_SONG_ID_051 ,P_SONG_ID_052 ,P_SONG_ID_053 ,P_SONG_ID_054 ,P_SONG_ID_055 ,P_SONG_ID_056 ,P_SONG_ID_057 ,P_SONG_ID_058 ,P_SONG_ID_059 ,P_SONG_ID_060,
P_SONG_ID_061 ,P_SONG_ID_062 ,P_SONG_ID_063 ,P_SONG_ID_064 ,P_SONG_ID_065 ,P_SONG_ID_066 ,P_SONG_ID_067 ,P_SONG_ID_068 ,P_SONG_ID_069 ,P_SONG_ID_070,
P_SONG_ID_071 ,P_SONG_ID_072 ,P_SONG_ID_073 ,P_SONG_ID_074 ,P_SONG_ID_075 ,P_SONG_ID_076 ,P_SONG_ID_077 ,P_SONG_ID_078 ,P_SONG_ID_079 ,P_SONG_ID_080,
P_SONG_ID_081 ,P_SONG_ID_082 ,P_SONG_ID_083 ,P_SONG_ID_084 ,P_SONG_ID_085 ,P_SONG_ID_086 ,P_SONG_ID_087 ,P_SONG_ID_088 ,P_SONG_ID_089 ,P_SONG_ID_090,
P_SONG_ID_091 ,P_SONG_ID_092 ,P_SONG_ID_093 ,P_SONG_ID_094 ,P_SONG_ID_095 ,P_SONG_ID_096 ,P_SONG_ID_097 ,P_SONG_ID_098 ,P_SONG_ID_099 ,P_SONG_ID_100 )
group by
vpc.SONG_ID
,vpc.STOREFRONT_ID;
TYPE SONG_ID_TYPE IS TABLE OF MUSIC.SONG_ID@MUSIC_DB%TYPE INDEX BY BINARY_INTEGER;
V_SONG_ID_ARRAY SONG_ID_TYPE ;
v_commit_counter NUMBER := 0;
BEGIN
/* open the cursor you intend to bulk collect from */
OPEN C_CURRENT_MUSIC;
LOOP
/* in batches of 100, bulk collect SONG_IDs into a PL/SQL table */
FETCH C_CURRENT_MUSIC BULK COLLECT INTO V_SONG_ID_ARRAY LIMIT 100;
EXIT WHEN V_SONG_ID_ARRAY.COUNT = 0;
/* To avoid a NO_DATA_FOUND error when passing 100 parameters to the OPEN cursor, if the array
is not fully populated to 100, pad the array with nulls to fill up to 100 cells. */
IF (V_SONG_ID_ARRAY.COUNT >=1 and V_SONG_ID_ARRAY.COUNT <> 100) THEN
FOR j IN V_SONG_ID_ARRAY.COUNT+1..100 LOOP
V_SONG_ID_ARRAY(j) := null;
END LOOP;
END IF;
/* pass a batch of 100 to the cursor that gets price information per SONG_ID and STOREFRONT_ID */
FOR j IN C_get_music_price_data (
V_SONG_ID_ARRAY(1) ,V_SONG_ID_ARRAY(2) ,V_SONG_ID_ARRAY(3) ,V_SONG_ID_ARRAY(4) ,V_SONG_ID_ARRAY(5) ,V_SONG_ID_ARRAY(6) ,V_SONG_ID_ARRAY(7) ,V_SONG_ID_ARRAY(8) ,V_SONG_ID_ARRAY(9) ,V_SONG_ID_ARRAY(10) ,
V_SONG_ID_ARRAY(11) ,V_SONG_ID_ARRAY(12) ,V_SONG_ID_ARRAY(13) ,V_SONG_ID_ARRAY(14) ,V_SONG_ID_ARRAY(15) ,V_SONG_ID_ARRAY(16) ,V_SONG_ID_ARRAY(17) ,V_SONG_ID_ARRAY(18) ,V_SONG_ID_ARRAY(19) ,V_SONG_ID_ARRAY(20) ,
V_SONG_ID_ARRAY(21) ,V_SONG_ID_ARRAY(22) ,V_SONG_ID_ARRAY(23) ,V_SONG_ID_ARRAY(24) ,V_SONG_ID_ARRAY(25) ,V_SONG_ID_ARRAY(26) ,V_SONG_ID_ARRAY(27) ,V_SONG_ID_ARRAY(28) ,V_SONG_ID_ARRAY(29) ,V_SONG_ID_ARRAY(30) ,
V_SONG_ID_ARRAY(31) ,V_SONG_ID_ARRAY(32) ,V_SONG_ID_ARRAY(33) ,V_SONG_ID_ARRAY(34) ,V_SONG_ID_ARRAY(35) ,V_SONG_ID_ARRAY(36) ,V_SONG_ID_ARRAY(37) ,V_SONG_ID_ARRAY(38) ,V_SONG_ID_ARRAY(39) ,V_SONG_ID_ARRAY(40) ,
V_SONG_ID_ARRAY(41) ,V_SONG_ID_ARRAY(42) ,V_SONG_ID_ARRAY(43) ,V_SONG_ID_ARRAY(44) ,V_SONG_ID_ARRAY(45) ,V_SONG_ID_ARRAY(46) ,V_SONG_ID_ARRAY(47) ,V_SONG_ID_ARRAY(48) ,V_SONG_ID_ARRAY(49) ,V_SONG_ID_ARRAY(50) ,
V_SONG_ID_ARRAY(51) ,V_SONG_ID_ARRAY(52) ,V_SONG_ID_ARRAY(53) ,V_SONG_ID_ARRAY(54) ,V_SONG_ID_ARRAY(55) ,V_SONG_ID_ARRAY(56) ,V_SONG_ID_ARRAY(57) ,V_SONG_ID_ARRAY(58) ,V_SONG_ID_ARRAY(59) ,V_SONG_ID_ARRAY(60) ,
V_SONG_ID_ARRAY(61) ,V_SONG_ID_ARRAY(62) ,V_SONG_ID_ARRAY(63) ,V_SONG_ID_ARRAY(64) ,V_SONG_ID_ARRAY(65) ,V_SONG_ID_ARRAY(66) ,V_SONG_ID_ARRAY(67) ,V_SONG_ID_ARRAY(68) ,V_SONG_ID_ARRAY(69) ,V_SONG_ID_ARRAY(70) ,
V_SONG_ID_ARRAY(71) ,V_SONG_ID_ARRAY(72) ,V_SONG_ID_ARRAY(73) ,V_SONG_ID_ARRAY(74) ,V_SONG_ID_ARRAY(75) ,V_SONG_ID_ARRAY(76) ,V_SONG_ID_ARRAY(77) ,V_SONG_ID_ARRAY(78) ,V_SONG_ID_ARRAY(79) ,V_SONG_ID_ARRAY(80) ,
V_SONG_ID_ARRAY(81) ,V_SONG_ID_ARRAY(82) ,V_SONG_ID_ARRAY(83) ,V_SONG_ID_ARRAY(84) ,V_SONG_ID_ARRAY(85) ,V_SONG_ID_ARRAY(86) ,V_SONG_ID_ARRAY(87) ,V_SONG_ID_ARRAY(88) ,V_SONG_ID_ARRAY(89) ,V_SONG_ID_ARRAY(90) ,
V_SONG_ID_ARRAY(91) ,V_SONG_ID_ARRAY(92) ,V_SONG_ID_ARRAY(93) ,V_SONG_ID_ARRAY(94) ,V_SONG_ID_ARRAY(95) ,V_SONG_ID_ARRAY(96) ,V_SONG_ID_ARRAY(97) ,V_SONG_ID_ARRAY(98) ,V_SONG_ID_ARRAY(99) ,V_SONG_ID_ARRAY(100) )
LOOP
/* do stuff with data from the Song and Pricing databases coming from the two
separate cursors, then continue processing more rows... */
END LOOP;
/* commit after each batch of 100 SONG_IDs is processed */
COMMIT;
EXIT WHEN C_CURRENT_MUSIC%NOTFOUND; -- exit when there are no more rows to fetch from cursor
END LOOP; -- bulk fetching loop
CLOSE C_CURRENT_MUSIC; -- close cursor that was used in bulk collection
/* commit rows */
COMMIT; -- commit any remaining uncommitted data.
END;
I've got a problem when passing a VARRAY of numbers as a parameter to a remote cursor: it takes a very long time to run, sometimes not finishing even after an hour has passed.
Continuing with my example in the original entry, I replaced the bulk collect into a PL/SQL table collection with a VARRAY, and I bulk collect into the VARRAY. This is fast, and I know it works because I can DBMS_OUTPUT.PUT_LINE cells of the VARRAY, so I know it is getting populated correctly. However, when I pass the VARRAY containing 100 cells populated with SONG_IDs as a parameter to the cursor, execution time is over an hour when I am expecting a few seconds.
The code example below strips the problem down to its raw details. I skip the bulk collect and just manually populate a VARRAY with 100 SONG_ID values, then try to pass it as a parameter to a cursor, but the execution time of the cursor is unexpectedly long: over 30 minutes, sometimes longer, when I am expecting seconds.
IMPORTANT: If I take the same 100 SONG_IDs and place them directly in the cursor query's where IN clause, the SQL runs in under 5 seconds and returns result. Also, if I pass the 100 SONG_IDs as individual cells of a PLSQL table collection, then it also runs fast.
I thought that since the VARRAY is used via a select subquery it is queried locally while the cursor is remote, and that I had a distributed-query problem on my hands. So I put in the DRIVING_SITE hint to force the result of the query against the VARRAY to go to the remote server, with the rest of the query running there before returning the result, but that didn't work either; I still got a slow response.
Is something wrong with my code, or am I running into an Oracle problem that may require support to resolve?
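For what it's worth, one commonly suggested workaround for collection-over-dblink slowness is to avoid probing a local collection from the remote site at all: push each batch of IDs into a table on the remote database and join there. This is only a sketch; it assumes a global temporary table SONG_ID_BATCH can be created once on PRICING_DB:

```sql
-- Assumed remote object, created once on PRICING_DB:
--   create global temporary table SONG_ID_BATCH( song_id number )
--   on commit delete rows;
declare
  v_ids SYS.ODCInumberList := SYS.ODCInumberList( 31135, 31140, 31142 );
begin
  -- send the batch across the link; the dblink session sees the GTT rows
  for i in 1 .. v_ids.count loop
    insert into SONG_ID_BATCH@PRICING_DB values ( v_ids(i) );
  end loop;
  -- the join can now be executed entirely on the remote site
  for r in ( select /*+ DRIVING_SITE(pd) */ pd.SONG_ID, count(*) cnt
               from PRICE_DATA@PRICING_DB pd,
                    SONG_ID_BATCH@PRICING_DB b
              where pd.SONG_ID = b.song_id
                and pd.COUNTRY = 'USA'
              group by pd.SONG_ID ) loop
    null;  -- process each priced SONG_ID here
  end loop;
  commit;  -- on commit delete rows clears the batch for the next iteration
end;
```

The point of the design is that the remote optimizer joins two remote tables, instead of repeatedly fetching collection rows over the link.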
DECLARE
/* define a parameterized cursor that accepts a collection of SONG_IDs and
retrieves the required pricing information */
CURSOR C_get_music_price_data (
p_array_song_ids SYS.ODCInumberList )
IS
select /*+DRIVING_SITE(pd) */
count(distinct pd.EVE_ID)
from PRICE_DATA@PRICING_DB pd
where pd.COUNTRY = 'USA'
and pd.START_DATE <= sysdate
and pd.END_DATE > sysdate
and pd.SONG_ID IN
( select column_value from table(p_array_song_ids) )
group by
pd.SONG_ID
,pd.STOREFRONT_ID;
V_ARRAY_SONG_IDS SYS.ODCInumberList := SYS.ODCInumberList();
BEGIN
V_ARRAY_SONG_IDS.EXTEND(100);
V_ARRAY_SONG_IDS( 1 ) := 31135 ;
V_ARRAY_SONG_IDS( 2 ) := 31140 ;
V_ARRAY_SONG_IDS( 3 ) := 31142 ;
V_ARRAY_SONG_IDS( 4 ) := 31144 ;
V_ARRAY_SONG_IDS( 5 ) := 31146 ;
V_ARRAY_SONG_IDS( 6 ) := 31148 ;
V_ARRAY_SONG_IDS( 7 ) := 31150 ;
V_ARRAY_SONG_IDS( 8 ) := 31152 ;
V_ARRAY_SONG_IDS( 9 ) := 31154 ;
V_ARRAY_SONG_IDS( 10 ) := 31156 ;
V_ARRAY_SONG_IDS( 11 ) := 31158 ;
V_ARRAY_SONG_IDS( 12 ) := 31160 ;
V_ARRAY_SONG_IDS( 13 ) := 33598 ;
V_ARRAY_SONG_IDS( 14 ) := 33603 ;
V_ARRAY_SONG_IDS( 15 ) := 33605 ;
V_ARRAY_SONG_IDS( 16 ) := 33607 ;
V_ARRAY_SONG_IDS( 17 ) := 33609 ;
V_ARRAY_SONG_IDS( 18 ) := 33611 ;
V_ARRAY_SONG_IDS( 19 ) := 33613 ;
V_ARRAY_SONG_IDS( 20 ) := 33615 ;
V_ARRAY_SONG_IDS( 21 ) := 33617 ;
V_ARRAY_SONG_IDS( 22 ) := 33630 ;
V_ARRAY_SONG_IDS( 23 ) := 33632 ;
V_ARRAY_SONG_IDS( 24 ) := 33636 ;
V_ARRAY_SONG_IDS( 25 ) := 33638 ;
V_ARRAY_SONG_IDS( 26 ) := 33640 ;
V_ARRAY_SONG_IDS( 27 ) := 33642 ;
V_ARRAY_SONG_IDS( 28 ) := 33644 ;
V_ARRAY_SONG_IDS( 29 ) := 33646 ;
V_ARRAY_SONG_IDS( 30 ) := 33648 ;
V_ARRAY_SONG_IDS( 31 ) := 33662 ;
V_ARRAY_SONG_IDS( 32 ) := 33667 ;
V_ARRAY_SONG_IDS( 33 ) := 33669 ;
V_ARRAY_SONG_IDS( 34 ) := 33671 ;
V_ARRAY_SONG_IDS( 35 ) := 33673 ;
V_ARRAY_SONG_IDS( 36 ) := 33675 ;
V_ARRAY_SONG_IDS( 37 ) := 33677 ;
V_ARRAY_SONG_IDS( 38 ) := 33679 ;
V_ARRAY_SONG_IDS( 39 ) := 33681 ;
V_ARRAY_SONG_IDS( 40 ) := 33683 ;
V_ARRAY_SONG_IDS( 41 ) := 33685 ;
V_ARRAY_SONG_IDS( 42 ) := 33700 ;
V_ARRAY_SONG_IDS( 43 ) := 33702 ;
V_ARRAY_SONG_IDS( 44 ) := 33704 ;
V_ARRAY_SONG_IDS( 45 ) := 33706 ;
V_ARRAY_SONG_IDS( 46 ) := 33708 ;
V_ARRAY_SONG_IDS( 47 ) := 33710 ;
V_ARRAY_SONG_IDS( 48 ) := 33712 ;
V_ARRAY_SONG_IDS( 49 ) := 33723 ;
V_ARRAY_SONG_IDS( 50 ) := 33725 ;
V_ARRAY_SONG_IDS( 51 ) := 33727 ;
V_ARRAY_SONG_IDS( 52 ) := 33729 ;
V_ARRAY_SONG_IDS( 53 ) := 33731 ;
V_ARRAY_SONG_IDS( 54 ) := 33733 ;
V_ARRAY_SONG_IDS( 55 ) := 33735 ;
V_ARRAY_SONG_IDS( 56 ) := 33737 ;
V_ARRAY_SONG_IDS( 57 ) := 33749 ;
V_ARRAY_SONG_IDS( 58 ) := 33751 ;
V_ARRAY_SONG_IDS( 59 ) := 33753 ;
V_ARRAY_SONG_IDS( 60 ) := 33755 ;
V_ARRAY_SONG_IDS( 61 ) := 33757 ;
V_ARRAY_SONG_IDS( 62 ) := 33759 ;
V_ARRAY_SONG_IDS( 63 ) := 33761 ;
V_ARRAY_SONG_IDS( 64 ) := 33763 ;
V_ARRAY_SONG_IDS( 65 ) := 33775 ;
V_ARRAY_SONG_IDS( 66 ) := 33777 ;
V_ARRAY_SONG_IDS( 67 ) := 33779 ;
V_ARRAY_SONG_IDS( 68 ) := 33781 ;
V_ARRAY_SONG_IDS( 69 ) := 33783 ;
V_ARRAY_SONG_IDS( 70 ) := 33785 ;
V_ARRAY_SONG_IDS( 71 ) := 33787 ;
V_ARRAY_SONG_IDS( 72 ) := 33789 ;
V_ARRAY_SONG_IDS( 73 ) := 33791 ;
V_ARRAY_SONG_IDS( 74 ) := 33793 ;
V_ARRAY_SONG_IDS( 75 ) := 33807 ;
V_ARRAY_SONG_IDS( 76 ) := 33809 ;
V_ARRAY_SONG_IDS( 77 ) := 33811 ;
V_ARRAY_SONG_IDS( 78 ) := 33813 ;
V_ARRAY_SONG_IDS( 79 ) := 33815 ;
V_ARRAY_SONG_IDS( 80 ) := 33817 ;
V_ARRAY_SONG_IDS( 81 ) := 33819 ;
V_ARRAY_SONG_IDS( 82 ) := 33821 ;
V_ARRAY_SONG_IDS( 83 ) := 33823 ;
V_ARRAY_SONG_IDS( 84 ) := 33825 ;
V_ARRAY_SONG_IDS( 85 ) := 33839 ;
V_ARRAY_SONG_IDS( 86 ) := 33844 ;
V_ARRAY_SONG_IDS( 87 ) := 33846 ;
V_ARRAY_SONG_IDS( 88 ) := 33848 ;
V_ARRAY_SONG_IDS( 89 ) := 33850 ;
V_ARRAY_SONG_IDS( 90 ) := 33852 ;
V_ARRAY_SONG_IDS( 91 ) := 33854 ;
V_ARRAY_SONG_IDS( 92 ) := 33856 ;
V_ARRAY_SONG_IDS( 93 ) := 33858 ;
V_ARRAY_SONG_IDS( 94 ) := 33860 ;
V_ARRAY_SONG_IDS( 95 ) := 33874 ;
V_ARRAY_SONG_IDS( 96 ) := 33879 ;
V_ARRAY_SONG_IDS( 97 ) := 33881 ;
V_ARRAY_SONG_IDS( 98 ) := 33883 ;
V_ARRAY_SONG_IDS( 99 ) := 33885 ;
V_ARRAY_SONG_IDS(100 ) := 33889 ;
/* do stuff with data from the Song and Pricing databases coming from the two
separate cursors, then continue processing more rows... */
FOR i IN C_get_music_price_data( v_array_song_ids ) LOOP
-- (this is the loop where I pass in v_array_song_ids
-- populated with only 100 cells and it runs forever)
END LOOP;
END;
-
Using a collection in a where clause
Guys,
I am sure this is pretty simple but it has me beat at the moment. Any help would be much appreciated.
I am sure there are other solutions, but I need to use bulk collect with a 100 limit, so I can't rewrite without that feature.
I am using 10.2
1 DECLARE
2 v_dim_name_list varchar2(2000);
3 TYPE object_type_tab IS TABLE OF user_objects.object_type%TYPE;
4 TYPE dim_list_tab IS TABLE OF user_objects.object_type%TYPE;
5 l_object_type object_type_tab := object_type_tab();
6 l_dimensions dim_list_tab := dim_list_tab();
7 CURSOR unit_data_cur
8 IS
9 SELECT distinct object_type
10 from user_objects
11 where object_type in (select column_value from TABLE(cast(l_dimensions as dim_list_tab)))
12 ORDER BY 1,2;
13 BEGIN
14 l_dimensions.EXTEND(3);
15 l_dimensions(1) := 'INDEX';
16 l_dimensions(2) := 'TABLE';
17 l_dimensions(3) := 'VIEW';
18 OPEN unit_data_cur;
19 LOOP
20 FETCH unit_data_cur
21 BULK COLLECT INTO l_object_type LIMIT 100;
22 For g IN 1 .. l_object_type.COUNT
23 LOOP
24 DBMS_OUTPUT.PUT_LINE(l_object_type(g));
25 END LOOP;
26 EXIT WHEN l_object_type.COUNT < 100;
27 END LOOP;
28 CLOSE unit_data_cur;
29* end;
SQL> /
where object_type in (select column_value from TABLE(cast(l_dimensions as dim_list_tab)))
ERROR at line 11:
ORA-06550: line 11, column 75:
PL/SQL: ORA-00902: invalid datatype
ORA-06550: line 9, column 1:
PL/SQL: SQL Statement ignored
I got this from askTom, which seems to suggest I am using the correct type - am I misreading this?
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:3170352229012
and we said...
The major difference between:
(index by tables) and (nested tables/varrays)
is that index by tables are only available in PLSQL; nested tables/varrays are available
in both PLSQL and SQL.
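Which appears to be exactly the issue above: dim_list_tab is declared inside the PL/SQL block, so the SQL engine cannot see it and raises ORA-00902. Declaring the type at schema (SQL) level should make the TABLE() query legal; a minimal sketch:

```sql
-- Schema-level (SQL) type, visible to the SQL engine:
create or replace type dim_list_tab as table of varchar2(30);
/
declare
  l_dimensions dim_list_tab := dim_list_tab( 'INDEX', 'TABLE', 'VIEW' );
begin
  -- the TABLE() operator now works because the type exists in SQL
  for r in ( select distinct object_type
               from user_objects
              where object_type in ( select column_value
                                       from table( l_dimensions ) )
              order by 1 ) loop
    dbms_output.put_line( r.object_type );
  end loop;
end;
/
```

The bulk collect with LIMIT 100 from the original block can be kept as-is; only the type declaration needs to move out to schema level.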
Gaz -
Too many security approvals needed to use my Google Maps! (Running KitKat)
Now that my Verizon S3 phone has been updated to KitKat the number of security boilerplate approvals needed to use anything related to maps or GPS is unacceptable.
I can't leave the GPS on as it drains the battery. So here is the problem: I'm driving along and I need to see my maps. Before, I would just hit the map button and I would see the map in the approximate area, no GPS required. Now when I do the same, I get a box for settings; I click on that and it sends me to a Location page where I need to turn on the GPS; now I need to agree that I have turned on the GPS; now I get another box to agree that Google apps are collecting my location info; at this point I need to step back out to the map app to see where I am.
Most users of the maps app will use it while driving (even if you don't think it's a good idea). I don't think going from 1 step to 5 is making the situation any safer; it now takes many more seconds of not looking at the road to get the app working than it did before.
If you have a fix for this, that would be great!
Thanks for it, I was also looking for the hello world example
of using externalInterface to load a Google Map into Flash using
AS2. I had asked this question many times here, but did not get a proper
answer; when I googled, I found this page and got the answer.
Thanks again & hope I will get more info from this forum. -
Need some help in collection initialization
Hi
I am facing some problems with collection initialization. We have a 10g DB. It is throwing an 'ORA-06531: Reference to uninitialized collection' error. I tried out the same procedure with the emp table, and it is not throwing any error. I am pasting both codes here.
----On emp table---following code on emp in 10g is working--
DECLARE
TYPE EMP_REC_TYPE IS TABLE OF SCOTT.EMP%ROWTYPE;
EMP_REC EMP_REC_TYPE;
cursor c1(p_emp number) is
select * from scott.emp where empno=p_emp;
BEGIN
for i in (select empno from scott.emp where empno=7369) loop
for j in c1(i.empno) loop
select j.empno,j.ename,j.job,j.mgr,j.hiredate,j.sal,j.comm,j.deptno
bulk collect into emp_rec
from dual;
end loop;
end loop;
FOR I IN emp_rec.first ..EMP_REC.last LOOP
DBMS_OUTPUT.PUT_LINE(EMP_REC(I).EMPNO);
END LOOP;
END;
----following throws ORA-06531----
declare
type gpds_rec_type is table of BHENPD28.T70101CCD%rowtype;
gpds_rec gpds_rec_type;
cursor c_insert(p_bom varchar) is
select * from BHENPD28.T70101CCD CCD
where expl_bld_no=p_bom
and ibmsnap_operation='I'
AND NOT EXISTS
(SELECT 1 FROM
BHENPD28.T70101CCD CCD1
WHERE CCD1.UPC_CDE=CCD.UPC_CDE
AND CCD1.FNA_CDe=CCD.FNA_CDE
AND CCD1.PART_NO=CCD.PART_NO
AND CCD.PART_PLS=CCD1.PART_PLS
AND CCD1.PART_DLS=CCD.PART_DLS
AND CCD1.PART_USAGE_QTY=CCD.PART_USAGE_QTY
AND CCD1.IBMSNAP_OPERaTION IN ('D','U')
AND CCD1.IBMSNAP_LOGMARKER>CCD.IBMSNAP_LOGMARKER); -- take all records with operation 'I' that do not have a later update or delete
BEGIN
for i in (select distinct expl_bld_no from BHENPD28.T70101CCD where expl_bld_no='RCGN4180') loop
for j in c_insert(i.expl_bld_no) loop
SELECT j.EXPL_BLD_NO,j.upc_prefx,j.UPC_CDE,j.fna_cde,j.prod_yr,j.prod_line_cde,j.part_no,j.part_dls,j.part_pls,J.USAGE_ITEM_NO,J.MOS_KEY_CDE,
J.ENGNG_DIV_CDE,J.BLD_INTNT_DIV_CDE,j.modlr_asm_statn,J.UNIT_MSRE_CDE,J.PAINT_PRC_CDE,j.veh_sys_mgmt_team,j.prod_dev_impl_team,
j.chrg_no,j.ewo_no,J.SUB1_PART_NO,J.SUB1_PART_DLS,J.SUB1_PART_PLS,J.SUB2_PART_NO,J.SUB2_PART_DLS,j.sub2_part_pls,j.mdl_cde,
j.fna_mdf_desc,J.HAND_CDE,j.part_usage_qty,J.NO_OF_UNITS_QTY,J.USAGE_QLFN_CDE,J.MAKE_PUR_CDE,j.usage_st_cde,J.LESS_FIN_IND_CDE,
j.clr_tbl_cde,J.STK_DISP_CDE,j.asm_plt_actn_cde,J.SOURCE_CDE,J.PART_SPCL_ORD_CDE,J.TORQUE1,j.torque2,j.bom_usage_asm_ind,j.bom_usage_det_reqd,
j.bld_blk_phase,J.BOM_USAGE_ISTL_QTY,J.BOM_USAGE_ISTL_IND,j.usage_trmt_ind_cde,J.USAGE_TRMT_DTE_TME,j.calc_flag,j.insp_wt,j.insp_dte,J.METAL_GAUG,
J.ASM_CDE,j.expl_id_sak,j.add_dte_tme,j.tran_dte_tme,j.upd_usr_id_cde,J.UPD_PGM_CDE,j.dum_col_cde,j.sys_bom_item_no,j.ibmsnap_intentseq,j.ibmsnap_operation,
J.IBMSNAP_COMMITSEQ,J.IBMSNAP_LOGMARKER
bulk collect into gpds_rec
from dual;
end loop;
end loop;
for i in gpds_rec.first..gpds_rec.last loop
dbms_output.put_line(gpds_rec(i).expl_bld_no);
end loop;
end;
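One plausible explanation for the difference between the two scripts (an assumption, since only the poster can verify the data): in the working emp script the inner loop always finds a row, so the BULK COLLECT executes and implicitly initializes the collection; in the failing script, if c_insert returns no rows the BULK COLLECT never runs, gpds_rec stays atomically null, and gpds_rec.first then raises ORA-06531. Initializing the collection and guarding the final loop avoids both failure modes:

```sql
declare
  type gpds_rec_type is table of BHENPD28.T70101CCD%rowtype;
  gpds_rec gpds_rec_type := gpds_rec_type();  -- explicit initialization
begin
  -- ... cursor loops and BULK COLLECT exactly as in the original script ...
  if gpds_rec.count > 0 then
    -- .first/.last would be null on an empty collection, so guard the loop
    for i in gpds_rec.first .. gpds_rec.last loop
      dbms_output.put_line( gpds_rec(i).expl_bld_no );
    end loop;
  end if;
end;
```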
The basic logic involved in both scripts is the same.
Any input/help is highly appreciated.
Regards
Cklp
Please consider the following when you post a question.
1. New features keep coming in every Oracle version, so please provide your Oracle DB version to get the best possible answer.
You can use the following query and copy-paste the output:
select * from v$version
2. We don't know your DB structure or what your data looks like, so you need to let us know. The best way would be to give some sample data like this.
I have the following table called sales
with sales
as
(
select 1 sales_id, 1 prod_id, 1001 inv_num, 120 qty from dual
union all
select 2 sales_id, 1 prod_id, 1002 inv_num, 25 qty from dual
)
select *
from sales
3. Rather than telling us what you want in words, it is easier when you give your expected output.
For example in the above sales table, I want to know the total quantity and number of invoice for each product.
The output should look like this
Prod_id sum_qty count_inv
1 145 2
4. The next thing is very important to remember: please post only well-formatted code. Unformatted code is very hard to read.
Your code format gets lost when you post it in the Oracle Forum. So in order to preserve it you need to
use the {noformat}{noformat} tags.
The usage of the tag is like this.
{noformat}<place your code here>{noformat} -
Using bulk collect and for all to solve a problem
Hi All
I have a following problem.
Please forgive me if it's a stupid question :-) I'm learning.
1: Data in a staging table xx_staging_table
2: two Target table t1, t2 where some columns from xx_staging_table are inserted into
Some of the columns from the staging table data are checked for valid entries and then some columns from that row will be loaded into the two target tables.
The two target tables use different set of columns from the staging table
When I had a thousand records there was no problem with a direct insert but it seems we will now have half a million records.
This has slowed down the process considerably.
My question is
Can I use the bulk collect and forall functionality to get specific columns from a staging table, then validate the row using those columns,
and then use a bulk insert to load the data into a specific table?
So code would be like
get_staging_data cursor will have all the columns i need from the staging table
cursor get_staging_data
is select * from xx_staging_table; -- about 500,000 records
Use bulk collect to load about 10,000 or so records into a PL/SQL table,
and then do a bulk insert like this:
CREATE TABLE t1 AS SELECT * FROM all_objects WHERE 1 = 2;
CREATE OR REPLACE PROCEDURE test_proc (p_array_size IN PLS_INTEGER DEFAULT 100)
IS
TYPE ARRAY IS TABLE OF all_objects%ROWTYPE;
l_data ARRAY;
CURSOR c IS SELECT * FROM all_objects;
BEGIN
OPEN c;
LOOP
FETCH c BULK COLLECT INTO l_data LIMIT p_array_size;
FORALL i IN 1..l_data.COUNT
INSERT INTO t1 VALUES l_data(i);
EXIT WHEN c%NOTFOUND;
END LOOP;
CLOSE c;
END test_proc;
In the above example, t1 and the cursor have the same number of columns.
In my case the columns in the cursor loop are a small subset of the columns of table t1,
so can I use a FORALL to load that subset into table t1? How does that work?
Thanks
J
user7348303 wrote:
checking if the value is valid, and there are also some conditional processing rules (such as: if the value is a certain value, no inserts are needed)
which are a little more complex than I can put in a simple example
Well, if the processing is too complex (and conditional) to be done in SQL, then doing it in PL/SQL is justified... but it will be slower, as you are now introducing an additional layer. Data now needs to travel between the SQL layer and the PL/SQL layer. This is slower.
PL/SQL is inherently serialised - and this also affects performance and scalability. PL/SQL cannot be parallelised by Oracle in an automated fashion. SQL processes can.
To put it in simple terms: you create a PL/SQL procedure Foo that processes a SQL cursor, and you execute that proc. Oracle cannot run multiple parallel copies of Foo. It can perhaps parallelise the SQL cursor that Foo uses - but not Foo itself.
However, if Foo is called by the SQL engine, it can run in parallel - as the SQL process calling Foo is running in parallel. So if you make Foo a pipelined table function (written in PL/SQL), and you design and code it as a thread-safe/parallel-enabled function, it can be called and executed in parallel by the SQL engine.
So moving your PL/SQL code into a parallel-enabled pipelined function written in PL/SQL, and using that function via parallel SQL, can increase performance over running the same basic PL/SQL processing as a serialised process.
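To make that concrete, here is a rough sketch of a parallel-enabled pipelined table function; all object names are illustrative, not from the thread:

```sql
-- Supporting SQL types (schema level, so the SQL engine can see them):
create or replace type t_obj_row as object ( owner varchar2(128), cnt number );
/
create or replace type t_obj_tab as table of t_obj_row;
/
-- Parallel-enabled pipelined function: the SQL engine may run several
-- copies of it, each fed a partition of the ref cursor's rows.
create or replace function f_process( p_cur sys_refcursor )
  return t_obj_tab
  pipelined
  parallel_enable ( partition p_cur by any )
as
  v_owner varchar2(128);
begin
  loop
    fetch p_cur into v_owner;
    exit when p_cur%notfound;
    -- the complex per-row PL/SQL processing would go here
    pipe row( t_obj_row( v_owner, 1 ) );
  end loop;
  return;
end;
/
-- Consumed from (potentially parallel) SQL:
-- select * from table( f_process( cursor( select owner from all_tables ) ) );
```

PARTITION BY ANY tells Oracle it may split the cursor's rows across parallel slaves in any way, which is what makes the function safe to parallelise.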
This is of course assuming that the processing that needs to be done using PL/SQL code, can be designed and coded for parallel processing in this fashion. -
How to use BULK COLLECT in Oracle Forms 11g
Forms is showing the error "Feature is not supported in Client Side Program" when I am trying to implement BULK COLLECT in Forms 11g.
I need to load the full data set from the DB into my form, because using a cursor is very slow...
Is there any method/workaround to achieve this?
declare
type arr is table of emp%rowtype ;
lv_arr arr;
begin
select * bulk collect into lv_arr from emp;
/*written code here to process the data and write in to file*/
end;
Unless you are inserting/updating the data you are holding in the array into a database table, I don't think there is much performance gain in using bulk collect in conjunction with writing a file. Bulk processing increases performance by minimizing context switches between the SQL and PL/SQL engines - nothing more, nothing less.
In any case, bulk processing is not available in Forms; if you really need to make use of it, you need to do it in a stored procedure.
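A rough sketch of that approach (directory and file names are assumptions): do the BULK COLLECT in a server-side stored procedure and simply call that procedure from the form trigger:

```sql
create or replace procedure dump_emp_to_file( p_dir in varchar2, p_file in varchar2 )
as
  type emp_tab is table of emp%rowtype;
  l_rows emp_tab;
  l_out  utl_file.file_type;
begin
  -- the bulk collect runs entirely server-side, where it is supported
  select * bulk collect into l_rows from emp;
  l_out := utl_file.fopen( p_dir, p_file, 'w' );
  for i in 1 .. l_rows.count loop
    utl_file.put_line( l_out, l_rows(i).empno || ',' || l_rows(i).ename );
  end loop;
  utl_file.fclose( l_out );
end;
/
-- From the form trigger, just: dump_emp_to_file( 'MY_DIR', 'emp.csv' );
```

Note that UTL_FILE writes on the database server (to a directory object such as the assumed MY_DIR), not on the client machine running Forms.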
cheers -
Using multiple Collections in Report queries
I am using many collections in my report queries (where conditions) to create the XML needed for running reports (see the where condition below). I name each one with a different collection name. I am seeing some performance issues, and it looks like they have to do with the number of collections I am using. It seems that after around six collections are used in the query, the SQL slows down tremendously. It doesn't matter which ones I use, so it doesn't seem to be a specific one.
Are there issues with using many collections? Should I try using one collection and adding an identifier to one of the collection columns instead?
where ...
and C.FK_SCHOOL IN (SELECT c001
FROM apex_collections AC1
WHERE AC1.collection_name = 'SIS_REPORTS_SCHOOLS')
and nvl(C.ACTIVE,'NONE') IN (SELECT decode(c001,'NONE',nvl(C.ACTIVE,'NONE'),C001)
FROM apex_collections AC2
WHERE AC2.collection_name = 'SIS_REPORTS_ACTIVES')
and nvl(C5.CALENDAR_NO,'NONE') IN (SELECT decode(c001,'NONE',nvl(C5.CALENDAR_NO,'NONE'),C001)
FROM apex_collections AC3
WHERE AC3.collection_name = 'SIS_REPORTS_SCHOOL_CALENDAR_NOS')
and nvl(C10.CLUSTER_CODE,'NONE') IN (SELECT decode(c001,'NONE',nvl(C10.CLUSTER_CODE,'NONE'),C001)
FROM apex_collections AC4
WHERE AC4.collection_name = 'SIS_REPORTS_CLUSTER_CODES')
and nvl(C4.EMPLOYEE_NUMBER,'NONE') IN (SELECT decode(c001,'NONE',nvl(C4.EMPLOYEE_NUMBER,'NONE'),C001)
FROM apex_collections AC5
WHERE AC5.collection_name = 'SIS_REPORTS_COUNSELORS')
and nvl(I3.DISTRICT,'NONE') IN (SELECT decode(c001,'NONE',nvl(I3.DISTRICT,'NONE'),C001)
FROM apex_collections AC6
WHERE AC6.collection_name = 'SIS_REPORTS_DISTRICTS')
and nvl(A1.ETHNIC,'NONE') IN (SELECT decode(c001,'NONE',nvl(A1.ETHNIC,'NONE'),C001)
FROM apex_collections AC7
WHERE AC7.collection_name = 'SIS_REPORTS_ETHNICS')
and nvl(C2.GRADE_LEVEL,'NONE') IN (SELECT decode(c001,'NONE',nvl(C2.GRADE_LEVEL,'NONE'),C001)
FROM apex_collections AC8
WHERE AC8.collection_name = 'SIS_REPORTS_GRADE_LEVELS')
and nvl(C3.HOMEROOM,'NONE') IN (SELECT decode(c001,'NONE',nvl(C3.HOMEROOM,'NONE'),C001)
FROM apex_collections AC9
WHERE AC9.collection_name = 'SIS_REPORTS_HOMEROOMS')
and nvl(C9.PROGRAM_CODE,'NONE') IN (SELECT decode(c001,'NONE',nvl(C9.PROGRAM_CODE,'NONE'),C001)
FROM apex_collections AC10
WHERE AC10.collection_name = 'SIS_REPORTS_PROGRAM_CODES')
and nvl(E2.S_TYPE,'NONE') IN (SELECT decode(c001,'NONE',nvl(E2.S_TYPE,'NONE'),C001)
FROM apex_collections AC11
WHERE AC11.collection_name = 'SIS_REPORTS_SCH_TYPES')
and nvl(A.SEX,'NONE') IN (SELECT decode(c001,'NONE',nvl(A.SEX,'NONE'),C001)
FROM apex_collections AC12
WHERE AC12.collection_name = 'SIS_REPORTS_SEXS')
and nvl(C7.SPECIAL_ED,'NONE') IN (SELECT decode(c001,'NONE',nvl(C7.SPECIAL_ED,'NONE'),C001)
FROM apex_collections AC13
WHERE AC13.collection_name = 'SIS_REPORTS_SPECIAL_EDS')
and nvl(A.STUDENT_ID,'NONE') IN (SELECT decode(c001,'NONE',nvl(A.STUDENT_ID,'NONE'),C001)
FROM apex_collections AC14
WHERE AC14.collection_name = 'SIS_REPORTS_STUDENTS')
and A.PK_ID IN (SELECT decode(c001,'NONE',A.PK_ID,AC16.FK_STU_BASE)
FROM student_list_det AC16, apex_collections AC15
WHERE AC15.collection_name = 'SIS_REPORTS_STUDENT_LISTS' and
AC16.fk_student_list (+) = AC15.c001)

Hi,
APEX_COLLECTIONS are special structures that have no indexes except the one on SEQ_ID. The result is that as the number of collections used in a query increases, full table scans on the underlying tables kill speed. They are not intended for such heavy use, as has been discussed in some of the threads in this forum.
They are extremely useful, but no good for very large data sets or large numbers of joins. Global temporary tables are also not an option with APEX.
You may have to resort to Materialized Views or intermediate/temp tables to get speed.
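On the OP's own suggestion of using one collection with an identifier: a sketch of that approach (the collection name `SIS_REPORTS_FILTERS` and the use of `c002` as the filter-type column are assumptions) would collapse the fourteen collections into one and filter on both columns:

```sql
-- Hypothetical layout: one collection holds all filter values,
-- with c001 = value and c002 = filter type.
-- ... and C.FK_SCHOOL IN
(SELECT c001
   FROM apex_collections
  WHERE collection_name = 'SIS_REPORTS_FILTERS'
    AND c002 = 'SCHOOL')
-- ... and nvl(A.SEX,'NONE') IN
(SELECT decode(c001, 'NONE', nvl(A.SEX,'NONE'), c001)
   FROM apex_collections
  WHERE collection_name = 'SIS_REPORTS_FILTERS'
    AND c002 = 'SEX')
```

This reduces the number of scans of the collections view, but given the lack of indexes it may still be faster to materialize the filter values into a real table, as suggested below.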
Regards, -
How best to use catalogs, collections etc
I wonder if anyone can help out on a newbie question about how best to use catalogs, collections, etc
Let's say I shoot an event A. I take loads of photographs, which I import into Lightroom. I choose some that I like and I flag them. So far so good.
Then I do the same for events B to Z.
Now time passes and I would like to go back to all of the photographs from event C, and just select them for further processing.
Is this what collections are for? Should I be making "All Of Event C" into a collection, then in the future I can come back and just ask to show the photographs from a given collection?
Many thanks,
regards,
/alan

alan_potter had this to say:
I wonder if anyone can help out on a newbie question about how best to use catalogs, collections, etc
Let's say I shoot an event A. I take loads of photographs, which I import into Lightroom. I choose some that I like and I flag them. So far so good.
Then I do the same for events B to Z.
Now time passes and I would like to go back to all of the photographs from event C, and just select them for further processing.
Is this what collections are for?
You can do it that way.
>Should I be making "All Of Event C" into a collection, then in the future I can come back and just ask to show the photographs from a given collection?
That would work. Or you could include a specific job identifier into the
metadata (I believe there's an IPTC field specifically for that), and
then make a smart collection that isolates just that value of "Event C".
If you use the same star ratings for all jobs - 1 is finished, 2 is
further work needed, 3 is review, etc - you could further winnow it
down. And that way, you don't have to worry about removing photos from
one collection or another; the smart collection will automatically add
or remove them from view, based on the criteria and the current values
of the photos.
With collection sets, you could have a set called "Event 1". And within
this set could be the various collection criteria mentioned above. -
How to use a collection in ADF
Hi all,
How to use the collection in ADF.
Thanks in advance
C.Karukkuvel

Hi John,
Scenario:
On my page I have two tab pages. When the user enters the location names in the first tab and then navigates to the next tab, we need to show an LOV containing the locations based on the first tab's values.
When I try to use the view object, it lists only the data already in the database, but I also need the locations entered on the current page. Please guide me on this.
Thanks & Regards
C.Karukkuvel -
How to use BULK COLLECT, FORALL and TREAT
There is a need to read, match, and update data from and into a custom table. The table would have about 3 million rows and holds key numbers. Based on a field value of this custom table, relevant data needs to be fetched from joins of other tables and updated in the custom table. I plan to use BULK COLLECT and FORALL.
All examples I have seen, do an insert into a table. How do I go about reading all values of a given field and fetching other relevant data and then updating the custom table with data fetched.
I defined an object type like this:
CREATE OR REPLACE TYPE imei_ot AS OBJECT (
recid NUMBER,
imei VARCHAR2(30),
STORE VARCHAR2(100),
status VARCHAR2(1),
TIMESTAMP DATE,
order_number VARCHAR2(30),
order_type VARCHAR2(30),
sku VARCHAR2(30),
order_date DATE,
attribute1 VARCHAR2(240),
market VARCHAR2(240),
processed_flag VARCHAR2(1),
last_update_date DATE
);
Now within a package procedure I have defined the following.
type imei_ott is table of imei_ot;
imei_ntt imei_ott;
begin
SELECT imei_ot (recid,
imei,
STORE,
status,
TIMESTAMP,
order_number,
order_type,
sku,
order_date,
attribute1,
market,
processed_flag,
last_update_date
)
BULK COLLECT INTO imei_ntt
FROM (SELECT stg.recid, stg.imei, cip.store_location, 'S',
co.rtl_txn_timestamp, co.rtl_order_number, 'CUST',
msi.segment1 || '.' || msi.segment3,
TRUNC (co.txn_timestamp), col.part_number, 'ZZ',
stg.processed_flag, SYSDATE
FROM custom_orders co,
custom_order_lines col,
custom_stg stg,
mtl_system_items_b msi
WHERE co.header_id = col.header_id
AND msi.inventory_item_id = col.inventory_item_id
AND msi.organization_id =
(SELECT organization_id
FROM hr_all_organization_units_tl
WHERE NAME = 'Item Master'
AND source_lang = USERENV ('LANG'))
AND stg.imei = col.serial_number
AND stg.processed_flag = 'U');
/* Update staging table in one go for COR order data */
FORALL indx IN 1 .. imei_ntt.COUNT
UPDATE custom_stg
SET STORE = TREAT (imei_ntt (indx) AS imei_ot).STORE,
status = TREAT (imei_ntt (indx) AS imei_ot).status,
TIMESTAMP = TREAT (imei_ntt (indx) AS imei_ot).TIMESTAMP,
order_number = TREAT (imei_ntt (indx) AS imei_ot).order_number,
order_type = TREAT (imei_ntt (indx) AS imei_ot).order_type,
sku = TREAT (imei_ntt (indx) AS imei_ot).sku,
order_date = TREAT (imei_ntt (indx) AS imei_ot).order_date,
attribute1 = TREAT (imei_ntt (indx) AS imei_ot).attribute1,
market = TREAT (imei_ntt (indx) AS imei_ot).market,
processed_flag =
TREAT (imei_ntt (indx) AS imei_ot).processed_flag,
last_update_date =
TREAT (imei_ntt (indx) AS imei_ot).last_update_date
WHERE recid = TREAT (imei_ntt (indx) AS imei_ot).recid
AND imei = TREAT (imei_ntt (indx) AS imei_ot).imei;
DBMS_OUTPUT.put_line (TO_CHAR (SQL%ROWCOUNT)
|| ' rows updated using Bulk Collect / For All.');
EXCEPTION
WHEN NO_DATA_FOUND
THEN
DBMS_OUTPUT.put_line ('No Data: ' || SQLERRM);
WHEN OTHERS
THEN
DBMS_OUTPUT.put_line ('Other Error: ' || SQLERRM);
END;
Now for the unfortunate part. When I compile the pkg, I face an error
PL/SQL: ORA-00904: "LAST_UPDATE_DATE": invalid identifier
I am not sure where I am wrong. Object type has the last update date field and the custom table also has the same field.
Could someone please throw some light and suggestion?
Thanks
uds

I suspect your error comes from the »bulk collect into« and not from the »forall« loop.
At first glance, you need to alias sysdate as last_update_date, and some of the other select items need to be aliased as well. But a simplified version would be:
select imei_ot (stg.recid,
stg.imei,
cip.store_location,
'S',
co.rtl_txn_timestamp,
co.rtl_order_number,
'CUST',
msi.segment1 || '.' || msi.segment3,
trunc (co.txn_timestamp),
col.part_number,
'ZZ',
stg.processed_flag,
sysdate)
bulk collect into imei_ntt
from custom_orders co,
custom_order_lines col,
custom_stg stg,
mtl_system_items_b msi
where co.header_id = col.header_id
and msi.inventory_item_id = col.inventory_item_id
and msi.organization_id =
(select organization_id
from hr_all_organization_units_tl
where name = 'Item Master' and source_lang = userenv ('LANG'))
and stg.imei = col.serial_number
and stg.processed_flag = 'U';
... -
Having problem using BULK COLLECT - FORALL
Hi,
I'm facing a problem while setting a table type value before inserting into a table using FORALL.
My concern is that I'm unable to generate the values in the FOR loop; using dbms_output.put_line I observed that after 100 rows the process exits with the error:
ORA-22160: element at index [1] does not exist
ORA-06512: at "XYZ", line 568
ORA-06512: at line 2
I need to use the values stored in the FOR loop, in the same order, for insertion into table TEMP using FORALL.
I'm guessing that I'm using the wrong technique for storing values in the FOR loop.
Given below is my stored procedure structure.
Any suggestion would be helpful.
Thanks!!
create or replace procedure XYZ is
cursor cur is
select A,B,C,D from ABCD; ---expecting 40,000 row fetch
type t_A is table of ABCD.A%type index by pls_integer;
type t_B is table of ABCD.B%type index by pls_integer;
type t_C is table of ABCD.C%type index by pls_integer;
type t_D is table of ABCD.D%type index by pls_integer;
v_A t_A;
v_B t_B;
v_C t_C;
v_D t_D;
type t_E is table of VARCHAR2(100);
type t_F is table of VARCHAR2(100);
v_E t_E := t_E();
v_F t_F := t_F();
begin
open cur;
loop
fetch cur BULK COLLECT INTO v_A,v_B,v_C,v_D limit 100;
for i in 1 .. v_A.count loop
v_E.extend(i);
select 1111 into v_E(i) from dual;
v_F.extend(i);
v_F(i) := 'Hi !!';
----calculating v_E(i) and v_F(i) here----
end loop;
forall i in 1 .. v_A.count
insert into TEMP values (v_E(i), v_F(i));
exit when cur%NOTFOUND;
end loop;
close cur;
end;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for HPUX: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
-------
>The problem is that inside the IF ELSIF blocks i need to query various tables.
As I thought. But why did you concentrate on BULK COLLECT - FORALL?
>The cursor whereas does take more time to execute.
More time than what?
>We have a join of two tables having 1,800,000 (normal table) and 17,922,067 (MView) records, with an index on one of the join columns. After joining these two and adding the filter conditions I'm getting around 40,000 rows.
You have a cursor row, and then inside the loop you have a query which returns 40,000 rows? What do you do with that data?
Is the query you show running INSIDE the loop?
I guess you are still talking about the LOOP query and you are unhappy that it is not taking an index?
1. The loop is NOT the problem. It's the "... i need to query various tables".
2. Oracle is OK when it's NOT taking the index. That is faster!!
3. If you add code and execution plans, please add tags. Otherwise it's unreadable.
Try to merge your LOOP query with the "various tables" and make ONE query out of 40000*various ;-)
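For reference, a sketch of a corrected version of the loop from the question (same table names as the OP used). The ORA-22160 comes from calling `extend(i)` inside the loop, which grows the nested tables by i elements each iteration rather than by one; switching to associative arrays (or calling `extend` once per element) and running the FORALL per batch avoids it:

```sql
create or replace procedure xyz is
  cursor cur is select a, b, c, d from abcd;
  type t_row is table of cur%rowtype index by pls_integer;
  v_rows t_row;
  -- associative arrays need no extend() before assignment
  type t_str is table of varchar2(100) index by pls_integer;
  v_e t_str;
  v_f t_str;
begin
  open cur;
  loop
    fetch cur bulk collect into v_rows limit 100;
    exit when v_rows.count = 0;           -- test the batch, not %notfound
    for i in 1 .. v_rows.count loop
      v_e(i) := '1111';                   -- calculate v_e(i)/v_f(i) here
      v_f(i) := 'Hi !!';
    end loop;
    forall i in 1 .. v_rows.count         -- "forall i in", not "forall in i in"
      insert into temp values (v_e(i), v_f(i));
    v_e.delete;                           -- reset between batches
    v_f.delete;
  end loop;
  close cur;
end;
/
```

Note also that the original `exit when cur%NOTFOUND` placed after the FORALL would still work, but testing `v_rows.count = 0` right after the fetch is the more robust idiom with `LIMIT`.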
Hi All,
I need a help to create a bulk statement. Please find the scenario below
I would like to copy table A from table B using bulk collect; table A has a lot of records (1 million). Before doing this I need to either truncate table B or drop the table, to then load the data from table A.
Please provide me the correct statement to achieve this request. Thanks in advance!!
Regards,
Boovan.

Disabling any indexes on the target should be looked at first. If there are none, then look at the above.
>When you do a direct path load the indexes are built after loading.
The point is that the direct path load does not avoid the undo due to the indexes.
In this example on a table with no indexes the undo used goes from 216kb to 16kb using append.
When an index is added the undo used goes up from 216kb to 704kb an increase of 488kb for a standard insert.
For the direct path insert the undo goes up from 16kb to 440kb so almost the full amount of undo due to the index.
So the presence of a single index can have a much greater impact on the amount of undo required than the use of a direct path load and that undo may not be avoided by the use of a direct path load unless the index is disabled beforehand.
Also note the tiny amounts of undo we are talking about for 50k rows.
SQL> create table t as select * from all_objects where 0 = 1;
Table created.
SQL> insert into t select * from all_objects;
56108 rows created.
SQL> select
2 used_ublk undo_used_blk,
3 used_ublk * blk_size_kb undo_used_kb,
4 log_io logical_io,
5 cr_get consistent_gets
6 from
7 v$transaction, v$session s,
8 (select distinct sid from v$mystat) m,
9 (select to_number(value)/1024 blk_size_kb
10 from v$parameter where name='db_block_size')
11 where
12 m.sid = s.sid
13 and ses_addr = saddr;
UNDO_USED_BLK UNDO_USED_KB LOGICAL_IO CONSISTENT_GETS
27 216 13893 1042736
SQL> rollback;
Rollback complete.
SQL> insert /*+ append */ into t select * from all_objects;
56108 rows created.
SQL> select
2 used_ublk undo_used_blk,
3 used_ublk * blk_size_kb undo_used_kb,
4 log_io logical_io,
5 cr_get consistent_gets
6 from
7 v$transaction, v$session s,
8 (select distinct sid from v$mystat) m,
9 (select to_number(value)/1024 blk_size_kb
10 from v$parameter where name='db_block_size')
11 where
12 m.sid = s.sid
13 and ses_addr = saddr;
UNDO_USED_BLK UNDO_USED_KB LOGICAL_IO CONSISTENT_GETS
2 16 1307 1041151
SQL> rollback;
Rollback complete.
SQL> create unique index t_idx on t (object_id);
Index created.
SQL> insert into t select * from all_objects;
56109 rows created.
SQL> select
2 used_ublk undo_used_blk,
3 used_ublk * blk_size_kb undo_used_kb,
4 log_io logical_io,
5 cr_get consistent_gets
6 from
7 v$transaction, v$session s,
8 (select distinct sid from v$mystat) m,
9 (select to_number(value)/1024 blk_size_kb
10 from v$parameter where name='db_block_size')
11 where
12 m.sid = s.sid
13 and ses_addr = saddr;
UNDO_USED_BLK UNDO_USED_KB LOGICAL_IO CONSISTENT_GETS
88 704 20908 1043193
SQL> rollback;
Rollback complete.
SQL> insert /*+ append */ into t select * from all_objects;
56109 rows created.
SQL> select
2 used_ublk undo_used_blk,
3 used_ublk * blk_size_kb undo_used_kb,
4 log_io logical_io,
5 cr_get consistent_gets
6 from
7 v$transaction, v$session s,
8 (select distinct sid from v$mystat) m,
9 (select to_number(value)/1024 blk_size_kb
10 from v$parameter where name='db_block_size')
11 where
12 m.sid = s.sid
13 and ses_addr = saddr;
UNDO_USED_BLK UNDO_USED_KB LOGICAL_IO CONSISTENT_GETS
57 456 2310 1041047