Collection.iterator consistency
According to the javadoc, when getting an iterator:
There are no guarantees concerning the order in which the elements are returned
However, I was wondering whether the iterator returned by a Collection always iterates in the same order (provided no changes are made to the Collection).
In other words, if the sixth Object I add to a Collection is the third one found during iteration, will it always be the third?
Or, in code:
Collection coll = getCollection();
Object third = null;
Object nextThird = null;
Object obj = null;
Iterator firstIter = coll.iterator();
for( int i = 0; i < 3; i++ )
obj = firstIter.next();
third = obj;
Iterator secondIter = coll.iterator();
for( int i = 0; i < 3; i++ )
obj = secondIter.next();
nextThird = obj;
boolean itWorks = (third == nextThird);
Will itWorks always be true?
Unless I am very mistaken, it depends on the Collection implementation you are using. For instance, anything that also implements List is required to return the items in the same order.
So yeah, it's probably better not to depend on it, unless you know that the specific collection type does guarantee an order via its Iterator.
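To make the distinction concrete, here is a small self-contained sketch (the class and method names are mine, not from the thread): List implementations and LinkedHashSet guarantee insertion order, while HashSet's order is an unspecified implementation detail that merely tends to be stable between two back-to-back iterations.

```java
import java.util.*;

public class IterationOrderDemo {
    // Returns the element found at the given 0-based iteration position.
    public static <T> T elementAt(Collection<T> coll, int position) {
        Iterator<T> it = coll.iterator();
        T current = null;
        for (int i = 0; i <= position; i++) {
            current = it.next();
        }
        return current;
    }

    public static void main(String[] args) {
        // List implementations guarantee iteration in insertion order.
        List<String> list = new ArrayList<>(List.of("a", "b", "c", "d"));
        System.out.println(elementAt(list, 2)); // always "c"

        // LinkedHashSet also guarantees insertion order. HashSet guarantees
        // nothing: two iterations over an unmodified HashSet usually agree in
        // practice, but that is an implementation detail, not a contract.
        Set<String> linked = new LinkedHashSet<>(List.of("a", "b", "c", "d"));
        System.out.println(elementAt(linked, 2)); // always "c"
    }
}
```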
Similar Messages
-
Report cannot be rendered (com/sun/java/util/collections/Iterator)
Hello all
I'm new both to Weblogic Server and BI Publisher.
A few days ago I thought I had managed to install BI Publisher on top of Weblogic. It turns out that is not true, because I am not able to view any report, whether it is a sample or a newly created one.
Platform: Windows 2003 32-bit
Weblogic version: 10.3.3.0
BI Publisher version: 10.1.3.4.1 (doesn't work either with or without the latest patchset 9791839)
And now to the problem. Whenever I try to view a report, I get an error message stating "The report cannot be rendered because of an error, please contact the administrator". Being both the user and the administrator, I am forced to press the "Error Detail" link, upon which the only thing that pops below is "com/sun/java/util/collections/Iterator" (in red).
The same non-verbose error message appears also when running in debug mode. The Weblogic logs are free of warnings, errors, etc.
As for the Weblogic Server, it claims that the xmlpserver application has been deployed and started successfully.
It seems to me that the BI Publisher application is trying to use a Java class that doesn't exist (com.sun.java.util.collections.Iterator). Of course I have no clue how to prove that, because I do not have the source code for this app.
Oracle support is hardly able to understand the problem, so I thought maybe one of you could give me some answer.
Any Ideas?
Jonathan
By the way, I deployed the app under Oracle Reports' cluster. I don't know whether it matters.
-
COLLECTION ITERATOR PICKLER FETCH along with XMLSEQUENCEFROMXMLTYPE
Hi All,
We have Oracle database 10.2.0.4 on solaris 10.
I found some XML queries which are consuming CPU and memory heavily; below is the execution plan for one of these XML SQL statements.
PLAN_TABLE_OUTPUT
SQL_ID gzsfqp1mkfk8t, child number 0
SELECT B.PACKET_ID FROM CM_PACKET_ALT_KEY B, CM_ALT_KEY_TYPE C, TABLE (XMLSEQUENCE (EXTRACT (:B1 ,
'/AlternateKeys/AlternateKey'))) T WHERE B.ALT_KEY_TYPE_ID = C.ALT_KEY_TYPE_ID AND C.ALT_KEY_TYPE_NAME = EXTRACTVALUE
(VALUE (T), '/AlternateKey/@keyType') AND B.ALT_KEY_VALUE = EXTRACTVALUE (VALUE (T), '/AlternateKey') AND NVL
(B.CHILD_BROKER_CODE, '6209870F57C254D6E04400306E4A78B0') = NVL (EXTRACTVALUE (VALUE (T), '/AlternateKey/@broker'),
'6209870F57C254D6E04400306E4A78B0')
Plan hash value: 855909818
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | | | 16864 (100)| | | |
|* 1 | HASH JOIN | | 45 | 3240 | 16864 (2)| 00:03:23 | | |
| 2 | TABLE ACCESS FULL | CM_ALT_KEY_TYPE | 5 | 130 | 6 (0)| 00:00:01 | | |
|* 3 | HASH JOIN | | 227 | 10442 | 16858 (2)| 00:03:23 | | |
| 4 | COLLECTION ITERATOR PICKLER FETCH| XMLSEQUENCEFROMXMLTYPE | | | | | | |
| 5 | PARTITION HASH ALL | | 10M| 447M| 16758 (2)| 00:03:22 | 1 | 16 |
| 6 | TABLE ACCESS FULL | CM_PACKET_ALT_KEY | 10M| 447M| 16758 (2)| 00:03:22 | 1 | 16 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
1 - access("B"."ALT_KEY_TYPE_ID"="C"."ALT_KEY_TYPE_ID" AND
"C"."ALT_KEY_TYPE_NAME"=SYS_OP_C2C(EXTRACTVALUE(VALUE(KOKBF$),'/AlternateKey/@keyType')))
3 - access("B"."ALT_KEY_VALUE"=EXTRACTVALUE(VALUE(KOKBF$),'/AlternateKey') AND
NVL("B"."CHILD_BROKER_CODE",'6209870F57C254D6E04400306E4A78B0')=NVL(EXTRACTVALUE(VALUE(KOKBF$),'/AlternateKey/@broker'
),'6209870F57C254D6E04400306E4A78B0'))
This seems to be due to:
1. COLLECTION ITERATOR PICKLER FETCH along with XMLSEQUENCEFROMXMLTYPE, which I think is due to the usage of TABLE( XMLSEQUENCE() )
2. Conversion taking place via the SYS_OP_C2C function, as shown in the Predicate Information.
3. The table is not using the XMLType datatype to store XML.
4. Wildcards have been used (/AlternateKey/@keyType).
Could anyone please help me tune this query, as I know very little about XML DB?
I am including one more SQL which also used to consume huge CPU and memory; these tables also do not have any column with the XMLType datatype.
SELECT /*+ INDEX(e) */ XMLAGG(XMLELEMENT ( "TaggingCategory", XMLATTRIBUTES (G.TAG_CATEGORY_CODE AS
"categoryType"), XMLELEMENT ("TaggingValue", XMLATTRIBUTES (C.IS_PRIMARY AS "primary", H.ORIGIN_CODE AS
"origin"), XMLAGG (XMLCONCAT (XMLELEMENT ("Value", XMLATTRIBUTES (F.TAG_LIST_CODE AS "listType"),
E.TAG_VALUE), CASE WHEN LEVEL = 1 THEN :B4 ELSE NULL END))) )) FROM TABLE (CAST (:B1 AS
T_TAG_MAP_HIERARCHY_TAB)) A, TABLE (CAST (:B2 AS T_ENUM_TAG_TAB)) C, REM_TAG_VALUE E, REM_TAG_LIST F,
REM_TAG_CATEGORY G, CM_ORIGIN H WHERE E.TAG_VALUE_ID = C.TAG_VALUE_ID AND F.TAG_LIST_ID = E.TAG_LIST_ID
AND G.TAGGING_CATEGORY_ID = F.TAGGING_CATEGORY_ID AND H.ORIGIN_ID = C.ORIGIN_ID AND C.ENUM_TAG_ID =
A.MAPPED_ENUM_TAG_ID GROUP BY G.TAG_CATEGORY_CODE, C.IS_PRIMARY, H.ORIGIN_CODE START WITH
A.MAPPED_ENUM_TAG_ID = HEXTORAW (:B3 ) CONNECT BY PRIOR A.MAPPED_ENUM_TAG_ID = A.ENUM_TAG_ID
Plan hash value: 2393257319
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 16455 (100)| |
| 1 | SORT AGGREGATE | | 1 | 185 | 16455 (1)| 00:03:18 |
| 2 | SORT GROUP BY | | 1 | 185 | 16455 (1)| 00:03:18 |
|* 3 | CONNECT BY WITH FILTERING | | | | | |
|* 4 | FILTER | | | | | |
| 5 | COUNT | | | | | |
|* 6 | HASH JOIN | | 667K| 117M| 16413 (1)| 00:03:17 |
| 7 | COLLECTION ITERATOR PICKLER FETCH | | | | | |
|* 8 | HASH JOIN | | 8168 | 1459K| 16384 (1)| 00:03:17 |
| 9 | TABLE ACCESS FULL | REM_TAG_CATEGORY | 25 | 950 | 5 (0)| 00:00:01 |
|* 10 | HASH JOIN | | 8168 | 1156K| 16378 (1)| 00:03:17 |
| 11 | TABLE ACCESS FULL | REM_TAG_LIST | 117 | 7137 | 5 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 8168 | 670K| 16373 (1)| 00:03:17 |
| 13 | MERGE JOIN | | 8168 | 215K| 27 (4)| 00:00:01 |
| 14 | TABLE ACCESS BY INDEX ROWID | CM_ORIGIN | 2 | 50 | 2 (0)| 00:00:01 |
| 15 | INDEX FULL SCAN | PK_CM_ORIGIN | 2 | | 1 (0)| 00:00:01 |
|* 16 | SORT JOIN | | 8168 | 16336 | 25 (4)| 00:00:01 |
| 17 | COLLECTION ITERATOR PICKLER FETCH| | | | | |
| 18 | TABLE ACCESS BY INDEX ROWID | REM_TAG_VALUE | 1 | 57 | 2 (0)| 00:00:01 |
|* 19 | INDEX UNIQUE SCAN | PK_REM_TAG_VALUE | 1 | | 1 (0)| 00:00:01 |
|* 20 | HASH JOIN | | | | | |
| 21 | CONNECT BY PUMP | | | | | |
| 22 | COUNT | | | | | |
|* 23 | HASH JOIN | | 667K| 117M| 16413 (1)| 00:03:17 |
| 24 | COLLECTION ITERATOR PICKLER FETCH | | | | | |
|* 25 | HASH JOIN | | 8168 | 1459K| 16384 (1)| 00:03:17 |
| 26 | TABLE ACCESS FULL | REM_TAG_CATEGORY | 25 | 950 | 5 (0)| 00:00:01 |
|* 27 | HASH JOIN | | 8168 | 1156K| 16378 (1)| 00:03:17 |
| 28 | TABLE ACCESS FULL | REM_TAG_LIST | 117 | 7137 | 5 (0)| 00:00:01 |
| 29 | NESTED LOOPS | | 8168 | 670K| 16373 (1)| 00:03:17 |
| 30 | MERGE JOIN | | 8168 | 215K| 27 (4)| 00:00:01 |
| 31 | TABLE ACCESS BY INDEX ROWID | CM_ORIGIN | 2 | 50 | 2 (0)| 00:00:01 |
| 32 | INDEX FULL SCAN | PK_CM_ORIGIN | 2 | | 1 (0)| 00:00:01 |
|* 33 | SORT JOIN | | 8168 | 16336 | 25 (4)| 00:00:01 |
| 34 | COLLECTION ITERATOR PICKLER FETCH| | | | | |
| 35 | TABLE ACCESS BY INDEX ROWID | REM_TAG_VALUE | 1 | 57 | 2 (0)| 00:00:01 |
|* 36 | INDEX UNIQUE SCAN | PK_REM_TAG_VALUE | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access(SYS_OP_ATG(VALUE(KOKBF$),1,2,2)=PRIOR NULL)
4 - filter(SYS_OP_ATG(VALUE(KOKBF$),2,3,2)=HEXTORAW(:B3))
6 - access(SYS_OP_ATG(VALUE(KOKBF$),1,2,2)=SYS_OP_ATG(VALUE(KOKBF$),2,3,2))
8 - access("G"."TAGGING_CATEGORY_ID"="F"."TAGGING_CATEGORY_ID")
10 - access("F"."TAG_LIST_ID"="E"."TAG_LIST_ID")
16 - access("H"."ORIGIN_ID"=SYS_OP_ATG(VALUE(KOKBF$),3,4,2))
filter("H"."ORIGIN_ID"=SYS_OP_ATG(VALUE(KOKBF$),3,4,2))
19 - access("E"."TAG_VALUE_ID"=SYS_OP_ATG(VALUE(KOKBF$),7,8,2))
20 - access(SYS_OP_ATG(VALUE(KOKBF$),1,2,2)=PRIOR NULL)
23 - access(SYS_OP_ATG(VALUE(KOKBF$),1,2,2)=SYS_OP_ATG(VALUE(KOKBF$),2,3,2))
25 - access("G"."TAGGING_CATEGORY_ID"="F"."TAGGING_CATEGORY_ID")
27 - access("F"."TAG_LIST_ID"="E"."TAG_LIST_ID")
33 - access("H"."ORIGIN_ID"=SYS_OP_ATG(VALUE(KOKBF$),3,4,2))
filter("H"."ORIGIN_ID"=SYS_OP_ATG(VALUE(KOKBF$),3,4,2))
36 - access("E"."TAG_VALUE_ID"=SYS_OP_ATG(VALUE(KOKBF$),7,8,2))
-Yasser
Edited by: YasserRACDBA on Feb 24, 2010 8:30 PM
Added one more SQL.
Looking at the second query, it too has a lot of bind variables. Can you find out the types and values of each bind? Also, I'm suspicious about the use of XMLCONCAT. Can you find out why the developer is using it?
SELECT /*+ INDEX(e) */
       XMLAGG (
         XMLELEMENT (
           "TaggingCategory",
           XMLATTRIBUTES (G.TAG_CATEGORY_CODE AS "categoryType"),
           XMLELEMENT (
             "TaggingValue",
             XMLATTRIBUTES (C.IS_PRIMARY AS "primary", H.ORIGIN_CODE AS "origin"),
             XMLAGG (
               XMLCONCAT (
                 XMLELEMENT (
                   "Value",
                   XMLATTRIBUTES (F.TAG_LIST_CODE AS "listType"),
                   E.TAG_VALUE
                 ),
                 CASE WHEN LEVEL = 1
                      THEN :B4
                      ELSE NULL
                 END
               )
             )
           )
         )
       )
FROM TABLE (CAST (:B1 AS T_TAG_MAP_HIERARCHY_TAB)) A,
TABLE (CAST (:B2 AS T_ENUM_TAG_TAB)) C,
REM_TAG_VALUE E,
REM_TAG_LIST F,
REM_TAG_CATEGORY G,
CM_ORIGIN H
WHERE E.TAG_VALUE_ID = C.TAG_VALUE_ID
AND F.TAG_LIST_ID = E.TAG_LIST_ID
AND G.TAGGING_CATEGORY_ID = F.TAGGING_CATEGORY_ID
AND H.ORIGIN_ID = C.ORIGIN_ID
AND C.ENUM_TAG_ID = A.MAPPED_ENUM_TAG_ID
GROUP BY G.TAG_CATEGORY_CODE, C.IS_PRIMARY, H.ORIGIN_CODE
START WITH A.MAPPED_ENUM_TAG_ID = HEXTORAW (:B3 )
CONNECT BY PRIOR A.MAPPED_ENUM_TAG_ID = A.ENUM_TAG_ID
Edited by: mdrake on Feb 24, 2010 8:11 AM
-
How to tune COLLECTION ITERATOR CONSTRUCTOR FETCH
Hi Gurus
I ran EXPLAIN PLAN for one of my SQL statements.
It has a "COLLECTION ITERATOR CONSTRUCTOR FETCH" step and consumes a lot of time and cost. Please share your tips or tricks to improve the time or cost at least a little.
Thanks
Agreed, the CARDINALITY hint is not safe to use, as it is undocumented.
But Tom says :
one of the few 'safe' undocumented things to use.
Because its use will not lead to data corruption, wrong answers, or unpredictable outcomes. If it works, it will influence a query plan; if it doesn't, it won't. That is all - it is rather 'safe' in that respect.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2233040800346569775
Cheers,
Manik. -
What is the best Collection/Iterator to use?
Anyone come across this scenario before?
I have a list of elements that I want to put into a Collection/List of some description. I then want to iterate through the list searching for an element that matches my requirements.
When I find one, I then want to 'read ahead' through the list to see if there is a later element matching my requirements.
If I find a match, I'd like to be able to delete both elements from the list and then continue reading through the list after the first element I found.
I've had a look at ListIterator, but this doesn't seem to allow you to re-position the cursor or even restart() the iterator.
Can anyone tell me if any or all of this is possible?
thanks
Place the items in a Set. Since Sets don't allow duplicates, that problem is solved. Then just iterate the Set and delete the entries you don't want.
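Another option: ListIterator can in fact be positioned at creation time via List.listIterator(int index), which is enough for the "read ahead, then delete both" requirement. A minimal sketch, assuming exact string equality as the matching rule (the class name, method name, and matching rule are hypothetical, not from the thread):

```java
import java.util.*;

public class PairRemover {
    // Removes the first element equal to `first` together with a later element
    // equal to `second`; returns true if such a pair was found and removed.
    public static boolean removePair(List<String> list, String first, String second) {
        int i = list.indexOf(first);
        if (i < 0) return false;
        // Reposition: start a ListIterator just past the first match and read ahead.
        ListIterator<String> it = list.listIterator(i + 1);
        while (it.hasNext()) {
            if (it.next().equals(second)) {
                it.remove();     // delete the read-ahead match via the iterator
                list.remove(i);  // then delete the first match by index (iterator no longer used)
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> data = new ArrayList<>(List.of("x", "a", "y", "b", "z"));
        boolean removed = removePair(data, "a", "b");
        System.out.println(removed + " " + data); // true [x, y, z]
    }
}
```

Note the order of the two removals: the iterator's own remove() comes first, and the index-based remove is safe afterwards only because the iterator is not touched again.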
-
Collection iteration weirdness
Hey folks - I am having a weird problem. If I loop through a collection that has a filterFunction applied and make changes to the property values of the objects in it, and the objects are bindable or otherwise dispatch PropertyChangeEvents, then the collection executes the filterFunction immediately. Normally this would be fine, but after the filterFunction runs, the object gets moved to the end of the collection. As you can imagine, re-ordering objects while trying to iterate through them produces some strange results. I don't know if
this has made sense, but please look at my attached sample
application that demonstrates the problem. Can somebody give some
insight into why the objects are getting re-ordered?
[code]
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
initialize="init()">
<mx:Script>
<![CDATA[
import mx.utils.ObjectProxy;
import mx.collections.ArrayCollection;
private function init():void
{
    // create new collection
    var collection:ArrayCollection = new ArrayCollection();
    // add some dummy objects and wrap them in an ObjectProxy
    collection.addItem( new ObjectProxy({id:"A", foo:"bar"}) );
    collection.addItem( new ObjectProxy({id:"B", foo:"bar"}) );
    collection.addItem( new ObjectProxy({id:"C", foo:"bar"}) );
    collection.addItem( new ObjectProxy({id:"D", foo:"bar"}) );
    collection.addItem( new ObjectProxy({id:"E", foo:"bar"}) );
    collection.addItem( new ObjectProxy({id:"F", foo:"bar"}) );
    collection.addItem( new ObjectProxy({id:"G", foo:"bar"}) );
    collection.addItem( new ObjectProxy({id:"H", foo:"bar"}) );
    collection.addItem( new ObjectProxy({id:"I", foo:"bar"}) );
    collection.addItem( new ObjectProxy({id:"J", foo:"bar"}) );
    collection.addItem( new ObjectProxy({id:"K", foo:"bar"}) );
    // apply a filter (which always returns true so really
    // shouldn't affect anything)
    collection.filterFunction = filterObjs;
    collection.refresh();
    // do a simple loop through the collection and
    // print out the values
    trace("Printing out collection values...");
    for each (var o:Object in collection)
        trace(o.id);
    // do another simple loop through the collection
    // but this time modify one of the property values.
    // notice how the iteration through the collection
    // goes awry!
    trace("Modifying a property on each object...");
    for each (o in collection)
    {
        // print out which object we're at
        trace(o.id);
        // change a property value
        o.foo = "something";
    }
}
private function filterObjs(o:Object):Boolean
{
    return true;
}
]]>
</mx:Script>
</mx:Application>
[/code]
Any thoughts on this would be much appreciated.
Regards,
Ryan
DECLARE
  TYPE type_a IS RECORD(field_a NUMBER, field_b VARCHAR2(50));
  TYPE type_table IS TABLE OF type_a INDEX BY VARCHAR2(50);
  my_table type_table;
  indx VARCHAR2(50);
BEGIN
  -- An associative array indexed by VARCHAR2 cannot be scanned with a
  -- 1..count integer loop; walk it with FIRST/NEXT instead.
  indx := my_table.FIRST;
  WHILE indx IS NOT NULL
  LOOP
    dbms_output.put_line(my_table(indx).field_a);
    indx := my_table.NEXT(indx);
  END LOOP;
END; -
Managing statistics for object collections used as table types in SQL
Hi All,
Is there a way to manage statistics for collections used as table types in SQL.
Below is my test case
Oracle Version :
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
SQL>
Original Query :
SELECT
9999,
tbl_typ.FILE_ID,
tf.FILE_NM ,
tf.MIME_TYPE ,
dbms_lob.getlength(tfd.FILE_DATA)
FROM
TG_FILE tf,
TG_FILE_DATA tfd,
(
SELECT *
FROM
TABLE (
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
)
) tbl_typ
WHERE
tf.FILE_ID = tfd.FILE_ID
AND tf.FILE_ID = tbl_typ.FILE_ID
AND tfd.FILE_ID = tbl_typ.FILE_ID;
Elapsed: 00:00:02.90
Execution Plan
Plan hash value: 3970072279
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 194 | 4567 (2)| 00:00:55 |
|* 1 | HASH JOIN | | 1 | 194 | 4567 (2)| 00:00:55 |
|* 2 | HASH JOIN | | 8168 | 287K| 695 (3)| 00:00:09 |
| 3 | VIEW | | 8168 | 103K| 29 (0)| 00:00:01 |
| 4 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 8168 | 16336 | 29 (0)| 00:00:01 |
| 5 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 6 | TABLE ACCESS FULL | TG_FILE | 565K| 12M| 659 (2)| 00:00:08 |
| 7 | TABLE ACCESS FULL | TG_FILE_DATA | 852K| 128M| 3863 (1)| 00:00:47 |
Predicate Information (identified by operation id):
1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
Statistics
7 recursive calls
0 db block gets
16783 consistent gets
16779 physical reads
0 redo size
916 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
Indexes are present in both tables ( TG_FILE, TG_FILE_DATA ) on column FILE_ID.
select
index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
from
all_indexes
where table_name in ('TG_FILE','TG_FILE_DATA');
INDEX_NAME BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR NUM_ROWS SAMPLE_SIZE
TG_FILE_PK 2 2160 552842 21401 552842 285428
TG_FILE_DATA_PK 2 3544 852297 61437 852297 852297
Ideally the view should have used a NESTED LOOP join to take advantage of the indexes, since the number of rows coming from the object collection is only 2.
But it takes the default estimate of 8168, leading to a HASH join between the tables and hence FULL TABLE access.
So my question is: is there any way to change the statistics while using collections in SQL?
I can use hints to force the indexes, but I am trying to avoid that for now. Currently the time shown in the explain plan is not accurate.
Modified query with hints :
SELECT
/*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
9999,
tbl_typ.FILE_ID,
tf.FILE_NM ,
tf.MIME_TYPE ,
dbms_lob.getlength(tfd.FILE_DATA)
FROM
TG_FILE tf,
TG_FILE_DATA tfd,
(
SELECT *
FROM
TABLE (
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
)
) tbl_typ
WHERE
tf.FILE_ID = tfd.FILE_ID
AND tf.FILE_ID = tbl_typ.FILE_ID
AND tfd.FILE_ID = tbl_typ.FILE_ID;
Elapsed: 00:00:00.01
Execution Plan
Plan hash value: 1670128954
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 194 | 29978 (1)| 00:06:00 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 1 | 194 | 29978 (1)| 00:06:00 |
| 3 | NESTED LOOPS | | 8168 | 1363K| 16379 (1)| 00:03:17 |
| 4 | VIEW | | 8168 | 103K| 29 (0)| 00:00:01 |
| 5 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 8168 | 16336 | 29 (0)| 00:00:01 |
| 6 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 7 | TABLE ACCESS BY INDEX ROWID | TG_FILE_DATA | 1 | 158 | 2 (0)| 00:00:01 |
|* 8 | INDEX UNIQUE SCAN | TG_FILE_DATA_PK | 1 | | 1 (0)| 00:00:01 |
|* 9 | INDEX UNIQUE SCAN | TG_FILE_PK | 1 | | 1 (0)| 00:00:01 |
| 10 | TABLE ACCESS BY INDEX ROWID | TG_FILE | 1 | 23 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
filter("TF"."FILE_ID"="TFD"."FILE_ID")
Statistics
0 recursive calls
0 db block gets
16 consistent gets
8 physical reads
0 redo size
916 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
Thanks,
B
Thanks Tubby,
While searching I found that we can use the CARDINALITY hint to set statistics for a TABLE function.
But I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it when posting for the first time.
http://www.oracle-developer.net/display.php?id=427
Going through that document, it mentions four approaches in total to set statistics:
1) CARDINALITY (Undocumented)
2) OPT_ESTIMATE ( Undocumented )
3) DYNAMIC_SAMPLING ( Documented )
4) Extensible Optimiser
I tried it out with the different hints and they work as expected,
i.e. CARDINALITY and OPT_ESTIMATE take the value that was set,
but the DYNAMIC_SAMPLING hint provides the most accurate estimate of the rows (which is 2 in this particular case).
With CARDINALITY hint
SELECT
/*+ cardinality( e, 5) */*
FROM
TABLE (
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
) e ;
Elapsed: 00:00:00.00
Execution Plan
Plan hash value: 1467416936
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 10 | 29 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 5 | 10 | 29 (0)| 00:00:01 |
| 2 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
With OPT_ESTIMATE hint
SELECT
/*+ opt_estimate(table, e, scale_rows=0.0006) */*
FROM
TABLE (
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
) e ;
Execution Plan
Plan hash value: 4043204977
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 485 | 29 (0)| 00:00:01 |
| 1 | VIEW | | 5 | 485 | 29 (0)| 00:00:01 |
| 2 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 5 | 10 | 29 (0)| 00:00:01 |
| 3 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
With DYNAMIC_SAMPLING hint
SELECT
/*+ dynamic_sampling( e, 5) */*
FROM
TABLE (
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
) e ;
Elapsed: 00:00:00.00
Execution Plan
Plan hash value: 1467416936
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 4 | 11 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 2 | 4 | 11 (0)| 00:00:01 |
| 2 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
Note
- dynamic sampling used for this statement (level=2)
I will be testing the last option, "Extensible Optimizer", and will put my findings here.
I hope that in future releases Oracle improves statistics gathering for collections used in DML, instead of just using the default block size.
By the way, do you know why it uses the default block size? Is it because it is the smallest granular unit that Oracle provides?
Regards,
B -
The following code uses iterator.remove() and ArrayList.remove().
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class CollectionRemove {

    private static void checkIteratorRemove() {
        List<String> list = new ArrayList<String>();
        list.add("1");
        list.add("2");
        list.add("3");
        System.out.println("in checkWithIterator*************");
        Iterator<String> iter = list.iterator();
        while (iter.hasNext()) {
            String str = iter.next();
            if (str.equals("2")) {
                iter.remove();
            }
            System.out.println("list Size: " + list.size() + " Element: " + str);
        }
    }

    private static void checkListRemove() {
        List<String> list = new ArrayList<String>();
        list.add("1");
        list.add("2");
        list.add("3");
        System.out.println("in ncheckWithForLoop*************");
        Iterator<String> iter = list.iterator();
        while (iter.hasNext()) {
            String str = iter.next();
            if (str.equals("2")) {
                list.remove(str); // structural change behind the iterator's back
            }
            System.out.println("list Size: " + list.size() + " Element: " + str);
        }
    }

    public static void main(String[] args) {
        checkIteratorRemove();
        checkListRemove();
    }
}
output is :
in checkWithIterator*************
list Size: 3 Element: 1
list Size: 2 Element: 2
list Size: 2 Element: 3
in ncheckWithForLoop*************
list Size: 3 Element: 1
list Size: 2 Element: 2
Why is this difference? What is the difference between iterator.remove() and ArrayList.remove()?
In the case of a fail-fast iterator, if a thread modifies a collection directly while iterating over it, the iterator will throw ConcurrentModificationException. Say,
for (Iterator it = collection.iterator(); it.hasNext(); ) {
    Object object = it.next();
    if (isConditionTrue) {
        // collection.remove(object); can throw ConcurrentModificationException
        it.remove(); // Iterator.remove() takes no argument
    }
}
As per the specs,
Note that this exception does not always indicate that an object has been concurrently modified by a different thread. If a single thread issues a sequence of method invocations that violates the contract of an object, the object may throw this exception. For example, if a thread modifies a collection directly while it is iterating over the collection with a fail-fast iterator, the iterator will throw this exception.
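For completeness, since Java 8 the explicit iter.remove() loop above can usually be replaced by Collection.removeIf, which drives the collection's own iterator internally and is therefore equally safe. A minimal sketch:

```java
import java.util.*;

public class RemoveIfDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(List.of("1", "2", "3"));
        // removeIf iterates with the list's own iterator and removes matching
        // elements safely, so it is the concise equivalent of the
        // explicit iter.remove() loop shown in the thread.
        list.removeIf(s -> s.equals("2"));
        System.out.println(list); // [1, 3]
    }
}
```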
How to use results of ejbfind methods when it is a collection ?
How can I use an ejbFind method's result when it is a collection?
Hi thank you for reading my post.
EJB finder methods return a Collection, and I want to know how I can use that collection.
Should I use Collection.toArray() and then use that array by casting it?
What are DTOs, and how can I use them in this case?
How can I use the returned collection in a Swing application, since it is a collection of EJBs?
Should I cast it back to the EJB class it comes from, or is there some other way?
For example, converting it to an array of DTOs (in the session bean) and returning that to the Swing application?
Or are there other ways?
Thank you
Hi
please, someone answer
Collection collection = <home-interface>.<finderMethod>(...);
Iterator iter = collection.iterator();
while (iter.hasNext()) {
    <remote-interface> entityEJB = (<remote-interface>) iter.next();
}
What if I do the above job in a session bean, convert the result to a DTO, and pass the DTO back?
thank you -
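The session-bean-plus-DTO idea in the question can be sketched like this. Everything here is illustrative: `Customer` stands in for the entity bean's remote interface and `CustomerDTO` is a hypothetical serializable carrier for the Swing client; neither name comes from the thread.

```java
import java.io.Serializable;
import java.util.*;

public class DtoDemo {
    // Stand-in for the entity bean's remote interface.
    public interface Customer {
        String getName();
    }

    // Plain serializable Data Transfer Object sent back to the Swing client.
    public static class CustomerDTO implements Serializable {
        public final String name;
        public CustomerDTO(String name) { this.name = name; }
    }

    // Session-bean-style conversion: iterate the finder result once,
    // cast each element back to the remote interface, and copy its
    // state into a detached DTO list.
    public static List<CustomerDTO> toDtos(Collection<?> finderResult) {
        List<CustomerDTO> dtos = new ArrayList<>();
        for (Object o : finderResult) {
            Customer c = (Customer) o; // the cast the thread asks about
            dtos.add(new CustomerDTO(c.getName()));
        }
        return dtos;
    }

    public static void main(String[] args) {
        // Simulate a finder result with two stub "remote" objects.
        Collection<Customer> found = List.of(() -> "alice", () -> "bob");
        System.out.println(toDtos(found).size()); // 2
    }
}
```

The design benefit is the one implied in the question: the Swing client receives plain serializable objects and never touches remote EJB references directly.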
Really really really really want to modify objects in an iterator call
Hey,
I know you are not really allowed to do this, but I am iterating over some objects and I want to modify them while iterating over them without getting a ConcurrentModificationException.
Collection collection = myHashMap.values();
for (Iterator iter = collection.iterator(); iter.hasNext(); ) {
    MyObject obj = (MyObject) iter.next();
    modifySomethingWithinMyObject(obj);
}
not only does modifySomethingWithinMyObject(obj) modify the object being passed in, but it may also modify other values within the HashMap.
This is really killing me. Does anyone know how to get around this?
I know you are not really allowed to do this, but I am iterating over some objects and I want to modify them while iterating over them without getting a ConcurrentModificationException.
If you're using HashMap in a concurrent situation you should synchronize it. That takes care of the ConcurrentModificationException.
You're not allowed to modify the HashMap's structure while iterating through it (other than by using the methods available in the Iterator itself). If you do, you will get a ConcurrentModificationException.
You can modify the objects stored in the HashMap as long as you don't modify anything used by the equals or hashCode methods (if you've overridden them), but in a concurrent situation you should synchronize all object methods that modify the objects' content.
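The distinction the answer draws can be shown in a few lines. A minimal sketch (the map contents and the "-derived" key suffix are made up for illustration): mutating the stored values during iteration is always allowed, while structural changes (adding or removing keys) trip the fail-fast iterator unless you iterate over a snapshot copy of the key set.

```java
import java.util.*;

public class SnapshotIteration {
    public static void main(String[] args) {
        Map<String, List<Integer>> map = new HashMap<>();
        map.put("a", new ArrayList<>(List.of(1)));
        map.put("b", new ArrayList<>(List.of(2)));

        // Iterating a snapshot of the key set leaves the live map free to
        // change structurally inside the loop body.
        for (String key : new ArrayList<>(map.keySet())) {
            map.get(key).add(99);                         // value mutation: safe either way
            map.put(key + "-derived", new ArrayList<>()); // structural change: safe only via the snapshot
        }
        System.out.println(map.size()); // started with 2 entries, now 4
    }
}
```

The snapshot costs one array copy of the keys, which is usually negligible next to the alternative of collecting pending changes and applying them after the loop.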
10g: delay for collecting results from parallel pipelined table functions
When parallel pipelined table functions are properly started and generate output records, there is a delay before the consuming main thread gathers these records.
This delay is huge compared with the run-time of the worker threads.
For my application it goes like this:
main thread timing efforts to start worker and collect their results:
[10:50:33-*10:50:49*]:JOMA: create (master): 015.93 sec (#66356 records, #4165/sec)
worker threads:
[10:50:34-*10:50:39*]:JOMA: create (slave) : 005.24 sec (#2449 EDRs, #467/sec, #0 errored / #6430 EBTMs, #1227/sec, #0 errored) - bulk #1 / sid #816
[10:50:34-*10:50:39*]:JOMA: create (slave) : 005.56 sec (#2543 EDRs, #457/sec, #0 errored / #6792 EBTMs, #1221/sec, #0 errored) - bulk #1 / sid #718
[10:50:34-*10:50:39*]:JOMA: create (slave) : 005.69 sec (#2610 EDRs, #459/sec, #0 errored / #6950 EBTMs, #1221/sec, #0 errored) - bulk #1 / sid #614
[10:50:34-*10:50:39*]:JOMA: create (slave) : 005.55 sec (#2548 EDRs, #459/sec, #0 errored / #6744 EBTMs, #1216/sec, #0 errored) - bulk #1 / sid #590
[10:50:34-*10:50:39*]:JOMA: create (slave) : 005.33 sec (#2461 EDRs, #462/sec, #0 errored / #6504 EBTMs, #1220/sec, #0 errored) - bulk #1 / sid #508
You can see that the worker threads are all started at the same time and terminate at the same time: 10:50:34-10:50:*39*.
But the main thread, which just invokes them and saves their results into a collection, did not finish until 10:50:*49*.
Why does it need #10 sec more just to save the data?
Here's a sample sqlplus script to demonstrate this:
--------------------------- snip -------------------------------------------------------
set serveroutput on;
drop table perf_data;
drop table test_table;
drop table tmp_test_table;
drop type ton_t;
drop type test_list;
drop type test_obj;
create table perf_data (
  sid number,
  t1 timestamp with time zone,
  t2 timestamp with time zone,
  client varchar2(256)
);
create table test_table (
  a number(19,0),
  b timestamp with time zone,
  c varchar2(256)
);
create global temporary table tmp_test_table (
  a number(19,0),
  b timestamp with time zone,
  c varchar2(256)
);
create or replace type test_obj as object(
  a number(19,0),
  b timestamp with time zone,
  c varchar2(256)
);
/
create or replace type test_list as table of test_obj;
/
create or replace type ton_t as table of number;
/
create or replace package test_pkg
as
  type test_rec is record (
    a number(19,0),
    b timestamp with time zone,
    c varchar2(256)
  );
  type test_tab is table of test_rec;
  type test_cur is ref cursor return test_rec;
  function TZDeltaToMilliseconds(
    t1 in timestamp with time zone,
    t2 in timestamp with time zone)
    return pls_integer;
  function TF(mycur test_cur)
    return test_list pipelined
    parallel_enable(partition mycur by hash(a));
end;
/
create or replace package body test_pkg
as
  /*
   * Calculate timestamp with timezone difference
   * in milliseconds
   */
function TZDeltaToMilliseconds(
t1 in timestamp with time zone,
t2 in timestamp with time zone)
return pls_integer
is
begin
return (extract(hour from t2) - extract(hour from t1)) * 3600 * 1000
+ (extract(minute from t2) - extract(minute from t1)) * 60 * 1000
+ (extract(second from t2) - extract(second from t1)) * 1000;
end TZDeltaToMilliseconds;
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a))
is
pragma autonomous_transaction;
sid number;
counter number(19,0) := 0;
myrec test_rec;
mytab test_tab;
mytab2 test_list := test_list();
t1 timestamp with time zone;
t2 timestamp with time zone;
begin
t1 := systimestamp;
select userenv('SID') into sid from dual;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
loop
fetch mycur into myRec;
exit when mycur%NOTFOUND;
mytab2.extend;
mytab2(mytab2.last) := test_obj(myRec.a, myRec.b, myRec.c);
end loop;
for i in mytab2.first..mytab2.last loop
-- attention: saves own SID in test_obj.a for indication to caller
-- how many sids have been involved
pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c));
pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c)); -- duplicate
pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c)); -- duplicate once again
counter := counter + 1;
end loop;
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
commit;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
end;
end;
declare
myList test_list := test_list();
myList2 test_list := test_list();
sids ton_t := ton_t();
sid number;
t1 timestamp with time zone;
t2 timestamp with time zone;
procedure LogPerfTable
is
type ton is table of number;
type tot is table of timestamp with time zone;
type clients_t is table of varchar2(256);
sids ton;
t1s tot;
t2s tot;
clients clients_t;
deltaTime integer;
btsPerSecond number(19,0);
edrsPerSecond number(19,0);
begin
select sid, t1, t2, client bulk collect into sids, t1s, t2s, clients from perf_data order by client;
if clients.count > 0 then
for i in clients.FIRST .. clients.LAST loop
deltaTime := test_pkg.TZDeltaToMilliseconds(t1s(i), t2s(i));
if deltaTime = 0 then deltaTime := 1; end if;
dbms_output.put_line(
'[' || to_char(t1s(i), 'hh:mi:ss') ||
'-' || to_char(t2s(i), 'hh:mi:ss') ||
']:' ||
' client ' || clients(i) || ' / sid #' || sids(i));
end loop;
end if;
end LogPerfTable;
begin
select userenv('SID') into sid from dual;
for i in 1..200000 loop
myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
end loop;
-- save into the real table
insert into test_table select * from table(cast (myList as test_list));
-- save into the tmp table
insert into tmp_test_table select * from table(cast (myList as test_list));
dbms_output.put_line(chr(10) || '(1) copy ''mylist'' to ''mylist2'' by streaming via table function...');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from table(cast (myList as test_list)) tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(2) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(3) copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
end;
--------------------------- snap -------------------------------------------------------
best regards,
Frank
Hello
I think the delay you are seeing is down to choosing the partitioning method as HASH. When you specify anything other than ANY, an additional buffer sort is included in the execution plan...
create or replace package test_pkg
as
type test_rec is record (
a number(19,0),
b timestamp with time zone,
c varchar2(256));
type test_tab is table of test_rec;
type test_cur is ref cursor return test_rec;
function TZDeltaToMilliseconds(
t1 in timestamp with time zone,
t2 in timestamp with time zone)
return pls_integer;
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a));
function TF_Any(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by ANY);
end;
create or replace package body test_pkg
as
/*
* Calculate timestamp with timezone difference
* in milliseconds
*/
function TZDeltaToMilliseconds(
t1 in timestamp with time zone,
t2 in timestamp with time zone)
return pls_integer
is
begin
return (extract(hour from t2) - extract(hour from t1)) * 3600 * 1000
+ (extract(minute from t2) - extract(minute from t1)) * 60 * 1000
+ (extract(second from t2) - extract(second from t1)) * 1000;
end TZDeltaToMilliseconds;
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a))
is
pragma autonomous_transaction;
sid number;
counter number(19,0) := 0;
myrec test_rec;
t1 timestamp with time zone;
t2 timestamp with time zone;
begin
t1 := systimestamp;
select userenv('SID') into sid from dual;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
loop
fetch mycur into myRec;
exit when mycur%NOTFOUND;
-- attention: saves own SID in test_obj.a for indication to caller
-- how many sids have been involved
pipe row(test_obj(sid, myRec.b, myRec.c));
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate once again
counter := counter + 1;
end loop;
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
commit;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
return;
end;
function TF_any(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by ANY)
is
pragma autonomous_transaction;
sid number;
counter number(19,0) := 0;
myrec test_rec;
t1 timestamp with time zone;
t2 timestamp with time zone;
begin
t1 := systimestamp;
select userenv('SID') into sid from dual;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
loop
fetch mycur into myRec;
exit when mycur%NOTFOUND;
-- attention: saves own SID in test_obj.a for indication to caller
-- how many sids have been involved
pipe row(test_obj(sid, myRec.b, myRec.c));
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate once again
counter := counter + 1;
end loop;
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
commit;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
return;
end;
end;
explain plan for
select /*+ first_rows */ test_obj(a, b, c)
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
select * from table(dbms_xplan.display);
Plan hash value: 1037943675
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 8168 | 3972K| 20 (0)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,01 | P->S | QC (RAND) |
| 3 | BUFFER SORT | | 8168 | 3972K| | | Q1,01 | PCWP | |
| 4 | VIEW | | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,01 | PCWP | |
| 5 | COLLECTION ITERATOR PICKLER FETCH| TF | | | | | Q1,01 | PCWP | |
| 6 | PX RECEIVE | | 931K| 140M| 136 (2)| 00:00:02 | Q1,01 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWC | |
| 9 | TABLE ACCESS FULL | TEST_TABLE | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWP | |
Note
- dynamic sampling used for this statement
explain plan for
select /*+ first_rows */ test_obj(a, b, c)
from table(test_pkg.TF_Any(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
select * from table(dbms_xplan.display);
Plan hash value: 4097140875
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 8168 | 3972K| 20 (0)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 3 | VIEW | | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,00 | PCWP | |
| 4 | COLLECTION ITERATOR PICKLER FETCH| TF_ANY | | | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWC | |
| 6 | TABLE ACCESS FULL | TEST_TABLE | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWP | |
Note
- dynamic sampling used for this statement
I posted about it here a few years ago and I more recently posted a question on Asktom. Unfortunately Tom was not able to find a technical reason for it to be there so I'm still a little in the dark as to why it is needed. The original question I posted is here:
Pipelined function partition by hash has extra sort#
I ran your tests with HASH vs ANY and the results are in line with the observations above....
declare
myList test_list := test_list();
myList2 test_list := test_list();
sids ton_t := ton_t();
sid number;
t1 timestamp with time zone;
t2 timestamp with time zone;
procedure LogPerfTable
is
type ton is table of number;
type tot is table of timestamp with time zone;
type clients_t is table of varchar2(256);
sids ton;
t1s tot;
t2s tot;
clients clients_t;
deltaTime integer;
btsPerSecond number(19,0);
edrsPerSecond number(19,0);
begin
select sid, t1, t2, client bulk collect into sids, t1s, t2s, clients from perf_data order by client;
if clients.count > 0 then
for i in clients.FIRST .. clients.LAST loop
deltaTime := test_pkg.TZDeltaToMilliseconds(t1s(i), t2s(i));
if deltaTime = 0 then deltaTime := 1; end if;
dbms_output.put_line(
'[' || to_char(t1s(i), 'hh:mi:ss') ||
'-' || to_char(t2s(i), 'hh:mi:ss') ||
']:' ||
' client ' || clients(i) || ' / sid #' || sids(i));
end loop;
end if;
end LogPerfTable;
begin
select userenv('SID') into sid from dual;
for i in 1..200000 loop
myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
end loop;
-- save into the real table
insert into test_table select * from table(cast (myList as test_list));
-- save into the tmp table
insert into tmp_test_table select * from table(cast (myList as test_list));
dbms_output.put_line(chr(10) || '(1) copy ''mylist'' to ''mylist2'' by streaming via table function...');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from table(cast (myList as test_list)) tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(2) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(3) copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(4) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function ANY:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF_any(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(5) copy physical ''test_table'' to ''mylist2'' by streaming via table function using ANY:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF_any(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
end;
(1) copy 'mylist' to 'mylist2' by streaming via table function...
test_pkg.TF( sid => '918' ): enter
test_pkg.TF( sid => '918' ): exit, piped #200000 records
[01:40:19-01:40:29]: client master / sid #918
[01:40:19-01:40:29]: client slave / sid #918
... saved #600000 records
(2) copy temporary 'tmp_test_table' to 'mylist2' by streaming via table function:
[01:40:31-01:40:36]: client master / sid #918
[01:40:31-01:40:32]: client slave / sid #659
[01:40:31-01:40:32]: client slave / sid #880
[01:40:31-01:40:32]: client slave / sid #1045
[01:40:31-01:40:32]: client slave / sid #963
[01:40:31-01:40:32]: client slave / sid #712
... saved #600000 records
(3) copy physical 'test_table' to 'mylist2' by streaming via table function:
[01:40:37-01:41:05]: client master / sid #918
[01:40:37-01:40:42]: client slave / sid #738
[01:40:37-01:40:42]: client slave / sid #568
[01:40:37-01:40:42]: client slave / sid #618
[01:40:37-01:40:42]: client slave / sid #659
[01:40:37-01:40:42]: client slave / sid #963
... saved #3000000 records
(4) copy temporary 'tmp_test_table' to 'mylist2' by streaming via table function ANY:
[01:41:12-01:41:16]: client master / sid #918
[01:41:12-01:41:16]: client slave / sid #712
[01:41:12-01:41:16]: client slave / sid #1045
[01:41:12-01:41:16]: client slave / sid #681
[01:41:12-01:41:16]: client slave / sid #754
[01:41:12-01:41:16]: client slave / sid #880
... saved #600000 records
(5) copy physical 'test_table' to 'mylist2' by streaming via table function using ANY:
[01:41:18-01:41:38]: client master / sid #918
[01:41:18-01:41:38]: client slave / sid #681
[01:41:18-01:41:38]: client slave / sid #712
[01:41:18-01:41:38]: client slave / sid #754
[01:41:18-01:41:37]: client slave / sid #880
[01:41:18-01:41:38]: client slave / sid #1045
... saved #3000000 records
HTH
David -
SQLJ Iterator vs java.util.Iterator
Hi -
I'm trying to start using SQLJ a bit more, and I was wondering if anybody had some code samples for implementing a java.util.Collection or java.util.Iterator interface for a SQLJ Iterator? Lots of my utility code works on the standard collection/iterator/enumeration trio. Any great ideas out there for putting a standard interface on a SQLJ iterator?
Thanks!
-best-darr-
Sounds intriguing - other than a similarity of name, these currently do not have anything in common.
There is one approach currently available to you: you can subclass the iterator class to provide your own functionality and use that subclass. See:
http://technet.oracle.com/tech/java/sqlj_jdbc/htdocs/sqlj-primer06i.htm
Source is in the distribution in:
.../sqlj/demo/SubclassIterDemo.sqlj
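Until something like that exists, the bridge can be hand-rolled. Below is a minimal sketch in plain Java; `RowCursor` is a hypothetical stand-in for the shape of a SQLJ named iterator (a boolean `next()` that advances, plus an accessor for the current row), not a real SQLJ type:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Hypothetical stand-in for a SQLJ-style cursor: next() advances and
// reports whether a row was fetched; current() returns that row.
interface RowCursor<T> {
    boolean next();
    T current();
}

// Presents a cursor as a java.util.Iterator by buffering one row ahead,
// since Iterator.hasNext() must not consume a row.
class CursorIterator<T> implements Iterator<T> {
    private final RowCursor<T> cursor;
    private boolean hasBuffered;
    private T buffered;

    CursorIterator(RowCursor<T> cursor) {
        this.cursor = cursor;
        advance();
    }

    private void advance() {
        hasBuffered = cursor.next();
        buffered = hasBuffered ? cursor.current() : null;
    }

    public boolean hasNext() {
        return hasBuffered;
    }

    public T next() {
        if (!hasBuffered) {
            throw new NoSuchElementException();
        }
        T result = buffered;
        advance();
        return result;
    }

    public void remove() {
        throw new UnsupportedOperationException();
    }
}
```

The one-row read-ahead is the standard trick for adapting advance-then-test cursors (JDBC's ResultSet has the same shape) to Java's test-then-read iterators.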
Your take - if I understand correctly - on this is to have the SQLJ translator automatically create implementations of the various collection types from an iterator when you specify:
#sql iterator Iter implements SomeCollectionType (...);
In that case I am still unsure as to what the elements of the collection are. -
Filtering immutable/unmodifiable collections
Hi,
I need a general purpose collection filter:
Given a collection, return another collection that excludes certain members of the first collection. Additionally I'd prefer that the new collection retains other attributes of the original (e.g. Comparator if SortedSet etc).
I considered cloning the given collection and then removing undesired elements. Is there a clean way to do this?
In jakarta commons I notice something that comes close (http://cvs.apache.org/viewcvs.cgi/*checkout*/jakarta-commons/collections/src/java/org/apache/commons/collections/CollectionUtils.java), however it modifies the original collection.
/**
* Filter the collection by applying a Predicate to each element. If the
* predicate returns false, remove the element.
* <p>
* If the input collection or predicate is null, there is no change made.
* @param collection the collection to get the input from, may be null
* @param predicate the predicate to use as a filter, may be null
*/
public static void filter(Collection collection, Predicate predicate) {
if (collection != null && predicate != null) {
for (Iterator it = collection.iterator(); it.hasNext();) {
if (predicate.evaluate(it.next()) == false) {
it.remove();
}
}
}
}
copy constructors. I can't remember the last time I made a class I wrote cloneable.
Me neither :) However, I frequently use existing clone() methods in the collections framework (e.g. Vector.clone() etc.)
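For what the original poster describes, a copy-then-filter helper that never touches the source is straightforward. The sketch below (the `Filters` class and `filtered` method names are my own, not from any library) tries to instantiate the same concrete collection class so list/set semantics carry over, falling back to ArrayList; note that a no-arg constructor still loses a TreeSet's custom Comparator, so this is only a partial answer to the "retain other attributes" requirement:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.function.Predicate;

public class Filters {
    // Returns a NEW collection holding the elements of 'source' that
    // satisfy 'predicate'; 'source' itself is never modified.
    // Tries to instantiate the same concrete class as the source so
    // set/list semantics are preserved; falls back to ArrayList.
    // Caveat: the no-arg constructor will not carry over a TreeSet's
    // custom Comparator, so that attribute can still be lost.
    @SuppressWarnings("unchecked")
    public static <T> Collection<T> filtered(Collection<T> source, Predicate<T> predicate) {
        Collection<T> result;
        try {
            result = source.getClass().getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            result = new ArrayList<>();
        }
        for (T element : source) {
            if (predicate.test(element)) {
                result.add(element);
            }
        }
        return result;
    }
}
```

Unlike the commons `CollectionUtils.filter` quoted above, this avoids both mutating the input and the `instanceof` ladder the poster wants to avoid, at the cost of the reflective-construction caveat.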
Is there a reason you need a completely generic
method
to do this? (other than the appeal of writing
beautiful code).I'm writing a utility. I don't want to build in restrictions if I can help it.
Do you actually have an application that requires
this
functionality where you actually use every collection
type?Not sure I get your drift ... The client application currently uses Vector(legacy), ArrayList, HashSet, TreeSet and LinkedHashSet (no LinkedList yet). So there aren't too many implmentations at all. However, I'd like the filter method to be general (see above).
And .clone() won't work for some reason?
Can I clone/copy without code like:
if(collection instanceof <particular-collection-implementation>) {
}
which I'm trying to avoid (for reasons above)? -
How do collection updates really work?
Hi,
Few questions about collection updates.
I have been trying to figure out the most effective way to implement collection updates so that there is minimal delay in adding a workstation to collections. The documentation on how the updates actually work is, however, a bit vague, and the functionality seems to follow no clear course of action... This is going to be a long post, since there's much to describe.
We have two main collection structures. One is for software and the other one is for Active Directory.
Software structure has collections for each software and each software collection is consisted of direct members, Query rules for AD Security Group members and included collections from Active Directory collection structure.
Active Directory collection structure is a tree-like structure where the root collection and all descending collections include their child items. The real query for workstations from AD is in the last collection/node of the tree. For example in the picture
"Staff" collection has collection rules to include "Faculty1" and "Faculty2". Faculty1 has rules which include "dept1" and "dept2" to the collection and so forth. All collections are limited to All Workstations
(members queried from AD), which is limited to All Systems.
I have disabled all incremental and scheduled updates from each collection (excl. built-in collections) and set hourly updates on All workstations. All built-in collections are incrementally updating. Now, this should only update all workstations collection
and the rest should be left alone? Nope. When the All Workstations collection schedule triggers almost 2/3 of all other collections are refreshed too. There however seem to be no consistency what collections are updated. First I thought that all collections
which have All workstations as limiting collection would update, but it does not seem to work like that.
First question: Does anyone have any idea how the update works in this case? Does updating limiting collection affect other collections?
I have created an SQL query to easily see the last refresh time of a collection. The query also shows all manually made "Update collection membership" requests with time and what the settings are in that collection. This query is scoped on the view
"vCollections".
Second question: This list shows one extra collection which cannot be seen from ConfigMgr console. The collection has ID "SMSOTHER" and is named "All custom resources". What is this for? The collection updates on schedule and has incremental
updates turned on.
Our organization has around 900 collections in total and if all collections are updated (full update) synchronously it will take almost 30 minutes to complete (BTW, why is the performance so poor and why no async updates?). Software collections probably are
easiest to do with incrementals, but using schedule on all AD collections with the current functionality seems like an overkill. I have tried scheduled updates on different levels of the structure, but none of these seem to have the constant effect of updating
all child collections also.
How does updating one collection affect other collections? Do the included collections get updated as well?
Would be great if there was a comprehensive documentation about collection updates.
BR,
Juha-Matti
Hi,
First question:
Based on my knowledge, when All Workstations collection schedule triggers, all other collections related to this collection would be updated too.
Second question:
The collection “All custom resources” hasn’t been documented. I think it might be reserved for future use.
Best Regards,
Joyce Li
We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time.
Thanks for helping make community forums a great place. -
Scripting: Collection object members are null?
We are trying to validate the contents of a collection, and are having difficulty accessing individual members.
We have tried both using the iterator
Iterator iterator = collection.iterator();
while (iterator.hasNext()) {
Object element = iterator.next();
throw doc.createApplicationException("test",element);
And the .get() method of the collection variable that is supplied.
int i;
for (i=0;i<=collection.size();i++) {
throw doc.createApplicationException("test",collection.get(i));
collection.size() shows that we have 2 members in the collection, but looping through them appears to return null objects.
We are simply writing a collection validation script in the E-Sourcing web interface, which uses Beanshell. No tool per se.
I tried this syntax and I still get an error on the line
fieldX = member.get(fieldX);
internal Error: null variable value
The complete code is
iterTest = collection.iterator();
for(member : iterTest){
fieldX = member.get(fieldX);
System.out.println("FieldX: " + fieldX);
collection.size is 2, so I know there's something in there.
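For reference, here is the intended loop in plain Java, assuming the collection members behave like maps (the `CollectionCheck` class and the "fieldX" key are illustrative names, not E-Sourcing API). One likely culprit in the script above: fieldX is used as the lookup key before it is ever assigned, so member.get(fieldX) performs a lookup with a null key, which would explain the null results:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Map;

public class CollectionCheck {
    // Pulls one named field out of every member. The field name is an
    // explicit string key; passing an unassigned variable as the key
    // (as in the Beanshell snippet) looks up null instead.
    public static List<Object> fieldValues(
            Collection<? extends Map<String, Object>> collection, String fieldName) {
        List<Object> values = new ArrayList<>();
        for (Map<String, Object> member : collection) {
            values.add(member.get(fieldName));
        }
        return values;
    }
}
```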