Insert with a duplicate constraint
Hello guys,
Version 5.0.97
Could you please clarify whether it is possible to distinguish an insert from an update in BerkeleyDB JE (Collections API)?
I can provide some code if you need it but I thought I'd ask the question first, as I could be doing something stupid.
In the BerkeleyDB-GSG (page 74), there is this statement:
DatabaseConfig.setSortedDuplicates() If true, duplicate records are allowed in the database. If this value is false, then putting a duplicate record into the database results in an error return from the put call. Note that this property can be set only at database creation time. Default is false.
I set this value as part of the setup mechanism:
private void initDBConfig(boolean readOnly) {
    dbCfg = new DatabaseConfig();
    dbCfg.setReadOnly(readOnly);       // we want Read/Write
    dbCfg.setAllowCreate(true);        // create if it does not exist
    dbCfg.setSortedDuplicates(false);  // do not allow duplicates in primaryDB
    dbCfg.setTemporary(false);         // this must be a persistent database, not in-memory
    dbCfg.setDeferredWrite(false);     // no deferred writes, it must be transactional
    // dbCfg.setTransactional(true);   // explicitly make DB transactional TODO: make !readOnly ??
}
I can insert an item into the primaryDB but if I try to insert a second record using the same key, I find the new record overwrites the old and does not throw an error.
Have I misinterpreted this, or am I missing a configuration key somewhere?
I'm happy to provide an example if this helps although I suspect I am not setting things up correctly.
Thanks for any help.
Clive
Hi Clive,
Well, the BDB JE GSG documentation is a bit misleading in that particular section, because it does not cover all the cases of the various Database.put*() calls.
DatabaseConfig.setSortedDuplicates() configures the database to support duplicates (duplicate records having the same key) or not. Note that this database property is persistent and cannot be changed once set; the default is false, that is, duplicates are not allowed.
If the database is configured to not support duplicates -- setSortedDuplicates(false) -- as in your case, then a Database.put() call with a key that already exists in the database overwrites the existing record, replacing the data associated with that key (an update); it does not return an error. By contrast, a Database.putNoOverwrite() call with a key that already exists returns OperationStatus.KEYEXIST, regardless of whether the database supports duplicates; this is the call to use when you want to attempt a pure insert.
There's also Database.putNoDupData(), which stores the key/data pair only if it does not already appear in the database, but that method may only be called if the database supports sorted duplicates.
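To make the distinction concrete, here is a minimal sketch using the base API (this assumes an already-open, non-duplicate Database db and a Transaction txn; the key/value bytes are placeholders):
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.OperationStatus;
import java.nio.charset.StandardCharsets;

DatabaseEntry key = new DatabaseEntry("someKey".getBytes(StandardCharsets.UTF_8));
DatabaseEntry data = new DatabaseEntry("someValue".getBytes(StandardCharsets.UTF_8));
// putNoOverwrite() attempts a pure insert and never silently updates.
OperationStatus status = db.putNoOverwrite(txn, key, data);
if (status == OperationStatus.KEYEXIST) {
    // a record with this key already exists; the insert was rejected
} else if (status == OperationStatus.SUCCESS) {
    // the record was inserted
}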
You mentioned, though, that you are using the Collections API. And since your database is a primary database, you have correctly configured it to disallow duplicates.
In the JE Collections API, if the database is configured to disallow duplicates, a StoredMap.put() call inserts a new record if the key does not already exist in the database, and updates the data if the key already exists. Note the return value: it is null if the key was not present, or the previous value associated with the key if it was.
So, if you have configured the database to disallow duplicates and you want to prevent a StoredMap.put() call from replacing/overwriting existing data when the key is already present, you need to first check whether the key is present, using Map.containsKey(). See the Adding Database Items section in the Java Collections Tutorial documentation.
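For example, here is a minimal sketch of that check (assuming a StoredMap<String, String> built over your non-duplicate primary database; to make the check-then-put atomic, run it inside a transaction, e.g. via TransactionRunner):
import com.sleepycat.collections.StoredMap;

// Hypothetical helper: refuses to overwrite, so a false return means "duplicate key".
static boolean insertOnly(StoredMap<String, String> map, String key, String value) {
    if (map.containsKey(key)) {
        return false;            // key already present: a put() would be an update
    }
    map.put(key, value);         // key absent: this is a true insert
    return true;
}
Alternatively, put()'s return value tells you after the fact which case occurred (null means it was an insert), but by then any existing value has already been replaced.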
Regards,
Andrei
Similar Messages
-
APP-FND-00178 Concurrent Manager cannot insert with a duplicate request ID
Dear All,
I have exported the FND_CONCURRENT_REQUESTS table from PROD to development.
After restarting the services, when I submit a concurrent program I get the error below.
APP-FND-00178 Concurrent Manager cannot insert with a duplicate request ID
Please advise which other tables need to be imported along with FND_CONCURRENT_REQUESTS, or suggest any other solution.
Thanks & Regards,
Bhaskar Mudunuri
Hi Bhaskar;
You are the best.
Yes, I agree, Hussein is the best. Have you collected all the Doc IDs based on the issues found on this forum? Any issue raised in this forum, he reroutes to the correct Doc ID for its resolution.
I have checked for this in Metalink and found nothing.
For many of my own issues I never find the doc in Metalink, but he can find a solution to my issue via a Metalink note. That's why I am a normal DBA and he is a LEGEND!
Regards
Helios -
Files moving to NFS error folder - Could not insert message into duplicate check table
Hi Friends
Has anyone faced this error? Could you suggest why it occurs?
The CSV files fail on the sender channel and move to the NFS error path, and the log says the following.
Error: com.sap.engine.interfaces.messaging.api.exception.MessagingException: Could not insert message into duplicate check table. Reason: com.ibm.db2.jcc.am.SqlTransactionRollbackException: DB2 SQL Error
Hi Uma - is that a duplicate file? Have you enabled the duplicate file check in the sender channel?
please check if the below note is applicable
1979353 - Recurring TxRollbackException with MODE_STORE_ON_ERROR stage configuration -
The ABAP/4 Open SQL array insert results in duplicate database records
Dear Gurus,
I am getting a dump when I run MD02/MD03 (t-codes to run MRP).
Below is the message the system is showing:
Please help
Thanks in Advance
Best Regards
Adhish
Short text
The ABAP/4 Open SQL array insert results in duplicate database records.
What happened?
Error in the ABAP Application Program
The current ABAP program "SAPLM61U" had to be terminated because it came across a statement that unfortunately cannot be executed.
Error analysis
An exception occurred that is explained in detail below.
The exception, which is assigned to class 'CX_SY_OPEN_SQL_DB', was not caught in procedure "INSERT_MDSBI_IN_MDSB" "(FORM)", nor was it propagated by a RAISING clause.
Since the caller of the procedure could not have anticipated that the
exception would occur, the current program is terminated.
The reason for the exception is:
If you use an ABAP/4 Open SQL array insert to insert a record in
the database and that record already exists with the same key,
this results in a termination.
(With an ABAP/4 Open SQL single record insert in the same error
situation, processing does not terminate, but SY-SUBRC is set to 4.)
*----
* ARRAY-INSERT auf MDSB
*----
FORM INSERT_MDSBI_IN_MDSB.
  INSERT MDSB FROM TABLE MDSBI.
  ADD SY-DBCNT TO STATS-RESBI. "statistics
ENDFORM.
Hi,
There must be inconsistency in the number range. This happens when the current number in the number range for dependent requirements is lower than the highest number in the database table of the dependent requirements RESB.
Please check the current number in transaction OMI2. Here in the interval you can see the current number. Then please check the highest number in table RESB. If the current number in OMI2 is lower than the highest number in table RESB then this should be the reason for the dump.
Check and revert. If that's not the case we'll look into other possibilities.
In the meantime, check SAP Note 138108. -
Insert with unique index slow in 10g
Hi,
We are experiencing very slow response when a dup key is inserted into a table with unique index under 10g. the scenario can be demonstrated in sqlplus with 'timing on':
CREATE TABLE yyy (Col_1 VARCHAR2(5 BYTE) NOT NULL, Col_2 VARCHAR2(10 BYTE) NOT NULL);
CREATE UNIQUE INDEX yyy on yyy(col_1,col_2);
insert into yyy values ('1','1');
insert into yyy values ('1','1');
The 2nd insert results in a "unique constraint" error, but under our 10g database the response time is consistently around 00:00:00.64, while the 1st insert took only 00:00:00.01. BTW, with no index or a non-unique index you can insert many times and every insert returns fast. Under our 9.2 DB the response time is always under 00:00:00.01 with no index, a unique index, or a non-unique index.
We are on AIX 5.3 & 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production.
Has anybody seen this scenario?
Thanks,
David
It seems that in 10g Oracle is simply doing something more.
I used your example and ran the following script on 9.2 and 10.2. The hardware is the same, i.e. these are two instances on the same box.
begin
  for i in 1..10000 loop
    begin
      insert into yyy values ('1','1');
    exception when others then null;
    end;
  end loop;
end;
/
On 10g it took 01:15.08 and on 9i 00:47.06.
Running a trace showed that 9i differed from 10g in the plan of the following recursive SQL:
9i plan:
select c.name, u.name
from
con$ c, cdef$ cd, user$ u where c.con# = cd.con# and cd.enabled = :1 and
c.owner# = u.user#
call     count  cpu   elapsed  disk  query  current  rows
-------  -----  ----  -------  ----  -----  -------  ----
Parse    10000  0.43  0.43     0     0      0        0
Execute  10000  1.09  1.07     0     0      0        0
Fetch    10000  0.23  0.19     0     20000  0        0
total    30000  1.76  1.70     0     20000  0        0
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 2)
Rows  Row Source Operation
   0  NESTED LOOPS
   0    NESTED LOOPS
   0      TABLE ACCESS BY INDEX ROWID CDEF$
   0        INDEX RANGE SCAN I_CDEF4 (object id 53)
   0      TABLE ACCESS BY INDEX ROWID CON$
   0        INDEX UNIQUE SCAN I_CON2 (object id 49)
   0    TABLE ACCESS CLUSTER USER$
   0      INDEX UNIQUE SCAN I_USER# (object id 11)
10g plan:
select c.name, u.name
from
con$ c, cdef$ cd, user$ u where c.con# = cd.con# and cd.enabled = :1 and
c.owner# = u.user#
call     count  cpu   elapsed  disk  query  current  rows
-------  -----  ----  -------  ----  -----  -------  ----
Parse    10000  0.21  0.20     0     0      0        0
Execute  10000  1.20  1.31     0     0      0        0
Fetch    10000  2.37  2.59     0     20000  0        0
total    30000  3.79  4.11     0     20000  0        0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 2)
Rows  Row Source Operation
   0  HASH JOIN (cr=2 pr=0 pw=0 time=301 us)
   0    NESTED LOOPS (cr=2 pr=0 pw=0 time=44 us)
   0      TABLE ACCESS BY INDEX ROWID CDEF$ (cr=2 pr=0 pw=0 time=40 us)
   0        INDEX RANGE SCAN I_CDEF4 (cr=2 pr=0 pw=0 time=27 us) (object id 53)
   0      TABLE ACCESS BY INDEX ROWID CON$ (cr=0 pr=0 pw=0 time=0 us)
   0        INDEX UNIQUE SCAN I_CON2 (cr=0 pr=0 pw=0 time=0 us) (object id 49)
   0    TABLE ACCESS FULL USER$ (cr=0 pr=0 pw=0 time=0 us)
So in 10g it used a hash join instead of a nested loop join, at least for this particular select. Probably time to gather stats on the SYS tables?
The difference in time wasn't that big though (4.11 vs 1.70), so it doesn't explain all the time taken.
But you can check whether you see a bigger difference on your system.
You can also download Tom Kyte's runstats_pkg and run it in both environments to compare whether some statistics or latches show a very big difference.
Gints Plivna
http://www.gplivna.eu -
ABAP/4 Open SQL array insert results in duplicate database records in SM58
Hi Everyone,
I am testing a file-to-IDoc scenario in my Quality system. When I passed the input file, the mapping executed successfully and there are no entries in SMQ2, but still the IDoc wasn't created in the ECC system. When I checked in TRFC, I got the error "ABAP/4 Open SQL array insert results in duplicate database records" for the IDOC_INBOUND_ASYNCHRONOUS function module. I thought this was a data issue and tested with fresh data that had never been used for testing in Quality, but even then I get the same error. Kindly advise.
Thanks,
Laawanya
Use FM idoc_status_write_to_database to change the IDoc status from 03 to 30, and then run WE14 or RSEOUT00 to change the status back to 03.
Resending an IDoc from status 03 ... is a data duplication issue on the receiving side... why do you need to do that?
Use the WE19 tcode to debug.
In WE19:
1) Choose your IDoc number in the existing IDoc textbox
2) Press execute
3) Your IDoc structure will be displayed
4) Double-click on any field to modify its content
5) Press the Std Outbound Processing button to process the modified IDoc
That's it. -
Multi-table INSERT with PARALLEL hint on 2 node RAC
A multi-table INSERT statement with parallelism set to 5 works fine and spawns multiple parallel servers to execute. It's just that it sticks to only one instance of a 2-node RAC. The code I used is given below.
create table t1 ( x int );
create table t2 ( x int );
insert /*+ APPEND parallel(t1,5) parallel (t2,5) */
when (dummy='X') then into t1(x) values (y)
when (dummy='Y') then into t2(x) values (y)
select dummy, 1 y from dual;
I can see multiple sessions using the query below, but only on one instance. This happens not only for the above statement but also for statements where real tables (tables with more than 20 million records) are used.
select p.server_name,ps.sid,ps.qcsid,ps.inst_id,ps.qcinst_id,degree,req_degree,
sql.sql_text
from Gv$px_process p, Gv$sql sql, Gv$session s , gv$px_session ps
WHERE p.sid = s.sid
and p.serial# = s.serial#
and p.sid = ps.sid
and p.serial# = ps.serial#
and s.sql_address = sql.address
and s.sql_hash_value = sql.hash_value
and qcsid=945
Won't parallel servers be spawned across instances for multi-table insert with parallelism on RAC?
Thanks,
Mahesh
Please take a look at these 2 articles:
http://christianbilien.wordpress.com/2007/09/12/strategies-for-rac-inter-instance-parallelized-queries-part-12/
http://christianbilien.wordpress.com/2007/09/14/strategies-for-parallelized-queries-across-rac-instances-part-22/
thanks
http://swervedba.wordpress.com -
Script to merge multiple CSV files together with no duplicate records.
I'd like a script to merge multiple CSV files together with no duplicate records.
None of the files have headers, and column A holds a unique ID. What would be the best way to accomplish that?
OK, here is my answer:
Take 2 files in a directory with no headers:
the first column is the unique ID, in the second column you put whatever you want.
The headers are added when using the Import-Csv cmdlet.
first file contains :
1;a
2;SAMEID-FIRSTFILE
3;c
4;d
5;e
second file contains :
6;a
2;SAMEID-SECONDFILE
7;c
8;d
9;e
the second file contains the line 2;SAMEID-SECONDFILE, whose ID 2 is the same as in the first file
The code:
$i = 0
foreach($file in (Get-ChildItem d:\yourpath)){
    if($i -eq 0){
        $ref = Import-Csv $file.FullName -Header id,value -Delimiter ";"
    }else{
        $temp = Import-Csv $file.FullName -Header id,value -Delimiter ";"
        foreach($line in $temp){
            if(!($ref.id.Contains($line.id))){
                $objet = New-Object PSObject
                Add-Member -InputObject $objet -MemberType NoteProperty -Name id -Value $line.id
                Add-Member -InputObject $objet -MemberType NoteProperty -Name value -Value $line.value
                $ref += $objet
            }
        }
    }
    $i++
}
$ref
$ref should return:
id  value
--  -----
1   a
2   SAMEID-FIRSTFILE
3   c
4   d
5   e
6   a
7   c
8   d
9   e
(Get-ChildItem d:\yourpath) -> yourpath is the directory containing the 2 CSV files -
The ABAP/4 Open SQL array insert results in duplicate database records
Hi,
I am getting the following error:
The ABAP/4 Open SQL array insert results in duplicate database records.
Error in ABAP application program.
The current ABAP program "SAPLV60U" had to be terminated because one of the
statements could not be executed.
This is probably due to an error in the ABAP program.
" Information on where terminated
The termination occurred in the ABAP program "SAPLV60U" in "VBUK_BEARBEITEN".
The main program was "SAPMSSY4 ".
The termination occurred in line 503 of the source code of the (Include)
program "LV60UF0V"
of the source code of program "LV60UF0V" (when calling the editor 5030).
Processing was terminated because the exception "CX_SY_OPEN_SQL_DB" occurred in
the
procedure "VBUK_BEARBEITEN" "(FORM)" but was not handled locally, not declared
in the
RAISING clause of the procedure.
The procedure is in the program "SAPLV60U ". Its source code starts in line 469
of the (Include) program "LV60UF0V "."
Please assist how to proceed further ..
Many thanks
Mujeeb.
Sorry, the correct note is 402221.
Description from the note:
<< Please do not post SAP notes - they are copyrighted material >>
Edited by: Rob Burbank on Feb 22, 2009 3:46 PM -
Will merging contacts from a MacBook Air with iCloud duplicate my contacts? There is no sync option given when I select Contacts in my iCloud preferences screen, only 'merge'! Before I do it, I'd like to know if I'll create a ton of duplicates. If yes, then how do you link contacts from your MacBook with iCloud? Thanks
If the same contacts exist in iCloud and on your MBA, then there will clearly be duplicates. Export the contacts from the MBA (the On My Mac account), then delete the contents of the On My Mac account, and then join iCloud on the MBA.
-
The ABAP/4 Open SQL array insert results in duplicate Record in database
Hi All,
I am trying to transfer 4 plants from R/3 to APO. The integration model (IM) contains only these 4 plants. However, a queue gets generated in APO saying 'The ABAP/4 Open SQL array insert results in duplicate record in database'. I checked tables /SAPAPO/LOC, /SAPAPO/LOCMAP & /SAPAPO/LOCT for a duplicate entry, but no entry was found.
Can anybody guide me how to resolve this issue?
Thanks in advance
Sandeep Patil
Hi Sandeep,
Now try to delete your location before activating the IM again.
Use the program /SAPAPO/DELETE_LOCATIONS to delete locations.
Note :
1. Set the deletion flag (in /SAPAPO/LOC : Location -> Deletion Flag)
2. Remove all the dependencies (like transportation lane, Model ........ )
Check now and let me know.
Regards,
Siva.
-
My CLOB insert with PreparedStatements WORKS but is SLOOOOWWW
Hi All,
I am working on an application which copies over a MySQL database
to an Oracle database.
I got the code to work including connection pooling, threads and
PreparedStatements. For tables with CLOBs in them, I go through the
extra process of inserting the CLOBs according to the Oracle norm, i.e.
getting a locator and then writing to it:
http://www.oracle.com/technology/sample_code/tech/java/sqlj_jdbc/files/advanced/LOBSample/LOBSample.java.html (Good Example for BLOBs, CLOBs)
However, for tables with such CLOBs, I only get an insert rate of about 1 record per second!!! Tables without CLOBs (and thus without the roundabout way of inserting CLOBs) run at approx. 10/sec!!
How can I improve the speed of my CLOB inserts / improve the code? At the moment, a table of 30,000 records (with CLOBs) takes about 30,000 seconds, which is over 8 hours!!!!
Here is my working code, which is run when my application notices that the table has CLOBs. The record has already been inserted with all non-CLOB fields and "EMPTY_CLOB()" as a placeholder for the CLOB. The code then selects that row (the one just inserted), gets a handle on the empty CLOB locator, writes my CLOB content (over 4000 characters) to that handle, and then closes the handle. At the very end, I do conn.commit().
Any tips for improving speed?
conn.setAutoCommit(false);
/*
 * This first section is pseudo-code. The actual code is pretty straight
 * forward. (1) I create the preparedStatement, (2) I go record by record
 * - for each record, I (a) loop through each column and run the corresponding
 * setXXX to set the preparedStatement parameters, (b) run
 * preparedStatement.executeUpdate(), and (c) if a CLOB is present, run the
 * actual code below.
 * During insertion of the record (executeUpdate), if I notice that
 * a Clob needs to be inserted, I insert an "EMPTY_CLOB()" placeholder and set
 * the flag "clobInTheHouse" to true. Once the record is inserted, if "clobInTheHouse"
 * is actually "true", I go to the code below to insert the CLOB into that
 * newly created record's "EMPTY_CLOB()" placeholder.
 */
// clobSelect = "SELECT * FROM tableName WHERE uniqueRecord LIKE '1'"
// I create the above for each record I insert and use this special uniqueRecord value to
// identify which record that is so I can fetch it below. clobInTheHouse is true when, while
// inserting the records, I find that there is a CLOB that needs to be inserted.
if(clobInTheHouse){
    ResultSet lobDetails = stmt.executeQuery(clobSelect);
    ResultSetMetaData rsmd = lobDetails.getMetaData();
    if(lobDetails.next()){
        for(int i = 1; i <= rsmd.getColumnCount(); i++){
            // if the column name matches a CLOB column name, write the CLOB content
            if(clobs.contains(rsmd.getColumnName(i))){
                Clob theClob = lobDetails.getClob(i);
                Writer clobWriter = ((oracle.sql.CLOB)theClob).getCharacterOutputStream();
                StringReader clobReader = new StringReader((String) clobHash.get(rsmd.getColumnName(i)));
                char[] cbuffer = new char[30 * 1024]; // buffer to hold chunks of data to be written to the Clob
                int nread = 0;
                try{
                    while((nread = clobReader.read(cbuffer)) != -1){
                        clobWriter.write(cbuffer, 0, nread);
                    }
                }catch(IOException ioe){
                    System.out.println("E: clobWriter exception - " + ioe.toString());
                }finally{
                    try{
                        clobReader.close();
                        clobWriter.close();
                        //System.out.println(" Clob-slob entered for " + tableName);
                    }catch(IOException ioe2){
                        System.out.println("E: clobWriter close exception - " + ioe2.toString());
                    }
                }
            }
        }
    }
}
try{
    stmt.close();
}catch(SQLException sqle2){
}
conn.commit();
Can you use insert .. returning .. so you do not have to select the empty_clob back out.
[I have a similar problem, but I do not know the primary key to select on. I am really looking for an atomic insert-and-fill-CLOB mechanism; someone said you can create a CLOB, fill it, and use that in the insert, but I have not seen an example yet.]
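For what it's worth, here is a minimal, hedged sketch of that insert-and-fill idea: an anonymous PL/SQL block with RETURNING INTO hands back the empty CLOB's locator from the insert itself, so no select-back is needed. The table and column names (my_table, id, doc) are made up for illustration, and this assumes the Oracle JDBC driver with auto-commit off, as in the code above.
import java.io.Writer;
import java.sql.CallableStatement;
import java.sql.Clob;
import java.sql.Connection;
import java.sql.Types;

// Insert the row and obtain the locator of its empty CLOB in one round trip.
static void insertWithClob(Connection conn, int id, String bigText) throws Exception {
    String sql = "BEGIN INSERT INTO my_table (id, doc) VALUES (?, EMPTY_CLOB()) "
               + "RETURNING doc INTO ?; END;";
    try (CallableStatement cs = conn.prepareCall(sql)) {
        cs.setInt(1, id);
        cs.registerOutParameter(2, Types.CLOB);
        cs.execute();
        Clob clob = cs.getClob(2);                      // locator of the row just inserted
        try (Writer w = clob.setCharacterStream(1L)) {  // write starting at position 1
            w.write(bigText);
        }
    }
    conn.commit();
}
This saves one SELECT round trip per row; committing in batches rather than per row should help further.
-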
Hi,
Can we write an INSERT with a WHERE clause? I'm looking for something like the statement below (which is giving me an error):
insert into PS_AUDIT_OUT (AUDIT_ID, EMPLID, RECNAME, FIELDNAME, MATCHVAL, ERRORMSG)
Values ('1','10000139','NAMES','FIRST_NAME',';','')
Where AUDIT_ID IN
( select AUDIT_ID from PS_AUDIT_FLD where AUDIT_ID ='1' and RECNAME ='NAMES'
AND FIELDNAME = 'FIRST_NAME' AND MATCHVAL = ';' );
Thanks
Durai
It is not clear what you are trying to do, but it looks like:
insert
  into PS_AUDIT_OUT(
       AUDIT_ID,
       EMPLID,
       RECNAME,
       FIELDNAME,
       MATCHVAL,
       ERRORMSG
       )
select '1',
       '10000139',
       'NAMES',
       'FIRST_NAME',
       ';',
       ''
  from PS_AUDIT_FLD
 where AUDIT_ID = '1'
   and RECNAME = 'NAMES'
   and FIELDNAME = 'FIRST_NAME'
   and MATCHVAL = ';';
SY. -
Can somebody explain to me how to use the SQL statement INSERT with Dreamweaver 8?
Before I can SELECT, I should be able to INSERT details from a .asp page into (in my case) database.mdb.
Please can somebody explain to me how Dreamweaver 8 handles this and how to do it.
Thanks in advance
http://www.aspwebpro.com/aspscripts/records/insertnew.asp
Something like this?
Dan Mode
--> Adobe Community Expert
*Flash Helps*
http://www.smithmediafusion.com/blog/?cat=11
*THE online Radio*
http://www.tornadostream.com
*Must Read*
http://www.smithmediafusion.com/blog
"thebold" <[email protected]> wrote in
message
news:eptack$nmb$[email protected]..
> Can somebody explain to me how to use the SQLstatement
INSERT with
> dremweaver 8.
> Before i should SELECT i should be able to INSERT
details from a .asp into
> 'i
> my case' database.mdb.
> please can somebody explain to me how Dreamweaver 8 goes
over this issue
> and
> how to do it
>
> Thanks in advance
> -
Performance of insert with spatial index
I'm writing a test that inserts (using OCI) 10,000 2D point geometries (gtype=2001) into a table with a single SDO_GEOMETRY column. I wrote the code doing the insert before setting up the index on the spatial column, thus I was aware of the insert speed (almost instantaneous) without a spatial index (with layer_gtype=POINT), and noticed immediately the performance drop with the index (> 10 seconds).
Here's the raw timing data of 3 runs in each of 3 configurations (the clock ticks every 14-16 ms, hence the zeros when an operation completes before the next tick):
                                    truncate  execute  commit
no spatial index                    0.016     0.171    0.016
no spatial index                    0.031     0.172    0.000
no spatial index                    0.031     0.204    0.000
index (1000 default batch size)     0.141     10.937   1.547
index (1000 default batch size)     0.094     11.125   1.531
index (1000 default batch size)     0.094     10.937   1.610
index SDO_DML_BATCH_SIZE=10000      0.203     11.234   0.359
index SDO_DML_BATCH_SIZE=10000      0.094     10.828   0.344
index SDO_DML_BATCH_SIZE=10000      0.078     10.844   0.359
As you can see, I played with SDO_DML_BATCH_SIZE, raising the default of 1,000 to 10,000, which does improve the commit speed a bit, from 1.5s to 0.35s (pretty good when you only look at these numbers...). But the shocking part is the almost 11s the inserts now take, compared to 0.2s without an index: that's a 50x drop in performance!!!
I've looked at my table in SQL Developer, and it has no triggers associated, although there has to be something to mark the index as dirty so that it updates itself on commit.
So where is the huge overhead during the insert coming from???
(by insert I mean the time OCIStmtExecute takes to run the array-bind of 10,000 points. It's exactly the same code with or without an index).
Can anyone explain the 50x insert performance drop?
Any suggestion on how to improve the performance of this scenario?
To provide another data point, creating the index itself on a populated table (with the same 10,000 points) takes less than 1 second, which is consistent with the commit speeds I'm seeing, and thus puzzles me all the more regarding this 10s insert overhead...
SQL> set timing on
SQL> select count(*) from within_point_distance_tab;
COUNT(*)
10000
Elapsed: 00:00:00.01
SQL> CREATE INDEX with6CDF1526$point$idx
2 ON within_point_distance_tab(point)
3 INDEXTYPE IS MDSYS.SPATIAL_INDEX
4 PARAMETERS ('layer_gtype=POINT');
Index created.
Elapsed: 00:00:00.96
SQL> drop index WITH6CDF1526$POINT$IDX force;
Index dropped.
Elapsed: 00:00:00.57
SQL> CREATE INDEX with6CDF1526$point$idx
2 ON within_point_distance_tab(point)
3 INDEXTYPE IS MDSYS.SPATIAL_INDEX
4 PARAMETERS ('layer_gtype=POINT SDO_DML_BATCH_SIZE=10000');
Index created.
Elapsed: 00:00:00.98
SQL>
Thanks for your input. We are likely to use partitioning down the line, but what you are describing (partition exchange) is currently beyond my abilities in plain SQL, and how it could be accomplished from an OCI client application without affecting other users, while keeping the transaction boundaries, sounds far from trivial (i.e. can it be made transparent to the client application, and does it require privileges the client doesn't have???). I'll have to investigate this further, though; this technique sounds like one accessible to a DBA only, not from a plain client app with non-privileged credentials.
The thing I fail to understand, though, despite your explanation, is why the slowdown is not entirely on the commit. After all, the documentation for the spatial index's SDO_DML_BATCH_SIZE parameter implies that the index is updated on commit only, with new rows fed 1,000 or 10,000 at a time to the indexing engine. I do see time being spent during commit, but it's the geometry insert that slows down the most, and that looks quite strange to me.
It's so much slower that it's as if each geometry were indexed one at a time, even though I'm doing a single insert with an array bind (i.e. the equivalent of a bulk operation in PL/SQL). And if so much time is spent during the insert, why is any time spent during the commit? In my opinion it should be one or the other, not both. What am I missing? --DD