RogueWave RWSortedVector insert is slow
Hi,
We have legacy code that uses RogueWave RWSortedVector to cache some data after reading it from a database. Startup is taking hours because the sorted vector's insert is slow; the number of items to be added is around 20K. Is there a workaround, or another RogueWave collection, that can be used to improve the performance?
Thanks in advance,
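A common workaround for this pattern, sketched in Python since no RogueWave code was posted: don't maintain sort order on every insert. A per-item sorted insert shifts elements each time (O(n) per item, O(n²) for the whole load); appending all 20K items into a plain vector and sorting once before first use is O(n log n). The functions below are illustrative stand-ins for the RogueWave containers, not tested against RogueWave itself.

```python
import bisect
import random

def load_sorted_one_by_one(items):
    # Mimics RWSortedVector-style insertion: keep the vector sorted on
    # every insert, shifting later elements each time -- O(n) work per
    # item, O(n^2) for the whole load.
    vec = []
    for item in items:
        bisect.insort(vec, item)
    return vec

def load_then_sort(items):
    # Workaround: append everything unsorted, then sort once at the end
    # -- O(n log n) total. The C++ equivalent would be filling a plain
    # vector (e.g. std::vector) and sorting it before the first lookup.
    return sorted(items)

data = [random.randrange(1_000_000) for _ in range(20_000)]
assert load_then_sort(data) == load_sorted_one_by_one(data)
```

Both approaches produce identical sorted contents; only the load cost differs, and at 20K items the difference is dramatic in any language.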
Hi,
I have created two temp tables (temp1, temp2).
After that, an index is created on temp2.
Then an insert statement is executed using a UNION operator and the result is inserted into the temp2 table.
What sort of union? UNION or UNION ALL?
If you just want all records from both tables then UNION ALL is what you want.
UNION will remove duplicates (and will more than likely be sorting your data); if you know you don't have duplicates then UNION ALL is also what you want.
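The difference is easy to demonstrate with any SQL engine; here is a minimal sketch using Python's built-in sqlite3, with made-up tables rather than the poster's temp1/temp2:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a(x INTEGER);
    CREATE TABLE b(x INTEGER);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (2), (3);
""")

# UNION removes duplicates (and typically sorts the data to do it).
union_rows = conn.execute("SELECT x FROM a UNION SELECT x FROM b").fetchall()

# UNION ALL simply concatenates: no duplicate check, no sort.
union_all_rows = conn.execute("SELECT x FROM a UNION ALL SELECT x FROM b").fetchall()

print(len(union_rows), len(union_all_rows))  # 3 4
```

With 3 million rows, the duplicate-elimination sort behind UNION is exactly the kind of hidden cost that can dominate the insert.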
Here the data inserted into the temp2 table is very huge, around 3 million records.
3 million is not very huge; 3 billion is very huge.
It takes nearly 5 hours to complete the insert statement.
Too long :-)
I think the index is causing the performance to degrade.
Possibly, but the culprit is more than likely elsewhere.
Please let me know whether disabling the index until the insert is completed is recommended to improve the performance.
Please trace the transaction. You can post your scripts and trace here if you want (not everything, but the useful bits).
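For reference, the drop-load-recreate pattern being asked about looks like this in outline, sketched with Python's sqlite3 standing in for the real database (the table and index names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp2(x INTEGER, y TEXT)")
conn.execute("CREATE INDEX temp2_ix ON temp2(x)")

rows = [(i, str(i)) for i in range(100_000)]

# Drop the index, bulk-insert, then rebuild the index once at the end.
# Maintaining the index row by row during a large load is usually more
# expensive than one bulk index build over the finished table.
conn.execute("DROP INDEX temp2_ix")
with conn:  # one transaction for the whole load
    conn.executemany("INSERT INTO temp2 VALUES (?, ?)", rows)
conn.execute("CREATE INDEX temp2_ix ON temp2(x)")

count = conn.execute("SELECT COUNT(*) FROM temp2").fetchone()[0]
print(count)  # 100000
```

On Oracle the analogous move is typically marking the index UNUSABLE (or dropping it) before the load and rebuilding it afterwards; but trace first, to confirm the index really is the problem.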
Cheers,
Colin
Similar Messages
-
Hi All,
Have a table with 7 columns where 4 columns are of Varchar2 type, 2 columns are of NUMBER type and 1 column is of type BLOB.
I am inserting the values into the table from a JAVA program. Insertion into the VARCHAR2 and NUMBER columns is very fast, but insertion into the BLOB column is dead slow (the values going into the BLOB column are about 10KB).
Please help me in this regard to insert BLOB values quickly.
Regards, Sreekeshava S
Sreekeshava S wrote:
Running JAVA program in the same server as that of DB.
Connecting how? IPC? TCP? Dedicated server? Shared server?
Calling Oracle how? Doing a SQL statement prepare per insert? Reusing the SQL cursor handle? Binding variables?
And inserting 250 records/sec (during peak load; 50 records/sec during normal load), where each record has a size of 10K (the BLOB column).
And what is slow? You have NOT yet provided ANY evidence that points to the actual INSERT being slow.
As I have already explained, there are a number of layers from client to server - and any, or all of these, could be contributing to the problem.
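One simple way to get that evidence is to accumulate wall-clock time per phase of the client-side path, so the numbers tell you whether the time goes into building the data, serializing it, the network round-trip, or the database call itself. A minimal, library-agnostic sketch (the phase labels here are invented for illustration):

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timed(label):
    # Accumulate elapsed wall-clock time under a phase label.
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[label] = timings.get(label, 0.0) + time.perf_counter() - start

# Usage sketch: wrap each phase of the insert path separately.
with timed("build rows"):
    rows = [(i, "blob-%d" % i) for i in range(10_000)]
with timed("serialize"):
    payload = "\n".join("%d,%s" % r for r in rows)

# Report, largest cost first -- this is the evidence worth posting.
for label, seconds in sorted(timings.items(), key=lambda kv: -kv[1]):
    print("%-12s %8.2f ms" % (label, seconds * 1000.0))
```

The same wrapper would go around the JDBC prepare, execute, and commit calls in the real program, on both client and server.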
Use your web browser and look up what instrumentation is. Apply it. Instrument your code. On the client. On the server. So you have evidence (call stats and metrics) to use to determine what and where the performance problem is, and you do not have to guess - and, like most developers, point your finger at the database in the false belief that your client code, client design, and client usage of the database are all perfect. -
Oracle insert very slow (very urgent)
Hello
I am new to this forum and also new to Oracle. I am working on a C# 3.5 desktop application.
I am reading data from a socket (1 message per 10 milliseconds) and saving it in a Queue<T>. A background thread dequeues the data, performs some calculation, and creates an insert SQL query at run time - no stored procedure, just a simple insert query.
For example
insert into Product values(0, 'computer', 125.35);
I pass that insert query to my datalayer which create oracle connection and insert in to a data base. see the code below
using System;
using System.Data.OracleClient;

class db
{
    // Shared connection; must be static because it is used from static methods.
    static OracleConnection conns = null;

    public static void conn(string dbalias, string userid, string password)
    {
        try
        {
            string connString = "server=" + dbalias + ";uid=" + userid + ";password=" + password + ";";
            conns = new OracleConnection(connString);
            conns.Open();
        }
        catch (OracleException e)
        {
            Console.WriteLine("Error: " + e);
        }
    }

    public static void ExecuteCommand(string sqlquery)
    {
        OracleCommand cmd = new OracleCommand(sqlquery, conns);
        cmd.ExecuteNonQuery();
    }
}
NOW the problem is that insertion into the Oracle database is very slow. Please tell me how to solve this issue.
Additionally:
How slow? Just one single insert is slow? Or you're doing thousands of inserts that way and they add up to being slow?
If you're doing a bunch of inserts, wrap a bunch of them inside a transaction instead of doing them one by one which will avoid a commit each time as well.
Or use array binding or associative arrays as indicated previously (you'd need to use Oracle's provider for that though; you're using System.Data.OracleClient).
You're using a literal hard coded statement, per your example? Use bind variables.
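Both suggestions (one transaction per batch, a reusable parameterized statement instead of literal SQL) can be sketched with Python's sqlite3 standing in for Oracle and ODP.NET. The table is made up and absolute timings will differ, but the shape of the fix is the same:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE product(id INTEGER, name TEXT, price REAL)")
rows = [(i, "computer", 125.35) for i in range(5_000)]

# Slow shape: a freshly built literal statement and an implicit commit
# per row (string-built SQL is also an injection risk).
t0 = time.perf_counter()
for r in rows:
    conn.execute("INSERT INTO product VALUES (%d, '%s', %f)" % r)
per_row = time.perf_counter() - t0

# Faster shape: one reusable parameterized statement (bind variables),
# all rows inside a single explicit transaction, one commit at the end.
t0 = time.perf_counter()
conn.execute("BEGIN")
conn.executemany("INSERT INTO product VALUES (?, ?, ?)", rows)
conn.execute("COMMIT")
batched = time.perf_counter() - t0

count = conn.execute("SELECT COUNT(*) FROM product").fetchone()[0]
print(count, round(per_row, 3), round(batched, 3))
```

In the C# code above, the equivalent is one OracleCommand with parameters reused across rows, inside an OracleTransaction that commits once per batch.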
Also, this forum is for tools that plug in to VS. Problems with ODP.NET code you've written would be more appropriate in the [ODP.NET forum|http://forums.oracle.com/forums/forum.jspa?forumID=146], but that forum deals with problems with Oracle's ODP, not Microsoft's (which is in maintenance mode, by the way).
Hope it helps,
Greg -
Hi DB Gurus,
Our application is inserting 60-70K records into a table in each transaction. When multiple sessions are open on this table, users face performance issues, such as the application response being too slow.
Regarding this table:
1.Size = 56424 Mbytes!
2.Count = 188,858,094 rows!
3.Years of data stored = 4 years
4.Average growth = 10 million records per month, 120 million each year! (has grown 60 million since end of June 2007)
5.Storage params = 110 extents, Initial=40960, Next=524288000, Min Extents=1, Max Extents=505
6.There are 14 indexes on this table all of which are in use.
7. Data is inserted through bulk insert
8. DB: Oracle 10g
The sheer size of this table (56G) and its rate of growth may be the culprits behind the performance issue. But to ascertain that, we need to dig out more facts so that we can decide conclusively how to nail this issue.
So my questions are:
1. What other facts can be collected to find out the root cause of bad performance?
2. Looking at given statistics, is there a way to resolve the performance issue - by using table partition or archiving or some other better way is there?
We've already thought of dropping some indexes, but it looks difficult since they are used in reports based on this table (along with other tables).
3. Any guess what else can be causing this issue?
4. How many records per session can be inserted in a table? Is there any limitation?
Thanks in advance!!
You didn't like the responses from your same post - DB Performance issue -
Insert query slows in Timesten
Hello DB Experts ,
I am inserting bulk data with the ttbulkcp command. My PermSize is 20GB. The insert gets slow. Can anyone help me with how I can maximize throughput with ttbulkcp?
Regards,
Hi Chris, thanks for your reply.
I have uncommented the memlock parameter and it is working now. I will not use a system DSN from now on; thanks for that suggestion.
1. The definition of the table you are loading data into, including indexes.
My comments: table definition. The table does not have any primary key or any indexes.
create table TBLEDR
(snstarttime number,
snendtime number,
radiuscallingstationid number,
ipserveripaddress varchar2(2000) DEFAULT '0',
bearer3gppimsi varchar2(2000) DEFAULT '0',
ipsubscriberipaddress varchar2(2000),
httpuseragent varchar2(2000) DEFAULT '0',
bearer3gppimei varchar2(256) DEFAULT '0',
httphost varchar2(2000) DEFAULT '0',
ipprotocol varchar2(256) DEFAULT '0',
voipduration varchar2(256) DEFAULT '0',
traffictype varchar2(256) DEFAULT '0',
httpcontenttype varchar2(2000) DEFAULT '0',
transactiondownlinkbytes number DEFAULT '0',
transactionuplinkbytes number DEFAULT '0',
transactiondownlinkpackets number DEFAULT '0',
transactionuplinkpackets number DEFAULT '0',
radiuscalledstationid varchar2(2000) DEFAULT '0',
httpreferer varchar2(4000) DEFAULT '0',
httpurl varchar2(4000) DEFAULT '0',
p2pprotocol varchar2(4000) DEFAULT '0'
);
2. Whether the indexes (if any) are in place while you are loading the data.
My comments: No indexes are there.
3. The CPU type and speed.
Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz, 32 cores.
4. The type of disk storage you are using for the filesystem containing the database.
We are not using any external storage; we are using a Linux ext3 filesystem.
5. The location of the CSV file that you are loading - is it on the same filesystem as the database files?
My comment - the database files reside on the /opt partition, and yes, the CSV files are also placed in the same directories; those files are in /opt/Files.
6. The number of rows of data in the CSV file.
My comment - each CSV file has around 50,000 records.
7. Originally you said 'I am only getting 15000 to 17000 TPS'. How are you measuring this? Do you mean TPS (i.e. commits per second) or 'rows inserted per second'? Note that by default ttBulkCp commits every 1024 rows, so if you are measuring commits then the insert rate is 1024 x that.
My comment - I now time it at the bash prompt. When I run the ./ttbulkcp command I note down the time; when the command completes, I note down the time again, and then I calculate the TPS. Further to this, I have one file for ttbulkcp with 50,000 records in it, and out of those around 38,000 records succeed; that is how I am calculating TPS. -
Inserts are slow if the table has lots of records (400K) vs. if it's empty
It takes 1 minute to insert 100,000 records into a table. But if the table already contains some records (400K), then it takes 4 minutes and 12 seconds; also CPU-wait jumps up and "Free Buffer Waits" become really high (from dbconsole).
Do you know what's happening here? Is this because of frequent table extents? The extent size for these tables is 1,048,576 bytes. I have a feeling the DB is trying to extend the table storage.
I am really confused about this. So any help would be great!
Your DB_CACHE_SIZE is likely too small (or DBWR writing to disk is too slow).
Since you are doing regular INSERTs (not Direct Path with APPEND), Oracle has to find a free block for the next row and load it into the database cache to insert the row into it. However, as you insert more records, the "dirty" blocks still present in the cache have to be written out to disk, and DBWR is unable to write out the dirty blocks quickly enough.
What is the size of the table in USER_SEGMENTS, and what are
NUM_ROWS, SAMPLE_SIZE and AVG_ROW_LEN as shown in USER_TABLES?
What is your DB_CACHE_SIZE ? -
Insert really slow on 100Mb file
Hello,
I have an XMLType table with a registered XSD. Performance on smaller files is good, but one XML file of 100 MB takes 8+ hours just to load. I really need to speed this up.
I found that most of the time is consumed in the INSERT statement. This I didn't expect, and I don't know how to improve it. I hope someone can help me out here.
See following for detailed information.
Thanks in advance
Remy
Table definition
CREATE TABLE PHILIPS_XML of XMLType
XMLSCHEMA "http://localhost/philips_format.xsd" ELEMENT "Tree"
The first thing I tried was loading the file directly into the table using
INSERT INTO PHILIPS_XML
VALUES (XMLTYPE(bfilename('XMLDIR',:1 ),NLS_CHARSET_ID('AL32UTF8')))
No performance gain, so I tried loading using SQL*Loader, but no major success with that one either.
In order to find out where my performance loss is i tried loading into a CLOB and then insert into the XMLType table
CREATE TABLE TEMP_CLOB
(TEMP_DATA CLOB);
DECLARE
dest_clob CLOB;
src_clob BFILE;
dst_offset number := 1 ;
src_offset number := 1 ;
lang_ctx number := DBMS_LOB.DEFAULT_LANG_CTX;
warning number;
BEGIN
INSERT INTO temp_clob(temp_data)
VALUES(empty_clob())
RETURNING temp_data INTO dest_clob;
-- OPENING THE SOURCE BFILE IS MANDATORY
src_clob:=bfilename('XMLDIR',p_supplier_xml);
DBMS_LOB.OPEN(src_clob, DBMS_LOB.LOB_READONLY);
DBMS_LOB.LoadCLOBFromFile(
DEST_LOB => dest_clob
, SRC_BFILE => src_clob
, AMOUNT => DBMS_LOB.GETLENGTH(src_clob)
, DEST_OFFSET => dst_offset
, SRC_OFFSET => src_offset
, BFILE_CSID => DBMS_LOB.DEFAULT_CSID
, LANG_CONTEXT => lang_ctx
, WARNING => warning
);
DBMS_LOB.CLOSE(src_clob);
END;
I let the loading part run with a trace.
INSERT INTO PHILIPS_XML
SELECT XMLTYPE(TEMP_DATA) FROM TEMP_CLOB;
Here are the trace results (please ignore the first error; I made a typo in the statement).
TKPROF: Release 10.1.0.2.0 - Production on Mon May 19 09:20:45 2008
Copyright (c) 1982, 2004, Oracle. All rights reserved.
Trace file: ecd1d_ora_19063_REMY.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
The following statement encountered a error during parse:
insert into philips_xml
(select xmltype(xml_data) from temp_table
Error encountered: ORA-00942
select metadata
from
kopm$ where name='DB_FDO'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.01 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 2 0 1
total 4 0.01 0.00 0 2 0 1
Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 TABLE ACCESS BY INDEX ROWID KOPM$ (cr=2 pr=0 pw=0 time=59 us)
1 INDEX UNIQUE SCAN I_KOPM1 (cr=1 pr=0 pw=0 time=31 us)(object id 365)
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_0"),
NVL(SUM(C2),:"SYS_B_1")
FROM
(SELECT /*+ NO_PARALLEL("TEMP_CLOB") FULL("TEMP_CLOB")
NO_PARALLEL_INDEX("TEMP_CLOB") */ :"SYS_B_2" AS C1, :"SYS_B_3" AS C2 FROM
"TEMP_CLOB" "TEMP_CLOB") SAMPLESUB
call count cpu elapsed disk query current rows
Parse 3 0.00 0.00 0 0 0 0
Execute 3 0.01 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 23 0 3
total 9 0.01 0.00 0 23 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 46 (recursive depth: 1)
Rows Row Source Operation
1 SORT AGGREGATE (cr=9 pr=0 pw=0 time=601 us)
0 TABLE ACCESS FULL TEMP_CLOB (cr=9 pr=0 pw=0 time=431 us)
insert into philips_xml
(select xmltype (temp_data) from temp_clob)
call count cpu elapsed disk query current rows
Parse 1 0.02 0.01 0 108 0 0
Execute 1 0.00 0.00 0 7 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.02 0.01 0 115 0 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87
Rows Row Source Operation
0 TABLE ACCESS FULL TEMP_CLOB (cr=7 pr=0 pw=0 time=64 us)
(select xmltype (temp_data) from temp_clob)
call count cpu elapsed disk query current rows
Parse 2 0.01 0.00 0 8 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 100.80 260.23 263377 1756877 0 1
total 6 100.81 260.24 263377 1756885 0 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87
Rows Row Source Operation
1 TABLE ACCESS FULL TEMP_CLOB (cr=7 pr=0 pw=0 time=492 us)
select sys_nc_oid$
from
xdb.xdb$resource where rowid = :1
call count cpu elapsed disk query current rows
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 4 0 2
total 6 0.00 0.00 0 4 0 2
Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 TABLE ACCESS BY USER ROWID XDB$RESOURCE (cr=2 pr=0 pw=0 time=40 us)
select value(p$)
from
"XDB"."XDB$RESOURCE" as of snapshot(:2) p$ where SYS_NC_OID$
= :1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 4 0 2
total 5 0.00 0.00 0 4 0 2
Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$, spare1,
spare2
from
obj$ where owner#=:1 and name=:2 and namespace=:3 and remoteowner is null
and linkname is null and subname is null
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 4 0 0
total 5 0.00 0.00 0 4 0 0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=2 pr=0 pw=0 time=53 us)
0 INDEX RANGE SCAN I_OBJ2 (cr=2 pr=0 pw=0 time=48 us)(object id 37)
insert into philips_xml
(select xmltype(temp_data) from temp_clob)
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 0 15 0 0
Execute 1 25003.01 30465.99 526759 3514263 29095 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 25003.02 30465.99 526759 3514278 29095 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87
select file#
from
file$ where ts#=:1
call count cpu elapsed disk query current rows
Parse 82 0.00 0.00 0 0 1 0
Execute 82 0.08 0.11 0 0 0 0
Fetch 164 0.03 0.01 0 328 0 82
total 328 0.11 0.13 0 328 1 82
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 TABLE ACCESS FULL FILE$ (cr=4 pr=0 pw=0 time=145 us)
update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts=:8,extsize=
:9,extpct=:10,user#=:11,iniexts=:12,lists=decode(:13, 65535, NULL, :13),
groups=decode(:14, 65535, NULL, :14), cachehint=:15, hwmincr=:16, spare1=
DECODE(:17,0,NULL,:17),scanhint=:18
where
ts#=:1 and file#=:2 and block#=:3
call count cpu elapsed disk query current rows
Parse 82 0.00 0.00 0 0 2 0
Execute 82 0.03 0.06 0 410 82 82
Fetch 0 0.00 0.00 0 0 0 0
total 164 0.03 0.06 0 410 84 82
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
0 UPDATE SEG$ (cr=5 pr=0 pw=0 time=537 us)
1 TABLE ACCESS CLUSTER SEG$ (cr=5 pr=0 pw=0 time=175 us)
1 INDEX UNIQUE SCAN I_FILE#_BLOCK# (cr=2 pr=0 pw=0 time=21 us)(object id 9)
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#,
sample_size, minimum, maximum, distcnt, lowval, hival, density, col#,
spare1, spare2, avgcln
from
hist_head$ where obj#=:1 and intcol#=:2
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 6 0
Execute 7 0.01 0.01 0 0 0 0
Fetch 7 0.00 0.00 0 21 0 7
total 15 0.01 0.01 0 21 6 7
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: RULE
Parsing user id: SYS (recursive depth: 2)
Rows Row Source Operation
1 TABLE ACCESS BY INDEX ROWID HIST_HEAD$ (cr=3 pr=0 pw=0 time=398 us)
1 INDEX RANGE SCAN I_HH_OBJ#_INTCOL# (cr=2 pr=0 pw=0 time=374 us)(object id 257)
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 4 0.04 0.02 0 131 0 0
Execute 4 25003.01 30465.99 526759 3514270 29095 1
Fetch 2 100.80 260.23 263377 1756877 0 1
total 10 25103.85 30726.25 790136 5271278 29095 2
Misses in library cache during parse: 3
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 173 0.00 0.02 0 0 9 0
Execute 182 0.14 0.20 0 410 82 82
Fetch 181 0.03 0.01 0 386 0 97
total 536 0.17 0.24 0 796 91 179
Misses in library cache during parse: 5
Misses in library cache during execute: 5
7 user SQL statements in session.
177 internal SQL statements in session.
184 SQL statements in session.
Trace file: ecd1d_ora_19063_REMY.trc
Trace file compatibility: 10.01.00
Sort options: default
0 session in tracefile.
7 user SQL statements in trace file.
177 internal SQL statements in trace file.
184 SQL statements in trace file.
11 unique SQL statements in trace file.
1653 lines in trace file.
31142 elapsed seconds in trace file.
The XSD + register statement is here:
DBMS_XMLSCHEMA.REGISTERSCHEMA(
'http://localhost/philips_format.xsd' ,
'<?xml version="1.0" encoding="UTF-8"?>
<!-- edited with XMLSPY v5 rel. 3 U (http://www.xmlspy.com) by Philips Consumer Electronics BV (Philips Consumer Electronics BV) -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
<xs:element name="Tree">
<xs:complexType>
<xs:sequence>
<xs:element ref="Node" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="Node">
<xs:annotation>
<xs:documentation>This XSD is designed to have very similar structure as the XML v2 specification. Please see XMLv2Spec_rev_15.pdf for more information </xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence>
<xs:element name="Id" type="xs:string"/>
<xs:element name="SuppliersId" type="xs:string" minOccurs="0">
<xs:annotation>
<xs:documentation>Is used to make a unique identifier that is readable. You dont have to worry about it, this key is unique across all service providers because the id that is stored in the DB will have the service nodes id prepended to it</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="TopLevel" minOccurs="0">
<xs:annotation>
<xs:documentation>Add this tag if the node is to be the first node after the service node. </xs:documentation>
</xs:annotation>
<xs:complexType/>
</xs:element>
<xs:element name="Name" type="xs:string" minOccurs="0"/>
<xs:element name="ChildNodes" minOccurs="0">
<xs:annotation>
<xs:documentation>The list of child nodes. Note that this doesnt form a strict tree!! A child node can have multiple parent nodes.</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence>
<xs:element name="ChildNode" type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="DataItems" minOccurs="0">
<xs:annotation>
<xs:documentation>Exactly the same as the XMLv2 Spec</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence>
<xs:element name="DataItem" type="DataItem" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="NodeProperties" type="Properties" minOccurs="0">
<xs:annotation>
<xs:documentation>For internal use. It is used for as a point to add service specific code that is needed for that dataloader. </xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="Filters" minOccurs="0">
<xs:annotation>
<xs:documentation>For internal use. This is where a list of filters can be added. There are many filters in ECD if we need.</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence maxOccurs="unbounded">
<xs:element name="Filter" type="xs:string" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="DynamicNavigationNode" minOccurs="0">
<xs:annotation>
<xs:documentation>For internal use. A flag that indicates that there will be child nodes that are going to be created so this node should be a navigationNode and not a playableNode</xs:documentation>
</xs:annotation>
<xs:complexType/>
</xs:element>
<xs:element name="DynamicDataLoader" type="xs:string" minOccurs="0">
<xs:annotation>
<xs:documentation>For internal use. This points to an component</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="MenuHelper" type="xs:string" minOccurs="0">
<xs:annotation>
<xs:documentation>For internal use. A menu helper is used to return how the node should be displayed and not what it actually is for example there is a DropThoughMenuHelper that will make the current node effectively invisible and will display the nodes children instead</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="PlayableProperties" type="PlayableProperties" minOccurs="0"/>
<xs:element name="Type" type="xs:int" minOccurs="0">
<xs:annotation>
<xs:documentation>This allows you to override the default node type. In almost all cases the default node type is needed. The data loader will make nodes that have children a navigationNode and the ones that have no children, playableNodes</xs:documentation>
</xs:annotation>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:complexType name="DataItem">
<xs:all>
<xs:element name="Title" type="xs:string" minOccurs="0"/>
<xs:element name="Album" type="xs:string" minOccurs="0"/>
<xs:element name="Artist" type="xs:string" minOccurs="0"/>
<xs:element name="Playlength" type="xs:int" minOccurs="0"/>
<xs:element name="MIMEType" type="xs:string" minOccurs="0"/>
<xs:element name="Aspect" type="xs:string" minOccurs="0"/>
<xs:element name="Bitrate" type="xs:string" minOccurs="0"/>
<xs:element name="Description" type="xs:string" minOccurs="0"/>
<xs:element name="Filesize" type="xs:string" minOccurs="0"/>
<xs:element name="FramesPerSecond" type="xs:string" minOccurs="0"/>
<xs:element name="Genre" type="xs:string" minOccurs="0"/>
<xs:element name="Height" type="xs:int" minOccurs="0"/>
<xs:element name="Quality" type="xs:string" minOccurs="0"/>
<xs:element name="Samplerate" type="xs:string" minOccurs="0"/>
<xs:element name="TrackNumber" type="xs:string" minOccurs="0"/>
<xs:element name="Width" type="xs:int" minOccurs="0"/>
<xs:element name="Year" type="xs:int" minOccurs="0"/>
<xs:element ref="DetailSet" minOccurs="0"/>
<xs:element ref="URLSet" minOccurs="0"/>
<xs:element name="Usage" type="xs:string" minOccurs="0"/>
</xs:all>
</xs:complexType>
<xs:complexType name="Properties">
<xs:sequence>
<xs:element name="Property" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="Key" type="xs:string"/>
<xs:element name="Value" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
<xs:element name="DetailSet">
<xs:complexType>
<xs:sequence>
<xs:element name="Detail" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="Title" type="xs:string" minOccurs="0"/>
<xs:element name="Value" type="xs:string" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="URLSet">
<xs:complexType>
<xs:sequence>
<xs:element name="URL" type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:complexType name="PlayableProperties">
<xs:sequence minOccurs="0">
<xs:element name="QualityLevels" minOccurs="0">
<xs:complexType>
<xs:sequence minOccurs="0" maxOccurs="unbounded">
<xs:element name="Level" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="Recordable" type="xs:integer" minOccurs="0"/>
<xs:element name="SkipForward" type="xs:integer" minOccurs="0"/>
<xs:element name="SkipBackward" type="xs:integer" minOccurs="0"/>
<xs:element name="Pauseable" type="xs:integer" minOccurs="0"/>
<xs:element name="Seekable" type="xs:integer" minOccurs="0"/>
<xs:element name="RepeatAll" type="xs:integer" minOccurs="0"/>
<xs:element name="RepeatTrack" type="xs:integer" minOccurs="0"/>
<xs:element name="MandatoryConnect" type="xs:integer" minOccurs="0"/>
<xs:element name="RequestSingleSet" type="xs:integer" minOccurs="0"/>
<xs:element name="InvalidateSetOnError" type="xs:integer" minOccurs="0"/>
<xs:element name="ReconnectOnError" type="xs:integer" minOccurs="0"/>
<xs:element name="Syncable" type="xs:integer" minOccurs="0"/>
<xs:element name="ShowProgressBar" type="xs:integer" minOccurs="0"/>
<xs:element name="AllowDigitalOut" type="xs:integer" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:schema>'
, TRUE
, TRUE
, FALSE
, TRUE);
You can "boost" via FTP by setting database initialization parameters:
- shared servers = 5
- large_pool_size at a minimum of 150 Mb (please increase ++ if you have the room / mem.)
- java_pool_size at a minimum of 150 Mb
- don't use sga_target and other "memory wizards" but set the parameters manually
- purge the dba_recyclebin and set the feature off.
- have a look at the tnsnames.ora and listener.ora files and/or the local_listener and dispatcher settings, and avoid TCP/IP name resolution. Use hard-coded names (name resolution via the hosts file, NOT via a nameserver)
to name some pointers. -
I have Oracle 9i. I made a procedure that does a select and inserts 1600 rows.
It runs on the 9i server in 7 seconds, but on Oracle 11.2.0 it takes 2 minutes, even though the 11g server is 8x the 9i one.
Does anybody have an idea for this problem?
It runs on the 9i server in 7 seconds and on Oracle 11.2.0 takes 2 minutes, even though the 11g server is 8x the 9i one? :|
http://tkyte.blogspot.com/2005/06/how-to-ask-questions.html
HOW TO: Post a SQL statement tuning request - template posting -
Air for Android - Sqlite - Slow inserts
Hello there,
What is the best way to follow in order to make a lot of insert on a sqlite db?
I ask because I noticed a great and unpredictable slowdown on my Galaxy S when I make a lot of inserts.
I use an openAsync connection because I need to monitor what is happening.
Everytime a row is created I increment a counter.
It starts well, then after about 100 inserts it slows down and nearly dies.
Any suggestions?
Thanks in advance
I optimized the query:
- I reuse the same statement and changing only the parameters
- I use the transaction with begin and commit
This sped the whole thing up incredibly; however, I still notice a slowdown that, luckily, after the optimizations I made, never dies completely.
Any other suggestion will be appreciated! -
Slow Motion: Insert, PIP, or...?
With Tom's guidance, I have made a little video http://www.youtube.com/watch?v=EloCiO818ZY and need to fix it up a little. I need to insert a "slow motion replay" of the action at 3:30. I have put that clip back into the Viewer and set new in points and out points. Aside from having trouble "Inserting it with Transition", I assume my choices are 1) to insert it somehow between 2 existing clips or 2) a PIP with a partial wipe. I am looking for the FCE command or steps which make a clip play in slo-mo.
thanks in advance.
-Russ
I got it. I selected the clip after it was in the inserted sequence, and then Modify > Speed > I set it to 25%. Now for some text.
thanks
R -
Very slow simple spatial query on 11g
I've created two spatial tables as following:
CREATE TABLE usregions (
region_code NUMBER(1,0) NOT NULL,
shape ST_GEOMETRY,
CONSTRAINT usregions_pk PRIMARY KEY (region_code)
);
INSERT INTO MDSYS.user_sdo_geom_metadata
(table_name, column_name, diminfo, srid)
VALUES ('USREGIONS', 'SHAPE',
sdo_dim_array (sdo_dim_element ('X', -180, 180, 0.5),
sdo_dim_element ('Y', -90, 90, 0.5)),
4269);
CREATE INDEX usregions_dx ON usregions (shape)
INDEXTYPE IS MDSYS.spatial_index;
CREATE TABLE usstates (
state_code NUMBER(1,0) NOT NULL,
state_name VARCHAR2(30),
shape ST_GEOMETRY,
CONSTRAINT usstates_pk PRIMARY KEY (state_code)
);
INSERT INTO MDSYS.user_sdo_geom_metadata
(table_name, column_name, diminfo, srid)
VALUES ('USSTATES', 'SHAPE',
sdo_dim_array (sdo_dim_element ('X', -180, 180, 0.5),
sdo_dim_element ('Y', -90, 90, 0.5)),
4269);
CREATE INDEX usstates_dx ON usstates (shape)
INDEXTYPE IS MDSYS.spatial_index;
I then loaded both tables with data from a shapefile.
The state shapefile is just the US map with all the states.
The region shapefile is the same US map with only 5 regions (Northeast, mid atlantic, mid west, south, and west).
I created the region shapefile from the state shapefile using ESRI ArcMap to dissolve the state borders. So Pennsylvania, Virginia, Maryland, and DC are one polygon; New York and up is another polygon, etc.
I also created the same two tables, with the same data in SQL Server 2008 (KatMai), as well as in PostGRE 8.3.
Then I ran the following query:
SELECT s.state_name
FROM usstates s
WHERE s.shape.ST_Within((SELECT shape
FROM usregions
WHERE region_code=2))=1;
Region 2 is Mid Atlantic and I was expecting to see:
STATE_NAME
District of Columbia
Maryland
Pennsylvania
Virginia
Instead, Oracle 11g only returned "District of Columbia"
The query took 6.4 seconds to run.
On SQL Server 2008, I got the expected result and it took 0.4 seconds to execute.
On PostGRE 8.3, I also got the expected result and it took 0.5 seconds to execute.
Why is Oracle not returning all the States? Is this a bug?
Am I doing something wrong???
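As an editorial aside (not from the original thread): a state whose boundary coincides with the region polygon's boundary may fail a strict "within" test, so it may be worth comparing against a relationship check that also accepts COVEREDBY. A hedged sketch using SDO_GEOM.RELATE, assuming the ST_GEOMETRY values can be converted with GET_SDO_GEOM() and reusing the 0.5 tolerance from the metadata:

```sql
-- Hedged sketch: INSIDE+COVEREDBY also accepts geometries whose
-- boundary touches the containing polygon's boundary from inside.
SELECT s.state_name
  FROM usstates s, usregions r
 WHERE r.region_code = 2
   AND SDO_GEOM.RELATE(s.shape.GET_SDO_GEOM(),
                       'INSIDE+COVEREDBY',
                       r.shape.GET_SDO_GEOM(),
                       0.5) IN ('INSIDE', 'COVEREDBY');
```

This bypasses the spatial index (SDO_GEOM.RELATE is a function, not the indexed operator), so it is for checking correctness of the result, not for performance comparison.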
Thanks.
> save data into internal backup format ???
The backup format doesn't matter; the program is just reading rows from the backup file and inserting them into the database, so it is generating SQL INSERT commands.
> and then it will restore it back to database.
It is done by inserts, with a commit every 500 rows.
> I can't follow your post, but...yuck.
The program is just inserting records into table T2, but on T2 there is a trigger, and inside the trigger is the SQL command "UPDATE TABLE T3 ..... where ......". By this time T3 is already filled with 60,569 records.
And the inserting goes slower and slower. Without the trigger the speed is OK, and on Oracle 10g the speed is OK even with this trigger. So I am concerned about what could have changed in Oracle 11g to cause such behavior.
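An editorial sketch of what such a test case might look like (purely illustrative; table and column names are hypothetical):

```sql
-- Hedged sketch of the described setup: inserts into T2
-- fire a trigger that updates an already-populated T3.
CREATE TABLE t3 (id NUMBER PRIMARY KEY, cnt NUMBER);

CREATE TABLE t2 (id NUMBER, payload VARCHAR2(100));

CREATE OR REPLACE TRIGGER t2_after_insert
AFTER INSERT ON t2
FOR EACH ROW
BEGIN
  -- If the real UPDATE's predicate cannot use an index on T3,
  -- every insert into T2 pays for a scan of T3.
  UPDATE t3 SET cnt = cnt + 1 WHERE id = :NEW.id;
END;
/
```

If the UPDATE inside the trigger cannot use an index on T3, each insert effectively scans T3 — one plausible explanation worth confirming with a SQL trace before blaming the version change.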
I will try to produce a simple test example -
INSERT in PLSQL loop in Oracle 9i scheduled job has poor performance
Hi,
I have a scheduled job running in Oracle 9i, but every time it executes the following piece of code, the INSERT process slows down drastically.
-------------------------------typical piece of code having problem-----------------------------------
LOOP
   FOR increbkgs IN bookings_cur (in_chr_fiscal_period_id,
                                  allrec.cust_name_string)
   LOOP
      l_num_rec_count := l_num_rec_count + 1;
      INSERT INTO SA_PORTAL_CDW_BOOKINGS_INCTEMP
                  (product_id, territory_code,
                   global_target_id, service_type,
                   equipment_deployment, created_date, updated_date,
                   fiscal_period_id, customer_id,
                   ship_to_country,
                   bookings_amount, sams_alliance_id)
           VALUES (increbkgs.product_id, increbkgs.territory_code,
                   increbkgs.global_target_id, increbkgs.service_type,
                   increbkgs.equipment_deployment, SYSDATE, SYSDATE,
                   increbkgs.fiscal_period_id, increbkgs.customer_id,
                   increbkgs.ship_to_country,
                   increbkgs.bookings_amount, allrec.sams_alliance_id);
      IF (l_num_rec_count = 500)
      THEN
         l_num_rec_count := 0;
         COMMIT;
      END IF;
   END LOOP;
END LOOP;
All the tablespaces are auto-extend. But we have still tried to increase the tablespace manually from 2% to 30% by adding datafiles. Still the INSERT is slowing down for some reason.
(The same process in Oracle 8i is much faster)
Any hint or guidance is greatly appreciated.
Thanks and regards,
Ambili
Commits in loops are great for slowing things down. Actually, commits in loops are just about the best way of stalling any process in Oracle.
A much better way is to resize your undo tablespace to permit one single commit at the end of the whole thing. Yes, it could be big, but that's the way Oracle works.
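With one commit at the end, the whole loop can often be collapsed into a single set-based statement. A hedged sketch reusing the names from the code above (the cursor's underlying query is not shown in the post, so the FROM clause here is a placeholder):

```sql
-- Hedged sketch: replace the row-by-row loop with one set-based
-- INSERT and a single commit at the end.
INSERT INTO sa_portal_cdw_bookings_inctemp
            (product_id, territory_code, global_target_id,
             service_type, equipment_deployment, created_date,
             updated_date, fiscal_period_id, customer_id,
             ship_to_country, bookings_amount, sams_alliance_id)
SELECT b.product_id, b.territory_code, b.global_target_id,
       b.service_type, b.equipment_deployment, SYSDATE,
       SYSDATE, b.fiscal_period_id, b.customer_id,
       b.ship_to_country, b.bookings_amount, :sams_alliance_id
  FROM bookings_source b  -- placeholder for whatever bookings_cur selects from
 WHERE b.fiscal_period_id = :in_chr_fiscal_period_id;

COMMIT;
```

This removes both the per-row context switches and the commits inside the loop; the undo tablespace just has to be sized for the full transaction.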
If you want more info about this, buy Thomas Kyte's book found at http://apress.com/book/bookDisplay.html?bID=10008 -
SQL loader load data very slow...
Hi,
On my production server we have an insert problem: a regular SQL*Loader job loads files, and inserting the data into the database takes more and more time.
For the first 2 to 3 hours one file takes 8 to 10 seconds; after that it takes 5 minutes.
As per my understanding the OS I/O is very slow. For the first 3 hours the DB buffer cache has free buffers and inserts into the buffer proceed normally.
But when the buffer cache fills up, sessions go into free buffer waits and the inserts slow down. If that is right, please tell me how to improve the I/O.
Some analysis of my server is shared here:
[root@myserver ~]# iostat
Linux 2.6.18-194.el5 (myserver) 06/01/2012
avg-cpu: %user %nice %system %iowait %steal %idle
3.34 0.00 0.83 6.66 0.00 89.17
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 107.56 2544.64 3140.34 8084953177 9977627424
sda1 0.00 0.65 0.00 2074066 16
sda2 21.57 220.59 1833.98 700856482 5827014296
sda3 0.00 0.00 0.00 12787 5960
sda4 0.00 0.00 0.00 8 0
sda5 0.69 2.75 15.07 8739194 47874000
sda6 0.05 0.00 0.55 5322 1736264
sda7 0.00 0.00 0.00 2915 16
sda8 0.50 9.03 5.24 28695700 16642584
sda9 0.51 0.36 24.81 1128290 78829224
sda10 0.52 0.00 5.98 9965 19004088
sda11 83.71 2311.26 1254.71 7343426336 3986520976
[root@myserver ~]# hdparm -tT /dev/sda11
/dev/sda11:
Timing cached reads: 10708 MB in 2.00 seconds = 5359.23 MB/sec
Timing buffered disk reads: 540 MB in 3.00 seconds = 179.89 MB/sec
[root@myserver ~]# sar -u -o datafile 1 6
Linux 2.6.18-194.el5 (mca-webreporting2) 06/01/2012
09:57:19 AM CPU %user %nice %system %iowait %steal %idle
09:57:20 AM all 6.97 0.00 1.87 16.31 0.00 74.84
09:57:21 AM all 6.74 0.00 1.25 17.48 0.00 74.53
09:57:22 AM all 7.01 0.00 1.75 16.27 0.00 74.97
09:57:23 AM all 6.75 0.00 1.12 13.88 0.00 78.25
09:57:24 AM all 6.98 0.00 1.37 16.83 0.00 74.81
09:57:25 AM all 6.49 0.00 1.25 14.61 0.00 77.65
Average: all 6.82 0.00 1.44 15.90 0.00 75.84
[root@myserver ~]# sar -u -o datafile 1 6
Linux 2.6.18-194.el5 (mca-webreporting2) 06/01/2012
09:57:19 AM CPU %user %nice %system %iowait %steal %idle
mca-webreporting2;601;2012-05-27 16:30:01 UTC;2.54;1510.94;3581.85;0.00
mca-webreporting2;600;2012-05-27 16:40:01 UTC;2.45;1442.78;3883.47;0.04
mca-webreporting2;599;2012-05-27 16:50:01 UTC;2.44;1466.72;3893.10;0.04
mca-webreporting2;600;2012-05-27 17:00:01 UTC;2.30;1394.43;3546.26;0.00
mca-webreporting2;600;2012-05-27 17:10:01 UTC;3.15;1529.72;3978.27;0.04
mca-webreporting2;601;2012-05-27 17:20:01 UTC;9.83;1268.76;3823.63;0.04
mca-webreporting2;600;2012-05-27 17:30:01 UTC;32.71;1277.93;3495.32;0.00
mca-webreporting2;600;2012-05-27 17:40:01 UTC;1.96;1213.10;3845.75;0.04
mca-webreporting2;600;2012-05-27 17:50:01 UTC;1.89;1247.98;3834.94;0.04
mca-webreporting2;600;2012-05-27 18:00:01 UTC;2.24;1184.72;3486.10;0.00
mca-webreporting2;600;2012-05-27 18:10:01 UTC;18.68;1320.73;4088.14;0.18
mca-webreporting2;600;2012-05-27 18:20:01 UTC;1.82;1137.28;3784.99;0.04
[root@myserver ~]# vmstat
procs -----------memory---------- -swap -----io---- system -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 1 182356 499444 135348 13801492 0 0 3488 247 0 0 5 2 89 4 0
[root@myserver ~]# dstat -D sda
----total-cpu-usage---- dsk/sda -net/total- -paging -system
usr sys idl wai hiq siq| read writ| recv send| in out | int csw
3 1 89 7 0 0|1240k 1544k| 0 0 | 1.9B 1B|2905 6646
8 1 77 14 0 1|4096B 3616k| 433k 2828B| 0 0 |3347 16k
10 2 77 12 0 0| 0 1520k| 466k 1332B| 0 0 |3064 15k
8 2 77 12 0 0| 0 2060k| 395k 1458B| 0 0 |3093 14k
8 1 78 12 0 0| 0 1688k| 428k 1460B| 0 0 |3260 15k
8 1 78 12 0 0| 0 1712k| 461k 1822B| 0 0 |3390 15k
7 1 78 13 0 0|4096B 6372k| 449k 1950B| 0 0 |3322 15k
AWR sheet output
Wait Events
ordered by wait time desc, waits desc (idle events last)
Event Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn
free buffer waits 1,591,125 99.95 19,814 12 129.53
log file parallel write 31,668 0.00 1,413 45 2.58
buffer busy waits 846 77.07 653 772 0.07
control file parallel write 10,166 0.00 636 63 0.83
log file sync 11,301 0.00 565 50 0.92
write complete waits 218 94.95 208 955 0.02
SQL> select 'free in buffer (NOT_DIRTY)',
            round(((select count(DIRTY) from v$bh where DIRTY='N')*100)
                  /(select count(*) from v$bh),2)||'%' DIRTY_PERCENT
       from dual
     union
     select 'keep in buffer (YES_DIRTY)',
            round(((select count(DIRTY) from v$bh where DIRTY='Y')*100)
                  /(select count(*) from v$bh),2)||'%' DIRTY_PERCENT
       from dual;
'FREEINBUFFER(NOT_DIRTY)' DIRTY_PERCENT
free in buffer (NOT_DIRTY) 10.71%
keep in buffer (YES_DIRTY) 89.29%
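Editorial aside: with free buffer waits dominating the AWR output above and 89% of the cache dirty, the standard mitigation is a direct-path load, which formats blocks above the high-water mark and bypasses the buffer cache entirely. A hedged sketch (control file name and staging table are hypothetical):

```sql
-- Hedged sketch: direct-path loading avoids free buffer waits.
--
-- SQL*Loader invocation (from the shell):
--   sqlldr userid=... control=gr_core_logging.ctl direct=true
--
-- Or, for an insert-select style load, the APPEND hint:
INSERT /*+ APPEND */ INTO gr_core_logging
SELECT * FROM staging_gr_core_logging;  -- staging table is hypothetical
COMMIT;
```

Direct-path loads maintain indexes differently and take exclusive locks on the segment, so they suit batch windows rather than concurrent OLTP; the reply below notes that direct path was already attempted, so tracing where the time actually goes remains the first step.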
Rag....
1) Yes, this is a partitioned table with a local partitioned index on it.
SQL> desc GR_CORE_LOGGING
Name Null? Type
APPLICATIONID VARCHAR2(20)
SERVICEID VARCHAR2(25)
ENTERPRISENAME VARCHAR2(25)
MSISDN VARCHAR2(15)
STATE VARCHAR2(15)
FROMTIME VARCHAR2(25)
TOTIME VARCHAR2(25)
CAMP_ID VARCHAR2(50)
TRANSID VARCHAR2(25)
MSI_INDEX NUMBER
SQL> select index_name,column_name from user_ind_columns where table_name='GR_CORE_LOGGING';
INDEX_NAME
COLUMN_NAME
GR_CORE_LOGGING_IND
MSISDN
2) I tried direct path, but after that I dropped this table, created a new partitioned table again, and created a fresh index; still the same issue. -
No geometry validation on insert ?
Hi!
How come Oracle lets me insert a non-closed polygon into a spatially indexed table? When I run SDO_GEOM.VALIDATE_GEOMETRY() I get the right error code (ORA-13348: polygon boundary is not closed), but I really CAN insert it!!!
Here's the polygon I've inserted. This is really just example 2.3.3, but modified to be a polygon instead of a line.
INSERT INTO cola_markets VALUES(
  11,
  'compound_line_string',
  MDSYS.SDO_GEOMETRY(
    2003,
    NULL,
    NULL,
    MDSYS.SDO_ELEM_INFO_ARRAY(1,1005,2, 1,2,1, 3,2,2),
    MDSYS.SDO_ORDINATE_ARRAY(10,10, 10,14, 6,10, 14,10)
  )
);
Is it the programmer's responsibility to write his own trigger to validate the geometry that is inserted? I don't mind doing it, but I still think there's something weird going on!
Thank you!
Mathieu Gauthier
Development Team
JCMB Technology Inc
Oracle Spatial does not do geometry validation on insert, as you've found. The reason for this is that some people require geometry loading to go as fast as possible, and having a trigger on every insert would slow things down.
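If the safety is worth the per-insert cost, such a validation trigger might look like this (a hedged sketch; the tolerance value is an assumption, not from the thread):

```sql
-- Hedged sketch: reject invalid geometries at insert/update time.
CREATE OR REPLACE TRIGGER cola_markets_validate
BEFORE INSERT OR UPDATE OF shape ON cola_markets
FOR EACH ROW
DECLARE
  v_result VARCHAR2(200);
BEGIN
  -- 0.005 is a placeholder tolerance; use your layer's real one.
  v_result := SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(:NEW.shape, 0.005);
  IF v_result <> 'TRUE' THEN
    RAISE_APPLICATION_ERROR(-20001, 'Invalid geometry: ' || v_result);
  END IF;
END;
/
```

With this in place, the ORA-13348 case above would be rejected at INSERT time instead of silently landing in the table.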
You can create a trigger to do the validation as you've described, if you need it. -
Performance of insert with spatial index
I'm writing a test that inserts (using OCI) 10,000 2D point geometries (gtype=2001) into a table with a single SDO_GEOMETRY column. I wrote the code doing the insert before setting up the index on the spatial column, thus I was aware of the insert speed (almost instantaneous) without a spatial index (with layer_gtype=POINT), and noticed immediately the performance drop with the index (> 10 seconds).
Here's the raw timing data of 3 runs in each 3 configuration (the clock ticks every 14 or 15 or 16 ms, thus the zero when it completes before the next tick):
truncate execute commit
no spatial index 0.016 0.171 0.016
no spatial index 0.031 0.172 0.000
no spatial index 0.031 0.204 0.000
index (1000 default for batch size) 0.141 10.937 1.547
index (1000 default for batch size) 0.094 11.125 1.531
index (1000 default for batch size) 0.094 10.937 1.610
index SDO_DML_BATCH_SIZE=10000 0.203 11.234 0.359
index SDO_DML_BATCH_SIZE=10000 0.094 10.828 0.344
index SDO_DML_BATCH_SIZE=10000 0.078 10.844 0.359
As you can see, I played with SDO_DML_BATCH_SIZE to change the default of 1,000 to 10,000, which does improve the commit speed a bit, from 1.5s to 0.35s (pretty good when you only look at these numbers...), but the shocking part is the almost 11s the inserts are now taking, compared to 0.2s without an index: that's a 50x drop in performance!!!
I've looked at my table in SQL Developer, and it has no triggers associated, although there has to be something to mark the index as dirty so that it updates itself on commit.
So where is coming the huge overhead during the insert???
(by insert I mean the time OCIStmtExecute takes to run the array-bind of 10,000 points. It's exactly the same code with or without an index).
Can anyone explain the 50x insert performance drop?
Any suggestion on how to improve the performance of this scenario?
To provide another data point, creating the index itself on a populated table (with the same 10,000 points) takes less than 1 second, which is consistent with the commit speeds I'm seeing, and thus puzzles me all the more regarding this 10s insert overhead...
SQL> set timing on
SQL> select count(*) from within_point_distance_tab;
COUNT(*)
10000
Elapsed: 00:00:00.01
SQL> CREATE INDEX with6CDF1526$point$idx
2 ON within_point_distance_tab(point)
3 INDEXTYPE IS MDSYS.SPATIAL_INDEX
4 PARAMETERS ('layer_gtype=POINT');
Index created.
Elapsed: 00:00:00.96
SQL> drop index WITH6CDF1526$POINT$IDX force;
Index dropped.
Elapsed: 00:00:00.57
SQL> CREATE INDEX with6CDF1526$point$idx
2 ON within_point_distance_tab(point)
3 INDEXTYPE IS MDSYS.SPATIAL_INDEX
4 PARAMETERS ('layer_gtype=POINT SDO_DML_BATCH_SIZE=10000');
Index created.
Elapsed: 00:00:00.98
SQL>
Thanks for your input. We are likely to use partitioning down the line, but what you are describing (partition exchange) is currently beyond my abilities in plain SQL, and how this could be accomplished from an OCI client application without affecting other users, while keeping the transaction boundaries, sounds far from trivial (i.e. can it be made transparent to the client application, and does it require privileges the client does not have?). I'll have to investigate this further, though; this technique sounds like one accessible to a DBA only, not from a plain client app with non-privileged credentials.
The thing that I fail to understand, despite your explanation, is why the slowdown is not entirely on the commit. After all, the documentation for the SDO_DML_BATCH_SIZE parameter of the spatial index implies that the index is updated on commit only, with new rows fed 1,000 or 10,000 at a time to the indexing engine. I do see time being spent during the commit, but it's the geometry insert that slows down the most, and that to me looks quite strange.
It's so much slower that it's as if each geometry were indexed one at a time, when I'm doing a single insert with an array bind (i.e. the equivalent of a bulk operation in PL/SQL). And if so much time is spent during the insert, then why is any time spent during the commit? In my opinion it should be one or the other, but not both. What am I missing? --DD
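For reference, the partition-exchange technique mentioned in the reply is roughly the following (an editorial sketch with hypothetical table and partition names; it assumes the target table is partitioned with a local spatial index, and it requires ALTER privileges on that table):

```sql
-- Hedged sketch: bulk-load into an unindexed staging table,
-- index it offline, then swap it in as a partition.
-- 1. Load the 10,000 points into plain table points_stage (no index).
-- 2. Build the spatial index on the staging table
--    (fast: ~1 second per the timings in the post).
CREATE INDEX points_stage_sidx ON points_stage (point)
  INDEXTYPE IS MDSYS.SPATIAL_INDEX
  PARAMETERS ('layer_gtype=POINT');

-- 3. Swap the loaded, indexed staging table in as a partition
--    of the partitioned target table (a metadata operation).
ALTER TABLE within_point_distance_part
  EXCHANGE PARTITION p_new WITH TABLE points_stage
  INCLUDING INDEXES WITHOUT VALIDATION;
```

The exchange itself is a dictionary operation, so the per-row index maintenance cost disappears; the trade-off is the DDL privileges and the partitioned-table design it presupposes.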