Spatial Insert Performance
I'm running 9.2.0.3EE on W2K.
Ran some simple performance tests...
With a simple non-spatial table (id, lat, lon), I can get inserts up around 12,000 records per second.
I set up a similar table for use with Spatial:
CREATE TABLE test2 (
  id number not null,
  location MDSYS.SDO_GEOMETRY not null,
  constraint pk_test2 primary key (id)
);
When there is no spatial index, I can get about 10,000 inserts per second, similar to the non-spatial table.
After adding a spatial index, performance drops to 135 inserts/second. That's about two orders of magnitude slower. Am I doing something radically wrong here, or is this typical of this product?
Here is the index setup (RTREE Geodetic):
INSERT INTO USER_SDO_GEOM_METADATA
VALUES (
  'test2',
  'location',
  MDSYS.SDO_DIM_ARRAY(
    MDSYS.SDO_DIM_ELEMENT('Longitude', -180, 180, 10),
    MDSYS.SDO_DIM_ELEMENT('Latitude', -90, 90, 10)
  ),
  8307 -- SRID for Lon/Lat WGS84 coordinate system
);
commit;
CREATE INDEX test2_spatial_idx
ON test2(location)
INDEXTYPE IS MDSYS.SPATIAL_INDEX
PARAMETERS('LAYER_GTYPE=POINT');
Any pointers are appreciated!
thanks,
--Peter
Hi,
Recent testing of 10g on HP 4640 hardware (Linux Itanium, 1.5 GHz processors, good disks) yielded insert rates of over 1,300 points per second (single-process insert rate).
Features were put into 10g to enable this increase in performance. On other hardware (testing 9iR2 vs. 10g), 10g was better than 2x as fast as 9iR2. I didn't have an older version of Oracle on this machine, so I couldn't compare insert speeds.
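For bulk loads in situations like Peter's, a commonly suggested workaround (a general sketch, not from this thread; it assumes the test2 table and metadata registered above, and the data source is a placeholder) is to drop the spatial index, direct-path load, and recreate the index afterwards:

```sql
-- Sketch: defer spatial index maintenance during a bulk load.
DROP INDEX test2_spatial_idx;

INSERT /*+ APPEND */ INTO test2 (id, location)
SELECT rownum,
       MDSYS.SDO_GEOMETRY(2001, 8307,
         MDSYS.SDO_POINT_TYPE(-122.4, 37.7, NULL), NULL, NULL)
FROM dual CONNECT BY level <= 100000;   -- placeholder data source
COMMIT;

CREATE INDEX test2_spatial_idx ON test2(location)
  INDEXTYPE IS MDSYS.SPATIAL_INDEX
  PARAMETERS('LAYER_GTYPE=POINT');
```

The trade-off is that the table cannot be queried spatially while the index is absent, so this only fits batch loads, not a steady trickle of inserts.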
Similar Messages
-
Bad INSERT performance when using GUIDs for indexes
Hi,
we use an Oracle 9.2.0.6 db on Win XP Pro. The application (.NET v1.1) uses ODP.NET. All PKs of the tables are GUIDs, represented in Oracle as RAW(16) columns.
When testing with mass data we increasingly see a problem with bad INSERT performance on some tables that contain many rows (~10M). Those tables have a RAW(16) PK and an additional non-unique index, also on a RAW(16) column (both are standard B*tree). A PerfStat report shows that there is much activity on the index tablespace.
When I analyze the related table and its indexes, I see a very high clustering factor.
Is there a way to improve the insert performance in this case? Use another type of index? Generally avoid indexed RAW columns?
Please help.
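One commonly suggested alternative (a general sketch, not from this thread; all names are placeholders) is a sequence-based NUMBER key: monotonically increasing values land in the right-most index leaf blocks, so inserts touch far fewer blocks than randomly distributed GUIDs do:

```sql
-- Placeholder table illustrating a sequence-based surrogate key.
CREATE SEQUENCE demo_seq CACHE 1000;

CREATE TABLE demo_t (
  id      NUMBER PRIMARY KEY,   -- populated from demo_seq
  payload VARCHAR2(100)
);

INSERT INTO demo_t (id, payload)
VALUES (demo_seq.NEXTVAL, 'example row');
```

A reverse-key index, sometimes suggested for index contention, solves the opposite problem (hot right-hand block contention) and would not help with a high clustering factor.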
Daniel

Hi,
After my last tests I conclude the following:
The query returns 1-30 records
Test 1: Using Form Builder
- Execution time 7-8 seconds
Test 2: Using Jdeveloper/Toplink/EJB 3.0/ADF and Oracle AS 10.1.3.0
- Execution time 25-27 seconds
Test 3: Using JDBC/ADF and Oracle AS 10.1.3.0
- Execution time 17-18 seconds
When I use:
session.setLogLevel(SessionLog.FINE) and
session.setProfiler(new PerformanceProfiler())
I don’t see any improvement in the execution time of the query.
Thank you
Thanos -
Hi Experts ,
1. Could someone guide me on what factors impact insert performance in an OLTP application with ~25 concurrent sessions doing 20 inserts/session into table X? (env: Oracle 11g, 3-node RAC, ASSM tablespace; table X is range partitioned)
2. If a storage parameter is not properly set, how do I identify which one needs to be fixed?
Note: current insert performance is 0.02 sec/insert.

Hi Garry,
Thanks for your response.
Some more info regarding the app: DB version 11.2.0.3. Below is the AWR info during peak load for a 1-hour snap. Any suggestions are helpful.
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 18,624M 18,624M Std Block Size: 8K
Shared Pool Size: 3,200M 3,200M Log Buffer: 25,888K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 4.9 0.0 0.01 0.00
DB CPU(s): 0.5 0.0 0.00 0.00
Redo size: 585,778.7 2,339.6
Logical reads: 24,046.6 96.0
Block changes: 2,374.5 9.5
Physical reads: 1,101.6 4.4
Physical writes: 394.6 1.6
User calls: 2,086.6 8.3
Parses: 9.5 0.0
Hard parses: 0.5 0.0
W/A MB processed: 5.8 0.0
Logons: 0.6 0.0
Executes: 877.7 3.5
Rollbacks: 218.6 0.9
Transactions: 250.4
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 99.99
Buffer Hit %: 95.44 In-memory Sort %: 100.00
Library Hit %: 99.81 Soft Parse %: 95.16
Execute to Parse %: 98.92 Latch Hit %: 99.89
Parse CPU to Parse Elapsd %: 92.50 % Non-Parse CPU: 97.31
Shared Pool Statistics Begin End
Memory Usage %: 75.36 74.73
% SQL with executions>1: 90.63 90.41
% Memory for SQL w/exec>1: 83.10 85.49
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Event Waits Time(s) Avg(ms) %DBtime Wait Class
db file sequential read 3,686,200 15,658 4 87.7 User I/O
DB CPU 1,802 10.1
db file parallel read 19,646 189 10 1.1 User I/O
gc current grant 2-way 842,079 145 0 .8 Cluster
gc current block 2-way 425,663 106 0 .6 Cluster -
Jdbc thin driver bulk binding slow insertion performance problem
Hello All,
We have a third-party application reporting slow insertion performance. I traced the session and found that most of the elapsed time for one insert execution is spent on "SQL*Net more data from client". It appears bulk binding is being used, because one execution inserts 200 rows. I am wondering whether this has something to do with their JDBC thin driver (version 10.1.0.2) and our database version 9.2.0.5. Do you have any similar experience with this? What other possible directions should I explore?
Here is the trace report from a 10046 event; I hid the table name for privacy reasons.
Besides, I tested bulk binding in PL/SQL to insert 200 rows in one execution: no problem at all. The network folks confirm that the network should not be an issue either; ping time from the app server to the db server is sub-millisecond and they are in the same data center.
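The PL/SQL bulk-bind comparison test mentioned above looks roughly like this (a sketch; the table and column names are placeholders):

```sql
DECLARE
  TYPE t_num_tab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  l_vals t_num_tab;
BEGIN
  -- stage 200 rows in a collection
  FOR i IN 1 .. 200 LOOP
    l_vals(i) := i;
  END LOOP;
  -- one execution, 200 rows, no per-row round trips
  FORALL i IN 1 .. 200
    INSERT INTO some_table (id) VALUES (l_vals(i));
  COMMIT;
END;
/
```

Since this server-side path is fast, the bottleneck is more likely in how the thin driver streams the bind payload over the network (the "SQL*Net more data from client" waits) than in the insert itself.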
INSERT INTO ...
values
(:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17,
:18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32,
:33, :34, :35, :36, :37, :38, :39, :40, :41, :42, :43, :44, :45)
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.02 14.29 1 94 2565 200
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.02 14.29 1 94 2565 200
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 25
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net more data from client 28 6.38 14.19
db file sequential read 1 0.02 0.02
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
********************************************************************************

I have exactly the same problem. I tried to find out what is going on and changed several JDBC drivers on AIX, but no luck. I have also run the process on my laptop, which produced better and faster performance.
Therefore I made a special (not practical) solution by creating flat files and defining the data as an external table; Oracle reads the data in those files as if it were rows in a table. This gave me very fast insertion into the database, but I am still looking for an answer to your question here. Using Oracle on an AIX machine is a normal business practice followed by a lot of companies, and there must be a solution for this. -
I am experiencing performance problems when inserting a 30 MB XML file into an XMLTYPE column. Under Oracle 11g, with the schema I am using, the minimum time I can achieve is around 9 minutes, which is too long. Can anyone comment on whether this performance is normal, and possibly suggest how it could be improved while retaining the benefits of structured storage? Thanks in advance for the help :)
sorry for the late reply - I didn't notice that you had replied to my earlier post...
To answer your questions in order:
- I am using "structured" storage because I read ( in this article: [http://www.oracle.com/technology/pub/articles/jain-xmldb.html] ) that this would result in higher xquery performance.
- the schema isn't very large but it is complex. ( as discussed in above article )
I built my table by first registering the schema and then adding the xml elements to the table such that they would be stored in structured storage. i.e.
--// Register schema /////////////////////////////////////////////////////////////
begin
  dbms_xmlschema.registerSchema(
    schemaurl => 'fof_fob.xsd',
    schemadoc => bfilename('XFOF_DIR','fof_fob.xsd'),
    local     => TRUE,
    gentypes  => TRUE,
    genbean   => FALSE,
    force     => FALSE,
    owner     => 'FOF',
    csid      => nls_charset_id('AL32UTF8')
  );
end;
/
COMMIT;
and then created the table using ...
--// Create the XCOMP table /////////////////////////////////////////////////////////////
create table "XCOMP" (
"type" varchar(128) not null,
"id" int not null,
"idstr1" varchar(50),
"idstr2" varchar(50),
"name" varchar(255),
"rev" varchar(20) not null,
"tstamp" varchar(30) not null,
"xmlfob" xmltype)
XMLTYPE "xmlfob" STORE AS OBJECT RELATIONAL
XMLSCHEMA "fof_fob.xsd"
ELEMENT "FOB";
No indexing was specified for this table. Then I inserted the offending 30 MB xml file using (in c#, using ODP.NET under .NET 3.5):
void test(string myName, XElement myXmlElem)
{
    OracleConnection connection = new OracleConnection();
    connection.Open();
    string statement = "INSERT INTO XCOMP (\"name\", \"xmlfob\") VALUES (:1, :2)";
    XDocument xDoc = new XDocument(new XDeclaration("1.0", "utf-8", "yes"), myXmlElem);
    OracleCommand insCmd = new OracleCommand(statement, connection);
    OracleXmlType xmlinfo = new OracleXmlType(connection, xDoc.CreateReader());
    insCmd.Parameters.Add(FofDbCmdInsert.Name, OracleDbType.Varchar2, 255);
    insCmd.Parameters.Add(FofDbCmdInsert.Xmldoc, OracleDbType.XmlType);
    insCmd.Parameters[0].Value = myName;
    insCmd.Parameters[1].Value = xmlinfo;
    insCmd.ExecuteNonQuery();
    connection.Close();
}
It took around 9 minutes to execute the ExecuteNonQuery statement, using Oracle 11 Standard Edition running under Windows 2008 x64 with 8 GB RAM and a 2.5 GHz single core (of a quad-core running under VMware).
I would much appreciate any suggestions that could speed up the insert performance here. As a temporary solution I chopped some of the information out of the XML document and stored it separately in another table, but this approach has the disadvantage that using XQuery becomes a bit inflexible, although the performance is now in seconds rather than minutes...
I can't see any reason why Oracle's shredding mechanism should be less efficient than manually shredding the information.
Thanks in advance for any helpful hints you can provide! -
Spatial correlation performance issues
We are on 10.2.0.4, solaris.
We have a layout with one main table (~2million rows) with an sdo_geometry column.
There are 3 other small reference tables (15-200 rows) with reference geometries and spatial indexes.
We have a process that needs to go through each row in the main table (all 2 million) and for each row, compute the sdo_geometry column from other values. Once the geometry is computed, queries are run against the 3 small reference tables to see which rows the new geometry correlate to (and set some flags accordingly). These are just basic relate queries of the form
select <some column>
from reference_table r
where sdo_relate(r.geometry, <plsql variable holding test geometry>, 'MASK=ANYINTERACT') = 'TRUE';
These correlation queries are taking on the order of 0.1-0.3 seconds each; multiply that by 2 million rows and this process takes a very long time. Another issue exacerbating the problem is that the queries seem to slow down over time as rows are processed. If we kill and restart the process (picking up where it left off), it starts off normal again before slowing down.
I'm looking for help with the following if anyone has some ideas:
- What might be causing these queries to gradually slow down over time, or what to look for to track that down.
- Any options that might exist to speed these up. Even if they were all 0.1 seconds each consistently, that's 166 hours (0.1*3*2,000,000) to process just these correlation queries, which isn't great but would be acceptable. Since they can take a little longer (0.2-0.3 seconds) and then slow down over time (even 0.1 degrading to 0.4 makes it 666 hours), it's an issue.
Thanks for any suggestions.

I created two tables: table_a has 259,200 polygons and table_b has 160,000 points, and I modified your code to track the timing of every 100 cycles:
declare
  oidX       NUMBER(38);
  distX      NUMBER;
  i          NUMBER;
  start_time DATE;
  CURSOR curU (idR NUMBER) IS
    select a.objectid oid, sdo_geom.sdo_distance(a.shape,b.shape,0.001) dist
    from table_a a, table_b b
    where b.objectid = idR AND sdo_anyinteract(a.shape,b.shape) = 'TRUE'
    order by sdo_geom.sdo_distance(a.shape,b.shape,0.001);
begin
  i := 0;
  for iX IN (select objectid oid from table_b b) LOOP
    i := i + 1;
    if (i mod 100 = 1) then
      start_time := sysdate;
    end if;
    open curU(iX.oid);
    fetch curU into oidX, distX;
    close curU;
    if (i mod 100 = 0) then
      INSERT INTO stats VALUES (iX.oid, (sysdate - start_time) * 86400);
      commit;
    end if;
  end loop;
end;
/
But I don't see any performance difference between the first cycles and the last cycles.
I also switched table_a and table_b, i.e.
CURSOR curU (idR NUMBER) IS
select a.objectid oid, sdo_geom.sdo_distance(a.shape,b.shape,0.001) dist
from table_b a, table_a b
where b.objectid = idR AND sdo_anyinteract(a.shape,b.shape) = 'TRUE'
order by sdo_geom.sdo_distance(a.shape,b.shape,0.001);
for iX IN (select objectid oid from table_a b) LOOP
and I still don't see any problem in the last runs.
Anyway, let us know on which db version you see this problem.
Edited by: yhu on Oct 8, 2009 7:14 PM -
Suggestions to improve the INSERT performance
Hi All,
I have a table which has 170 columns.
I am inserting a large amount of data, 50K and more records, into this table.
My insert looks like this:
INSERT /*+ APPEND */ INTO REPORT_DATA(COL1,COL2,COL3,COL4,COL5,COL6)
SELECT DATA1,DATA2,DATA3,DATA4,DATA5,DATA5 FROM TXN_DETAILS
WHERE COL1='CA';
Here I want to insert values for only a few columns, hence I specify only those column names in the insert statement.
But when huge data (50k+ rows) is returned by the select query, the statement takes a very long time to execute (approximately 10 to 15 minutes).
Please suggest how to improve this insert statement's performance. I am also using the 'append' hint.
Thanks in advance.

a - Disable/drop indexes and constraints - It's far faster to rebuild indexes after the data load, all at once. Indexes will also rebuild cleaner, and with less I/O, if they reside in a tablespace with a large block size.
b - Manage segment header contention for parallel inserts - Make sure to define multiple freelists (or freelist groups) to remove contention for the table header. Multiple freelists add additional segment header blocks, removing the bottleneck. You can also use Automatic Segment Space Management (http://www.dba-oracle.com/art_dbazine_ts_mgt.htm) (bitmap freelists) to support parallel DML, but ASSM has some limitations.
c - Parallelize the load - You can invoke parallel DML (i.e. using the PARALLEL and APPEND hint) to have multiple inserts into the same table. For this INSERT optimization, make sure to define multiple freelists and use the SQL "APPEND" option. If you submit parallel jobs to insert against the table at the same time, using the APPEND hint may cause serialization, removing the benefit of parallel jobstreams.
d - APPEND into tables - By using the APPEND hint, you ensure that Oracle always grabs "fresh" data blocks by raising the high-water mark for the table. If you are doing parallel insert DML, Append mode is the default and you don't need to specify an APPEND hint. Also, if you're going with APPEND, consider putting the table into NOLOGGING mode, which will allow Oracle to avoid almost all redo logging.
insert /*+ append */ into customer values ('hello',';there');
e - Use a large blocksize - By defining large (i.e. 32k) blocksizes for the target table, you reduce I/O because more rows fit onto a block before a "block full" condition (as set by PCTFREE) unlinks the block from the freelist.
f - Use NOLOGGING
f - RAM disk - You can use high-speed solid state disk (RAM-SAN) to make Oracle inserts run up to 300x faster than platter disk. -
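Points a, c, and d above can be combined into a sketch like this (using the table names from the question; the degree of parallelism is an assumption):

```sql
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(r, 4) */ INTO report_data r (col1, col2, col3)
SELECT data1, data2, data3
FROM txn_details
WHERE col1 = 'CA';

-- A direct-path loaded segment cannot be read in the same session until commit.
COMMIT;
```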
Truncate Table before Insert--Performance
HI All,
This post is in focus of special requirement where a table is truncated before inserting records in the table.
Now, when a table is truncated the High Water Mark (HWM) is reset to the start of the table's segment in the tablespace. After this, would an insert with APPEND boost the performance of the insert query?
In a simple insert, the Oracle engine consults the freelist to look for free space.
But in an insert with APPEND, the engine starts above the HWM. The question is: when truncate has been executed on the table, would the freelist still be used in a simple insert?
I just need to know if there are any benefits of using an APPEND insert on a truncated table, or whether a simple insert would perform the same as an insert with APPEND.
Regards
Nits

Hi,
if you don't need the data, truncate the table. There is no negative impact whether you use a conventional path or a direct path insert.
If you use APPEND, less redo is written for the table if the table is in NOLOGGING mode, but redo is still written for all indexes. I would recommend creating a full backup after that (if needed), because your table will not be recoverable otherwise (no redo information).
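A minimal sketch of the pattern being discussed (table names are placeholders):

```sql
TRUNCATE TABLE target_t;              -- resets the high-water mark

ALTER TABLE target_t NOLOGGING;       -- optional: minimize redo for the table

INSERT /*+ APPEND */ INTO target_t
SELECT * FROM staging_t;              -- direct path: loads above the HWM
COMMIT;
-- Take a backup afterwards if recoverability matters, as noted above.
```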
Dim -
Can insert performance be improved playing with env parameters?
Below are the environment configuration and results of my bulk-load insert experiments. The results are from two scenarios, described below. The values for the two scenarios are separated by a space.
Environment Configuration:
setTxn N
DeferredWrite Y
Sec Bulk Load Y
Post Build SecIndex Y
Sync Y
Column 1 values reflect this scenario:
Two databases
a. Database with 2,500,000 records
b. Database with 2,500,000 records
Column 2 values reflect this scenario:
Two databases
a. Database with 25,000,000 records
b. Database with 25,000,000 records
1. Is there good documentation describing what the environment statistics mean?
2. Looking at the statistics below, can you make any suggestions for performance improvement?
Here are the statistics:
Eviction Stats
nEvictPasses 3929 146066
nNodesSelected 309219 17351997
nNodesScanned 3150809 176816544
nNodesExplicitlyEvicted 152897 8723271
nBINsStripped 156322 8628726
requiredEvictBytes 524323 530566
CheckPoint Stats
nCheckpoints 55 1448
lastCheckpointID 55 1448
nFullINFlush 54 1024
nFullBINFlush 26 494
nDeltaINFlush 116 2661
lastCheckpointStart 0x6f/0x2334f8 0xb6a/0x82fd83
lastCheckpointEnd 0x6f/0x33c2d6 0xb6a/0x8c4a6b
endOfLog 0xb/0x6f22e 0x6f/0x75a843 0xb6a/0x23d8f
Cache Stats
nNotResident 4591918 57477898
nCacheMiss 4583077 57469807
nLogBuffers 3 3
bufferBytes 3145728 3145728
(MB) 3.00 3.00
cacheDataBytes 563450470 370211966
(MB) 537.35 353.06
adminBytes 29880 16346272
lockBytes 1113 1113
cacheTotalBytes 566596198 373357694
(MB) 540.35 356.06
Logging Stats
nFSyncs 59 1452
nFSyncRequest 59 1452
nFSyncTimeouts 0 0
nRepeatFaultReads 31513 6525958
nTempBufferForWrite 0 0
nRepeatIteratorReads 0 0
totalLogSize 1117658932 29226945317
(MB) 1065.88 27872.99
lockBytes 1113 1113

Hello Linda,
I am inserting 25,000,000 records of the type:
Database 1
Key --> Data
[long,String,long] --> [{long,long}, {String}]
The secondary keys are on {long,long} and {String}
Database 2
Key --> Data
[long,Integer,long] --> [{long,long}, {Integer}]
The secondary keys are on {long,long} and {Integer}
I set the env parameters to non-transactional and setDeferredWrite(true).
I use setSecondaryBulkLoad(true) and then build two secondary indexes on {long,long} and {String} of the data portion.
private void buildSecondaryIndex(DataAccessLayer dataAccessLayer) {
    try {
        SecondaryIndex<TDetailSecondaryKey, TDetailStringKey, TDetailStringRecord> secondaryIndex =
            store.getSecondaryIndex(
                dataAccessLayer.getPrimaryIndex(),
                TDetailSecondaryKey.class,
                SECONDARY_KEY_NAME);
    } catch (DatabaseException e) {
        throw new RuntimeException(e);
    }
}
We are inserting to 2 databases as mentioned above.
NumRecs 250,000x2 2,500,000x2 25,000,000x2
TotalTime(ms) 16877 673623 30225781
PutTime(ms) 7684 76636 1065030
BuildSec(ms) 4952 590207 29125773
Sync(ms) 4241 6780 34978

Why does building the secondary index (2 secondary databases in this case) take so much longer than inserting into the primary database - 27 times longer!!!
It's hard to believe that building the tree for the secondary database takes so much longer.
Why doesn't building the tree for the primary database take as long? The data in the primary database is the same as its key, so that it can be searched on these values.
Hence it's surprising it takes so long.
The cache stats mentioned above relate to these runs.
Can you try explaining this? We are trying to figure out whether it is worth building the secondary index later for bulk loading. -
Improve Database adapter insert performance
Hopefully this is an easy question to answer. Over 8,000 records are passed to my BPEL process, and I need to take those records and insert them into an Oracle database. I've been trying to tune the insert by using properties like inMemoryOptimization, but the load still takes several hours. Any suggestions on how to get the Database adapter to perform better, or to load all 8,000 records at once? Thanks in advance.
Hello.
8,000 records doesn't sound "huge" - unless a record is, say, 1 kB, in which case you have 8 MB, which is a large payload to move around in one piece.
A DB merge is typically slower than an insert, though you did say you were using an insert.
If you are inserting each row one at a time that seems like it would be pretty slow.
Normally the input to a DB adapter insert is a collection (of rows) vs. a single row. If you have been handed 8000 individual rows you can assemble them into a collection with an iteration - tedious in BPEL but works fine.
Daren -
Spatial query performance problems
In preparation for making use of spatial data in our Oracle database, I wanted to create a (materialized) view that brings together data from a few different tables into one place, ready for publishing as a WMS layer.
I'm being stumped at first base by the crippling performance of an Oracle Spatial function. Later joins of ordinary fields are OK, but the spatial join of two tables using the following SQL runs for an absurd length of time (I've given up; I don't know how long it actually takes, only that it takes far too long):
SELECT /*+ ordered */
lg.GRIDREF, lg.SYSTEM, lg.PARENT, lg.TYPE,
lrd.REGION_CODE
FROM TABLE (SDO_JOIN('L_GRIDS','BOUNDARY','L_REGION_DEFINITION','BOUNDARY','mask=COVERS')) c,
L_GRIDS lg, L_REGION_DEFINITION lrd
WHERE c.rowid1 = lg.rowid AND c.rowid2 = lrd.rowid
ORDER BY lrd.REGION_CODE
Both tables have spatial indexes. L_REGION_DEFINITION contains 200 rows with complex boundaries stored as spatial objects. L_GRIDS contains 475,000 rows, each with a trivially simple spatial object consisting of a square polygon of 4 points.
The database is 10g patched to latest. The server is dual quad Xeon processors with 16gb of ram. I didn't expect it to be a lightning query, but surely it should be usable?
Any ideas?

Try to upgrade to at least 11.2.0.2 and use the following query:
SELECT /*+ leading(lrd lg) */
lg.GRIDREF, lg.SYSTEM, lg.PARENT, lg.TYPE,
lrd.REGION_CODE
FROM L_GRIDS lg, L_REGION_DEFINITION lrd
WHERE sdo_relate(lg.boundary, lrd.boundary, 'mask=COVEREDBY') = 'TRUE'
ORDER BY lrd.REGION_CODE;
And since I'm not sure about your query's intention, maybe it should be 'mask=INSIDE+COVEREDBY';
please check the Oracle Spatial developer guide for details about the different masks. -
Oracle 10g Merge Insert performance
Hi All,
Performance-wise, is it better to use a regular insert statement or a MERGE (insert-only) statement in Oracle 10g? (No updates are used in this merge statement.)
Thanks for the input.

Thanks for the comment. Here is more info on the insert-only MERGE; I thought Oracle had a reason to add this in 10g:
http://www.oracle-developer.net/display.php?id=310
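For reference, the insert-only MERGE form being compared looks like this (a sketch with placeholder table and column names):

```sql
-- Insert-only MERGE: 10g allows omitting the WHEN MATCHED clause,
-- so only rows absent from the target are inserted.
MERGE INTO target t
USING source_rows s
ON (t.id = s.id)
WHEN NOT MATCHED THEN
  INSERT (t.id, t.val) VALUES (s.id, s.val);
```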
I am looking for the right answer about the performance. -
Hi,
In my database there is one table whose size is 500 MB, and on that table there are 5 indexes (2 are composite indexes).
Through SQL*Loader, 15 to 20 batch jobs are running and inserting into this table, so there is a high insertion rate on this table. PCTFREE of this table is 10% and PCTUSED is the default.
But insertion into this table is taking some time; even 10,000 rows take a long time to insert.
Please help.
Anand

You can improve the performance of SQL*Loader on conventional loads in a number of ways:
*Increase the readsize, I use 20971520 which may be the maximum
*Increase the number of rows per commit to 1000 or even 10000 (default 64)
*Increase the bindsize used to hold the values read from the data file, again I use 20971520
SQL*Loader will use array inserts, so that one INSERT statement will be sent to the database server with many data records in a single round trip, rather than one round trip per data record. This is a big performance boost. Increasing the parameters I have listed will increase the array size, increasing efficiency and reducing the number of separate array inserts issued by SQL*Loader.
Another option to test, is to drop the 5 indexes on the table, load the new data, then recreate the 5 indexes. Without the 5 indexes the load and insert of the new records will happen much faster. And the updating of the indexes could be a cause of contention between multiple, concurrent SQL*Loaders and slowing down the inserts. Depending on how big the table is, it might not take that long to recreate the indexes.
Of course, with triggers spread around your database, you cannot remove the indexes if any of the triggers fired by the data being loaded need them for fast execution. And of course, no other part of the application should be running either.
John -
Question on PL/SQL / Insert Performance
So I have a table (TABLEA) with one column that has approximately 420k records, and I have a second table (TABLEB) that stores data identified by a procedure.
I have a PL/SQL Package with the two procedures.
With the package I pass it two parameters (start and stop number).
execute id_pkg.mrs(0,10000);

These numbers are used to capture information into a CURSOR like this.
I then have another procedure GET_CV that takes the value passed from empi_cur_mrs.ID and loops through each record to populate TABLEB. TABLEB is created with "NOLOGGING".
for empi_cur_mrs in
  (select id from sourcemrns where id > startid and id < endid)
loop
  get_cv(empi_cur_mrs.id);
end loop;

When I run this against my first 10k records it takes approximately 22 seconds to complete. As I move on to the next 10k records (e.g. execute id_pkg.mrs(10000,20000)), the performance begins to drop.
Here is the "insert" code that is stored within the GET_CV procedure.
insert /*+ APPEND */ into TABLEB(fullrec) values(v_fullrecord);

Can anyone provide me with any ideas on why this could be happening?

Okay, this is the basic structure of what I pull and the end result of GET_CV. I want to populate the end result into a table.
MEMBER_TBL (IDENTIFIES MEMBERS FROM ALL SOURCES; INTERNAL ID IS UNIQUE IN MEMBER_TBL AND ID COULD BE THE SAME IN MULTIPLE SOURCES):
INTERNALID SRCID ID
10 100 1200
13 120 3543
14 140 1354
15 300 10980
MEMLINKED_TBL (IDENTIFIES WHICH MEMBERS ARE LINKED)
INTERNALID LINKID
10 10
13 10
14 10
15 12
MEMNAME_TBL (NAMEID: 12=MEMBER NAME, 13=EMERGENCY CONTACT NAME, I ONLY WANT MEMBER NAME): A-active I-inactive
INTERNALID NAMEID NAME STATUS MODIFIED_KEY
10 12 SMITH,JOHN A 222
10 12 SMIT,JON I 099
10 13 JONES,JIM A 222
13 12 SMITH,J A 111
14 12 SMITH,JON A 212
13 13 Thomas,Train A 345

The max MODIFIED_KEY tells us the latest update (e.g. the INTERNALID 10 row with MODIFIED_KEY 222 would be the latest for NAMEID=12).
MEMPHONE_TBL (PHONEID:11 IS HOMEPHONE AND 55=EMERGENCY PHONE)
INTERNALID PHONEID PHONE_AREA PHONE_NUMBER MODIFIED_KEY STATUS
10 11 800 8889999 133123 A
10 11 800 8880000 000001 I
10 55 888 7729999 323431 A
13 11 888 7739999 123243 A
14 55 888 7769999 454534 A

I pass the PL/SQL or SQL the SRCID and the ID; I then need to look at all the members linked to that one and get the latest information.
For example, I pass it an INTERNALID of '10', an ID of '1200' and a SRCID of '100', and expect to get back the LINKID from MEMLINKED_TBL and the most recent NAME and phone from MEMNAME_TBL and MEMPHONE_TBL.
I would get the following:
LINKID NAME HOMEPHONE
10 SMITH,JOHN 8008889999

If I could pull this all together without passing in an "ID", that would be great. I could not figure out how without actually passing it the ID and SRCID.
Thanks for any guidance!
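One way to sketch the "latest row per link" lookup in a single query, using the table and column names described above (untested; the join through MEMLINKED_TBL twice and the 'A' status filters are assumptions based on the sample data):

```sql
select lm.linkid,
       max(n.name) keep (dense_rank last order by n.modified_key) as name,
       max(p.phone_area || p.phone_number)
         keep (dense_rank last order by p.modified_key)           as homephone
from member_tbl m
join memlinked_tbl lm  on lm.internalid = m.internalid   -- link of the member passed in
join memlinked_tbl grp on grp.linkid = lm.linkid         -- all members sharing that link
left join memname_tbl n
       on n.internalid = grp.internalid
      and n.nameid = 12 and n.status = 'A'               -- member name only, active
left join memphone_tbl p
       on p.internalid = grp.internalid
      and p.phoneid = 11 and p.status = 'A'              -- home phone only, active
where m.srcid = 100 and m.id = 1200
group by lm.linkid;
```

The KEEP (DENSE_RANK LAST ORDER BY modified_key) aggregate picks the value from the row with the highest MODIFIED_KEY, which is how "latest" is defined in the question.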