Tuning an INSERT SQL that inserts a million rows using a full table scan
Hi Experts,
I am on Oracle 11.2.0.3 on Linux. I have a SQL statement that inserts data into a history/archive table from a main application table based on date. The application table has 3 million rows in it, and all rows that are older than 6 months should go into the history/archive table. This was recently decided, and we have 1 million rows that satisfy this criterion. This insert into the archive table is taking about 3 minutes. The explain plan shows a full table scan on the main table, which is the right thing, as we are pulling 1 million rows out of the main table into the history table.
My question is: is there a way I can make this SQL go faster?
Here is the query and its trace output (I changed the table names etc.):
INSERT INTO EMP_ARCH
SELECT *
FROM EMP M
where HIRE_date < (sysdate - :v_num_days);
call     count   cpu    elapsed    disk    query   current     rows
Parse        2   0.00      0.00       0        0         0        0
Execute      2  96.22    165.59   92266   147180   8529323  1441230
Fetch        0   0.00      0.00       0        0         0        0
total        4  96.22    165.59   92266   147180   8529323  1441230
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 166
Rows Row Source Operation
1441401 TABLE ACCESS FULL EMP (cr=52900 pr=52885 pw=0 time=21189581 us)
I heard that there is a way to use the opt_param hint to increase the multiblock read count, but it didn't seem to work for me. I will be thankful for suggestions on this. Also, can collections and changing this to PL/SQL make it faster?
Thanks,
OrauserN
Also, I wish experts would share their insight on how to make a full table scan go faster (apart from the parallel suggestion, I mean).
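For what it's worth, the standard pattern for speeding up a bulk archive insert like this is a direct-path load. A minimal sketch, using the table and bind names from the post above; the NOLOGGING step is an assumption about what your DBA will allow, and it requires a backup afterwards because direct-path blocks written under NOLOGGING are not recoverable from redo:

```sql
-- Assumption: EMP_ARCH has no triggers, and its indexes/constraints
-- do not block a direct-path (APPEND) insert.
ALTER TABLE emp_arch NOLOGGING;

INSERT /*+ APPEND */ INTO emp_arch
SELECT *
FROM   emp m
WHERE  m.hire_date < (SYSDATE - :v_num_days);

COMMIT;  -- a direct-path insert must be committed before the table is queried again

ALTER TABLE emp_arch LOGGING;  -- restore logging, then take a backup
```

The APPEND hint writes above the high-water mark and bypasses the buffer cache for the target, which mainly attacks the insert side of the statement rather than the full scan itself.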
Please make up your mind about what question you actually want help with.
First you said you want help making the INSERT query go faster but the rest of your replies, including the above statement, imply that you are obsessed with making full table scans go faster.
You also said:
our management would like us to come forth with the best way to do it
But when people make suggestions you make replies about you and your abilities:
I do not have the liberty to do the "alter session enable parallel dml". I have to work within these constraints.
Does 'management' want the best way to do whichever question you are asking?
Or is it just YOU that wants the best way (whatever you mean by best) based on some unknown set of constraints?
As SB already said, you clearly do NOT have an actual problem, since you have already completed the task of inserting the data, several times in fact. So the time it takes to do it is irrelevant.
There is no universal agreement on what the word 'best' means for any given use case and you haven't given us your definition either. So how would we know what might be 'best'?
So until you provide the COMPLETE list of constraints you are just wasting our time asking for suggestions that you refute with a comment about some 'constraint' you have.
You also haven't provided ANY information that indicates that it is the full table scan that is the root of the problem. It is far more likely to be the INSERT into the table and a simple use of NOLOGGING with an APPEND hint might be all that is needed.
IMHO the 'best' way would be to partition both the source and target tables and just use EXCHANGE PARTITION to move the data. That operation would only take a millisecond or two.
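For the record, the EXCHANGE PARTITION approach looks roughly like this. It assumes the source table is range-partitioned by HIRE_DATE; the partition and staging-table names below are hypothetical, not from the original post:

```sql
-- Hypothetical setup: EMP is range-partitioned by hire_date, partition P_OLD
-- holds the rows older than six months, and EMP_STAGE is a non-partitioned
-- table with an identical column layout.
ALTER TABLE emp
  EXCHANGE PARTITION p_old WITH TABLE emp_stage
  INCLUDING INDEXES WITHOUT VALIDATION;
-- The exchange is a data-dictionary swap, not a row copy, so it completes
-- almost instantly regardless of how many rows the partition holds.
```

Of course, this only works if both tables can be (re)built with matching structures up front; it is not a drop-in change to an existing unpartitioned table.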
But, let me guess, that would not conform to one of your 'unknown' constraints would it?
Similar Messages
-
Find out the SQLs which are using a full table scan
Hello all, how can I find out the queries which are using a full table scan? Any idea?
In general, though, why would you want to tune SQL statements that aren't causing problems? Statspack will tell you what the most resource-intensive SQL statements on your system are. A SQL*Net trace of sessions that are performing poorly will indicate which statements are the most resource-intensive for that session. If a statement is incorrectly doing a full-table scan, but it is not causing a problem, why spend time tuning it? If you're not focusing your tuning attention on identifying statements that are causing problems, you'll also miss out on 90% of tuning opportunities which involve rewriting (or eliminating) code to make it more efficient. I can simulate a join on two tables with nested cursor loops, which won't generate a single full table scan, but replacing that code with a real join, while it will cause at least one full table scan, will be orders of magnitude faster.
As an aside, full table scans aren't necessarily a bad thing. If a statement needs to retrieve more than a couple percent of the rows of a table, full table scans are the most efficient way to go.
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
Inserting 10 million rows in to a table hangs
Hi, through TOAD I am using a simple FOR loop to insert 10 million rows into a table by saying
for i in 1 .. 10000000
insert.................
It hangs for a long time.
Is there a better way to insert the rows into the table?
I have to test for performance, and I have to insert 50 million rows into its child table.
Practically, when the code moves to production it will have this many rows (maybe more also); that's why I have to test with this many rows.
Please suggest a better way for this.
Regards
raj
Must be a 'hardware thing'.
My ancient desktop (pentium IV, 1.8 Ghz, 512 MB), running XE, needs:
MHO%xe> desc t
Name Null? Type
N NUMBER
A VARCHAR2(10)
B VARCHAR2(10)
MHO%xe> insert /*+ APPEND */ into t
2 with my_data as (
3 select level n, 'abc' a, 'def' b from dual
4 connect by level <= 10000000
5 )
6 select * from my_data;
10000000 rows created.
Elapsed: 00:04:09.71
MHO%xe> drop table t;
Table dropped.
Elapsed: 00:00:31.50
MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
Table created.
Elapsed: 00:00:01.04
MHO%xe> insert into t
2 with my_data as (
3 select level n, 'abc' a, 'def' b from dual
4 connect by level <= 10000000
5 )
6 select * from my_data;
10000000 rows created.
Elapsed: 00:02:44.12
MHO%xe> drop table t;
Table dropped.
Elapsed: 00:00:09.46
MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
Table created.
Elapsed: 00:00:00.15
MHO%xe> insert /*+ APPEND */ into t
2 with my_data as (
3 select level n, 'abc' a, 'def' b from dual
4 connect by level <= 10000000
5 )
6 select * from my_data;
10000000 rows created.
Elapsed: 00:01:03.89
MHO%xe> drop table t;
Table dropped.
Elapsed: 00:00:27.17
MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
Table created.
Elapsed: 00:00:01.15
MHO%xe> insert into t
2 with my_data as (
3 select level n, 'abc' a, 'def' b from dual
4 connect by level <= 10000000
5 )
6 select * from my_data;
10000000 rows created.
Elapsed: 00:01:56.10
Yea, 'cached' it a bit (of course ;) )
But the APPEND hint seems to shave about 50 sec off anyway (using NO indexes at all) on my 'configuration'. -
Entity Framework Generated SQL for paging or using Linq skip take causes full table scans.
The SQL generated creates queries that cause a full table scan for pagination. Is there any way to fix this?
I am using
ODP.NET ODTwithODAC1120320_32bit
ASP.NET 4.5
EF 5
Oracle 11gR2
This table has 2 million records. The further into the records you page the longer it takes.
LINQ
var cnt = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
select errorLog).Count();
var query = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
orderby errorLog.ERR_LOG_ID
select errorLog).Skip(cnt-10).Take(10).ToList();
Here is the query & execution plans.
SELECT *
FROM (SELECT "Extent1"."ERR_LOG_ID" AS "ERR_LOG_ID",
"Extent1"."SRV_LOG_ID" AS "SRV_LOG_ID",
"Extent1"."TS" AS "TS",
"Extent1"."MSG" AS "MSG",
"Extent1"."STACK_TRACE" AS "STACK_TRACE",
"Extent1"."MTD_NM" AS "MTD_NM",
"Extent1"."PRM" AS "PRM",
"Extent1"."INSN_ID" AS "INSN_ID",
"Extent1"."TS_1" AS "TS_1",
"Extent1"."LOG_ETRY" AS "LOG_ETRY"
FROM (SELECT "Extent1"."ERR_LOG_ID" AS "ERR_LOG_ID",
"Extent1"."SRV_LOG_ID" AS "SRV_LOG_ID",
"Extent1"."TS" AS "TS",
"Extent1"."MSG" AS "MSG",
"Extent1"."STACK_TRACE" AS "STACK_TRACE",
"Extent1"."MTD_NM" AS "MTD_NM",
"Extent1"."PRM" AS "PRM",
"Extent1"."INSN_ID" AS "INSN_ID",
"Extent1"."TS_1" AS "TS_1",
"Extent1"."LOG_ETRY" AS "LOG_ETRY",
row_number() OVER (ORDER BY "Extent1"."ERR_LOG_ID" ASC) AS "row_number"
FROM (SELECT "ERRORLOGANDSERVICELOG_VIEW"."ERR_LOG_ID" AS "ERR_LOG_ID",
"ERRORLOGANDSERVICELOG_VIEW"."SRV_LOG_ID" AS "SRV_LOG_ID",
"ERRORLOGANDSERVICELOG_VIEW"."TS" AS "TS",
"ERRORLOGANDSERVICELOG_VIEW"."MSG" AS "MSG",
"ERRORLOGANDSERVICELOG_VIEW"."STACK_TRACE" AS "STACK_TRACE",
"ERRORLOGANDSERVICELOG_VIEW"."MTD_NM" AS "MTD_NM",
"ERRORLOGANDSERVICELOG_VIEW"."PRM" AS "PRM",
"ERRORLOGANDSERVICELOG_VIEW"."INSN_ID" AS "INSN_ID",
"ERRORLOGANDSERVICELOG_VIEW"."TS_1" AS "TS_1",
"ERRORLOGANDSERVICELOG_VIEW"."LOG_ETRY" AS "LOG_ETRY"
FROM "IDS_CORE"."ERRORLOGANDSERVICELOG_VIEW" "ERRORLOGANDSERVICELOG_VIEW") "Extent1") "Extent1"
WHERE ("Extent1"."row_number" > 1933849)
ORDER BY "Extent1"."ERR_LOG_ID" ASC)
WHERE (ROWNUM <= (10))
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 31750 | | 821K (1)| 02:44:15 |
|* 1 | COUNT STOPKEY | | | | | | |
| 2 | VIEW | | 1561K| 4728M| | 821K (1)| 02:44:15 |
|* 3 | VIEW | | 1561K| 4748M| | 821K (1)| 02:44:15 |
| 4 | WINDOW SORT | | 1561K| 3154M| 4066M| 821K (1)| 02:44:15 |
|* 5 | HASH JOIN OUTER | | 1561K| 3154M| | 130K (1)| 00:26:09 |
| 6 | TABLE ACCESS FULL| IDS_SERVICES_LOG | 1047 | 52350 | | 5 (0)| 00:00:01 |
| 7 | TABLE ACCESS FULL| IDS_SERVICES_ERROR_LOG | 1561K| 3080M| | 130K (1)| 00:26:08 |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=10)
3 - filter("Extent1"."row_number">1933849)
5 - access("T1"."SRV_LOG_ID"(+)="T2"."SRV_LOG_ID")
I did try a sample from Stack Overflow that would apply it to all string types, but I didn't see any difference in the query results. Please note, I am having the problem without any ORDER BY or WHERE statements; of course the Skip/Take generates them. Please advise how I would implement the EntityFunctions.AsNonUnicode method with this Linq query.
LINQ
var cnt = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
select errorLog).Count();
var query = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
orderby errorLog.ERR_LOG_ID
select errorLog).Skip(cnt-10).Take(10).ToList();
This is what I inserted into my model to hopefully fix it. From: "c# - EF Code First - Globally set varchar mapping over nvarchar" on Stack Overflow:
/// <summary>
/// Change the "default" of all string properties for a given entity to varchar instead of nvarchar.
/// </summary>
/// <param name="modelBuilder"></param>
/// <param name="entityType"></param>
protected void SetAllStringPropertiesAsNonUnicode(
    DbModelBuilder modelBuilder,
    Type entityType)
{
    var stringProperties = entityType.GetProperties().Where(
        c => c.PropertyType == typeof(string)
             && c.PropertyType.IsPublic
             && c.CanWrite); // the original post is cut off here; predicate closed so it compiles
    // (the rest of the method body was truncated in the original post)
} -
URGENT HELP Required: Solution to avoid Full table scan for a PL/SQL query
Hi Everyone,
When I checked the EXPLAIN PLAN for the below SQL query, I saw that Full table scans is going on both the tables TABLE_A and TABLE_B
UPDATE TABLE_A a
SET a.current_commit_date =
(SELECT MAX (b.loading_date)
FROM TABLE_B b
WHERE a.sales_order_id = b.sales_order_id
AND a.sales_order_line_id = b.sales_order_line_id
AND b.confirmed_qty > 0
AND b.data_flag IS NULL
OR b.schedule_line_delivery_date >= '23 NOV 2008')
Though TABLE_A is a small table having nearly 100,000 (1 lakh) records, TABLE_B is a huge table, having nearly 25 million (2.5 crore) records.
I created an index on TABLE_B having all its fields used in the WHERE clause. But still, the explain plan shows a FULL TABLE SCAN only.
When I run the query, it takes a very long time to execute (more than 1 day) and each time I have to kill the session.
Please please help me in optimizing this.
Thanks,
Sudhindra
Check the instructions again; you're leaving out information we need in order to help you, like optimizer information.
- Post your exact database version, that is: the result of select * from v$version;
- Don't use TOAD's execution plan, but use
SQL> explain plan for <your_query>;
SQL> select * from table(dbms_xplan.display);
(You can execute that in TOAD as well.)
Don't forget you need to use the {noformat}{noformat} tag in order to post formatted code/output/execution plans etc.
It's also explained in the instruction.
When was the last time statistics were gathered for table_a and table_b?
You can find out by issuing the following query:
select table_name
, last_analyzed
, num_rows
from user_tables
where table_name in ('TABLE_A', 'TABLE_B');
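If LAST_ANALYZED turns out to be old or NULL, statistics can be refreshed with DBMS_STATS before re-checking the plan. A minimal sketch; the schema name is a placeholder:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'YOUR_SCHEMA',               -- placeholder, use the owning schema
    tabname          => 'TABLE_B',
    cascade          => TRUE,                        -- also gather index statistics
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE  -- let Oracle pick the sample size
  );
END;
/
```

Stale or missing statistics are one of the most common reasons the optimizer picks a full scan where an index would have been cheaper.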
Can you also post the results of these counts:
select count(*)
from table_b
where confirmed_qty > 0;
select count(*)
from table_b
where data_flag is null;
select count(*)
from table_b
where schedule_line_delivery_date >= /* assuming you're using a date, and not a string*/ to_date('23 NOV 2008', 'dd mon yyyy'); -
Finding the Text of SQL Query causing Full Table Scans
Hi,
does anyone have a sql script, that shows the complete sql text of queries that have caused a full table scan?
Please also let me know how soon this script needs to be run; that is, does it work only while the query is running, or would it work once it completes (and if so, is there a valid duration, such as until the next restart, etc.)?
Your help is appreciated.
Thx,
Mayuran
Finding the Text of SQL Query Causing Full Table Scan
-
Finding the Text of SQL Query Causing Full Table Scan
Hi,
does anyone have a sql script, that shows the complete sql text of queries that have caused a full table scan?
Please also let me know how soon this script needs to be run; that is, does it work only while the query is running, or would it work once it completes (and if so, is there a valid duration, such as until the next restart, etc.)?
Your help is appreciated.
Thx,
Mayuran
You might try something like this:
select sql_text,
object_name
from v$sql s,
v$sql_plan p
where s.address = p.address and
s.hash_value = p.hash_value and
s.child_number = p.child_number and
p.operation = 'TABLE ACCESS' and
p.options = 'FULL' and
p.object_owner in ('SCOTT')
;
Please note that this query is just a snapshot of the SQL statements currently in the cache. -
Update Query is Performing Full table Scan of 1 Millions Records
Hello everybody, I have one update query:
UPDATE tablea SET
task_status = 12
WHERE tablea.link_id >0
AND tablea.task_status <> 0
AND tablea.event_class='eventexception'
AND Exists(SELECT 1 from tablea ltask where ltask.task_id=tablea.link_id
AND ltask.task_status = 0)
When I do explain plan it shows following result...
Execution Plan
0 UPDATE STATEMENT Optimizer=CHOOSE
1 0 UPDATE OF 'tablea'
2 1 FILTER
3 2 TABLE ACCESS (FULL) OF 'tablea'
4 2 TABLE ACCESS (BY INDEX ROWID) OF 'tablea'
5 4 INDEX (UNIQUE SCAN) OF 'PK_tablea' (UNIQUE)
Now tablea may have more than 10 MILLION records. This would take a hell of a lot of time even if it only has to
update 2 records. Please suggest some optimal solutions.
Regards
Mahesh
I see your point, but my question (or logic) says: I have an index on all columns used in the WHERE clauses, so I find no reason for Oracle to do a full table scan.
UPDATE tablea SET
task_status = 12
WHERE tablea.link_id >0
AND tablea.task_status <> 0
AND tablea.event_class='eventexception'
AND Exists(SELECT 1 from tablea ltask where ltask.task_id=tablea.link_id
AND ltask.task_status = 0)
I am clearly stating where task_status <> 0 and event_class = something and tablea.link_id > 0,
so the ideal case for the optimizer should be:
Step 1) Select all the rowids having this condition.
Step 2) For each row in those rowids, get all the rows where task_status = 0
and where task_id = the link_id of the rowid selected above.
Step 3) While looping over each rowid, if it finds any matching row from ltask in step 2, update that record.
I want to have this kind of plan. Does anyone know how to make Oracle obtain this kind of plan?
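One thing worth testing here is a pair of composite indexes that cover the driving predicates, so the optimizer has a realistic alternative to the full scan. This is a sketch, not a guaranteed fix: the column order is a guess from the WHERE clause, the `task_status <> 0` inequality cannot be used as an index access predicate anyway, and whether either index is chosen depends on how selective EVENT_CLASS and the EXISTS lookup actually are:

```sql
-- Hypothetical index to support the outer predicates of the UPDATE
-- (event_class equality first, then the range/inequality columns).
CREATE INDEX tablea_evt_stat_link_ix
  ON tablea (event_class, task_status, link_id);

-- Hypothetical index to support the EXISTS subquery lookup by task_id.
CREATE INDEX tablea_taskid_status_ix
  ON tablea (task_id, task_status);
```

After creating them, re-gather statistics and re-check the plan; if the predicates still match a large fraction of the table, the optimizer is right to keep the full scan.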
It is true a FULL TABLE SCAN is harmful, or at least no better than an index scan. -
Locate SQL causes full table scans from Statspack
Hello,
In my Statspack reports I see a lot of full table scans (1,425,297).
How can I locate the query that causes this ?
stats$sql_plan should fit?
Oracle is 9i
Thank you
> How can I locate the query that causes this?
It can be hard. One idea is to put comments in queries identifying where they come from, something like
select /* my_package.my_procedure */ *
from dual;
The comment should remain with the SQL text, so the various reports showing the SQL text will also indicate where the query came from. -
Full table scans : sql or server?
Thanks for taking my question!
We have 3 Oracle servers (10.2.0.3). Each server has the same tables, data, and indexes, but each can start acting different for no apparent reason. SQL that has run for months goes bad on our production and test servers while the production's backup server works fine.
In this case, I can fix the problem (as described below) but don’t understand what went wrong or why my change fixed the problem. Each server collects stats nightly so indexes and tables are not stale.
Anyone have any ideas why the SQL breaks?
Thank You!
Kathie
BACKGOUND:
SQL works fine on 1 server and not on the other 2 servers (as I said before, same data and indexes on each server, and the SQL has not changed on any of the servers).
TO FIX FULL TABLE SCAN:
MOVED: JOIN name_record n ON s.sr_key = n.nr_key AND n.nr_inactive_flag = 0
ADDED INDEX: (nr_key, nr_inactive_flag)
We already had an index with nr_key (which is unique), so it doesn't make sense to add the new index, but if I remove the index, the full table scan comes back.
*THIS IS THE ORIGINAL STATEMENT
SELECT ...sch.sch_name, sch.sch_state, v1.val_9pos AS sr_class_9,
v2.val_25pos AS sr_hsg_status_25, v3.val_25pos AS sr_hsg_list_25,
v4.val_9pos AS sr_selective_svc_9,
v5.val_madison AS sr_direct_deposit_10,
v6.val_3pos AS sr_for_lang_comp_3,
vtm1.vtm_13_display AS sr_matriculation_13,
vtm2.vtm_13_display AS last_ug_term,
vtm3.vtm_13_display AS last_g_term
FROM student_record s
JOIN user_names usr ON s.sr_public_w_check = usr.usr_key AND usr.usr_user_name = 'StudentUserID'
JOIN name_record n ON s.sr_key = n.nr_key AND n.nr_inactive_flag = 0
JOIN value_terms vtm1 on s.sr_matriculation = vtm1.vtm_term
JOIN value_terms vtm2 ON s.sr_last_ug_term = vtm2.vtm_term
JOIN value_terms vtm3 ON s.sr_last_g_term = vtm3.vtm_term
JOIN value_master v1
ON v1.val_field = 205003 AND s.sr_class = v1.val_value_number
JOIN value_master v2
ON v2.val_field = 201204 AND s.sr_hsg_status = v2.val_value_number
JOIN value_master v3
ON v3.val_field = 201202 AND s.sr_hsg_list = v3.val_value_number
JOIN value_master v4
ON v4.val_field = 201173 AND s.sr_selective_svc = v4.val_value_number
JOIN value_master v5
ON v5.val_field = 201407 AND s.sr_direct_deposit = v5.val_value_number
JOIN value_master v6
ON v6.val_field = 201154 AND s.sr_for_lang_comp = v6.val_value_number
to here <<<< LEFT JOIN school_records sch ON s.sr_high_school = sch.SCH_TYPE_AND_CODE
Schizophrenic SQL or server?
C. -
Performance of an sql query with 16 million rows
Hello,
I have about 16 million records in a project table, lets call it the ENCOUNTER table.
I have an ENCOUNTER_NUMBER to be searched for which is VARCHAR2(30) and has a unique constraint on it. The results are to be ordered by ENCOUNTER_NUMBER.
The search string that the user enters is always considered to be a contains search and so a % is prefixed and suffixed to the search string.
I have created a functional index on UPPER(ENCOUNTER_NUMBER)
More the number of matches to the search query, the less the time taken. If there are 100 matches or more, the query returns within a second.
If there is only one match it takes at least 10 seconds.
I am not sure at this stage why there is huge difference. Our client would be happy with 5 seconds.
Can anyone help me with this please, if at all it is possible to get the search quicker
Any help would be greatly appreciated.
Many thanks
Marilyn
You are seeing differing response times to fetch the first n rows, not all the rows. So if your search is 'less selective', then you need to search fewer rows to find the first n rows meeting your criteria.
For example, if you have a search that 1% of your rows will match, then you probably need to scan about 10,000 rows to find the first 100 matching rows. If you specify a more selective predicate, matching 0.1% of the rows, then you will need to scan 100k rows to find your first 100 matches.
A generic 'contains' query, such as you describe, is difficult to implement with good performance. And it is often the result of someone not giving sufficient thought to the requirements: 'Well, it's too hard to figure out how people really need to use it, so I'll ask for this and I'll be covered regardless of what people really need.'
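If the double-wildcard requirement genuinely cannot be negotiated away, an Oracle Text index is the usual escape hatch for contains-style searches on a large table. A rough sketch using the standard CONTEXT index type with a substring-capable wordlist; the preference, index, table, and column names here are hypothetical stand-ins for the ENCOUNTER example:

```sql
-- Hypothetical: create a wordlist preference that supports leading wildcards.
BEGIN
  CTX_DDL.CREATE_PREFERENCE('enc_wordlist', 'BASIC_WORDLIST');
  CTX_DDL.SET_ATTRIBUTE('enc_wordlist', 'SUBSTRING_INDEX', 'YES');
END;
/

CREATE INDEX encounter_num_ctx ON encounter (encounter_number)
  INDEXTYPE IS CTXSYS.CONTEXT
  PARAMETERS ('WORDLIST enc_wordlist');

-- A double-wildcard search then goes through the Text index
-- instead of scanning the table or b-tree index.
SELECT *
FROM   encounter
WHERE  CONTAINS(encounter_number, '%1234%') > 0
ORDER  BY encounter_number;
```

The trade-offs are index size and maintenance (a CONTEXT index must be synchronized after DML), so this is worth prototyping against realistic data volumes before committing to it.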
Can you negotiate/narrow down the requirements, such as:
- allow leading wildcards or trailing wildcards but not both
- allow/require other search criteria to be specified (not more double-wildcarded criteria, though)
- allow searching specific positions of the column (for example, if positions 4-6 have some specific meaning, then allow a search on that substring) -
SQL Performance causing full table scan
I have this SQL:
SELECT DISTINCT
UPPER (RTRIM (LTRIM (SS.PRESCDEAID))) PRESCRIBER,
UPPER (RTRIM (LTRIM (SS.NPIPRESCR))) NPI_NUMBER
FROM
PBM_SXC_STAGING SS,
PBM_PHYSICIANS P
WHERE
P.PHYSICIAN_ID = SS.PRESCDEAID
AND P.NPI_NUMBER <> SS.NPIPRESCR
AND SS.NPIPRESCR <> SS.PRESCDEAID
Uses this plan:
SELECT STATEMENT ALL_ROWSCost: 13,843 Bytes: 3,636,232 Cardinality: 106,948
4 SORT UNIQUE Cost: 13,843 Bytes: 3,636,232 Cardinality: 106,948
3 HASH JOIN Cost: 12,866 Bytes: 3,636,232 Cardinality: 106,948
1 TABLE ACCESS FULL TABLE PBM.PBM_PHYSICIANS Cost: 4,156 Bytes: 17,639,063 Cardinality: 1,356,851
2 INDEX FAST FULL SCAN INDEX PBM.SXCSTG_IDX1 Cost: 3,859 Bytes: 43,302,882 Cardinality: 2,062,042
If I comment out "AND P.NPI_NUMBER <> SS.NPIPRESCR" I get this plan that uses the PK index (PBM.PBM_PHYSICIAN_PK) that is on P.PHYSICIAN_ID. I do have an index on P.NPI_NUMBER
SELECT STATEMENT ALL_ROWSCost: 27,230 Bytes: 64,514,496 Cardinality: 2,016,078
4 SORT UNIQUE Cost: 27,230 Bytes: 64,514,496 Cardinality: 2,016,078
3 HASH JOIN Cost: 9,617 Bytes: 64,514,496 Cardinality: 2,016,078
1 INDEX FAST FULL SCAN INDEX (UNIQUE) PBM.PBM_PHYSICIAN_PK Cost: 1,035 Bytes: 14,925,361 Cardinality: 1,356,851
2 INDEX FAST FULL SCAN INDEX PBM.SXCSTG_IDX1 Cost: 3,859 Bytes: 43,302,882 Cardinality: 2,062,042
Sorry for the delay, I was out of the office.
PLAN_TABLE_OUTPUT
SQL_ID 4j270u8fbhwpu, child number 0
SELECT /*+ gather_plan_statistics */ DISTINCT upper
(rtrim (ltrim (ss.prescdeaid))) prescriber ,upper (rtrim (ltrim
(ss.npiprescr))) npi_number FROM pbm_sxc_staging ss ,pbm_physicians
p WHERE p.physician_id = ss.prescdeaid AND p.npi_number !=
ss.npiprescr AND ss.npiprescr != ss.prescdeaid
Plan hash value: 2275909877
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 1 | HASH UNIQUE | | 1 | 125K| 68 |00:00:01.54 | 24466 | 14552 | 1001K| 1001K| 1296K (0)|
|* 2 | HASH JOIN | | 1 | 125K| 6941 |00:00:01.14 | 24466 | 14552 | 47M| 6159K| 68M (0)|
| 3 | TABLE ACCESS FULL | PBM_PHYSICIANS | 1 | 1341K| 1341K|00:00:00.01 | 14556 | 14552 | | | |
|* 4 | INDEX FAST FULL SCAN| SXCSTG_IDX1 | 1 | 1872K| 1887K|00:00:00.01 | 9910 | 0 | | | |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
2 - access("P"."PHYSICIAN_ID"="SS"."PRESCDEAID")
filter("P"."NPI_NUMBER"<>"SS"."NPIPRESCR")
4 - filter("SS"."NPIPRESCR"<>"SS"."PRESCDEAID")
Edited by: Chris on Jul 12, 2011 8:19 AM -
1 million row full table scan in less than one minute?
All,
a customer has a table which is around 1 million rows.
The table size is 300 MB.
The customer wants to calculate regular arithmetic operations,
and synthetic data on the whole table. He will use 'group by'
statements and 'sort by'.
He wants to know if it would be possible to get the results in
less than one minute, and what would be the necessary
architecture in order to achieve this?
He currently has a 4 Gb Ram machine, 4 processors (I think...).
It's a 7.3 database.
Would an upgrade to 8i or 9i help? Would partitioning help?
Would parallel query help? What else?
There will be no other concurrent requests on the system.
Thanks!
Hi
If you have four processors, parallel query will increase the speed of executing your statement. It will work better with partitions. In version 7.3 it is not very easy to manage partitions, because you do not have partitioned tables, only partitioned views. Maybe the speed is the same, but management of partition views is not so easy. So in versions 8i and 9i it will be better.
If you have an index on the GROUP BY columns and they are NOT NULL, that is an additional way of increasing speed, because you will eliminate the sort operation.
Regards -
Regd: full table scan in sql query
Hi friends,
I have one query below:
SELECT wm_concat (d.cp_hrchy_desc) LOCATION
FROM am_tm_chnl_prtnr_hrchy d, am_tl_user_hrchy e
WHERE d.pk_cp_hrchy_id = e.fk_cp_hrchy_id
AND e.fk_ent_mst_customers = 13754
It's giving me a 'Full Table Scan' on table am_tm_chnl_prtnr_hrchy. It's just a simple query.
I want to remove the full scan.
see below plan of above query.
Plan
SELECT STATEMENT ALL_ROWSCost: 10 Bytes: 18 Cardinality: 1
4 SORT AGGREGATE Bytes: 18 Cardinality: 1
3 NESTED LOOPS Cost: 10 Bytes: 1,314 Cardinality: 73
1 TABLE ACCESS FULL TABLE BTSTARGET.AM_TM_CHNL_PRTNR_HRCHY
Cost: 9 Bytes: 50,831 Cardinality: 4,621
2 INDEX UNIQUE SCAN INDEX (UNIQUE) BTSTARGET.PK_USER_HRCHY_01 Cost: 0 Bytes: 7 Cardinality: 1
A full table scan is not evil. Oracle selects a full table scan only when it is optimal.
So now tell us how many rows you are selecting from the am_tm_chnl_prtnr_hrchy table. Give the percentage of data fetched from this table.
If it is high, say 90%, then a FTS is a good option. Also, do you have any index defined on the table am_tm_chnl_prtnr_hrchy? If yes, give us the details.
See the below threads; they will give you a nice idea of how to post a performance-related question.
{thread:id=501834} and {thread:id=863295}. -
JBO-26041: Failed to post data to database during "Insert": SQL Statement "
Dear All,
I am trying to insert data into a custom table and am getting the following error. Please help me to resolve the issue.
I have created one custom table in APPS schema having the primary key and History Columns.
Created the EO based on the custom table and generate the create method.
Created the VO based on the EO and generated the VOImpl, RowImpl Java Files.
I am using the PER_PEOPLE_S sequence to populate the value of the primary key column.
I am calling the below code in the create method of the EO to populate the value of the primary key column.
setPersPersonid(getOADBTransaction().getSequenceValue("PER_PEOPLE_S"));
oracle.apps.fnd.framework.OAException: oracle.jbo.DMLException: JBO-26041: Failed to post data to database during "Insert": SQL Statement "INSERT INTO XXUTS_PERSON_T(PERS_PERSONID,PERS_FIRSTNAME,PERS_LASTNAME,CREATION_DATE,CREATED_BY,LAST_UPDATED_BY,LAST_UPDATE_DATE,LAST_UPDATE_LOGIN) VALUES (?,?,?,?,?,?,?,?)".
at oracle.apps.fnd.framework.OAException.wrapperInvocationTargetException(Unknown Source)
at oracle.apps.fnd.framework.server.OAUtility.invokeMethod(Unknown Source)
at oracle.apps.fnd.framework.server.OAUtility.invokeMethod(Unknown Source)
at oracle.apps.fnd.framework.server.OAApplicationModuleImpl.invokeMethod(Unknown Source)
at xxuts.oracle.apps.csc.person.webui.XXCreatePersonCO.processFormRequest(XXCreatePersonCO.java:67)
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanContainerHelper.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAPageLayoutHelper.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.beans.layout.OAPageLayoutBean.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.processFormRequestChildren(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.processFormRequestChildren(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanContainerHelper.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.beans.form.OAFormBean.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.processFormRequestChildren(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.processFormRequestChildren(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanContainerHelper.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.beans.OABodyBean.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAPageBean.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(Unknown Source)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(Unknown Source)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(Unknown Source)
at OA.jspService(_OA.java:72)
at com.orionserver.http.OrionHttpJspPage.service(OrionHttpJspPage.java:59)
at oracle.jsp.runtimev2.JspPageTable.service(JspPageTable.java:462)
at oracle.jsp.runtimev2.JspServlet.internalService(JspServlet.java:597)
at oracle.jsp.runtimev2.JspServlet.service(JspServlet.java:521)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:713)
at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:370)
at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:871)
at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:453)
at com.evermind.server.http.HttpRequestHandler.serveOneRequest(HttpRequestHandler.java:221)
at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:122)
at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:111)
at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
at oracle.oc4j.network.ServerSocketAcceptHandler.procClientSocket(ServerSocketAcceptHandler.java:239)
at oracle.oc4j.network.ServerSocketAcceptHandler.access$700(ServerSocketAcceptHandler.java:34)
at oracle.oc4j.network.ServerSocketAcceptHandler$AcceptHandlerHorse.run(ServerSocketAcceptHandler.java:880)
at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
at java.lang.Thread.run(Thread.java:595)
## Detail 0 ##
java.sql.SQLException: ORA-00911: invalid character
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:138)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:316)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:282)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:639)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:185)
at oracle.jdbc.driver.T4CPreparedStatement.execute_for_rows(T4CPreparedStatement.java:633)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1161)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3001)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3074)
at oracle.jbo.server.OracleSQLBuilderImpl.doEntityDML(OracleSQLBuilderImpl.java:427)
at oracle.jbo.server.EntityImpl.doDML(EntityImpl.java:5740)
at oracle.jbo.server.EntityImpl.postChanges(EntityImpl.java:4539)
at oracle.apps.fnd.framework.server.OAEntityImpl.postChanges(Unknown Source)
at oracle.jbo.server.DBTransactionImpl.doPostTransactionListeners(DBTransactionImpl.java:2996)
at oracle.jbo.server.DBTransactionImpl.postChanges(DBTransactionImpl.java:2807)
at oracle.jbo.server.DBTransactionImpl.commitInternal(DBTransactionImpl.java:1971)
at oracle.jbo.server.DBTransactionImpl.commit(DBTransactionImpl.java:2173)
at oracle.apps.fnd.framework.server.OADBTransactionImpl.commit(Unknown Source)
at xxuts.oracle.apps.csc.person.server.XXPersonMainAMImpl.savePersonToDatabase(XXPersonMainAMImpl.java:39)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at oracle.apps.fnd.framework.server.OAUtility.invokeMethod(Unknown Source)
at oracle.apps.fnd.framework.server.OAUtility.invokeMethod(Unknown Source)
at oracle.apps.fnd.framework.server.OAApplicationModuleImpl.invokeMethod(Unknown Source)
at xxuts.oracle.apps.csc.person.webui.XXCreatePersonCO.processFormRequest(XXCreatePersonCO.java:67)
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanContainerHelper.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAPageLayoutHelper.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.beans.layout.OAPageLayoutBean.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.processFormRequestChildren(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.processFormRequestChildren(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanContainerHelper.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.beans.form.OAFormBean.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.processFormRequestChildren(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.processFormRequestChildren(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAWebBeanContainerHelper.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.beans.OABodyBean.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAPageBean.processFormRequest(Unknown Source)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(Unknown Source)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(Unknown Source)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(Unknown Source)
at OA.jspService(_OA.java:72)
at com.orionserver.http.OrionHttpJspPage.service(OrionHttpJspPage.java:59)
at oracle.jsp.runtimev2.JspPageTable.service(JspPageTable.java:462)
at oracle.jsp.runtimev2.JspServlet.internalService(JspServlet.java:597)
at oracle.jsp.runtimev2.JspServlet.service(JspServlet.java:521)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:713)
at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:370)
at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:871)
at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:453)
at com.evermind.server.http.HttpRequestHandler.serveOneRequest(HttpRequestHandler.java:221)
at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:122)
at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:111)
at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
at oracle.oc4j.network.ServerSocketAcceptHandler.procClientSocket(ServerSocketAcceptHandler.java:239)
at oracle.oc4j.network.ServerSocketAcceptHandler.access$700(ServerSocketAcceptHandler.java:34)
at oracle.oc4j.network.ServerSocketAcceptHandler$AcceptHandlerHorse.run(ServerSocketAcceptHandler.java:880)
at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
at java.lang.Thread.run(Thread.java:595)
Thanks,
Sai Sankilisetty
Dear Kumar,
I have checked the datatypes of the table columns against the datatypes of the page items, and they are the same.
I created the region using the wizard, based on the VO.
My custom table has only 3 columns plus the history (WHO) columns. Of the 3 columns, PersPersonid is the primary key, and I assign it a sequence value in the create() method of the EO, as below:
public void create(AttributeList attributeList) {
    super.create(attributeList);
    // PER_PEOPLE_S is the sequence
    setPersPersonid(getOADBTransaction().getSequenceValue("PER_PEOPLE_S"));
}
Thanks,
Sai
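For what it's worth, ORA-00911 ("invalid character") means the SQL text sent to the server contains a character Oracle does not allow in a statement. When the error surfaces from a JDBC executeUpdate() call, as in the stack trace above, a very common culprit is a trailing semicolon: unlike SQL*Plus, the JDBC driver sends the statement verbatim, and the server rejects the ";". Below is a minimal, hypothetical helper (not part of OA Framework) that illustrates the fix when SQL text is built as a plain string:

```java
public class SqlSanitizer {
    // ORA-00911 is often caused by a trailing ';' in SQL text passed
    // through JDBC. The driver does not strip statement terminators the
    // way SQL*Plus does, so the server sees the ';' as an invalid character.
    static String stripTrailingSemicolon(String sql) {
        String trimmed = sql.trim();
        if (trimmed.endsWith(";")) {
            return trimmed.substring(0, trimmed.length() - 1);
        }
        return trimmed;
    }

    public static void main(String[] args) {
        // Hypothetical statement mirroring the kind of DML posted by the EO
        String bad = "UPDATE per_people SET full_name = ? WHERE pers_personid = ?;";
        System.out.println(stripTrailingSemicolon(bad));
    }
}
```

If the SQL is generated by the framework rather than hand-built, the character usually sneaks in through a custom WHERE clause or an expert-mode query, so those are the first places to check.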