Performance issue.... ( varchar2 - clob ).
Hi all,
I have a table ( data ) with the following configuration :
id number(10),
value varchar2(500)
I need to convert the value field from type varchar2 to clob.
To do that, I did the following:
SQL>create table temp as select * from data;
Table created.
SQL>truncate table data;
SQL>alter table data drop ( value );
SQL>alter table data add ( value clob );
SQL>alter table data move LOB(value) store as ( tablespace LOB disable storage in row nocache nologging );
SQL>alter table data nologging;
SQL>insert into data select * from temp;
The first step took 1 min. (I have 2,700,000 records),
but the last step takes ages...
Can anyone give me a clue why it is so slow? (Maybe the "on the fly" conversion from varchar2 to clob?)
Thanks,
Roye Avidor
the /*+ APPEND */ hint doesn't make it faster.
nice try :)
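One alternative worth testing (just a sketch, not benchmarked against your data; the new column name value_clob is made up and the LOB options simply mirror the ones above): instead of truncating and reloading, keep the rows in place, add the CLOB column next to the existing one, populate it with a single UPDATE (VARCHAR2 is implicitly converted to CLOB on assignment), then drop and rename.
SQL>alter table data add ( value_clob clob );
SQL>alter table data move LOB(value_clob) store as ( tablespace LOB disable storage in row nocache nologging );
SQL>update data set value_clob = value;
SQL>commit;
SQL>alter table data drop column value;
SQL>alter table data rename column value_clob to value;
The UPDATE still touches every row once, so it generates undo and redo for the LOB segment, but it avoids re-inserting 2,700,000 rows through the conventional path.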
Similar Messages
-
Performance issue with tables receiving frequent inserts
Hi all,
I have a table named raw_trap_store with columns trap_id (number, PK), Source_IP (varchar2), OID (varchar2), Message (CLOB) and received_time (date).
This table is partitioned into 24 partitions, the partitioning column being received_time (each hour's data is stored in its own partition).
This table receives 40-50 inserts/sec on average. The overall number of records for a day is around 2.8-3 million. Data is retained for 2 days.
No updates will be happening on this table.
Performance issue:
Need a report which involves selection of records from this table based on certain values of Source IP (filtering condition on source_ip column).
Need a report which involves selection of records from this table based on certain values of OID (filtering condition on OID column).
But if I create an index on the Source_IP and OID columns, inserts get slow. (I have created normal indexes, not partitioned indexes.)
Please help me to address the above issue.
Given the nature of your report (based on Source_IP and OID) and the nature of your table partitioning (range partitioned by received_time), you have already made a good decision to create these particular indexes as normal (b-tree, global) indexes and not locally partitioned ones. Had you partitioned them locally, your reports would not eliminate partitions (because they do not include the partition key in their where clause), and hence your index range scans would scan all 24 partitions, generating a lot of logical I/O.
That said, remember that generally we insert once and select many times; you have to balance that. If you are sure that it is the creation of your two indexes that has decreased the insert performance, you may set them to an unusable state before the insert and rebuild them afterwards. But this is good advice only if the volume of data to be inserted is much bigger than the existing volume of data before the insert.
And if you are not deleting from the table and the table does not contain triggers or integrity constraints (like FK constraints), then you can opt for a direct-path insert using the hint /*+ append */.
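Putting both suggestions together, a minimal sketch (the staging table name raw_trap_staging and the index names are made up for illustration; the columns are the ones you listed):
ALTER INDEX raw_trap_store_srcip_idx UNUSABLE;
ALTER INDEX raw_trap_store_oid_idx UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;
-- direct-path load: no deletes, triggers or FK constraints on the target
INSERT /*+ APPEND */ INTO raw_trap_store (trap_id, source_ip, oid, message, received_time)
SELECT trap_id, source_ip, oid, message, received_time FROM raw_trap_staging;
COMMIT;
ALTER INDEX raw_trap_store_srcip_idx REBUILD NOLOGGING;
ALTER INDEX raw_trap_store_oid_idx REBUILD NOLOGGING;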
Best regards
Mohamed Houri
-
Performance issues with dynamic action (PL/SQL)
Hi!
I'm having performance issues with a dynamic action that is triggered on a button click.
I have 5 drop down lists to select columns which the users want to filter, 5 drop down lists to select an operation and 5 boxes to input values.
After that, there is a filter button that just submits the page based on the selected filters.
This part works fine, the data is filtered almost instantaneously.
After this, I have 3 column selectors and 3 boxes where users put the values they wish to update the filtered rows to.
There is an update button that calls the dynamic action (procedure that is written below).
It should be straightforward; the only performance concern could be the decode section, because I need to cover the cases where the user wants to set a value to null ('@') and where he wants to update fewer than 3 columns (he leaves the value as '').
Hence P99_X_UC1 || ' = decode(' || P99_X_UV1 ||','''','|| P99_X_UC1 ||',''@'',null,'|| P99_X_UV1 ||')
However when I finally click the update button, my browser freezes and nothing happens on the table.
Can anyone help me solve this and improve the speed of the update?
Regards,
Ivan
P.S. The code for the procedure is below:
create or replace
PROCEDURE DWP.PROC_UPD
(P99_X_UC1 in VARCHAR2,
P99_X_UV1 in VARCHAR2,
P99_X_UC2 in VARCHAR2,
P99_X_UV2 in VARCHAR2,
P99_X_UC3 in VARCHAR2,
P99_X_UV3 in VARCHAR2,
P99_X_COL in VARCHAR2,
P99_X_O in VARCHAR2,
P99_X_V in VARCHAR2,
P99_X_COL2 in VARCHAR2,
P99_X_O2 in VARCHAR2,
P99_X_V2 in VARCHAR2,
P99_X_COL3 in VARCHAR2,
P99_X_O3 in VARCHAR2,
P99_X_V3 in VARCHAR2,
P99_X_COL4 in VARCHAR2,
P99_X_O4 in VARCHAR2,
P99_X_V4 in VARCHAR2,
P99_X_COL5 in VARCHAR2,
P99_X_O5 in VARCHAR2,
P99_X_V5 in VARCHAR2,
P99_X_CD in VARCHAR2,
P99_X_VD in VARCHAR2
) IS
l_sql_stmt varchar2(32600);
p_table_name varchar2(30) := 'DWP.IZV_SLOG_DET';
BEGIN
l_sql_stmt := 'update ' || p_table_name || ' set '
|| P99_X_UC1 || ' = decode(' || P99_X_UV1 ||','''','|| P99_X_UC1 ||',''@'',null,'|| P99_X_UV1 ||'),'
|| P99_X_UC2 || ' = decode(' || P99_X_UV2 ||','''','|| P99_X_UC2 ||',''@'',null,'|| P99_X_UV2 ||'),'
|| P99_X_UC3 || ' = decode(' || P99_X_UV3 ||','''','|| P99_X_UC3 ||',''@'',null,'|| P99_X_UV3 ||') where '||
P99_X_COL ||' '|| P99_X_O ||' ' || P99_X_V || ' and ' ||
P99_X_COL2 ||' '|| P99_X_O2 ||' ' || P99_X_V2 || ' and ' ||
P99_X_COL3 ||' '|| P99_X_O3 ||' ' || P99_X_V3 || ' and ' ||
P99_X_COL4 ||' '|| P99_X_O4 ||' ' || P99_X_V4 || ' and ' ||
P99_X_COL5 ||' '|| P99_X_O5 ||' ' || P99_X_V5 || ' and ' ||
P99_X_CD || ' = ' || P99_X_VD ;
--dbms_output.put_line(l_sql_stmt);
EXECUTE IMMEDIATE l_sql_stmt;
END;
Hi Ivan,
I do not think that the decode is performance relevant. Maybe the update hangs because some other transaction has uncommitted changes to one of the affected rows or the where clause is not selective enough and needs to update a huge amount of records.
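A quick way to test the locking theory is to look for blocked sessions while the update appears to hang; a minimal sketch using standard V$SESSION columns (10g and later):
-- run this from a second session while the page is frozen
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM v$session
WHERE blocking_session IS NOT NULL;
If a row shows up for your session, the statement is waiting on a lock, not on the decode.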
Besides that - and I might be wrong, because I only know part of your app - it looks like you have a huge SQL injection vulnerability here. Maybe you should consider re-writing your logic in static SQL. If that is not possible, you should make sure that the user input only contains allowed values, e.g. by white-listing P99_X_On (i.e. make sure they only contain known values like '=', '<', ...), and by using dbms_assert.enquote_name/enquote_literal on the other P99_X_nnn parameters.
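As a rough illustration of the white-listing idea (the sample values are made up; the point is the operator check plus the DBMS_ASSERT calls):
DECLARE
  l_col   VARCHAR2(30)  := 'AMOUNT';   -- e.g. value of P99_X_COL
  l_op    VARCHAR2(4)   := '>=';       -- e.g. value of P99_X_O
  l_val   VARCHAR2(100) := '100';      -- e.g. value of P99_X_V
  l_where VARCHAR2(4000);
BEGIN
  -- operator must come from a fixed list
  IF l_op NOT IN ('=', '<', '>', '<=', '>=', '!=') THEN
    RAISE_APPLICATION_ERROR(-20001, 'Invalid operator');
  END IF;
  -- column name validated, literal safely quoted
  l_where := ' where ' || dbms_assert.simple_sql_name(l_col)
          || ' ' || l_op || ' '
          || dbms_assert.enquote_literal(l_val);
  dbms_output.put_line(l_where);
END;
/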
Regards,
Christian -
Performance issues with pipelined table functions
I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from [url http://www.oracle-developer.net/display.php?id=429]improving performance with pipelined table functions.
Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
Many thanks in advance.
CREATE OR REPLACE PACKAGE pipeline_example
IS
TYPE resultset_typ IS REF CURSOR;
TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
TYPE table_typ IS TABLE OF row_typ;
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ;
c_default_limit CONSTANT PLS_INTEGER := 100;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
END pipeline_example;
CREATE OR REPLACE PACKAGE BODY pipeline_example
IS
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ
IS
o_resultset resultset_typ;
BEGIN
OPEN o_resultset FOR
SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB;
RETURN o_resultset;
END base_query;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
IS
aa_source_data table_typ;-- := table_typ ();
BEGIN
LOOP
FETCH p_source_data
BULK COLLECT INTO aa_source_data
LIMIT p_limit_size;
EXIT WHEN aa_source_data.COUNT = 0;
/* Process the batch of (p_limit_size) records... */
FOR i IN 1 .. aa_source_data.COUNT
LOOP
PIPE ROW (aa_source_data (i));
END LOOP;
END LOOP;
CLOSE p_source_data;
RETURN;
END processor;
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT /*+ PARALLEL(t, 5) */ colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM TABLE (processor (base_query (argA, argB),100)) t
GROUP BY colC
ORDER BY colC;
END with_pipeline;
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM (SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB)
GROUP BY colC
ORDER BY colC;
END no_pipeline;
END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;
Earthlink wrote:
Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
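For a quick start, a minimal sketch of tracing your own session (DBMS_SESSION works from 10g on; the V$DIAG_INFO query assumes 11g - on 10g look at user_dump_dest instead):
EXEC dbms_session.session_trace_enable(waits => TRUE, binds => FALSE);
-- run with_pipeline and no_pipeline here, fetching all rows from the ref cursors
EXEC dbms_session.session_trace_disable;
-- locate the trace file, then format it with tkprof
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';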
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
Performance Issues with large XML (1-1.5MB) files
Hi,
I'm using XML Schema-based object-relational storage for my XML documents, which are typically 1-1.5 MB in size, and I am having serious performance issues with XPath queries.
When I do an XPath query against an element of SQL type varchar2, I get good performance. But when I do a similar XPath query against an element of SQL type collection (varray of varchar2), I get very ordinary performance.
I have also created indexes on extract() and analyzed my XMLType table and indexes, but I got no performance gain. Also, I have tried all sorts of storage options available for collections, i.e. varrays, nested tables, IOTs, LOBs, inline, etc., and all of these gave me the same bad performance.
I even tried creating XMLType views based on XPath queries but the performance didn't improve much.
I guess I'm running out of options and patience as well.;)
I would appreciate any ideas/suggestions, please help.....
Thanks;
Ramakrishna Chinta
Are you having similar symptoms as I am? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0
-
Performance issue in procedure
Hi All
I have a performance issue with the procedure below; it is taking 10-15 hrs. The custom table has 2 lakh (200,000) records.
PROCEDURE update_summary_dollar_amounts( p_errbuf OUT VARCHAR2
,p_retcode OUT NUMBER) IS
v_customer_id NUMBER := NULL;
pymt_count NUMBER := 0;
rec_count NUMBER := 0;
v_number_late NUMBER;
v_number_on_time NUMBER;
v_days_late NUMBER;
v_avg_elapsed NUMBER;
v_avg_elapsed_US NUMBER;
v_percent_prompt NUMBER;
v_percent_late NUMBER;
v_number_open NUMBER;
v_last_payment_amount NUMBER;
v_last_payment_date DATE;
v_prev_payment_amount NUMBER;
v_prev_payment_date DATE;
v_last_sale_amount NUMBER;
v_last_sale_date DATE;
v_mtd_sales NUMBER;
v_ytd_sales NUMBER;
v_prev_year_sales NUMBER;
v_prev_receipt_num VARCHAR2(30);
v_last_sale VARCHAR2(50);
c_current_year VARCHAR2(4);
c_previous_year VARCHAR2(4);
c_current_month VARCHAR2(8);
/* ====================================================================== */
/* CURSOR Customer Cursor (Main Customer) LOOP */
/* ====================================================================== */
CURSOR customer_cursor IS
SELECT cst.customer_id customer_id
,cst.customer_number customer_number
,cst.org_id org_id
FROM zz_ar_customer_summary_all cst
ORDER by cst.customer_id;
/* ====================================================================== */
/* CURSOR Payments Cursor LOOP */
/* Note: This logic is taken from the Customer Credit Snapshot */
/* Report - ARXCCS */
/* ====================================================================== */
CURSOR payments_cursor IS
SELECT cr.receipt_number receipt_num
,NVL(cr.amount,0) amount
,crh.gl_date gl_date
FROM ar_lookups
,ar_cash_receipts_all cr
,ar_cash_receipt_history_all crh
,ar_receivable_applications_all ra
,ra_customer_trx_all ct
WHERE NVL(cr.type,'CASH') = ar_lookups.lookup_code
AND ar_lookups.lookup_type = 'PAYMENT_CATEGORY_TYPE'
AND cr.pay_from_customer = v_customer_id
AND cr.cash_receipt_id = ra.cash_receipt_id
AND cr.cash_receipt_id = crh.cash_receipt_id
AND crh.first_posted_record_flag = 'Y'
AND ra.applied_customer_trx_id = ct.customer_trx_id(+)
ORDER BY cr.creation_date DESC
,cr.cash_receipt_id DESC
,ra.creation_date DESC;
customer_record customer_cursor%rowtype;
payments_record payments_cursor%rowtype;
BEGIN
p_errbuf := NULL;
p_retcode := 0;
c_current_year := TO_CHAR(SYSDATE,'YYYY');
c_current_month := TO_CHAR(SYSDATE,'YYYYMM');
c_previous_year := TO_CHAR(TO_NUMBER(c_current_year) - 1);
FOR customer_record IN customer_cursor LOOP
/* Get Days Late and Average Elapsed Days */
/* Note: This logic is taken from the Customer Credit Snapshot */
/* Report - ARXCCS */
BEGIN
v_customer_id := customer_record.customer_id;
BEGIN
SELECT DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.trx_date) / COUNT(cr.deposit_date))) avgdays
,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.due_date) / COUNT(cr.deposit_date))) avgdayslate
,NVL(SUM(DECODE(SIGN(cr.deposit_date - ps.due_date),1, 1, 0)), 0) newlate
,NVL(SUM( DECODE(SIGN(cr.deposit_date - ps.due_date),1, 0, 1)), 0) newontime
INTO v_avg_elapsed
,v_days_late
,v_number_late
,v_number_on_time
FROM ar_receivable_applications_all ra
,ar_cash_receipts_all cr
,ar_payment_schedules_all ps
WHERE ra.cash_receipt_id = cr.cash_receipt_id
AND ra.applied_payment_schedule_id = ps.payment_schedule_id
AND ps.customer_id = v_customer_id
AND ra.apply_date BETWEEN ADD_MONTHS(SYSDATE, -12) AND SYSDATE
AND ra.status = 'APP'
AND ra.display = 'Y'
AND NVL(ps.receipt_confirmed_flag,'Y') = 'Y';
EXCEPTION
WHEN NO_DATA_FOUND THEN
v_days_late := NULL;
v_number_late := NULL;
v_avg_elapsed := NULL;
v_number_on_time := NULL;
END;
IF (v_number_on_time + v_number_late) > 0
THEN
v_percent_prompt := ROUND(v_number_on_time/(v_number_on_time + v_number_late),2) * 100;
v_percent_late := ROUND(v_number_late/(v_number_on_time + v_number_late),2) * 100;
ELSE
v_percent_prompt := 0;
v_percent_late := 0;
END IF;
/* C2# 49827 */
/* Get new average elapsed days for US use only */
v_avg_elapsed_us := NULL;
IF NVL(customer_record.org_id,-999) = 114
THEN
v_avg_elapsed_us := 0;
BEGIN
SELECT ROUND(SUM(NVL(ra.amount_applied,0) * (cr.deposit_date - ps.trx_date)) / DECODE(SUM(NVL(ra.amount_applied,0)),0,1,SUM(NVL(ra.amount_applied,0)))) avg_elapsed_us
INTO v_avg_elapsed_us
FROM ar_receivable_applications_all ra
,ar_cash_receipts_all cr
,ar_payment_schedules_all ps
WHERE ra.cash_receipt_id = cr.cash_receipt_id
AND ra.applied_payment_schedule_id = ps.payment_schedule_id
AND ps.customer_id = v_customer_id
AND ra.apply_date BETWEEN ADD_MONTHS(SYSDATE, -06) AND SYSDATE
AND ps.status = 'CL'
AND ra.status = 'APP'
AND ra.display = 'Y'
AND nvl(ps.receipt_confirmed_flag,'Y') = 'Y'
AND ra.amount_applied <> 0;
v_avg_elapsed_us := NVL(v_avg_elapsed_us,0);
EXCEPTION
WHEN NO_DATA_FOUND THEN
v_avg_elapsed_us := NULL;
END;
END IF;
END;
/* Get MTD, YTD, Prev Year Sales */
/* Note: This logic is taken from the Customer Credit Snapshot */
/* Report - ARXCCS */
BEGIN
SELECT NVL(SUM(DECODE(TO_CHAR(ps.trx_date,'YYYYMM'),c_current_month,amount_due_original,0)),0) mtd_sales
,NVL(SUM(DECODE(TO_CHAR(ps.trx_date,'YYYY'),c_current_year,amount_due_original,0)),0) ytd_sales
,NVL(SUM(DECODE(TO_CHAR(ps.trx_date,'YYYY'),c_previous_year,amount_due_original,0)),0) prev_sales
,SUM(DECODE(ps.status,'OP',(DECODE(SIGN(amount_due_original),1,1,0)),0)) number_open
INTO v_mtd_sales
,v_ytd_sales
,v_prev_year_sales
,v_number_open
FROM ar_payment_schedules_all ps
WHERE ps.customer_id = v_customer_id
AND ps.class != 'PMT';
EXCEPTION
WHEN NO_DATA_FOUND THEN
v_mtd_sales := NULL;
v_ytd_sales := NULL;
v_prev_year_sales := NULL;
END;
/* Get Last and Previous Payments */
pymt_count := 0;
v_last_payment_date := NULL;
v_prev_payment_date := NULL;
v_last_payment_amount := NULL;
v_prev_payment_amount := NULL;
v_prev_receipt_num := NULL;
FOR payments_record IN payments_cursor LOOP
BEGIN
IF payments_record.receipt_num = v_prev_receipt_num
THEN
NULL;
ELSIF pymt_count = 0
THEN
v_last_payment_date := payments_record.gl_date;
v_last_payment_amount := payments_record.amount;
pymt_count := pymt_count +1;
v_prev_receipt_num := payments_record.receipt_num;
ELSIF pymt_count = 1
THEN
v_prev_payment_date := payments_record.gl_date;
v_prev_payment_amount := payments_record.amount;
EXIT;
ELSE
EXIT;
END IF;
END;
END LOOP;
/* Get Last Sale Date and Amount */
/* Note: This logic is taken from the Customer Credit Snapshot */
/* Report - ARXCCS */
BEGIN
SELECT MAX(TO_CHAR(ct.trx_date,'YYYYDDD')||ps.amount_due_original)
INTO v_last_sale
FROM ra_cust_trx_types_all ctt
,ar_payment_schedules_all ps
,ra_customer_trx_all ct
WHERE ps.customer_trx_id = ct.customer_trx_id
AND ct.cust_trx_type_id = ctt.cust_trx_type_id
AND ct.bill_to_customer_id = v_customer_id
AND ps.class || '' = 'INV'
ORDER BY ct.trx_date DESC
,ct.customer_trx_id DESC;
EXCEPTION
WHEN NO_DATA_FOUND
THEN
v_last_sale := NULL;
END;
IF v_last_sale IS NOT NULL
THEN
v_last_sale_date := TO_DATE(SUBSTR(v_last_sale,1,7),'YYYYDDD');
v_last_sale_amount := SUBSTR(v_last_sale,8,15);
ELSE
v_last_sale_date := NULL;
v_last_sale_amount := NULL;
END IF;
/* Update Values into ZZ_AR_CUSTOMER_SUMMARY_ALL */
BEGIN
UPDATE zz_ar_customer_summary_all
SET sales_last_year = v_prev_year_sales
,sales_ytd = v_ytd_sales
,sales_mtd = v_mtd_sales
,last_sale_date = v_last_sale_date
,last_sale_amount = v_last_sale_amount
,last_payment_date = v_last_payment_date
,last_payment_amount = v_last_payment_amount
,previous_payment_date = v_prev_payment_date
,previous_payment_amount = v_prev_payment_amount
,prompt = v_percent_prompt
,late = v_percent_late
,avg_elapsed_days = v_avg_elapsed
,avg_elapsed_days_us = v_avg_elapsed_us -- C2# 49827
,days_late = v_days_late
,number_open = v_number_open
WHERE customer_id = customer_record.customer_id;
EXCEPTION
WHEN PROGRAM_ERROR THEN NULL;
WHEN DUP_VAL_ON_INDEX THEN NULL;
WHEN STORAGE_ERROR THEN NULL;
WHEN OTHERS THEN NULL;
END;
rec_count := rec_count + 1;
IF rec_count = 10000
THEN
COMMIT;
rec_count := 0;
fnd_file.put_line(fnd_file.output,'Commit at customer_id = ' || TO_CHAR(customer_record.customer_id) || ' ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
fnd_file.new_line(fnd_file.output,1);
END IF;
END LOOP;
COMMIT;
EXCEPTION
WHEN others THEN
ROLLBACK;
p_retcode := 2;
p_errbuf := SQLERRM;
END update_summary_dollar_amounts;
Thanks,
AnuBased on my initial assessment of the code, it looks like you are utilizing the "slow by slow" method. It is often termed "slow by slow" because it is one of the most INefficient ways of doing data processing. The "slow by slow" method uses CURSOR FOR LOOPs to loop through entire record sets and process them one at a time. In your case it looks like you are using NESTED FOR LOOPs which could exacerbate the problem.
I recommend you re-think your approach, try to do everything in a single SQL statement (or a few) if possible, and avoid the procedural logic.
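For example, the MTD/YTD/previous-year/open-count part of the loop could be collapsed into one set-based statement along these lines (a sketch only, reusing the payment-schedules query from your procedure; the other columns would be added to the USING subquery in the same way):
MERGE INTO zz_ar_customer_summary_all cst
USING (SELECT ps.customer_id
             ,NVL(SUM(DECODE(TO_CHAR(ps.trx_date,'YYYYMM'),TO_CHAR(SYSDATE,'YYYYMM'),amount_due_original,0)),0) mtd_sales
             ,NVL(SUM(DECODE(TO_CHAR(ps.trx_date,'YYYY'),TO_CHAR(SYSDATE,'YYYY'),amount_due_original,0)),0) ytd_sales
             ,NVL(SUM(DECODE(TO_CHAR(ps.trx_date,'YYYY'),TO_CHAR(ADD_MONTHS(SYSDATE,-12),'YYYY'),amount_due_original,0)),0) prev_year_sales
             ,SUM(DECODE(ps.status,'OP',DECODE(SIGN(amount_due_original),1,1,0),0)) number_open
         FROM ar_payment_schedules_all ps
        WHERE ps.class != 'PMT'
        GROUP BY ps.customer_id) agg
ON (cst.customer_id = agg.customer_id)
WHEN MATCHED THEN UPDATE
  SET cst.sales_mtd       = agg.mtd_sales
     ,cst.sales_ytd       = agg.ytd_sales
     ,cst.sales_last_year = agg.prev_year_sales
     ,cst.number_open     = agg.number_open;
One such statement per logical block (or one MERGE with several subqueries) updates all 200,000 customers in a single pass instead of row by row.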
If you can post your business requirements, and sample data we may be able to help you achieve your goal.
HTH! -
Expression Filter Performance Issues / Misuse?
I'm currently evaluating the Expression Filter functionality for a new requirement. The basic idea of the requirement is that I have a logging table that I want to get "interesting" records from. The way I want to set it up is to exclude known, "uninteresting", records or record patterns.
As far as implementation goes, I was considering a table of expressions containing Expression Filter entries for the "uninteresting" records, and checking this against my logging table using the EVALUATE operator and looking for a 0 result.
In my testing I wanted to return results where the EVALUATE operator equals 1, to see if my expressions are correct. In doing this I experienced significant performance issues. For example, my test filter matches 72 rows out of 61657 possible entries, yet it took Oracle almost 10 minutes to evaluate this expression. I tried it with and without an Expression Filter index, with no noticeable change in execution time. The test case and query are provided below.
Is this the right use case for Expression Filter? Am I misunderstanding how it works? What am I doing wrong?
Test Case:
Version
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
Objects & Query:
CREATE TABLE expressions( white_list VARCHAR2(200));
CREATE TABLE data
AS
SELECT OBJECT_ID
, OWNER
, OBJECT_NAME
, CREATED
, LAST_DDL_TIME
FROM DBA_OBJECTS;
BEGIN
-- Create the empty Attribute Set --
DBMS_EXPFIL.CREATE_ATTRIBUTE_SET('exptype');
-- Define elementary attributes of EXF$TABLE_ALIAS type --
DBMS_EXPFIL.ADD_ELEMENTARY_ATTRIBUTE('exptype','data',
EXF$TABLE_ALIAS('test_user.data'));
END;
BEGIN
DBMS_EXPFIL.ASSIGN_ATTRIBUTE_SET('exptype','expressions','white_list');
END;
INSERT INTO expressions(white_list) VALUES('data.owner=''TEST_USER'' AND data.created BETWEEN TO_DATE(''08/03/2010'',''MM/DD/YYYY'') AND TO_DATE(''08/05/2010'',''MM/DD/YYYY'')');
exec dbms_stats.gather_table_stats(USER,'EXPRESSIONS');
exec dbms_stats.gather_table_stats(USER,'DATA');
CREATE INDEX expIndex ON Expressions (white_list) INDEXTYPE IS EXFSYS.EXPFILTER
PARAMETERS ('STOREATTRS (data.owner,data.object_name,data.created)
INDEXATTRS (data.owner,data.object_name,data.created)');
SELECT /*+ gather_plan_statistics */ data.* FROM data, expressions WHERE EVALUATE(white_list,exptype.getVarchar(data.rowid)) = 1;
DROP TABLE expressions PURGE;
BEGIN
DBMS_EXPFIL.DROP_ATTRIBUTE_SET(attr_set => 'exptype');
END;
DROP TABLE data PURGE;
Hi,
If you are already using the queries and are stable enough then rather than modifying query you can try other options to improve the query performance like data compression of the cube, creation of aggregates, placing cube on BIA or creating cache for the query.
Best Regards,
Prashant Vankudre. -
SQL Performance issue: Using user defined function with group by
Hi Everyone,
I'm new here and could really use some help with a weird performance issue. I hope this is the right topic for SQL performance issues.
OK, I created a function for converting a date from the GMT timezone to a specified timezone.
CREATE OR REPLACE FUNCTION I3S_REP_1.fnc_user_rep_date_to_local (date_in IN date, tz_name_in IN VARCHAR2) RETURN date
IS
tz_name VARCHAR2(100);
date_out date;
BEGIN
SELECT
to_date(to_char(cast(from_tz(cast( date_in AS TIMESTAMP),'GMT')AT
TIME ZONE (tz_name_in) AS DATE),'dd-mm-yyyy hh24:mi:ss'),'dd-mm-yyyy hh24:mi:ss')
INTO date_out
FROM dual;
RETURN date_out;
END fnc_user_rep_date_to_local;
The following statement is just an example; the real statement is much more complex. I select some date values from a table and aggregate a little.
select
stp_end_stamp,
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
stp_end_stamp;
This statement selects ~70,000 rows and takes ~70 ms.
If I use the function it selects the same number of rows ;-) and takes ~4 sec ...
select
fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin'),
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin');
I understand that the DB has to execute the function for each row.
But if I execute the following statement, it takes only ~90ms ...
select
fnc_user_rep_date_to_gmt(stp_end_stamp,'Europe/Berlin','ny21654'),
noi
from (
select
stp_end_stamp,
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
stp_end_stamp
);
The execution plan for all three statements is EXACTLY the same!!!
Usually I would just use the third statement and the world would be in order. BUT I'm working on a BI project with a tool called Business Objects which generates the SQL, so my hands are tied and I can't make the tool generate the SQL as a subselect.
My questions are:
Why is the second statement sooo much slower than the third?
and
How can I force the optimizer to do whatever it is doing to make the third statement so fast?
I would really appreciate some help on this really weird issue.
Thanks in advance,
Andi
Hi,
The execution plan for all three statements is EXACTLY the same!!!
Not exactly. The plans are the same, true, but they use a slightly different approach to call the function. See:
drop table t cascade constraints purge;
create table t as select mod(rownum,10) id, cast('x' as char(500)) pad from dual connect by level <= 10000;
exec dbms_stats.gather_table_stats(user, 't');
create or replace function test_fnc(p_int number) return number is
begin
return trunc(p_int);
end;
explain plan for select id from t group by id;
select * from table(dbms_xplan.display(null,null,'advanced'));
explain plan for select test_fnc(id) from t group by test_fnc(id);
select * from table(dbms_xplan.display(null,null,'advanced'));
explain plan for select test_fnc(id) from (select id from t group by id);
select * from table(dbms_xplan.display(null,null,'advanced'));
Output:
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
2 - SEL$1 / T@SEL$1
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$1" "T"@"SEL$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "ID"[NUMBER,22]
2 - "ID"[NUMBER,22]
34 rows selected.
SQL>
Explained.
SQL>
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
2 - SEL$1 / T@SEL$1
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$1" "T"@"SEL$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "TEST_FNC"("ID")[22]
2 - "ID"[NUMBER,22]
34 rows selected.
SQL>
Explained.
SQL> select * from table(dbms_xplan.display(null,null,'advanced'));
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$F5BB74E1
2 - SEL$F5BB74E1 / T@SEL$2
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$F5BB74E1" "T"@"SEL$2")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$1")
MERGE(@"SEL$2")
OUTLINE_LEAF(@"SEL$F5BB74E1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "ID"[NUMBER,22]
2 - "ID"[NUMBER,22]
37 rows selected. -
Oracle 11g Migration performance issue
Hello,
There is a performance issue after migrating from Oracle 10g (10.2.0.5) to Oracle 11g (11.2.0.2).
A very simple statement hangs for more than a day; I later found that its query plan is very, very bad. An example of the query is given below:
INSERT INTO TABLE_XYZ
SELECT F1,F2,F3
FROM TABLE_AB, TABLE_BC
WHERE F1=F4;
Looking at the cost in the explain plan:
On 10g --> 62567
On 11g --> 9879652356776
The strange thing is:
Scenario 1: if I issue just the query as shown below, it displays rows immediately:
SELECT F1,F2,F3
FROM TABLE_AB, TABLE_BC
WHERE F1=F4;
Scenario 2: if I create a table as shown below, it works correctly.
CREATE TABLE TABLE_XYZ AS
SELECT F1,F2,F3
FROM TABLE_AB, TABLE_BC
WHERE F1=F4;
What could be the issue here with INSERT INTO <TAB> SELECT <COL> FROM <TAB1>?
Table:
CREATE TABLE AVN_WRK_F_RENEWAL_TRANS_T
(
"PKSRCSYSTEMID" NUMBER(4,0) NOT NULL ENABLE,
"PKCOMPANYCODE" VARCHAR2(8 CHAR) NOT NULL ENABLE,
"PKBRANCHCODE" VARCHAR2(8 CHAR) NOT NULL ENABLE,
"PKLINEOFBUSINESS" NUMBER(4,0) NOT NULL ENABLE,
"PKPRODUCINGOFFICELIST" VARCHAR2(2 CHAR) NOT NULL ENABLE,
"PKPRODUCINGOFFICE" VARCHAR2(8 CHAR) NOT NULL ENABLE,
"PKEXPIRYYR" NUMBER(4,0) NOT NULL ENABLE,
"PKEXPIRYMTH" NUMBER(2,0) NOT NULL ENABLE,
"CURRENTEXPIRYCOUNT" NUMBER,
"CURRENTRENEWEDCOUNT" NUMBER,
"PREVIOUSEXPIRYCOUNT" NUMBER,
"PREVIOUSRENEWEDCOUNT" NUMBER
SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT
TABLESPACE "XYZ" ;
Explain Plan (with INSERT statement and query):
INSERT STATEMENT, GOAL = ALL_ROWS Cost=9110025395866 Cardinality=78120 Bytes=11952360
LOAD TABLE CONVENTIONAL Object owner=ODS Object name=AVN_WRK_F_RENEWAL_TRANS
NESTED LOOPS OUTER Cost=9110025395866 Cardinality=78120 Bytes=11952360
TABLE ACCESS FULL Object owner=ODS Object name=AVN_WRK_F_RENEWAL_TRANS_1ST Cost=115 Cardinality=78120 Bytes=2499840
VIEW PUSHED PREDICATE Object owner=ODS Cost=116615788 Cardinality=1 Bytes=121
SORT GROUP BY Cost=116615788 Cardinality=3594 Bytes=406122
VIEW Object owner=SYS Object name=VW_DAG_1 Cost=116615787 Cardinality=20168 Bytes=2278984
SORT GROUP BY Cost=116615787 Cardinality=20168 Bytes=4073936
NESTED LOOPS OUTER Cost=116614896 Cardinality=20168 Bytes=4073936
VIEW Object owner=SYS Cost=5722 Cardinality=20168 Bytes=2157976
NESTED LOOPS Cost=5722 Cardinality=20168 Bytes=2097472
HASH JOIN Cost=924 Cardinality=1199 Bytes=100716
NESTED LOOPS
NESTED LOOPS Cost=181 Cardinality=1199 Bytes=80333
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=159 Cardinality=1199 Bytes=39567
INDEX RANGE SCAN Object owner=ODS Object name=IX_INWPOLDTLS_SYSCOMPANYBRANCH Cost=7 Cardinality=1199
INDEX UNIQUE SCAN Object owner=ODS Object name=PK_AVN_D_MASTERPOLICYDETAILS Cost=0 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=1 Cardinality=1 Bytes=34
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=288498 Bytes=4904466
VIEW PUSHED PREDICATE Object owner=ODS Cost=4 Cardinality=1 Bytes=20
FILTER
SORT AGGREGATE Cardinality=1 Bytes=21
TABLE ACCESS BY GLOBAL INDEX ROWID Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=4 Cardinality=1 Bytes=21
INDEX RANGE SCAN Object owner=ODS Object name=PK_AVN_F_TRANSACTIONS Cost=3 Cardinality=1
VIEW PUSHED PREDICATE Object owner=ODS Cost=5782 Cardinality=1 Bytes=95
SORT GROUP BY Cost=5782 Cardinality=2485 Bytes=216195
VIEW Object owner=SYS Object name=VW_DAG_0 Cost=5781 Cardinality=2485 Bytes=216195
SORT GROUP BY Cost=5781 Cardinality=2485 Bytes=278320
HASH JOIN Cost=5780 Cardinality=2485 Bytes=278320
VIEW Object owner=SYS Object name=VW_GBC_15 Cost=925 Cardinality=1199 Bytes=73139
SORT GROUP BY Cost=925 Cardinality=1199 Bytes=100716
HASH JOIN Cost=924 Cardinality=1199 Bytes=100716
NESTED LOOPS
NESTED LOOPS Cost=181 Cardinality=1199 Bytes=80333
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=159 Cardinality=1199 Bytes=39567
INDEX RANGE SCAN Object owner=ODS Object name=IX_INWPOLDTLS_SYSCOMPANYBRANCH Cost=7 Cardinality=1199
INDEX UNIQUE SCAN Object owner=ODS Object name=PK_AVN_D_MASTERPOLICYDETAILS Cost=0 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=1 Cardinality=1 Bytes=34
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=288498 Bytes=4904466
VIEW Object owner=SYS Object name=VW_GBF_16 Cost=4854 Cardinality=75507 Bytes=3850857
SORT GROUP BY Cost=4854 Cardinality=75507 Bytes=2340717
VIEW Object owner=ODS Cost=4207 Cardinality=75507 Bytes=2340717
SORT GROUP BY Cost=4207 Cardinality=75507 Bytes=1585647
PARTITION HASH ALL Cost=3713 Cardinality=75936 Bytes=1594656
TABLE ACCESS FULL Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=3713 Cardinality=75936 Bytes=1594656
Explain Plan (query only):
SELECT STATEMENT, GOAL = ALL_ROWS Cost=62783 Cardinality=89964 Bytes=17632944
HASH JOIN OUTER Cost=62783 Cardinality=89964 Bytes=17632944
TABLE ACCESS FULL Object owner=ODS Object name=AVN_WRK_F_RENEWAL_TRANS_1ST Cost=138 Cardinality=89964 Bytes=2878848
VIEW Object owner=ODS Cost=60556 Cardinality=227882 Bytes=37372648
HASH GROUP BY Cost=60556 Cardinality=227882 Bytes=26434312
VIEW Object owner=SYS Object name=VW_DAG_1 Cost=54600 Cardinality=227882 Bytes=26434312
HASH GROUP BY Cost=54600 Cardinality=227882 Bytes=36005356
HASH JOIN OUTER Cost=46664 Cardinality=227882 Bytes=36005356
VIEW Object owner=SYS Cost=18270 Cardinality=227882 Bytes=16635386
HASH JOIN Cost=18270 Cardinality=227882 Bytes=32587126
HASH JOIN Cost=12147 Cardinality=34667 Bytes=2912028
HASH JOIN Cost=10076 Cardinality=34667 Bytes=2322689
TABLE ACCESS FULL Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=137 Cardinality=34667 Bytes=1178678
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=9934 Cardinality=820724 Bytes=27083892
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=866377 Bytes=14728409
VIEW Object owner=ODS Cost=5195 Cardinality=227882 Bytes=13445038
HASH GROUP BY Cost=5195 Cardinality=227882 Bytes=4785522
PARTITION HASH ALL Cost=3717 Cardinality=227882 Bytes=4785522
TABLE ACCESS FULL Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=3717 Cardinality=227882 Bytes=4785522
VIEW Object owner=ODS Cost=26427 Cardinality=227882 Bytes=19369970
HASH GROUP BY Cost=26427 Cardinality=227882 Bytes=18686324
VIEW Object owner=SYS Object name=VW_DAG_0 Cost=26427 Cardinality=227882 Bytes=18686324
HASH GROUP BY Cost=26427 Cardinality=227882 Bytes=25294902
HASH JOIN Cost=20687 Cardinality=227882 Bytes=25294902
VIEW Object owner=SYS Object name=VW_GBC_15 Cost=12826 Cardinality=34667 Bytes=2080020
HASH GROUP BY Cost=12826 Cardinality=34667 Bytes=2912028
HASH JOIN Cost=12147 Cardinality=34667 Bytes=2912028
HASH JOIN Cost=10076 Cardinality=34667 Bytes=2322689
TABLE ACCESS FULL Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=137 Cardinality=34667 Bytes=1178678
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=9934 Cardinality=820724 Bytes=27083892
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=866377 Bytes=14728409
VIEW Object owner=SYS Object name=VW_GBF_16 Cost=7059 Cardinality=227882 Bytes=11621982
HASH GROUP BY Cost=7059 Cardinality=227882 Bytes=6836460
VIEW Object owner=ODS Cost=5195 Cardinality=227882 Bytes=6836460
HASH GROUP BY Cost=5195 Cardinality=227882 Bytes=4785522
PARTITION HASH ALL Cost=3717 Cardinality=227882 Bytes=4785522
TABLE ACCESS FULL Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=3717 Cardinality=227882 Bytes=4785522 -
Performance Issue - Index is not used when a zero padded string is queried
Hi All,
I have a table T1 which has many columns. One such column, say C1, is a varchar2(20). T1 has 10 million rows and there is an index called I1 on C1. Stats are current for both the table and the index. These are the scenarios:
Scenario 1
select * from T1 where C1 = '0013206263' --Uses index I1
187 ms
Scenario 2
select * from T1 where C1 = '8177341863' --Uses index I1
203 ms
Scenario 3
select * from T1 where C1 = '0000000945' --Uses Full Table Scan --Very Slow
45 seconds
When I force the SQL to use the index through a hint, it works fine:
Scenario 4
select /*+ INDEX (t1 i1) */ * from T1 where C1 = '0013206263' --Uses index I1
123 ms
Scenario 5
select /*+ INDEX (t1 i1) */ * from T1 where C1 = '8177341863' --Uses index I1
201 ms
Scenario 6
select /*+ INDEX (t1 i1) */ * from T1 where C1 = '0000000945' --Uses index I1
172 ms
Is there any reason for this performance issue? Why does the optimizer go for a full table scan in Scenario 3?
user539954 wrote:
Please see the replies below:
- How many distinct values for C1 out of that 10 million rows? I'm guessing that histograms were created for C1, correct?
=>7 million distinct c1 values. I have not gathered a histogram yet. Should I try that?
SQL> explain plan for select * from T1 where C1 = '0000000954';
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 244K| 19M| 26228 (5)|
| 1 | TABLE ACCESS FULL| T1 | 244K| 19M| 26228 (5)|
SQL> explain plan for select * from T1 where C1 = '0033454555';
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 532 | 43624 | 261 (0)|
| 1 | TABLE ACCESS BY INDEX ROWID| T1 | 532 | 43624 | 261 (0)|
| 2 | INDEX RANGE SCAN | I1 | 532 | | 2 (0)|
It's possible you do have a histogram, even though you didn't plan on creating it, if you're running 10g.
In the absence of a histogram and with 7M distinct keys in 10M rows, Oracle should have predicted 2 rows for both queries, not 244,000 and 532.
If you do have a histogram, you probably need to get rid of it.
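To check for and then remove the histogram, something along these lines should work (a sketch; adjust the owner if the table is not in your own schema):
-- is there a histogram on C1?
SELECT column_name, num_distinct, num_buckets, histogram
  FROM user_tab_col_statistics
 WHERE table_name = 'T1' AND column_name = 'C1';
-- regather with a single bucket on C1, i.e. no histogram
BEGIN
  dbms_stats.gather_table_stats(
    ownname    => USER,
    tabname    => 'T1',
    method_opt => 'FOR COLUMNS C1 SIZE 1',
    cascade    => TRUE);
END;
/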
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format. -
Performance issues of SQL access to AW
Hi Experts:
I wonder whether there are performance issues when using SQL to access an AW. When using SQL to access cubes in an AW, the SQL queries the relational views for AW objects, and the views are based on the OLAP_TABLE function. We know that views based on any table function are not able to make use of an index; that is, to query a subset of the data of a view, we would have to full-scan the view and then apply the filter. Such a query plan always leads to bad performance.
I want to know: when I use SQL to retrieve a small part of the data in an AW cube, will the Oracle OLAP engine retrieve all data in the cube and then apply the filter? If the Oracle OLAP engine only retrieves the data needed from the AW, how does it do that?
Thanks.
For most requests the OLAP_TABLE function can reduce the amount of data it produces by examining the rowsource tree, or WHERE clause. The data in Oracle OLAP is highly indexed. There are steps a user can take to optimize the index use. Specifically, pin down the dimension(s) defined in the OLAP_TABLE function LIMITMAP via (NOT) IN lists on the dimension, parent, level or GID columns, and use valuesets for the INHIER object instead of a boolean object.
In 10g, WHERE clauses like SALES > 50 are also processed prior to sending data out.
For large requests (thousands of rows) performance can be a problem because the data is being sent through the object layer. In 10g this can be ameliorated by wrapping the OLAP_TABLE function call with a SQL MODEL clause. The SQL MODEL clause knows a bit more about the OLAP option and does not require us to pipe the data through the object layer.
SQL MODEL example (note: no ADT definition, using auto ADT). This can be wrapped in a CREATE VIEW statement:
select * from olap_table('myaw duration session', null, null, 'measure sales as number from aw_sales_obj dimension d1 as varchar2(10) from geog ...rest of dimensions')
sql model dimension by (d1, d2, d3, d4) measures (sales, any attributes, parent columns etc...) unique single reference rules update sequential order ()
Example of WHERE clause with above select.
SELECT *
FROM (select * from olap_table('myaw duration session', null, null, 'measure sales as number from aw_sales_obj dimension d1 as varchar2(10) from geog ...rest of dimensions')
sql model dimension by (d1, d2, d3, d4) measures (sales, any attributes, parent columns etc...) unique single reference rules update sequential order ()))
WHERE GEOG NOT IN ('USA', 'CANADA')
and GEOG_GID = 1
and TIME_PARENT IN ('2004')
and CHANNEL = 'CATALOG'
and SALES > 50000; -
Performance Issue in Query using >= and <=
Hi,
I have a performance issue when using a condition like:
SELECT * FROM A WHERE ITEM_NO>='M-1130' AND ITEM_NO<='M-9999'.
Item_No is a varchar2 field and contains numerical as well as string values.
Can anyone help to solve the issue.
Thanks and Regards
How can you say it is a performance issue with the condition? Do you have an execution plan? If yes, post it between [pre] and [/pre] tags, like this.
[pre]SQL> explain plan for
2 select sysdate
3 from dual
4 /
Explained.
SQL> select * from table(dbms_xplan.display)
2 /
PLAN_TABLE_OUTPUT
Plan hash value: 1546270724
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 2 (0)| 00:00:01 |
| 1 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
8 rows selected.
SQL>
[/pre] -
Performance issue with Oracle Text index
Hi Experts,
We are on Oracle 11.2.0.3 on Solaris 10. I have implemented Oracle Text in our environment and I am facing a strange performance issue.
One SQL statement with a CONTAINS clause is taking forever: more than 20 minutes and it still does not complete. This SQL has a CONTAINS clause, an EXISTS clause and a NOT EXISTS clause.
If I remove the EXISTS clause and the NOT EXISTS clause, it completes fast, but with those two clauses it just takes forever. It is late at night so I am not able to post the table and SQL query details and will do so tomorrow, but based on this general description, are there any pointers for me to review?
sql query doing fine:
SELECT
U.CLNT_OID, U.USR_OID, S.MAILADDR
FROM
access_usr U
INNER JOIN access_sia S
ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
WHERE U.CLNT_OID = 'ABCX32S'
AND CONTAINS(LAST_NAME , 'TO%' ) >0
--sql query that hangs forever:
SELECT
U.CLNT_OID, U.USR_OID, S.MAILADDR
FROM
access_usr U
INNER JOIN access_sia S
ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
WHERE U.CLNT_OID = 'ABCX32S'
AND CONTAINS(LAST_NAME , 'TO%' ) >0
and exists (--one clause here with a few table joins)
and not exists (--one clause here with a few table joins);
--Another strange thing I found: if instead of 'TO%' I use 'ZZ%' or 'L1%' in this SQL, it works fast, but for 'TO%' it goes slow with those two EXISTS/NOT EXISTS clauses!
I will be most thankful for the inputs.
OrauserN
Hi Barbara,
First of all, thanks a lot for reviewing the issue.
Unfortunately, changing to an empty stoplist did not work out. Today I am copying here the entire SQL that has this issue, and will be most thankful for more insights/pointers on what can be done.
Here is the entire sql:
SELECT U.CLNT_OID,
U.USR_OID,
S.EMAILADDRESS,
U.FIRST_NAME,
U.LAST_NAME,
S.JOBCODE,
S.LOCATION,
S.DEPARTMENT,
S.ASSOCIATEID,
S.ENTERPRISECOMPANYCODE,
S.EMPLOYEEID,
S.PAYGROUP,
S.PRODUCTLOCALE
FROM ACCESS_USR U
INNER JOIN
ACCESS_SIA S
ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
WHERE U.CLNT_OID = 'G39NY3D25942TXDA'
AND EXISTS
(SELECT 1
FROM ACCESS_USR_GROUP_XREF UGX
INNER JOIN ACCESS_GROUP RELG
ON RELG.CLNT_OID = UGX.CLNT_OID
AND RELG.GROUP_OID = UGX.GROUP_OID
INNER JOIN ACCESS_GROUP G
ON G.CLNT_OID = RELG.CLNT_OID
AND G.GROUP_TYPE_OID = RELG.GROUP_TYPE_OID
WHERE UGX.CLNT_OID = U.CLNT_OID
AND UGX.USR_OID = U.USR_OID
AND G.GROUP_OID = 920512943
AND UGX.INCLUDED = 1)
AND NOT EXISTS
(SELECT 1
FROM ACCESS_USR_GROUP_XREF UGX
INNER JOIN
ACCESS_GROUP G
ON G.CLNT_OID = UGX.CLNT_OID
AND G.GROUP_OID = UGX.GROUP_OID
WHERE UGX.CLNT_OID = U.CLNT_OID
AND UGX.USR_OID = U.USR_OID
AND G.GROUP_OID = 920512943
AND UGX.INCLUDED = 1)
AND CONTAINS (U.LAST_NAME, 'Bon%') > 0;
Like I said before, if the EXISTS and NOT EXISTS clauses are removed it runs in under a second. But with those EXISTS and NOT EXISTS clauses it takes anywhere from 25 minutes to more than one hour.
Note also that it was not 'TO%' but 'Bon%' in the CONTAINS clause that is giving the issue - sorry, that was wrong on my part.
Also please see below the Oracle Text indexes defined on the table ACCESS_USR:
--definition of preferences used in the index:
SET SERVEROUTPUT ON size unlimited
WHENEVER SQLERROR EXIT SQL.SQLCODE
DECLARE
v_err VARCHAR2 (1000);
v_sqlcode NUMBER;
v_count NUMBER;
BEGIN
ctxsys.ctx_ddl.create_preference ('cust_lexer', 'BASIC_LEXER');
ctxsys.ctx_ddl.set_attribute ('cust_lexer', 'base_letter', 'YES'); -- removes diacritics
EXCEPTION
WHEN OTHERS
THEN
v_err := SQLERRM;
v_sqlcode := SQLCODE;
v_count := INSTR (v_err, 'DRG-10701');
IF v_count > 0
THEN
DBMS_OUTPUT.put_line (
'The required preference named CUST_LEXER with BASIC LEXER is already set up');
ELSE
RAISE;
END IF;
END;
DECLARE
v_err VARCHAR2 (1000);
v_sqlcode NUMBER;
v_count NUMBER;
BEGIN
ctxsys.ctx_ddl.create_preference ('cust_wl', 'BASIC_WORDLIST');
ctxsys.ctx_ddl.set_attribute ('cust_wl', 'SUBSTRING_INDEX', 'true'); -- to improve performance
EXCEPTION
WHEN OTHERS
THEN
v_err := SQLERRM;
v_sqlcode := SQLCODE;
v_count := INSTR (v_err, 'DRG-10701');
IF v_count > 0
THEN
DBMS_OUTPUT.put_line (
'The required preference named CUST_WL with BASIC WORDLIST is already set up');
ELSE
RAISE;
END IF;
END;
--now below is the code of the index:
CREATE INDEX ACCESS_USR_IDX3 ON ACCESS_USR
(FIRST_NAME)
INDEXTYPE IS CTXSYS.CONTEXT
PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
CREATE INDEX ACCESS_USR_IDX4 ON ACCESS_USR
(LAST_NAME)
INDEXTYPE IS CTXSYS.CONTEXT
PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
The strange thing is that, like I said, if I remove the EXISTS clause the query returns very fast. Also, if I modify the query to use only one NOT EXISTS clause and remove the other EXISTS clause, it returns in less than one second. And if I remove the EXISTS clause and use only the NOT EXISTS clause, it returns in less than 4 seconds. But with both clauses it runs forever!
When I tried to use dbms_xplan.display_cursor to get the query plan (for the case with both the EXISTS and NOT EXISTS clauses), it said that the previous statement's SQL id was 0 or something like that, so I was not able to see the query plan. I will keep trying to get this plan (it takes 25 minutes to one hour each time, but I will get this info soon). Again, any pointers are most helpful.
Regards
OrauserN -
Hi gurus!
We have experienced big performance issues using LKM.
I have tried using LKM SQL to Oracle and found that the LKM works very slowly with tables that have a large number of columns.
I tried loading a table with 3 million records; for a table with 20 columns it works much faster than with 200 columns.
With two separate Oracle servers I solved the problem with a dblink and don't use an LKM.
But how can this problem be solved when loading from MS SQL Server to Oracle, where no such db link is available?
Has anyone found the reason for this performance issue?
I believe the performance bottleneck is in the ODI agent, but I don't know how to configure it to use more hardware resources.
For example, in the export file the decimal separator was "." but since I use a Russian NLS_LANG the separator should be ",". I solved that with a translate function.
Now I have a problem loading a text field that contains 262 Russian characters (code page 1251); I can't load it into a varchar2(4000) field.
When I try to insert this value manually, it is inserted fine.
In log file I have error message like this:
Error in table xxx , column yyy.
The field in file exceeded maximal length.
(The message is translated, as I use a Russian NLS_LANG.)
SQLDeveloper 1.5.4 Table browsing performance issue
Hi all,
I have read previous posts regarding SQL Developer 1.5.3 table browsing performance issues. I downloaded and installed version 1.5.4, and it appears the problem has gotten worse!
It takes ages to display rows of this particular table (the structure is shown below). It takes much longer to view it in Single Record format. Then attempting to Export the data is another frustrating exercise. By the way, TOAD does not seem to have this problem so I guess it is a SQLDeveloper bug.
Can someone help with any workarounds?
Thanks
Chiedu
Here is the table structure:
create table EMAIL_SETUP
(
APPL_ID VARCHAR2(10) not null,
EML_ID VARCHAR2(10) not null,
EML_DESC VARCHAR2(80) not null,
PRIORITY_NO_DM NUMBER(1) default 3 not null
constraint CC_EMAIL_SETUP_4 check (
PRIORITY_NO_DM in (1,2,3,4,5)),
DTLS_YN VARCHAR2(1) default '0' not null
constraint CC_EMAIL_SETUP_5 check (
DTLS_YN in ('0','1')),
ATT_YN VARCHAR2(1) default '0' not null
constraint CC_EMAIL_SETUP_6 check (
ATT_YN in ('0','1')),
MSG_FMT VARCHAR2(5) default 'TEXT' not null
constraint CC_EMAIL_SETUP_7 check (
MSG_FMT in ('TEXT','HTML')),
MSG_TMPLT VARCHAR2(4000) not null,
MSG_MIME_TYPE VARCHAR2(500) not null,
PARAM_NO NUMBER(2) default 0 not null
constraint CC_EMAIL_SETUP_10 check (
PARAM_NO between 0 and 99),
IN_USE_YN VARCHAR2(1) not null
constraint CC_EMAIL_SETUP_11 check (
IN_USE_YN in ('0','1')),
DFLT_USE_YN VARCHAR2(1) default '0' not null
constraint CC_EMAIL_SETUP_12 check (
DFLT_USE_YN in ('0','1')),
TAB_NM VARCHAR2(30) null ,
FROM_ADDR VARCHAR2(80) null ,
RPLY_ADDR VARCHAR2(80) null ,
MSG_SBJ VARCHAR2(100) null ,
MSG_HDR VARCHAR2(2000) null ,
MSG_FTR VARCHAR2(2000) null ,
ATT_TYPE_DM VARCHAR2(4) null
constraint CC_EMAIL_SETUP_19 check (
ATT_TYPE_DM is null or (ATT_TYPE_DM in ('RAW','TEXT'))),
ATT_INLINE_YN VARCHAR2(1) null
constraint CC_EMAIL_SETUP_20 check (
ATT_INLINE_YN is null or (ATT_INLINE_YN in ('0','1'))),
ATT_MIME_TYPE VARCHAR2(500) null ,
constraint PK_EMAIL_SETUP primary key (EML_ID)
);
Check Tools | Preferences | Database | Advanced Parameters and post the value you have there.
Try setting it to a small number and report if you see any improvement.
-Raghu -
Performance issue of "clobagg"
create or replace type clobagg_type as object(
  text clob,
  static function ODCIAggregateInitialize(
    sctx in out clobagg_type
  ) return number,
  member function ODCIAggregateIterate(
    self in out clobagg_type,
    value in clob
  ) return number,
  member function ODCIAggregateTerminate(
    self in clobagg_type,
    returnvalue out clob,
    flags in number
  ) return number,
  member function ODCIAggregateMerge(
    self in out clobagg_type,
    ctx2 in clobagg_type
  ) return number
);
/
create or replace type body clobagg_type
is
  static function ODCIAggregateInitialize(
    sctx in out clobagg_type
  ) return number
  is
  begin
    sctx := clobagg_type(null);
    return ODCIConst.Success;
  end;
  member function ODCIAggregateIterate(
    self in out clobagg_type,
    value in clob
  ) return number
  is
  begin
    self.text := self.text || value;
    return ODCIConst.Success;
  end;
  member function ODCIAggregateTerminate(
    self in clobagg_type,
    returnvalue out clob,
    flags in number
  ) return number
  is
  begin
    returnvalue := self.text;
    return ODCIConst.Success;
  end;
  member function ODCIAggregateMerge(
    self in out clobagg_type,
    ctx2 in clobagg_type
  ) return number
  is
  begin
    self.text := self.text || ctx2.text;
    return ODCIConst.Success;
  end;
end;
/
create or replace function clobagg(
  input clob
) return clob
  deterministic
  parallel_enable
  aggregate using clobagg_type;
/
SQL> select trim(',' from clobagg(ename||',')) as enames from emp;
ENAMES
SMITH,ALLEN,WARD,JONES,MARTIN,BLAKE,CLARK,SCOTT,KING,TURNER,ADAMS,JAMES,FORD,MILLER
SQL>
I use the above function in my queries, but it takes ages. Does anyone have a better solution for this?
The requirement is: I need LISTAGG functionality without the 4000-character limit.
Both the threads are not talking about the performance issue.
Yes, they are (but they are large threads), see for example:
"It works perfectly fine for small data. But for large data sets, it takes forever e.g"
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:15637744429336#50979409688013
"search - you'll find an implementation that does clobs - but I would recommend against it, the amount of data is probably just too long at that point."
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:15637744429336#1225142800346989581
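If the goal is simply LISTAGG-like behaviour with a CLOB result, one commonly used alternative that skips the custom aggregate entirely is XMLAGG (a sketch against the same EMP example; note it does not undo XML entity escaping of characters such as & or <):
SELECT RTRIM(XMLAGG(XMLELEMENT(e, ename || ',') ORDER BY ename)
               .EXTRACT('//text()').GETCLOBVAL(), ',') AS enames
FROM emp;
Whether it is faster than the ODCI aggregate depends on the data volume, so test it on your own tables.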