Spatial query very slow
I can execute this query, but it is very slow. I have 2 tables: A with 250,000 sites and B with 250,000 points. I want to determine how many risks fall inside the sites.
Thanks
JGS
SELECT B.ID, A.ID, A.GC, A.SUMA
FROM DBG_RIESGOS_CUMULOS_SITE A, DBG_RIESGOS_CUMULOS B
WHERE A.GC = 'PATRIMONIAL FENOMENOS SISMICOS' AND A.GC=B.GC
AND SDO_RELATE(B.GEOMETRY, A.GEOMETRY, 'MASK=INSIDE') = 'TRUE';
100 records in 220 seconds. Slow!
I would do two things:
1) Ensure Oracle is patched with the latest 10.2.0.4 patches
This is the list I've been working with:
Patch 7003151
Patch 6989483
Patch 7237687
Patch 7276032
Patch 7307918
2) Write the query like this
SELECT /*+ ORDERED*/ B.ID, A.ID, A.GC, A.SUMA
FROM DBG_RIESGOS_CUMULOS B, DBG_RIESGOS_CUMULOS_SITE A
WHERE B.GC = 'PATRIMONIAL FENOMENOS SISMICOS'
AND A.GC=B.GC
AND SDO_ANYINTERACT(A.GEOMETRY, B.GEOMETRY) = 'TRUE';
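It is also worth confirming that the geometry column being searched actually has a usable spatial index; a quick sanity check against the standard Spatial metadata view (a sketch, assuming the usual USER_SDO_INDEX_INFO columns):

```sql
-- Spatial operators such as SDO_RELATE / SDO_ANYINTERACT require a domain
-- index on the searched geometry column; list what Spatial knows about.
SELECT index_name, table_name, column_name
  FROM USER_SDO_INDEX_INFO;
```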
Similar Messages
-
I have Oracle 9i and SunOS 5.8.
I have a Java application that queries the Customer table. This table has 2,000,000 records and I have to show them in pages (20 records per page).
The user queries, for example, the customers whose last name begins with 'O'. The application then shows the first 20 records matching this condition, ordered by name.
So I have to create 2 queries:
1)
SELECT id_customer,Name
FROM Customers
WHERE Name like 'O%'
ORDER BY id_customer
But when I tried this query in TOAD it took a long time (15 minutes).
I have an index on the NAME field!
Besides, if the user wants to go to the second page, the query is executed again (the Java programmers told me that).
What is your recommendation to optimize it? I need to obtain the information in a few seconds.
2)
SELECT count(*) FROM Customers WHERE NAME like 'O%'
I have to do this query because I need to know how many pages (of 20 records) I need to show.
For example, with 5,000 records I have 250 pages.
But when I tried this query in TOAD it also took a long time (30 seconds).
What is your recommendation to optimize it? I need to obtain the information in a few seconds.
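For reference, the classic Oracle pagination pattern fetches only one page per call by nesting the ordered query before ROWNUM is applied (a sketch using the poster's table and columns; :lo and :hi are assumed bind variables for the page bounds):

```sql
-- Page rows :lo+1 .. :hi of customers whose name starts with 'O'.
-- The ORDER BY must sit in the innermost query: ROWNUM is assigned
-- before sorting, so filtering on it directly would page unsorted rows.
SELECT id_customer, Name
  FROM (SELECT /*+ FIRST_ROWS(20) */ a.*, ROWNUM rn
          FROM (SELECT id_customer, Name
                  FROM Customers
                 WHERE Name LIKE 'O%'
                 ORDER BY Name) a
         WHERE ROWNUM <= :hi)
 WHERE rn > :lo;
```

Ordering by the indexed NAME column (rather than id_customer) also gives the optimizer a chance to read rows pre-sorted from the index and stop after :hi rows.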
Thanks in advance!

This appears to be a duplicate of a post in the Query very slow! forum.
Claudio, since the same folks tend to read both forums, it is generally preferred that you post questions in only one forum. That way, multiple people don't spend time writing generally equivalent replies.
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
Spatial query runs slow on view
Hello,
I have two tables and one of them has a geometry column. I created a view to join those two tables based on the id column, which has been indexed in both tables.
t1(
id number(9),
name varchar2(20)
t2(
id number(9),
geom MDSYS.SDO_GEOMETRY
CREATE VIEW v1 (
id,
name,
geom
) AS
SELECT /*+ FIRST_ROWS */ t1.id, t1.name, t2.geom
FROM t1,t2
WHERE t1.id = t2.id
When I query the view with the following statement it runs very slowly (there are more than 1 million rows in the t2 table):
SELECT * FROM v1
WHERE mdsys.sdo_filter(geom, [a rectangle],'querytype=window') = 'TRUE';
but
SELECT /*+ FIRST_ROWS */ t1.id, t1.name,t2.geom
FROM t1,t2
WHERE t1.id=t2.id
and mdsys.sdo_filter(geom, [a rectangle],'querytype=window') = 'TRUE';
returns almost instantly. Can someone tell me what is wrong with the "create view" statement?
Thanks

Thank you for your reply. Here are the plans. The view's plan uses the spatial index first.
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 21 | 756 | 10 (60)|
| 1 | NESTED LOOPS | | 21 | 756 | 10 (60)|
| 2 | TABLE ACCESS BY INDEX ROWID| T2 | 5269 | 123K| 3 (0)|
| 3 | DOMAIN INDEX | T2_SDX | | | |
| 4 | TABLE ACCESS BY INDEX ROWID| T1 | | | |
| 5 | INDEX RANGE SCAN | T1_ID_IDX | 1 | | 0 (0)|
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 21 | 756 | 99 (3)|
| 1 | TABLE ACCESS BY INDEX ROWID | T2 | 1 | 24 | 99 (3)|
| 2 | NESTED LOOPS | | 21 | 756 | 99 (3)|
| 3 | TABLE ACCESS FULL | T1 | 21 | 252 | 2 (0)|
| 4 | BITMAP CONVERSION TO ROWIDS | | | | |
| 5 | BITMAP AND | | | | |
| 6 | BITMAP CONVERSION FROM ROWIDS| | | | |
| 7 | INDEX RANGE SCAN | T2_ID_IDX | 1 | | 2 (0)|
| 8 | BITMAP CONVERSION FROM ROWIDS| | | | |
| 9 | SORT ORDER BY | | | | |
| 10 | DOMAIN INDEX | T2_SDX | 1 | | |
----------------------------------------------------------------------------------------------------- -
Oracle interview questions: query very slow
Hi,
Most of the time in interviews they ask:
"What steps do you take when your query is running very slow?"
I usually say:
1) first I check whether the table is properly indexed
2) next, whether it is properly normalized
Interviewers are not fully satisfied with these answers,
so kindly give me more suggestions.
Also, when checking the execution plan, get the actual plan using DBMS_XPLAN.DISPLAY_CURSOR rather than the predicted one (EXPLAIN PLAN FOR). If you use a configurable IDE such as SQL Developer or PL/SQL Developer, it is worth taking the time to set this up in the session browser so that you can easily capture it while it's running. You might also look at the estimated vs actual cardinalities. While you're at it, you could check v$session_longops, v$session_wait and (if you have the Diagnostic and Tuning packs licensed) v$active_session_history and the various dba_hist% views.
You might try the SQL Tuning Advisor (DBMS_SQLTUNE) which generates profiles for SQL statements (requires ADVISOR system privilege to run a tuning task, and CREATE ANY SQL PROFILE to apply a profile).
In 11g look at SQL Monitor.
Tracing is all very well if you can get access to the tracefile in a reasonable timeframe, though in many sites (including my current one) it's just too much trouble unless you're a DBA.
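The actual-plan capture described above can be done straight from SQL*Plus; run the slow statement first in the same session, then (the NULL, NULL arguments mean "the last cursor this session executed"):

```sql
-- Requires STATISTICS_LEVEL = ALL, or a /*+ GATHER_PLAN_STATISTICS */ hint
-- in the target statement, for the actual row counts to be populated.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```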
Edited by: William Robertson on Apr 18, 2011 11:40 PM
Sorry Rob, should probably have replied to oraclehema rather than you. -
User Defined Type - Array bind Query very slow
Hi.
I have following Problem. I try to use Oracle Instant Client 11 and ODP.NET to pass Arrays in SELECT statements as Bind Parameters. I did it, but it runs very-very slow. Example:
- Inittial Query:
SELECT tbl1.field1, tbl1.field2, tbl2.field1, tbl2.field2 ... FROM tbl1
LEFT JOIN tbl2 ON tbl1.field11=tbl2.field0
LEFT JOIN tbl3 ON tbl2.field11=tbl3.field0 AND tbl1.field5=tbl3.field1
...and another LEFT JOINS
WHERE
tbl1.field0 IN ('id01', 'id02', 'id03'...)
this query with 100 elements in the IN list takes 3 seconds on my database.
- Query with Array bind:
in Oracle I created a UDT: create or replace type myschema.mytype as table of varchar2(1000)
then, as described in the Oracle example, I wrote a few classes (a factory and one implementing IOracleCustomType) and used it in the query:
instead of IN ('id01', 'id02', 'id03'...) I have tbl1.field0 IN (select column_value from table(:prmTable)), where :prmTable is the bound array.
This query takes 190 seconds!!! Why? It works, but the Oracle server's disks work very hard, and it takes too long.
Our Oracle server is 10g.
PS: I tried using only 5 elements in the array - same result, it still takes 190 seconds...
Please help!

I recommend you generate an explain plan for each query and post them here. Based on what you have given, the following MAY be happening:
Your first query has a static IN list when it is submitted to the server, so when Oracle generates the execution plan the CBO can determine it accurately based on a KNOWN set of input parameters. However, the second query has a bind variable for this list of parameters, and Oracle has no way of knowing, at the time the execution plan is generated, what that list contains. If it does not know what the list contains, it cannot generate the most optimal execution plan. Therefore I would guess that it is probably doing some sort of full table scan (although those aren't always bad, remember that!).
Again please post the execution plans for each.
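In addition to comparing the plans, one common workaround when the collection's size is invisible to the CBO is to state it explicitly; the sketch below reuses the poster's bind name and assumes roughly 100 elements:

```sql
-- CARDINALITY(t 100) tells the optimizer to assume ~100 rows in the bound
-- collection, steering it back toward the plan the literal IN list produced.
SELECT tbl1.field1, tbl1.field2, tbl2.field1, tbl2.field2
  FROM tbl1
  LEFT JOIN tbl2 ON tbl1.field11 = tbl2.field0
 WHERE tbl1.field0 IN
       (SELECT /*+ CARDINALITY(t 100) */ t.column_value
          FROM TABLE(:prmTable) t);
```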
HTH! -
10g Form - first execute query - very slow
I have the following issue:
Enter an application
open a form in enter query mode
the first time, Execute Query is very slow (several minutes)
every other time it's quick (a couple of seconds or less)
I can leave the form, use other forms within the app, come back, and the query is still quick. It's only the first time after initially launching the app.
Any ideas what might be causing this?

We have the same application running in 6i client/server with a 9i DB in production. We are testing the upgraded application, which is 10g Forms on OAS with a 10g DB. We don't have the issue in the current production client/server app.
-
UNION making query very slow... solution?
Hi Guys,
I want to get the records of two tables in one view. The option available in Oracle is UNION.
I have used UNION between two SELECT statements. There are over 15,000 records in one table and around 200 in the other.
But after using this UNION between the SELECT statements, my view has become very slow.
Can I use an ORDER BY clause in the following view? I have tried, but it gives an error. What is the alternative to a UNION?
Please help. All of our reports depend on this view and it's very slow.
the script of the view is as follows:
CREATE OR REPLACE VIEW "COMMON"."V_SEL_SYS_EMP" AS
Select Employee.Emp_Employees.Employee_ID,
trim(Employee.Emp_Employees.Emp_F_Name) ||' '|| trim(Employee.Emp_Employees.Emp_L_Name) As
Emp_Name, Employee.Emp_Employees.Branch_ID,
Common.Com_Branches.Br_Name, COMMON.COM_BRANCHES.REGION_ID,
COMMON.COM_REGIONS.REGION_NAME, COMMON.COM_BRANCHES.CHAPTER_ID,
COMMON.COM_CHAPTERS.CHAPTER_NAME, Employee.Emp_Employees.Company_ID,
Common.Com_Companies.Comp_Name, Employee.Emp_Employees.Department_ID,
Common.Com_Departments.Dept_Name, Employee.Emp_Employees.Religion_ID,
Common.Com_Religions.Religion_Name, Employee.Emp_Employees.Premises_ID,
Common.Com_Premises.Premises_name, Employee.Emp_Employees.Categ_ID,
Employee.Emp_Categories.Categ_Name, Employee.Emp_Employees.Desig_ID,
Employee.Emp_Employees.Desig_Suffix, Employee.Emp_Designations.Designation,
EMPLOYEE.EMP_EMPLOYEES.PAY_SCALE, EMPLOYEE.EMP_EMPLOYEES.BASIC_SAL,
Employee.Emp_Employees.HEAD_OF_DEPT, Employee.Emp_Employees.Birth_Date,
Employee.Emp_Employees.Emp_Gender, Employee.Emp_Employees.Emp_Status,
Employee.Emp_Employees.Hire_Date, Employee.Emp_Employees.Conf_Date,
Employee.Emp_Employees.Left_Date, Employee.Emp_Employees.Emp_Photo,
Employee.Emp_Emp_Info.E_Mail,Employee.Emp_Employees.Dept_Head_Id FROM Employee.Emp_Employees, Common.Com_Branches,
Common.Com_Companies, Common.Com_Departments, Common.Com_Religions, Common.Com_Premises,
Employee.Emp_categories,
Employee.Emp_Designations, Employee.Emp_Emp_Info, COMMON.COM_REGIONS,common.com_chapters
Where (Employee.Emp_Employees.Branch_ID = Common.Com_Branches.Branch_ID(+))
and (Employee.Emp_Employees.Company_ID = Common.Com_Companies.Company_ID(+))
AND (COM_BRANCHES.REGION_ID = COM_REGIONS.REGION_ID(+))
AND (COM_BRANCHES.CHAPTER_ID = COM_CHAPTERS.CHAPTER_ID(+))
and (Employee.Emp_Employees.Department_ID = Common.Com_Departments.Department_ID(+))
and (Employee.Emp_Employees.Religion_ID = Common.Com_Religions.Religion_ID(+))
and (Employee.Emp_Employees.Premises_ID = Common.Com_Premises.Premises_ID(+))
and (Employee.Emp_Employees.Categ_ID = Employee.Emp_Categories.Categ_ID(+))
and (Employee.Emp_Employees.Desig_ID = Employee.Emp_Designations.Desig_ID(+))
and (Employee.Emp_Employees.Employee_ID = Employee.Emp_Emp_Info.Employee_ID(+))
UNION
Select Common.Com_Non_Employees.Non_Employee_ID,
trim(Common.Com_Non_Employees.First_Name) ||' '|| trim(Common.Com_Non_Employees.Last_Name)
As Emp_Name, Common.Com_Non_Employees.Branch_ID,
Common.Com_Branches.Br_Name, COMMON.COM_BRANCHES.REGION_ID,
COMMON.COM_REGIONS.REGION_NAME, COMMON.COM_BRANCHES.CHAPTER_ID,
COMMON.COM_CHAPTERS.CHAPTER_NAME, Common.Com_Non_Employees.Company_ID,
Common.Com_Companies.Comp_Name, Common.Com_Non_Employees.Department_ID,
Common.Com_Departments.Dept_Name, Common.Com_Non_Employees.Religion_ID,
Common.Com_Religions.Religion_Name, NULL as Premises_ID,
NULL as Premises_name, NULL as Categ_ID, NULL as Categ_Name,
Common.Com_Non_Employees.Desig_ID, Common.Com_Non_Employees.Desig_Suffix,
Employee.Emp_Designations.Designation, NULL as PAY_SCALE,
NULL as BASIC_SAL, NULL as HEAD_OF_DEPT, NULL as Birth_Date,
Common.Com_Non_Employees.Emp_Gender, NULL as Emp_Status,
NULL as Hire_Date,NULL as Conf_Date,NULL as Left_Date,NULL as Emp_Photo,
Employee.Emp_Emp_Info.E_Mail,Null as Dept_Head_ID
FROM Common.Com_Non_Employees, Common.Com_Branches,
Common.Com_Companies,
Common.Com_Departments, Common.Com_Religions, Common.Com_Premises,
Employee.Emp_categories, Employee.Emp_Designations, Employee.Emp_Emp_Info, COMMON.COM_REGIONS,
common.com_chapters
Where (Common.Com_Non_Employees.Branch_ID = Common.Com_Branches.Branch_ID(+))
and (Common.Com_Non_Employees.Company_ID = Common.Com_Companies.Company_ID(+))
AND (COM_BRANCHES.REGION_ID = COM_REGIONS.REGION_ID(+))
AND (COM_BRANCHES.CHAPTER_ID = COM_CHAPTERS.CHAPTER_ID(+))
and (Common.Com_Non_Employees.Department_ID = Common.Com_Departments.Department_ID(+))
and (Common.Com_Non_Employees.Religion_ID = Common.Com_Religions.Religion_ID(+))
and (Common.Com_Non_Employees.Desig_ID = Employee.Emp_Designations.Desig_ID(+))
and (Common.Com_Non_Employees.NOn_Employee_ID = Employee.Emp_Emp_Info.Employee_ID(+))
Without the UNION, the two SELECT statements retrieve data quickly.
Please help!
Imran Baig

Use UNION ALL instead of UNION.
If it is still slow, generate a trace and see where the bottleneck is.
alter session set events '10046 trace name context forever, level 8'
select * from veww;
alter session set events '10046 trace name context off';
use tkprof to format the trace file generated by the event (you can find the trace in your udump directory), and see what the wait events are.
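To illustrate: since the two branches of the view select disjoint populations (employees vs non-employees), UNION ALL can skip the sort/duplicate-elimination step that UNION performs, and callers can order the result themselves (a sketch):

```sql
-- In the view body, replace UNION with UNION ALL (the two branches cannot
-- return the same row, so duplicate elimination buys nothing).
-- An ORDER BY then goes in the query against the view, not inside it:
SELECT * FROM COMMON.V_SEL_SYS_EMP ORDER BY Emp_Name;
```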
Jaffar
OCP DBA -
Why Oracle Spatial Query So Slow???
I have 90,000 lines in my database. When I ran a query to find all the lines within a certain distance of a point, it took nearly 30 seconds. And when I queried the intersections between the lines, it took about 5 minutes!! I can't believe it!
Have you guys met this problem before? Please share something with me, thank you so much!
Xiong

Hi,
Are you using a spatial index for your within-distance query? If so, what kind, and have you tuned it at all?
As far as the intersection question goes, how many geometries are
you testing? Are they big? Are you only comparing the ones
returned by within distance?
If they are big geometries, there are some very large performance
gains in 9i when dealing with larger geometries.
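For reference, the index-driven form of a within-distance query looks like this (table, column and SRID here are illustrative, not the poster's; the geometry column must have a spatial index):

```sql
-- Return the IDs of lines within 500 m of a point, via the spatial index.
SELECT l.id
  FROM my_lines l
 WHERE SDO_WITHIN_DISTANCE(
         l.geom,
         SDO_GEOMETRY(2001, 8307,
                      SDO_POINT_TYPE(-71.1, 42.3, NULL), NULL, NULL),
         'distance=500 unit=m') = 'TRUE';
```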
Dan -
First query very slow, subsequent queries fine
In our 9i application, when we first run a query it is extremely slow if the database has not been used for some time. This happens for sure after overnight, but also after an hour or so of inactivity during the day.
After the initial query eventually completes, subsequent queries seem fast and are no problem. It is only a problem with the first query.
This does not happen with all data. Just a particular group of data in our database.
any suggestions?
Thanks
John

Hi John!
For me, it looks like a data cache effect.
A database needs to manipulate data and uses a data cache to avoid reading/writing data too much.
So if a request doesn't find data in the cache, the database has to read it from disk and put it in the data cache (for me, your first request). But if the data is already in the cache, there is no need to read it from disk, so the request time is far better (for me, the following requests).
So if this is a very important problem, what can you do?
- Check your query execution plan and try to need fewer data reads (avoid full table scans, for example...)
- Raise the size of your db cache (check the cache hit ratio (1))
- You can place data permanently in the cache (for tables, the CACHE option), but only if these data sets are small (check dba_segments, [dba_tables after statistics]). If the data sets are large, this data can eject other data from the cache, so your request time will be good but other requests very bad.
It could be a library cache effect too (same kind of problem: entries are made for queries already parsed, so the same query can avoid a hard parse) if, for example, you handle queries with 5,000 bind variables.
You can check the library cache hit ratio too (2)
To be sure of your problem, I think the best approach is to trace your request
1) when executed first (cold request)
2) and when executed the 4th time (hot request)
Tkprof the two traces and look at where the difference is. There are 3 phases: parse, execute and fetch.
A data cache problem shows up as a high fetch; a library cache problem, as a high parse. You will also find, for your query, the step in the execution plan which causes the disk reads.
You can post your cache query results and times here for your 1st request and the following requests. Even your trace files, if you want me to check your resolution.
Regards,
Jean-Luc
(1)
Cache hit ratio.
Warning 1: calculated from your last startup (so if your last startup was a few weeks ago, you need to shut down, wait for a good sample of your batches to execute, and try the following request)
Warning 2: there is no ">98% is good" and "<90% is bad". It depends on your applications. For example, if the same data is frequently accessed in a transactional database, you should raise it as high as you can.
But imagine databases for clients who need their data once a day or once a week (a database or schema of client information, like this very forum [a good example, because I suspect them to use Oracle databases, you know :)]). You can accept a high response time and lots of disk reads, and so a hit ratio < 90.
Cache hit ratio :
select round((1-(pr.value/(bg.value+cg.value)))*100,2) cachehit
from v$sysstat pr, v$sysstat bg, v$sysstat cg
where pr.name = 'physical reads'
and bg.name = 'db block gets'
and cg.name = 'consistent gets';
(2)
Same warnings as (1) W1,
but not (1) W2: the library hit ratio is generally higher than the cache hit ratio, > 98.
Library cache hit ratio :
select round(sum(pinhits)/sum(pins) * 100,2) efficacite from v$librarycache; -
Flashback and transaction query very slow
Hello. I was wondering if anyone else has seen transaction queries be really slow and if there is anything I can do to speed it up? Here is my situation:
I have a database with about 50 tables. We need to allow the user to go back to a point in time and "undo" what they have done. I can't use flashback table because multiple users can be making changes to the same table (different records) and I can't undo what the other users have done. So I must use the finer granularity of undoing each transaction.
I have not had a problem with the queries themselves. I basically get a cursor to all the transactions in each of the tables and order them backwards (since all the business rules must be observed). However, getting this cursor takes forever. From that cursor, I can execute the undo_sql. In fact, I once had a cursor that did a "union all" on each table, and even if the user only modified 1 table, it took way too long. So now I do a quick count based on the ROWSCN (running 10g, and the tables have ROWDEPENDENCIES) falling in the time window, to find out if a table has been touched. Based on that, I can create a cursor only for the tables that have been touched. This helps. But it is still slow, especially compared to any other query I have. And if the user did touch a lot of tables, it is still way too slow.
Here is an example of part of a query that is used on each table:
select xid, commit_scn, logon_user, undo_change#, operation, table_name, undo_sql
from flashback_transaction_query
where operation IN ('INSERT', 'UPDATE', 'DELETE')
and xid IN (select versions_xid
from TABLE1
versions between SCN p_scn and current_scn
where system_id = p_system_id)
and table_name = UPPER('TABLE1')

Any help is greatly appreciated.
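One thing that may help (a sketch, untested against this schema): bound FLASHBACK_TRANSACTION_QUERY on its own COMMIT_SCN as well, so the whole undo history is not scanned before the XID filter applies; p_scn, current_scn and p_system_id are the poster's existing variables, written here as binds:

```sql
SELECT xid, commit_scn, logon_user, undo_change#, operation, table_name, undo_sql
  FROM flashback_transaction_query
 WHERE commit_scn BETWEEN :p_scn AND :current_scn  -- prune by SCN range first
   AND operation IN ('INSERT', 'UPDATE', 'DELETE')
   AND table_name = 'TABLE1'
   AND xid IN (SELECT versions_xid
                 FROM TABLE1
                      VERSIONS BETWEEN SCN :p_scn AND :current_scn
                WHERE system_id = :p_system_id);
```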
-Carmine

Anyone?
Thanks,
-Carmine -
I have a table which has 40 million rows in it. Of course, partitioned!
begin
pk_cm_entity_context.set_entity_in_context(1);
end;
SELECT COUNT(1) FROM XFACE_ADDL_DETAILS_TXNLOG;
alter table XFACE_ADDL_DETAILS_TXNLOG rename to XFACE_ADDLDTS_TXNLOG_PTPART;
SELECT COUNT(1) FROM XFACE_ADDLDTS_TXNLOG_PTPART;
-- Create table
create table XFACE_ADDL_DETAILS_TXNLOG (
REF_TXN_NO CHAR(40),
REF_USR_NO CHAR(40),
REF_KEY_NO VARCHAR2(50),
REF_TXN_NO_ORG CHAR(40),
REF_USR_NO_ORG CHAR(40),
RECON_CODE VARCHAR2(25),
COD_TASK_DERIVED VARCHAR2(5),
COD_CHNL_ID VARCHAR2(6),
COD_SERVICE_ID VARCHAR2(10),
COD_USER_ID VARCHAR2(30),
COD_AUTH_ID VARCHAR2(30),
COD_ACCT_NO CHAR(22),
TYP_ACCT_NO VARCHAR2(4),
COD_SUB_ACCT_NO CHAR(16),
COD_DEP_NO NUMBER(5),
AMOUNT NUMBER(15,2),
COD_CCY VARCHAR2(3),
DAT_POST DATE,
DAT_VALUE DATE,
TXT_TXN_NARRATIVE VARCHAR2(60),
DATE_CHEQUE_ISSUE DATE,
TXN_BUSINESS_TYPE VARCHAR2(10),
CARD_NO CHAR(20),
INVENTORY_CODE CHAR(10),
INVENTORY_NO CHAR(20),
CARD_PASSBOOK_NO CHAR(30),
COD_CASH_ANALYSIS CHAR(20),
BANK_INFORMATION_NO CHAR(8),
BATCH_NO CHAR(10),
SUMMARY VARCHAR2(60),
MAIN_IC_TYPE CHAR(1),
MAIN_IC_NO CHAR(48),
MAIN_IC_NAME CHAR(64),
MAIN_IC_CHECK_RETURN_CODE CHAR(1),
DEPUTY_IC_TYPE CHAR(1),
DEPUTY_IC_NO CHAR(48),
DEPUTY_NAME CHAR(64),
DEPUTY_IC_CHECK_RETURN_CODE CHAR(1),
ACCOUNT_PROPERTY CHAR(4),
CHEQUE_NO CHAR(20),
COD_EXT_TASK CHAR(10),
COD_MODULE CHAR(4),
ACC_PURPOSE_CODE VARCHAR2(15),
NATIONALITY CHAR(3),
CUSTOMER_NAME CHAR(192),
COD_INCOME_EXPENSE CHAR(6),
COD_EXT_BRANCH CHAR(6),
COD_ACCT_TITLE CHAR(192),
FLG_CA_TT CHAR(1),
DAT_EXT_LOCAL DATE,
ACCT_OWNER_VALID_RESULT CHAR(1),
FLG_DR_CR CHAR(1),
FLG_ONLINE_UPLOAD CHAR(1),
FLG_STMT_DISPLAY CHAR(1),
COD_TXN_TYPE NUMBER(1),
DAT_TS_TXN TIMESTAMP(6),
LC_BG_GUARANTEE_NO VARCHAR2(20),
COD_OTHER_ACCT_NO CHAR(22),
COD_MOD_OTHER_ACCT_NO CHAR(4),
COD_CC_BRN_SUB_ACCT NUMBER(5),
COD_CC_BRN_OTHR_ACCT NUMBER(5),
COD_ENTITY_VPD NUMBER(5) default NVL(sys_context('CLIENTCONTEXT','entity_code'),11),
COD_EXT_TASK_REV VARCHAR2(10)
)
partition by hash (REF_TXN_NO)
PARTITIONS 128
store in (FCHDATA1,FCHDATA2,FCHDATA3,FCHDATA4, FCHDATA5, FCHDATA6, FCHDATA7, FCHDATA8);
insert /*+APPEND NOLOGGING */ into XFACE_ADDL_DETAILS_TXNLOG
select /*+PARALLEL */ * from XFACE_ADDLDTS_TXNLOG_PTPART;
-- Add comments to the table
comment on table XFACE_ADDL_DETAILS_TXNLOG
is ' Additional Data log table ';
-- Add comments to the columns
comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_TXN_NO
is 'Transaction Reference Number';
comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_USR_NO
is 'User Reference Number';
comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_KEY_NO
is 'Unique key to identify a leg of the transaction';
comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_TXN_NO_ORG
is 'Original Transaction Reference Number';
comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_USR_NO_ORG
is 'Original Transaction User Reference Number';
comment on column XFACE_ADDL_DETAILS_TXNLOG.RECON_CODE
is 'Reconciliation of transactions in future';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_TASK_DERIVED
is 'Transaction mnemonic for the request';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CHNL_ID
is 'Channel ID';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_SERVICE_ID
is 'Service ID';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_USER_ID
is 'User ID';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_AUTH_ID
is 'Authorizer ID';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_ACCT_NO
is 'It can be Card number or MCA or GL or CASH GL';
comment on column XFACE_ADDL_DETAILS_TXNLOG.TYP_ACCT_NO
is 'Type of input (Valid values CARD, MCA, GL, CASH, LN)';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_SUB_ACCT_NO
is 'MC Sub Account Number';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_DEP_NO
is 'Deposit Number';
comment on column XFACE_ADDL_DETAILS_TXNLOG.AMOUNT
is 'Transaction Amount';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CCY
is 'Currency Code';
comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_POST
is 'Posting Date of the transaction';
comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_VALUE
is 'Value Date of the transaction';
comment on column XFACE_ADDL_DETAILS_TXNLOG.TXT_TXN_NARRATIVE
is 'Text Transaction Narrative';
comment on column XFACE_ADDL_DETAILS_TXNLOG.DATE_CHEQUE_ISSUE
is 'Date of Issue of Cheque';
comment on column XFACE_ADDL_DETAILS_TXNLOG.TXN_BUSINESS_TYPE
is 'Transaction Business Type';
comment on column XFACE_ADDL_DETAILS_TXNLOG.CARD_NO
is 'Card Number';
comment on column XFACE_ADDL_DETAILS_TXNLOG.INVENTORY_CODE
is 'Inventory Code';
comment on column XFACE_ADDL_DETAILS_TXNLOG.INVENTORY_NO
is 'Inventory Number';
comment on column XFACE_ADDL_DETAILS_TXNLOG.CARD_PASSBOOK_NO
is 'Card Passbook Number';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CASH_ANALYSIS
is 'Cash Analysis Code';
comment on column XFACE_ADDL_DETAILS_TXNLOG.BANK_INFORMATION_NO
is 'Bank Information Number';
comment on column XFACE_ADDL_DETAILS_TXNLOG.BATCH_NO
is 'Batch Number';
comment on column XFACE_ADDL_DETAILS_TXNLOG.SUMMARY
is 'Summary';
comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_TYPE
is 'IC Type';
comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_NO
is 'IC Number';
comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_NAME
is 'IC Name';
comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_CHECK_RETURN_CODE
is 'IC Check Return Code';
comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_IC_TYPE
is 'Deputy IC Type';
comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_IC_NO
is 'Deputy IC Number';
comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_NAME
is 'Deputy Name';
comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_IC_CHECK_RETURN_CODE
is 'Deputy IC Check Return Code';
comment on column XFACE_ADDL_DETAILS_TXNLOG.ACCOUNT_PROPERTY
is 'Account Property';
comment on column XFACE_ADDL_DETAILS_TXNLOG.CHEQUE_NO
is 'Cheque Number';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_EXT_TASK
is 'External Task Code';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_MODULE
is 'Module Code - CH, TD, RD , LN, CASH, GL';
comment on column XFACE_ADDL_DETAILS_TXNLOG.ACC_PURPOSE_CODE
is 'Account Purpose Code';
comment on column XFACE_ADDL_DETAILS_TXNLOG.NATIONALITY
is 'Nationality';
comment on column XFACE_ADDL_DETAILS_TXNLOG.CUSTOMER_NAME
is 'Customer Name';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_INCOME_EXPENSE
is 'Income Expense Code';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_EXT_BRANCH
is 'External Branch Code';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_ACCT_TITLE
is 'Account Title Code';
comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_CA_TT
is 'Cash or Funds Transfer flag';
comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_EXT_LOCAL
is 'Local Date';
comment on column XFACE_ADDL_DETAILS_TXNLOG.ACCT_OWNER_VALID_RESULT
is 'Account Owner Valid Result';
comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_DR_CR
is 'Flag Debit Credit - D, C.';
comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_ONLINE_UPLOAD
is 'Flag Online Upload - O, U.';
comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_STMT_DISPLAY
is 'Statement Display Flag - Y/N, Y(Normal Reversal), N(Correction Reversal)';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_TXN_TYPE
is 'To denote the kind of transaction:
1 - Cash Credit Transaction
2 - Cash Debit Transaction
3 - Funds Transfer Credit Transaction
4 - Funds Transfer Debit Transaction';
comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_TS_TXN
is 'Date and Timestamp of the record being inserted';
comment on column XFACE_ADDL_DETAILS_TXNLOG.LC_BG_GUARANTEE_NO
is 'LC/BG Guarantee Number for which the request for the Liquidation has been initiated.';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_OTHER_ACCT_NO
is 'Other Account No';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_MOD_OTHER_ACCT_NO
is 'Module Code of Other Account No - CH, TD, RD , LN, CASH, GL';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CC_BRN_SUB_ACCT
is 'Branch Code for Sub Account';
comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CC_BRN_OTHR_ACCT
is 'Branch Code for Other Account';
-- Create/Recreate indexes
drop index IN_XFACE_ADDL_DETAILS_TXNLOG_1;
drop index IN_XFACE_ADDL_DETAILS_TXNLOG_2;
drop index IN_XFACE_ADDL_DETAILS_TXNLOG_3;
drop index IN_XFACE_ADDL_DETAILS_TXNLOG_4;
drop index IN_XFACE_ADDL_DETAILS_TXNLOG_5;
drop index IN_XFACE_ADDL_DETAILS_TXNLOG_6;
drop index IN_XFACE_ADDL_DETAILS_TXNLOG_7;
drop index IN_XFACE_ADDL_DETAILS_TXNLOG_8;
create index IN_XFACE_ADDL_DETAILS_TXNLOG_1 on XFACE_ADDL_DETAILS_TXNLOG (REF_TXN_NO, REF_KEY_NO, COD_SUB_ACCT_NO, COD_ENTITY_VPD)
GLOBAL PARTITION BY HASH (REF_TXN_NO, REF_KEY_NO, COD_SUB_ACCT_NO) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
create index IN_XFACE_ADDL_DETAILS_TXNLOG_2 on XFACE_ADDL_DETAILS_TXNLOG (REF_USR_NO, REF_KEY_NO, COD_SUB_ACCT_NO, COD_ENTITY_VPD)
GLOBAL PARTITION BY HASH(REF_USR_NO, REF_KEY_NO, COD_SUB_ACCT_NO) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
create index IN_XFACE_ADDL_DETAILS_TXNLOG_3 on XFACE_ADDL_DETAILS_TXNLOG (COD_SUB_ACCT_NO, FLG_STMT_DISPLAY, DAT_POST, COD_ENTITY_VPD)
GLOBAL PARTITION BY HASH(COD_SUB_ACCT_NO, FLG_STMT_DISPLAY) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
create index IN_XFACE_ADDL_DETAILS_TXNLOG_4 on
XFACE_ADDL_DETAILS_TXNLOG (COD_ACCT_NO, REF_TXN_NO, COD_TXN_TYPE, COD_USER_ID, COD_EXT_BRANCH, COD_ENTITY_VPD)
GLOBAL PARTITION BY HASH(COD_ACCT_NO, REF_TXN_NO, COD_TXN_TYPE, COD_USER_ID, COD_EXT_BRANCH)
PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
create index IN_XFACE_ADDL_DETAILS_TXNLOG_5 on XFACE_ADDL_DETAILS_TXNLOG (COD_USER_ID, DAT_POST, COD_ENTITY_VPD)
GLOBAL PARTITION BY HASH(COD_USER_ID) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
create index IN_XFACE_ADDL_DETAILS_TXNLOG_6 on XFACE_ADDL_DETAILS_TXNLOG (REF_TXN_NO_ORG, COD_ENTITY_VPD)
GLOBAL PARTITION BY HASH(REF_TXN_NO_ORG) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
create index IN_XFACE_ADDL_DETAILS_TXNLOG_7 on XFACE_ADDL_DETAILS_TXNLOG (DAT_EXT_LOCAL, DAT_POST,TXN_BUSINESS_TYPE, FLG_ONLINE_UPLOAD, COD_CHNL_ID, REF_TXN_NO, COD_ENTITY_VPD)
GLOBAL PARTITION BY HASH(DAT_EXT_LOCAL) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
/* Previous Key order: (COD_EXT_BRANCH,DAT_POST,REF_TXN_NO_ORG,COD_SERVICE_ID,COD_ENTITY_VPD) */
create index IN_XFACE_ADDL_DETAILS_TXNLOG_8 on XFACE_ADDL_DETAILS_TXNLOG (DAT_POST, COD_EXT_BRANCH, REF_TXN_NO_ORG, COD_SERVICE_ID, COD_ENTITY_VPD)
GLOBAL PARTITION BY HASH(DAT_POST) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
ALTER TABLE XFACE_ADDL_DETAILS_TXNLOG NOPARALLEL PCTFREE 50 INITRANS 128 LOGGING;
ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_1 NOPARALLEL INITRANS 128;
ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_2 NOPARALLEL INITRANS 128;
ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_3 NOPARALLEL INITRANS 128;
ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_4 NOPARALLEL INITRANS 128;
ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_5 NOPARALLEL INITRANS 128;
ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_6 NOPARALLEL INITRANS 128;
ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_7 NOPARALLEL INITRANS 128;
ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_8 NOPARALLEL INITRANS 128;
BEGIN
DBMS_RLS.ADD_POLICY(OBJECT_SCHEMA => UPPER('FCR44HOST'),
OBJECT_NAME => UPPER('XFACE_ADDL_DETAILS_TXNLOG'),
POLICY_NAME => 'FC_ENTITY_POLICY',
FUNCTION_SCHEMA => UPPER('FCR44HOST'),
POLICY_FUNCTION => 'pk_cm_vpd_policy.get_entity_predicate',
STATEMENT_TYPES => 'select,insert,update,delete',
UPDATE_CHECK => TRUE,
ENABLE => TRUE,
STATIC_POLICY => FALSE,
POLICY_TYPE => DBMS_RLS.SHARED_STATIC,
LONG_PREDICATE => FALSE,
SEC_RELEVANT_COLS => NULL,
SEC_RELEVANT_COLS_OPT => NULL);
END;
begin
dbms_stats.gather_table_stats(ownname => 'FCR44HOST',tabname => 'XFACE_ADDL_DETAILS_TXNLOG', cascade=>true,method_opt=>'for all columns size 1',degree => 32, GRANULARITY => 'PARTITION');
end;
The query which takes time:
INSERT INTO xface_addl_dtls_tlog_temp
(ref_txn_no,
ref_usr_no,
ref_key_no,
ref_txn_no_org,
ref_usr_no_org,
recon_code,
cod_task_derived,
cod_chnl_id,
cod_service_id,
cod_user_id,
cod_auth_id,
cod_acct_no,
typ_acct_no,
cod_sub_acct_no,
cod_dep_no,
amount,
cod_ccy,
dat_post,
dat_value,
txt_txn_narrative,
date_cheque_issue,
txn_business_type,
card_no,
inventory_code,
inventory_no,
card_passbook_no,
cod_cash_analysis,
bank_information_no,
batch_no,
summary,
main_ic_type,
main_ic_no,
main_ic_name,
main_ic_check_return_code,
deputy_ic_type,
deputy_ic_no,
deputy_name,
deputy_ic_check_return_code,
account_property,
cheque_no,
cod_ext_task,
cod_module,
acc_purpose_code,
nationality,
customer_name,
cod_income_expense,
cod_ext_branch,
cod_acct_title,
flg_ca_tt,
dat_ext_local,
acct_owner_valid_result,
flg_dr_cr,
flg_online_upload,
flg_stmt_display,
cod_txn_type,
dat_ts_txn,
lc_bg_guarantee_no,
cod_other_acct_no,
cod_mod_other_acct_no,
cod_cc_brn_sub_acct,
cod_cc_brn_othr_acct,
cod_ext_task_rev,
sessionid)
SELECT ref_txn_no,
ref_usr_no,
ref_key_no,
ref_txn_no_org,
ref_usr_no_org,
recon_code,
cod_task_derived,
cod_chnl_id,
cod_service_id,
cod_user_id,
cod_auth_id,
cod_acct_no,
typ_acct_no,
cod_sub_acct_no,
cod_dep_no,
amount,
cod_ccy,
dat_post,
dat_value,
txt_txn_narrative,
date_cheque_issue,
txn_business_type,
card_no,
inventory_code,
inventory_no,
card_passbook_no,
cod_cash_analysis,
bank_information_no,
batch_no,
summary,
main_ic_type,
main_ic_no,
main_ic_name,
main_ic_check_return_code,
deputy_ic_type,
deputy_ic_no,
deputy_name,
deputy_ic_check_return_code,
account_property,
cheque_no,
cod_ext_task,
cod_module,
acc_purpose_code,
nationality,
customer_name,
cod_income_expense,
cod_ext_branch,
cod_acct_title,
flg_ca_tt,
dat_ext_local,
acct_owner_valid_result,
flg_dr_cr,
flg_online_upload,
flg_stmt_display,
cod_txn_type,
dat_ts_txn,
lc_bg_guarantee_no,
cod_other_acct_no,
cod_mod_other_acct_no,
cod_cc_brn_sub_acct,
cod_cc_brn_othr_acct,
cod_ext_task_rev,
var_l_sessionid
FROM xface_addl_details_txnlog
WHERE cod_sub_acct_no = var_pi_cod_acct_no
AND dat_post between var_pi_start_dat AND var_pi_end_dat;
The index referred to is in_xface_addl_details_txnlog_3.
The first time I execute the query it takes a huge amount of time, but subsequent executions are faster. This is only if I pass the same account and criteria again.
I observed that the first run does physical reads, which take time, and on subsequent runs the physical reads are fewer.
Requesting suggestions. This is an account statement inquiry, and a user may have 10,000 transactions in a day.
By mistake I earlier raised this under "Oracle -> Text" as
"Slow inserts due to physical reads every time for fresh account I am passing".
They suggested using bind variables, but as far as I know we are already using bind variables to bind the account number and the start and end dates. My replies below.
Whenever you post provide your 4 digit Oracle version (SELECT * FROM V$VERSION).
Ans :
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
"CORE 11.2.0.3.0 Production"
TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
1. If your question is about the INSERT query into xface_addl_dtls_tlog_temp why didn't you post any information about the DDL for that table? Is it the same structure as the table you did post DDL for?
Ans :
-- Create table
create global temporary table XFACE_ADDL_DTLS_TLOG_TEMP (
REF_TXN_NO CHAR(40) not null,
REF_USR_NO CHAR(40) not null,
REF_KEY_NO VARCHAR2(50),
REF_TXN_NO_ORG CHAR(40),
REF_USR_NO_ORG CHAR(40),
RECON_CODE VARCHAR2(25),
COD_TASK_DERIVED VARCHAR2(5),
COD_CHNL_ID VARCHAR2(6),
COD_SERVICE_ID VARCHAR2(10),
COD_USER_ID VARCHAR2(30),
COD_AUTH_ID VARCHAR2(30),
COD_ACCT_NO CHAR(22),
TYP_ACCT_NO VARCHAR2(4),
COD_SUB_ACCT_NO CHAR(16),
COD_DEP_NO NUMBER(5),
AMOUNT NUMBER(15,2),
COD_CCY VARCHAR2(3),
DAT_POST DATE,
DAT_VALUE DATE,
TXT_TXN_NARRATIVE VARCHAR2(60),
DATE_CHEQUE_ISSUE DATE,
TXN_BUSINESS_TYPE VARCHAR2(10),
CARD_NO CHAR(20),
INVENTORY_CODE CHAR(10),
INVENTORY_NO CHAR(20),
CARD_PASSBOOK_NO CHAR(30),
COD_CASH_ANALYSIS CHAR(20),
BANK_INFORMATION_NO CHAR(8),
BATCH_NO CHAR(10),
SUMMARY VARCHAR2(60),
MAIN_IC_TYPE CHAR(1),
MAIN_IC_NO VARCHAR2(150),
MAIN_IC_NAME VARCHAR2(192),
MAIN_IC_CHECK_RETURN_CODE CHAR(1),
DEPUTY_IC_TYPE CHAR(1),
DEPUTY_IC_NO VARCHAR2(150),
DEPUTY_NAME VARCHAR2(192),
DEPUTY_IC_CHECK_RETURN_CODE CHAR(1),
ACCOUNT_PROPERTY CHAR(4),
CHEQUE_NO CHAR(20),
COD_EXT_TASK CHAR(10),
COD_MODULE CHAR(4),
ACC_PURPOSE_CODE VARCHAR2(15),
NATIONALITY CHAR(3),
CUSTOMER_NAME CHAR(192),
COD_INCOME_EXPENSE CHAR(6),
COD_EXT_BRANCH CHAR(6),
COD_ACCT_TITLE VARCHAR2(360),
FLG_CA_TT CHAR(1),
DAT_EXT_LOCAL DATE,
ACCT_OWNER_VALID_RESULT CHAR(1),
FLG_DR_CR CHAR(1),
FLG_ONLINE_UPLOAD CHAR(1),
FLG_STMT_DISPLAY CHAR(1),
COD_TXN_TYPE NUMBER(1),
DAT_TS_TXN TIMESTAMP(6),
LC_BG_GUARANTEE_NO VARCHAR2(20),
COD_OTHER_ACCT_NO CHAR(22),
COD_MOD_OTHER_ACCT_NO CHAR(4),
COD_CC_BRN_SUB_ACCT NUMBER(5),
COD_CC_BRN_OTHR_ACCT NUMBER(5),
COD_EXT_TASK_REV VARCHAR2(10),
SESSIONID NUMBER default USERENV('SESSIONID') not null
)
on commit delete rows;
-- Create/Recreate indexes
create index IN_XFACE_ADDL_DTLS_TLOG_TEMP on XFACE_ADDL_DTLS_TLOG_TEMP (COD_SUB_ACCT_NO, REF_TXN_NO, COD_SERVICE_ID, REF_KEY_NO, SESSIONID);
2. Why doesn't your INSERT query use APPEND, NOLOGGING and PARALLEL like the first query you posted? If those help for the first query why didn't you try them for the query you are now having problems with?
Ans :
I will try to use APPEND, but I cannot use PARALLEL since I have hardware limitations.
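For reference, a direct-path version of the insert would look like the sketch below (column lists elided here for brevity; they are as in the original statement). Note one caveat that may rule this out for this particular table: after a direct-path insert, the same session cannot query the table until it commits, and committing an ON COMMIT DELETE ROWS temporary table discards its rows.

```sql
-- Hedged sketch only: direct-path (APPEND) variant of the insert above.
-- Caveat: reading xface_addl_dtls_tlog_temp after this statement, in the
-- same transaction, raises ORA-12838; committing first would delete the
-- rows because the GTT is defined ON COMMIT DELETE ROWS.
INSERT /*+ APPEND */ INTO xface_addl_dtls_tlog_temp (ref_txn_no, ...)
SELECT ref_txn_no, ...
  FROM xface_addl_details_txnlog
 WHERE cod_sub_acct_no = var_pi_cod_acct_no
   AND dat_post BETWEEN var_pi_start_dat AND var_pi_end_dat;
```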
3. What does this mean: 'Index referred is in_xface_addl_details_txnlog_3.'? You haven't posted any plan that refers to any index. Do you have an execution plan? Why didn't you post it?
Ans :
Plan hash value: 4081844790
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | INSERT STATEMENT | | | | 5 (100)| | | |
| 1 | LOAD TABLE CONVENTIONAL | | | | | | | |
| 2 | FILTER | | | | | | | |
| 3 | PARTITION HASH ALL | | 1 | 494 | 5 (0)| 00:00:01 | 1 | 128 |
| 4 | TABLE ACCESS BY GLOBAL INDEX ROWID| XFACE_ADDL_DETAILS_TXNLOG | 1 | 494 | 5 (0)| 00:00:01 | ROWID | ROWID |
| 5 | INDEX RANGE SCAN | IN_XFACE_ADDL_DETAILS_TXNLOG_3 | 1 | | 3 (0)| 00:00:01 | 1 | 128 |
4. Why are you defining 37 columns as CHAR datatypes? Are you aware that CHAR data REQUIRES the use of the designated number of BYTES/CHARACTERS?
Ans :
I understand and appreciate your points, but this is a huge application built over a period of time, and I am afraid I will not be allowed to change the datatypes. There are a lot of queries over this table.
5. Are you aware that #4 means those 37 columns, even if all of them are NULL, give your records a MINIMUM length of 1012 bytes? Care to guess how many of those records Oracle can fit into an 8k block? And that is ignoring the other 26 VARCHAR2, NUMBER and DATE columns.
Two of your columns take 192 bytes MINIMUM even if they are null
CUSTOMER_NAME CHAR(192),
COD_ACCT_TITLE CHAR(192)
Why are you wasting all of that space? If you are using a multi-byte character set and your data is multi-byte those 37 columns are using even more space because some characters will use more than one byte.
If the name and title average 30 characters/bytes then those two columns alone use 300+ unused bytes. With 40 million records those unused bytes, just for those two columns take 12 GB of space.
With a block size of 8k that would totally waste 1.5 million blocks that Oracle has to read just to skip over empty space that isn't being used.
I highly suspect that your use of CHAR is a large part of this performance problem and probably other performance problems in your system. Not only for this table but for any other table that uses similar CHAR datatypes and wastes space.
Please reconsider your use of CHAR datatypes like this. I can't imagine what justification you have for using them.
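The padding cost described above is easy to demonstrate against a scratch table (the table and column names here are illustrative, not from the schema in this thread):

```sql
-- Illustrative only: compare the stored size of CHAR vs VARCHAR2.
CREATE TABLE char_pad_demo (
  c_name CHAR(192),
  v_name VARCHAR2(192)
);

INSERT INTO char_pad_demo VALUES ('JOHN SMITH', 'JOHN SMITH');

-- VSIZE reports the stored byte length: the CHAR column is blank-padded
-- to its full declared width, while VARCHAR2 stores only what was entered.
SELECT VSIZE(c_name) AS char_bytes,      -- 192 in a single-byte charset
       VSIZE(v_name) AS varchar2_bytes   -- 10
  FROM char_pad_demo;
```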
Ans :
I understand your points, but this is a huge application built over a period of time, and I am afraid I will not be allowed to change the datatypes.
I have to manage in the current situation. I am not expecting the query to respond in milliseconds, but also not in the 40 seconds it currently takes.
Edited by: Rohit Jadhav on Dec 30, 2012 6:44 PM -
Query Performance - Query very slow to run
I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable - it's quite a large hierarchy. The problem is the query takes ages to run - nearly ten minutes. It's built on a DSO, so I can't aggregate it. Is there anything I can do to improve performance?
Hi Joel,
Walkthrough Checklist for Query Performance:
1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
3. Within structures, make sure the filter order exists with the highest level filter first.
4. Check code for all exit variables used in a report.
5. Move Time restrictions to a global filter whenever possible.
6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
9. If Alternative UOM solution is used, turn off query cache.
10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries - for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
11. Turn off formatting and results rows to minimize Frontend time whenever possible.
12. Check for nested hierarchies. Always a bad idea.
13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
16. Check Sequential vs Parallel read on Multiproviders.
17. Turn off warning messages on queries.
18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
19. Check to see where currency conversions are happening if they are used.
20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
21. Avoid Cell Editor use if at all possible.
22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
Regards
Vivek Tripathi -
Query very slow on Windows 2003 Server
Hi,
Our customer is running 10g on Windows 2003 Server. Some queries perform badly. I imported the data into a 10g DB on Linux (at our office) to analyze and test. Strangely enough, the same query takes more than 10 times longer to run on the Windows machine compared to the Linux one. The Windows machine is dedicated to Oracle; it is not 'overloaded'.
SELECT
plan_task.id_task,
plan_task.task_id,
plan_task.taskdef_y_n,
plan_task.description,
plan_task.status,
plan_task.team_id,
plan_task.activity_id,
plan_task.task_start_datetime,
plan_task.district_id,
plan_task.task_end_datetime,
plan_task.taskdef_freq_code,
plan_task.taskdef_start_time,
plan_task.taskdef_end_time,
plan_task.order_nr d,
PREVENT.PLAN_PERSONS_AVAILABLE_SHORT(ID_TASK) PERS_OK,
PREVENT.PLAN_MATERIALS_AVAILABLE_SHORT(ID_TASK) MAT_OK
FROM PREVENT.PLAN_TASK
WHERE (TASKDEF_Y_N='N') AND (TASK_START_DATETIME>=to_date(to_char(SYSDATE,'dd-mm-yyyy'),'dd-mm-yyyy'))
AND (TASK_START_DATETIME<to_date(to_char(SYSDATE,'dd-mm-yyyy'),'dd-mm-yyyy')+1)
ORDER BY DESCRIPTION;
On LINUX (Intel 2 GHz, 2 Gb mem) takes 0,5 seconds to execute (46 rows returned)
================================================================================
Plan
SELECT STATEMENT ALL_ROWS Cost: 27 Bytes: 2,436 Cardinality: 29
4 SORT ORDER BY Cost: 27 Bytes: 2,436 Cardinality: 29
3 FILTER
2 TABLE ACCESS BY INDEX ROWID TABLE PREVENT.PLAN_TASK Cost: 26 Bytes: 2,436 Cardinality: 29
1 INDEX RANGE SCAN INDEX PREVENT.PLAN_TASK_START_DATETIME_I Cost: 2 Cardinality: 30
On WINDOWS (Intel 2 GHz, 2 Gb mem) takes 11 seconds to execute (46 rows returned)
=================================================================================
Plan
SELECT STATEMENT ALL_ROWS Cost: 35 Bytes: 3,276 Cardinality: 39
4 SORT ORDER BY Cost: 35 Bytes: 3,276 Cardinality: 39
3 FILTER
2 TABLE ACCESS BY INDEX ROWID TABLE PREVENT.PLAN_TASK Cost: 34 Bytes: 3,276 Cardinality: 39
1 INDEX RANGE SCAN INDEX PREVENT.PLAN_TASK_START_DATETIME_I Cost: 2 Cardinality: 40
NOTEs:
- The data is exactly the same on both machines
- I analyzed_schema on both machines/DB's before running the query
- The SGA size en DB_BUFFERS are (almost) set to the same value
- Oracle version is the same: 10g
On Windows:
- I exported the data
- Re-created my 2 tablespaces, to set: extent management local, uniform 1M
- imported the data
- dbms_stats.gather_schema_stats( option=>'GATHER')
Plan
SELECT STATEMENT ALL_ROWS Cost: 36 Bytes: 3,32 Cardinality: 40
4 SORT ORDER BY Cost: 36 Bytes: 3,32 Cardinality: 40
3 FILTER
2 TABLE ACCESS BY INDEX ROWID TABLE PREVENT.PLAN_TASK Cost: 35 Bytes: 3,32 Cardinality: 40
1 INDEX RANGE SCAN INDEX PREVENT.PLAN_TASK_START_DATETIME_I Cost: 2 Cardinality: 41
Tested: it takes 7 seconds now (was 11), but this is still slow compared to the run on Linux.
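One simplification worth trying on both machines (a suggestion of mine, not something from the thread): the `to_date(to_char(SYSDATE,'dd-mm-yyyy'),'dd-mm-yyyy')` round-trip can be written as `TRUNC(SYSDATE)`, which yields the same midnight boundary:

```sql
-- Sketch: equivalent date-window predicate using TRUNC(SYSDATE).
-- TRUNC(SYSDATE) strips the time portion, i.e. today at 00:00.
SELECT ...   -- same select list as the original query
  FROM prevent.plan_task
 WHERE taskdef_y_n = 'N'
   AND task_start_datetime >= TRUNC(SYSDATE)
   AND task_start_datetime <  TRUNC(SYSDATE) + 1
 ORDER BY description;
```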
Grt, Stephan
Edited by: Stephan van Hoof on Jan 9, 2009 8:41 PM -
Hi All
I have three setups on which I have to run the same query, shown below. The execution plan is the same on all three setups, yet on one of them the query takes almost 8 hours to complete, while on the other two it takes 2 hours. The RAM available is the same on each (16 GB). I tried to increase the SGA size but did not get the expected results, and I do not have DBA support. I have also analysed the Index_Optimizer_Cost parameter and made sure it is the same on all three setups.
The main problem is that I cannot modify the query, as it is generated during one of the processes; but as mentioned, the generated query is the same on all three setups. I also changed the log buffer size. The query is (table names are placeholders):
UPDATE /*+ BYPASS_UJVC */ ( SELECT main_table.n_exp_covered_amt_irb AS T0 , CASE WHEN COND0 = 0 THEN BP0 ELSE BP1 END AS T1 FROM global_temp_table , main_table WHERE main_table.n_gaap_skey = global_temp_table.n_gaap_skey AND main_table.n_run_skey = global_temp_table.n_run_skey AND main_table.n_acct_skey = global_temp_table.n_acct_skey AND main_table.fic_mis_date = global_temp_table.fic_mis_date ) SET T0 = T1
The indexes are the same on all three setups, and there is an index on the columns named in the WHERE clause of the query.
The Oracle version is 10.0.1.0 in the first setup, 10.0.2.0 in the second and 10.0.4.0 in the third. The query is slow where 10.0.2.0 is installed.
When I looked into the session while the query was executing, SORT OUTPUT was taking most of the time.
Thanks in advance. It is very critical for me to get this resolved; any suggestions are extremely welcome.
Hi,
please check the indexes on the columns of the table where the sort is happening. If the indexes are not there, create or rebuild them. Also check the SQL Tuning Advisor recommendation in dbconsole.
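The SQL Tuning Advisor mentioned above can also be driven from SQL*Plus rather than dbconsole. A sketch (the sql_id is a placeholder you would take from V$SQL, not a value from this thread):

```sql
-- Hedged sketch: run the SQL Tuning Advisor for a cursor already in the
-- shared pool. '0abc123xyz' is a placeholder sql_id.
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => '0abc123xyz');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/
-- Review the findings (use the task name returned above):
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('TASK_NAME_HERE') FROM dual;
```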
thanks -
Group By making query very slow
Hi All,
I have a query as follows,
SELECT cc.segment1 company,
recv.customer_number paying_account_number,
irec.trx_number,
irec.trx_date,
irec.trx_type_name,
recv.payment_method_dsp pmt_met,
recv.customer_name paying_customer,
irec.applied_payment_schedule_id,
irec.customer_trx_id,
irec.cash_receipt_id,
irec.gl_date gl_date,
irec.apply_date apply_date,
irec.receipt_number,
recv.receipt_date,
'PMT' ps_class,
SUM (NVL (irec.amount_applied, 0)) amount_applied
FROM iex_receivable_applications_v irec,
ra_cust_trx_types trxt,
gl_code_combinations cc,
ar_cash_receipts_v recv
WHERE cc.segment2 != '410400'
AND recv.receipt_status != 'REV'
AND recv.cash_receipt_id = irec.cash_receipt_id
AND cc.code_combination_id = trxt.gl_id_rev
AND trxt.NAME = irec.trx_type_name
AND trxt.org_id = irec.org_id
AND trxt.cust_trx_type_id = irec.cust_trx_type_id
AND recv.cash_receipt_id = irec.cash_receipt_id
GROUP BY cc.segment1,
recv.customer_number,
recv.customer_name,
irec.trx_number,
irec.trx_date,
irec.trx_type_name,
recv.payment_method_dsp,
irec.applied_payment_schedule_id,
irec.customer_trx_id,
irec.cash_receipt_id,
irec.gl_date,
irec.apply_date,
irec.receipt_number,
recv.receipt_date
With the GROUP BY clause this query hangs, but without it, it runs very fast.
Can anyone help me with this?
Regards,
Shruti
When your query takes too long:
HOW TO: Post a SQL statement tuning request - template posting
When your query takes too long ...
HTH
Srini
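Before posting a tuning request, it helps to capture the actual runtime plan of the slow statement, including the GROUP BY step. A sketch (assuming you can run the statement once from the same session):

```sql
-- Sketch: capture the real execution plan of the last statement run in
-- this session. The gather_plan_statistics hint enables per-step runtime
-- statistics (or set STATISTICS_LEVEL=ALL for the session instead).
SELECT /*+ gather_plan_statistics */ ...;   -- the slow GROUP BY query here

SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```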