SQL query slow in new Red Hat environment
We just migrated to a new dev environment on Red Hat Enterprise Linux 5, and now the query is very slow. When I run the query in TOAD it finishes in about 700 milliseconds, but from any server connection the same SQL query takes hours to finish.
I checked the TOAD monitor; it said I need to increase the db buffer cache and that the shared pool is too small.
The three red alerts from TOAD are:
1. Library cache get hit ratio: dynamic or unsharable SQL
2. Chained fetch ratio: PCTFREE too low for a table
3. Parse-to-execute ratio: high parse-to-execute ratio
The app team said it ran quickly on the old AIX system. However, I ran it on the old system and monitored it in TOAD, and it gave me the same red alerts there, yet it did return the query results a lot quicker.
Here are the parameters in the old system (11gR1 on AIX):
SQL> show parameter target
NAME TYPE VALUE
--------------------------------
archive_lag_target integer 0
db_flashback_retention_target integer 1440
fast_start_io_target integer 0
fast_start_mttr_target integer 0
memory_max_target big integer 0
memory_target big integer 0
pga_aggregate_target big integer 278928K
sga_target big integer 0
SQL> show parameter shared
NAME TYPE VALUE
--------------------------------
hi_shared_memory_address integer 0
max_shared_servers integer
shared_memory_address integer 0
shared_pool_reserved_size big integer 31876710
shared_pool_size big integer 608M
shared_server_sessions integer
shared_servers integer 0
SQL> show parameter db_buffer
SQL> show parameter buffer
NAME TYPE VALUE
--------------------------------
buffer_pool_keep string
buffer_pool_recycle string
db_block_buffers integer 0
log_buffer integer 2048000
use_indirect_data_buffers boolean FALSE
SQL>
Here are the parameters in the new system (11gR2 on Red Hat Linux):
NAME TYPE VALUE
-----------
archive_lag_target integer 0
db_flashback_retention_target integer 1440
fast_start_io_target integer 0
fast_start_mttr_target integer 0
memory_max_target big integer 2512M
memory_target big integer 2512M
parallel_servers_target integer 192
pga_aggregate_target big integer 0
sga_target big integer 1648M
SQL> show parameter shared
NAME TYPE VALUE
-----------
hi_shared_memory_address integer 0
max_shared_servers integer
shared_memory_address integer 0
shared_pool_reserved_size big integer 28M
shared_pool_size big integer 0
shared_server_sessions integer
shared_servers integer 1
SQL> show parameter buffer
NAME TYPE VALUE
-----------
buffer_pool_keep string
buffer_pool_recycle string
db_block_buffers integer 0
log_buffer integer 18857984
use_indirect_data_buffers boolean FALSE
SQL>
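Note that the new system uses Automatic Memory Management (memory_target is set), so individual pool sizes such as the shared pool and buffer cache are derived at runtime rather than set explicitly. A generic way to see what Oracle actually allocated (my illustration, not output from this thread):

```sql
-- Show the components AMM is currently sizing (11g).
SELECT component, current_size/1024/1024 AS size_mb
FROM   v$memory_dynamic_components
WHERE  current_size > 0
ORDER  BY size_mb DESC;
```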
Please help. Thanks in advance.
846422 wrote:
Why do we need the DDL? We have a slow SQL query.

The DDL shows the physical structure of the table and its physical storage characteristics. All relevant in performance tuning.
As for the SQL query being slow: it is not.
You have not provided any evidence that it is slow. And no, comparing performance with a totally different system is not a valid baseline for comparison. (Most cars have 4 wheels, a gearbox and a steering wheel, but that does not mean you can compare different cars, like a VW Beetle with a Porsche.)
What is slow? What are the biggest wait states for the SQL? What does the execution plan say?
You have not defined a problem; you identified a symptom called "query is slow". You need to diagnose the condition by determining exactly what the SQL query is doing in the database. (And please, do not use TOAD and similar tools in an attempt to do this; do it properly instead.)
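The kind of diagnosis this reply calls for can be sketched with generic Oracle queries (my illustration, not from the thread; :sid and :sql_id are placeholders you supply for the session running the slow query):

```sql
-- While the query runs, what is the session actually waiting on?
SELECT event, state, seconds_in_wait
FROM   v$session
WHERE  sid = :sid;

-- What execution plan did the cursor really use?
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(:sql_id, NULL, 'TYPICAL'));
```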
Similar Messages
-
SQL query slowness due to RANK and columns with null values:
I have the following table in the database, with around 10 million records:
Declaration:
create table PropertyOwners (
    [Key] int not null primary key,
    PropertyKey int not null,
    BoughtDate DateTime,
    OwnerKey int null,
    GroupKey int null
)
go
[Key] is the primary key, and the combination of PropertyKey, BoughtDate, OwnerKey and GroupKey is unique.
With the following index:
CREATE NONCLUSTERED INDEX [IX_PropertyOwners] ON [dbo].[PropertyOwners]
(
    [PropertyKey] ASC,
    [BoughtDate] DESC,
    [OwnerKey] DESC,
    [GroupKey] DESC
)
go
Description of the case:
For a single BoughtDate, one property can belong to multiple owners or to a single group. A single record can have either OwnerKey or GroupKey but not both, so one of them is null in each record. I am trying to retrieve the data from the table using the following query for the OwnerKey. If there are rows for the same property for both owners and a group at the same time, the rows having OwnerKey are preferred; that is why I am using "OwnerKey desc" in the RANK function.
declare @ownerKey int = 40000
select PropertyKey, BoughtDate, OwnerKey, GroupKey
from (
select PropertyKey, BoughtDate, OwnerKey, GroupKey,
RANK() over (partition by PropertyKey order by BoughtDate desc, OwnerKey desc, GroupKey desc) as [Rank]
from PropertyOwners
) as result
where result.[Rank]=1 and result.[OwnerKey]=@ownerKey
It takes 2-3 seconds to get the records, which is too slow, and it takes a similar time when I get the records using the GroupKey. But when I tried to get the records for the PropertyKey with the same query, it executed in 10 milliseconds.
Maybe the slowness is because OwnerKey/GroupKey in the table can be null and SQL Server is unable to index them. I have also tried an indexed view to pre-rank the rows, but I can't use it in my query because the RANK function is not supported in indexed views.
Please note this table is updated once a day, and we are using SQL Server 2008 R2. Any help will be greatly appreciated.

create table #result (PropertyKey int not null, BoughtDate datetime, OwnerKey int null, GroupKey int null, [Rank] int not null)
Create index idx ON #result (OwnerKey, [Rank])
insert into #result(PropertyKey, BoughtDate, OwnerKey, GroupKey, [Rank])
select PropertyKey, BoughtDate, OwnerKey, GroupKey,
RANK() over (partition by PropertyKey order by BoughtDate desc, OwnerKey desc, GroupKey desc) as [Rank]
from PropertyOwners
go
declare @ownerKey int = 1
select PropertyKey, BoughtDate, OwnerKey, GroupKey
from #result as result
where result.[Rank]=1
and result.[OwnerKey]=@ownerKey
go
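A related option the reply does not mention, available since SQL Server 2008, is a filtered index that keeps the NULL OwnerKey rows out of the index entirely; the index name and column order here are illustrative, not from the thread:

```sql
-- Filtered index: only rows with an OwnerKey, so NULL rows don't bloat it.
CREATE NONCLUSTERED INDEX IX_PropertyOwners_Owner
ON dbo.PropertyOwners (OwnerKey, PropertyKey, BoughtDate DESC)
WHERE OwnerKey IS NOT NULL;
go
```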
Best Regards,
Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
-
SQL query slow due to CASE statement on joins
Hi
The SQL query runs very slowly (about 30 minutes) when the CASE statement below is added to the joins. Could you please let me know how to tune it? If the CASE statement is not there, it runs in only about 1 minute.
( CASE
WHEN PO_DIST_GL_CODE_COMB.SEGMENT2 <> '1000'
THEN PO_DIST_GL_CODE_COMB.SEGMENT1 || PO_DIST_GL_CODE_COMB.SEGMENT2 || '_' || NVL(PO_DIST_GL_CODE_COMB.SEGMENT6,'000')
WHEN DT_REQ_ALL.EMPMGMTCD IS NOT NULL AND
PO_DIST_GL_CODE_COMB.SEGMENT2 = '1000'
THEN DT_REQ_ALL.EMPMGMTCD
END =DB2.DB2_FDW_MGMT_V.MH_CHILD )
SELECT DISTINCT
D.DB2_FDW_MGMT_V.RC_PARENT,
DT_REQ_ALL.FULL_NAME,
DT_REQ_ALL.EMP_COMPANY_CODE,
DT_REQ_ALL.EMP_COST_CENTER,
PO.PO_VENDORS.VENDOR_NAME,
PO_PO_HEADERS_ALL2.SEGMENT1,
PO_PO_HEADERS_ALL2.CREATION_DATE,
PO_DIST_GL_CODE_COMB.SEGMENT1,
PO_DIST_GL_CODE_COMB.SEGMENT2,
PO_PO_HEADERS_ALL2.CURRENCY_CODE,
PO_INV_DIST_ALL.INVOICE_NUM,
PO_INV_DIST_ALL.INVOICE_DATE,
(PO_INV_DIST_ALL.INVOICE_AMOUNT* PO_Rates_GL_DR.CONVERSION_RATE),
(NVL(to_number(PO_DIST_ALL.AMOUNT_BILLED),0) * PO_Rates_GL_DR.CONVERSION_RATE),
PO_LINES_LOC.LINE_NUM,
GL.GL_SETS_OF_BOOKS.NAME,
CASE
WHEN TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE) > PO_INV_DIST_ALL.INVOICE_DATE
THEN 1
ELSE 0
END ,
PO.PO_REQUISITION_LINES_ALL.LINE_LOCATION_ID,
TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE,'WW') + 8 WEEK_Ending
FROM
DB2.DB2_FDW_MGMT_V,
PO.PO_VENDORS,
PO.PO_HEADERS_ALL PO_PO_HEADERS_ALL2,
GL.GL_CODE_COMBINATIONS PO_DIST_GL_CODE_COMB,
AP.AP_INVOICES_ALL PO_INV_DIST_ALL,
PO.PO_DISTRIBUTIONS_ALL PO_DIST_ALL,
PO.PO_LINES_ALL PO_LINES_LOC,
GL.GL_SETS_OF_BOOKS,
PO.PO_REQUISITION_LINES_ALL,
PO.PO_LINE_LOCATIONS_ALL,
AP.AP_INVOICE_DISTRIBUTIONS_ALL PO_DIST_INV_DIST_ALL,
APPS.HR_OPERATING_UNITS,
PO.PO_REQ_DISTRIBUTIONS_ALL,
( SELECT DISTINCT
PO_RDA.DISTRIBUTION_ID,
PO_RLA.requisition_line_id,
PO_RHA.DESCRIPTION PO_Descr,
PO_RHA.NOTE_TO_AUTHORIZER PO_Justification,
Req_Emp.FULL_NAME,
GL_CC.SEGMENT1 Req_Company_Code,
GL_CC.SEGMENT2 Req_Cost_Center,
Req_Emp_CC.SEGMENT1 Emp_Company_Code,
Req_Emp_CC.SEGMENT2 Emp_Cost_Center,
(Case
When GL_CC.SEGMENT2 <> 8000
Then TRUNC(GL_CC.SEGMENT1) || TRUNC(GL_CC.SEGMENT2) || '_' || NVL(GL_CC.SEGMENT6,'000')
Else TRUNC(Req_Emp_CC.SEGMENT1) || TRUNC(Req_Emp_CC.SEGMENT2) || '_' || NVL(Req_Emp_CC.SEGMENT6,'000')
End) EmpMgmtCD
FROM
PO.po_requisition_lines_all PO_rla,
PO.po_requisition_headers_all PO_rha,
PO.PO_REQ_DISTRIBUTIONS_ALL po_RDA,
GL.GL_CODE_COMBINATIONS gl_cc,
HR.PER_ALL_PEOPLE_F Req_Emp,
HR.PER_ALL_ASSIGNMENTS_F Req_Emp_Assign,
HR.hr_all_organization_units Req_Emp_Org,
HR.pay_cost_allocation_keyflex Req_Emp_CC
WHERE
PO_RDA.CODE_COMBINATION_ID = GL_CC.CODE_COMBINATION_ID and
PO_RLA.REQUISITION_LINE_ID = PO_RDA.REQUISITION_LINE_ID AND
PO_RLA.to_person_id = Req_Emp.PERSON_ID AND
PO_RLA.REQUISITION_HEADER_ID = PO_RHA.REQUISITION_HEADER_ID AND
(trunc(PO_rla.CREATION_DATE) between Req_Emp.effective_start_date and Req_Emp.effective_end_date OR
Req_Emp.effective_start_date IS NULL) AND
Req_Emp.PERSON_ID = Req_Emp_Assign.PERSON_ID AND
Req_Emp_Assign.organization_id = Req_Emp_Org.organization_id AND
(trunc(PO_rla.CREATION_DATE) between Req_Emp_Assign.effective_start_date and Req_Emp_Assign.effective_end_date OR
Req_Emp_Assign.effective_start_date IS NULL) AND
Req_Emp_Assign.primary_flag = 'Y' AND
Req_Emp_Assign.assignment_type = 'E' AND
Req_Emp_Org.cost_allocation_keyflex_id = Req_Emp_CC.cost_allocation_keyflex_id
) DT_REQ_ALL,
( SELECT
FROM_CURRENCY,
TO_CURRENCY,
CONVERSION_DATE,
CONVERSION_RATE
FROM GL.GL_DAILY_RATES
UNION
SELECT Distinct
'USD',
'USD',
CONVERSION_DATE,
1
FROM GL.GL_DAILY_RATES
) PO_Rates_GL_DR
WHERE
( PO_DIST_GL_CODE_COMB.CODE_COMBINATION_ID=PO_DIST_ALL.CODE_COMBINATION_ID )
AND ( PO_DIST_ALL.LINE_LOCATION_ID=PO.PO_LINE_LOCATIONS_ALL.LINE_LOCATION_ID )
AND ( PO_PO_HEADERS_ALL2.VENDOR_ID=PO.PO_VENDORS.VENDOR_ID )
AND ( PO_PO_HEADERS_ALL2.ORG_ID=APPS.HR_OPERATING_UNITS.ORGANIZATION_ID )
AND ( GL.GL_SETS_OF_BOOKS.SET_OF_BOOKS_ID=APPS.HR_OPERATING_UNITS.SET_OF_BOOKS_ID )
AND ( PO_PO_HEADERS_ALL2.CURRENCY_CODE=PO_Rates_GL_DR.FROM_CURRENCY )
AND ( trunc(PO_PO_HEADERS_ALL2.CREATION_DATE)=PO_Rates_GL_DR.CONVERSION_DATE )
AND ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=PO.PO_REQ_DISTRIBUTIONS_ALL.DISTRIBUTION_ID(+) )
AND ( PO.PO_REQ_DISTRIBUTIONS_ALL.REQUISITION_LINE_ID=PO.PO_REQUISITION_LINES_ALL.REQUISITION_LINE_ID(+) )
AND ( PO_LINES_LOC.PO_HEADER_ID=PO_PO_HEADERS_ALL2.PO_HEADER_ID )
AND ( PO.PO_LINE_LOCATIONS_ALL.PO_LINE_ID=PO_LINES_LOC.PO_LINE_ID )
AND ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=DT_REQ_ALL.DISTRIBUTION_ID(+) )
AND ( PO_DIST_ALL.PO_DISTRIBUTION_ID=PO_DIST_INV_DIST_ALL.PO_DISTRIBUTION_ID(+) )
AND ( PO_INV_DIST_ALL.INVOICE_ID(+)=PO_DIST_INV_DIST_ALL.INVOICE_ID )
AND ( PO_INV_DIST_ALL.SOURCE(+) <> 'XML GATEWAY' )
AND
( NVL(PO_PO_HEADERS_ALL2.CANCEL_FLAG,'N') <> 'Y' )
AND
( NVL(PO_PO_HEADERS_ALL2.CLOSED_CODE, 'OPEN') <> 'FINALLY CLOSED' )
AND
( NVL(PO_PO_HEADERS_ALL2.AUTHORIZATION_STATUS,'IN PROCESS') <> 'REJECTED' )
AND
( TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE) BETWEEN TO_DATE('01-jan-2011') AND TO_DATE('04-jan-2011') )
AND
PO_Rates_GL_DR.TO_CURRENCY = 'USD'
AND
DB2.DB2_FDW_MGMT_V.RC_PARENT In ( 'Unavailable','Corp','Commercial' )
AND
( CASE
WHEN PO_DIST_GL_CODE_COMB.SEGMENT2 <> '1000'
THEN PO_DIST_GL_CODE_COMB.SEGMENT1 || PO_DIST_GL_CODE_COMB.SEGMENT2 || '_' || NVL(PO_DIST_GL_CODE_COMB.SEGMENT6,'000')
WHEN DT_REQ_ALL.EMPMGMTCD IS NOT NULL AND
PO_DIST_GL_CODE_COMB.SEGMENT2 = '1000'
THEN DT_REQ_ALL.EMPMGMTCD
END = DB2.DB2_FDW_MGMT_V.MH_CHILD )

Explain plan? Sorry, I can't get the explain plan from SQL; this is from TOAD.
Plan
SELECT STATEMENT ALL_ROWSCost: 53,932 Bytes: 2,607 Cardinality: 1
79 HASH UNIQUE Cost: 53,932 Bytes: 2,607 Cardinality: 1
78 NESTED LOOPS OUTER Cost: 53,931 Bytes: 2,607 Cardinality: 1
75 NESTED LOOPS OUTER Cost: 53,928 Bytes: 2,560 Cardinality: 1
72 NESTED LOOPS Cost: 53,902 Bytes: 2,552 Cardinality: 1
69 NESTED LOOPS OUTER Cost: 53,900 Bytes: 2,533 Cardinality: 1
66 NESTED LOOPS OUTER Cost: 53,898 Bytes: 2,521 Cardinality: 1
63 HASH JOIN OUTER Cost: 53,896 Bytes: 2,509 Cardinality: 1
40 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_DISTRIBUTIONS_ALL Cost: 3 Bytes: 26 Cardinality: 1
39 NESTED LOOPS Cost: 17,076 Bytes: 2,400 Cardinality: 1
37 NESTED LOOPS Cost: 17,073 Bytes: 2,374 Cardinality: 1
34 NESTED LOOPS Cost: 17,070 Bytes: 2,362 Cardinality: 1
31 NESTED LOOPS Cost: 17,066 Bytes: 2,347 Cardinality: 1
29 NESTED LOOPS Cost: 17,066 Bytes: 2,339 Cardinality: 1
26 NESTED LOOPS Cost: 17,065 Bytes: 2,312 Cardinality: 1
23 NESTED LOOPS Cost: 17,064 Bytes: 2,287 Cardinality: 1
20 NESTED LOOPS Cost: 17,062 Bytes: 2,261 Cardinality: 1
17 NESTED LOOPS Cost: 17,056 Bytes: 6,678 Cardinality: 3
15 HASH JOIN Cost: 17,056 Bytes: 6,663 Cardinality: 3
13 MERGE JOIN CARTESIAN Cost: 135 Bytes: 30,352 Cardinality: 14
5 VIEW VIEW DB2.DB2_FDW_MGMT_V Cost: 4 Bytes: 2,128 Cardinality: 1
4 SORT UNIQUE Cost: 4 Cardinality: 1
3 UNION-ALL
1 REMOTE REMOTE SERIAL_FROM_REMOTE PRDFDW.WORLD
2 FAST DUAL Cost: 3 Cardinality: 1
12 BUFFER SORT Cost: 135 Bytes: 560 Cardinality: 14
11 VIEW DB2. Cost: 131 Bytes: 560 Cardinality: 14
10 SORT UNIQUE Cost: 131 Bytes: 310 Cardinality: 14
9 UNION-ALL
7 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_DAILY_RATES Cost: 65 Bytes: 270 Cardinality: 9
6 INDEX SKIP SCAN INDEX (UNIQUE) GL.GL_DAILY_RATES_U1 Cost: 64 Cardinality: 1
8 INDEX SKIP SCAN INDEX (UNIQUE) GL.GL_DAILY_RATES_U1 Cost: 64 Bytes: 4,368 Cardinality: 546
14 TABLE ACCESS FULL TABLE PO.PO_HEADERS_ALL Cost: 16,920 Bytes: 32,754 Cardinality: 618
16 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.HR_ORGANIZATION_UNITS_PK Cost: 0 Bytes: 5 Cardinality: 1
19 TABLE ACCESS BY INDEX ROWID TABLE HR.HR_ORGANIZATION_INFORMATION Cost: 2 Bytes: 35 Cardinality: 1
18 INDEX RANGE SCAN INDEX HR.HR_ORGANIZATION_INFORMATIO_FK2 Cost: 1 Cardinality: 2
22 TABLE ACCESS BY INDEX ROWID TABLE HR.HR_ORGANIZATION_INFORMATION Cost: 2 Bytes: 26 Cardinality: 1
21 INDEX RANGE SCAN INDEX HR.HR_ORGANIZATION_INFORMATIO_FK2 Cost: 1 Cardinality: 1
25 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_SETS_OF_BOOKS Cost: 1 Bytes: 25 Cardinality: 1
24 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_SETS_OF_BOOKS_U2 Cost: 0 Cardinality: 1
28 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_VENDORS Cost: 1 Bytes: 27 Cardinality: 1
27 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_VENDORS_U1 Cost: 0 Cardinality: 1
30 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.HR_ALL_ORGANIZATION_UNTS_TL_PK Cost: 0 Bytes: 8 Cardinality: 1
33 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_LINES_ALL Cost: 4 Bytes: 60 Cardinality: 4
32 INDEX RANGE SCAN INDEX (UNIQUE) PO.PO_LINES_U2 Cost: 2 Cardinality: 4
36 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_LINE_LOCATIONS_ALL Cost: 3 Bytes: 12 Cardinality: 1
35 INDEX RANGE SCAN INDEX PO.PO_LINE_LOCATIONS_N1 Cost: 2 Cardinality: 1
38 INDEX RANGE SCAN INDEX PO.PO_DISTRIBUTIONS_N1 Cost: 2 Cardinality: 1
62 VIEW DB2. Cost: 36,819 Bytes: 1,090 Cardinality: 10
61 HASH UNIQUE Cost: 36,819 Bytes: 2,580 Cardinality: 10
60 NESTED LOOPS Cost: 36,818 Bytes: 2,580 Cardinality: 10
57 NESTED LOOPS Cost: 36,798 Bytes: 2,390 Cardinality: 10
54 NESTED LOOPS Cost: 36,768 Bytes: 2,220 Cardinality: 10
51 NESTED LOOPS Cost: 36,758 Bytes: 1,510 Cardinality: 10
48 NESTED LOOPS Cost: 36,747 Bytes: 1,050 Cardinality: 10
45 HASH JOIN Cost: 36,737 Bytes: 960 Cardinality: 10
43 HASH JOIN Cost: 34,602 Bytes: 230,340 Cardinality: 3,490
41 TABLE ACCESS FULL TABLE HR.PER_ALL_PEOPLE_F Cost: 1,284 Bytes: 1,848,420 Cardinality: 44,010
42 TABLE ACCESS FULL TABLE PO.PO_REQUISITION_LINES_ALL Cost: 31,802 Bytes: 18,340,080 Cardinality: 764,170
44 TABLE ACCESS FULL TABLE HR.PER_ALL_ASSIGNMENTS_F Cost: 2,134 Bytes: 822,540 Cardinality: 27,418
47 TABLE ACCESS BY INDEX ROWID TABLE HR.HR_ALL_ORGANIZATION_UNITS Cost: 1 Bytes: 9 Cardinality: 1
46 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.HR_ORGANIZATION_UNITS_PK Cost: 0 Cardinality: 1
50 TABLE ACCESS BY INDEX ROWID TABLE HR.PAY_COST_ALLOCATION_KEYFLEX Cost: 1 Bytes: 46 Cardinality: 1
49 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.PAY_COST_ALLOCATION_KEYFLE_PK Cost: 0 Cardinality: 1
53 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQUISITION_HEADERS_ALL Cost: 1 Bytes: 71 Cardinality: 1
52 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_REQUISITION_HEADERS_U1 Cost: 0 Cardinality: 1
56 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQ_DISTRIBUTIONS_ALL Cost: 3 Bytes: 17 Cardinality: 1
55 INDEX RANGE SCAN INDEX PO.PO_REQ_DISTRIBUTIONS_N1 Cost: 2 Cardinality: 1
59 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 2 Bytes: 19 Cardinality: 1
58 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1
65 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQ_DISTRIBUTIONS_ALL Cost: 2 Bytes: 12 Cardinality: 1
64 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_REQ_DISTRIBUTIONS_U1 Cost: 1 Cardinality: 1
68 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQUISITION_LINES_ALL Cost: 2 Bytes: 12 Cardinality: 1
67 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_REQUISITION_LINES_U1 Cost: 1 Cardinality: 1
71 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 2 Bytes: 19 Cardinality: 1
70 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1
74 TABLE ACCESS BY INDEX ROWID TABLE AP.AP_INVOICE_DISTRIBUTIONS_ALL Cost: 26 Bytes: 16 Cardinality: 2
73 INDEX RANGE SCAN INDEX AP.AP_INVOICE_DISTRIBUTIONS_N7 Cost: 2 Cardinality: 37
77 TABLE ACCESS BY INDEX ROWID TABLE AP.AP_INVOICES_ALL Cost: 3 Bytes: 47 Cardinality: 1
76 INDEX RANGE SCAN INDEX (UNIQUE) AP.AP_INVOICES_U1 Cost: 2 Cardinality: 1

Thanks

Form a new table "new_table" from the 3 tables that participate in the CASE statement logic:
with DT_REQ_ALL as
( SELECT DISTINCT
PO_RDA.DISTRIBUTION_ID,
PO_RLA.requisition_line_id,
PO_RHA.DESCRIPTION PO_Descr,
PO_RHA.NOTE_TO_AUTHORIZER PO_Justification,
Req_Emp.FULL_NAME,
GL_CC.SEGMENT1 Req_Company_Code,
GL_CC.SEGMENT2 Req_Cost_Center,
Req_Emp_CC.SEGMENT1 Emp_Company_Code,
Req_Emp_CC.SEGMENT2 Emp_Cost_Center,
(Case
When GL_CC.SEGMENT2 <> 8000
Then TRUNC(GL_CC.SEGMENT1) || TRUNC(GL_CC.SEGMENT2) || '_' || NVL(GL_CC.SEGMENT6,'000')
Else TRUNC(Req_Emp_CC.SEGMENT1) || TRUNC(Req_Emp_CC.SEGMENT2) || '_' || NVL(Req_Emp_CC.SEGMENT6,'000')
End) EmpMgmtCD
FROM
PO.po_requisition_lines_all PO_rla,
PO.po_requisition_headers_all PO_rha,
PO.PO_REQ_DISTRIBUTIONS_ALL po_RDA,
GL.GL_CODE_COMBINATIONS gl_cc,
HR.PER_ALL_PEOPLE_F Req_Emp,
HR.PER_ALL_ASSIGNMENTS_F Req_Emp_Assign,
HR.hr_all_organization_units Req_Emp_Org,
HR.pay_cost_allocation_keyflex Req_Emp_CC
WHERE
PO_RDA.CODE_COMBINATION_ID = GL_CC.CODE_COMBINATION_ID and
PO_RLA.REQUISITION_LINE_ID = PO_RDA.REQUISITION_LINE_ID AND
PO_RLA.to_person_id = Req_Emp.PERSON_ID AND
PO_RLA.REQUISITION_HEADER_ID = PO_RHA.REQUISITION_HEADER_ID AND
(trunc(PO_rla.CREATION_DATE) between Req_Emp.effective_start_date and Req_Emp.effective_end_date OR
Req_Emp.effective_start_date IS NULL) AND
Req_Emp.PERSON_ID = Req_Emp_Assign.PERSON_ID AND
Req_Emp_Assign.organization_id = Req_Emp_Org.organization_id AND
(trunc(PO_rla.CREATION_DATE) between Req_Emp_Assign.effective_start_date and Req_Emp_Assign.effective_end_date OR
Req_Emp_Assign.effective_start_date IS NULL) AND
Req_Emp_Assign.primary_flag = 'Y' AND
Req_Emp_Assign.assignment_type = 'E' AND
Req_Emp_Org.cost_allocation_keyflex_id = Req_Emp_CC.cost_allocation_keyflex_id
)
SELECT DISTINCT
D.DB2_FDW_MGMT_V.RC_PARENT,
DT_REQ_ALL.FULL_NAME,
DT_REQ_ALL.EMP_COMPANY_CODE,
DT_REQ_ALL.EMP_COST_CENTER,
PO.PO_VENDORS.VENDOR_NAME,
PO_PO_HEADERS_ALL2.SEGMENT1,
PO_PO_HEADERS_ALL2.CREATION_DATE,
PO_DIST_GL_CODE_COMB.SEGMENT1,
PO_DIST_GL_CODE_COMB.SEGMENT2,
PO_PO_HEADERS_ALL2.CURRENCY_CODE,
PO_INV_DIST_ALL.INVOICE_NUM,
PO_INV_DIST_ALL.INVOICE_DATE,
(PO_INV_DIST_ALL.INVOICE_AMOUNT* PO_Rates_GL_DR.CONVERSION_RATE),
(NVL(to_number(PO_DIST_ALL.AMOUNT_BILLED),0) * PO_Rates_GL_DR.CONVERSION_RATE),
PO_LINES_LOC.LINE_NUM,
GL.GL_SETS_OF_BOOKS.NAME,
CASE
WHEN TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE) > PO_INV_DIST_ALL.INVOICE_DATE
THEN 1
ELSE 0
END ,
PO.PO_REQUISITION_LINES_ALL.LINE_LOCATION_ID,
TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE,'WW') + 8 WEEK_Ending
FROM
( SELECT * FROM
DB2.DB2_FDW_MGMT_V,
GL.GL_CODE_COMBINATIONS PO_DIST_GL_CODE_COMB,
DT_REQ_ALL
WHERE
DB2.DB2_FDW_MGMT_V.RC_PARENT In ( 'Unavailable','Corp','Commercial' )
AND
CASE
WHEN PO_DIST_GL_CODE_COMB.SEGMENT2 <> '1000'
THEN PO_DIST_GL_CODE_COMB.SEGMENT1 || PO_DIST_GL_CODE_COMB.SEGMENT2 || '_' || NVL(PO_DIST_GL_CODE_COMB.SEGMENT6,'000')
WHEN DT_REQ_ALL.EMPMGMTCD IS NOT NULL AND
PO_DIST_GL_CODE_COMB.SEGMENT2 = '1000'
THEN DT_REQ_ALL.EMPMGMTCD
END =DB2.DB2_FDW_MGMT_V.MH_CHILD
) new_table,
PO.PO_VENDORS,
PO.PO_HEADERS_ALL PO_PO_HEADERS_ALL2,
AP.AP_INVOICES_ALL PO_INV_DIST_ALL,
PO.PO_DISTRIBUTIONS_ALL PO_DIST_ALL,
PO.PO_LINES_ALL PO_LINES_LOC,
GL.GL_SETS_OF_BOOKS,
PO.PO_REQUISITION_LINES_ALL,
PO.PO_LINE_LOCATIONS_ALL,
AP.AP_INVOICE_DISTRIBUTIONS_ALL PO_DIST_INV_DIST_ALL,
APPS.HR_OPERATING_UNITS,
PO.PO_REQ_DISTRIBUTIONS_ALL,
( SELECT
FROM_CURRENCY,
TO_CURRENCY,
CONVERSION_DATE,
CONVERSION_RATE
FROM GL.GL_DAILY_RATES
UNION
SELECT Distinct
'USD',
'USD',
CONVERSION_DATE,
1
FROM GL.GL_DAILY_RATES
) PO_Rates_GL_DR
WHERE
( PO_DIST_GL_CODE_COMB.CODE_COMBINATION_ID=PO_DIST_ALL.CODE_COMBINATION_ID )
AND ( PO_DIST_ALL.LINE_LOCATION_ID=PO.PO_LINE_LOCATIONS_ALL.LINE_LOCATION_ID )
AND ( PO_PO_HEADERS_ALL2.VENDOR_ID=PO.PO_VENDORS.VENDOR_ID )
AND ( PO_PO_HEADERS_ALL2.ORG_ID=APPS.HR_OPERATING_UNITS.ORGANIZATION_ID )
AND ( GL.GL_SETS_OF_BOOKS.SET_OF_BOOKS_ID=APPS.HR_OPERATING_UNITS.SET_OF_BOOKS_ID )
AND ( PO_PO_HEADERS_ALL2.CURRENCY_CODE=PO_Rates_GL_DR.FROM_CURRENCY )
AND ( trunc(PO_PO_HEADERS_ALL2.CREATION_DATE)=PO_Rates_GL_DR.CONVERSION_DATE )
AND ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=PO.PO_REQ_DISTRIBUTIONS_ALL.DISTRIBUTION_ID(+) )
AND ( PO.PO_REQ_DISTRIBUTIONS_ALL.REQUISITION_LINE_ID=PO.PO_REQUISITION_LINES_ALL.REQUISITION_LINE_ID(+) )
AND ( PO_LINES_LOC.PO_HEADER_ID=PO_PO_HEADERS_ALL2.PO_HEADER_ID )
AND ( PO.PO_LINE_LOCATIONS_ALL.PO_LINE_ID=PO_LINES_LOC.PO_LINE_ID )
AND ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=DT_REQ_ALL.DISTRIBUTION_ID(+) )
AND ( PO_DIST_ALL.PO_DISTRIBUTION_ID=PO_DIST_INV_DIST_ALL.PO_DISTRIBUTION_ID(+) )
AND ( PO_INV_DIST_ALL.INVOICE_ID(+)=PO_DIST_INV_DIST_ALL.INVOICE_ID )
AND ( PO_INV_DIST_ALL.SOURCE(+) <> 'XML GATEWAY' )
AND
( NVL(PO_PO_HEADERS_ALL2.CANCEL_FLAG,'N') <> 'Y' )
AND
( NVL(PO_PO_HEADERS_ALL2.CLOSED_CODE, 'OPEN') <> 'FINALLY CLOSED' )
AND
( NVL(PO_PO_HEADERS_ALL2.AUTHORIZATION_STATUS,'IN PROCESS') <> 'REJECTED' )
AND
( TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE) BETWEEN TO_DATE('01-jan-2011') AND TO_DATE('04-jan-2011') )
AND
PO_Rates_GL_DR.TO_CURRENCY = 'USD'
-
SQL query slow with call to function
I have a SQL query that returns in less than a second or two with a function selected inline in the "from" clause of the statement. As soon as I select that returned value in the outer SQL statement, the statement takes anywhere from 2 to 5 minutes to return. Here is a simplified sample:
This statement returns in a second or two:
select A.pk_id
from stu_schedule A, stu_school B, stu_year C, school_year D,
(select calc_ytd_class_abs2(Z.PK_ID,'U') ytd_unx
from stu_schedule Z) II
where B.pk_id = A.fk_stu_school
and C.pk_id = B.fk_stu_year
and D.pk_id = C.year
and D.school_year = '2011';
If I add this function call in, the statement performs extremely poorly:
select A.pk_id,
II.ytd_unx
from stu_schedule A, stu_school B, stu_year C, school_year D,
(select calc_ytd_class_abs2(Z.PK_ID,'U') ytd_unx
from stu_schedule Z) II
where B.pk_id = A.fk_stu_school
and C.pk_id = B.fk_stu_year
and D.pk_id = C.year
and D.school_year = '2011';
Here is the function that is called:
create or replace FUNCTION calc_ytd_class_abs2 (p_fk_stu_schedule in varchar2,
p_legality in varchar2) return number IS
l_days_absent number := 0;
CURSOR get_class_abs IS
select (select nvl(max(D.days_absent),'0')
from cut_code D
where D.pk_id = C.fk_cut_code
and (D.legality = p_legality
or p_legality = '%')) days_absent
from stu_schedule_detail B, stu_class_attendance C
where B.fk_stu_schedule = p_fk_stu_schedule
and C.fk_stu_schedule_detail = B.pk_id;
BEGIN
FOR x in get_class_abs LOOP
l_days_absent := l_days_absent + x.days_absent;
END LOOP;
return (l_days_absent);
END calc_ytd_class_abs2;

The query returns anywhere from 6,000 to 32,000 rows. For each of those rows a parameter is passed into 4 different functions to get YTD totals. When I call the functions in the inline view but do not select from them in the main SQL, the report (this is Application Express 4.0 interactive reports, just an FYI) runs fast; it comes back in a few seconds. But when I select from the inline view to display those YTD totals, the report runs extremely slowly. I know there are articles about context switching and how mixing SQL with PL/SQL performs poorly. So I tried a pipelined table function, where the function for the YTD totals populates the columns of the pipelined table and I select from it in the SQL query in the interactive report. That seemed to perform a little worse, from what I can tell.
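One technique worth trying here (my suggestion, not from the thread) is wrapping the function call in a scalar subquery: Oracle caches scalar-subquery results, so the function is called once per distinct input value rather than once per row. The sketch assumes the same tables and function as above:

```sql
-- Scalar-subquery caching: Oracle can reuse the function result
-- when the same pk_id value recurs across rows.
select A.pk_id,
       (select calc_ytd_class_abs2(A.pk_id, 'U') from dual) as ytd_unx
from   stu_schedule A, stu_school B, stu_year C, school_year D
where  B.pk_id = A.fk_stu_school
and    C.pk_id = B.fk_stu_year
and    D.pk_id = C.year
and    D.school_year = '2011';
```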
Thanks for any help you can offer. -
SQL QUERY to create new schema in Oracle 10g Express
Can anyone provide the SQL query to create a new schema in Oracle 10g Express Edition?
Can anyone provide a SQL query to create a schema/user named 'test', with username 'system' and password 'manager'?

The SYSTEM user is created during database creation; it's an internal Oracle admin user that shouldn't be used as a schema holder.
In an Oracle database, the Oracle user is the schema holder; there is no separate schema name to be defined other than the username. -
SQL query slow when using Pro*C, OCI
The SQL query takes a long time fetching records from RAC using Pro*C/OCI. The same query works fine over a JDBC connection. What could be the issue? Please help.
Thanks,
Sam

Pro*C is not part of Oracle Solaris Studio (formerly Sun Studio). Studio has no special support for database programming. You are more likely to get a helpful answer in a database programming forum. Start here:
https://forums.oracle.com/forums/category.jspa?categoryID=18 -
SQL query - help reqd (new)
Hi,
I have the following dataset:
"herb with garden"
"garden without herb"
"herb with zbc"
"garden with pqr"
Now I want a SQL query like:
select * from table where text1 = "herb and garden"
This should return:
herb with garden
garden without herb
And
select * from table where text1 = "herb or garden"
This should return:
"herb with garden"
"garden without herb"
"herb with zbc"
"garden with pqr"

SQL> with t
2 as
3 (
4 select 'herb with garden' str from dual union all
5 select 'garden without herb' str from dual union all
6 select 'herb with zbc' str from dual union all
7 select 'garden with pqr' str from dual
8 )
9 select *
10 from t
11 where lower(str) like '%herb%'
12 and lower(str) like '%garden%'
13 /
STR
herb with garden
garden without herb
SQL> with t
2 as
3 (
4 select 'herb with garden' str from dual union all
5 select 'garden without herb' str from dual union all
6 select 'herb with zbc' str from dual union all
7 select 'garden with pqr' str from dual
8 )
9 select *
10 from t
11 where lower(str) like '%herb%'
12 or lower(str) like '%garden%'
13 /
STR
herb with garden
garden without herb
herb with zbc
garden with pqr -
MS SQL query slow using view column as criteria
HI,
I am experiencing a very frustrating problem. I have 2 tables, and I created a view to union these 2 tables. A select on this view using a column of the view as criteria took more than 1 minute, but the query runs fine in Query Analyzer.
Has anybody had the same experience? Is this a problem with jdbc?

I searched http://e-docs.bea.com/wls/docs70/index.html, and also searched the documentation for wls6.1 and wls5.1. As you pointed out, I searched the support site; the results are all customer cases, not formal documentation.
Joe Weinstein <[email protected]> wrote:
>
>
jen wrote:
Thanks, but a search on the table directly is fine (the same column). Is there any db setting that could be tuned? So the view is the problem?

No, it's a client decision/issue. If you defined your tables to have nvarchar columns, the jdbc driver's parameter values would be fine as-is.
I searched "useVarChars" on the whole site and can't find anything.

Which site? This is a property of the weblogic.jdbc.mssqlserver4.Driver. I just went to www.bea.com/support and entered useVarChars in the Ask BEA question panel and got hits...
Joe
Joe Weinstein <[email protected]> wrote:
Jen wrote:
Sorry, it's my bad. I am testing on wls81, but the problem is on wls70, so they are using different drivers.
You are the magic man. It worked on wls81 now. I am sure it will cure the problem on wls70. Is there any documentation on this? Why is it not a problem on some database servers? Thanks

Sure. The issue has to do with the MS client-DBMS protocol having evolved from an 8-bit (7-bit really) character representation. Now it has a newer 16-bit way to transfer NVARCHAR data. Java characters are all 16-bit, so by default a JDBC driver will send Java parameters over as NVARCHAR data. This is crucial for Japanese data etc. However, once simple ASCII data is transformed to an NVARCHAR format, the DBMS can't directly compare it to varchar data, or use it in index searches. The DBMS has to convert the VARCHAR data to NVARCHAR, but it can't guarantee that the converted data will retain the same ordering as the index, so the DBMS has to do a table scan!
The properties I suggested are each driver's way of allowing you to say "I'm a simple American ;) I am using simple varchar data so please send my java strings that way." This allows the DBMS to use your varchar indexes in queries.
Joe Weinstein at BEA
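The same effect can be reproduced in plain T-SQL, which shows it is not JDBC-specific; the table and column names here are illustrative, not from the thread:

```sql
-- 't.name' is a hypothetical indexed VARCHAR column.
SELECT * FROM t WHERE name = N'abc'; -- NVARCHAR literal: implicit conversion, may force a scan
SELECT * FROM t WHERE name = 'abc';  -- VARCHAR literal: an index seek is possible
```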
Joe Weinstein <[email protected]> wrote:
Jen wrote:
It doesn't cure the problem. Here is my pool
<JDBCConnectionPool DriverName="weblogic.jdbc.sqlserver.SQLServerDriver"Name="test_pool"
Password="{3DES}fKSovViFe5kHzl/vTs0LVQ==" Properties="user=user;PortNumber=1543;useVarChars=true;ServerName=194.20.2.10;DatabaseName=devDB"
Targets="admin" TestTableName="SQL SELECT COUNT(*) FROM sysobjects" URL="jdbc:bea:sqlserver://194.20.2.10:1543"/>
Strangely, some databases are fine.

Oh, sorry. I thought it was the older weblogic driver. Change useVarChars=true to sendStringParametersAsUnicode=false
Let me know... Also, I suggest changing the TestTableName to "SQL select 1". For MS, that will be much more efficient than involving a full count of sysobjects!
Joe
Joe Weinstein <[email protected]> wrote:
Jen wrote:
You are right.

Tadaa! Am I Kreskin, or what? ;) Here's what I recommend:
In your pool definition, for this driver add a driver property:
useVarChars=true
and let me know if it's all better.
Joe
I am using weblogic jdbc driver weblogic.jdbc.mssqlserver4.Driver.
here is the code:
getData(Connection connection, String stmt, ArrayList arguments)
PreparedStatement pStatement = null;
ResultSet resultSet = null;
try {
    pStatement = connection.prepareStatement(stmt);
    for (int i = 1; i <= arguments.size(); i++) {
        pStatement.setString(i, (String) arguments.get(i-1));
    }
    resultSet = pStatement.executeQuery(); // this statement takes more than 1 min.
}
Joe Weinstein <[email protected]> wrote:
Jen wrote:
HI,
I am experiencing a very frustrating problem. I have 2 tables, and I created a view to union these 2 tables. A select on this view using a column of the view as criteria took more than 1 minute, but the query runs fine in Query Analyzer.
Has anybody had the same experience? Is this a problem with jdbc?

I have suspicions... Show me the jdbc code. I'm guessing it's a PreparedStatement, and you send the search criterion column value as a parameter you set with a setString()... Let me know... (also let me know which jdbc driver you're using).
Joe -
SQL query slow when issued by app, fast when issued manually
Hi there,
I have a more general question about a specific Oracle behaviour.
I update a feature from within an application. The application doesn't respond, and I finally have to terminate it. I checked in Oracle whether a query was running long, using the following statement:
select s.username,s.sid,s.serial#,s.last_call_et/60 mins_running,q.sql_text from v$session s
join v$sqltext_with_newlines q
on s.sql_address = q.address
where status='ACTIVE'
and type <>'BACKGROUND'
and last_call_et> 60
order by sid,serial#,q.piece
The result of the above query is:
WITH CONNECTION AS (
  SELECT * FROM WW_CONN C
  WHERE (C.FID_FROM IN (SELECT FID FROM WW_LINE WHERE FID_ATTR = :B1) AND C.F_CLASS_ID_FROM = 22)
     OR (C.FID_TO IN (SELECT FID FROM WW_LINE WHERE FID_ATTR = :B1) AND C.F_CLASS_ID_TO = 22)
)
SELECT MIN(P.FID_ATTR) AS FID_FROM
FROM CONNECTION C, WW_POINT P
WHERE (P.FID = C.FID_FROM AND C.F_CLASS_ID_FROM = 32 AND C.FLOW = 1)
   OR (P.FID = C.FID_TO AND C.F_CLASS_ID_TO = 32 AND C.FLOW = 2)
I have a different tool which shows me the binding parameter values. So I know that the value for :B1 is 5011 - the id of the feature being updated. This query runs for 20 mins and longer before it eventually stops. The update process involves multiple sql statements - so this one is not doing the update but is part of the process.
Here is the bit I do not understand: when I run the query in SQL Developer with value 5011 for :B1 it takes 0.5 secs to return a result.
Why is it, that the sql statement takes so long when issued by the application but takes less than a second when I run it manually?
I sent a dump of the data to the application vendor who is not able to reproduce the issue in their environment. Could someone explain to me what happens here or give me some keywords for further research?
We are using 11gR2, 64bit.
Many thanks,
Rob
Hi Rob,
at least you should see some differences in the statistics for the different child cursors (the one for the execution in the application should show at least a higher value for ELAPSED_TIME). I would use something like the following query to check the information for the child cursors:
select sql_id
, PLAN_HASH_VALUE
, CHILD_NUMBER
, EXECUTIONS
, ELAPSED_TIME
, USER_IO_WAIT_TIME
, CONCURRENCY_WAIT_TIME
, DISK_READS
, BUFFER_GETS
, ROWS_PROCESSED
from v$sql
where sql_id = your_sql_id
Regards
Martin -
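A common explanation for this fast-manually, slow-from-the-app pattern (only a hypothesis until the child-cursor statistics from V$SQL are compared) is bind variable peeking: at hard parse, Oracle peeks at the first bind value it sees and freezes a plan for that value, and later executions reuse the plan even when their values would deserve a different one. Running the statement in SQL Developer creates a fresh cursor, hence a fresh peek. A toy Python model of the mechanism (the sql_id, row counts, and 5% threshold are all invented; this is not Oracle code):

```python
TABLE_ROWS = 1_000_000
# Hypothetical histogram: rows matching each possible bind value.
rows_for_value = {9999: 600_000, 5011: 40}

plan_cache = {}  # stands in for the shared pool: sql_id -> frozen plan

def execute(sql_id, bind_value):
    if sql_id not in plan_cache:
        # Hard parse: peek at this first bind value and freeze a plan for it.
        selectivity = rows_for_value[bind_value] / TABLE_ROWS
        plan_cache[sql_id] = ("FULL TABLE SCAN" if selectivity > 0.05
                              else "INDEX RANGE SCAN")
    # Subsequent soft parses reuse the cached plan, whatever the new value is.
    return plan_cache[sql_id]

print(execute("8a7xg2k", 9999))  # FULL TABLE SCAN (peeked at an unselective value)
print(execute("8a7xg2k", 5011))  # FULL TABLE SCAN (bad plan reused for 5011)
plan_cache.clear()               # a fresh hard parse, as in SQL Developer...
print(execute("8a7xg2k", 5011))  # INDEX RANGE SCAN (...peeks 5011, gets the good plan)
```

On 11g, Adaptive Cursor Sharing can eventually spawn a second child cursor for the selective value, which is one reason such queries sometimes "suddenly work faster".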
MAP Toolkit - How to use this MAP tool kit for all SQL Server inventory in new work enviornment
Hi Every one
Just joined a new job and planning to do an inventory of the whole environment so I can get a list of all SQL Server installs. I downloaded the MAP toolkit just now, so I am looking for step-by-step information on using it for a SQL inventory. If anyone has documentation or screenshots to share, that would be great.
Also, is it OK to run this tool at any time, or should it run at night when there is less activity?
How long does it generally take for a medium-size environment where the server count is about 30 (Dev/Staging/Prod)?
Also, any scripts that give detailed information would be great too.
Thank you
Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach A.Shah
Hi Logicinisde,
According to your description, the issue regards the Microsoft Assessment and Planning Solution Accelerator. I suggest you post the question in the Solution Accelerators forum at
http://social.technet.microsoft.com/Forums/en-US/map/threads/ . It is appropriate and more experts will assist you.
The Microsoft Assessment and Planning (MAP) Toolkit is an agentless inventory, assessment, and reporting tool that can securely assess IT environments for various platform migrations. You can use MAP as part of a comprehensive process for planning and migrating
legacy databases to SQL Server instances.
There is more information about how to use the MAP (Microsoft Assessment and Planning) toolkit in the following articles:
http://blogs.technet.com/b/meamcs/archive/2012/09/24/how-to-use-map-tool-microsoft-assessment-and-planning-toolkit.aspx
Microsoft Assessment and Planning Toolkit - Technical FAQ:
http://ochoco.blogspot.in/2009/02/microsoft-assessment-and-planning.html
Regards,
Sofiya Li
Sofiya Li
TechNet Community Support -
Sql query slow from application but executes in 1 min in Toad
Hello,
When I execute a big select query from the DB itself, it gives results in a minute.
But when called from a .Net application it takes 10 mins to execute.
But after trying to execute it a number of times, it suddenly works faster sometimes. -
Sql query extremely slow in the new Linux environment, memory issues?
We just migrated to a new dev environment on Red Hat 5 Linux, and now the query is very slow. I used TOAD to run the query and it took about 700 milliseconds to finish; however, from any server connection the same SQL query takes hours to finish.
I checked the TOAD monitor; it said db_buffer_cache needs to be increased and the shared pool is too small.
Also, three red alerts from TOAD are:
1. Library Cache get hit ratio: Dynamic or unsharable sql
2. Chained fetch ratio: PCT free too low for a table
3. Parse to execute ratio: High parse to execute ratio.
The app team said it ran really quickly on the old AIX system; however, I ran it on the old system and monitored it in TOAD, and it gave me all the same red alerts there, though it did return query results a lot quicker.
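The "dynamic or unsharable SQL" and "high parse to execute ratio" alerts usually mean the application concatenates literal values into its SQL instead of using bind variables, so each call produces a distinct statement text that must be hard parsed and held separately in the library cache; that burns the shared pool no matter how large you make it. A toy Python sketch of the effect (the table and column names are invented):

```python
# Each distinct SQL text is a separate entry in the library cache and
# costs a hard parse; a bind variable lets one cursor be shared.
literal_texts = {f"SELECT * FROM orders WHERE order_id = {i}" for i in range(1000)}
bound_texts = {"SELECT * FROM orders WHERE order_id = :id" for _ in range(1000)}

print(len(literal_texts))  # 1000 cursors -> 1000 hard parses
print(len(bound_texts))    # 1 shared cursor -> parsed once, executed 1000 times
```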
Here are the parameters in the old system (11gR1 on AIX):
SQL> show parameter target
NAME TYPE VALUE
archive_lag_target integer 0
db_flashback_retention_target integer 1440
fast_start_io_target integer 0
fast_start_mttr_target integer 0
memory_max_target big integer 0
memory_target big integer 0
pga_aggregate_target big integer 278928K
sga_target big integer 0
SQL> show parameter shared
NAME TYPE VALUE
hi_shared_memory_address integer 0
max_shared_servers integer
shared_memory_address integer 0
shared_pool_reserved_size big integer 31876710
shared_pool_size big integer 608M
shared_server_sessions integer
shared_servers integer 0
SQL> show parameter db_buffer
SQL> show parameter buffer
NAME TYPE VALUE
buffer_pool_keep string
buffer_pool_recycle string
db_block_buffers integer 0
log_buffer integer 2048000
use_indirect_data_buffers boolean FALSE
SQL>
Here are the parameters in the new system (11gR2 on Red Hat Linux):
NAME TYPE VALUE
archive_lag_target integer 0
db_flashback_retention_target integer 1440
fast_start_io_target integer 0
fast_start_mttr_target integer 0
memory_max_target big integer 2512M
memory_target big integer 2512M
parallel_servers_target integer 192
pga_aggregate_target big integer 0
sga_target big integer 1648M
SQL> show parameter shared
NAME TYPE VALUE
hi_shared_memory_address integer 0
max_shared_servers integer
shared_memory_address integer 0
shared_pool_reserved_size big integer 28M
shared_pool_size big integer 0
shared_server_sessions integer
shared_servers integer 1
SQL> show parameter buffer
NAME TYPE VALUE
buffer_pool_keep string
buffer_pool_recycle string
db_block_buffers integer 0
log_buffer integer 18857984
use_indirect_data_buffers boolean FALSE
SQL>
Please help. Thanks in advance.
Duplicate question. Originally posted in sql query slow in new redhat enviornment
Please post in just one forum. -
Oracle XSU: oracle.xml.sql.query.OracleXMLQuery is not recognized
Hi, there!
I've got a problem when I tried to use Oracle XSU (the XML-SQL utility to generate XML). My simple Java application works fine using XSU, but when I created a stateless session bean I got an EJBException regarding this
line:
oracle.xml.sql.query.OracleXMLQuery qry = new oracle.xml.sql.query.OracleXMLQuery(conn, commandSQLStatement);
It doesn't recognize the OracleXMLQuery class, as I can see in the dump, even though my classpath includes the original location of that utility and the Oracle parser.
I'd really appreciate any help.
Thanks.
strXML = qry.getXMLString();
Patricia,
Did you go through the link
Re: XML SQL Utility
You have to put xsu12.jar in the lib directory of the jdev.
xsu12.jar is in the lib directory of the XDK installation.
You can download XDK from
http://www.oracle.com/technology/tech/xml/xdk/software/prod/xdk_java.html
Just download the XDK kit, get the xsu12.jar from the lib directory and put in the lib directory of the jdev.
-- Arvind -
SQL Query running really slow, any help in improving will be Great!
Hi,
I am really new to performance tuning and optimization techniques. Of explain plans, too, I only have theoretical knowledge, and no clue how to find the real issue that is making the query slow. So if anyone can give me a good direction on where to even start, that would be great.
Now, my issue is, I have a query which runs really, really slow. If I run this query for a small subset of data it runs fast (it's flying, actually), but if I run the same query for everything (the full required data), it runs for ages. (Actually it has been running for 2 days now and is still running.)
I am pasting my query here; the output shows that the query gets stuck after "Table created".
SQL> @routinginfo
Table dropped.
Table created.
Please please help!
I also ran explain plan for this query and there are a number of rows in the plan_table now..
Sorry! Is there a way to insert a file here, as I want to attach my explain plan also?
My query -Routinginfo.sql
set trimspool on
set heading on
set verify on
set serveroutput on
drop table routinginfo;
CREATE TABLE routinginfo
( POST_TOWN_NAME VARCHAR2(22 BYTE),
DELIVERY_OFFICE_NAME VARCHAR2(40 BYTE),
ROUTE_ID NUMBER(10),
ROUTE_NAME VARCHAR2(40 BYTE),
BUILDING_ID NUMBER(10),
SUB_BUILDING_ID NUMBER(10),
SEQUENCE_NO NUMBER(4),
PERSONAL_NAME VARCHAR2(60 BYTE),
ADDRESS VARCHAR2(1004 BYTE),
BUILDING_USE VARCHAR2(1 BYTE),
COMMENTS VARCHAR2(200 BYTE),
EAST NUMBER(17,5),
NORTH NUMBER(17,5)
);
insert into routinginfo
(post_town_name,delivery_office_name,route_id,route_name,
building_id,sub_building_id,sequence_no,personal_name,
address,building_use,comments,east,north)
select
p.name,
d.name,
b.route_id,
r.name,
b.building_id,
s.sub_build_id,
b.sequence_no,
b.personal_name,
ad.addr_line_1||' '||ad.addr_line_2||' '||ad.addr_line_3||' '||ad.addr_line_4||' '||ad.addr_line_5,
b.building_use,
rtrim(replace(b.comments,chr(10),'')),
b.east,
b.north
from t_buildings b,
(select * from t_sub_buildings where nvl(invalid,'N') = 'N') s,
t_routes r,
t_delivery_offices d,
t_post_towns p,
t_address_model ad
where b.building_id = s.building_id(+)
and s.building_id is null
and r.route_id=b.route_id
and (nvl(b.residential_delivery_points,0) > 0 OR nvl(b.commercial_delivery_points,0) > 0)
and r.delivery_office_id=d.delivery_office_id
--and r.delivery_office_id=303
and D.POST_TOWN_ID=P.post_town_id
and ad.building_id=b.building_id
and ad.sub_building_id is null
and nvl(b.invalid, 'N') = 'N'
and nvl(b.derelict, 'N') = 'N'
union
select
p.name,
d.name ,
b.route_id ,
r.name ,
b.building_id ,
s.sub_build_id ,
NVL(s.sequence_no,b.sequence_no),
b.personal_name ,
ad.addr_line_1||' '||ad.addr_line_2||' '||ad.addr_line_3||' '||ad.addr_line_4||' '||ad.addr_line_5,
b.building_use,
rtrim(replace(b.comments,chr(10),'')),
b.east,
b.north
from t_buildings b,
(select * from t_sub_buildings where nvl(invalid,'N') = 'N') s,
t_routes r,
t_delivery_offices d,
t_post_towns p,
t_address_model ad
where s.building_id = b.building_id
and r.route_id = s.route_id
and (nvl(b.residential_delivery_points,0) > 0 OR nvl(b.commercial_delivery_points,0) > 0)
and r.delivery_office_id=d.delivery_office_id
--and r.delivery_office_id=303
and D.POST_TOWN_ID=P.post_town_id
and ad.building_id=b.building_id
and ad.sub_building_id = s.sub_build_id
and nvl(b.invalid, 'N') = 'N'
and nvl(b.derelict, 'N') = 'N'
union
select
p.name,
d.name,
b.route_id ,
r.name ,
b.building_id,
s.sub_build_id ,
NVL(s.sequence_no,b.sequence_no) ,
b.personal_name ,
ad.addr_line_1||' '||ad.addr_line_2||' '||ad.addr_line_3||' '||ad.addr_line_4||' '||ad.addr_line_5 ,
b.building_use,
rtrim(replace(b.comments,chr(10),'')),
b.east,
b.north
from t_buildings b,
(select * from t_sub_buildings where nvl(invalid,'N') = 'N') s,
t_routes r,
t_delivery_offices d,
t_post_towns p,
t_localities l,
t_localities lo,
t_localities loc,
t_tlands tl,
t_address_model ad
where s.building_id = b.building_id
and s.route_id is null
and r.route_id = b.route_id
and (nvl(b.residential_delivery_points,0) > 0 OR nvl(b.commercial_delivery_points,0) > 0)
and r.delivery_office_id=d.delivery_office_id
--and r.delivery_office_id=303
and D.POST_TOWN_ID=P.post_town_id
and ad.building_id=b.building_id
and ad.sub_building_id = s.sub_build_id
and nvl(b.invalid, 'N') = 'N'
and nvl(b.derelict, 'N') = 'N';
commit;
Edited by: Krithi on 16-Jun-2009 01:48
Edited by: Krithi on 16-Jun-2009 01:51
Edited by: Krithi on 16-Jun-2009 02:44
This link is helpful alright, but as a beginner it is taking me too long to understand. I am going to learn the techniques for sure, though.
For the time being, I am pasting my explain plan for the above query here, in the hope that an expert can really help me with this one.
STATEMENT_ID TIMESTAMP REMARKS OPERATION OPTIONS OBJECT_NODE OBJECT_OWNER OBJECT_NAME OBJECT_INSTANCE OBJECT_TYPE OPTIMIZER SEARCH_COLUMNS ID PARENT_ID POSITION COST CARDINALITY BYTES
06/16/2009 09:33:01 SELECT STATEMENT CHOOSE 0 829,387,159,200 829,387,159,200 3,720,524,291,654,720 703,179,091,122,042,000
06/16/2009 09:33:01 SORT UNIQUE 1 0 1 829,387,159,200 3,720,524,291,654,720 703,179,091,122,042,000
06/16/2009 09:33:01 UNION-ALL 2 1 1
06/16/2009 09:33:01 HASH JOIN 3 2 1 11,209 87,591 15,853,971
06/16/2009 09:33:01 FILTER 4 3 1
06/16/2009 09:33:01 HASH JOIN OUTER 5 4 1
06/16/2009 09:33:01 HASH JOIN 6 5 1 5,299 59,325 6,585,075
06/16/2009 09:33:01 VIEW GEO2 index$_join$_006 6 7 6 1 4 128 1,792
06/16/2009 09:33:01 HASH JOIN 8 7 1 5,299 59,325 6,585,075
06/16/2009 09:33:01 INDEX FAST FULL SCAN GEO2 POST_TOWN_NAME_I NON-UNIQUE ANALYZED 9 8 1 1 128 1,792
06/16/2009 09:33:01 INDEX FAST FULL SCAN GEO2 POST_TOWN_PK UNIQUE ANALYZED 10 8 2 1 128 1,792
06/16/2009 09:33:01 HASH JOIN 11 6 2 5,294 59,325 5,754,525
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_DELIVERY_OFFICES 5 ANALYZED 12 11 1 7 586 10,548
06/16/2009 09:33:01 HASH JOIN 13 11 2 5,284 59,325 4,686,675
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_ROUTES 4 ANALYZED 14 13 1 7 4,247 118,916
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_BUILDINGS 1 ANALYZED 15 13 2 5,265 59,408 3,029,808
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_SUB_BUILDINGS 3 ANALYZED 16 5 2 851 278,442 3,898,188
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_ADDRESS_MODEL 7 ANALYZED 17 3 2 3,034 1,582,421 88,615,576
06/16/2009 09:33:01 NESTED LOOPS 18 2 2 10,217 1 189
06/16/2009 09:33:01 NESTED LOOPS 19 18 1 10,216 1 175
06/16/2009 09:33:01 HASH JOIN 20 19 1 10,215 1 157
06/16/2009 09:33:01 HASH JOIN 21 20 1 6,467 80,873 8,168,173
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_ROUTES 11 ANALYZED 22 21 1 7 4,247 118,916
06/16/2009 09:33:01 HASH JOIN 23 21 2 6,440 80,924 5,907,452
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_BUILDINGS 8 ANALYZED 24 23 1 5,265 59,408 3,029,808
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_SUB_BUILDINGS 10 ANALYZED 25 23 2 851 278,442 6,125,724
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_ADDRESS_MODEL 14 ANALYZED 26 20 2 3,034 556,000 31,136,000
06/16/2009 09:33:01 TABLE ACCESS BY INDEX ROWID GEO2 T_DELIVERY_OFFICES 12 ANALYZED 27 19 2 1 1 18
06/16/2009 09:33:01 INDEX UNIQUE SCAN GEO2 DELIVERY_OFFICE_PK UNIQUE ANALYZED 1 28 27 1 1
06/16/2009 09:33:01 TABLE ACCESS BY INDEX ROWID GEO2 T_POST_TOWNS 13 ANALYZED 29 18 2 1 1 14
06/16/2009 09:33:01 INDEX UNIQUE SCAN GEO2 POST_TOWN_PK UNIQUE ANALYZED 1 30 29 1 1
06/16/2009 09:33:01 MERGE JOIN CARTESIAN 31 2 3 806,976,583,802 3,720,524,291,567,130 703,179,091,106,188,000
06/16/2009 09:33:01 MERGE JOIN CARTESIAN 32 31 1 16,902,296 73,359,971,046 13,865,034,527,694
06/16/2009 09:33:01 MERGE JOIN CARTESIAN 33 32 1 1,860 1,207,174 228,155,886
06/16/2009 09:33:01 MERGE JOIN CARTESIAN 34 33 1 1,580 20 3,780
06/16/2009 09:33:01 NESTED LOOPS 35 34 1 1,566 1 189
06/16/2009 09:33:01 NESTED LOOPS 36 35 1 1,565 1 175
06/16/2009 09:33:01 NESTED LOOPS 37 36 1 1,564 1 157
06/16/2009 09:33:01 NESTED LOOPS 38 37 1 1,563 1 129
06/16/2009 09:33:01 NESTED LOOPS 39 38 1 1,207 178 12,994
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_SUB_BUILDINGS 17 ANALYZED 40 39 1 851 178 3,916
06/16/2009 09:33:01 TABLE ACCESS BY INDEX ROWID GEO2 T_BUILDINGS 15 ANALYZED 41 39 2 2 1 51
06/16/2009 09:33:01 INDEX UNIQUE SCAN GEO2 BUILDING_PK UNIQUE ANALYZED 1 42 41 1 1 31
06/16/2009 09:33:01 TABLE ACCESS BY INDEX ROWID GEO2 T_ADDRESS_MODEL 25 ANALYZED 43 38 2 2 1 56
06/16/2009 09:33:01 INDEX UNIQUE SCAN GEO2 MODEL_MODEL2_UK UNIQUE ANALYZED 2 44 43 1 1 1
06/16/2009 09:33:01 TABLE ACCESS BY INDEX ROWID GEO2 T_ROUTES 18 ANALYZED 45 37 2 1 1 28
06/16/2009 09:33:01 INDEX UNIQUE SCAN GEO2 ROUTE_PK UNIQUE ANALYZED 1 46 45 1 1
06/16/2009 09:33:01 TABLE ACCESS BY INDEX ROWID GEO2 T_DELIVERY_OFFICES 19 ANALYZED 47 36 2 1 1 18
06/16/2009 09:33:01 INDEX UNIQUE SCAN GEO2 DELIVERY_OFFICE_PK UNIQUE ANALYZED 1 48 47 1 1
06/16/2009 09:33:01 TABLE ACCESS BY INDEX ROWID GEO2 T_POST_TOWNS 20 ANALYZED 49 35 2 1 1 14
06/16/2009 09:33:01 INDEX UNIQUE SCAN GEO2 POST_TOWN_PK UNIQUE ANALYZED 1 50 49 1 1
06/16/2009 09:33:01 BUFFER SORT 51 34 2 1,579 60,770
06/16/2009 09:33:01 INDEX FAST FULL SCAN GEO2 LOCAL_COUNTY_FK_I NON-UNIQUE ANALYZED 52 51 1 14 60,770
06/16/2009 09:33:01 BUFFER SORT 53 33 2 1,846 60,770
06/16/2009 09:33:01 INDEX FAST FULL SCAN GEO2 LOCAL_COUNTY_FK_I NON-UNIQUE ANALYZED 54 53 1 14 60,770
06/16/2009 09:33:01 BUFFER SORT 55 32 2 16,902,282 60,770
06/16/2009 09:33:01 INDEX FAST FULL SCAN GEO2 LOCAL_COUNTY_FK_I NON-UNIQUE ANALYZED 56 55 1 14 60,770
06/16/2009 09:33:01 BUFFER SORT 57 31 2 806,976,583,788 50,716
06/16/2009 09:33:01 INDEX FAST FULL SCAN GEO2 TLAND_COUNTY_FK_I NON-UNIQUE ANALYZED 58 57 1 11 50,716 -------------------------------------------------------------
Edited by: Krithi on 16-Jun-2009 02:47 -
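One thing the plan itself points at (a reading of the plan, not a verified diagnosis): the MERGE JOIN CARTESIAN steps correspond to the third UNION branch, whose FROM clause lists t_localities three times (aliases l, lo, loc) and t_tlands (tl) with no join conditions on any of them, so their row counts simply multiply. A quick check of that arithmetic in Python, using the row counts shown in the plan:

```python
# Cardinalities taken from the MERGE JOIN CARTESIAN steps of the explain plan.
joined_part = 1_207_174  # estimate after the first unconstrained t_localities scans
t_localities = 60_770    # rows per LOCAL_COUNTY_FK_I index fast full scan
t_tlands = 50_716        # rows in the TLAND_COUNTY_FK_I index fast full scan

estimate = joined_part * t_localities * t_tlands
print(f"{estimate:.2e}")  # ~3.72e+15, the same order as the plan's 3,720,524,291,567,130
```

Dropping the unreferenced tables from that branch (or adding the missing join predicates) should remove the Cartesian steps entirely.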
IF NEW VARIABLE IN SQL QUERY, DISPLAYS AS LAST COLUMN + rpad not working
Hello everybody and thank you in advance,
1) If I add a new variable to my SQL query to select a column which was already in the table, it shows it in the report table as the last column to the right. That is, if I add "street" to
something like city, postcode, street, store, manager, etc., instead of placing street between postcode and store, it places it, as I said, as the last column to the right.
2) When values are entered into the cells of the table, yes, they do expand it to their needed length, but only if it is one word. If it is two, like when I enter the value "very good",
then it takes two lines, as if with a carriage return within the cell, thus making the row too high. I tried to pad spaces with rpad but it did not work, something like rpad(stock, 20, ' ').
I must say that the table is on the same page as a Form, so as the table grows in length it is actually squeezing the form located right on its left.
3) rpad did not work with the most simple syntax, much less with what I need, because it turns out I am using DECODE in order to do a conversion between the value displayed and
the value returned in my select list of values, something like: DECODE (TO_CHAR (stock),'1','Deficient','2','Average','3','Good','4','Very Good',null) AS stock,
so I have tried to put the rpad there in several places, but it either gave a parsing error or left the column empty, not picking up any values.
Thank you very much
Alvaro
Alvaro,
1) That is standard behaviour of the APEX builder. You can change the display order with the arrows in the column list of the report attributes.
2) You will have to play with the style attributes of the column to accomplish this, for instance style="white-space:pre;" in the Element Attributes of the column attributes. white-space:normal will treat several spaces (' ') as one, so no matter how many you add with rpad they will be shown as one.
Or set a width, either as an attribute or in a style class for that column.
Nicolette