Query is taking much time (query optimization)
Dear all,
I have written this query; can you please guide me on how to optimize it?
SELECT distinct papf.employee_number,paaf.POSITION_ID, paaf.ASS_ATTRIBUTE3 ntn,
paaf.payroll_id,
paaf.assignment_id,paa.assignment_action_id,papf.EFFECTIVE_START_DATE pop_st_date,papf.EFFECTIVE_END_DATE pop_end_date,
paaf.effective_start_date ass_st,
paaf.effective_end_date ass_end,
ptp.start_date pay_st,
ptp.end_date pay_end,
TO_CHAR (pac.effective_date, 'YYYY/MM/DD') eff_date,
ppg.segment1 region,
pac.payroll_action_id,
hla.location_id,
hla.location_code branch,
substr(papf.full_name,0,35) NAME,
pg.short_name CATEGORY,
DECODE (paaf.employment_category,
'CONT_EXPAT', 'Expatriate',
'CONT_PR', 'Post Retirement',
'CONT_PT', 'Part Time (Visiting Faculty)',
'CONT_VISIT', 'Ad-Hoc(Regular Contract)',
'Permanent'
) status,
TO_CHAR (papf.original_date_of_hire, 'DD-MON-YYYY') joining_date,
hr_payrolls.display_period_name (pac.payroll_action_id) period_name
FROM per_all_assignments_f paaf,
per_all_people_f papf,
pay_people_groups ppg,
hr_locations_all hla,
per_grades pg,
pay_assignment_actions paa,
pay_run_results prr,
pay_payroll_actions pac,
per_time_periods ptp,
hr_lookups hl
WHERE paaf.person_id = papf.person_id
AND paa.ASSIGNMENT_ACTION_ID = prr.ASSIGNMENT_ACTION_ID
AND paaf.people_group_id = ppg.people_group_id
AND pac.payroll_id = ptp.payroll_id
AND paaf.location_id = hla.location_id
AND paaf.grade_id = pg.grade_id
AND paaf.employment_category = hl.lookup_code
AND hl.lookup_type LIKE 'EMP_CAT%'
AND hl.enabled_flag = 'Y'
AND hl.lookup_code = NVL (:assignment_category, hl.lookup_code)
AND pg.short_name = NVL (:grade, pg.short_name)
AND paa.assignment_id = paaf.assignment_id
AND pac.payroll_action_id = paa.payroll_action_id
AND paaf.primary_flag = 'Y'
AND paaf.payroll_id IS NOT NULL
AND pac.action_type IN ('R', 'Q')
and ptp.END_DATE between papf.EFFECTIVE_START_DATE and papf.EFFECTIVE_END_DATE
AND pac.effective_date >= ptp.start_date
AND pac.effective_date <= ptp.end_date
AND ptp.period_name = :period_name
AND pac.payroll_id = NVL (:p_payroll_id, pac.payroll_id)
AND papf.employee_number = nvl(:P_EMP_NUM,papf.employee_number)
AND ptp.end_date BETWEEN paaf.effective_start_date
AND paaf.effective_end_date
AND prr.assignment_action_id = -- this subquery filters the records to the max assignment_action_id when payroll runs twice in a month
(SELECT MAX (paa1.assignment_action_id)
FROM per_all_assignments_f paaf1,
per_all_people_f papf1,
pay_people_groups ppg1,
hr_locations_all hla1,
per_grades pg1,
pay_assignment_actions paa1,
pay_run_results prr1,
pay_payroll_actions pac1,
per_time_periods ptp1,
hr_lookups hl1
WHERE paaf1.person_id = papf1.person_id
AND paa1.assignment_action_id =
prr1.assignment_action_id
AND paaf1.people_group_id = ppg1.people_group_id
AND pac1.payroll_id = ptp1.payroll_id
AND paaf1.location_id = hla1.location_id
AND paaf1.grade_id = pg1.grade_id
AND paaf1.employment_category = hl1.lookup_code
AND hl1.lookup_type LIKE 'EMP_CAT%'
AND hl1.enabled_flag = 'Y'
AND hl1.lookup_code =
NVL (:assignment_category, hl1.lookup_code)
AND pg1.short_name = NVL (:grade, pg1.short_name)
AND paa1.assignment_id = paaf1.assignment_id
AND pac1.payroll_action_id = paa1.payroll_action_id
AND paaf1.primary_flag = 'Y'
AND paaf1.payroll_id IS NOT NULL
AND pac1.action_type IN ('R', 'Q')
AND papf1.employee_number = papf.employee_number
AND ptp1.end_date BETWEEN papf1.effective_start_date
AND papf1.effective_end_date
AND pac1.effective_date >= ptp1.start_date
AND pac1.effective_date <= ptp1.end_date
AND ptp1.period_name = :period_name
AND pac1.payroll_id =
NVL (:p_payroll_id, pac1.payroll_id)
AND ptp1.end_date BETWEEN paaf1.effective_start_date
AND paaf1.effective_end_date)
-- This code was added on 15-SEP-09 for advance salary payment
ORDER BY region, branch
Regards
Maybe something like this:
SELECT distinct
employee_number,
POSITION_ID,
ntn,
payroll_id,
assignment_action_id,
pop_st_date,
pop_end_date,
ass_st,
ass_end,
pay_st,
pay_end,
eff_date,
region,
payroll_action_id,
location_id,
branch,
NAME,
CATEGORY,
status,
joining_date,
period_name
from (select papf.employee_number,
paaf.POSITION_ID,
paaf.ASS_ATTRIBUTE3 ntn,
paaf.payroll_id,
paaf.assignment_id,
paa.assignment_action_id,
papf.EFFECTIVE_START_DATE pop_st_date,
papf.EFFECTIVE_END_DATE pop_end_date,
paaf.effective_start_date ass_st,
paaf.effective_end_date ass_end,
ptp.start_date pay_st,
ptp.end_date pay_end,
TO_CHAR(pac.effective_date,'YYYY/MM/DD') eff_date,
ppg.segment1 region,
pac.payroll_action_id,
hla.location_id,
hla.location_code branch,
substr(papf.full_name,0,35) NAME,
pg.short_name CATEGORY,
DECODE(paaf.employment_category,
'CONT_EXPAT','Expatriate',
'CONT_PR', 'Post Retirement',
'CONT_PT', 'Part Time (Visiting Faculty)',
'CONT_VISIT','Ad-Hoc (Regular Contract)',
'Permanent'
) status,
TO_CHAR(papf.original_date_of_hire,'DD-MON-YYYY') joining_date,
hr_payrolls.display_period_name(pac.payroll_action_id) period_name,
/* To enable filter on Max assignment action id with payroll run twice in a month */
max(paa.assignment_action_id) over
(partition by papf.employee_number) max_assignment_action_id
/* This code is added on 15-SEP 09 for advance salary payment */
FROM per_all_assignments_f paaf,
per_all_people_f papf,
pay_people_groups ppg,
hr_locations_all hla,
per_grades pg,
pay_assignment_actions paa,
pay_run_results prr,
pay_payroll_actions pac,
per_time_periods ptp,
hr_lookups hl
WHERE paaf.person_id = papf.person_id
AND paa.ASSIGNMENT_ACTION_ID = prr.ASSIGNMENT_ACTION_ID
AND paaf.people_group_id = ppg.people_group_id
AND pac.payroll_id = ptp.payroll_id
AND paaf.location_id = hla.location_id
AND paaf.grade_id = pg.grade_id
AND paaf.employment_category = hl.lookup_code
AND hl.lookup_type LIKE 'EMP_CAT%'
AND hl.enabled_flag = 'Y'
AND hl.lookup_code = NVL(:assignment_category,hl.lookup_code)
AND pg.short_name = NVL(:grade,pg.short_name)
AND paa.assignment_id = paaf.assignment_id
AND pac.payroll_action_id = paa.payroll_action_id
AND paaf.primary_flag = 'Y'
AND paaf.payroll_id IS NOT NULL
AND pac.action_type IN ('R','Q')
AND papf.employee_number = nvl(:P_EMP_NUM,papf.employee_number)
and ptp.END_DATE between papf.EFFECTIVE_START_DATE
and papf.EFFECTIVE_END_DATE
AND pac.effective_date >= ptp.start_date
AND pac.effective_date <= ptp.end_date
AND ptp.period_name = :period_name
AND pac.payroll_id = NVL(:p_payroll_id,pac.payroll_id)
AND ptp.end_date BETWEEN paaf.effective_start_date
AND paaf.effective_end_date
)
/* This condition filters the records to the max assignment action id when payroll runs twice in a month */
where assignment_action_id = max_assignment_action_id
/* This code is added on 15-SEP 09 for advance salary payment */
ORDER BY region, branch
Regards
Etbin
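The idea behind Etbin's rewrite, replacing a correlated MAX() subquery with an analytic function so the tables are scanned once instead of twice, can be shown on a hypothetical toy table `runs(emp_no, action_id)`:

```sql
-- Correlated form: the subquery re-scans runs for every outer row.
SELECT r.emp_no, r.action_id
FROM   runs r
WHERE  r.action_id = (SELECT MAX(r2.action_id)
                      FROM   runs r2
                      WHERE  r2.emp_no = r.emp_no);

-- Analytic form: one pass computes the per-employee maximum alongside each row.
SELECT emp_no, action_id
FROM  (SELECT r.emp_no, r.action_id,
              MAX(r.action_id) OVER (PARTITION BY r.emp_no) max_action_id
       FROM   runs r)
WHERE  action_id = max_action_id;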
Similar Messages
-
Why this Query is taking much longer time than expected?
Hi,
I need the experts' support on the below-mentioned issue:
Why is this query taking much longer than expected? Sometimes I am getting a connection timeout error. Is there a better way to achieve the result in the shortest time? Below, please find the DDL & DML:
DDL
BHDCollections
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[BHDCollections](
[BHDCollectionid] [bigint] IDENTITY(1,1) NOT NULL,
[GroupMemberid] [int] NOT NULL,
[BHDDate] [datetime] NOT NULL,
[BHDShift] [varchar](10) NULL,
[SlipValue] [decimal](18, 3) NOT NULL,
[ProcessedValue] [decimal](18, 3) NOT NULL,
[BHDRemarks] [varchar](500) NULL,
[Createdby] [varchar](50) NULL,
[Createdon] [datetime] NULL,
CONSTRAINT [PK_BHDCollections] PRIMARY KEY CLUSTERED
(
[BHDCollectionid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
BHDCollectionsDet
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[BHDCollectionsDet](
[CollectionDetailid] [bigint] IDENTITY(1,1) NOT NULL,
[BHDCollectionid] [bigint] NOT NULL,
[Currencyid] [int] NOT NULL,
[Denomination] [decimal](18, 3) NOT NULL,
[Quantity] [int] NOT NULL,
CONSTRAINT [PK_BHDCollectionsDet] PRIMARY KEY CLUSTERED
(
[CollectionDetailid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Banks
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Banks](
[Bankid] [int] IDENTITY(1,1) NOT NULL,
[Bankname] [varchar](50) NOT NULL,
[Bankabbr] [varchar](50) NULL,
[BankContact] [varchar](50) NULL,
[BankTel] [varchar](25) NULL,
[BankFax] [varchar](25) NULL,
[BankEmail] [varchar](50) NULL,
[BankActive] [bit] NULL,
[Createdby] [varchar](50) NULL,
[Createdon] [datetime] NULL,
CONSTRAINT [PK_Banks] PRIMARY KEY CLUSTERED
(
[Bankid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
Groupmembers
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[GroupMembers](
[GroupMemberid] [int] IDENTITY(1,1) NOT NULL,
[Groupid] [int] NOT NULL,
[BAID] [int] NOT NULL,
[Createdby] [varchar](50) NULL,
[Createdon] [datetime] NULL,
CONSTRAINT [PK_GroupMembers] PRIMARY KEY CLUSTERED
(
[GroupMemberid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[GroupMembers] WITH CHECK ADD CONSTRAINT [FK_GroupMembers_BankAccounts] FOREIGN KEY([BAID])
REFERENCES [dbo].[BankAccounts] ([BAID])
GO
ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_BankAccounts]
GO
ALTER TABLE [dbo].[GroupMembers] WITH CHECK ADD CONSTRAINT [FK_GroupMembers_Groups] FOREIGN KEY([Groupid])
REFERENCES [dbo].[Groups] ([Groupid])
GO
ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_Groups]
BankAccounts
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[BankAccounts](
[BAID] [int] IDENTITY(1,1) NOT NULL,
[CustomerID] [int] NOT NULL,
[Locationid] [varchar](25) NOT NULL,
[Bankid] [int] NOT NULL,
[BankAccountNo] [varchar](50) NOT NULL,
CONSTRAINT [PK_BankAccounts] PRIMARY KEY CLUSTERED
(
[BAID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[BankAccounts] WITH CHECK ADD CONSTRAINT [FK_BankAccounts_Banks] FOREIGN KEY([Bankid])
REFERENCES [dbo].[Banks] ([Bankid])
GO
ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Banks]
GO
ALTER TABLE [dbo].[BankAccounts] WITH CHECK ADD CONSTRAINT [FK_BankAccounts_Locations1] FOREIGN KEY([Locationid])
REFERENCES [dbo].[Locations] ([Locationid])
GO
ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Locations1]
Currency
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Currency](
[Currencyid] [int] IDENTITY(1,1) NOT NULL,
[CurrencyISOCode] [varchar](20) NOT NULL,
[CurrencyCountry] [varchar](50) NULL,
[Currency] [varchar](50) NULL,
CONSTRAINT [PK_Currency] PRIMARY KEY CLUSTERED
(
[Currencyid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
CurrencyDetails
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[CurrencyDetails](
[CurDenid] [int] IDENTITY(1,1) NOT NULL,
[Currencyid] [int] NOT NULL,
[Denomination] [decimal](15, 3) NOT NULL,
[DenominationType] [varchar](25) NOT NULL,
CONSTRAINT [PK_CurrencyDetails] PRIMARY KEY CLUSTERED
(
[CurDenid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
QUERY
WITH TEMP_TABLE AS (
SELECT 0 AS COINS, BHDCollectionsDet.Quantity AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
(BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
FROM BHDCollections INNER JOIN
BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
Banks ON BankAccounts.Bankid = Banks.Bankid
GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
HAVING (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'Currency') AND
(CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)
UNION ALL
SELECT BHDCollectionsDet.Quantity AS COINS, 0 AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
(BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
FROM BHDCollections INNER JOIN
BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
Banks ON BankAccounts.Bankid = Banks.Bankid
GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
HAVING (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'COIN') AND
(CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)),
TEMP_TABLE2 AS (
SELECT CollectionDate, Bankname, DSLIPS AS DSLIPS, SUM(BN) AS BN, SUM(COINS) AS COINS FROM TEMP_TABLE Group By CollectionDate, DSLIPS, Bankname
)
SELECT CollectionDate,Bankname,count(DSLIPS) AS DSLIPS,sum(BN) AS BN,sum(COINS) AS coins FROM TEMP_TABLE2 Group By CollectionDate,Bankname
HAVING COUNT(DSLIPS)<>0;
Without seeing an execution plan of the query it is hard to suggest something useful. Try inserting the result of the UNION ALL into a temporary table, and then perform the aggregation on that table, not on a CTE.
Just
SELECT CollectionDate,Bankname,DSLIPS AS DSLIPS,SUM(BN) AS BN,SUM(COINS)AS COINS FROM
#tmp Group By CollectionDate,DSLIPS,Bankname
HAVING COUNT(DSLIPS)<>0;
Best Regards, Uri Dimant SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/ -
Query taking much time.
Hi All,
I have one query which is taking much time in the dev environment, where the data size is very small, and we are planning to run this query against the production database, where the database size is huge. Please let me know how I can optimize this query.
select count(*) from (
select /*+ full(tls) full(tlo) parallel(tls,2) parallel(tls, 2) */
tls.siebel_ba, tls.msisdn
from
TDB_LIBREP_SIEBEL tls, TDB_LIBREP_ONDB tlo
where
tls.siebel_ba = tlo.siebel_ba (+) and
tls.msisdn = tlo.msisdn (+) and
tlo.siebel_ba is null and
tlo.msisdn is null
union
select /*+ full(tls) full(tlo) parallel(tls,2) parallel(tls, 2) */
tlo.siebel_ba, tlo.msisdn
from
TDB_LIBREP_SIEBEL tls, TDB_LIBREP_ONDB tlo
where
tls.siebel_ba (+) = tlo.siebel_ba and
tls.msisdn (+) = tlo.msisdn and
tls.siebel_ba is null and
tls.msisdn is null
);
The explain plan of the above query is:
| Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | | 14 | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | SORT AGGREGATE | | 1 | | | 41,04 | P->S | QC (RAND) |
| 3 | VIEW | | 164 | | 14 | 41,04 | PCWP | |
| 4 | SORT UNIQUE | | 164 | 14104 | 14 | 41,04 | PCWP | |
| 5 | UNION-ALL | | | | | 41,03 | P->P | HASH |
|* 6 | FILTER | | | | | 41,03 | PCWC | |
|* 7 | HASH JOIN OUTER | | | | | 41,03 | PCWP | |
| 8 | TABLE ACCESS FULL| TDB_LIBREP_SIEBEL | 82 | 3526 | 1 | 41,03 | PCWP | |
| 9 | TABLE ACCESS FULL| TDB_LIBREP_ONDB | 82 | 3526 | 2 | 41,00 | S->P | BROADCAST |
|* 10 | FILTER | | | | | 41,03 | PCWC | |
|* 11 | HASH JOIN OUTER | | | | | 41,03 | PCWP | |
| 12 | TABLE ACCESS FULL| TDB_LIBREP_ONDB | 82 | 3526 | 2 | 41,01 | S->P | HASH |
| 13 | TABLE ACCESS FULL| TDB_LIBREP_SIEBEL | 82 | 3526 | 1 | 41,02 | P->P | HASH |
Predicate Information (identified by operation id):
6 - filter("TLO"."SIEBEL_BA" IS NULL AND "TLO"."MSISDN" IS NULL)
7 - access("TLS"."SIEBEL_BA"="TLO"."SIEBEL_BA"(+) AND "TLS"."MSISDN"="TLO"."MSISDN"(+))
10 - filter("TLS"."SIEBEL_BA" IS NULL AND "TLS"."MSISDN" IS NULL)
11 - access("TLS"."SIEBEL_BA"(+)="TLO"."SIEBEL_BA" AND "TLS"."MSISDN"(+)="TLO"."MSISDN")
I dunno, it looks like you are getting all the rows that have no match via an outer join, so won't that decide to full scan anyway? Plus the UNION means it will do the work twice and then a distinct to get rid of duplicates - see how it does a UNION-ALL and then a SORT UNIQUE. Somehow I have the feeling there might be a more trick way to do what you want, so maybe you should state exactly what you want in English. -
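One possible "more trick way", assuming (siebel_ba, msisdn) is unique within each table (otherwise the count can differ from the UNION version): a single FULL OUTER JOIN finds the unmatched rows of both sides in one pass:

```sql
SELECT COUNT(*)
FROM  (SELECT NVL(tls.siebel_ba, tlo.siebel_ba) siebel_ba,
              NVL(tls.msisdn,    tlo.msisdn)    msisdn
       FROM   tdb_librep_siebel tls
       FULL OUTER JOIN tdb_librep_ondb tlo
              ON  tls.siebel_ba = tlo.siebel_ba
              AND tls.msisdn    = tlo.msisdn
       -- keep only rows present on exactly one side
       WHERE  tls.siebel_ba IS NULL
           OR tlo.siebel_ba IS NULL);
```

This reads each table once instead of twice and needs no SORT UNIQUE step.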
Query taking much time Oracle 9i
Hi,
**How can we tune the SQL query in Oracle 9i?**
The select query is taking more than 1 hour and 30 minutes to return the result.
Because of this,
we created a materialized view on the select query and also submitted a job in dba_jobs to refresh the materialized view daily.
When we retrieve the data from the materialized view we get the result very quickly.
But the refresh job in dba_jobs takes as long to complete as the query itself used to take.
We are concerned that, since the job takes so much time in the test database, it may cause load if we move the same scripts to the production environment.
Please suggest how to resolve the issue and also how to tune the SQL.
With Regards,
Srinivas
Edited by: Srinivas.. on Dec 17, 2009 6:29 AM
Hi Srinivas,
Please follow this search and see if it is helpful.
Regards,
Helios -
Hi all,
db:oracle 9i
I am facing the below query problem.
The problem is that the query is now taking much more time (45 min) than it did earlier (10 sec).
Please can anyone suggest something?
SQL> SELECT MAX (tdar1.ID) ID, tdar1.request_id, tdar1.lolm_transaction_id,
tdar1.transaction_version
FROM transaction_data_arc tdar1
WHERE tdar1.transaction_name = 'O96U '
AND tdar1.transaction_type = 'REQUEST'
AND tdar1.message_type_code = 'PCN'
AND NOT EXISTS (
SELECT NULL
FROM transaction_data_arc tdar2
WHERE tdar2.request_id = tdar1.request_id
AND tdar2.lolm_transaction_id != tdar1.lolm_transaction_id
AND tdar2.ID > tdar1.ID)
GROUP BY tdar1.request_id,
tdar1.lolm_transaction_id,
tdar1.transaction_version;
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=1 Bytes=42)
1 0 SORT (GROUP BY) (Cost=12 Card=1 Bytes=42)
2 1 FILTER
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'TRANSACTION_DATA_ARC
' (Cost=1 Card=1 Bytes=42)
4 3 INDEX (RANGE SCAN) OF 'NK_TDAR_2' (NON-UNIQUE) (Cost
=3 Card=1)
5 2 TABLE ACCESS (BY INDEX ROWID) OF 'TRANSACTION_DATA_ARC
' (Cost=5 Card=918 Bytes=20196)
6 5 INDEX (RANGE SCAN) OF 'NK_TDAR_7' (NON-UNIQUE) (Cost
=8 Card=4760)
> prob is that query is taking more time 45 min than earlier (10 sec).
Then something must have changed (data growth / stale statistics / ...?).
You should post as many details as possible, as described in the FAQ; see:
*3. How to improve the performance of my query? / My query is running slow*.
When your query takes too long...
How to post a SQL statement tuning request
SQL and PL/SQL FAQ
Also, given your database version, using NOT IN instead of NOT EXISTS might make a difference (but they're not the same).
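Regarding that last point, a generic illustration (hypothetical tables `t_a`/`t_b`, not the query above) of why the two forms differ:

```sql
-- NOT EXISTS is null-safe: rows of t_a with no match in t_b are returned
-- even when t_b.key contains NULLs.
SELECT a.* FROM t_a a
WHERE NOT EXISTS (SELECT NULL FROM t_b b WHERE b.key = a.key);

-- NOT IN returns no rows at all as soon as the subquery yields a single NULL,
-- so it is only equivalent when t_b.key is declared (or filtered) NOT NULL.
SELECT a.* FROM t_a a
WHERE a.key NOT IN (SELECT b.key FROM t_b b);
```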
See: SQL and PL/SQL FAQ -
Simple query is taking long time
Hi Experts,
The below query is taking a long time.
[code]SELECT FS.*
FROM ORL.FAX_STAGE FS
INNER JOIN
ORL.FAX_SOURCE FSRC
INNER JOIN
GLOBAL_BU_MAPPING GBM
ON GBM.BU_ID = FSRC.BUID
ON UPPER (FSRC.FAX_NUMBER) = UPPER (FS.DESTINATION)
WHERE FSRC.IS_DELETED = 'N'
AND GBM.BU_ID IS NOT NULL
AND UPPER (FS.FAX_STATUS) ='COMPLETED';[/code]
this query is returning 1645457 records.
[code]PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 625K| 341M| 45113 (1)|
| 1 | HASH JOIN | | 625K| 341M| 45113 (1)|
| 2 | NESTED LOOPS | | 611 | 14664 | 22 (0)|
| 3 | TABLE ACCESS FULL| FAX_SOURCE | 2290 | 48090 | 22 (0)|
| 4 | INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID | 1 | 3 | 0 (0)|
| 5 | TABLE ACCESS FULL | FAX_STAGE | 2324K| 1214M| 45076 (1)|
PLAN_TABLE_OUTPUT
Note
- 'PLAN_TABLE' is old version
15 rows selected.[/code]
The distinct number of records in each table.
[code]SELECT FAX_STATUS,count(*)
FROM fax_STAGE
GROUP BY FAX_STATUS;
FAX_STATUS COUNT(*)
BROKEN 10
Broken - New 9
Completed 2324493
New 20
SELECT is_deleted,COUNT(*)
FROM FAX_SOURCE
GROUP BY IS_DELETED;
IS_DELETED COUNT(*)
N 2290
Y 78[/code]
Total number of records in each table.
[code]SELECT COUNT(*) FROM ORL.FAX_SOURCE FSRC-- 2368
SELECT COUNT(*) FROM ORL.FAX_STAGE--2324532
SELECT COUNT(*) FROM APPS_GLOBAL.GLOBAL_BU_MAPPING--9
[/code]
To improve the performance of this query I have created the following indexes.
[code]Functional based index on UPPER (FSRC.FAX_NUMBER) ,UPPER (FS.DESTINATION) and UPPER (FS.FAX_STATUS).
Bitmap index on FSRC.IS_DELETED.
Normal Index on GBM.BU_ID and FSRC.BUID.
[/code]
But still the performance is bad for this query.
What can I do apart from this to improve the performance of this query.
Please help me .
Thanks in advance.
I have created the following indexes:
CREATE INDEX ORL.IDX_DESTINATION_RAM ON ORL.FAX_STAGE(UPPER("DESTINATION"))
CREATE INDEX ORL.IDX_FAX_STATUS_RAM ON ORL.FAX_STAGE(LOWER("FAX_STATUS"))
CREATE INDEX ORL.IDX_UPPER_FAX_STATUS_RAM ON ORL.FAX_STAGE(UPPER("FAX_STATUS"))
CREATE INDEX ORL.IDX_BUID_RAM ON ORL.FAX_SOURCE(BUID)
CREATE INDEX ORL.IDX_FAX_NUMBER_RAM ON ORL.FAX_SOURCE(UPPER("FAX_NUMBER"))
CREATE BITMAP INDEX ORL.IDX_IS_DELETED_RAM ON ORL.FAX_SOURCE(IS_DELETED)
After creating these indexes the performance improved.
But our DBA said that the new BITMAP index on the FAX_SOURCE table (ORL.IDX_IS_DELETED_RAM) can cause locks
on multiple rows if the IS_DELETED column is in use, and asked us to proceed with detailed tests.
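The locking concern can be illustrated with a hypothetical two-session scenario (the BUID values are made up):

```sql
-- A bitmap index entry maps one key value to a range of rowids, so row-level
-- DML locks the whole entry, not just one row.

-- Session 1 (transaction left uncommitted):
UPDATE fax_source SET is_deleted = 'Y' WHERE buid = 101;

-- Session 2 can now block on the bitmap entry for IS_DELETED = 'N',
-- even though it updates a completely different row:
UPDATE fax_source SET is_deleted = 'Y' WHERE buid = 202;
```

This is why bitmap indexes are generally recommended for read-mostly columns, not for columns updated by concurrent OLTP sessions.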
I am sending the explain plan before creating indexes and after indexes has been created.
SELECT FS.*
FROM ORL.FAX_STAGE FS
INNER JOIN
ORL.FAX_SOURCE FSRC
INNER JOIN
GLOBAL_BU_MAPPING GBM
ON GBM.BU_ID = FSRC.BUID
ON UPPER (FSRC.FAX_NUMBER) = UPPER (FS.DESTINATION)
WHERE FSRC.IS_DELETED = 'N'
AND GBM.BU_ID IS NOT NULL
AND UPPER (FS.FAX_STATUS) =:B1;
--OLD without indexes
PLAN_TABLE_OUTPUT
Plan hash value: 3076973749
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 141K| 85M| 45130 (1)| 00:09:02 |
|* 1 | HASH JOIN | | 141K| 85M| 45130 (1)| 00:09:02 |
| 2 | NESTED LOOPS | | 611 | 18330 | 22 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| FAX_SOURCE | 2290 | 59540 | 22 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID | 1 | 4 | 0 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | FAX_STAGE | 23245 | 13M| 45106 (1)| 00:09:02 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
1 - access(UPPER("FSRC"."FAX_NUMBER")=UPPER("FS"."DESTINATION"))
3 - filter("FSRC"."IS_DELETED"='N')
4 - access("GBM"."BU_ID"="FSRC"."BUID")
filter("GBM"."BU_ID" IS NOT NULL)
5 - filter(UPPER("FS"."FAX_STATUS")=SYS_OP_C2C(:B1))
21 rows selected.
--NEW with indexes.
PLAN_TABLE_OUTPUT
Plan hash value: 665032407
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5995 | 3986K| 3117 (1)| 00:00:38 |
|* 1 | HASH JOIN | | 5995 | 3986K| 3117 (1)| 00:00:38 |
| 2 | NESTED LOOPS | | 611 | 47658 | 20 (5)| 00:00:01 |
|* 3 | VIEW | index$_join$_002 | 2290 | 165K| 20 (5)| 00:00:01 |
|* 4 | HASH JOIN | | | | | |
|* 5 | HASH JOIN | | | | | |
PLAN_TABLE_OUTPUT
| 6 | BITMAP CONVERSION TO ROWIDS| | 2290 | 165K| 1 (0)| 00:00:01 |
|* 7 | BITMAP INDEX SINGLE VALUE | IDX_IS_DELETED_RAM | | | | |
| 8 | INDEX FAST FULL SCAN | IDX_BUID_RAM | 2290 | 165K| 8 (0)| 00:00:01 |
| 9 | INDEX FAST FULL SCAN | IDX_FAX_NUMBER_RAM | 2290 | 165K| 14 (0)| 00:00:01 |
|* 10 | INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID | 1 | 4 | 0 (0)| 00:00:01 |
| 11 | TABLE ACCESS BY INDEX ROWID | FAX_STAGE | 23245 | 13M| 3096 (1)| 00:00:38 |
|* 12 | INDEX RANGE SCAN | IDX_UPPER_FAX_STATUS_RAM | 9298 | | 2434 (1)| 00:00:30 |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
1 - access(UPPER("DESTINATION")="FSRC"."SYS_NC00035$")
3 - filter("FSRC"."IS_DELETED"='N')
4 - access(ROWID=ROWID)
5 - access(ROWID=ROWID)
7 - access("FSRC"."IS_DELETED"='N')
10 - access("GBM"."BU_ID"="FSRC"."BUID")
filter("GBM"."BU_ID" IS NOT NULL)
12 - access(UPPER("FAX_STATUS")=SYS_OP_C2C(:B1))
31 rows selected
Please confirm the DBA's comment. Will this bitmap index lock rows in my case?
Thanks. -
Query is taking more time to execute
Hi,
The query is taking more time to execute.
But when I execute the same query on another server, it gives immediate output.
What is the reason for this?
Thanks in advance.
'My car doesn't start, please help me to start my car'
Do you think we are clairvoyant?
Or is your salary subtracted for every letter you type here?
Please be aware this is not a chatroom, and we can not see your webcam.
Sybrand Bakker
Senior Oracle DBA -
Query is taking more time to execute in PROD
Hi All,
Can anyone tell me why this query is taking more time? When I use it for a single trx_number record it works fine, but when I try to use it for all the records it does not fetch anything and just keeps running.
SELECT DISTINCT OOH.HEADER_ID
,OOH.ORG_ID
,ct.CUSTOMER_TRX_ID
,ool.ship_from_org_id
,ct.trx_number IDP_SHIPMENT_ID
,ctt.type STATUS_CODE
,SYSDATE STATUS_DATE
,ooh.attribute16 IDP_ORDER_NBR --Change based on testing on 21-JUL-2010 in UAT
,lpad(rac_bill.account_number,6,0) IDP_BILL_TO_CUSTOMER_NBR
,rac_bill.orig_system_reference
,rac_ship_party.party_name SHIP_TO_NAME
,raa_ship_loc.address1 SHIP_TO_ADDR1
,raa_ship_loc.address2 SHIP_TO_ADDR2
,raa_ship_loc.address3 SHIP_TO_ADDR3
,raa_ship_loc.address4 SHIP_TO_ADDR4
,raa_ship_loc.city SHIP_TO_CITY
,NVL(raa_ship_loc.state,raa_ship_loc.province) SHIP_TO_STATE
,raa_ship_loc.country SHIP_TO_COUNTRY_NAME
,raa_ship_loc.postal_code SHIP_TO_ZIP
,ooh.CUST_PO_NUMBER CUSTOMER_ORDER_NBR
,ooh.creation_date CUSTOMER_ORDER_DATE
,ool.actual_shipment_date DATE_SHIPPED
,DECODE(mp.organization_code,'CHP', 'CHESAPEAKE'
,'CSB', 'CHESAPEAKE'
,'DEP', 'CHESAPEAKE'
,'CHESAPEAKE') SHIPPED_FROM_LOCATION --'MEMPHIS' --'HOUSTON'
,ooh.freight_carrier_code FREIGHT_CARRIER
,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('FREIGHT',ct.customer_trx_id,ct.org_id),0)
+ NVL(XX_FSG_NA_FASTRAQ_IFACE.get_line_fr_amt ('FREIGHT',ct.customer_trx_id,ct.org_id),0)FREIGHT_CHARGE
,ooh.freight_terms_code FREIGHT_TERMS
,'' IDP_BILL_OF_LADING
,(SELECT WAYBILL
FROM WSH_DELIVERY_DETAILS_OE_V
WHERE -1=-1
AND SOURCE_HEADER_ID = ooh.header_id
AND SOURCE_LINE_ID = ool.line_id
AND ROWNUM =1) WAYBILL_CARRIER
,'' CONTAINERS
,ct.trx_number INVOICE_NBR
,ct.trx_date INVOICE_DATE
,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('LINE',ct.customer_trx_id,ct.org_id),0) +
NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('TAX',ct.customer_trx_id,ct.org_id),0) +
NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('FREIGHT',ct.customer_trx_id,ct.org_id),0)INVOICE_AMOUNT
,NULL IDP_TAX_IDENTIFICATION_NBR
,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('TAX',ct.customer_trx_id,ct.org_id),0) TAX_AMOUNT_1
,NULL TAX_DESC_1
,NULL TAX_AMOUNT_2
,NULL TAX_DESC_2
,rt.name PAYMENT_TERMS
,NULL RELATED_INVOICE_NBR
,'Y' INVOICE_PRINT_FLAG
FROM ra_customer_trx_all ct
,ra_cust_trx_types_all ctt
,hz_cust_accounts rac_ship
,hz_cust_accounts rac_bill
,hz_parties rac_ship_party
,hz_locations raa_ship_loc
,hz_party_sites raa_ship_ps
,hz_cust_acct_sites_all raa_ship
,hz_cust_site_uses_all su_ship
,ra_customer_trx_lines_all rctl
,oe_order_lines_all ool
,oe_order_headers_all ooh
,mtl_parameters mp
,ra_terms rt
,OE_ORDER_SOURCES oos
,XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V
WHERE ct.cust_trx_type_id = ctt.cust_trx_type_id
AND ctt.TYPE <> 'BR'
AND ct.org_id = ctt.org_id
AND ct.ship_to_customer_id = rac_ship.cust_account_id
AND ct.bill_to_customer_id = rac_bill.cust_account_id
AND rac_ship.party_id = rac_ship_party.party_id
AND su_ship.cust_acct_site_id = raa_ship.cust_acct_site_id
AND raa_ship.party_site_id = raa_ship_ps.party_site_id
AND raa_ship_loc.location_id = raa_ship_ps.location_id
AND ct.ship_to_site_use_id = su_ship.site_use_id
AND su_ship.org_id = ct.org_id
AND raa_ship.org_id = ct.org_id
AND ct.customer_trx_id = rctl.customer_trx_id
AND ct.org_id = rctl.org_id
AND rctl.interface_line_attribute6 = to_char(ool.line_id)
AND rctl.org_id = ool.org_id
AND ool.header_id = ooh.header_id
AND ool.org_id = ooh.org_id
AND mp.organization_id = ool.ship_from_org_id
AND ooh.payment_term_id = rt.term_id
AND xla_ael_sl_v.last_update_date >= NVL(p_last_update_date,xla_ael_sl_v.last_update_date)
AND ooh.order_source_id = oos.order_source_id --Change based on testing on 19-May-2010
AND oos.name = 'FASTRAQ' --Change based on testing on 19-May-2010
AND ooh.org_id = g_org_id --Change based on testing on 19-May-2010
AND ool.flow_status_code = 'CLOSED'
AND xla_ael_sl_v.trx_hdr_id = ct.customer_trx_id
AND trx_hdr_table = 'CT'
AND xla_ael_sl_v.gl_transfer_status = 'Y'
AND xla_ael_sl_v.accounted_dr IS NOT NULL
AND xla_ael_sl_v.org_id = ct.org_id;
-- AND ct.trx_number = '2000080';
Hello Friend,
Your query will definitely take more time, or even fail in PROD, because of the way it is written. Here are a few observations that may help:
1. XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V: Never use a view inside such a long query, because a view is just a window to the records,
and when it is joined to other tables, all the tables used to create the view also become part of the join condition.
First of all, please check whether you really need this view. I guess you are using it to check whether the records have been created as journal entries or not?
Please check the possibility of finding that through other AR tables.
2. Remove the _ALL tables and instead use the corresponding org-specific views (if you are on 11i) or the synonyms (in R12).
For example: for ra_cust_trx_types_all, use ra_cust_trx_types.
This will ensure that the query executes only for those ORG_IDs which are assigned to that responsibility.
3. Check with the DBA whether GATHER SCHEMA STATS has been run at least for the ONT and RA tables.
You can also check the same using:
SELECT last_analyzed FROM all_tables WHERE table_name = 'RA_CUSTOMER_TRX_ALL';
(note that table names are stored in uppercase in the data dictionary).
If the tables are not analyzed, the CBO will not be able to tune your query.
4. Try to remove the DISTINCT keyword. This is a major cause of the problem.
5. If it is a report, try to split the logic into separate queries (using a procedure), populate the whole data set into a custom staging table, and use that custom table for generating the report.
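As a sketch of point 3 — assuming you have the necessary privileges and that the table lives in the AR schema (adjust the owner if yours differs) — you can check and refresh statistics for a single table like this:

```sql
-- Check when the table was last analyzed (dictionary names are uppercase)
SELECT table_name, last_analyzed
FROM   all_tables
WHERE  table_name = 'RA_CUSTOMER_TRX_ALL';

-- Gather fresh statistics for one table (owner 'AR' is an assumption)
BEGIN
   DBMS_STATS.gather_table_stats(
      ownname          => 'AR',
      tabname          => 'RA_CUSTOMER_TRX_ALL',
      cascade          => TRUE,   -- also gather statistics on its indexes
      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/
```

In an EBS environment the supported route is still the Gather Schema Statistics concurrent program; the direct DBMS_STATS call above is just a quick way to refresh one table while testing.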
Thanks,
Neeraj Shrivastava
[email protected]
Edited by: user9352949 on Oct 1, 2010 8:02 PM
Edited by: user9352949 on Oct 1, 2010 8:03 PM -
Parsing the query takes too much time.
Hello.
I am hitting a bug in Oracle XE (parsing some queries takes too much time).
A similar bug was previously found in the commercial release and was successfully fixed (SR Number 3-3301916511).
Please, raise a bug for Oracle XE.
Steps to reproduce the issue:
1. Extract files from testcase_dump.zip and testcase_sql.zip
2. Under username SYSTEM execute script schema.sql
3. Import data from file TESTCASE14.DMP
4. Under username SYSTEM execute script testcase14.sql
SQL text can be downloaded from http://files.mail.ru/DJTTE3
Datapump dump of testcase can be downloaded from http://files.mail.ru/EC1J36
Regards,
Viacheslav.
Bug number? Version the fix applies to?
A relevant Note that describes the problem and points out bug/patch availability?
With a little luck some PSEs might be "backported", since 11g XE is not the base release, e.g. 11.2.0.1. -
Update query which is taking more time
Hi
I am running an update query which is taking more time; any help to make this run faster?
update arm538e_tmp t
set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
where m.vndr#=t.vndr#
and m.cust_type_cd=t.cust_type
and m.cust_type_cd<>13
and m.yymm between 201301 and 201303
group by m.vndr#,m.cust_type_cd;
help will be appreciable
thank you
Edited by: 960991 on Apr 16, 2013 7:11 AM
960991 wrote:
Hi
I am running an update query which takeing more time any help to run this fast.
update arm538e_tmp t
set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
where m.vndr#=t.vndr#
and m.cust_type_cd=t.cust_type
and m.cust_type_cd<>13
and m.yymm between 201301 and 201303
group by m.vndr#,m.cust_type_cd;
help will be appreciable
thank you
Updates with subqueries can be slow. Get an execution plan for the update to see what SQL is doing.
Some things to look at ...
1. Are you sure you posted the right SQL? I could not "balance" the parentheses - 4 "(" and 3 ")".
2. The unnecessary "(" ")" around "sum" in the subquery are confusing.
3. Updates with subqueries can be slow. You might improve performance by computing the t.qtr5 value beforehand and using a variable instead of the subquery.
4. The subquery appears to be correlated - good! Make sure the subquery is properly indexed if it reads < 20% of the rows in the table (this figure depends on the version of Oracle).
5. Is qtr5 part of an index? It is a bad idea to update indexed columns -
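One common rewrite of this pattern is a MERGE that aggregates once up front instead of re-running the correlated subquery per target row. This is only a sketch: it assumes the missing ")" closes the subquery and that (vndr#, cust_type) identifies each target row — both taken from the poster's SQL, not verified.

```sql
-- Hypothetical MERGE rewrite of the poster's correlated UPDATE.
-- The aggregate runs once over mnthly_sales_actvty, then each matching
-- row of arm538e_tmp is updated from the pre-computed result.
MERGE INTO arm538e_tmp t
USING (
   SELECT m.vndr#,
          m.cust_type_cd,
          SUM(NVL(m.net_sales_value, 0)) / 1000 AS qtr5_val
   FROM   mnthly_sales_actvty m
   WHERE  m.cust_type_cd <> 13
   AND    m.yymm BETWEEN 201301 AND 201303
   GROUP  BY m.vndr#, m.cust_type_cd
) s
ON (s.vndr# = t.vndr# AND s.cust_type_cd = t.cust_type)
WHEN MATCHED THEN
   UPDATE SET t.qtr5 = s.qtr5_val;
```

One behavioral difference to note: the original UPDATE sets qtr5 to NULL for rows with no matching activity, whereas MERGE leaves unmatched rows untouched. If the NULL-ing behavior is required, pre-set the column or keep the UPDATE form.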
Discoverer report is taking much time to open
Hi
All the Discoverer reports are taking much time to open; even a query in an LOV is taking 20-25 minutes. We have restarted the services but found no improvement.
Please suggest what can be done; my application is on 12.0.6.
Regards
This topic was discussed many times in the forum before, please see old threads for details and for the docs you need to refer to -- https://forums.oracle.com/forums/search.jspa?threadID=&q=Discoverer+AND+Slow&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
Thanks,
Hussein -
Adding column is taking much time. How to avoid?
ALTER TABLE CONTACT_DETAIL
ADD (ISIMDSCONTACT_F NUMBER(1) DEFAULT 0 NOT NULL
,ISREACHCONTACT_F NUMBER(1) DEFAULT 0 NOT NULL);
Is there any way that to speed up the execution time of the query?
It's more than 24 hrs completed after started running the above script.
I do not know why it is taking much time.
Size of the table is 30 MB.
To add a column, the row directory of every record must be rewritten.
Obviously this will take time and produce redo.
Whenever something is slow, the first question you need to answer is
'What is it waiting for?' You can find out by investigating the various v$ views.
Also, after more than 200 'I can not be bothered to do any research on my own' questions, you should know you don't post here without posting a four-digit version number and a platform,
as volunteers aren't mind readers.
If you want to continue to withhold information, please consider NOT posting here.
Sybrand Bakker
Senior Oracle DBA
Experts: those who did read the documentation and can be bothered to investigate their own problems. -
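A minimal sketch of the "What is it waiting for?" check — assuming you can query the v$ views and know the SID of the session running the ALTER TABLE (the :sid bind is a placeholder):

```sql
-- See what the DDL session is currently waiting on.
-- v$session carries the wait columns directly from 10g onwards;
-- on older versions join to v$session_wait instead.
SELECT s.sid,
       s.serial#,
       s.status,
       s.event,            -- current or most recent wait event
       s.wait_class,
       s.seconds_in_wait
FROM   v$session s
WHERE  s.sid = :sid;       -- SID of the session running the ALTER TABLE
```

Worth knowing in this context: from 11g onwards, adding a column with DEFAULT ... NOT NULL is a metadata-only operation and completes almost instantly, so a 24-hour runtime itself hints at an older release — one more reason the responder asks for the version number.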
Taking much time when trying to Drill a characterstic in the BW Report.
Hi All,
When we execute the BW report, it takes nearly 1 to 2 minutes; but when we try to drill down on a characteristic it takes much longer, nearly 30 minutes to 1 hour, and throws the error message:
"An error has occurred during loading. Please look in the upper frame for further information."
I have executed this query in RSRT and checked the query properties;
this query brings the data directly from aggregates, but some characteristics are not available in the aggregates.
So... after execution, drilling down takes much time for the characteristics which are not available in the aggregates. For the characteristics which are available in the aggregates it takes only 2 to 3 minutes.
How can we drill down on the characteristics which are not available in the aggregates without the long runtime and the error?
Could you kindly give any solution for this?
Thanks & Regards,
Raju. E
Hi,
The only solution is to include all the characteristics used in the report in the aggregates; otherwise this is the issue you will face.
Just create a proposal for aggregates before creating any new ones, as it will give you an idea of which are most used.
Also you should make sure that all the navigation characteristics are part of the aggregates.
Thanks
Ajeet -
Database table AUFM hit is taking much time even though a secondary index was created
Hi Friends,
There is report for Goods movement rel. to Service orders + Acc.indicator.
We have two testing systems (EBQ for developers and PEQ from the client side).
The EBQ system contains a replica of PEQ every month.
This report is not taking much time in EBQ, but it is taking much time in PEQ. For the selection criteria I have given, both systems have the same data (getting the same output).
The report has the following fields on the selection criteria:
A_MJAHR Material Doc. Year (Mandatory)
S_BLDAT Document Date(Optional)
S_BUDAT Posting Date(Optional)
S_LGORT Storage Location(Optional)
S_MATNR Material(Optional)
S_MBLNR Material Document (Optional)
S_WERKS Plant(Optional)
The client is not agreeing to make Material Document mandatory.
The main (first) table hit is on the AUFM table. As there are non-key fields as well in the WHERE condition, we have also created a secondary index for the AUFM table on the following fields:
BLDAT
BUDAT
MATNR
WERKS
LGORT
Even then, in the PEQ system the report takes a very long time; sometimes we do not even get the ALV output.
What can be done to get the report executed very fast?
<removed by moderator>
Part of the report source code is as below:
<long code part removed by moderator>
Thanks and Regards,
Rama chary.P
Moderator message: please stay within the 2500 character limit to preserve formatting, only post relevant portions of the code, also please read the following sticky thread before posting.
Please Read before Posting in the Performance and Tuning Forum
locked by: Thomas Zloch on Sep 15, 2010 11:40 AM -
LOV is slow; after selecting a value it takes much time to default
Hi,
I have a dependent LOV. The master LOV executes fine and populates its field quickly. But the child LOV is very slow; after selecting a value it takes much time to default.
Can anyone please tell me if there is any way to default the value quickly after it is selected?
Thanks,
Mahesh
Hi Gyan,
Same issue in the TST and PROD instances.
Even if my search criteria return only 1 record, after selecting that value it takes much time to default the value into the field.
Please advise. Thanks for your quick response.
Thanks,
Mahesh