Slow running SQL
I have the following SQL in the data model. The report is very slow and I have been asked to tune the query. Can somebody help me with this? I am very new to reports and SQL.
Thanks
KK
SELECT /*+ RULE */
vipv.EMD_code,
vipv.org_jsa, vipv.org,
vipv.dest_jsa, vipv.dest,
TO_CHAR(vipv.trail_dt,'DD-MON-YYYY') trail_dt,
vipv.NPM_WTM WTM_code,
vipv.WTM_name WTM, vipv.voy_code, vipv.line_code,
SUM(vipv.amount) amount, vipv.currency,
vipv.eqp_size,
SUM(vipv.eqp_cnt) eqp_cnt
FROM location l,
location l2,
VIPV vipv
WHERE 1=1
--added; 11/17/03
AND l.LOCATION_code = vipv.origin_LOCATION
AND l2.LOCATION_code = vipv.dest_LOCATION
AND ( :p_location IS NULL --for origin
OR
(:p_operator = '=' AND :p_location_code ='EMD' AND l.EMD_code = :p_location)
OR
(:p_operator = '=' AND :p_location_code ='FOG' AND l.FOG_code = :p_location)
OR
(:p_operator = '!=' AND :p_location_code ='EMD' AND l.EMD_code != :p_location)
OR
(:p_operator = '!=' AND :p_location_code ='FOG' AND l.FOG_code != :p_location))
AND ( :p_location2 IS NULL --for destination
OR
(:p_operator2 = '=' AND :p_location_code2 ='EMD' AND l2.EMD_code = :p_location2)
OR
(:p_operator2 = '=' AND :p_location_code2 ='FOG' AND l2.FOG_code = :p_location2)
OR
(:p_operator2 = '!=' AND :p_location_code2 ='EMD' AND l2.EMD_code != :p_location2)
OR
(:p_operator2 = '!=' AND :p_location_code2 ='FOG' AND l2.FOG_code != :p_location2))
AND ('ALL' = UPPER(:p_ctrl_pt)
OR
vipv.ctrl_pt = UPPER(:p_ctrl_pt)
OR
(:p_ctrl_pt ='NOT QFB' AND vipv.ctrl_pt != 'QFB'))
AND (vipv.exp_fg = UPPER(:p_exp_FG)
OR
:p_exp_FG = 'A')
AND vipv.sod_code = UPPER(:p_sod_code)
AND vipv.rpt_month = UPPER(:p_rpt_month)
--AND vipv.line_code != 'FAS'
AND vipv.rpt_year = :p_rpt_year
AND vipv.detail_code = :p_detail_code
GROUP BY
vipv.EMD_code,
vipv.org_jsa, vipv.org,
vipv.dest_jsa, vipv.dest,
TO_CHAR(vipv.trail_dt,'DD-MON-YYYY'),
vipv.NPM_WTM,
vipv.WTM_name, vipv.voy_code, vipv.line_code,
vipv.currency,
vipv.eqp_size
ORDER BY 1,2,3,4,5,8,9
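As a first step, two things in the query itself are worth noting: the RULE hint forces the rule-based optimizer, and the long chains of OR'd bind-variable comparisons force the database to evaluate every branch for every row, typically defeating any index on EMD_code or FOG_code. In Oracle Reports, one common workaround is a lexical parameter built in the After Parameter Form trigger, so only the branch the user actually chose reaches the optimizer. A rough sketch (the parameter &where_origin and the trigger logic are illustrative assumptions, not part of the original report):

```sql
-- Sketch only: &where_origin is a hypothetical lexical parameter.
-- In the After Parameter Form trigger (PL/SQL), set it to the single
-- branch the user actually selected, for example:
--   IF :p_location IS NULL THEN
--     :where_origin := '';
--   ELSIF :p_location_code = 'EMD' THEN
--     :where_origin := 'AND l.EMD_code ' || :p_operator || ' :p_location';
--   END IF;
-- The data-model query then references it directly (no RULE hint):
SELECT vipv.EMD_code, vipv.org_jsa, vipv.org
FROM location l, location l2, VIPV vipv
WHERE l.location_code = vipv.origin_location
  AND l2.location_code = vipv.dest_location
  &where_origin
```

With the dead branches gone, the cost-based optimizer can pick an indexed access path instead of scanning and testing every OR condition.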
Similar Messages
-
Performance is too slow on SQL Azure box
Hi,
Performance is too slow on SQL Azure box (Located in Europe)
Below query returns 500,000 rows in 18 Min. on SQL Azure box (connected via SSMS, located in India)
SELECT * FROM TABLE_1
Whereas, on the local server it returns 500,000 rows in 30 sec.
SQL Azure configuration:
Service Tier/Performance Level : Premium/P1
DTU : 100
MAX DB Size : 500GB
Max Worker Threads : 200
Max Sessions : 2400
Benchmark Transaction Rate : 105 transactions per second
Predictability : Best
Any suggestion would be highly appreciated.
Thanks,
Hello,
Can you please explain in a little more detail the scenario you testing? Are you comparing a SQL Database in Europe against a SQL Database in India? Or a SQL Database with a local, on-premise SQL Server installation?
In case of the first scenario, the roundtrip latency for the connection to the datacenter might play a role.
If you are comparing to a local installation, please note that you might be running against completely different hardware specifications and without network delay, resulting in very different results.
In both cases you can use the below blog post to assess the resource utilization of the SQL Database during the operation:
http://azure.microsoft.com/blog/2014/09/11/azure-sql-database-introduces-new-near-real-time-performance-metrics/
If the DB utilization goes up to 100%, you might have to consider upgrading to a higher performance level to achieve the throughput you are looking for.
Thanks,
Jan -
Performance too Slow on SQL Azure box
Hi,
Performance is too slow on SQL Azure box:
Below query returns 500,000 rows in 18 Min. on SQL Azure box (connected via SSMS)
SELECT * FROM TABLE_1
Whereas, on the local server it returns 500,000 rows in 30 sec.
SQL Azure configuration:
Service Tier/Performance Level : Premium/P1
DTU : 100
MAX DB Size : 500GB
Max Worker Threads : 200
Max Sessions : 2400
Benchmark Transaction Rate : 105 transactions per second
Predictability : Best
Thanks,
Hello,
Please refer to the following document too:
http://download.microsoft.com/download/D/2/0/D20E1C5F-72EA-4505-9F26-FEF9550EFD44/Performance%20Guidance%20for%20SQL%20Server%20in%20Windows%20Azure%20Virtual%20Machines.docx
Hope this helps.
Regards,
Alberto Morillo
SQLCoffee.com -
Server performance slow using sql server 2005
Dear Team,
Kindly assist me in solving the performance issue.
The server is very slow and transactions are taking more and more time.
SAP version details:
- SAP ECC 6.0 sr3 version.
-DB SQl server 2005.
Performance is very slow; it is taking a long time to execute the transactions.
Appreciate for quick response.
Thanks & Regards
Kumarvyas
Dear Team,
In T- code: DB13
Space overview
Found an error:" ERROR CONDITION EXISTS- CHECK FILES TAB ".
in the files TAB: Datafiles:
Exception in red for <SID>DATA1, <SID>DATA2, <SID>DATA3
Freepct: 0, 0, 0
Free pct: free percentage of space in a SQL Server file tab
How do I extend the data files in SQL Server 2005?
Regards
Kumar -
SQL Server 2000 procedure executing much slower in SQL 2012
Hi,
I've migrated my database from SQL 2000 to SQL 2012 using SQL 2008 as an intermediate step. After migration, I found that one of my procedures times out from the web site. When I run that procedure in Management Studio, it takes 42 seconds to display the result. The same
query executed in 3 seconds in SQL 2000 (the first run takes 10 seconds and the second run took only 3 seconds). But in SQL 2012 the second and third runs also took 36 seconds. Can anyone explain why the query takes longer instead
of less time after the upgrade?
Our prod deployment date is approaching so any quick help is appreciated.
Thanks in advance.
You need to compare the execution plans of queries running in 2000 and 2012 to pinpoint why there is a difference.
However, have you made sure that you have reindexed the indexes and updated the stats and you still see the difference? If not, please do that and test again.
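If the reindex/statistics step has not been done yet, a minimal sketch looks like this (the table name is a placeholder, not from the original post):

```sql
-- Rebuild all indexes on one of the tables the procedure touches,
-- then refresh its statistics with a full scan.
ALTER INDEX ALL ON dbo.YourTable REBUILD;
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;
-- Or, more coarsely, refresh statistics across the whole database:
EXEC sp_updatestats;
```

Stale statistics carried over from an upgrade are a common cause of plan regressions, so it is worth ruling this out before deeper analysis.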
Could you post the execution plans?
Also read this -
http://blogs.msdn.com/b/psssql/archive/2015/01/30/frequently-used-knobs-to-tune-a-busy-sql-server.aspx
Regards, Ashwin Menon My Blog - http://sqllearnings.com -
Hello, I have a stored procedure that I am using to load 5 tables from a staging table. It reads each line and populates these tables. The staging table has 200K rows in it and I commit every 10,000 rows. Once released to production the staging table will have around 23 million rows in it to be processed. It is running very poorly. I am running Oracle 9.2.0.6. Here is the stored procedure. Any suggestions on how to make it faster? Is there a way to use bulk binding/inserts, and would that be better?
Thank you,
David
CREATE OR REPLACE procedure SP_LOAD_STAGED_CUST (runtime_minutes int)
is
staged_rec STAGE_CUSTOMER%ROWTYPE;
end_time date default sysdate + (nvl(runtime_minutes,1440)/1440);
test_row_id number(38);
in_table_cnt number(38);
BEGIN
-- POPULATE LOCATION AND ASSOCIATE TABLES AS NEEDED
insert into TMP_LOCATION (update_location_cd, country_cd, locale_cd)
select distinct update_location_cd,
country_cd,
locale_cd
from stage_customer;
update TMP_Location
set location_id = location_seq.nextval;
insert /*+ APPEND */ into location
( location_id, location_cd, location_desc, short_description, sales_channel_id, location_type_id,
location_category_id, addr1, addr2, addr3, city, state_cd, postal_cd, country_cd, region_id,
timezone_id, locale_cd, currency_cd, phone_num, alt_phone_num, fax_num, alt_fax_num,
email_addr, alt_email_addr, is_default, create_modify_timestamp)
select
location_id,
update_location_cd,
null,
null,
fn_sales_channel_default(),
null,
null,
null, null, null,
null, null, null ,
nvl(country_cd, 'USA'),
null, null,
locale_cd,
'USD',
null, null, null, null, null, null,
0,
sysdate
from TMP_LOCATION
where update_location_cd not in (select location_cd from location);
commit;
insert into TMP_ASSOCIATE (associate_number, update_location_cd)
select associate_number, min(update_location_cd)
from stage_customer
where associate_number is not null
group by associate_number;
update TMP_ASSOCIATE
set associate_id = associate_seq.nextval;
insert /*+ APPEND */ into associate
select
associate_id ,
null,
associate_number,
(select nvl(location_id, fn_location_default())
from location
where location.location_cd = tmp_associate.update_location_cd)
from TMP_ASSOCIATE
where not exists (select associate_id from associate
where associate.associate_number = tmp_associate.associate_number);
delete from tmp_associate;
commit;
insert into TMP_ASSOCIATE (associate_number, update_location_cd)
select alt_associate_number, min(update_location_cd)
from stage_customer
where alt_associate_number is not null
group by alt_associate_number;
update TMP_ASSOCIATE
set associate_id = associate_seq.nextval;
insert /*+ APPEND */ into associate
select
associate_id ,
null,
associate_number,
(select nvl(location_id, fn_location_default())
from location
where location.location_cd = tmp_associate.update_location_cd)
from TMP_ASSOCIATE
where not exists (select associate_id from associate
where associate.associate_number = tmp_associate.associate_number);
commit;
select min(row_id) -1 into test_row_id from stage_customer ;
WHILE sysdate < end_time
LOOP
select *
into staged_rec
from Stage_Customer
where row_id = test_row_id + 1;
if staged_rec.row_id is null
then
COMMIT;
EXIT;
end if;
-- EXIT WHEN staged_rec.row_id is null;
-- INSERTS TO CUSTOMER TABLE (IN LOOP - DATA FROM STAGE CUSTOMER TABLE)
insert /*+ APPEND */ into customer (
customer_id,
customer_acct_num,
account_type_id,
acct_status,
discount_percent,
discount_code,
uses_purch_order,
business_name,
tax_exempt_prompt,
is_deleted,
name_prefix,
first_name,
middle_name,
last_name,
name_suffix,
nick_name,
alt_first_name,
alt_last_name,
marketing_source_id,
country_cd,
locale_cd,
email_addr,
email_addr_valid,
alt_email_addr,
alt_email_addr_valid,
birth_date,
acquisition_sales_channel_id,
acquisition_location_id,
home_location_id,
salesperson_id,
alt_salesperson_id,
customer_login_name,
age_range_id,
demographic_role_id,
education_level_id,
gender_id,
household_count_id,
housing_type_id,
income_range_id,
lifecycle_type_id,
lifetime_value_score_id,
marital_status_id,
religious_affil_id)
values (
staged_rec.row_id,
staged_rec.customer_acct_num,
nvl(staged_rec.account_type_id, fn_account_type_default()),
1,
staged_rec.discount_percent,
staged_rec.discount_cd,
staged_rec.pos_allow_purchase_order_flag,
staged_rec.business_name,
staged_rec.pos_tax_prompt,
staged_rec.is_deleted,
staged_rec.name_prefix,
staged_rec.first_name,
staged_rec.middle_name,
staged_rec.last_name,
staged_rec.name_suffix,
staged_rec.nick_name,
staged_rec.alt_first_name,
staged_rec.alt_last_name,
staged_rec.new_marketing_source_id,
staged_rec.country_cd,
staged_rec.locale_cd,
staged_rec.email_addr,
nvl2(staged_rec.email_addr,1,0),
staged_rec.alt_email_addr,
nvl2(staged_rec.alt_email_addr,1,0),
staged_rec.birth_date,
staged_rec.SALES_CHANNEL_ID,
(select location_id from location where location_cd = staged_rec.update_location_cd),
(select location_id from location where location_cd = staged_rec.update_location_cd),
(select min(a.associate_id) from associate a
where staged_rec.associate_number = a.associate_number
and a.location_id =
(select location_id from location where location_cd = staged_rec.update_location_cd)),
(select min(a.associate_id) from associate a
where staged_rec.alt_associate_number = a.associate_number
and a.location_id =
(select location_id from location where location_cd = staged_rec.update_location_cd)),
staged_rec.customer_login_name,
fn_age_range_default(),
fn_demographic_role_default(),
fn_education_level_default(),
fn_gender_default(),
fn_household_cnt_default(),
fn_housing_type_default(),
fn_income_range_default(),
fn_lifecycle_type_default(),
fn_lifetime_val_score_default(),
fn_marital_status_default(),
fn_religious_affil_default());
-- INSERTS TO PHONE TABLE ( IN LOOP -DATA FROM STAGE CUSTOMER TABLE)
if staged_rec.home_phone is not null
then
insert /*+ APPEND */ into phone ( customer_id, phone_type_id, phone_num, is_valid, is_primary)
values (staged_rec.row_id, 1, staged_rec.home_phone, 1, 1);
end if;
if staged_rec.work_phone is not null
then
insert /*+ APPEND */ into phone (customer_id, phone_type_id, phone_num, is_valid,is_primary)
values (staged_rec.row_id, 2, staged_rec.work_phone, 1, 0);
end if;
if staged_rec.mobile_phone is not null
then
insert /*+ APPEND */ into phone ( customer_id, phone_type_id, phone_num, is_valid, is_primary)
values (staged_rec.row_id, 3, staged_rec.mobile_phone, 1, 0);
end if;
if staged_rec.work_phone is not null
then
insert /*+ APPEND */ into phone (customer_id, phone_type_id, phone_num, is_valid,is_primary)
values (staged_rec.row_id, 4, staged_rec.work_phone, 1, 0);
end if;
-- INSERTS TO CUSTOMER ADDR TABLE ( IN LOOP - DATA FROM STAGE CUSTOMER TABLE)
if staged_rec.address1 is not null
then
insert /*+ APPEND */ into customer_addr (customer_id, address_type_id, address1, address2, address3, city, state_cd, postal_cd, country_cd, region_id,
is_primary, is_valid, cannot_standardize, is_standardized)
values (staged_rec.row_id, fn_address_type_default(), staged_rec.address1, staged_rec.address2, staged_rec.address3,
staged_rec.city, staged_rec.state_cd, staged_rec.postal_cd, staged_rec.country_cd, staged_rec.region, 1, 1, 0, 0);
end if;
-- INSERTS TO CUSTOMER_STATE_TAX_ID
if staged_rec.pos_default_tax_id is not null
then
insert /*+ APPEND */ into customer_state_tax_id (state_cd, customer_id, tax_id, expiration_date)
values (staged_rec.state_cd, staged_rec.row_id, staged_rec.pos_default_tax_id,
staged_rec.pos_tax_id_expiration_date);
end if;
-- REMOVE STAGE CUSTOMER ROW (IN LOOP - DELETE CUSTOMER FROM STAGE_CUSTOMER TABLE)
delete from stage_customer
where row_id = staged_rec.row_id;
-- COMMIT AFTER EVERY 10,000 CUSTOMERS
if mod(staged_rec.row_id, 10000) = 0
then
commit;
end if;
-- INCREMENT ROW ID TO BE RETRIEVED
test_row_id := test_row_id + 1;
END LOOP;
EXCEPTION
WHEN NO_DATA_FOUND
THEN
COMMIT;
END;
Message was edited by:
JesusLuvR
Message was edited by:
JesusLuvR
You want to do as much processing as you can in single large SQL statements, not row by row. You also want to issue as few statements as possible. Most of what you are doing looks to me like it could be done as a series of single SQL statements. For example, the three statements that populate location can be condensed to a single statement like:
INSERT /*+ APPEND */ INTO location
(location_id, location_cd, location_desc, short_description, sales_channel_id,
location_type_id, location_category_id, addr1, addr2, addr3, city, state_cd,
postal_cd, country_cd, region_id, timezone_id, locale_cd, currency_cd,
phone_num, alt_phone_num, fax_num, alt_fax_num, email_addr, alt_email_addr,
is_default, create_modify_timestamp)
SELECT location_seq.NEXTVAL, update_location_cd, NULL, NULL,
fn_sales_channel_default(), NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
NVL(country_cd, 'USA'), NULL, NULL, locale_cd, 'USD', NULL, NULL, NULL,
NULL, NULL, NULL, 0, sysdate
FROM (SELECT DISTINCT update_location_cd, country_cd, locale_cd
FROM stage_customer
WHERE update_location_cd NOT IN (SELECT location_cd FROM location));
You can easily do a similar change to the statements populating associates.
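For what it's worth, the associates version might look something like this (a sketch under the same assumptions as the location example above; the outer join reproduces the NVL-to-default location lookup in the original procedure):

```sql
INSERT /*+ APPEND */ INTO associate
SELECT associate_seq.NEXTVAL,
       NULL,
       s.associate_number,
       NVL(loc.location_id, fn_location_default())
FROM (SELECT associate_number, MIN(update_location_cd) update_location_cd
      FROM stage_customer
      WHERE associate_number IS NOT NULL
      GROUP BY associate_number) s
LEFT JOIN location loc ON loc.location_cd = s.update_location_cd
WHERE NOT EXISTS (SELECT NULL FROM associate a
                  WHERE a.associate_number = s.associate_number);
```

The GROUP BY sits in the inline view, so the sequence NEXTVAL call in the outer query block is allowed. The same shape works for the alt_associate_number pass.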
I don't, off hand, see anything in the insert into customers that seems to require row-by-row processing. Just do it in one large insert. I would also be tempted to replace the many values of the form (select location_id from location where location_cd = staged_rec.update_location_cd) with a join to location. As far as I can see, you would only need one copy of location.
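The join described above might look roughly like this (a sketch; only a few of the customer columns are shown, and the outer join preserves staging rows whose location code has no match):

```sql
INSERT /*+ APPEND */ INTO customer
  (customer_id, customer_acct_num, acquisition_location_id, home_location_id /* , ... */)
SELECT s.row_id,
       s.customer_acct_num,
       loc.location_id,   -- was: (select location_id from location where ...)
       loc.location_id    -- same lookup, resolved once by the join
  /* , ... remaining columns as in the original insert */
FROM stage_customer s
LEFT JOIN location loc ON loc.location_cd = s.update_location_cd;
```

One join resolves the location_id for every row in the batch, instead of executing the same scalar subquery several times per customer.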
I also notice that you have a number of function calls in the values list. I would look carefully at whether you need them at all, since they seem to have no parameters, so they should return the same value every time. Minimally, I would consider calling each one once, storing the values in variables, and using those variables in the insert.
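Calling each parameterless default once could be as simple as this sketch (the variable types are assumptions, since the function signatures are not shown; only three of the defaults are illustrated):

```sql
DECLARE
  v_age_range      NUMBER := fn_age_range_default();
  v_gender         NUMBER := fn_gender_default();
  v_marital_status NUMBER := fn_marital_status_default();
BEGIN
  -- reference the variables inside the big INSERT ... SELECT instead of
  -- invoking the functions once per row
  INSERT INTO customer (customer_id, age_range_id, gender_id, marital_status_id /* , ... */)
  SELECT row_id, v_age_range, v_gender, v_marital_status /* , ... */
  FROM stage_customer;
  COMMIT;
END;
/
```

With 23 million rows in production, removing even a handful of per-row function calls can save a substantial amount of context-switching overhead.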
HTH
John -
URGENT: Migrating from SQL to Oracle results in very poor performance!
*** IMPORTANT, NEED YOUR HELP ***
Dear all, I have migrated a banking business solution from Windows/SQL Server 2000 to Sun Solaris/Oracle 10g. In the test environment everything was working fine. On the production system we have very poor DB performance, about 100 times slower than SQL Server 2000!
Environment at Customer Server Side:
Hardware: SUN Fire 4 CPU's, OS: Solaris 5.8, DB Oracle 8 and 10
Data Storage: Em2
DB access thru OCCI [Environment:OBJECT, Connection Pool, Create Connection]
Because of older applications it is necessary to run Oracle 8 as well on the same server. Since we started running the new solution, which uses Oracle 10, the listener for Oracle 8 is frequently gone (or killed by someone?). The performance of the whole Oracle 10 environment is very poor. As a result of my analysis I figured out that the process of creating a connection in the connection pool takes up to 14 seconds. Now I am wondering if it is a problem to run different Oracle versions on the same server. The customer has installed/created the new Oracle 10 DB with the same user account (oracle) as the older version. To run the new solution we have to change the Oracle environment settings manually. All hints/suggestions to solve this problem are welcome. Thanks in advance.
Anton
"On the production system we have very poor DB performance" - Have you identified whether the cause of the poor performance is the queries and the plans being generated by the database?
Do you know if some of the queries appear to take more time than what it used to be on old system? Did you analyze such queries to see what might be the problem?
Are you running RBO or CBO?
If stats are generated, how are they generated and how often?
Did you see what autotrace and tkprof has to tell you about problem queries (if in fact such queries have been identified)?
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10752/sqltrace.htm#1052 -
Query In SP runs 10x slower than straight query - NOT Parameter Sniffing Issue
Hi Everyone,
I have a real mystery on my hands that appears to be a parameter sniffing issue with my SP but I think is not. I have tried every parameter sniffing workaround trick in the book ( local variables, option recompile, with recompile, optimize for variables,UNKNOWN,
table hints, query hints, MAXDOP, etc.) I have dropped indexes/recreated indexes, updated statistics, etc. I have restored a copy of the DB to 2 different servers ( one with identical HW specs as the box having the issue, one totally different ) and the SP
runs fine on those 2, no workarounds needed. Execution plans are identical on all boxes. When I run a profiler on the 2 different boxes however, I see that on the server having issues, the Reads are around 8087859 while the other server with identical HW,
the reads are 10608. Not quite sure how to interpret those results or where to look for answers. When the SQL Server service is restarted on the server having issues and the SP is run, it runs fine for a time (not sure how long) and then goes back to its
snail's-pace speed. Here is the profiler trace:
Here is the stored procedure. The only modifications I made were the local variables to eliminate the obvious parameter sniffing issues and I added the WITH NOLOCK hints:
/****** Object: StoredProcedure [dbo].[EC_EMP_APPR_SIGNATURE_SEL_TEST] Script Date: 12/03/2014 08:06:01 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
/****** Object: StoredProcedure [dbo].[EC_EMP_APPR_SIGNATURE_SEL_TEST] Script Date: 12/02/2014 22:24:45 ******/
ALTER PROCEDURE [dbo].[EC_EMP_APPR_SIGNATURE_SEL_TEST]
@EMPLOYEE_ID varchar(9) ,
@OVERRIDE_JOB_CODE varchar(8),
@OVERRIDE_LOCATION_CODE varchar(8),
@OVERRIDE_EST_CODE varchar(8),
@OVERRIDE_EMP_GROUP_CODE varchar(8)
AS
set NOCOUNT ON
declare
@EMPLOYEE_ID_LOCAL varchar(9),
@OVERRIDE_JOB_CODE_LOCAL varchar(8),
@OVERRIDE_LOCATION_CODE_LOCAL varchar(8),
@OVERRIDE_EST_CODE_LOCAL varchar(8),
@OVERRIDE_EMP_GROUP_CODE_LOCAL varchar(8)
set @EMPLOYEE_ID_LOCAL = @EMPLOYEE_ID
set @OVERRIDE_JOB_CODE_LOCAL = @OVERRIDE_JOB_CODE
set @OVERRIDE_LOCATION_CODE_LOCAL = @OVERRIDE_LOCATION_CODE
set @OVERRIDE_EST_CODE_LOCAL = @OVERRIDE_EST_CODE
set @OVERRIDE_EMP_GROUP_CODE_LOCAL = @OVERRIDE_EMP_GROUP_CODE
select 3 as SIGNATURE_ID
from EC_EMPLOYEE_POSITIONS p
where p.EMPLOYEE_ID = @EMPLOYEE_ID_LOCAL
and DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE())) between p.POSITION_START_DATE and ISNULL(p.POSITION_END_DATE, convert(datetime, '47121231', 112))
and (dbo.lpad(isnull(p.job_code,''), 8, ' ') + dbo.lpad(isnull(p.location_code,''), 8, ' ') + dbo.lpad(isnull(p.ESTABLISHMENT_CODE,'<N/A>'), 8, ' ') + dbo.lpad(isnull(p.EMP_GROUP_CODE,''), 8, ' '))
in
(select (dbo.lpad(isnull(ep.REPORTING_JOB_CODE,''), 8, ' ') + dbo.lpad(isnull(ep.REPORTING_LOCATION_CODE,''), 8, ' ') + dbo.lpad(isnull(ep.REPORTING_ESTABLISHMENT_CODE,'<N/A>'), 8, ' ') + dbo.lpad(isnull(ep.REPORTING_EMP_GROUP_CODE,''),
8, ' '))
from EC_POSITIONS ep
where DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE())) between ep.POSITION_EFFECT_DATE
and ISNULL(ep.POSITION_TERM_DATE, convert(datetime, '47121231', 112))
and (dbo.lpad(isnull(ep.job_code,''), 8, ' ') + dbo.lpad(isnull(ep.location_code,''), 8, ' ') + dbo.lpad(isnull(ep.ESTABLISHMENT_CODE,'<N/A>'), 8, ' ') + dbo.lpad(isnull(ep.EMP_GROUP_CODE,''), 8, ' '))
= (dbo.lpad(isnull(@OVERRIDE_JOB_CODE_LOCAL,''), 8, ' ') + dbo.lpad(isnull(@OVERRIDE_LOCATION_CODE_LOCAL,''), 8, ' ') + dbo.lpad(isnull(@OVERRIDE_EST_CODE_LOCAL,'<N/A>'), 8, ' ') + dbo.lpad(isnull(@OVERRIDE_EMP_GROUP_CODE_LOCAL,''),
8, ' ')))
union
select 4 as SIGNATURE_ID
from EC_EMPLOYEE_POSITIONS p
where p.EMPLOYEE_ID = @EMPLOYEE_ID_LOCAL
and DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE())) between p.POSITION_START_DATE and ISNULL(p.POSITION_END_DATE, convert(datetime, '47121231', 112))
and (dbo.lpad(isnull(p.job_code,''), 8, ' ') + dbo.lpad(isnull(p.location_code,''), 8, ' ') + dbo.lpad(isnull(p.ESTABLISHMENT_CODE,'<N/A>'), 8, ' ') + dbo.lpad(isnull(p.EMP_GROUP_CODE,''), 8, ' '))
in
(select (dbo.lpad(isnull(ep.REPORTING_JOB_CODE,''), 8, ' ') + dbo.lpad(isnull(ep.REPORTING_LOCATION_CODE,''), 8, ' ') + dbo.lpad(isnull(ep.REPORTING_ESTABLISHMENT_CODE,'<N/A>'), 8, ' ') + dbo.lpad(isnull(ep.REPORTING_EMP_GROUP_CODE,''),
8, ' '))
from EC_POSITIONS ep with (NOLOCK)
where DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE())) between ep.POSITION_EFFECT_DATE
and ISNULL(ep.POSITION_TERM_DATE, convert(datetime, '47121231', 112))
and (dbo.lpad(isnull(ep.job_code,''), 8, ' ') + dbo.lpad(isnull(ep.location_code,''), 8, ' ') + dbo.lpad(isnull(ep.ESTABLISHMENT_CODE,'<N/A>'), 8, ' ') + dbo.lpad(isnull(ep.EMP_GROUP_CODE,''), 8, ' '))
in
(select (dbo.lpad(isnull(ep.REPORTING_JOB_CODE,''), 8, ' ') + dbo.lpad(isnull(ep.REPORTING_LOCATION_CODE,''), 8, ' ') + dbo.lpad(isnull(ep.REPORTING_ESTABLISHMENT_CODE,'<N/A>'), 8, ' ') + dbo.lpad(isnull(ep.REPORTING_EMP_GROUP_CODE,''),
8, ' '))
from EC_POSITIONS ep with (NOLOCK)
where DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE())) between ep.POSITION_EFFECT_DATE
and ISNULL(ep.POSITION_TERM_DATE, convert(datetime, '47121231', 112))
and (dbo.lpad(isnull(ep.job_code,''), 8, ' ') + dbo.lpad(isnull(ep.location_code,''), 8, ' ') + dbo.lpad(isnull(ep.ESTABLISHMENT_CODE,'<N/A>'), 8, ' ') + dbo.lpad(isnull(ep.EMP_GROUP_CODE,''), 8, ' '))
= (dbo.lpad(isnull(@OVERRIDE_JOB_CODE_LOCAL,''), 8, ' ') + dbo.lpad(isnull(@OVERRIDE_LOCATION_CODE_LOCAL,''), 8, ' ') + dbo.lpad(isnull(@OVERRIDE_EST_CODE_LOCAL,'<N/A>'), 8, ' ') + dbo.lpad(isnull(@OVERRIDE_EMP_GROUP_CODE_lOCAL,''),
8, ' '))))
order by SIGNATURE_ID
Any suggestions would be greatly appreciated. I have no more tricks left in my toolbox and am starting to think it's either the hardware or the database engine itself that is at fault.
Maxfast server:
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'EC_POSITIONS'. Scan count 3, logical reads 10450, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'EC_EMPLOYEE_POSITIONS'. Scan count 2, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 3343 ms, elapsed time = 3460 ms.
SQL Server Execution Times:
CPU time = 3343 ms, elapsed time = 3460 ms.
Slow server:
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'EC_POSITIONS'. Scan count 3, logical reads 10450, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'EC_EMPLOYEE_POSITIONS'. Scan count 2, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 37875 ms, elapsed time = 38295 ms.
SQL Server Execution Times:
CPU time = 37875 ms, elapsed time = 38295 ms.
Big diff in server execution times.... -
We have to investigate a reporting solution where things are getting slow (maybe hardware, database design, or network matters).
I have read a lot in MSDN and some books about performance tuning on SQL Server 2008 R2 (and others), but frankly I feel a little lost in all that stuff.
I'm looking for practical steps for doing the tuning. Does someone have a recipe for that: a success story?
My (brain storm) Methodology should follow these steps:
Resource bottlenecks: CPU, memory, and I/O bottlenecks
tempdb bottlenecks
A slow-running user query : Missing indexes, statistics,...
Use performance counters : there are many, can one give us the list of the most important
how to do fine tuning about SQL Server configuration
SSRS, SSIS configuration ?
And then make the recommendations.
Thanks
"there is no Royal Road to Mathematics, in other words, that I have only a very small head and must live with it..."
Edsger W. DijkstraHello,
There is no clear defined step which can be categorized as step by step to performance tuning.Your first goal is to find out cause or drill down to factor causing slowness of SQL server it can be poorly written query ,missing indexes,outdated stats.RAM crunch
CPU crunch so on and so forth.
I generally refer to below doc for SQL server tuning
http://technet.microsoft.com/en-us/library/dd672789(v=sql.100).aspx
For SSIS tuning I refer to the docs below.
http://technet.microsoft.com/library/Cc966529#ECAA
http://msdn.microsoft.com/en-us/library/ms137622(v=sql.105).aspx
When I face an issue I generally look at wait stats; wait stats give you an idea of what resource a query was waiting on.
-- By Jonathan Kehayias
SELECT TOP 10
wait_type ,
max_wait_time_ms wait_time_ms ,
signal_wait_time_ms ,
wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms ,
100.0 * wait_time_ms / SUM(wait_time_ms) OVER ( )
AS percent_total_waits ,
100.0 * signal_wait_time_ms / SUM(signal_wait_time_ms) OVER ( )
AS percent_total_signal_waits ,
100.0 * ( wait_time_ms - signal_wait_time_ms )
/ SUM(wait_time_ms) OVER ( ) AS percent_total_resource_waits
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0 -- remove zero wait_time
AND wait_type NOT IN -- filter out additional irrelevant waits
( 'SLEEP_TASK', 'BROKER_TASK_STOP', 'BROKER_TO_FLUSH',
'SQLTRACE_BUFFER_FLUSH','CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT',
'LAZYWRITER_SLEEP', 'SLEEP_SYSTEMTASK', 'SLEEP_BPOOL_FLUSH',
'BROKER_EVENTHANDLER', 'XE_DISPATCHER_WAIT', 'FT_IFTSHC_MUTEX',
'CHECKPOINT_QUEUE', 'FT_IFTS_SCHEDULER_IDLE_WAIT',
'BROKER_TRANSMITTER', 'FT_IFTSHC_MUTEX', 'KSOURCE_WAKEUP',
'LAZYWRITER_SLEEP', 'LOGMGR_QUEUE', 'ONDEMAND_TASK_QUEUE',
'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BAD_PAGE_PROCESS',
'DBMIRROR_EVENTS_QUEUE', 'BROKER_RECEIVE_WAITFOR',
'PREEMPTIVE_OS_GETPROCADDRESS', 'PREEMPTIVE_OS_AUTHENTICATIONOPS',
'WAITFOR', 'DISPATCHER_QUEUE_SEMAPHORE', 'XE_DISPATCHER_JOIN',
'RESOURCE_QUEUE' )
ORDER BY wait_time_ms DESC
use below link to analyze wait stats
http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/
HTH
PS: for reporting services you can post in SSRS forum
SQL Query to get statistics report
Hi Experts,
I need to get a report for CPU utilization, memory usage, event waits, connection pool and shared pool for a given query.
Need some tips from you to get that report.
Thanks,
It's not about tracing my slow-running SQL query.
I need to provide a report regarding Database Statistics on a specific process from front end PHP.
Previously, I gave you the Oracle version from a different environment; here is the exact version for which I need to generate the report.
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for Solaris: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
This database server is controlled by the client and I have only limited access (not able to access the trace files).
Can someone tell me how to generate the requested report using some V$ views?
Thanks,
Edited by: DharanV on Apr 15, 2010 8:01 PM
added some more details to make it much clearer -
Hi All,
DB:oracle 9i R2
OS:sun solaris 8
Below is an SQL query taking a very long time to complete.
Could anyone help me out regarding this?
SELECT MAX (md1.ID) ID, md1.request_id, md1.jlpp_transaction_id,
md1.transaction_version
FROM transaction_data_arc md1
WHERE md1.transaction_name = :b2
AND md1.transaction_type = 'REQUEST'
AND md1.message_type_code = :b1
AND NOT EXISTS (
SELECT NULL
FROM transaction_data_arc tdar2
WHERE tdar2.request_id = md1.request_id
AND tdar2.jlpp_transaction_id != md1.jlpp_transaction_id
AND tdar2.ID > md1.ID)
GROUP BY md1.request_id,
md1.jlpp_transaction_id,
md1.transaction_version
Is there an alternate query that gets the same results?
Kindly let me know if anyone knows.
regards,
kk.
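One possible alternate formulation, offered only as a sketch and assuming the table and columns exactly as posted: replace the correlated NOT EXISTS with an ANSI outer-join anti-join (9iR2 understands ANSI join syntax). Whether it is actually faster depends on the data and on supporting indexes, e.g. one on (request_id, id, jlpp_transaction_id):

```sql
SELECT MAX(md1.id) AS id,
       md1.request_id,
       md1.jlpp_transaction_id,
       md1.transaction_version
FROM   transaction_data_arc md1
LEFT   JOIN transaction_data_arc tdar2
       ON  tdar2.request_id           = md1.request_id
       AND tdar2.jlpp_transaction_id != md1.jlpp_transaction_id
       AND tdar2.id                   > md1.id
WHERE  md1.transaction_name  = :b2
AND    md1.transaction_type  = 'REQUEST'
AND    md1.message_type_code = :b1
AND    tdar2.id IS NULL   -- anti-join: keep rows with no later row from a different transaction
GROUP  BY md1.request_id,
          md1.jlpp_transaction_id,
          md1.transaction_version;
```

This says the same thing as the NOT EXISTS ("no row in the same request with a different jlpp_transaction_id and a higher ID"), but the optimizer may choose a hash anti-join instead of a correlated probe per row. Comparing explain plans for both forms is the only way to know which wins here.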
Edited by: kk001 on Apr 28, 2011 4:09 PM
This SQL performance question should go to the SQL forum:
PL/SQL
Before reposting, please check the FAQ post:
http://forums.oracle.com/forums/ann.jspa?annID=1535
especially
3. How can I improve the performance of my query? / My query is running slow.
SQL and PL/SQL FAQ
HTH, Zoltan
Ah, I found that you did post it earlier there too:
Re: Sql Query taking very long time to complete
Why did you repost?
Why did you not post the available indexes, table description, and explain plan?
Did you read the tuning request help/FAQ?
Edited by: Kecskemethy on Apr 28, 2011 4:34
Added repost link and questions. -
SQL Developer vs TOAD - query performance question
Somebody pointed out to me that the same queries execute slower in SQL Developer than in TOAD. I'm rather curious about this issue, since I understand Java is "slow", but I can't find any other thread on this point. I don't use TOAD, so I can't compare...
Could this be related to the amount of data being returned by the query? What could be the other reasons for SQL Developer running slower with an identical query?
Thanks,
Attila
It also occurs to me that TOAD always uses the equivalent of the JDBC "thick" driver. SQL Developer can use either the "thin" driver or the "thick" driver, but connections are usually configured with the "thin" driver, since you need an Oracle client to use the "thick" driver.
The difference is that "thin" drivers are written entirely in Java, but "thick" drivers are written with only a little Java that calls the native executable (hence you need an Oracle client) to do most of the work. Theoretically, a thick driver is faster because the object code doesn't need to be interpreted by the JVM. However, I've heard that the difference in performance is not that large. The only way to know for sure is to configure a connection in SQL Developer to use the thick driver, and see if it is faster (I'd use a stop-watch).
Someone correct me if I'm wrong, but I think that if you use "TNS" as your connection type, SQL Developer will use the thick driver, while the default, "Basic" connection type uses the thin driver. Otherwise, you're going to have to use the "Advanced" connection type and type in the Custom JDBC URL for the thick driver. -
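For reference, the two URL shapes look roughly like this (host, port, SID, and alias are placeholders; treat this as a sketch of the usual conventions, not a definitive reference):

```
jdbc:oracle:thin:@dbhost:1521:ORCL    thin driver: pure Java, no client install needed
jdbc:oracle:oci:@TNSALIAS             thick/OCI driver: requires an installed Oracle client
```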
Hello!
I just downloaded SQL Developer 1.1.0.23.64 and I'm surprised by its inappropriately bad performance. The "Tables" and "Views" nodes can't be opened, and a simple (very simple) query, "select * from dual", takes 15 to 30 seconds to execute. Can this be worked around in any way? The Oracle versions are 10g and 9i; it looks like it works equally badly on both of them.
I have 1.25 GB of memory and JDK 1.5.0_06 (the latest Mac JDK). I have neither SQL*Plus nor an Oracle client installed (and there are no such requirements in the installation guide); everything works fine and fast enough with the JDBC drivers in SQL4X as well as in projects under development. I'm using the JDBC drivers from ojdbc14.jar. The only program that is very slow in SQL execution is, unfortunately, SQL Developer...
-
Oracle SQL Developer 3.1 or 1.5.5 from Oracle11g Client?
Hello,
Which SQL Developer is the newest version? I just installed an Oracle 11g client on Windows XP 32-bit that comes with SQL Developer and installs it in ORACLE_HOME. It is version 1.5.5.
How does this relate to version 3.1? What is the difference?
The standalone version I had was 3.0.0x.
Thanks
Richard
3.1 is the latest.
The reason the version bundled with 11g is so old is that the release cycle for DBMS versions (including the client bundle) is much slower than SQL Developer's. 11gR2 is a couple of years old now and there have been several releases of SQL Developer since then. -
SQL server 2012 Ent using less memory than the allocated amount after enabling -T834
I am facing the situation mentioned here.
http://blogs.msdn.com/b/psssql/archive/2009/06/05/sql-server-and-large-pages-explained.aspx
My SQL Server 2012 instance is not able to use all of the 112 GB of RAM that was allocated to it after enabling -T834.
This was not the case earlier. Now I see the Total Server Memory and Target Server Memory counters constantly at just 27 GB. I found the error below while starting SQL Server after enabling -T834. I restarted the services and this time it started fine, but I didn't bother about the error until users complained of slowness and SQL Server's memory usage was found to be low.
Detected 131068 MB of RAM. This is an informational message; no user action is required.
Using large pages in the memory manager.
Large Page Allocated: 32MB
Large page allocation failed during memory manager initialization
Failed to initialize the memory manager
Failed allocate pages: FAIL_PAGE_ALLOCATION 2
Error: 17138, Severity: 16, State: 1.
Unable to allocate enough memory to start 'SQL OS Boot'. Reduce non-essential memory load or increase system memory.
Now SQL Server has started, but its Total Server Memory is only 27 GB. How can I make SQL Server use all of the allocated max server memory with -T834 still on?
Bharath Kumar ------------- Please mark solved if I've answered your question; vote for it as helpful to help other users find a solution quicker.
Hi Bharath,
The scenario is explained clearly in the post below:
http://blogs.msdn.com/b/psssql/archive/2009/06/05/sql-server-and-large-pages-explained.aspx
Unable to allocate enough memory to start 'SQL OS Boot'. Reduce non-essential memory load or increase system memory.
This shows one of the problems with large pages: the memory size requested must be contiguous. This is called out very nicely in the MSDN article on large pages:
These memory regions may be difficult to obtain after the system has been running for a long time because the space for each large page must be contiguous, but the memory may have become fragmented. This is an expensive operation; therefore, applications should avoid making repeated large page allocations and allocate them all one time at startup instead.
In the case above, even if 'max server memory' was set to, say, 8 GB, the server could only allocate 2 GB, and that now becomes the maximum allocation for the buffer pool. Remember, we don't grow the buffer pool when using large pages, so whatever memory we allocate at startup is the max you get.
The other interesting thing you will find out with large pages is a possible slowdown in server startup time. Notice in the ERRORLOG entry above the gap of 7 minutes between the server discovering trace flag 834 was on (the "Using large pages..” message)
and the message about how much large memory was allocated for the buffer pool. Not only does it take a long time to call VirtualAlloc() but in the case where we cannot allocate total physical memory or ‘max server memory” we attempt to allocate lower values
several times before either finding one that works or failing to start. We have had some customers report the time to start the server when using trace flag 834 was over 30 minutes.
regards,
Ram
ramakrishna
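To confirm whether large pages are actually in use and how much the process currently holds, a hedged sketch against the process-memory DMV (available from SQL Server 2008 onward; requires VIEW SERVER STATE):

```sql
-- large_page_allocations_kb stays 0 when -T834 did not take effect.
SELECT physical_memory_in_use_kb  / 1024 AS physical_mb,
       large_page_allocations_kb  / 1024 AS large_page_mb,
       locked_page_allocations_kb / 1024 AS locked_page_mb
FROM   sys.dm_os_process_memory;
```

Comparing large_page_mb with the 27 GB reported by Total Server Memory shows whether the buffer pool was capped at the largest contiguous region the startup allocation could find.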