Rank Function taking a long time to execute in SAP HANA
Hi All,
I have a couple of reports using the rank function which are timing out or taking a really long time to execute. Is there any way to get the result in less time when rank functions are involved?
The following is a sample of how the query looks.
SQL 1:
SELECT a.column1,
       b.column1,
       RANK() OVER (PARTITION BY a.column1 ORDER BY SUM(b.column2) ASC)
FROM "_SYS_BIC"."Analyticview1" b
JOIN "Table1" a
  ON a.column2 = b.column3
GROUP BY a.column1,
         b.column1;
SQL 2:
SELECT a.column1,
       b.column1,
       RANK() OVER (ORDER BY MIN(b.column1) ASC) WJXBFS1
FROM "_SYS_BIC"."Analytic view2" b
CROSS JOIN "Table 2" a
WHERE a.column2 LIKE '%a%'
  AND b.column1 BETWEEN 100 AND 200
GROUP BY a.column1,
         b.column1;
When I visualize the execution plan, the rank function is the operation taking the longest. So I executed the same SQL without the rank()/partition/order by (only the SUM() in SQL 1 and MIN() in SQL 2), and even that took around an hour to return a result.
1. Does anyone have any ideas for making these queries execute faster?
2. Does the latency have anything to do with the rank function, or could it be the size of the result set?
3. Is there any workaround to implement the rank function/partition inside the analytic view itself? If yes, will that return the result faster?
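To make the shape of the problem concrete, here is a minimal, database-agnostic sketch of one common restructuring, using Python's sqlite3 as a stand-in engine (all table and column names here are hypothetical, mirroring SQL 1 above): aggregate in a derived table first, so the RANK() window only ever sees the already-grouped rows rather than the raw join output.

```python
import sqlite3

# Hypothetical stand-ins for "Table1" (a) and the analytic view (b).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (column1 TEXT, column2 INTEGER);
CREATE TABLE b (column1 TEXT, column2 INTEGER, column3 INTEGER);
INSERT INTO a VALUES ('x', 1), ('y', 2);
INSERT INTO b VALUES ('p', 10, 1), ('p', 30, 1), ('q', 5, 2), ('q', 25, 2);
""")

# Aggregate first, then rank the (much smaller) aggregated result.
# Logically equivalent to putting RANK() directly on the grouped query,
# but it makes explicit that the window function processes one row per
# group, not the full join output.
rows = con.execute("""
    SELECT c1a, c1b,
           RANK() OVER (PARTITION BY c1a ORDER BY s ASC) AS rnk
    FROM (
        SELECT a.column1 AS c1a, b.column1 AS c1b, SUM(b.column2) AS s
        FROM b JOIN a ON a.column2 = b.column3
        GROUP BY a.column1, b.column1
    )
    ORDER BY c1a
""").fetchall()
print(rows)  # [('x', 'p', 1), ('y', 'q', 1)]
```

Whether this rewrite actually helps in HANA depends on whether the optimizer already performs the same reduction; checking in PlanViz where the materialization happens is the way to confirm.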
Thank you for your help!!
-Gayathri
Krishna,
I tried both of them, graphical and CE function;
both are also taking a long time to execute.
The graphical view gave me the following error after 2 hours and 36 minutes:
Could not execute 'SELECT ORDER_ID,ITEM_ID,RANK from "_SYS_BIC"."EMMAPERF/ORDER_FACT_HANA_CV" group by ...' in 2:36:23.411 hours .
SAP DBTech JDBC: [2048]: column store error: search table error: [2620] executor: plan operation failed
CE function - I aborted it after 40 minutes.
Do you know the syntax to declare a local variable for use in a CE function?
Similar Messages
-
I am running the query below. It runs fine. However, if I uncomment
"rank() over(partition by CONCAT_DATE,VARIABLE_ID order by VARIABLE_VALUE) RANK" and
"B.rank=1", the query takes a very long time to execute: about 6-7 minutes
instead of 20 seconds (when the rank part is commented out). Is there any other way to speed
this up, as I need the row with the lowest rank?
Thanks
SELECT
EXAMCODE,
STARTDATE,
REVDATE,
ENDDATE,
VARIATION ,
GROUPNAME,
PLAN,
STDPLAN,
CORPPLAN,
PRODUCT,
PES,
CONCAT_DATE,
NOTE_ID,
MAJ_HDG_ID,
MAJ_HDG_TXT,
MIN_HDG_ID,
MIN_HDG_TXT,
VARIABLE_ID,
VARIABLE_DESC,
PROVIDERCODE,
VARFORMAT,
NOTE_NAME,
MAJHEADNGNOTE,
MINHEADNGNOTE,
VARIABLENOTE,
VARIABLE_VALUE
FROM(
SELECT
EXAMCODE,
STARTDATE,
REVDATE,
ENDDATE,
VARIATION ,
GROUPNAME,
PLAN,
STDPLAN,
CORPPLAN,
PRODUCT,
PES,
CONCAT_DATE,
NOTE_ID,
MAJ_HDG_ID,
MAJ_HDG_TXT,
MIN_HDG_ID,
MIN_HDG_TXT,
VARIABLE_ID,
VARIABLE_DESC,
PROVIDERCODE,
VARFORMAT,
NOTE_NAME,
MAJHEADNGNOTE,
MINHEADNGNOTE,
VARIABLENOTE,
VARIABLE_VALUE --,
-- rank() over(partition by CONCAT_DATE,VARIABLE_ID order by VARIABLE_VALUE) RANK
FROM
SELECT
EXAM_DIM2.EXAM_CODE EXAMCODE,
to_char(START_DATE_DIM2.FULL_DATE,'MM/DD/YYYY') STARTDATE,
to_char(REV_DATE_DIM2.FULL_DATE,'MM/DD/YYYY') REVDATE,
to_char(END_DATE_DIM2.FULL_DATE,'MM/DD/YYYY') ENDDATE,
VARIATION_DIM2.VARIATION_ID VARIATION ,
EXAM_DIM2.GROUP_NAME GROUPNAME,
EXAM_DIM2.BENEFIT_PLAN_NAME PLAN,
EXAM_DIM2.STANDARD_PLAN_NAME STDPLAN,
EXAM_DIM2.CORPORATE_PLAN_NAME CORPPLAN,
EXAM_DIM2.PRODUCT_NAME PRODUCT,
STRUCTURE_DIM2.STRUCTURE_NAME PES,
EXAM_DIM2.EXAM_CODE || ' - ' || to_char(START_DATE_DIM2.FULL_DATE,'MM/DD/YYYY') || ' - ' || nvl(to_char(REV_DATE_DIM2.FULL_DATE,'MM/DD/YYYY'),'N/A')|| ' - ' || to_char(END_DATE_DIM2.FULL_DATE ,'MM/DD/YYYY') CONCAT_DATE,
NOTES_DIM2.NOTE_ID NOTE_ID,
DECODE (MAJOR_HEADING_DIM2.HEADING_ID ,null,HEADING_DIM2.HEADING_ID,MAJOR_HEADING_DIM2.HEADING_ID) MAJ_HDG_ID ,
DECODE (MAJOR_HEADING_DIM2.HEADING_ID ,null,HEADING_DIM2.HEADING_TEXT,MAJOR_HEADING_DIM2.HEADING_TEXT) MAJ_HDG_TXT ,
DECODE(HEADING_DIM2.PARENT_HEADING_ID,null,'',HEADING_DIM2.HEADING_ID) MIN_HDG_ID,
DECODE(HEADING_DIM2.PARENT_HEADING_ID,null,'',HEADING_DIM2.HEADING_TEXT) MIN_HDG_TXT,
VARIABLE_DIM2.VARIABLE_ID VARIABLE_ID,
VARIABLE_DIM2.VARIABLE_SHORT_DESCRIPTION VARIABLE_DESC,
VARIABLE_DIM2.PROVIDER_ARRANGEMENT_CODE PROVIDERCODE,
VARIABLE_DIM2.VARIABLE_FORMAT_CODE VARFORMAT,
NOTES_DIM2.NOTE_NAME NOTE_NAME,
'' as MAJHEADNGNOTE,
'' as MINHEADNGNOTE,
DBMS_LOB.SUBSTR(NOTES_DIM2.NOTE_TEXT,DBMS_LOB.GETLENGTH(NOTES_DIM2.NOTE_TEXT) ,1) VARIABLENOTE,
EXAM_INFO_FACT2.VARIABLE_VALUE VARIABLE_VALUE
FROM
MED_DM.DATE_DIM START_DATE_DIM2,
MED_DM.DATE_DIM END_DATE_DIM2,
MED_DM.DATE_DIM REV_DATE_DIM2,
MED_DM.EXAM_DIM EXAM_DIM2,
MED_DM.STRUCTURE_DIM STRUCTURE_DIM2,
MED_DM.NOTES_FACT NOTES_FACT2,
MED_DM.HEADING_DIM MAJOR_HEADING_DIM2,
MED_DM.HEADING_DIM HEADING_DIM2,
MED_DM.VARIABLE_DIM VARIABLE_DIM2,
MED_DM.NOTES_DIM NOTES_DIM2,
MED_DM.EXAM_INFO_FACT EXAM_INFO_FACT2,
MED_DM.VARIATION_DIM VARIATION_DIM2
WHERE
( EXAM_INFO_FACT2.EXAM_DIM_ID = EXAM_DIM2.EXAM_DIM_ID )
AND ( VARIATION_DIM2.VARIATION_DIM_ID (+)= EXAM_INFO_FACT2.VARIATION_DIM_ID )
AND ( EXAM_INFO_FACT2.STRUCTURE_DIM_ID = STRUCTURE_DIM2.STRUCTURE_DIM_ID)
AND ( EXAM_DIM2.END_DATE_DIM_ID=END_DATE_DIM2.DATE_DIM_ID )
AND ( EXAM_DIM2.REVISION_DATE_DIM_ID=REV_DATE_DIM2.DATE_DIM_ID )
AND ( EXAM_DIM2.START_DATE_DIM_ID=START_DATE_DIM2.DATE_DIM_ID )
AND ( HEADING_DIM2.HEADING_DIM_ID= EXAM_INFO_FACT2.HEADING_DIM_ID )
AND ( MAJOR_HEADING_DIM2.HEADING_ID(+)=HEADING_DIM2.PARENT_HEADING_ID )
AND ( EXAM_INFO_FACT2.VARIABLE_DIM_ID = VARIABLE_DIM2.VARIABLE_DIM_ID )
AND ( EXAM_INFO_FACT2.EXAM_DIM_ID = NOTES_FACT2.EXAM_DIM_ID (+) )
AND ( EXAM_INFO_FACT2.VARIABLE_DIM_ID = NOTES_FACT2.VARIABLE_DIM_ID (+))
AND ( NOTES_FACT2.NOTE_DIM_ID = NOTES_DIM2.NOTE_DIM_ID (+))
UNION ALL
SELECT
EXAM_DIM2.EXAM_CODE EXAMCODE,
to_char(START_DATE_DIM2.FULL_DATE,'MM/DD/YYYY') STARTDATE,
to_char(REV_DATE_DIM2.FULL_DATE,'MM/DD/YYYY') REVDATE,
to_char(END_DATE_DIM2.FULL_DATE,'MM/DD/YYYY') ENDDATE,
'' as VARIATION,
EXAM_DIM2.GROUP_NAME GROUPNAME,
EXAM_DIM2.BENEFIT_PLAN_NAME PLAN,
EXAM_DIM2.STANDARD_PLAN_NAME ,
EXAM_DIM2.CORPORATE_PLAN_NAME CORPPLAN,
EXAM_DIM2.PRODUCT_NAME PRODUCT,
'' as PES,
EXAM_DIM2.EXAM_CODE || ' - ' || to_char(START_DATE_DIM2.FULL_DATE,'MM/DD/YYYY') || ' - ' || nvl(to_char(REV_DATE_DIM2.FULL_DATE,'MM/DD/YYYY'),'N/A')|| ' - ' || to_char(END_DATE_DIM2.FULL_DATE ,'MM/DD/YYYY') CONCAT_DATE,
MED_DM.NOTES_DIM.NOTE_ID NOTE_ID,
DECODE (MAJOR_HEADING_DIM2.HEADING_ID ,null,HEADING_DIM2.HEADING_ID,MAJOR_HEADING_DIM2.HEADING_ID) MAJ_HDG_ID ,
DECODE (MAJOR_HEADING_DIM2.HEADING_ID ,null,HEADING_DIM2.HEADING_TEXT,MAJOR_HEADING_DIM2.HEADING_TEXT) MAJ_HDG_TXT ,
DECODE(HEADING_DIM2.PARENT_HEADING_ID,null,'',HEADING_DIM2.HEADING_ID) MIN_HDG_ID,
DECODE(HEADING_DIM2.PARENT_HEADING_ID,null,'',HEADING_DIM2.HEADING_TEXT) MIN_HDG_TXT,
VARIABLE_DIM2.VARIABLE_ID VARIABLE_ID,
VARIABLE_DIM2.VARIABLE_SHORT_DESCRIPTION VARIABLE_DESC,
VARIABLE_DIM2.PROVIDER_ARRANGEMENT_CODE PROVIDERCODE,
VARIABLE_DIM2.VARIABLE_FORMAT_CODE VARFORMAT,
MED_DM.NOTES_DIM.NOTE_NAME NOTE_NAME,
(CASE WHEN ((HEADING_DIM2.PARENT_HEADING_ID is null) AND (NOTES_FACT.VARIABLE_DIM_ID is null))
THEN DBMS_LOB.SUBSTR(MED_DM.NOTES_DIM.NOTE_TEXT,DBMS_LOB.GETLENGTH(MED_DM.NOTES_DIM.NOTE_TEXT),1)
ELSE NULL
END) as MAJHEADNGNOTE,
-- DECODE(HEADING_DIM2.PARENT_HEADING_ID,null,DBMS_LOB.SUBSTR(MED_DM.NOTES_DIM.NOTE_TEXT,DBMS_LOB.GETLENGTH(MED_DM.NOTES_DIM.NOTE_TEXT),1),'') MAJHEADNGNOTE,
-- DECODE(HEADING_DIM2.PARENT_HEADING_ID,null,'',DBMS_LOB.SUBSTR(MED_DM.NOTES_DIM.NOTE_TEXT,DBMS_LOB.GETLENGTH(MED_DM.NOTES_DIM.NOTE_TEXT),1)) MINHEADNGNOTE,
(CASE WHEN ((HEADING_DIM2.PARENT_HEADING_ID is not null) AND (NOTES_FACT.VARIABLE_DIM_ID is null))
THEN DBMS_LOB.SUBSTR(MED_DM.NOTES_DIM.NOTE_TEXT,DBMS_LOB.GETLENGTH(MED_DM.NOTES_DIM.NOTE_TEXT),1)
ELSE NULL
END) as MINHEADNGNOTE,
(CASE WHEN (NOTES_FACT.VARIABLE_DIM_ID is not null)
THEN DBMS_LOB.SUBSTR(MED_DM.NOTES_DIM.NOTE_TEXT,DBMS_LOB.GETLENGTH(MED_DM.NOTES_DIM.NOTE_TEXT),1)
ELSE NULL
END) as VARIABLENOTE,
--DECODE(NOTES_FACT.VARIABLE_DIM_ID,null,DBMS_LOB.SUBSTR(MED_DM.NOTES_DIM.NOTE_TEXT,DBMS_LOB.GETLENGTH(MED_DM.NOTES_DIM.NOTE_TEXT),1),'') VARIABLENOTE,
'' as VARIABLE_VALUE
FROM
MED_DM.DATE_DIM START_DATE_DIM2,
MED_DM.DATE_DIM END_DATE_DIM2,
MED_DM.DATE_DIM REV_DATE_DIM2,
MED_DM.EXAM_DIM EXAM_DIM2,
MED_DM.NOTES_FACT,
MED_DM.HEADING_DIM MAJOR_HEADING_DIM2,
MED_DM.HEADING_DIM HEADING_DIM2,
MED_DM.VARIABLE_DIM VARIABLE_DIM2,
MED_DM.NOTES_DIM
WHERE
( MED_DM.NOTES_DIM.NOTE_DIM_ID=MED_DM.NOTES_FACT.NOTE_DIM_ID )
AND ( MED_DM.NOTES_FACT.VARIABLE_DIM_ID=VARIABLE_DIM2.VARIABLE_DIM_ID (+) )
AND ( MED_DM.NOTES_FACT.EXAM_DIM_ID=EXAM_DIM2.EXAM_DIM_ID )
AND ( HEADING_DIM2.HEADING_DIM_ID=MED_DM.NOTES_FACT.HEADING_DIM_ID )
AND ( EXAM_DIM2.END_DATE_DIM_ID=END_DATE_DIM2.DATE_DIM_ID )
AND ( EXAM_DIM2.REVISION_DATE_DIM_ID=REV_DATE_DIM2.DATE_DIM_ID )
AND ( EXAM_DIM2.START_DATE_DIM_ID=START_DATE_DIM2.DATE_DIM_ID )
AND ( MAJOR_HEADING_DIM2.HEADING_ID(+)=HEADING_DIM2.PARENT_HEADING_ID )
AND ( MED_DM.NOTES_FACT.HEADING_DIM_ID is not null)
)B
WHERE B.EXAMCODE ='G971' and B.STARTDATE = '10/01/2002'
--and B.RANK = 1

There are probably lots of things you could do to improve the performance of this query. The first thing I would try is applying the filter criteria to each query in the "union all" rather than to the final set of results. The way it works now, the inner queries rank and return all rows for all exam codes and all start dates (likely thousands if not millions of rows). Only after all rows have been ranked are the results filtered for the desired examcode.
Try something like:
select * from (
select ...rank() over(...) rnk from (
select ... from ...
where examcode='G971' and startdate=to_date('10/02/2002','MM/DD/YYYY') ...
union all
select ... from ...
where examcode='G971' and startdate=to_date('10/02/2002','MM/DD/YYYY') ...
) where rnk = 1

Also, indexes on examcode and/or startdate might help, if they don't already exist. -
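The rewrite above can be checked mechanically. Here is a small sketch (using Python's sqlite3 as a stand-in engine; the table and values are hypothetical) showing that filtering on a column that appears in the PARTITION BY before ranking yields exactly the same rank = 1 rows as ranking everything and filtering afterwards:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (examcode TEXT, variable_id INTEGER, variable_value INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [('G971', 1, 30), ('G971', 1, 10), ('G971', 2, 5),
                 ('X001', 1, 1), ('X001', 2, 2)])

# Rank everything, then filter (what the original query does).
rank_then_filter = con.execute("""
    SELECT examcode, variable_id, variable_value
    FROM (SELECT t.*, RANK() OVER (PARTITION BY examcode, variable_id
                                   ORDER BY variable_value) AS rnk
          FROM t)
    WHERE rnk = 1 AND examcode = 'G971'
    ORDER BY variable_id
""").fetchall()

# Filter first, then rank (the suggested rewrite). Safe because examcode
# is part of the PARTITION BY, so discarding other examcodes cannot
# change any rank within the kept partitions.
filter_then_rank = con.execute("""
    SELECT examcode, variable_id, variable_value
    FROM (SELECT t.*, RANK() OVER (PARTITION BY examcode, variable_id
                                   ORDER BY variable_value) AS rnk
          FROM t WHERE examcode = 'G971')
    WHERE rnk = 1
    ORDER BY variable_id
""").fetchall()

assert rank_then_filter == filter_then_rank
print(filter_then_rank)  # [('G971', 1, 10), ('G971', 2, 5)]
```

The performance gain comes from the smaller input to the sort that backs the window function, not from the window function itself.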
IP10 and IP30 are taking a very long time to execute.
Hi gurus,
I am facing a problem. For counter-based maintenance order scheduling, when I integrate a production order with a maintenance order via PRT, the order is generated after the required number of PRT usages, but deadline monitoring and scheduling of an individual maintenance plan take more than 7 hours to execute.
I cannot tell where the time is going in the program, since the required maintenance order is eventually generated after a long wait.
Please Help me
Thanx in advance,
Praveen Kumar

There is an SAP note available for this... just check service.sap.com
-
Stored procedure is taking too long to execute.
Hi all,
I have a stored procedure which executes in 2 hours in one database, but the same stored procedure takes more than 6 hours in the other database.
Both databases are on Oracle 11.2.
Can you please suggest what might be the reasons.
Thanks.

In most sites I've worked at, it's almost impossible to trace sessions because you don't have read permissions on the tracefile directory (or access to the server at all). My first check would therefore be to look in my session browser to see what the session is actually doing. What is the current SQL statement? What is the current wait event? Which cursors has the session spent time on? If the procedure just slogs through one cursor or one INSERT statement, then you have a straightforward SQL tuning problem. If it's more complex, then it will help to know which part is taking the time.
If you have a licence for the diagnostic pack you can query v$active_session_history, e.g. (developed for 10.2.0.3, could maybe do more in 11.2):
SELECT CAST(ash.started AS DATE) started
, ash.elapsed
, s.sql_text
, CASE WHEN ash.sql_id = :sql_id AND :status = 'ACTIVE' THEN 'Y' END AS executing
, s.executions
, CAST(NUMTODSINTERVAL(elapsed_time/NULLIF(executions,0)/1e6,'SECOND') AS INTERVAL DAY(0) TO SECOND(1)) AS avg_time
, CAST(NUMTODSINTERVAL(elapsed_time/1e6,'SECOND') AS INTERVAL DAY(0) TO SECOND(1)) AS total_time
, ROUND(s.parse_calls/NULLIF(s.executions,0),1) avg_parses
, ROUND(s.fetches/NULLIF(s.executions,0),1) avg_fetches
, ROUND(s.rows_processed/NULLIF(s.executions,0),1) avg_rows_processed
, s.module, s.action
, ash.sql_id
, ash.sql_child_number
, ash.sql_plan_hash_value
, ash.started
FROM ( SELECT MIN(sample_time) AS started
, CAST(MAX(sample_time) - MIN(sample_time) AS INTERVAL DAY(0) TO SECOND(0)) AS elapsed
, sql_id
, sql_child_number
, sql_plan_hash_value
FROM v$active_session_history
WHERE session_id = :sid
AND session_serial# = :serial#
GROUP BY sql_id, sql_child_number, sql_plan_hash_value ) ash
LEFT JOIN
( SELECT sql_id, plan_hash_value
, sql_text, SUM(executions) OVER (PARTITION BY sql_id) AS executions, module, action, rows_processed, fetches, parse_calls, elapsed_time
, ROW_NUMBER() OVER (PARTITION BY sql_id ORDER BY last_load_time DESC) AS seq
FROM v$sql ) s
ON s.sql_id = ash.sql_id AND s.plan_hash_value = ash.sql_plan_hash_value
WHERE s.seq = 1
ORDER BY 1 DESC;

:sid and :serial# come from v$session. In PL/SQL Developer I defined this as a tab named 'Session queries' in the session browser.
I have another tab named 'Object wait totals this query' containing:
SELECT LTRIM(ep.owner || '.' || ep.object_name || '.' || ep.procedure_name,'.') AS plsql_entry_procedure
, LTRIM(cp.owner || '.' || cp.object_name || '.' || cp.procedure_name,'.') AS plsql_procedure
, session_state
, CASE WHEN blocking_session_status IN ('NOT IN WAIT','NO HOLDER','UNKNOWN') THEN NULL ELSE blocking_session_status END AS blocking_session_status
, event
, wait_class
, ROUND(SUM(wait_time)/100,1) as wait_time_secs
, ROUND(SUM(time_waited)/100,1) as time_waited_secs
, LTRIM(o.owner || '.' || o.object_name,'.') AS wait_object
FROM v$active_session_history h
LEFT JOIN dba_procedures ep
ON ep.object_id = h.plsql_entry_object_id AND ep.subprogram_id = h.plsql_entry_subprogram_id
LEFT JOIN dba_procedures cp
ON cp.object_id = h.plsql_object_id AND cp.subprogram_id = h.plsql_subprogram_id
LEFT JOIN dba_objects o ON o.object_id = h.current_obj#
WHERE h.session_id = :sid
AND h.session_serial# = :serial#
AND h.user_id = :user#
AND h.sql_id = :sql_id
AND h.sql_child_number = :sql_child_number
GROUP BY
ep.owner, ep.object_name, ep.procedure_name
, cp.owner, cp.object_name, cp.procedure_name
, session_state
, CASE WHEN blocking_session_status IN ('NOT IN WAIT','NO HOLDER','UNKNOWN') THEN NULL ELSE blocking_session_status END
, event
, wait_class
, o.owner
, o.object_name

It's not perfect and the numbers aren't reliable, but it gives me an idea of where the time might be going. While I'm at it, v$session_longops is worth a look, so I also have 'Longops' as:
SELECT sid
, CASE WHEN l.time_remaining> 0 OR l.sofar < l.totalwork THEN 'Yes' END AS "Active?"
, l.opname AS operation
, l.totalwork || ' ' || l.units AS totalwork
, NVL(l.target,l.target_desc) AS target
, ROUND(100 * l.sofar/GREATEST(l.totalwork,1),1) AS "Complete %"
, NULLIF(RTRIM(RTRIM(LTRIM(LTRIM(numtodsinterval(l.elapsed_seconds,'SECOND'),'+0'),' '),'0'),'.'),'00:00:00') AS elapsed
, l.start_time
, CASE
WHEN l.time_remaining = 0 THEN l.last_update_time
ELSE SYSDATE + l.time_remaining/86400
END AS est_completion
, l.sql_id
, l.sql_address
, l.sql_hash_value
FROM v$session_longops l
WHERE :sid IN (sid,qcsid)
AND l.start_time >= TO_DATE(:logon_time,'DD/MM/YYYY HH24:MI:SS')
ORDER BY l.start_time desc

and 'Longops this query' as:
SELECT sid
, CASE WHEN l.time_remaining> 0 OR l.sofar < l.totalwork THEN 'Yes' END AS "Active?"
, l.opname AS operation
, l.totalwork || ' ' || l.units AS totalwork
, NVL(l.target,l.target_desc) AS target
, ROUND(100 * l.sofar/GREATEST(l.totalwork,1),1) AS "Complete %"
, NULLIF(RTRIM(RTRIM(LTRIM(LTRIM(numtodsinterval(l.elapsed_seconds,'SECOND'),'+0'),' '),'0'),'.'),'00:00:00') AS elapsed
, l.start_time
, CASE
WHEN l.time_remaining = 0 THEN l.last_update_time
ELSE SYSDATE + l.time_remaining/86400
END AS est_completion
, l.sql_id
, l.sql_address
, l.sql_hash_value
FROM v$session_longops l
WHERE :sid IN (sid,qcsid)
AND l.start_time >= TO_DATE(:logon_time,'DD/MM/YYYY HH24:MI:SS')
AND l.sql_id = :sql_id
ORDER BY l.start_time desc

You can also get this sort of information out of OEM if you're lucky enough to have access to it - if not, ask for it!
Apart from this type of monitoring, you might try using DBMS_PROFILER (point and click in most IDEs, but you can use it from the SQL*Plus prompt), and also instrument your code with calls to DBMS_APPLICATION_INFO.SET_CLIENT_INFO so you can easily tell from v$session which section of code is being executed. -
Report script taking very long time to export in ASO
Hi All,
My report script is taking a very long time to execute, and finally a message appears saying it timed out.
I'm working on ASO cubes, and there are 14 dimensions for which I need to export all data, for only one version.
The data is very large and the member count in each dimension is also huge, which makes it difficult for me to export the data.
Any suggestions??
Thanks

Here is a link that addresses several ways to optimize your report script. I utilize report scripts for Level 0 exports in an ASO environment as well; however, the majority of our dimensions are attribute dimensions.
These are the most effective solutions we have implemented to improve our exports via report scripts:
1. Make sure your report script is written in the order in which the Report Extractor retrieves data.
2. Suppressing zero and missing data.
3. We use the LINK command within reports for some dimensions that are really big and pull at Level 0.
4. Using symmetric reports.
5. Breaking the exports out into multiple reports.
However, you may also consider some additional solutions outlined in this link:
1. The MDX optimizing commands
2. Back end system settings
http://download.oracle.com/docs/cd/E12825_01/epm.111/esb_dbag/drpoptim.htm
I hope this helps. Maybe posting your report script would also help users to provide feedback.
Thanks
Edited by: ronnie on Jul 14, 2011 9:25 AM
Edited by: ronnie on Jul 14, 2011 9:53 AM -
Function taking longer time to execute
Hi,
I have a scenario where I am using a TABLE FUNCTION in a join condition with a normal table, but it is taking a long time to execute.
The function is given below:
CREATE OR REPLACE FUNCTION GET_ACCOUNT_TYPE(
SUBNO VARCHAR2 DEFAULT NULL)
RETURN ACCOUNT_TYP_KEY_1 PIPELINED AS
V_SUBNO VARCHAR2(20);
V_SUBS_TYP VARCHAR2(10);
V_ACCOUNT_TYP_KEY VARCHAR2(10);
V_ACCOUNT_TYP_KEY_1 VARCHAR2(10);
V_SUBS_TYP_KEY_1 VARCHAR2(10);
V_VAL1 VARCHAR2(255);
CURSOR C1_REC2 IS SELECT SUBNO,NULL
FROM CTVA_ETL.RA_CRM_USER_INFO
GROUP BY SUBNO,SUBSCR_TYPE;
--CURSOR C1_REC IS SELECT SUBNO,SUBSCR_TYPE,ACCOUNT_TYPE_KEY
--FROM CTVA_ETL.RA_CRM_USER_INFO,DIM_RA_MAST_ACCOUNT_TYPE
--WHERE ACCOUNT_TYPE_KEY=RA_CRM_USER_INFO.SUBSCR_TYPE
--WHERE MSISDN='8615025400109'
--WHERE MSISDN IN ('8615025400068','8615025400083','8615025400101','8615025400132','8615025400109')
CURSOR C1_REC IS SELECT SUBNO,SUBSCR_TYPE--,ACCOUNT_TYPE_KEY
FROM CTVA_ETL.RA_CRM_USER_INFO
GROUP BY SUBNO,SUBSCR_TYPE;
BEGIN
OPEN C1_REC;
LOOP
FETCH C1_REC INTO V_SUBNO, V_SUBS_TYP;
EXIT WHEN C1_REC%NOTFOUND; -- without this the loop never terminates
IF V_SUBS_TYP IS NOT NULL THEN
BEGIN
SELECT
ACCOUNT_TYPE_KEY
INTO
V_ACCOUNT_TYP_KEY
FROM
DIM_RA_MAST_ACCOUNT_TYPE,
RA_CRM_USER_INFO
WHERE
ACCOUNT_TYPE_KEY=V_SUBS_TYP
AND ACCOUNT_TYPE_KEY=RA_CRM_USER_INFO.SUBSCR_TYPE
AND SUBNO=V_SUBNO;
EXCEPTION
WHEN NO_DATA_FOUND THEN
V_ACCOUNT_TYP_KEY := '-99';
V_ACCOUNT_TYP_KEY_1 := V_ACCOUNT_TYP_KEY;
END;
ELSE
V_ACCOUNT_TYP_KEY_1:='-99';
END IF;
FOR CUR IN (select
DISTINCT V_SUBNO SUBNO_TYP_2 ,V_ACCOUNT_TYP_KEY_1 ACCOUNT_TYP
from dual)
LOOP
PIPE ROW (ACCOUNT_TYP_KEY(CUR.SUBNO_TYP_2,CUR.ACCOUNT_TYP));
END LOOP;
END LOOP;
CLOSE C1_REC;
RETURN;
END;
The above function should return rows according to SUBSCRIBER TYPE (if it is not null, return the ACCOUNT KEY and SUBNO; else '-99').
But the lookup is not returning any rows, so all the rows come back as:
SUBNO ACCOUNT_TYP
21 -99
22 -99
23 -99
24 -99
25 -99
Thanks and Regards

Hi LMLobo,
In addition to Sebastian's answer, you can refer to the document Server Memory Server Configuration Options to check whether the maximum server memory setting of SQL Server was changed on the new server. You can also compare the network packet size setting of SQL Server, as well as the network connectivity, on both servers. Finally, you can refer to the following link for troubleshooting SSIS package performance issues:
http://technet.microsoft.com/en-us/library/dd795223(v=sql.100).aspx
Regards,
Mike Yin
TechNet Community Support -
Query taking a long time to execute after migrating to 10gR2
Hi
We recently migrated the database from 9i to 10gR2 (10.2.0.2.0). This query ran in acceptable time before the upgrade on 9i. Now it is taking a very long time to execute. Can you please let me know what I should do to improve the performance? We are gathering stats every day.
Thanks for your help,
Shree
======================================================================================
SELECT cr.cash_receipt_id
,cr.pay_from_customer
,cr.receipt_number
,cr.receipt_date
,cr.amount
,cust.account_number
,crh.gl_date
,cr.set_of_books_id
,sum(ra.amount_applied) amount_applied
FROM AR_CASH_RECEIPTS_ALL cr
,AR_RECEIVABLE_APPLICATIONS_ALL ra
,hz_cust_accounts cust
,AR_CASH_RECEIPT_HISTORY_ALL crh
,GL_PERIOD_STATUSES gps
,FND_APPLICATION app
WHERE cr.cash_receipt_id = ra.cash_receipt_id
AND ra.status = 'UNAPP'
AND cr.status <> 'REV'
AND cust.cust_account_id = cr.pay_from_customer
AND substr(cust.account_number,1,2) <> 'SI' -- Don't allocate Unapplied receipts FOR SI customers
AND crh.cash_receipt_id = cr.cash_receipt_id
AND app.application_id = gps.application_id
AND app.application_short_name = 'AR'
AND gps.period_name = 'May-07'
AND crh.gl_date <= gps.end_date
AND cr.receipt_number not like 'WH%'
-- AND cust.customer_number = '0000079260001'
GROUP BY cr.cash_receipt_id
,cr.pay_from_customer
,cr.receipt_number
,cr.receipt_date
,cr.amount
,cust.account_number
,crh.gl_date
,cr.set_of_books_id
HAVING sum(ra.amount_applied) > 0;
=========================================================================================
Here is the explain plan in 10g r2 (10.2.0.2.0)
PLAN_TABLE_OUTPUT
Plan hash value: 2617075047
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 92340 | 10M| | 513K (1)|
|* 1 | FILTER | | | | | |
| 2 | HASH GROUP BY | | 92340 | 10M| 35M| 513K (1)|
| 3 | TABLE ACCESS BY INDEX ROWID | AR_RECEIVABLE_APPLICATIONS_ALL | 2 | 34 |
| 4 | NESTED LOOPS | | 184K| 21M| | 510K (1)|
|* 5 | HASH JOIN | | 99281 | 9M| 3296K| 176K (1)|
|* 6 | TABLE ACCESS FULL | HZ_CUST_ACCOUNTS | 112K| 1976K| | 22563 (1)|
|* 7 | HASH JOIN | | 412K| 33M| 25M| 151K (1)|
| 8 | TABLE ACCESS BY INDEX ROWID | AR_CASH_RECEIPT_HISTORY_ALL | 332K| 4546K|
| 9 | NESTED LOOPS | | 498K| 19M| | 26891 (1)|
| 10 | NESTED LOOPS | | 2 | 54 | | 4 (0)|
| 11 | TABLE ACCESS BY INDEX ROWID| FND_APPLICATION | 1 | 8 | | 1 (0)|
|* 12 | INDEX UNIQUE SCAN | FND_APPLICATION_U3 | 1 | | | 0 (0)|
| 13 | TABLE ACCESS BY INDEX ROWID| GL_PERIOD_STATUSES | 2 | 38 | | 3 (0)
|* 14 | INDEX RANGE SCAN | GL_PERIOD_STATUSES_U1 | 1 | | | 2 (0)|
|* 15 | INDEX RANGE SCAN | AR_CASH_RECEIPT_HISTORY_N2 | 332K| | | 1011 (1)
PLAN_TABLE_OUTPUT
|* 16 | TABLE ACCESS FULL | AR_CASH_RECEIPTS_ALL | 5492K| 235M| | 108K
|* 17 | INDEX RANGE SCAN | AR_RECEIVABLE_APPLICATIONS_N1 | 4 | | | 2
Predicate Information (identified by operation id):
1 - filter(SUM("RA"."AMOUNT_APPLIED")>0)
5 - access("CUST"."CUST_ACCOUNT_ID"="CR"."PAY_FROM_CUSTOMER")
6 - filter(SUBSTR("CUST"."ACCOUNT_NUMBER",1,2)<>'SI')
7 - access("CRH"."CASH_RECEIPT_ID"="CR"."CASH_RECEIPT_ID")
12 - access("APP"."APPLICATION_SHORT_NAME"='AR')
14 - access("APP"."APPLICATION_ID"="GPS"."APPLICATION_ID" AND "GPS"."PERIOD_NAME"='May-07')
filter("GPS"."PERIOD_NAME"='May-07')
15 - access("CRH"."GL_DATE"<="GPS"."END_DATE")
16 - filter("CR"."STATUS"<>'REV' AND "CR"."RECEIPT_NUMBER" NOT LIKE 'WH%')
17 - access("CR"."CASH_RECEIPT_ID"="RA"."CASH_RECEIPT_ID" AND "RA"."STATUS"='UNAPP')
filter("RA"."CASH_RECEIPT_ID" IS NOT NULL)
Here is the explain plan in 9i
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=445977 Card=78530 By
tes=9423600)
1 0 FILTER
2 1 SORT (GROUP BY) (Cost=445977 Card=78530 Bytes=9423600)
3 2 HASH JOIN (Cost=443717 Card=157060 Bytes=18847200)
4 3 HASH JOIN (Cost=99563 Card=94747 Bytes=9758941)
5 4 TABLE ACCESS (FULL) OF 'HZ_CUST_ACCOUNTS' (Cost=12
286 Card=110061 Bytes=1981098)
6 4 HASH JOIN (Cost=86232 Card=674761 Bytes=57354685)
7 6 TABLE ACCESS (BY INDEX ROWID) OF 'AR_CASH_RECEIP
T_HISTORY_ALL' (Cost=17532 Card=542304 Bytes=7592256)
8 7 NESTED LOOPS (Cost=17536 Card=809791 Bytes=332
01431)
9 8 NESTED LOOPS (Cost=4 Card=1 Bytes=27)
10 9 TABLE ACCESS (BY INDEX ROWID) OF 'FND_APPL
ICATION' (Cost=1 Card=1 Bytes=8)
11 10 INDEX (UNIQUE SCAN) OF 'FND_APPLICATION_
U3' (UNIQUE)
12 9 TABLE ACCESS (BY INDEX ROWID) OF 'GL_PERIO
D_STATUSES' (Cost=3 Card=1 Bytes=19)
13 12 INDEX (RANGE SCAN) OF 'GL_PERIOD_STATUSE
S_U1' (UNIQUE) (Cost=2 Card=1)
14 8 INDEX (RANGE SCAN) OF 'AR_CASH_RECEIPT_HISTO
RY_N2' (NON-UNIQUE) (Cost=1740 Card=542304)
15 6 TABLE ACCESS (FULL) OF 'AR_CASH_RECEIPTS_ALL' (C
ost=60412 Card=8969141 Bytes=394642204)
16 3 TABLE ACCESS (FULL) OF 'AR_RECEIVABLE_APPLICATIONS_A
LL' (Cost=337109 Card=15613237 Bytes=265425029)

Hi,
The plan between 9i and 10g is pretty much the same, but the amount of data fetched has increased considerably. I guess the query was performing slowly even in 9i.
AR_CASH_RECEIPT_HISTORY_ALL presently shows 332K rows in the 10g plan, whereas it was 17,532 in 9i.
AR_CASH_RECEIPT_HISTORY_N2 now shows 332K rows in 10g, whereas in 9i it had 1,740.
Try creating some indexes on
AR_CASH_RECEIPTS_ALL
hz_cust_accounts -
Taking long time to execute views
Hi All,
my query is taking a long time to execute (I am using standard views in the query).
XLA_INV_AEL_GL_V, XLA_WIP_AEL_GL_V: these standard views themselves take a long time to execute, but I need the information from them.
WHERE gjh.je_batch_id = gjb.je_batch_id AND
gjh.je_header_id = gjl.je_header_id AND
gjh.je_header_id = xlawip.je_header_id AND
gjl.je_header_id = xlawip.je_header_id AND
gjl.je_line_num = xlawip.je_line_num AND
gcc.code_combination_id = gjl.code_combination_id AND
gjl.code_combination_id = xlawip.code_combination_id AND
gjb.set_of_books_id = xlawip.set_of_books_id AND
gjh.je_source = 'Inventory' AND
gjh.je_category = 'WIP' AND
gp.period_set_name = 'Accounting' AND
gp.period_name = gjl.period_name AND
gp.period_name = gjh.period_name AND
gp.start_date +1 between to_date(startdate,'DD-MON-YY') AND
to_date(enddate,'DD-MON-YY') AND
gjh.status =nvl(lstatus,gjh.status)
Could anyone help me make it execute faster?
Thanks
Madhu

[url http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0]When your query takes too long...[/url]
-
Issue in updating large number of rows which is taking a long time
Hi all,
Am new to oracle forums. First I will explain my problems as below:
1) I have a table of 350 columns, for which I have two indexes: one on the primary key's id,
and the other a composite id (a combination of two functional ids).
2) Through my application, the user can calculate some functional conditions, and the result
is updated in the same table.
3) The table consists of all input, intermediate and output columns.
4) All calculation is done through update statements. The problem is that, for one
complete process, the total number of update statements hitting the db is around 1000.
5) Of the two indexes, one indexed column is mandatory in every update's where clause, so one
is always present, but the other is optional.
6) Updating the table takes a long time if the row count exceeds 1 lakh (100,000).
7) I will now explain the scenario:
a. Say there are 500,100 records in the table, of which mandatory indexed column id 1 has
100 records and id 2 has 5 lakh (500,000) records.
b. If I process id 1, it is very fast and completes within 10 seconds. But if I process id 2,
it takes more than 4 minutes to update.
Is there any way to increase the speed of the update statements? I am using Oracle 10g.
Please help me with this, since I am a developer and don't have much knowledge of Oracle.
Thanks in advance.
Regards,
Sethu

Refer to the link:
http://hoopercharles.wordpress.com/2010/03/09/vsession_longops-wheres-my-sql-statement/ -
Discoverer reports taking a long time!!!
Hi all,
One of our clients is complaining that the Discoverer reports have been taking a long time to run for the last few days; a report that used to take 30 minutes is now running for hours!
I have checked the SGA and I have killed the idle sessions but still there was no improvement in the performance.
The version of BI discoverer is 10 and database also is 10g and the platform is win server 2003.
I have checked the forums and they talk about explain plan, tkprof and other commands, but my problem is that I am unable to find the query that Discoverer is running. I mean, once the report is clicked, the query runs and gives an estimate of the time it will take. Can someone tell me where this query is stored so that I can check it?
Also there were no changes made in the query or to the database.
The temp space fills up to 100%. I increased the size of the temp space, but it still reaches 100%, and I also noticed that CPU utilisation goes to 100%.
I also increased the SGA, but still no luck.
Can someone kindly help me with what could be causing this problem,
and also guide me to some good documents on tuning Discoverer?
thanks in advance,
regards,
Edited by: user10243788 on Jan 4, 2010 12:47 AM

Hi,
The fact that the report used to run fast and now does not can be related to many things, but my guess is that the database statistics changed, and so the explain plan has changed.
This can happen when the volume of the data crosses a level where the Oracle optimizer changes its behaviour, but it can be other things as well.
Anyway, it is not really relevant, since it will be easier to tune the SQL than to find what has changed.
In order to find whether the problem is in Discoverer or in the SQL, extract the SQL as described above and run it in a SQL tool (SQL*Plus, TOAD, SQL Developer and so on).
The best way to get to the problem is to run a trace on your session and then use the TKPROF command to translate it to a text file you can analyze - your DBA team can assist; they should have no problem doing that.
By doing that you will get the problematic statements/functions/procedures that the report uses.
From there you can start working on improving the performance.
Performance is an expertise in itself, so I'm sorry I can't tell you where to start; I guess the start will be understanding the meaning of the explain plan.
Hope I helped a little, although I wish I had a magic answer for you.
BTW, until you resolve the problem, you can use the Discoverer scheduler to run the reports in the background, so the users will still get their data.
Tamir -
Attribute Change run taking too long time to complete.
Hi all,
The attribute change run has been taking a very long time to complete. It has to realign 50-odd aggregates, some by delta, some by reconstruction. Despite all the aggregates it used to finish quickly, but for the last 4-5 days it has been taking an indefinite time to finish.
Can anyone please suggest what reasons may be causing this, and what the solution might be? It is becoming a big issue, so kindly help with your advice.
Promise to reward your answer liberally.
Regards,
Pradyut.Hi,
Check with your functional owners in R/3 whether there are mass changes/realignments or classification changes going on in master data, e.g. reassigning materials to other material groups. This causes a major realignment in BW for all the aggregates. Otherwise, check for parameter changes, patches, or missing DB stats with your SAP Basis team.
Kind regards, Patrick Rieken. -
When a query is taking too long
When a query is taking too long, where and how do I start tuning it?
Here I've listed a few things that need to be considered, to the best of my knowledge and understanding:
1. What the SQL is waiting for (wait events)
2. Parameter modification to be done at system/session level
3. The query has to be tuned (using hints)
4. Gathering/deleting statistics
Any other things that need to be taken into account?
Which approach must be followed, and on what basis must that approach be chosen?

"When a query is taking too long, where and how do I start tuning it?" An explain plan will be a good start; a trace as well.

"1. What the SQL is waiting for (wait events)" When Oracle executes an SQL statement, it is not constantly executing; sometimes it has to wait for a specific event to happen before it can proceed. Read http://www.adp-gmbh.ch/ora/tuning/event.html

"2. Parameter modifications at system/session level" That depends on the parameter; tracing, for example, is done at session level.

"3. Tuning the query itself (using hints)" Hints could help you, but you must know how to use them.

"4. Gathering/deleting statistics" Do it in non-working hours, since it impacts database performance, but it is good practice.

"Can you list any other things that need to be taken into account?" You could use a lot of tools: trace, AWR. -
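Points 1 and 4 in the reply above can be explored directly from SQL*Plus; the schema and table names below are placeholders, not from the original thread:

```sql
-- 1. See what non-idle events sessions are currently waiting on
SELECT sid, event, wait_class, seconds_in_wait
FROM   v$session_wait
WHERE  wait_class <> 'Idle';

-- 4. Refresh optimizer statistics for one table (run in non-working hours)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_OWNER',        -- placeholder schema
    tabname => 'TRANSACTION_DATA', -- placeholder table
    cascade => TRUE);              -- also gather index statistics
END;
/
```

Stale or missing statistics are one of the most common reasons the optimizer picks a bad plan, so checking them is usually a cheap first step.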
Explain plan generation itself taking a very long time! What to do?
Hi,
I am trying to generate an explain plan for a query which has been running for more than 2 hours. I generate the plan with "explain plan for select ...", but that itself is taking a very long time.
Kindly suggest how I can reduce the time taken by the explain plan. Secondly, why is the explain plan itself taking such a long time?
thanks & regards
PKP

Just guessing here, but I've experienced this behaviour when I did two explains within a second. This is because a plan is identified by a statement_id or, if you don't provide one, by the timestamp (of DATE datatype). So if you execute two explain plans within the same second (using a script), the plan table holds two sets of data with the same primary key, and the hierarchical query that produces the nicely indented text will expand exponentially in this case.
Maybe time to clean up your plan_table ?
Hope this helps.
Regards,
Rob. -
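The advice above, cleaning up PLAN_TABLE and keeping plans distinguishable, can be sketched like this (the statement_id value and the query are illustrative):

```sql
-- Clear out stale rows that can blow up the hierarchical display query
DELETE FROM plan_table;
COMMIT;

-- Give each plan an explicit statement_id so two plans generated in the
-- same second do not collide on the timestamp
EXPLAIN PLAN SET STATEMENT_ID = 'slow_query_1' FOR
  SELECT * FROM dual;  -- replace with the real query

-- Display only the plan with that statement_id
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('plan_table', 'slow_query_1'));
```

Using an explicit statement_id also lets several developers share one PLAN_TABLE without stepping on each other's plans.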
SQL query taking a very long time to complete
Hi All,
DB: Oracle 9i R2
OS: Sun Solaris 8
Below is the SQL query that is taking a very long time to complete.
Could anyone help me out regarding this?
SELECT MAX (md1.ID) ID, md1.request_id, md1.jlpp_transaction_id,
md1.transaction_version
FROM transaction_data_arc md1
WHERE md1.transaction_name = :b2
AND md1.transaction_type = 'REQUEST'
AND md1.message_type_code = :b1
AND NOT EXISTS (
SELECT NULL
FROM transaction_data_arc tdar2
WHERE tdar2.request_id = md1.request_id
AND tdar2.jlpp_transaction_id != md1.jlpp_transaction_id
AND tdar2.ID > md1.ID)
GROUP BY md1.request_id,
md1.jlpp_transaction_id,
md1.transaction_version
Any alternate query to get the same results?
Kindly let me know if anyone knows.
regards,
kk.
Edited by: kk001 on Apr 27, 2011 11:23 AM

Dear,
/* Formatted on 2011/04/27 08:32 (Formatter Plus v4.8.8) */
SELECT MAX (md1.ID) ID, md1.request_id, md1.jlpp_transaction_id,
md1.transaction_version
FROM transaction_data_arc md1
WHERE md1.transaction_name = :b2
AND md1.transaction_type = 'REQUEST'
AND md1.message_type_code = :b1
AND NOT EXISTS (
SELECT NULL
FROM transaction_data_arc tdar2
WHERE tdar2.request_id = md1.request_id
AND tdar2.jlpp_transaction_id != md1.jlpp_transaction_id
AND tdar2.ID > md1.ID)
GROUP BY md1.request_id
,md1.jlpp_transaction_id
,md1.transaction_version

Could you please post here:
(a) the available indexes on transaction_data_arc table
(b) the description of transaction_data_arc table
(c) and the formatted explain plan you will get after executing the query and issuing:
select * from table (dbms_xplan.display_cursor);

Hope this helps,
Mohamed Houri -
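One common alternative worth testing (this is not from the thread, and it is not guaranteed to be equivalent: the original NOT EXISTS compares each row against unfiltered rows of the table, and it can also reject low-ID rows of the surviving transaction, so verify the results match before adopting it) is to replace the correlated anti-join with a single analytic pass over the filtered rows:

```sql
-- Sketch: keep, per request_id, the transaction whose rows include the
-- request's highest ID, then aggregate as before. One scan of the table
-- instead of a correlated NOT EXISTS probe per row.
SELECT MAX(ID) AS ID, request_id, jlpp_transaction_id, transaction_version
FROM (
  SELECT md1.*,
         MAX(md1.ID) OVER (PARTITION BY md1.request_id)            AS max_req_id,
         MAX(md1.ID) OVER (PARTITION BY md1.request_id,
                                        md1.jlpp_transaction_id)   AS max_txn_id
  FROM   transaction_data_arc md1
  WHERE  md1.transaction_name  = :b2
  AND    md1.transaction_type  = 'REQUEST'
  AND    md1.message_type_code = :b1
)
WHERE max_txn_id = max_req_id  -- this transaction owns the request's top ID
GROUP BY request_id, jlpp_transaction_id, transaction_version;
```

Whether this is faster depends on the indexes and data distribution, which is exactly why Mohamed asked for the table description and the actual plan.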
Taking a long time to execute a select count(*) statement.
Hi all,
My table has 40 columns and no primary key column, and it contains more than 5M records. It is taking a long time to execute a simple SQL statement:
select count(*) takes 1 min 30 sec, whereas select count(index_column) finishes within 3 s. I have tried the following workarounds:
Analyzed the table.
Created the required indexes.
Yet I am getting the same performance. Please help me solve this issue.
Thanks

BlueDiamond wrote:
COUNT(*) counts the number of rows produced by the query, whereas COUNT(1) counts the number of 1 values.

Would you care to show details that prove that?
In fact, if you use count(1) the optimizer actually rewrites it internally as count(*).
count(*) and count(1) have identical execution plans.
Re: Count(*)/Count(1)
http://asktom.oracle.com/pls/asktom/f?p=100:11:6346014113972638::::P11_QUESTION_ID:1156159920245
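A small sketch of the semantics being debated above; the table and column names are illustrative, not from the thread:

```sql
-- count(*) counts rows; count(col) counts non-NULL values of col.
-- The two agree only when col is NOT NULL, which is also why counting a
-- NOT NULL indexed column can be answered from the (smaller) index alone.
CREATE TABLE t (indexed_col NUMBER, other_col NUMBER);
CREATE INDEX t_ix ON t (indexed_col);

INSERT INTO t VALUES (1, 10);
INSERT INTO t VALUES (2, NULL);
INSERT INTO t VALUES (NULL, 30);

SELECT COUNT(*)           FROM t;  -- 3: all rows
SELECT COUNT(1)           FROM t;  -- 3: 1 is never NULL, so same as count(*)
SELECT COUNT(indexed_col) FROM t;  -- 2: the NULL in indexed_col is skipped
```

So if the original poster's index_column allows NULLs, the fast count(index_column) may simply be returning a different (smaller) answer than count(*).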