Function taking a long time to execute
Hi,
I have a scenario where I am using a TABLE function in a join condition with a normal table, but it is taking a long time to execute.
The function is given below:
CREATE OR REPLACE FUNCTION GET_ACCOUNT_TYPE(
  SUBNO VARCHAR2 DEFAULT NULL)          -- closing parenthesis was missing in the original
RETURN ACCOUNT_TYP_KEY_1 PIPELINED AS
  V_SUBNO             VARCHAR2(20);
  V_SUBS_TYP          VARCHAR2(10);
  V_ACCOUNT_TYP_KEY   VARCHAR2(10);
  V_ACCOUNT_TYP_KEY_1 VARCHAR2(10);
  CURSOR C1_REC IS
    SELECT SUBNO, SUBSCR_TYPE
      FROM CTVA_ETL.RA_CRM_USER_INFO
     GROUP BY SUBNO, SUBSCR_TYPE;
BEGIN
  OPEN C1_REC;
  LOOP
    FETCH C1_REC INTO V_SUBNO, V_SUBS_TYP;
    EXIT WHEN C1_REC%NOTFOUND;  -- missing in the original: without this the loop never terminates
    IF V_SUBS_TYP IS NOT NULL THEN
      BEGIN
        SELECT ACCOUNT_TYPE_KEY
          INTO V_ACCOUNT_TYP_KEY
          FROM DIM_RA_MAST_ACCOUNT_TYPE,
               RA_CRM_USER_INFO
         WHERE ACCOUNT_TYPE_KEY = V_SUBS_TYP
           AND ACCOUNT_TYPE_KEY = RA_CRM_USER_INFO.SUBSCR_TYPE
           AND SUBNO = V_SUBNO;
        V_ACCOUNT_TYP_KEY_1 := V_ACCOUNT_TYP_KEY;  -- missing in the original: on a successful lookup nothing was assigned, so every row fell back to '-99'
      EXCEPTION
        WHEN NO_DATA_FOUND THEN
          V_ACCOUNT_TYP_KEY_1 := '-99';
      END;
    ELSE
      V_ACCOUNT_TYP_KEY_1 := '-99';
    END IF;
    -- the original wrapped this in a SELECT DISTINCT ... FROM dual loop, adding a context switch per row for no benefit
    PIPE ROW (ACCOUNT_TYP_KEY(V_SUBNO, V_ACCOUNT_TYP_KEY_1));
  END LOOP;
  CLOSE C1_REC;  -- in the original the CLOSE came after RETURN and was never reached
  RETURN;
END;
The above function should return rows keyed by subscriber type (if SUBSCR_TYPE is not null it should return the ACCOUNT_TYPE_KEY and SUBNO, else '-99').
But the lookup result is never assigned to the output variable, so all the rows come back as:
SUBNO ACCOUNT_TYP
21 -99
22 -99
23 -99
24 -99
25 -99
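The row-by-row lookup above can usually be collapsed into one set-based statement, which removes the per-row context switches entirely. A minimal sketch, using the table and column names from the function; the outer join assumes DIM_RA_MAST_ACCOUNT_TYPE.ACCOUNT_TYPE_KEY matches SUBSCR_TYPE, as in the inner SELECT:

```sql
-- Sketch only, not a drop-in replacement: verify the join and column names
-- against the real schema before using this instead of the pipelined function.
SELECT u.SUBNO,
       NVL(d.ACCOUNT_TYPE_KEY, '-99') AS ACCOUNT_TYP
  FROM (SELECT DISTINCT SUBNO, SUBSCR_TYPE
          FROM CTVA_ETL.RA_CRM_USER_INFO) u
  LEFT JOIN DIM_RA_MAST_ACCOUNT_TYPE d
    ON d.ACCOUNT_TYPE_KEY = u.SUBSCR_TYPE;
```

Joined directly to the other table, this also lets the optimizer choose a hash or merge join instead of calling the function once per row.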
Thanks and Regards
Hi LMLobo,
In addition to Sebastian's answer, you can refer to the document Server Memory Server Configuration Options to check whether the maximum server memory setting of SQL Server was changed on the new server. You can also compare the network packet size setting of SQL Server, as well as the network connectivity, on both servers. The following link covers troubleshooting SSIS package performance issues:
http://technet.microsoft.com/en-us/library/dd795223(v=sql.100).aspx
Regards,
Mike Yin
TechNet Community Support
Similar Messages
-
Taking long time to execute views
Hi All,
My query is taking a long time to execute (I am using standard views in the query).
XLA_INV_AEL_GL_V and XLA_WIP_AEL_GL_V: these standard views themselves take a long time to execute, but I need the information from them. The join conditions are:
WHERE gjh.je_batch_id = gjb.je_batch_id AND
gjh.je_header_id = gjl.je_header_id AND
gjh.je_header_id = xlawip.je_header_id AND
gjl.je_header_id = xlawip.je_header_id AND
gjl.je_line_num = xlawip.je_line_num AND
gcc.code_combination_id = gjl.code_combination_id AND
gjl.code_combination_id = xlawip.code_combination_id AND
gjb.set_of_books_id = xlawip.set_of_books_id AND
gjh.je_source = 'Inventory' AND
gjh.je_category = 'WIP' AND
gp.period_set_name = 'Accounting' AND
gp.period_name = gjl.period_name AND
gp.period_name = gjh.period_name AND
gp.start_date +1 between to_date(startdate,'DD-MON-YY') AND
to_date(enddate,'DD-MON-YY') AND
gjh.status =nvl(lstatus,gjh.status)
Could anyone help me make it execute faster?
Thanks
Madhu

When your query takes too long... http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
-
Query taking a long time to execute after migrating to 10g R2
Hi
We recently migrated the database from 9i to 10gR2 (10.2.0.2.0). This query ran in acceptable time before the upgrade on 9i; now it takes a very long time to execute. Can you please let me know what I should do to improve the performance? We gather stats every day.
Thanks for your help,
Shree
======================================================================================
SELECT cr.cash_receipt_id
,cr.pay_from_customer
,cr.receipt_number
,cr.receipt_date
,cr.amount
,cust.account_number
,crh.gl_date
,cr.set_of_books_id
,sum(ra.amount_applied) amount_applied
FROM AR_CASH_RECEIPTS_ALL cr
,AR_RECEIVABLE_APPLICATIONS_ALL ra
,hz_cust_accounts cust
,AR_CASH_RECEIPT_HISTORY_ALL crh
,GL_PERIOD_STATUSES gps
,FND_APPLICATION app
WHERE cr.cash_receipt_id = ra.cash_receipt_id
AND ra.status = 'UNAPP'
AND cr.status <> 'REV'
AND cust.cust_account_id = cr.pay_from_customer
AND substr(cust.account_number,1,2) <> 'SI' -- Don't allocate Unapplied receipts FOR SI customers
AND crh.cash_receipt_id = cr.cash_receipt_id
AND app.application_id = gps.application_id
AND app.application_short_name = 'AR'
AND gps.period_name = 'May-07'
AND crh.gl_date <= gps.end_date
AND cr.receipt_number not like 'WH%'
-- AND cust.customer_number = '0000079260001'
GROUP BY cr.cash_receipt_id
,cr.pay_from_customer
,cr.receipt_number
,cr.receipt_date
,cr.amount
,cust.account_number
,crh.gl_date
,cr.set_of_books_id
HAVING sum(ra.amount_applied) > 0;
=========================================================================================
Here is the explain plan in 10g r2 (10.2.0.2.0)
PLAN_TABLE_OUTPUT
Plan hash value: 2617075047
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 92340 | 10M| | 513K (1)|
|* 1 | FILTER | | | | | |
| 2 | HASH GROUP BY | | 92340 | 10M| 35M| 513K (1)|
| 3 | TABLE ACCESS BY INDEX ROWID | AR_RECEIVABLE_APPLICATIONS_ALL | 2 | 34 |
| 4 | NESTED LOOPS | | 184K| 21M| | 510K (1)|
|* 5 | HASH JOIN | | 99281 | 9M| 3296K| 176K (1)|
|* 6 | TABLE ACCESS FULL | HZ_CUST_ACCOUNTS | 112K| 1976K| | 22563 (1)|
|* 7 | HASH JOIN | | 412K| 33M| 25M| 151K (1)|
| 8 | TABLE ACCESS BY INDEX ROWID | AR_CASH_RECEIPT_HISTORY_ALL | 332K| 4546K|
| 9 | NESTED LOOPS | | 498K| 19M| | 26891 (1)|
| 10 | NESTED LOOPS | | 2 | 54 | | 4 (0)|
| 11 | TABLE ACCESS BY INDEX ROWID| FND_APPLICATION | 1 | 8 | | 1 (0)|
|* 12 | INDEX UNIQUE SCAN | FND_APPLICATION_U3 | 1 | | | 0 (0)|
| 13 | TABLE ACCESS BY INDEX ROWID| GL_PERIOD_STATUSES | 2 | 38 | | 3 (0)
|* 14 | INDEX RANGE SCAN | GL_PERIOD_STATUSES_U1 | 1 | | | 2 (0)|
|* 15 | INDEX RANGE SCAN | AR_CASH_RECEIPT_HISTORY_N2 | 332K| | | 1011 (1)
PLAN_TABLE_OUTPUT
|* 16 | TABLE ACCESS FULL | AR_CASH_RECEIPTS_ALL | 5492K| 235M| | 108K
|* 17 | INDEX RANGE SCAN | AR_RECEIVABLE_APPLICATIONS_N1 | 4 | | | 2
Predicate Information (identified by operation id):
1 - filter(SUM("RA"."AMOUNT_APPLIED")>0)
5 - access("CUST"."CUST_ACCOUNT_ID"="CR"."PAY_FROM_CUSTOMER")
6 - filter(SUBSTR("CUST"."ACCOUNT_NUMBER",1,2)<>'SI')
7 - access("CRH"."CASH_RECEIPT_ID"="CR"."CASH_RECEIPT_ID")
12 - access("APP"."APPLICATION_SHORT_NAME"='AR')
14 - access("APP"."APPLICATION_ID"="GPS"."APPLICATION_ID" AND "GPS"."PERIOD_NAME"='May-07')
filter("GPS"."PERIOD_NAME"='May-07')
15 - access("CRH"."GL_DATE"<="GPS"."END_DATE")
16 - filter("CR"."STATUS"<>'REV' AND "CR"."RECEIPT_NUMBER" NOT LIKE 'WH%')
17 - access("CR"."CASH_RECEIPT_ID"="RA"."CASH_RECEIPT_ID" AND "RA"."STATUS"='UNAPP')
filter("RA"."CASH_RECEIPT_ID" IS NOT NULL)
Here is the explain plan in 9i
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=445977 Card=78530 By
tes=9423600)
1 0 FILTER
2 1 SORT (GROUP BY) (Cost=445977 Card=78530 Bytes=9423600)
3 2 HASH JOIN (Cost=443717 Card=157060 Bytes=18847200)
4 3 HASH JOIN (Cost=99563 Card=94747 Bytes=9758941)
5 4 TABLE ACCESS (FULL) OF 'HZ_CUST_ACCOUNTS' (Cost=12
286 Card=110061 Bytes=1981098)
6 4 HASH JOIN (Cost=86232 Card=674761 Bytes=57354685)
7 6 TABLE ACCESS (BY INDEX ROWID) OF 'AR_CASH_RECEIP
T_HISTORY_ALL' (Cost=17532 Card=542304 Bytes=7592256)
8 7 NESTED LOOPS (Cost=17536 Card=809791 Bytes=332
01431)
9 8 NESTED LOOPS (Cost=4 Card=1 Bytes=27)
10 9 TABLE ACCESS (BY INDEX ROWID) OF 'FND_APPL
ICATION' (Cost=1 Card=1 Bytes=8)
11 10 INDEX (UNIQUE SCAN) OF 'FND_APPLICATION_
U3' (UNIQUE)
12 9 TABLE ACCESS (BY INDEX ROWID) OF 'GL_PERIO
D_STATUSES' (Cost=3 Card=1 Bytes=19)
13 12 INDEX (RANGE SCAN) OF 'GL_PERIOD_STATUSE
S_U1' (UNIQUE) (Cost=2 Card=1)
14 8 INDEX (RANGE SCAN) OF 'AR_CASH_RECEIPT_HISTO
RY_N2' (NON-UNIQUE) (Cost=1740 Card=542304)
15 6 TABLE ACCESS (FULL) OF 'AR_CASH_RECEIPTS_ALL' (C
ost=60412 Card=8969141 Bytes=394642204)
16 3 TABLE ACCESS (FULL) OF 'AR_RECEIVABLE_APPLICATIONS_A
LL' (Cost=337109 Card=15613237 Bytes=265425029)

Hi,
The plan between 9i and 10g is pretty much the same, but the amount of data fetched has increased considerably. I suspect the query was performing slowly even in 9i.
AR_CASH_RECEIPT_HISTORY_ALL now shows an estimated 332,000 rows in the 10g plan, versus 17,532 in 9i.
AR_CASH_RECEIPT_HISTORY_N2 now shows 332,000, versus 1,740 in 9i.
Try creating some indexes on
AR_CASH_RECEIPTS_ALL
hz_cust_accounts -
Reports taking long time to execute
Hi,
There are a few reports in SSRS which take almost 6-8 minutes to complete and display the data.
I am using an Oracle database as the source.
When I checked the query performance, I found that the main dataset takes almost 2-3 minutes to execute in SQL Developer. There are also two parameters whose queries take almost a minute each to execute.
When I run the report without these two parameters, the report execution time is reduced by more than 3 minutes.
I am also using 3-4 groupings in the report, with 6 columns aggregated at each grouping.
The reports are tabular and contain headers, footers, and 2-3 text boxes to display parameter values.
Can you please suggest some ways to optimize the queries and reduce the time the report takes to complete?

Hi sudipta,
According to your description, the report takes too long to render when you access it. Right?
In this scenario, many possibilities can reduce report performance. We suggest you check the report on the report server first: go to the ReportServer ExecutionLog to see which part takes the most time.
Then do some troubleshooting to improve report performance. Here is an article for your reference:
Troubleshooting Reports: Report Performance
If you have any question, please feel free to ask.
Best Regards,
Simon Hou -
Script is taking long time to execute
Hi:
This is the query I am executing; it takes a lot of time. How can I use temp tables to make it run fast?
There are only about 6000 records, but even then it is taking a lot of time.
SET NOCOUNT ON
DECLARE @SName VARCHAR(40), @ContactNo VARCHAR(11), @Code VARCHAR(20), @EMail VARCHAR(40), @CodeId INT
DECLARE C1 CURSOR
STATIC FOR SELECT [Staff Name],[Contact No],[Code],[EMail ID] FROM AB
OPEN C1
IF @@CURSOR_ROWS > 0
FETCH NEXT FROM C1 INTO @SName,@ContactNo,@Code,@EMail
WHILE @@FETCH_STATUS = 0
BEGIN
IF EXISTS (SELECT * FROM AddressBook WHERE
Name=@SName)
BEGIN
UPDATE AddressBook
SET MobileNo=@ContactNo,
EMailId=@EMail
WHERE Name=@SName
END
ELSE
BEGIN
INSERT INTO AddressBook
VALUES(@SName,'',@ContactNo,@EMail,'',GETDATE(),GETDATE(),'','A',8,NULL,NULL)
END
FETCH NEXT FROM C1 INTO @SName,@ContactNo,@Code,@EMail -- missing in the original: without this the loop re-processes the first row forever
END
CLOSE C1
DEALLOCATE C1
SET NOCOUNT OFF

Instead of the cursor, have you tried looking at the MERGE command?
IF OBJECT_ID('t1') IS NOT NULL
DROP TABLE t1
GO
CREATE TABLE t1 (id INT PRIMARY KEY, name1 VARCHAR(10))
INSERT INTO t1
SELECT 1, 'name 1' UNION ALL
SELECT 2, 'name 2' UNION ALL
SELECT 3, 'name 3' UNION ALL
SELECT 4, 'name 4' UNION ALL
SELECT 5, 'name 5'
GO
DECLARE @id INT = 6, @name1 VARCHAR(10) = 'name 6'
MERGE t1
USING (SELECT @id AS id, @name1 AS name1) AS t2 ON t1.id = t2.id
WHEN MATCHED
THEN UPDATE SET t1.name1 = t2.name1
WHEN NOT MATCHED
THEN INSERT VALUES(@id, @name1 );
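The same pattern can be applied to the AddressBook case from the question. This is only a sketch: the source column names ([Staff Name], [Contact No], [EMail ID]) and the INSERT column order are copied from the cursor version and should be checked against the real tables.

```sql
-- Hypothetical adaptation of the MERGE pattern above to the question's tables.
-- Assumes Name is unique in AB; duplicate source names would make MERGE fail.
MERGE AddressBook AS tgt
USING (SELECT [Staff Name] AS SName, [Contact No] AS ContactNo, [EMail ID] AS EMail
       FROM AB) AS src
   ON tgt.Name = src.SName
WHEN MATCHED THEN
    UPDATE SET tgt.MobileNo = src.ContactNo,
               tgt.EMailId  = src.EMail
WHEN NOT MATCHED THEN
    INSERT VALUES (src.SName, '', src.ContactNo, src.EMail, '',
                   GETDATE(), GETDATE(), '', 'A', 8, NULL, NULL);
```

A single MERGE processes all 6000 rows in one set-based statement instead of 6000 round trips through the cursor loop.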
Best Regards,
Uri Dimant
SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Consulting:
Large scale of database and data cleansing
Remote DBA Services:
Improves MS SQL Database Performance
SQL Server Integration Services:
Business Intelligence -
Collection function taking more time to execute
Hi all,
I am using a collection function in my SQL report and it is taking plenty of time to return rows. Is there any way to get the resulting rows (using the collection) without consuming more time?
SELECT tab_to_string(CAST(COLLECT(wot_vw."Name") AS t_varchar2_tab)) FROM REPORT_VW wot_vw
WHERE(wot_vw."Task ID" = wot."task_id") GROUP BY wot_rept_vw."Task ID") as "WO"
from TASK_TBL wot
INNER JOIN
(SELECT "name", MAX("task_version") AS MaxVersion from TASK_TBL
GROUP BY "name") q
ON (wot."name" = q."name" AND wot."task_version" = q.MaxVersion)
order by NLSSORT(wot."name",'NLS_SORT=generic_m')
Here this order by is causing problem
Apex version is 4.0
Thanks.
Edited by: apex on Feb 21, 2012 7:24 PM

'My car doesn't start, please help me to start my car.'
Do you think we are clairvoyant?
Or is your salary subtracted for every letter you type here?
Please be aware this is not a chatroom, and we can not see your webcam.
Sybrand Bakker
Senior Oracle DBA -
Hi All,
We are working on BI version 7.0.
On the variable pop-up screen we have two InfoObjects:
1. Fiscal Year Period
2. JOA (Joint Operating Agreement)
If you press F4 for JOA, it takes a long time to execute and finally the application closes. The same situation occurs in RSRT.
If I enter without JOA, the query gives output, but I have to restrict the query by JOA.
I have changed the JOA properties in the Query Designer:
Query execution for filter value selection = Values in master data table
but the situation is still the same.
Could you please suggest a solution?
Thanks & Regards,
PK

Hi Kamal,
You can set that at the query level in the query designer for each query.
1. Select the corresponding characteristic in the Query Designer.
2. Go to the "Extended" tab in the properties.
3. Select "Values in master data table" under "Query execution for filter value selection".
Also see these recommendations:
Note 748623 - Input help (F4) has a very long runtime - recommendations
Hope this helps.
CK -
Identifying which part of stored procedure is taking long time
Hi Everyone,
I have a stored procedure which is taking a long time to execute, and I am trying to understand which part/query in it is taking the time.
It involves lots of table variables and numerous queries. Could anyone please help me identify which query/part of the stored procedure takes the longest to execute?
Thanks in advance.

Hi Vivek -
I am only familiar with running the plan visualization for a single SQL query.
Could you please guide me how to run it for a procedure.
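One generic way to do this for a procedure (a sketch, not a complete tuning method): query the plan-cache DMVs, which exist in SQL Server 2005 and later, for the cached statements of that procedure ordered by elapsed time. 'dbo.YourProc' is a placeholder name, and reading these views requires the VIEW SERVER STATE permission.

```sql
-- Find the most expensive cached statements of one procedure.
-- total_elapsed_time is in microseconds, summed over all executions.
SELECT TOP (20)
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                 (CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2 + 1) AS statement_text,
       qs.execution_count,
       qs.total_elapsed_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.objectid = OBJECT_ID('dbo.YourProc')   -- placeholder procedure name
ORDER BY qs.total_elapsed_time DESC;
```

Note that table variables hide their row counts from the optimizer, so once the slow statement is found, the table variables feeding it are a natural suspect.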
Thanks in Advance. -
Stopping a Query taking more time to execute in runtime in Oracle Forms.
Hi,
In the present application, one of the Oracle Forms screens takes a long time to execute a query. The user wants an option to stop the query midway and browse the result (whatever has been fetched before stopping the query).
We have tried three approaches:
1. Set the maximum fetch records at the form and block level.
2. Set the maximum fetch time at the form and block level.
The above two methods do not provide an appropriate solution for us.
3. The third approach we applied is setting the interaction mode to "NON BLOCKING" at the form level.
This seemed to work: while the query took a long time to execute, the Oracle app server prompted a message to press Esc to cancel the query, and it displayed the results fetched up to that point.
But the drawback is that pressing Esc kills the session itself, which causes the entire application to collapse.
Please suggest whether there is any alternative approach, or how to overcome this particular scenario.
This kind of facility is already present in TOAD and PL/SQL Developer, where we can stop an executing query and browse the rows fetched up to that point. Is a similar facility available in Oracle Forms?
Thanks and Regards,
Suraj
Edited by: user10673131 on Jun 25, 2009 4:55 AM

Hello Friend,
Your query will definitely take more time, or even fail, in PROD because of the way it is written. Here are a few observations that may help:
1. XLA_AR_INV_AEL_SL_V / XLA_AEL_SL_V: never use a view inside such a long query, because a view is just a window onto the records;
when it is joined to other tables, all the tables used to create the view also become part of the join condition.
First of all, please check whether you really need this view. I guess you are using it to check whether the records have been created as journal entries or not?
Please check the possibility of finding this through other AR tables.
2. Remove the _ALL tables and instead use the corresponding org-specific views (if you are on 11i) or the synonyms (in R12).
For example: for ra_cust_trx_types_all use ra_cust_trx_types.
This ensures that the query executes only for the ORG_IDs assigned to that responsibility.
3. Check with the DBA whether GATHER SCHEMA STATS has been run at least for the ONT and RA tables.
You can also check this yourself using:
SELECT last_analyzed FROM all_tables WHERE table_name = 'RA_CUSTOMER_TRX_ALL';
If the tables are not analyzed, the CBO will not be able to tune your query.
4. Try to remove the DISTINCT keyword. This is the MAJOR reason for this problem.
5. If it is a report, try to separate the logic into separate queries (using a procedure), populate the whole data set into a custom table, and use that custom table to generate the report.
Thanks,
Neeraj Shrivastava
[email protected]
Edited by: user9352949 on Oct 1, 2010 8:02 PM
Edited by: user9352949 on Oct 1, 2010 8:03 PM -
Performance-tuned report taking more time to execute - URGENT
Dear Experts,
One report program is taking a long time to execute in a background session. I took the report up for performance tuning, but it still takes more than 12 hours to execute.
The Code is given below.
Before Tune.
DATA : BEGIN OF ITAB OCCURS 0,
LGOBE LIKE T001L-LGOBE,
105DT LIKE MKPF-BUDAT,
XBLNR LIKE MKPF-XBLNR,
BEDAT LIKE EKKO-BEDAT,
LIFNR LIKE EKKO-LIFNR,
NAME1 LIKE LFA1-NAME1,
EKKO LIKE EKKO-BEDAT,
BISMT LIKE MARA-BISMT,
MAKTX LIKE MAKT-MAKTX,
NETPR LIKE EKPO-NETPR,
PEINH LIKE EKPO-PEINH,
VALUE TYPE P DECIMALS 2,
DISPO LIKE MARC-DISPO,
DSNAM LIKE T024D-DSNAM,
AGE TYPE P DECIMALS 0,
PARLIFNR LIKE EKKO-LIFNR,
PARNAME1 LIKE LFA1-NAME1,
MBLNR LIKE MSEG-MBLNR,
MJAHR LIKE MSEG-MJAHR,
ZEILE LIKE MSEG-ZEILE,
BWART LIKE MSEG-BWART,
MATNR LIKE MSEG-MATNR,
WERKS LIKE MSEG-WERKS,
LIFNR LIKE MSEG-LIFNR,
MENGE LIKE MSEG-MENGE,
MEINS LIKE MSEG-MEINS,
EBELN LIKE MSEG-EBELN,
EBELP LIKE MSEG-EBELP,
LGORT LIKE MSEG-LGORT,
SMBLN LIKE MSEG-SMBLN,
BUKRS LIKE MSEG-BUKRS,
GSBER LIKE MSEG-GSBER,
INSMK LIKE MSEG-INSMK,
XAUTO LIKE MSEG-XAUTO,
END OF ITAB.
SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
FROM MSEG
INTO CORRESPONDING FIELDS OF TABLE ITAB
WHERE WERKS EQ P_WERKS AND
MBLNR IN S_MBLNR AND
BWART EQ '105' and
mblnr ne '5002361303' and
mblnr ne '5003501080' and
mblnr ne '5002996300' and
mblnr ne '5002996407' AND
mblnr ne '5003587026' AND
mblnr ne '5003587026' AND
mblnr ne '5003493186' AND
mblnr ne '5002720583' AND
mblnr ne '5002928122' AND
mblnr ne '5002628263'.
After tune.
TYPES : BEGIN OF ST_ITAB ,
MBLNR LIKE MSEG-MBLNR,
MJAHR LIKE MSEG-MJAHR,
ZEILE LIKE MSEG-ZEILE,
BWART LIKE MSEG-BWART,
MATNR LIKE MSEG-MATNR,
WERKS LIKE MSEG-WERKS,
LIFNR LIKE MSEG-LIFNR,
MENGE LIKE MSEG-MENGE,
MEINS LIKE MSEG-MEINS,
EBELN LIKE MSEG-EBELN,
EBELP LIKE MSEG-EBELP,
LGORT LIKE MSEG-LGORT,
SMBLN LIKE MSEG-SMBLN,
BUKRS LIKE MSEG-BUKRS,
GSBER LIKE MSEG-GSBER,
INSMK LIKE MSEG-INSMK,
XAUTO LIKE MSEG-XAUTO,
END OF ST_ITAB.
DATA : ITAB TYPE STANDARD TABLE OF ST_ITAB WITH HEADER LINE.
SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
FROM MSEG
INTO TABLE ITAB
WHERE WERKS EQ P_WERKS AND
MBLNR IN S_MBLNR AND
BWART EQ '105' AND
MBLNR NE '5002361303' AND
MBLNR NE '5003501080' AND
MBLNR NE '5002996300' AND
MBLNR NE '5002996407' AND
MBLNR NE '5003587026' AND
MBLNR NE '5003587026' AND
MBLNR NE '5003493186' AND
MBLNR NE '5002720583' AND
MBLNR NE '5002928122' AND
MBLNR NE '5002628263'.
Please give me the solution.
Reward available for a useful answer.
Thanks in advance,
Jai

Hi,
The SELECT statement accessing the MSEG table is often slow.
To improve performance on MSEG:
1. Check for the proper notes in the Service Marketplace if you are working on a CIN version.
2. Index the MSEG table.
3. Check and limit the columns in the SELECT statement.
Possible Way.
SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
FROM MSEG
INTO CORRESPONDING FIELDS OF TABLE ITAB
WHERE WERKS EQ P_WERKS AND
MBLNR IN S_MBLNR AND
BWART EQ '105' .
DELETE ITAB WHERE MBLNR EQ '5002361303'.
DELETE ITAB WHERE MBLNR EQ '5003501080'.
DELETE ITAB WHERE MBLNR EQ '5002996300'.
DELETE ITAB WHERE MBLNR EQ '5002996407'.
DELETE ITAB WHERE MBLNR EQ '5003587026'.
DELETE ITAB WHERE MBLNR EQ '5003493186'.
DELETE ITAB WHERE MBLNR EQ '5002720583'.
DELETE ITAB WHERE MBLNR EQ '5002928122'.
DELETE ITAB WHERE MBLNR EQ '5002628263'.
Regards
Bala.M
Edited by: Bala Malvatu on Feb 7, 2008 9:18 PM -
Getting Long time to execute select count(*) statement.
Hi all,
My table has 40 columns and no primary key column. It contains more than 5M records, and simple SQL statements take a long time to execute.
For example, SELECT COUNT(*) takes 1 min 30 sec, while SELECT COUNT(index_column) finishes within 3 s. I did the following workarounds:
Analyzed the table.
Created the required indexes.
Yet I am getting the same performance issues. Please help me solve this issue.
Thanks

BlueDiamond wrote:
COUNT(*) counts the number of rows produced by the query, whereas COUNT(1) counts the number of 1 values.

Would you care to show details that prove that?
In fact, if you use COUNT(1), the optimizer actually rewrites it internally as COUNT(*).
COUNT(*) and COUNT(1) have identical executions.
Re: Count(*)/Count(1)
http://asktom.oracle.com/pls/asktom/f?p=100:11:6346014113972638::::P11_QUESTION_ID:1156159920245 -
Rank Function taking a long time to execute in SAP HANA
Hi All,
I have a couple of reports with a rank function which are timing out or taking a really long time to execute. Is there any way to get the result in less time when rank functions are involved?
the following is a sample of how the Query looks,
SQL 1:
select a.column1,
b.column1,
rank () over(partition by a.column1 order by sum(b.column2) asc)
from "_SYS_BIC"."Analyticview1" b
join "Table1" a
on (a.column2 = b.column3)
group by a.column1,
b.column1;
SQL 2:
select a.column1,
b.column1,
rank () over( order by min(b.column1) asc) WJXBFS1
from "_SYS_BIC"."Analytic view2" b
cross join "Table 2" a
where (a.column2 like '%a%'
and b.column1 between 100 and 200)
group by a.column1,
b.column1
When I visualize the execution plan, the rank function is the part taking the longer time frame, so I executed the same SQL without the rank()/partition/order by (only with SUM() in SQL 1 and MIN() in SQL 2); even that took around an hour to return the result.
1. Does anyone have any idea how to make these queries execute faster?
2. Does the latency have anything to do with the rank function, or could it be the size of the result set?
3. Is there any workaround to implement the rank function/partition inside the analytic view itself? If yes, will this return the result faster?
Thank you for your help!!
-Gayathri

Krishna,
I tried both of them, the graphical view and the CE function.
Both also take a long time to execute.
The graphical view gave me the following error after 2 hours 36 minutes:
Could not execute 'SELECT ORDER_ID,ITEM_ID,RANK from "_SYS_BIC"."EMMAPERF/ORDER_FACT_HANA_CV" group by ...' in 2:36:23.411 hours.
SAP DBTech JDBC: [2048]: column store error: search table error: [2620] executor: plan operation failed
CE function - I aborted after 40 minutes.
Do you know the syntax to declare a local variable for use in a CE function? -
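For the rank queries earlier in this thread, one pattern that sometimes helps is to aggregate first in a derived table and apply RANK() over the (much smaller) aggregated result, so the window function no longer runs against the raw analytic view. A sketch using the placeholder names from SQL 1 above:

```sql
-- Sketch: intended to give the same result as SQL 1, but the window
-- function only sees the pre-aggregated rows. All names are the
-- placeholders from the question, not real objects.
SELECT t.a_col,
       t.b_col,
       RANK() OVER (PARTITION BY t.a_col ORDER BY t.total ASC) AS rnk
FROM ( SELECT a.column1 AS a_col,
              b.column1 AS b_col,
              SUM(b.column2) AS total
       FROM "_SYS_BIC"."Analyticview1" b
       JOIN "Table1" a ON a.column2 = b.column3
       GROUP BY a.column1, b.column1 ) t;
```

Whether this is faster depends on how much the GROUP BY reduces the row count; since the poster's test without RANK() still took an hour, the analytic-view scan itself likely also needs attention.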
Connecting to the database taking long time to connect database server
Hi
When I execute a procedure I get the message below at the bottom of Oracle SQL Developer:
"Connecting to the database"
It takes more than 10 minutes. Please guide.

Hi,
Have you also installed a normal Oracle client on your host? — normal Oracle Client
Did you connect with host:port:sid or with an Oracle naming service? — through a TNS service
Can you test tnsping <alias>? — yes, it works fine
Do other users have the same problem? — yes
Did you connect through a WAN or LAN connection? — LAN (intranet)
Can you tell us more about your client/database setup?
Database setup:
OS: Windows 2008 Server
Version: 11.1.0
Client: 11.1.0
OS: Windows 2008 Server
Now I am not able to execute a single SELECT query against a table with 6 records and 15 columns; I have waited 30 minutes and still have no results.
Only one table behaves like this; the rest work fine.
Edited by: user9235224 on Oct 6, 2012 7:06 PM
Simple query is taking long time
Hi Experts,
The query below is taking a long time to execute.
[code]SELECT FS.*
FROM ORL.FAX_STAGE FS
INNER JOIN
ORL.FAX_SOURCE FSRC
INNER JOIN
GLOBAL_BU_MAPPING GBM
ON GBM.BU_ID = FSRC.BUID
ON UPPER (FSRC.FAX_NUMBER) = UPPER (FS.DESTINATION)
WHERE FSRC.IS_DELETED = 'N'
AND GBM.BU_ID IS NOT NULL
AND UPPER (FS.FAX_STATUS) ='COMPLETED';[/code]
this query is returning 1645457 records.
[code]PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 625K| 341M| 45113 (1)|
| 1 | HASH JOIN | | 625K| 341M| 45113 (1)|
| 2 | NESTED LOOPS | | 611 | 14664 | 22 (0)|
| 3 | TABLE ACCESS FULL| FAX_SOURCE | 2290 | 48090 | 22 (0)|
| 4 | INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID | 1 | 3 | 0 (0)|
| 5 | TABLE ACCESS FULL | FAX_STAGE | 2324K| 1214M| 45076 (1)|
PLAN_TABLE_OUTPUT
Note
- 'PLAN_TABLE' is old version
15 rows selected.[/code]
The distinct number of records in each table.
[code]SELECT FAX_STATUS,count(*)
FROM fax_STAGE
GROUP BY FAX_STATUS;
FAX_STATUS COUNT(*)
BROKEN 10
Broken - New 9
Completed 2324493
New 20
SELECT is_deleted,COUNT(*)
FROM FAX_SOURCE
GROUP BY IS_DELETED;
IS_DELETED COUNT(*)
N 2290
Y 78[/code]
Total number of records in each table.
[code]SELECT COUNT(*) FROM ORL.FAX_SOURCE FSRC-- 2368
SELECT COUNT(*) FROM ORL.FAX_STAGE--2324532
SELECT COUNT(*) FROM APPS_GLOBAL.GLOBAL_BU_MAPPING--9
[/code]
To improve the performance of this query I have created the following indexes.
[code]Functional based index on UPPER (FSRC.FAX_NUMBER) ,UPPER (FS.DESTINATION) and UPPER (FS.FAX_STATUS).
Bitmap index on FSRC.IS_DELETED.
Normal Index on GBM.BU_ID and FSRC.BUID.
[/code]
But still the performance is bad for this query.
What can I do apart from this to improve the performance of this query.
Please help me .
Thanks in advance.

I have created the following indexes:
CREATE INDEX ORL.IDX_DESTINATION_RAM ON ORL.FAX_STAGE(UPPER("DESTINATION"))
CREATE INDEX ORL.IDX_FAX_STATUS_RAM ON ORL.FAX_STAGE(LOWER("FAX_STATUS"))
CREATE INDEX ORL.IDX_UPPER_FAX_STATUS_RAM ON ORL.FAX_STAGE(UPPER("FAX_STATUS"))
CREATE INDEX ORL.IDX_BUID_RAM ON ORL.FAX_SOURCE(BUID)
CREATE INDEX ORL.IDX_FAX_NUMBER_RAM ON ORL.FAX_SOURCE(UPPER("FAX_NUMBER"))
CREATE BITMAP INDEX ORL.IDX_IS_DELETED_RAM ON ORL.FAX_SOURCE(IS_DELETED)
After creating the following indexes performance got improved.
But our DBA said that new BITMAP index at FAX_SOURCE table (ORL.IDX_IS_DELETED_RAM) can cause locks
on multiple rows if IS_DELETED column is in use. Please proceed with detailed tests.
I am sending the explain plan before creating indexes and after indexes has been created.
SELECT FS.*
FROM ORL.FAX_STAGE FS
INNER JOIN
ORL.FAX_SOURCE FSRC
INNER JOIN
GLOBAL_BU_MAPPING GBM
ON GBM.BU_ID = FSRC.BUID
ON UPPER (FSRC.FAX_NUMBER) = UPPER (FS.DESTINATION)
WHERE FSRC.IS_DELETED = 'N'
AND GBM.BU_ID IS NOT NULL
AND UPPER (FS.FAX_STATUS) =:B1;
--OLD without indexes
PLAN_TABLE_OUTPUT
Plan hash value: 3076973749
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 141K| 85M| 45130 (1)| 00:09:02 |
|* 1 | HASH JOIN | | 141K| 85M| 45130 (1)| 00:09:02 |
| 2 | NESTED LOOPS | | 611 | 18330 | 22 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| FAX_SOURCE | 2290 | 59540 | 22 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID | 1 | 4 | 0 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | FAX_STAGE | 23245 | 13M| 45106 (1)| 00:09:02 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
1 - access(UPPER("FSRC"."FAX_NUMBER")=UPPER("FS"."DESTINATION"))
3 - filter("FSRC"."IS_DELETED"='N')
4 - access("GBM"."BU_ID"="FSRC"."BUID")
filter("GBM"."BU_ID" IS NOT NULL)
5 - filter(UPPER("FS"."FAX_STATUS")=SYS_OP_C2C(:B1))
21 rows selected.
--NEW with indexes.
PLAN_TABLE_OUTPUT
Plan hash value: 665032407
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5995 | 3986K| 3117 (1)| 00:00:38 |
|* 1 | HASH JOIN | | 5995 | 3986K| 3117 (1)| 00:00:38 |
| 2 | NESTED LOOPS | | 611 | 47658 | 20 (5)| 00:00:01 |
|* 3 | VIEW | index$_join$_002 | 2290 | 165K| 20 (5)| 00:00:01 |
|* 4 | HASH JOIN | | | | | |
|* 5 | HASH JOIN | | | | | |
PLAN_TABLE_OUTPUT
| 6 | BITMAP CONVERSION TO ROWIDS| | 2290 | 165K| 1 (0)| 00:00:01 |
|* 7 | BITMAP INDEX SINGLE VALUE | IDX_IS_DELETED_RAM | | | | |
| 8 | INDEX FAST FULL SCAN | IDX_BUID_RAM | 2290 | 165K| 8 (0)| 00:00:01 |
| 9 | INDEX FAST FULL SCAN | IDX_FAX_NUMBER_RAM | 2290 | 165K| 14 (0)| 00:00:01 |
|* 10 | INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID | 1 | 4 | 0 (0)| 00:00:01 |
| 11 | TABLE ACCESS BY INDEX ROWID | FAX_STAGE | 23245 | 13M| 3096 (1)| 00:00:38 |
|* 12 | INDEX RANGE SCAN | IDX_UPPER_FAX_STATUS_RAM | 9298 | | 2434 (1)| 00:00:30 |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
1 - access(UPPER("DESTINATION")="FSRC"."SYS_NC00035$")
3 - filter("FSRC"."IS_DELETED"='N')
4 - access(ROWID=ROWID)
5 - access(ROWID=ROWID)
7 - access("FSRC"."IS_DELETED"='N')
10 - access("GBM"."BU_ID"="FSRC"."BUID")
filter("GBM"."BU_ID" IS NOT NULL)
12 - access(UPPER("FAX_STATUS")=SYS_OP_C2C(:B1))
31 rows selected
Please confirm the DBA's comment: will this bitmap index lock multiple rows in my case?
Thanks. -
Hi,
RSPCM takes a long time to open, while RSPC works fine.
When I try to change the process chain status through RSPC_PROCESS_FINISH, it executes for a long time with no response. I also tried executing it in the background; it has been running for the past 2 days with no progress.
Our Basis team created indexes on all the backend tables of RSPCM, but the issue persists. Please suggest how to get rid of this.
Br,
Harish

Hi,
Please check the below thread
RSPCM T-Code was executing very slow
Please check note 1372931
hope it helps!
Edited by: Lavanya J on Nov 4, 2011 1:32 PM