Collection function taking more time to execute
Hi all,
I am using a collection function in my SQL report and it is taking a long time to return rows. Is there any way to get the result rows (using a collection) without consuming so much time?
SELECT (SELECT tab_to_string(CAST(COLLECT(wot_vw."Name") AS t_varchar2_tab))
          FROM REPORT_VW wot_vw
         WHERE wot_vw."Task ID" = wot."task_id"
         GROUP BY wot_vw."Task ID") AS "WO"
  FROM TASK_TBL wot
 INNER JOIN (SELECT "name", MAX("task_version") AS MaxVersion
               FROM TASK_TBL
              GROUP BY "name") q
    ON (wot."name" = q."name" AND wot."task_version" = q.MaxVersion)
 ORDER BY NLSSORT(wot."name", 'NLS_SORT=generic_m')
This ORDER BY is causing the problem.
Apex version is 4.0
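One simplification worth trying (a sketch only, using the view and column names from the query above; it assumes the database under Apex is 11gR2 or later, since LISTAGG does not exist in 10g): replace the custom COLLECT/tab_to_string aggregation with the built-in LISTAGG, which usually outperforms a user-defined collection-to-string function:

```sql
-- Hedged sketch: LISTAGG instead of COLLECT + tab_to_string
-- (assumes Oracle 11gR2+ and the quoted identifiers from the original query)
SELECT (SELECT LISTAGG(wot_vw."Name", ',') WITHIN GROUP (ORDER BY wot_vw."Name")
          FROM REPORT_VW wot_vw
         WHERE wot_vw."Task ID" = wot."task_id") AS "WO"
  FROM TASK_TBL wot
```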
Thanks.
Edited by: apex on Feb 21, 2012 7:24 PM
'My car doesn't start, please help me to start my car'
Do you think we are clairvoyant?
Or is your salary subtracted for every letter you type here?
Please be aware this is not a chatroom, and we can not see your webcam.
Sybrand Bakker
Senior Oracle DBA
Similar Messages
-
Query is taking more time to execute
Hi,
Query is taking more time to execute.
But when I execute the same query on another server, it gives output immediately.
What is the reason for this?
thanks in advance.
'My car doesn't start, please help me to start my car'
Do you think we are clairvoyant?
Or is your salary subtracted for every letter you type here?
Please be aware this is not a chatroom, and we can not see your webcam.
Sybrand Bakker
Senior Oracle DBA -
Hi,
I am running a query that includes TRUNC in the WHERE condition, but it is taking a long time to execute. The query is:
SELECT POD.REQ_DISTRIBUTION_ID, X.*
FROM (
SELECT MSI.SEGMENT1||'.'||MSI.SEGMENT2||'.'||MSI.SEGMENT3||'.'||MSI.SEGMENT4 ITEM, RT.TRANSACTION_TYPE,
MSI.DESCRIPTION, rt.TRANSACTION_ID,RT.PARENT_TRANSACTION_ID,
RSH.RECEIPT_NUM,
RSH.SHIP_TO_ORG_ID,
TRUNC(RT.TRANSACTION_DATE) RCP_DATE,
RSL.QUANTITY_RECEIVED RCV_QTY,
PLLA.SHIPMENT_NUM,
PLA.LINE_NUM PO_LINE,
PHA.SEGMENT1 PO_NUM,
PHA.CREATION_DATE,
PHA.APPROVED_DATE,
PLA.QUANTITY PO_QTY,
RSH.SHIPMENT_HEADER_ID,
RSL.SHIPMENT_LINE_ID,
PLLA.LINE_LOCATION_ID
FROM PO_HEADERS_ALL PHA,
PO_LINES_ALL PLA,
MTL_SYSTEM_ITEMS MSI,
RCV_SHIPMENT_HEADERS RSH,
RCV_SHIPMENT_LINES RSL,
RCV_TRANSACTIONS RT,
PO_LINE_LOCATIONS_ALL PLLA
WHERE PHA.PO_HEADER_ID = PLA.PO_HEADER_ID
AND PHA.PO_HEADER_ID = RSL.PO_HEADER_ID
AND PHA.PO_HEADER_ID = PLLA.PO_HEADER_ID
AND PHA.ORG_ID = PLLA.ORG_ID
AND PLA.ITEM_ID = MSI.INVENTORY_ITEM_ID
AND PLA.PO_LINE_ID = RSL.PO_LINE_ID
AND MSI.INVENTORY_ITEM_ID = RSL.ITEM_ID
AND MSI.ORGANIZATION_ID = RSH.SHIP_TO_ORG_ID
AND RSH.SHIPMENT_HEADER_ID = RSL.SHIPMENT_HEADER_ID
AND RSH.SHIPMENT_HEADER_ID = RT.SHIPMENT_HEADER_ID
AND RSL.SHIPMENT_LINE_ID = RT.SHIPMENT_LINE_ID
AND RSL.PO_LINE_ID = PLLA.PO_LINE_ID
AND RT.TRANSACTION_TYPE = 'RECEIVE'
AND NVL(MSI.ENABLED_FLAG,'N') = 'Y'
AND NVL(RSL.QUANTITY_RECEIVED,0) > 0
AND PHA.ORG_ID = :P_ORG_ID
AND TRUNC(RT.TRANSACTION_DATE) BETWEEN :P_FROM_DATE AND :P_TO_DATE ) X, PO_DISTRIBUTIONS_ALL POD
WHERE POD.LINE_LOCATION_ID = X.LINE_LOCATION_ID
How can I make it execute faster? Is there any alternative to TRUNC?
PS: You could use a function-based index:
create index idx_trunc_trans_date on RCV_TRANSACTIONS(TRUNC(TRANSACTION_DATE));
However, I guess the TRUNC you are using will not be of any use unless you have a time component in your :P_FROM_DATE and :P_TO_DATE, or you enter the same date for the from and to dates.
TRUNC just truncates the time in the date and resets it to the beginning of the day. I don't think you need TRUNC at all.
Just do:
IF p_from_date AND p_to_date are supplied
THEN
l_to_date := (p_to_date + 1) - (1/(24*3600));
-- use local variables
then use: AND RT.TRANSACTION_DATE BETWEEN :P_FROM_DATE AND L_TO_DATE
I believe you should get the same results unless you have a time component in your from and to date parameters.
If you are getting the same results, just remove the TRUNC.
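To spell out the suggestion above as a predicate (a sketch only, reusing the bind names from the original query, and assuming :P_TO_DATE carries no time part): keep the function off the column and compare against a half-open range, so an ordinary index on TRANSACTION_DATE remains usable:

```sql
-- Hedged sketch: range predicate instead of TRUNC() on the column,
-- so a plain index on RT.TRANSACTION_DATE can be used
AND RT.TRANSACTION_DATE >= :P_FROM_DATE
AND RT.TRANSACTION_DATE <  :P_TO_DATE + 1
```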
G. -
Stopping a Query taking more time to execute in runtime in Oracle Forms.
Hi,
In the present application, one of the Oracle Forms screens takes a long time to execute a query. The user wants an option to stop the query midway and browse the results (whatever has been fetched before stopping the query).
We have tried three approaches:
1. Set max fetch records at form and block level.
2. Set max fetch time at form and block level.
The above two methods did not provide an appropriate solution for us.
3. The third approach we applied was setting the interaction mode to "NON BLOCKING" at the form level.
It seemed to work: while the query took a long time to execute, the Oracle app server prompted a message to press Esc to cancel the query, and it displayed the results fetched up to that point.
But the drawback is that on pressing Esc, it kills the session itself, which causes the entire application to collapse.
Please suggest if there is any alternative approach for this, or how to overcome this particular scenario.
This kind of facility is already present in TOAD and PL/SQL Developer, where we can stop an executing query and browse the results fetched up to that point. Is a similar facility available in Oracle Forms? Please suggest.
Thanks and Regards,
Suraj
Edited by: user10673131 on Jun 25, 2009 4:55 AM
Query is taking more time to execute in PROD
Hi All,
Can anyone tell me why this query is taking so long? When I run it for a single trx_number record it works fine, but when I try to run it for all the records it does not fetch anything and just keeps on running.
SELECT DISTINCT OOH.HEADER_ID
,OOH.ORG_ID
,ct.CUSTOMER_TRX_ID
,ool.ship_from_org_id
,ct.trx_number IDP_SHIPMENT_ID
,ctt.type STATUS_CODE
,SYSDATE STATUS_DATE
,ooh.attribute16 IDP_ORDER_NBR --Change based on testing on 21-JUL-2010 in UAT
,lpad(rac_bill.account_number,6,0) IDP_BILL_TO_CUSTOMER_NBR
,rac_bill.orig_system_reference
,rac_ship_party.party_name SHIP_TO_NAME
,raa_ship_loc.address1 SHIP_TO_ADDR1
,raa_ship_loc.address2 SHIP_TO_ADDR2
,raa_ship_loc.address3 SHIP_TO_ADDR3
,raa_ship_loc.address4 SHIP_TO_ADDR4
,raa_ship_loc.city SHIP_TO_CITY
,NVL(raa_ship_loc.state,raa_ship_loc.province) SHIP_TO_STATE
,raa_ship_loc.country SHIP_TO_COUNTRY_NAME
,raa_ship_loc.postal_code SHIP_TO_ZIP
,ooh.CUST_PO_NUMBER CUSTOMER_ORDER_NBR
,ooh.creation_date CUSTOMER_ORDER_DATE
,ool.actual_shipment_date DATE_SHIPPED
,DECODE(mp.organization_code,'CHP', 'CHESAPEAKE'
,'CSB', 'CHESAPEAKE'
,'DEP', 'CHESAPEAKE'
,'CHESAPEAKE') SHIPPED_FROM_LOCATION --'MEMPHIS' --'HOUSTON'
,ooh.freight_carrier_code FREIGHT_CARRIER
,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('FREIGHT',ct.customer_trx_id,ct.org_id),0)
+ NVL(XX_FSG_NA_FASTRAQ_IFACE.get_line_fr_amt ('FREIGHT',ct.customer_trx_id,ct.org_id),0)FREIGHT_CHARGE
,ooh.freight_terms_code FREIGHT_TERMS
,'' IDP_BILL_OF_LADING
,(SELECT WAYBILL
FROM WSH_DELIVERY_DETAILS_OE_V
WHERE -1=-1
AND SOURCE_HEADER_ID = ooh.header_id
AND SOURCE_LINE_ID = ool.line_id
AND ROWNUM =1) WAYBILL_CARRIER
,'' CONTAINERS
,ct.trx_number INVOICE_NBR
,ct.trx_date INVOICE_DATE
,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('LINE',ct.customer_trx_id,ct.org_id),0) +
NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('TAX',ct.customer_trx_id,ct.org_id),0) +
NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('FREIGHT',ct.customer_trx_id,ct.org_id),0)INVOICE_AMOUNT
,NULL IDP_TAX_IDENTIFICATION_NBR
,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('TAX',ct.customer_trx_id,ct.org_id),0) TAX_AMOUNT_1
,NULL TAX_DESC_1
,NULL TAX_AMOUNT_2
,NULL TAX_DESC_2
,rt.name PAYMENT_TERMS
,NULL RELATED_INVOICE_NBR
,'Y' INVOICE_PRINT_FLAG
FROM ra_customer_trx_all ct
,ra_cust_trx_types_all ctt
,hz_cust_accounts rac_ship
,hz_cust_accounts rac_bill
,hz_parties rac_ship_party
,hz_locations raa_ship_loc
,hz_party_sites raa_ship_ps
,hz_cust_acct_sites_all raa_ship
,hz_cust_site_uses_all su_ship
,ra_customer_trx_lines_all rctl
,oe_order_lines_all ool
,oe_order_headers_all ooh
,mtl_parameters mp
,ra_terms rt
,OE_ORDER_SOURCES oos
,XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V
WHERE ct.cust_trx_type_id = ctt.cust_trx_type_id
AND ctt.TYPE <> 'BR'
AND ct.org_id = ctt.org_id
AND ct.ship_to_customer_id = rac_ship.cust_account_id
AND ct.bill_to_customer_id = rac_bill.cust_account_id
AND rac_ship.party_id = rac_ship_party.party_id
AND su_ship.cust_acct_site_id = raa_ship.cust_acct_site_id
AND raa_ship.party_site_id = raa_ship_ps.party_site_id
AND raa_ship_loc.location_id = raa_ship_ps.location_id
AND ct.ship_to_site_use_id = su_ship.site_use_id
AND su_ship.org_id = ct.org_id
AND raa_ship.org_id = ct.org_id
AND ct.customer_trx_id = rctl.customer_trx_id
AND ct.org_id = rctl.org_id
AND rctl.interface_line_attribute6 = to_char(ool.line_id)
AND rctl.org_id = ool.org_id
AND ool.header_id = ooh.header_id
AND ool.org_id = ooh.org_id
AND mp.organization_id = ool.ship_from_org_id
AND ooh.payment_term_id = rt.term_id
AND xla_ael_sl_v.last_update_date >= NVL(p_last_update_date,xla_ael_sl_v.last_update_date)
AND ooh.order_source_id = oos.order_source_id --Change based on testing on 19-May-2010
AND oos.name = 'FASTRAQ' --Change based on testing on 19-May-2010
AND ooh.org_id = g_org_id --Change based on testing on 19-May-2010
AND ool.flow_status_code = 'CLOSED'
AND xla_ael_sl_v.trx_hdr_id = ct.customer_trx_id
AND trx_hdr_table = 'CT'
AND xla_ael_sl_v.gl_transfer_status = 'Y'
AND xla_ael_sl_v.accounted_dr IS NOT NULL
AND xla_ael_sl_v.org_id = ct.org_id;
-- AND ct.trx_number = '2000080';
Hello Friend,
Your query will definitely take more time, or even fail, in PROD because of the way it is written. Here are a few observations that may help:
1. XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V: Never use a view inside such a long query, because a view is just a window onto the records,
and when it is joined to other tables, all the tables used to create the view also become part of the join condition.
First of all, please check whether you really need this view. I guess you are using it to check whether the records have been created as journal entries or not?
Please check the possibility of finding that through other AR tables.
2. Remove the _ALL tables; instead use the corresponding org-specific views (if you are on 11i) or the synonyms (in R12).
For example: for ra_cust_trx_types_all use ra_cust_trx_types.
This ensures that the query executes only for those ORG_IDs which are assigned to that responsibility.
3. Check with the DBA whether GATHER SCHEMA STATS has been run at least for the ONT and RA tables.
You can also check this yourself using:
SELECT LAST_ANALYZED FROM ALL_TABLES WHERE TABLE_NAME = 'RA_CUSTOMER_TRX_ALL';
(note that table names are stored in upper case in the data dictionary).
If the tables are not analyzed, the CBO will not be able to tune your query.
4. Try to remove the DISTINCT keyword. This is the MAJOR reason for this problem.
5. If it is a report, try to separate the logic into several queries (using a procedure), populate the data into a custom table, and use that custom table for generating the report.
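As a concrete illustration of point 3 (a sketch only; the owner name 'AR' is an assumption based on the table names above and should be adjusted to your installation), statistics for a single table can be gathered with DBMS_STATS:

```sql
-- Hedged sketch: gather optimizer statistics for one of the AR tables
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'AR',                  -- assumption: adjust to the actual schema
    tabname => 'RA_CUSTOMER_TRX_ALL',
    cascade => TRUE);                 -- also gather statistics on its indexes
END;
/
```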
Thanks,
Neeraj Shrivastava
[email protected]
Edited by: user9352949 on Oct 1, 2010 8:02 PM
Edited by: user9352949 on Oct 1, 2010 8:03 PM -
Performance tuned report taking more time to execute - URGENT
Dear Experts,
One report program is taking a long time to execute in a background session. I have been trying to tune its performance, but it is still taking more than 12 hours to execute.
The Code is given below.
Before Tune.
DATA : BEGIN OF ITAB OCCURS 0,
LGOBE LIKE T001L-LGOBE,
105DT LIKE MKPF-BUDAT,
XBLNR LIKE MKPF-XBLNR,
BEDAT LIKE EKKO-BEDAT,
LIFNR LIKE EKKO-LIFNR,
NAME1 LIKE LFA1-NAME1,
EKKO LIKE EKKO-BEDAT,
BISMT LIKE MARA-BISMT,
MAKTX LIKE MAKT-MAKTX,
NETPR LIKE EKPO-NETPR,
PEINH LIKE EKPO-PEINH,
VALUE TYPE P DECIMALS 2,
DISPO LIKE MARC-DISPO,
DSNAM LIKE T024D-DSNAM,
AGE TYPE P DECIMALS 0,
PARLIFNR LIKE EKKO-LIFNR,
PARNAME1 LIKE LFA1-NAME1,
MBLNR LIKE MSEG-MBLNR,
MJAHR LIKE MSEG-MJAHR,
ZEILE LIKE MSEG-ZEILE,
BWART LIKE MSEG-BWART,
MATNR LIKE MSEG-MATNR,
WERKS LIKE MSEG-WERKS,
LIFNR LIKE MSEG-LIFNR,
MENGE LIKE MSEG-MENGE,
MEINS LIKE MSEG-MEINS,
EBELN LIKE MSEG-EBELN,
EBELP LIKE MSEG-EBELP,
LGORT LIKE MSEG-LGORT,
SMBLN LIKE MSEG-SMBLN,
BUKRS LIKE MSEG-BUKRS,
GSBER LIKE MSEG-GSBER,
INSMK LIKE MSEG-INSMK,
XAUTO LIKE MSEG-XAUTO,
END OF ITAB.
SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
FROM MSEG
INTO CORRESPONDING FIELDS OF TABLE ITAB
WHERE WERKS EQ P_WERKS AND
MBLNR IN S_MBLNR AND
BWART EQ '105' and
mblnr ne '5002361303' and
mblnr ne '5003501080' and
mblnr ne '5002996300' and
mblnr ne '5002996407' AND
mblnr ne '5003587026' AND
mblnr ne '5003587026' AND
mblnr ne '5003493186' AND
mblnr ne '5002720583' AND
mblnr ne '5002928122' AND
mblnr ne '5002628263'.
After tune.
TYPES : BEGIN OF ST_ITAB ,
MBLNR LIKE MSEG-MBLNR,
MJAHR LIKE MSEG-MJAHR,
ZEILE LIKE MSEG-ZEILE,
BWART LIKE MSEG-BWART,
MATNR LIKE MSEG-MATNR,
WERKS LIKE MSEG-WERKS,
LIFNR LIKE MSEG-LIFNR,
MENGE LIKE MSEG-MENGE,
MEINS LIKE MSEG-MEINS,
EBELN LIKE MSEG-EBELN,
EBELP LIKE MSEG-EBELP,
LGORT LIKE MSEG-LGORT,
SMBLN LIKE MSEG-SMBLN,
BUKRS LIKE MSEG-BUKRS,
GSBER LIKE MSEG-GSBER,
INSMK LIKE MSEG-INSMK,
XAUTO LIKE MSEG-XAUTO,
END OF ST_ITAB.
DATA : ITAB TYPE STANDARD TABLE OF ST_ITAB WITH HEADER LINE.
SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
FROM MSEG
INTO TABLE ITAB
WHERE WERKS EQ P_WERKS AND
MBLNR IN S_MBLNR AND
BWART EQ '105' AND
MBLNR NE '5002361303' AND
MBLNR NE '5003501080' AND
MBLNR NE '5002996300' AND
MBLNR NE '5002996407' AND
MBLNR NE '5003587026' AND
MBLNR NE '5003587026' AND
MBLNR NE '5003493186' AND
MBLNR NE '5002720583' AND
MBLNR NE '5002928122' AND
MBLNR NE '5002628263'.
PLEASE GIVE ME THE SOLUTION......
Reward available for useful answer.
thanks in adv,
jai.m
Hi.
The SELECT statement accessing the MSEG table is often slow.
To improve the performance on MSEG:
1. Check for the proper notes in the Service Marketplace if you are working on a CIN version.
2. Index the MSEG table.
3. Check and limit the columns in the SELECT statement.
Possible way:
SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
FROM MSEG
INTO CORRESPONDING FIELDS OF TABLE ITAB
WHERE WERKS EQ P_WERKS AND
MBLNR IN S_MBLNR AND
BWART EQ '105' .
DELETE itab WHERE mblnr EQ '5002361303'.
DELETE itab WHERE mblnr EQ '5003501080'.
DELETE itab WHERE mblnr EQ '5002996300'.
DELETE itab WHERE mblnr EQ '5002996407'.
DELETE itab WHERE mblnr EQ '5003587026'.
DELETE itab WHERE mblnr EQ '5003493186'.
DELETE itab WHERE mblnr EQ '5002720583'.
DELETE itab WHERE mblnr EQ '5002928122'.
DELETE itab WHERE mblnr EQ '5002628263'.
Regards
Bala.M
Edited by: Bala Malvatu on Feb 7, 2008 9:18 PM -
"3.x Analyzer Server" EVENT taking more Time when Executing Query
Hi All,
When I execute the query through RSRT it takes a long time. When I checked the statistics,
I observed that the "3.x Analyzer Server" event is taking the most time.
What do I have to do to reduce this "3.x Analyzer Server" event time?
Please suggest.
Thanks,
Kiran Manyam
Hello,
Chk this on query performance:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
Query Performance
Reg,
Dhanya -
Query taking more time to execute
Dear All,
A user complained that his query is taking more time than yesterday and that DB performance is also very slow. What would be your strategy to check and fix the problem?
What would be the right approach to check and fix this?
Regards,
DevD
Edited by: user12138514 on Nov 8, 2009 9:35 PM
And here is the cause list for slow performance:
Slow Network connection.
Bad Connection management.
Bad Use of Cursors and the Shared Pool
Bad SQL or Query is not perfectly tune or not using bind variables.
Use of nonstandard initialization parameters.
Getting Database I/O Wrong
Redo Log Setup Problems
Serialization of data blocks in the buffer cache due to lack of free lists, free list groups, transaction slots (INITRANS), or shortage of rollback segments.
Long Full Table Scans.
High Amounts of Recursive (SYS) SQL
Deployment and Migration Errors
Table require analyzed.
Table require indexed.
Overparsing.
You have old discs.
Not enough memory.
SGA is too small or too big.
Same for cache buffer.
Your OS need tuning.
Your disc is full.
Running so many instances on a single server.
Your applications are badly written.
Your applications were ported from Sybase or Microsoft SQL Server.
Your applications were ported from DB2.
Your applications dynamically create temp tables.
Someone wrote referential integrity using triggers, java, vb ...
Someone wrote replication using triggers, java, vb ...
Someone wrote a sequence using max + 1.
Table datatypes are not as per DBMS concepts; like Dates and numbers are stored as strings.
The statistics are not up to date.
There are no statistics.
What are statistics?
We are using RBO.
That's a few of the possible causes I could think of, there are lots of others.
-To analyze your application, you can install Statspack. It will give you some indications.
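For the Statspack suggestion (a sketch only; `@?/rdbms/admin` is the standard script location but may vary by version and platform), installation and snapshotting look roughly like this:

```sql
-- Hedged sketch: install Statspack, snapshot around the slow period,
-- then generate a report (run as a suitably privileged user in SQL*Plus)
@?/rdbms/admin/spcreate.sql
EXEC statspack.snap;
-- ... wait while the slow workload runs ...
EXEC statspack.snap;
@?/rdbms/admin/spreport.sql
```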
Hth
Girish Sharma -
Report taking more time to execute
Hi,
I have created a report on 0FIGL_O02, General Ledger: Line Items (DSO).
This report was created for document details, so using this report I want to create a jump report. But when I run the report, it takes a long time.
Variable selections I used for this report:
1. Company code
2. G/L Account
3. Posting date
There are no calculations in this report, but it is still taking a long time and not getting executed.
Please suggest.
Regards,
prasad.
Hi,
Please rebuild the statistics of the ODS.
Also check the various options available below:
ODS Performance
-Vikram -
Function taking longer time to execute
Hi,
I have a scenario where I am using a TABLE function in a join condition with a normal table, but it is taking a long time to execute:
The function is given below:
CREATE OR REPLACE FUNCTION GET_ACCOUNT_TYPE(
SUBNO VARCHAR2 DEFAULT NULL)
RETURN ACCOUNT_TYP_KEY_1 PIPELINED AS
V_SUBNO VARCHAR2(20);
V_SUBS_TYP VARCHAR2(10);
V_ACCOUNT_TYP_KEY VARCHAR2(10);
V_ACCOUNT_TYP_KEY_1 VARCHAR2(10);
V_SUBS_TYP_KEY_1 VARCHAR2(10);
V_VAL1 VARCHAR2(255);
CURSOR C1_REC2 IS SELECT SUBNO,NULL
FROM CTVA_ETL.RA_CRM_USER_INFO
GROUP BY SUBNO,SUBSCR_TYPE;
--CURSOR C1_REC IS SELECT SUBNO,SUBSCR_TYPE,ACCOUNT_TYPE_KEY
--FROM CTVA_ETL.RA_CRM_USER_INFO,DIM_RA_MAST_ACCOUNT_TYPE
--WHERE ACCOUNT_TYPE_KEY=RA_CRM_USER_INFO.SUBSCR_TYPE
--WHERE MSISDN='8615025400109'
--WHERE MSISDN IN ('8615025400068','8615025400083','8615025400101','8615025400132','8615025400109')
CURSOR C1_REC IS SELECT SUBNO,SUBSCR_TYPE--,ACCOUNT_TYPE_KEY
FROM CTVA_ETL.RA_CRM_USER_INFO
GROUP BY SUBNO,SUBSCR_TYPE;
BEGIN
OPEN C1_REC;
LOOP
FETCH C1_REC INTO V_SUBNO, V_SUBS_TYP;
EXIT WHEN C1_REC%NOTFOUND; -- without this the loop never terminates
IF V_SUBS_TYP IS NOT NULL THEN
BEGIN
SELECT
ACCOUNT_TYPE_KEY
INTO
V_ACCOUNT_TYP_KEY
FROM
DIM_RA_MAST_ACCOUNT_TYPE,
RA_CRM_USER_INFO
WHERE
ACCOUNT_TYPE_KEY=V_SUBS_TYP
AND ACCOUNT_TYPE_KEY=RA_CRM_USER_INFO.SUBSCR_TYPE
AND SUBNO=V_SUBNO;
EXCEPTION
WHEN NO_DATA_FOUND THEN
V_ACCOUNT_TYP_KEY := '-99';
V_ACCOUNT_TYP_KEY_1 := V_ACCOUNT_TYP_KEY;
END;
ELSE
V_ACCOUNT_TYP_KEY_1:='-99';
END IF;
FOR CUR IN (select
DISTINCT V_SUBNO SUBNO_TYP_2 ,V_ACCOUNT_TYP_KEY_1 ACCOUNT_TYP
from dual)
LOOP
PIPE ROW (ACCOUNT_TYP_KEY(CUR.SUBNO_TYP_2,CUR.ACCOUNT_TYP));
END LOOP;
END LOOP;
CLOSE C1_REC; -- must close before RETURN, or it is never reached
RETURN;
END;
The above function will return rows with respect to SUBSCRIBER TYPE (if not null, it returns the ACCOUNT KEY and SUBNO; else '-99').
But it is not returning the expected values, so all the rows come out as:
SUBNO ACCOUNT_TYP
21 -99
22 -99
23 -99
24 -99
25 -99
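As a side note (a sketch only, built from the table and column names quoted in the function above): the row-by-row cursor loop with a per-row SELECT could probably be collapsed into one set-based query with an outer join, avoiding the pipelined function entirely:

```sql
-- Hedged sketch: set-based equivalent of the pipelined function's logic
-- (unmatched SUBSCR_TYPE values fall back to '-99', as in the function)
SELECT u.SUBNO,
       NVL(t.ACCOUNT_TYPE_KEY, '-99') AS ACCOUNT_TYP
  FROM CTVA_ETL.RA_CRM_USER_INFO u
  LEFT JOIN DIM_RA_MAST_ACCOUNT_TYPE t
    ON t.ACCOUNT_TYPE_KEY = u.SUBSCR_TYPE
 GROUP BY u.SUBNO, NVL(t.ACCOUNT_TYPE_KEY, '-99');
```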
Thanks and Regards
Hi LMLobo,
In addition to Sebastian’s answer, you can refer to the document
Server Memory Server Configuration Options to check whether the maximum server memory setting of the SQL Server is changed on the new server. Besides, you can also compare the
network packet size setting of the SQL Server as well as the network connectivity on both servers. Besides, you can refer to the following link to troubleshooting SSIS package performance
issue:
http://technet.microsoft.com/en-us/library/dd795223(v=sql.100).aspx.
Regards,
Mike Yin
TechNet Community Support -
CO41 taking more time to execute
Hi,
I am running CO41 (PRD System), but the time it is taking to execute is extensively high. Ultimately i have to kill the sesison to stop the process.
What can be the probable reason for the same.
My Inputs to CO41 are,
Planning Plant
MRP Controller
Production Plant
Kindly guide.
Thanks in advance,
Harris
Dear,
This is a runtime problem that occurs during the conversion of planned orders; it may be due to the ATP check, or to buffering of certain tables.
In the check control (Transaction OPPJ), you can define the stocks, inward movements and outward movements that are considered.
Below, you will find a list of database tables that are accessed, depending on the setting.
Stocks:
Storage locations: MARD
Sales order stocks: MSSA
Project stocks: MSSQ
Subcontracting: MSSL
Batches: MCHB
Customer consignment: MSSK
Inward/outward movements:
Purchase orders: MDBS (view for EKPO and EKET)
Production orders: MDFA (view for AFPO)
Purchase requisitions: EBAN
Dependent req., order res.: MDRS (view for RESB)
Manual reservations: MDRS (view for RESB)
Stock transfer reservations: MDUR (view for REUL and RESB)
Sales requirements: VBBE (individual records)
VBBS (totals records)
Please use releavant setting for the ATP check.
Also raise to OSS for same.
Hope it will help you.
Regards,
R.Brahmankar -
Bulk Collect taking more time. Please suggest .
I am working on oracle 11g
I have one normal insert proc
CREATE OR REPLACE PROCEDURE test2
AS
BEGIN
INSERT INTO first_table
(citiversion, financialcollectionid,
dataitemid, dataitemvalue,
unittypeid, financialinstanceid,
VERSION, providerid, user_comment,
userid, insert_timestamp,
latestflag, finalflag, filename,
final_ytdltm_flag, nmflag, partition_key)
SELECT citiversion, financialcollectionid,
dataitemid, dataitemvalue, unittypeid,
new_fi, VERSION, providerid,
user_comment, userid,
insert_timestamp, latestflag,
finalflag, filename, '', nmflag,1
FROM secon_table
WHERE financialinstanceid = 1
AND changeflag = 'A'
AND processed_flg = 'N';
END test2;
To improve performance, I converted the normal INSERT into a BULK COLLECT version:
CREATE OR REPLACE PROCEDURE test
AS
BEGIN
DECLARE
CURSOR get_cat_fin_collection_data(n_instanceid NUMBER) IS
SELECT citiversion,
financialcollectionid,
dataitemid,
dataitemvalue,
unittypeid,
new_fi,
VERSION,
providerid,
user_comment,
userid,
insert_timestamp,
latestflag,
finalflag,
filename,
nmflag,
1
FROM secon_table
WHERE financialinstanceid = n_instanceid
AND changeflag = 'A'
AND processed_flg = 'N';
TYPE data_array IS TABLE OF get_cat_fin_collection_data%ROWTYPE;
l_data data_array;
BEGIN
OPEN get_cat_fin_collection_data(1);
LOOP
FETCH get_cat_fin_collection_data BULK COLLECT
INTO l_data limit 100;
FORALL i IN 1 .. l_data.COUNT
INSERT INTO first_table VALUES l_data (i);
EXIT WHEN l_data.count =0;
END LOOP;
CLOSE get_cat_fin_collection_data;
END;
END test;
But bulk collect is taking more time.
below is the timings
SQL> set timing on
SQL> exec test
PL/SQL procedure successfully completed
Executed in 16.703 seconds
SQL> exec test2
PL/SQL procedure successfully completed
Executed in 9.406 seconds
SQL> rollback;
Rollback complete
Executed in 2.75 seconds
SQL> exec test
PL/SQL procedure successfully completed
Executed in 16.266 seconds
SQL> rollback;
Rollback complete
Executed in 2.812 seconds
Normal insert: 9.4 seconds
Bulk insert: 16.266 seconds
I am processing 1 lakh (100,000) rows.
Can you please tell me why BULK COLLECT is taking more time? According to my knowledge it should take less time.
Please suggest: do I need to check any parameter?
Please help.
Edited by: 976747 on Feb 4, 2013 1:12 AM
> Can you please tell me why bulk collect is taking more time? According to my knowledge it should take less time.
> Please suggest: do I need to check any parameter?
In that case, your knowledge is flawed.
Pure SQL is almost always faster than PL/SQL.
If your Insert into Select is executing slow, then it is probably because the Select statement is taking long to execute. How many rows are being Selected and Inserted from your query?
You might also consider tuning the Select statement. For more information on Posting a Tuning request, read {message:id=3292438} and post the relevant information. -
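If the single INSERT ... SELECT itself then needs to go faster, a direct-path insert can be tried (a sketch only, reusing the statement from test2 above; whether APPEND is appropriate depends on your logging, concurrency, and recovery requirements):

```sql
-- Hedged sketch: direct-path variant of the original insert.
-- APPEND writes above the high-water mark and locks the table
-- for the duration of the statement.
INSERT /*+ APPEND */ INTO first_table
SELECT citiversion, financialcollectionid, dataitemid, dataitemvalue,
       unittypeid, new_fi, VERSION, providerid, user_comment, userid,
       insert_timestamp, latestflag, finalflag, filename, '', nmflag, 1
  FROM secon_table
 WHERE financialinstanceid = 1
   AND changeflag = 'A'
   AND processed_flg = 'N';
COMMIT; -- direct-path changes must be committed before the table is re-read
```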
Issue with background job--taking more time
Hi,
We have a custom program which runs as the background job. It runs every 2 hours.
It's taking more time than expected on ECC6 SR2 & SR3 on Oracle 10.2.0.4. We found that it takes more time while executing native SQL against DBA_EXTENTS. When we tried to fetch a smaller number of records from DBA_EXTENTS, it worked fine.
But we need the program to fetch all the records.
But it works fine on ECC5 on 10.2.0.2 & 10.2.0.4.
Here is the SQL statement:
EXEC SQL PERFORMING SAP_GET_EXT_PERF.
SELECT OWNER, SEGMENT_NAME, PARTITION_NAME,
SEGMENT_TYPE, TABLESPACE_NAME,
EXTENT_ID, FILE_ID, BLOCK_ID, BYTES
FROM SYS.DBA_EXTENTS
WHERE OWNER LIKE 'SAP%'
INTO
: EXTENTS_TBL-OWNER, :EXTENTS_TBL-SEGMENT_NAME,
:EXTENTS_TBL-PARTITION_NAME,
:EXTENTS_TBL-SEGMENT_TYPE , :EXTENTS_TBL-TABLESPACE_NAME,
:EXTENTS_TBL-EXTENT_ID, :EXTENTS_TBL-FILE_ID,
:EXTENTS_TBL-BLOCK_ID, :EXTENTS_TBL-BYTES
ENDEXEC.
Can somebody suggest what has to be done?
Has something changed in SAP7 (wrt background job etc) or do we need to fine tune the SQL statement?
Regards,
Vivdha
Hi,
there was an issue with LMTs, but that was fixed in 10.2.0.4,
besides missing system statistics.
But WHY do you collect this information every 2 hours? The DBA_EXTENTS view is based on really heavily used system tables.
Normally, you would only run queries of this type against DBA_EXTENTS for tasks such as identifying corrupt blocks:
SELECT owner , segment_name , segment_type
FROM dba_extents
WHERE file_id = &AFN
AND &BLOCKNO BETWEEN block_id AND block_id + blocks -1
Not sure what you want to achieve with it.
There are monitoring tools (OEM ?) around that may cover your needs.
Bye
yk -
Suddenly ODI scheduled executions taking more time than usual.
Hi,
I have set ODI packages scheduled for execution.
From some days those are taking more time to execute themselves.
Before they used to take 1 hr 30 mins approx.
Now they are taking 3 - 3 hr 15 mins approx.
And there no any major change in data in terms of Quantity.
My ODI version s
Standalone Edition Version 11.1.1
Build ODI_11.1.1.3.0_GENERIC_100623.1635
ODI packages are mainly using Oracle as SOURCE and TARGET DB.
What things should i check to get to know reasons of sudden increase in time of execution.
Any pointers regarding this would be appreciated.
Thanks,
Mahesh
Mahesh,
Use some repository queries to retrieve the session task timings and compare your slow execution to a previously acceptable one, then look for the biggest changes. This will highlight where you are slowing down; then it's off to tune the item accordingly.
See here for some example reports; you might need to tweak them for your current repository version, but I don't think the table structures have changed that much:
http://rnm1978.wordpress.com/2010/11/03/analysing-odi-batch-performance/ -
*More time to Execute*
Hi,
The following code is taking more time to execute...
Kindly assist..
* check wagetype selection
REFRESH it_p8wage.
CLEAR: it_p8wage, wa_p8wage, p8wage_flag.
LOOP AT p0008.
DO 20 TIMES
VARYING wa_p8wage-lgart FROM p0008-lga01 NEXT p0008-lga02
VARYING wa_p8wage-betrg FROM p0008-bet01 NEXT p0008-bet02.
IF wa_p8wage-lgart IN lgartahm
AND NOT wa_p8wage IS INITIAL.
MOVE: 'X' TO p8wage_flag,
p0008-waers TO wa_p8wage-waers.
INSERT wa_p8wage INTO TABLE it_p8wage.
CLEAR wa_p8wage.
ENDIF.
ENDDO.
ENDLOOP.
* no wagetype matching found
IF p8wage_flag IS INITIAL.
READ TABLE lgartahm.
IF sy-subrc EQ 0.
REJECT.
ENDIF.
ENDIF.
Hi,
For each and every employee there may not be data for all 20 wage types, from lga01/bet01 through lga20/bet20.
In your code it is better to exit the DO loop when wa_p8wage-lgart IS INITIAL, rather than looping 20 times even when there is no data.
Wage types are assigned to an employee in sequence from 1, 2, ... 20,
so if lga03 is initial, there will be no data for lga04 through lga20.
This may not be the exact reason for the slow response time of your report, but it may be useful.
Look at the code below for the change I suggested:
LOOP AT p0008.
DO 20 TIMES
VARYING wa_p8wage-lgart FROM p0008-lga01 NEXT p0008-lga02
VARYING wa_p8wage-betrg FROM p0008-bet01 NEXT p0008-bet02.
IF wa_p8wage-lgart IS INITIAL.
exit. " this will exit the do loop
ENDIF.
IF wa_p8wage-lgart IN lgartahm
AND NOT wa_p8wage IS INITIAL.
MOVE: 'X' TO p8wage_flag,
p0008-waers TO wa_p8wage-waers.
INSERT wa_p8wage INTO TABLE it_p8wage.
CLEAR wa_p8wage.
ENDIF.
ENDDO.
ENDLOOP.