Improve performance of the below query
Hi Gurus,
I have the following query, with its plan described below. I need to improve the performance of this query, as it is using full table scans.
How could I achieve this?
SELECT csf.*,
NULL subject_flag,
NULL next_exp_rewards_date,
0 next_exp_rewards,
(csf.rewards_cf - csf.rewards_bf - csf.total_rewards) * -1 reward_spent
FROM crm_statement_fulfilled csf, crm_accounts ca
WHERE csf.account_id = ca.account_id
AND csf.procg_date = '01-jul-2012'
AND ca.cycle_id = '01'
AND ca.date_closed IS NULL
AND ca.account_type IN ('LTT', 'LTY', 'LYI')
ORDER BY csf.account_id
Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.2.0
Connected as crm
SQL> desc crm_statement_fulfilled
Name Type Nullable Default Comments
ACCOUNT_ID NUMBER(19)
CAMPAIGN_ID NUMBER(10)
CYCLE_ID VARCHAR2(2)
CUSTOMER_ID NUMBER(14) Y
PROCG_DATE DATE
GE_FULFILLED VARCHAR2(1) Y
POINTS_DISPLAYED VARCHAR2(1) Y
REASON_ID VARCHAR2(10) Y
POINTS_BF NUMBER(10,2) Y
PROMO_POINTS NUMBER(10,2) Y
GW_ADJ_POINTS NUMBER(10,2) Y
TOTAL_REWARDS NUMBER(10,2) Y
GW_FUL_POINTS NUMBER(10,2) Y
REC_POINTS NUMBER(10,2) Y
REC_FC_POINTS NUMBER(10,2) Y
POINTS_CONVERTED NUMBER(10,2) Y
POINTS_CF NUMBER(10,2) Y
TOTAL_POINTS NUMBER(10,2) Y
TOTAL_AIR_MILES_VALUE NUMBER(10) Y
POINTS_EXPIRED NUMBER(10,2) Y
CDT_IN_STR_POINTS NUMBER(10,2) Y
CDT_OUT_STR_POINTS NUMBER(10,2) Y
ANNIVERSARY_POINTS NUMBER(10,2) Y
LOYALTY_POINTS NUMBER(10,2) Y
NEXT_EXP_POINTS NUMBER(10,2) Y
NEXT_EXP_POINTS_DATE DATE Y
REWARDS_BF NUMBER(10,2) Y
REWARDS_CF NUMBER(10,2) Y
SQL> desc crm_accounts
Name Type Nullable Default Comments
ACCOUNT_ID NUMBER(19)
CYCLE_ID VARCHAR2(2)
ACCOUNT_TYPE VARCHAR2(3)
SECURITY_PASSWORD VARCHAR2(80) Y
ACCESS_PASSWORD VARCHAR2(80) Y
AIRMILES_ID VARCHAR2(16) Y
DATE_OPENED DATE
SEED VARCHAR2(1) Y
AUTO_AIRMILES VARCHAR2(1) Y
CURRENCY VARCHAR2(3) Y
DATE_CLOSED DATE Y
STORE_OPENED VARCHAR2(4) Y
DATE_ACTIVATED DATE Y
OLD_ACCOUNT_ID NUMBER(19) Y
CARDPAC_ACCOUNT_ID NUMBER(19) Y
ORG_ID VARCHAR2(10) Y
LOGO_ID VARCHAR2(10) Y
PTS_TO_ACCOUNT_ID NUMBER(19) Y
DMW_ACCOUNT_ID VARCHAR2(19) Y
SQL>
Hi,
Sid_ Z. wrote:
Hi Gurus,
I have following query with the plan described. I need to improve the performance of this query as it is using full table scans...
For all tuning requests, see the forum FAQ {message:id=9360003}
AND csf.procg_date = '01-jul-2012'
Don't try to compare a DATE (such as procg_date) with a VARCHAR2 (such as '01-jul-2012').
Use TO_DATE or a DATE literal instead.
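A sketch of that advice applied to the query as posted; the candidate index names below are illustrative (not from the thread) and should be validated against the actual plan and statistics:

```sql
-- Same query with an unambiguous date predicate; the original string
-- comparison relies on the session's NLS_DATE_FORMAT being a match.
SELECT csf.*,
       NULL subject_flag,
       NULL next_exp_rewards_date,
       0 next_exp_rewards,
       (csf.rewards_cf - csf.rewards_bf - csf.total_rewards) * -1 reward_spent
FROM   crm_statement_fulfilled csf,
       crm_accounts ca
WHERE  csf.account_id = ca.account_id
AND    csf.procg_date = DATE '2012-07-01'  -- or TO_DATE('01-jul-2012', 'DD-mon-YYYY')
AND    ca.cycle_id = '01'
AND    ca.date_closed IS NULL
AND    ca.account_type IN ('LTT', 'LTY', 'LYI')
ORDER  BY csf.account_id;

-- Candidate indexes so the optimizer has an alternative to the full scans
-- (hypothetical names; whether they help depends on selectivity):
CREATE INDEX csf_procg_acct_ix ON crm_statement_fulfilled (procg_date, account_id);
CREATE INDEX ca_acct_cycle_ix  ON crm_accounts (account_id, cycle_id, account_type);
```

Note that with the string comparison, an index on procg_date may be ignored because of the implicit conversion; the explicit DATE predicate removes that obstacle.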
Similar Messages
-
How to improve performance of the attached query
Hi,
How to improve the performance of the below query? Please help. The explain plan is also attached. -
SELECT Camp.Id,
rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount,
(SUM(rCam.Impressions) * 0.001 + SUM(rCam.Clickthrus)) AS GR,
rCam.AccountKey as AccountKey
FROM Campaign Camp, rCamSit rCam, CamBilling, Site xSite
WHERE Camp.AccountKey = rCam.AccountKey
AND Camp.AvCampaignKey = rCam.AvCampaignKey
AND Camp.AccountKey = CamBilling.AccountKey
AND Camp.CampaignKey = CamBilling.CampaignKey
AND rCam.AccountKey = xSite.AccountKey
AND rCam.AvSiteKey = xSite.AvSiteKey
AND rCam.RmWhen BETWEEN to_date('01-01-2009', 'DD-MM-YYYY') and
to_date('01-01-2011', 'DD-MM-YYYY')
GROUP By rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount
Explain Plan :-
Description Object_owner Object_name Cost Cardinality Bytes
SELECT STATEMENT, GOAL = ALL_ROWS 14 1 13
SORT AGGREGATE 1 13
VIEW GEMINI_REPORTING 14 1 13
HASH GROUP BY 14 1 103
NESTED LOOPS 13 1 103
HASH JOIN 12 1 85
TABLE ACCESS BY INDEX ROWID GEMINI_REPORTING RCAMSIT 2 4 100
NESTED LOOPS 9 5 325
HASH JOIN 7 1 40
SORT UNIQUE 2 1 18
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY SITE 2 1 18
INDEX RANGE SCAN GEMINI_PRIMARY SITE_I0 1 1
TABLE ACCESS FULL GEMINI_PRIMARY SITE 3 27 594
INDEX RANGE SCAN GEMINI_REPORTING RCAMSIT_I 1 1 5
TABLE ACCESS FULL GEMINI_PRIMARY CAMPAIGN 3 127 2540
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY CAMBILLING 1 1 18
INDEX UNIQUE SCAN GEMINI_PRIMARY CAMBILLING_U1 0 1
Duplicate thread.
How to improve performance of attached query -
What indexes should be created to improve performance of the SQL query
Hello Admins,
One of my users is facing a slow performance issue while running the below query. Can someone please guide me on this? I want to know what indexes should be created to improve the performance of this query, and also what else can be done to achieve the same.
SQL Query:-
SELECT UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PO_NUMBER))),
CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_NUMBER,
CGSBI_SHIP_DIST_S_EXTRACT.PO_SHIPMENT_NUMBER,
CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_SHIP_DIST_NUMBER,
CGSBI_SHIP_DIST_S_EXTRACT.DISTRIBUTION_DATE,
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_SHIP_DIST_LINE_ID))),
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PROJECT_ID))),
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.ACCOUNT_DISTRIBUTION_CODE))),
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.ORACLE_ACCOUNT_NUMBER))),
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.COMPONENT_CODE))),
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.TRANSACTION_CURRENCY_CODE))),
CGSBI_SHIP_DIST_S_EXTRACT.ORDER_QUANTITY, UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.ORDER_UOM))),
CGSBI_SHIP_DIST_S_EXTRACT.UNIT_PRICE_TRX_CURRENCY,
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.EXPENSE_TYPE_INDICATOR))),
CGSBI_SHIP_DIST_S_EXTRACT.SOR_ID,
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_ITEM_CODE))),
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_ITEM_DESC))),
CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_ITEM_LEAD_TIME,
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.UNSPSC_CODE))),
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.BUYER_ID))),
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.REQUESTOR_ID))),
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.APPROVER_ID))),
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.SUPPLIER_SITE_ID))),
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.SUPPLIER_GSL_NUMBER))),
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.SHIP_TO_LOCATION_CODE))),
UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.TASK_ID))),
(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PO_RELEASE_ID)))
FROM
CGSBI_SHIP_DIST_S_EXTRACT
WHERE PO_NUMBER IS NOT NULL;
I generated the explain plan for this query and found the following:-
Explain Plan:-
SQL> explain plan for SELECT UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PO_NUMBER))),
2 CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_NUMBER,
3 CGSBI_SHIP_DIST_S_EXTRACT.PO_SHIPMENT_NUMBER,
4 CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_SHIP_DIST_NUMBER,
5 CGSBI_SHIP_DIST_S_EXTRACT.DISTRIBUTION_DATE,
6 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_SHIP_DIST_LINE_ID))),
7 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PROJECT_ID))),
8 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.ACCOUNT_DISTRIBUTION_CODE))),
9 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.ORACLE_ACCOUNT_NUMBER))),
10 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.COMPONENT_CODE))),
11 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.TRANSACTION_CURRENCY_CODE))),
12 CGSBI_SHIP_DIST_S_EXTRACT.ORDER_QUANTITY, UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.ORDER_UOM))),
13 CGSBI_SHIP_DIST_S_EXTRACT.UNIT_PRICE_TRX_CURRENCY,
14 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.EXPENSE_TYPE_INDICATOR))),
15 CGSBI_SHIP_DIST_S_EXTRACT.SOR_ID,
16 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_ITEM_CODE))),
17 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_ITEM_DESC))),
18 CGSBI_SHIP_DIST_S_EXTRACT.PO_LINE_ITEM_LEAD_TIME,
19 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.UNSPSC_CODE))),
20 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.BUYER_ID))),
21 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.REQUESTOR_ID))),
22 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.APPROVER_ID))),
23 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.SUPPLIER_SITE_ID))),
24 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.SUPPLIER_GSL_NUMBER))),
25 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.SHIP_TO_LOCATION_CODE))),
26 UPPER(LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.TASK_ID))),
27 (LTRIM(RTRIM(CGSBI_SHIP_DIST_S_EXTRACT.PO_RELEASE_ID)))
28 FROM
29 CGSBI_SHIP_DIST_S_EXTRACT
30 WHERE PO_NUMBER IS NOT NULL;
Explained.
SQL>
SQL>
SQL> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3891180274
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 77647 | 39M| 2006 (1)| 00:00:25 |
|* 1 | TABLE ACCESS FULL| CGSBI_SHIP_DIST_S_EXTRACT | 77647 | 39M| 2006 (1)| 00:00:25 |
Predicate Information (identified by operation id):
1 - filter("PO_NUMBER" IS NOT NULL)
13 rows selected.
SQL>
SQL>
Kindly suggest on this...
Thanks & Regards
-Naveen Gangil
Oracle DBA
Rafi is correct. Since po_number is the filter column, the only chance you have of using an index to access the table is on that column. However, if there are few (or no) rows with a null po_number, you will always have a full table scan. Does the table have a PK (which probably consists of at least po_number and line_number)? If that is the case, po_number could never be null, in which case you are dumping the whole table and no indexing scheme is going to improve the query's performance. You might, repeat might, see a performance improvement if you cleanse the data in the table (to eliminate the need for UPPER(LTRIM(RTRIM()))) before querying it, so that the data does not have to be massaged before being returned.
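A sketch of that cleansing idea for one column (assuming the application can tolerate storing the normalized values; repeat for each column the extract applies UPPER/LTRIM/RTRIM to):

```sql
-- One-off normalization so the SELECT no longer has to massage every row.
-- Only rows whose stored value differs from the normalized value are touched.
UPDATE cgsbi_ship_dist_s_extract
SET    po_number = UPPER(TRIM(po_number))
WHERE  po_number IS NOT NULL
AND    po_number <> UPPER(TRIM(po_number));
COMMIT;
```

After this, the query can select po_number directly, and the per-row function cost disappears from the full scan.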
-
How to improve performance of the below code.
Hello.
The below code accounts for 80% of the database time in runtime analysis (transaction SE30). I am using view KNA1VV to retrieve data using the customer and sales area from the selection screen.
Please advise how I can improve the performance of the below code.
* Fetch the customer details from view KNA1VV
SELECT kunnr
vkorg
vtweg
spart
land1
name1
ort01
pstlz
regio
stras
INTO TABLE t_cust
FROM kna1vv
WHERE kunnr IN s_kunnr
AND vkorg IN s_vkorg
AND vtweg IN s_vtweg
AND spart IN s_spart
AND loevm = space
AND loevm_knvv = space.
IF sy-subrc EQ 0.
SORT t_cust BY kunnr.
ELSE.
w_flag = c_true_x.
ENDIF.
* Fetch customers for the entered company code
IF NOT t_cust[] IS INITIAL AND NOT s_bukrs IS INITIAL.
SELECT kunnr
FROM knb1
INTO TABLE lt_knb1
FOR ALL ENTRIES IN t_cust
WHERE kunnr = t_cust-kunnr
AND bukrs IN s_bukrs
AND loevm = space.
Thanks,
80% is just a ratio and is not necessarily a problem. What about the absolute runtime? Is that acceptable?
Also, your range S_KUNNR could contain anything from a single value (super fast) to nothing (probably slow, depending on the number of entries in KNA1VV), so what do you expect here?
Thomas -
How to improve the performance of the attached query, Please help
Hi,
How to improve the performance of the below query? Please help. The explain plan is also attached. -
SELECT Camp.Id,
rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount,
(SUM(rCam.Impressions) * 0.001 + SUM(rCam.Clickthrus)) AS GR,
rCam.AccountKey as AccountKey
FROM Campaign Camp, rCamSit rCam, CamBilling, Site xSite
WHERE Camp.AccountKey = rCam.AccountKey
AND Camp.AvCampaignKey = rCam.AvCampaignKey
AND Camp.AccountKey = CamBilling.AccountKey
AND Camp.CampaignKey = CamBilling.CampaignKey
AND rCam.AccountKey = xSite.AccountKey
AND rCam.AvSiteKey = xSite.AvSiteKey
AND rCam.RmWhen BETWEEN to_date('01-01-2009', 'DD-MM-YYYY') and
to_date('01-01-2011', 'DD-MM-YYYY')
GROUP By rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount
Explain Plan :-
Description Object_owner Object_name Cost Cardinality Bytes
SELECT STATEMENT, GOAL = ALL_ROWS 14 1 13
SORT AGGREGATE 1 13
VIEW GEMINI_REPORTING 14 1 13
HASH GROUP BY 14 1 103
NESTED LOOPS 13 1 103
HASH JOIN 12 1 85
TABLE ACCESS BY INDEX ROWID GEMINI_REPORTING RCAMSIT 2 4 100
NESTED LOOPS 9 5 325
HASH JOIN 7 1 40
SORT UNIQUE 2 1 18
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY SITE 2 1 18
INDEX RANGE SCAN GEMINI_PRIMARY SITE_I0 1 1
TABLE ACCESS FULL GEMINI_PRIMARY SITE 3 27 594
INDEX RANGE SCAN GEMINI_REPORTING RCAMSIT_I 1 1 5
TABLE ACCESS FULL GEMINI_PRIMARY CAMPAIGN 3 127 2540
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY CAMBILLING 1 1 18
INDEX UNIQUE SCAN GEMINI_PRIMARY CAMBILLING_U1 0 1
Hello,
This has really nothing to do with the Oracle Forms product.
Please, send the SQL or/and PL/SQL questions in the corresponding forums.
Francois -
Need help in improving the performance for the sql query
Thanks in advance for helping me.
I was trying to improve the performance of the below query. I tried the following methods: using MERGE instead of UPDATE, BULK COLLECT / FORALL updates, an ORDERED hint, and creating a temp table and updating the target table from it. None of these methods improved performance. The update touches 2 million records and the target table has 15 million records.
Any suggestions or solutions for improving performance are appreciated
SQL query:
update targettable tt
set mnop = 'G'
where (x, y, z) in
  (select a.x, a.y, a.z
   from table1 a
   where (a.x, a.y, a.z) not in
     (select b.x, b.y, b.z
      from table2 b
      where 'O' = b.defg
      and mnop = 'P'
      and hijkl = 'UVW'));
987981 wrote:
I was trying to improve the performance of the below query. I tried the following methods: merge instead of update, bulk collect / FORALL update, an ordered hint, and a temp table used to update the target table. The methods I used did not improve performance.
And that meant what? Surely if you spend all that time and effort trying various approaches, it should mean something? Failures are as important teachers as successes. You need to learn from failures too. :-)
The data count which is updated in the target table is 2 million records and the target table has 15 million records.
Tables have rows, btw, not records. Database people tend to get upset when rows are called records, as records exist in files, and a database is not a mere collection of records and files.
The failure to find a single faster method with the approaches you tried points to the fact that you do not know what the actual performance problem is. And without knowing the problem, you still went ahead, guns blazing.
The very first step in dealing with any software engineering problem, is to identify the problem. Seeing the symptoms (slow performance) is still a long way from problem identification.
Part of identifying the performance problem, is understanding the workload. Just what does the code task the database to do?
From your comments, it needs to find 2 million rows from 15 million rows. Change these rows. And then write 2 million rows back to disk.
That is not a small workload. Simple example: let's say the 2 million row find costs 1 ms/row and the 2 million row write also costs 1 ms/row. That means a 66 minute workload. Due to the number of rows, an increase in time per row either way will have a potential 2-million-fold impact.
So where is the performance problem? Time spent finding the 2 million rows (where other tables need to be read, indexes used, etc.)? Time spent writing the 2 million rows (where triggers need to fire and indexes need to be maintained)? Both? -
Request for tuning the below query
Hi,
Can anyone help me improve the performance of the below query?
SELECT accdet, acceprec, accinvalid, accnetanal, accphy, accvalid,
actlabcost, actlabhrs, actualcontactdate, actualfinish, actualstart,
affecteddate, affectedemail, affectedperson, affectedphone,
alteration, aslaiddwg, assetnum, assetorgid, assetsiteid,
assumptions, basedet, basereq, bicounty, bidplo, bieasting,
bihousename, bihouseno, binorthing, bipobox, bipostcode, biposttown,
bistreet, bisubb as bisupp, boostcomp, boostcompdet, ccemail, cchouseno, ccid,
ccname, cctel1type, cctel2type, cctelephone1, cctelephone2, cdm,
changeby, changedate, CLASS, classstructureid, cocontact, cocounty,
codplo, coeasment, coeasting, cohousename, conorthing, copobox,
commodity, commoditygroup, coneasereq, consent, consents,
copostcode, coposttown, costcon, costreet, cosubb, cpi90,
createworelasset, customerref, custtype, depot, description, durt,
ecvpressuretier, ecvsize, enduserid, engdifficult, exaoq, existin,
existsdq, expid, exshq, externalrecid, extralanddetail, failurecode,
fr1code, fr2code, fuelpovscheme, g17, gbna, glaccount,
globalticketclass, globalticketid, govconf, govener, govenerdet,
govhouse, hasactivity, hasld, historyflag, impact, infill,
infoprovide, inheritstatus, internalpriority, interquote, isglobal,
isknownerror, isknownerrordate, kioskdet, kioskreq, langcode,
latecertdate, leadt, lengthpri, lengthpub, loadtype, LOCATION, m25,
maindesac, mainusage, meterboxty, metercon, meterloc, meterser,
mininforec, mininforeq, mprnno, newaoq, newpid, newsdq, newshq,
np14, nrswa, nsgno, oldquotever, oldticketid, orgid, originsgn,
origrecordclass, origrecordid, origrecorgid, origrecsiteid, owner,
ownergroup, packagesent, paymethod, payterms, permittowork, physub,
pressuretier, privateexc, problemcode, propertiesno, propertytype,
publicexc, purgerel, quotedate, quotetype, quotever, reinforcement,
reinforcementa, reinforcementb, relatedtoglobal, reportdate,
reportedby, reportedemail, reportedphone, reportedpriority,
rowstamp, sc, scj, scoreq, servicerelay, sgnbillcontact,
sgnblkbyfin, sgncusttobill, sgncusttosite, sgndisreasoth,
sgneasment, sgnenhance, sgneow, sgneowreq, sgngqmvalid,
sgninfillcost, sgninfillver, sgninfprojno, sgnisstdchrg,
sgnloadnoenter, sgnmainsreq, sgnmaxaccdate, sgnnoncont, sgnpipesiz,
sgnpurord, sgnqdaysremain, sgnqstd, sgnquotdate, sgnquotval,
sgnreasdis, sgntotalaoq, sgntotalshq, sgnvarreq, sicontact,
sicounty, sidplo, sieasting, sihousename, sihouseno, sinorthing,
sipobox, sipostcode, siposttown, sistreet, sisubb, sitecond, sitegt,
siteid, sitel1, sitel2, siteplpro, sitevisit, solution, sos,
sosrecdate, SOURCE, status, statusdate, subfinal, supervisor,
supplytype, surveycarr, surveydef, surveyreas, surveyreq, surveyret,
surveysent, targetcontactdate, targetfinish, targetstart, TEMPLATE,
templateid, termtype, thirdpartyeas, thirdpartypipe, ticketid,
ticketuid, totalaoq, totalpid, totalsdq, totalshq, traffictime,
typewo, urgency, variat, vendor, customer_enquiry_ref,
quote_version, costs, mains_infill_charge, mtr_housing_kiosk_charge,
mtr_housing_kiosk_base_charge, specialist_reinstatement,
easement_charge, total_quote_ex_vat, vat, total_quote_incl_vat,
design_charge, reinforcement_charge, reinforcement_cost,
connection_allowance, workorder.pscdate, workorder.ascdate,
workorder.fincode, workorder.istask, workorder.status,
workorder.targstartdate, workorder.targcompdate,
workorder.schedfinish, workorder.actfinish, workorder.estdur,
workorder.wonum, workorder.mprn,
workorder.sihousename AS wositehousename,
workorder.sihouseno AS wositehouseno,
workorder.sistreet AS wositestreet,
workorder.sicounty AS wositecounty,
workorder.siposttown AS wositeposttown,
workorder.sipostcode AS wositepostcode, workorder.workorderid
FROM (maximo.sr
INNER JOIN
(maximo.relatedrecord INNER JOIN maximo.workorder
ON relatedrecord.relatedreckey =
(CASE
WHEN workorder.PARENT IS NOT NULL
THEN workorder.PARENT
ELSE workorder.wonum
END)
AND relatedrecord.orgid = workorder.orgid
AND relatedrecord.siteid = workorder.siteid
AND relatedrecord.relatedrecclass = 'WORKORDER')
ON sr.ticketid = relatedrecord.recordkey
AND sr.orgid = relatedrecord.orgid
AND sr.siteid = relatedrecord.siteid
AND relatedrecord.CLASS = 'SR')
LEFT JOIN
frozen_quote@gqmfof
ON sr.ticketid = customer_enquiry_ref
AND sr.quotever = quote_version
Regards,
grace
Could you please provide more info?
Refer to the following link.
When your query takes too long ...
thanks -
Performance for the below code
Can anyone help me improve the performance of the below code?
FORM RETRIEVE_DATA .
CLEAR WA_TERRINFO.
CLEAR WA_KNA1.
CLEAR WA_ADRC.
CLEAR SORT2.
*To retrieve the territory information from ZPSDSALREP
SELECT ZZTERRMG
ZZSALESREP
NAME1
ZREP_PROFILE
ZTEAM
INTO TABLE GT_TERRINFO
FROM ZPSDSALREP.
*Preparing Corporate ID from KNA1 & ADRC and storing it in SORT2 field
LOOP AT GT_TERRINFO INTO WA_TERRINFO.
SELECT SINGLE * FROM KNA1 INTO WA_KNA1
WHERE KUNNR = WA_TERRINFO-SALESREP.
SELECT SINGLE * FROM ADRC INTO WA_ADRC
WHERE ADDRNUMBER = WA_KNA1-ADRNR.
IF NOT WA_ADRC-SORT2 IS INITIAL.
CONCATENATE 'U' WA_ADRC-SORT2 INTO SORT2.
MOVE SORT2 TO WA_TERRINFO-SORT2.
MODIFY GT_TERRINFO1 FROM WA_TERRINFO.
APPEND WA_TERRINFO TO GT_TERRINFO1.
CLEAR WA_TERRINFO.
ENDIF.
CLEAR WA_KNA1.
CLEAR WA_ADRC.
ENDLOOP.
ENDFORM. " RETRIEVE_DATA
Hi,
The code is simple, so I don't think there is much you can do; you can only try to limit the reads of KNA1:
FORM RETRIEVE_DATA .
CLEAR WA_TERRINFO.
CLEAR WA_KNA1.
CLEAR WA_ADRC.
CLEAR SORT2.
*To retrieve the territory information from ZPSDSALREP
SELECT ZZTERRMG
ZZSALESREP
NAME1
ZREP_PROFILE
ZTEAM
INTO TABLE GT_TERRINFO
FROM ZPSDSALREP.
SORT GT_TERRINFO BY SALESREP.
*Preparing Corporate ID from KNA1 & ADRC and storing it in SORT2 field
LOOP AT GT_TERRINFO INTO WA_TERRINFO.
IF WA_TERRINFO-SALESREP <> WA_KNA1-KUNNR.
SELECT SINGLE * FROM KNA1 INTO WA_KNA1
WHERE KUNNR = WA_TERRINFO-SALESREP.
IF SY-SUBRC <> 0.
CLEAR: WA_KNA1, WA_ADRC.
ELSE.
SELECT SINGLE * FROM ADRC INTO WA_ADRC
WHERE ADDRNUMBER = WA_KNA1-ADRNR.
IF SY-SUBRC <> 0. CLEAR WA_ADRC. ENDIF.
ENDIF.
ENDIF.
IF NOT WA_ADRC-SORT2 IS INITIAL.
CONCATENATE 'U' WA_ADRC-SORT2 INTO SORT2.
MOVE SORT2 TO WA_TERRINFO-SORT2.
* MODIFY GT_TERRINFO1 FROM WA_TERRINFO.
APPEND WA_TERRINFO TO GT_TERRINFO1.
CLEAR WA_TERRINFO.
ENDIF.
ENDLOOP.
ENDFORM. " RETRIEVE_DATA
If the program takes a long time to load the data from ZPSDSALREP, you can try to split it into several packages:
SELECT ZZTERRMG ZZSALESREP NAME1 ZREP_PROFILE ZTEAM
INTO TABLE GT_TERRINFO PACKAGE SIZE <...>
FROM ZPSDSALREP.
SORT GT_TERRINFO BY SALESREP.
*Preparing Corporate ID from KNA1 & ADRC and storing it in SORT2 field
LOOP AT GT_TERRINFO INTO WA_TERRINFO.
IF WA_TERRINFO-SALESREP <> WA_KNA1-KUNNR.
SELECT SINGLE * FROM KNA1 INTO WA_KNA1
WHERE KUNNR = WA_TERRINFO-SALESREP.
IF SY-SUBRC <> 0.
CLEAR: WA_KNA1, WA_ADRC.
ELSE.
SELECT SINGLE * FROM ADRC INTO WA_ADRC
WHERE ADDRNUMBER = WA_KNA1-ADRNR.
IF SY-SUBRC <> 0. CLEAR WA_ADRC. ENDIF.
ENDIF.
ENDIF.
IF NOT WA_ADRC-SORT2 IS INITIAL.
CONCATENATE 'U' WA_ADRC-SORT2 INTO SORT2.
MOVE SORT2 TO WA_TERRINFO-SORT2.
* MODIFY GT_TERRINFO1 FROM WA_TERRINFO.
APPEND WA_TERRINFO TO GT_TERRINFO1.
CLEAR WA_TERRINFO.
ENDIF.
ENDLOOP.
ENDSELECT.
Max -
I am getting "Invalid Identifier" while running the below query
I am getting the error "invalid identifier: c_rank" while running the below query. Please help.
select a.*, b.pog_description pog_description, b.start_date,
row_number() over(partition by b.pog_description order by b.start_date) c_rank
from temp_codi_dept_35 a, pog_master_msi b, pog_skus_msi c
where a.sku = c.pog_sku
and b.pog_id = c.pog_id
and b.pog_dept = c.pog_dept
and b.pog_number = c.pog_number
and b.pog_level = c.pog_level
and a.sku = 10263477
and c_rank = 1;
>
I am getting the error "invalid identifier: c_rank" while running the below query. Please help.
select a.*, b.pog_description pog_description, b.start_date,
row_number() over(partition by b.pog_description order by b.start_date) c_rank
from temp_codi_dept_35 a, pog_master_msi b, pog_skus_msi c
where a.sku = c.pog_sku
and b.pog_id = c.pog_id
and b.pog_dept = c.pog_dept
and b.pog_number = c.pog_number
and b.pog_level = c.pog_level
and a.sku = 10263477
and c_rank = 1;
>
You can't use 'c_rank' in the WHERE clause because it doesn't exist there; you are computing it in the SELECT clause.
Remove the last condition and wrap your query in an outer query that selects by 'c_rank'. -
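Concretely, the wrapped version of the query above looks like this: the analytic column is computed in the inline view, and the outer query is free to filter on it:

```sql
-- The alias c_rank only exists once the inline view has produced it,
-- so the outer WHERE can reference it legally.
SELECT *
FROM  (SELECT a.*, b.pog_description, b.start_date,
              ROW_NUMBER() OVER (PARTITION BY b.pog_description
                                 ORDER BY b.start_date) c_rank
       FROM   temp_codi_dept_35 a, pog_master_msi b, pog_skus_msi c
       WHERE  a.sku = c.pog_sku
       AND    b.pog_id = c.pog_id
       AND    b.pog_dept = c.pog_dept
       AND    b.pog_number = c.pog_number
       AND    b.pog_level = c.pog_level
       AND    a.sku = 10263477)
WHERE  c_rank = 1;
```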
I have a Mac Pro from 2011 currently running Mac OS 10.9.5. This weekend I cloned the Mac HD drive to a new SSD drive for improved performance. The clone completed successfully with no errors. After the clone completed, I successfully restarted my system using the SSD as the boot device. I then successfully tested all of my products, including Photoshop CS6 and all of its plug-ins, and the key features that I frequently use.
Today, while attempting to launch Photoshop CS6, a message is displayed indicating that a scratch disk cannot be found. All drives are available on the system via the Finder and Disk Utility. I can access all drives, including the old Mac HD, which is no longer the boot device. I've even attempted to launch Photoshop from the old device, yet the same error persists. Is there a way to review/edit/change Photoshop preferences if Photoshop doesn't launch? I've restarted my system several times to see if that would resolve the issue. Does anyone have any recommendations? Have you previously addressed this issue?
Thank you, Gregg Williams
Boilerplate text:
Reset Preferences
http://forums.adobe.com/thread/375776
1) Close the program and press Ctrl+Alt+Shift/Cmd+Option+Shift during startup (not reversible)
or
2) Move the Folder. See:
http://www.bugge.com/Family-and-friends/Illy/illy.html
--OB -
How to execute the output of the below query automatically
Hi All,
I want to execute the output of the below query automatically, instead of manually copying and executing it.
select 'alter database ['+name+'] set recovery simple' from master.sys.databases where database_id > 4 and state_desc = 'online'
Please provide me a script to do this.
Thank you,
Kate
EXEC sp_MSforeachdb N'ALTER DATABASE [?] SET recovery simple';
This will set the recovery model for all databases, including the system databases.
The query provided by Vikash16 meets my requirement. Thank you.
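Since sp_MSforeachdb is undocumented, an alternative that sticks to documented features is to concatenate the generated statements and run them in one batch (a sketch, keeping the same filter as the original query):

```sql
DECLARE @sql nvarchar(max) = N'';

-- Build one ALTER DATABASE statement per online user database.
SELECT @sql = @sql + N'ALTER DATABASE ' + QUOTENAME(name)
            + N' SET RECOVERY SIMPLE;'
FROM   master.sys.databases
WHERE  database_id > 4
AND    state_desc = 'ONLINE';

EXEC sys.sp_executesql @sql;
```

QUOTENAME protects against database names containing brackets, which the string-concatenation version in the original query does not.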
-
How the below query is working
Hi,
I am new to the group.
the emp table has 5 rows as below
100 ram 10000.00 10
200 kumar 15000.00 10
300 william 20000.00 10
400 ravi 25000.00 10
500 victor 30000.00 10
I execute the below query:
select ename,sal from emp_test where case when sal < 10000 then sal + 1000
when sal < 20000 then sal + 2000
else sal
end < 20000
It gives the below output:
ram 10000.00
kumar 15000.00
How is the above query working?
Please explain. Thanks in advance.
If you want it to show the changed salary, it has to be in the SELECT list, not the WHERE clause:
select ename,
(case when sal < 10000 then sal + 1000
when sal < 20000 then sal + 2000
else sal
end) sal from emp_test
where case when sal < 10000 then sal + 1000
when sal < 20000 then sal + 2000
else sal
end < 20000 -
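Working through the original WHERE clause row by row shows why only ram and kumar come back: the CASE computes a trial value, the filter compares that value to 20000, and the untouched sal is what gets displayed:

```sql
-- ram     sal = 10000: not < 10000, but < 20000 -> CASE = 10000 + 2000 = 12000; 12000 < 20000 -> kept
-- kumar   sal = 15000: < 20000                  -> CASE = 15000 + 2000 = 17000; 17000 < 20000 -> kept
-- william sal = 20000: no branch matches (ELSE) -> CASE = 20000; 20000 < 20000 is false       -> dropped
-- ravi    sal = 25000: ELSE                     -> CASE = 25000; dropped
-- victor  sal = 30000: ELSE                     -> CASE = 30000; dropped
```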
Help required for improving performance of the Query
Hello SAP Techies,
I have an MRP query which shows inventory projection by calendar year/month.
There are 2 variables, Plant and Material, in the free characteristics, which have been restricted by replacement path from a query result.
Another query is the Control M query, which is based on a multiprovider. The multiprovider is created on 5 cubes.
The query is taking 15-20 minutes to get the result.
Due to the replacement path by query result for the 2 variables, the Control M query is executed first. The business wanted to see in the MRP query all those materials which are allocated to the base plant, hence they designed the query to use replacement path by query result. So it gets all the materials and plants from the Control M query and finds the inventory projection for the same selection in the MRP query.
Is there any way I can improve the performance of the query?
Query performance has been discussed innumerable times in the forums and there is a lot of information on the blogs and the WIKI - please search the forums before posting and if the existing posts do no answer your question satisfactorily then please raise a new post - else almost all the answers you get will be rehashed versions of previous posts ( and in most cases without attribution to the original author )
Edited by: Arun Varadarajan on Apr 19, 2011 9:23 PM
Hi,
Please see if you can make these changes currently to the report . It will help in improving the performance of the query
1. Select the right read mode.
Reading data during navigation minimizes the impact on
the application server resources because only data that
the user requires will be retrieved.
2. Leverage filters as much as possible. Using filters contributes to reducing the number of database reads and the size of the result set, thereby significantly improving query runtimes.
Filters are especially valuable when associated with "big dimensions", where there is a large number of characteristics, such as customers and document numbers.
3. Reduce restricted key figures (RKFs) in the query to as few as possible. Also, define calculated and restricted key figures at the InfoProvider level instead of locally within the query.
Regards
Garima -
How can I replace the cursor in the below query?
I have this below query which calls a stored procedure that takes only one item's attributes at a time. But because of performance problems, we are required to remove the cursor. How can I replace the below cursor logic with set operations or a CTE? Please advise.
DECLARE db_cursor_ava CURSOR
FOR
SELECT t.[agent-id],
t.[start-date],
t.[end-date],
t.[monitor-days],
t.[monitor-start],
t.[monitor-end],
t.[timezone-offset]
FROM @tmpAgentPeriodTimeRange t
OPEN db_cursor_ava
FETCH NEXT FROM db_cursor_ava INTO @agentID_ava,
@stDateTime_ava,
@endDateTime_ava,
@monDays_ava,
@monSt_ava,
@monEnd_ava,
@offset_ava
WHILE @@FETCH_STATUS = 0
BEGIN
DELETE
FROM @tmpMonitorPeriod
DELETE
FROM @tmpFinalResult
SET @runID = 1
IF(@endDateTime_ava>DATEADD(MI,@offset_ava, GETUTCDATE()))
BEGIN
SET @endDateTime_ava=DATEADD(MI,@offset_ava, GETUTCDATE())
END
INSERT INTO @tmpMonitorPeriod
EXEC core.usp_GetMonitoringPeriod
@startDate = @stDateTime_ava,
@endDate = @endDateTime_ava,
@monitoringDays = @monDays_ava,
@monitoringStart = @monSt_ava,
@monitoringEnd = @monEnd_ava
SELECT @maxID = MAX(tm.id)
FROM @tmpMonitorPeriod tm
FETCH NEXT FROM db_cursor_ava INTO @agentID_ava,
@stDateTime_ava,
@endDateTime_ava,
@monDays_ava,
@monSt_ava,
@monEnd_ava,
@offset_ava
END
CLOSE db_cursor_ava
DEALLOCATE db_cursor_ava
mayooran99
You've been down this path before, and the response is exactly the same.
how to replace cursor logic
And I'll suggest that you post the entire code, since you repeatedly delete 2 table variables but only populate one. The setting of @maxID also seems to have no purpose. And perhaps the issue here isn't the cursor but the general approach. Who knows; it appears you may have prematurely assumed that the cursor is the problem.
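If the cursor really is the bottleneck, the usual set-based shape is CROSS APPLY. This sketch assumes the logic of core.usp_GetMonitoringPeriod can be moved into an inline table-valued function; that function, core.fn_GetMonitoringPeriod, is hypothetical and is not shown in the thread:

```sql
-- Hypothetical: core.fn_GetMonitoringPeriod is an inline TVF with the same
-- parameters and result shape as core.usp_GetMonitoringPeriod.
INSERT INTO @tmpMonitorPeriod
SELECT p.*
FROM   @tmpAgentPeriodTimeRange AS t
CROSS APPLY core.fn_GetMonitoringPeriod(
        t.[start-date],
        -- cap the end date at "now" in the agent's timezone, as the loop does
        CASE WHEN t.[end-date] > DATEADD(MI, t.[timezone-offset], GETUTCDATE())
             THEN DATEADD(MI, t.[timezone-offset], GETUTCDATE())
             ELSE t.[end-date] END,
        t.[monitor-days],
        t.[monitor-start],
        t.[monitor-end]) AS p;
```

Whether this is faster depends on whether the procedure's logic can be inlined; a multi-statement TVF called per row would perform much like the cursor.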
Performance of the BEx query built on an InfoSet is very low
Dear experts,
I have a BEx query developed on an InfoSet that has hit a performance problem.
When I check the query, I find the following warning message:
Diagnosis
InfoProvider ABC does not contain characteristic 0CALYEAR. The exception aggregation for key figure xyz can therefore not be applied.
System Response
The key figure will therefore be aggregated using all characteristics of ABC with the generic aggregation SUM.
But the InfoObject 0CALYEAR is active and is in the InfoSet as an attribute of one of the master data InfoObjects.
Now, could you please help me improve the performance of the query, which is built on the InfoSet?
Thanks,
Mannu
Hi,
If the InfoSet is based on a cube, then:
--> Create aggregates on the cube for those objects used in the query
--> Compress the InfoCube
--> Then run the query
Also, in RSRT there are many properties according to the target; please check which property is suitable for you.
Best Regards
Obaid