"Improving SQL query performance using secondary indexes"
I have a very old copy of this document from 1997. I'm hoping to find a newer version, if one exists, but the search facility on SDN is not working at the moment. Does anyone have a more up-to-date copy or link they can point me to?
thanks,
Malcolm.
Hi,
Check these out; maybe they will help you:
[http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10743/schema.htm]
[http://teradata.uark.edu/research/wang/indexes.html]
[http://www.geekinterview.com/question_details/33720]
Similar Messages
-
SQL Query not using Composite Index
Hi,
Please look at the below query:
SELECT pde.participant_uid
,pde.award_code
,pde.award_type
,SUM(decode(pde.distribution_type
,'FORFEITURE'
,pde.forfeited_quantity *
pde.sold_price * cc.rate
,pde.distributed_quantity *
pde.sold_price * cc.rate)) AS gross_Amt_pref_Curr
FROM part_distribution_exec pde
,currency_conversion cc
,currency off_curr
WHERE pde.participant_uid = 4105
AND off_curr.currency_iso_code =
pde.offering_currency_iso_code
AND cc.from_currency_uid = off_curr.currency_uid
AND cc.to_currency_uid = 1
AND cc.latest_flag = 'Y'
GROUP BY pde.participant_uid
,pde.award_code
,pde.award_type
In Oracle 9i, I've executed the above query; it takes 6 seconds and the cost is 616. This is due to non-usage of the composite index Currency_conversion_idx (From_currency_uid, To_currency_uid, Latest_flag). I wondered why this index was not used while executing the above query, so I dropped the index and recreated it. Now the query uses this index. But after inserting many rows, or say after one day, if the same query is executed, it again stops using the index. So the index would have to be dropped and recreated every day.
I don't want this daily drop and recreation of the index; I need a permanent solution.
Can anyone tell me why this index goes stale after a period of time? Please take some time and help me solve this issue.
-Sankar
Hi David,
This is Sankar here. Thank you for your reply.
I've got the plan table output for this problematic query. Please go through it and help me understand why the index CURRENCY_CONVERSION_IDX is used now but was not used when executing the query after a day, or after inserting some records:
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 26 | 15678 | 147 |
| 1 | TABLE ACCESS BY INDEX ROWID | PART_AWARD_PAYOUT_SCHEDULE | 1 | 89 | 2 |
|* 2 | INDEX UNIQUE SCAN | PART_AWARD_PAYOUT_SCHEDULE_PK1 | 61097 | | 1 |
| 3 | SORT AGGREGATE | | 1 | 67 | |
|* 4 | FILTER | | | | |
|* 5 | INDEX RANGE SCAN | PART_AWARD_PAYOUT_SCHEDULE_PK1 | 1 | 67 | 2 |
| 6 | SORT AGGREGATE | | 1 | 94 | |
|* 7 | FILTER | | | | |
|* 8 | TABLE ACCESS BY INDEX ROWID | PART_AWARD_PAYOUT_SCHEDULE | 1 | 94 | 3 |
|* 9 | INDEX RANGE SCAN | PART_AWARD_PAYOUT_SCHEDULE_PK1 | 1 | | 2 |
|* 10 | FILTER | | | | |
|* 11 | HASH JOIN | | 26 | 15678 | 95 |
|* 12 | HASH JOIN OUTER | | 26 | 11596 | 91 |
|* 13 | HASH JOIN | | 26 | 10218 | 86 |
| 14 | VIEW | | 1 | 82 | 4 |
| 15 | SORT GROUP BY | | 1 | 116 | 4 |
|* 16 | TABLE ACCESS BY INDEX ROWID | PART_AWARD_LEDGER | 1 | 116 | 2 |
|* 17 | INDEX RANGE SCAN | PARTICIPANT_UID_IDX | 1 | | 1 |
|* 18 | HASH JOIN OUTER | | 26 | 8086 | 82 |
|* 19 | HASH JOIN | | 26 | 6006 | 71 |
| 20 | NESTED LOOPS | | 36 | 5904 | 66 |
| 21 | NESTED LOOPS | | 1 | 115 | 65 |
| 22 | TABLE ACCESS BY INDEX ROWID | CURRENCY_CONVERSION | 18 | 756 | 2 |
|* 23 | INDEX RANGE SCAN | KLS_IDX_CURRENCY_CONV | 3 | | 1 |
| 24 | VIEW | | 1 | 73 | 4 |
| 25 | SORT GROUP BY | | 1 | 71 | 4 |
| 26 | TABLE ACCESS BY INDEX ROWID| PART_AWARD_VALUE | 1 | 71 | 2 |
|* 27 | INDEX RANGE SCAN | PAV_PARTICIPANT_UID_IDX | 1 | | 1 |
| 28 | TABLE ACCESS BY INDEX ROWID | PARTICIPANT_AWARD | 199 | 9751 | 1 |
|* 29 | INDEX UNIQUE SCAN | PARTICIPANT_AWARD_PK1 | 100 | | |
|* 30 | INDEX FAST FULL SCAN | PARTICIPANT_AWARD_TYPE_PK1 | 147 | 9849 | 4 |
| 31 | VIEW | | 1 | 80 | 10 |
| 32 | SORT GROUP BY | | 1 | 198 | 10 |
|* 33 | TABLE ACCESS BY INDEX ROWID | CURRENCY_CONVERSION | 1 | 42 | 2 |
| 34 | NESTED LOOPS | | 1 | 198 | 8 |
| 35 | NESTED LOOPS | | 2 | 312 | 4 |
| 36 | TABLE ACCESS BY INDEX ROWID| PART_DISTRIBUTION_EXEC | 2 | 276 | 2 |
|* 37 | INDEX RANGE SCAN | IND_PARTICIPANT_UID | 1 | | 1 |
| 38 | TABLE ACCESS BY INDEX ROWID| CURRENCY | 1 | 18 | 1 |
|* 39 | INDEX UNIQUE SCAN | CURRENCY_AK | 1 | | |
|* 40 | INDEX RANGE SCAN | CURRENCY_CONVERSION_AK | 2 | | 1 |
| 41 | VIEW | | 1 | 53 | 4 |
| 42 | SORT GROUP BY | | 1 | 62 | 4 |
|* 43 | TABLE ACCESS BY INDEX ROWID | PART_AWARD_VESTING | 1 | 62 | 2 |
|* 44 | INDEX RANGE SCAN | PAVES_PARTICIPANT_UID_IDX | 1 | | 1 |
| 45 | TABLE ACCESS FULL | AWARD | 1062 | 162K| 3 |
| 46 | TABLE ACCESS BY INDEX ROWID | CURRENCY | 1 | 18 | 2 |
|* 47 | INDEX UNIQUE SCAN | CURRENCY_AK | 102 | | 1 |
Predicate Information (identified by operation id):
2 - access("PAPS"."AWARD_CODE"=:B1 AND "PAPS"."PARTICIPANT_UID"=4105 AND "PAPS"."AWARD_TYPE"=:B2 AND
"PAPS"."INSTALLMENT_NUM"=1)
4 - filter(4105=:B1)
5 - access("PAPS"."AWARD_CODE"=:B1 AND "PAPS"."PARTICIPANT_UID"=4105 AND "PAPS"."AWARD_TYPE"=:B2)
7 - filter(4105=:B1)
8 - filter("PAPS"."STATUS"='OPEN')
9 - access("PAPS"."AWARD_CODE"=:B1 AND "PAPS"."PARTICIPANT_UID"=4105 AND "PAPS"."AWARD_TYPE"=:B2)
10 - filter("CC_A_P_CURR"."FROM_CURRENCY_UID"= (SELECT /*+ */ "CURRENCY"."CURRENCY_UID" FROM
"EWAPDBO"."CURRENCY" "CURRENCY" WHERE "CURRENCY"."CURRENCY_ISO_CODE"=:B1))
11 - access("SYS_ALIAS_7"."AWARD_CODE"="A"."AWARD_CODE")
12 - access("SYS_ALIAS_7"."AWARD_CODE"="PVS"."AWARD_CODE"(+))
13 - access("SYS_ALIAS_8"."AWARD_CODE"="PALS"."AWARD_CODE" AND
"SYS_ALIAS_8"."AWARD_TYPE"="PALS"."AWARD_TYPE")
16 - filter(TRUNC("PAL1"."LEDGER_ENTRY_DATE")<=TRUNC(SYSDATE@!) AND "PAL1"."ALLOC_TYPE"='IPU')
17 - access("PAL1"."PARTICIPANT_UID"=4105)
filter("PAL1"."PARTICIPANT_UID"=4105)
18 - access("SYS_ALIAS_8"."AWARD_CODE"="PDES"."AWARD_CODE"(+) AND
"SYS_ALIAS_8"."AWARD_TYPE"="PDES"."AWARD_TYPE"(+))
19 - access("SYS_ALIAS_7"."AWARD_CODE"="SYS_ALIAS_8"."AWARD_CODE")
23 - access("CC_A_P_CURR"."TO_CURRENCY_UID"=1 AND "CC_A_P_CURR"."LATEST_FLAG"='Y')
27 - access("PAV"."PARTICIPANT_UID"=4105)
filter("PAV"."PARTICIPANT_UID"=4105)
29 - access("SYS_ALIAS_7"."AWARD_CODE"="SYS_ALIAS_9"."AWARD_CODE" AND
"SYS_ALIAS_7"."PARTICIPANT_UID"=4105)
30 - filter("SYS_ALIAS_8"."PARTICIPANT_UID"=4105)
33 - filter("CC"."LATEST_FLAG"='Y')
37 - access("PDE"."PARTICIPANT_UID"=4105)
filter("PDE"."PARTICIPANT_UID"=4105)
39 - access("OFF_CURR"."CURRENCY_ISO_CODE"="PDE"."OFFERING_CURRENCY_ISO_CODE")
40 - access("CC"."FROM_CURRENCY_UID"="OFF_CURR"."CURRENCY_UID" AND "CC"."TO_CURRENCY_UID"=1)
43 - filter("PV"."VESTING_DATE"<=SYSDATE@!)
44 - access("PV"."PARTICIPANT_UID"=4105)
filter("PV"."PARTICIPANT_UID"=4105)
47 - access("CURRENCY"."CURRENCY_ISO_CODE"=:B1)
Note: cpu costing is off
93 rows selected.
Please help me out...
-Sankar -
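A likely cause of the behaviour in the thread above - an index that stops being used after a day of inserts - is stale optimizer statistics rather than a broken index; in Oracle the usual remedy is to gather statistics regularly (for example with DBMS_STATS.GATHER_TABLE_STATS) instead of dropping and recreating the index. The idea can be sketched with SQLite from Python, where ANALYZE plays the role of statistics gathering (the table and column names below are simplified stand-ins, not the poster's real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE currency_conversion "
            "(from_uid INTEGER, to_uid INTEGER, latest_flag TEXT)")
cur.execute("CREATE INDEX cc_idx ON currency_conversion "
            "(from_uid, to_uid, latest_flag)")
cur.executemany("INSERT INTO currency_conversion VALUES (?, ?, ?)",
                [(i % 50, 1, "Y") for i in range(1000)])

# Refresh optimizer statistics after bulk changes; SQLite stores them in
# sqlite_stat1, much as Oracle stores them in the data dictionary.
cur.execute("ANALYZE")
stats = cur.execute("SELECT * FROM sqlite_stat1").fetchall()

# Verify the composite index is chosen once statistics are current.
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM currency_conversion "
    "WHERE from_uid = 7 AND to_uid = 1 AND latest_flag = 'Y'").fetchall()
print(plan)
```

Scheduling the statistics refresh (rather than dropping the index) keeps the plan stable as data volumes change.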
SQL query performance difference with Index Hint in Oracle 10g
Hi,
I was having a problem with a SQL SELECT query that was taking around 20 seconds to return results. By trial and error I added an INDEX hint to the same query, listing the indexes of the tables, and the results came back within 10 milliseconds. I'm not sure I understand how this works with the INDEX hint.
The query with out Index Hint:
select /*+rule*/ FdnTab2.fdn, paramTab3.attr_name
from fdnmappingtable FdnTab, fdnmappingtable FdnTab2, parametertable paramTab1, parametertable paramTab3
where FdnTab.id = 52787
and paramTab1.id = FdnTab.id
and paramTab3.id = FdnTab.id
and paramTab3.attr_value = FdnTab2.fdn
and paramTab1.attr_name = 'harqUsersMax'
and paramTab1.attr_value <> 'DEFAULT'
and exists (select ParamTab2.attr_name
from parametertable ParamTab2, templaterelationtable TemplateTab2
where TemplateTab2.id = FdnTab.id
and ParamTab2.id = TemplateTab2.template_id
and ParamTab2.id = FdnTab2.id
and ParamTab2.attr_name = paramTab1.attr_name)
==> EXECUTION TIME: 20 secs
The same query with Index Hint:
select /*+INDEX(fdnmappingtable[PRIMARY_KY_FDNMAPPINGTABLE],parametertable[PRIMARY_KY_PARAMETERTABLE])*/ FdnTab2.fdn, paramTab3.attr_name
from fdnmappingtable FdnTab, fdnmappingtable FdnTab2, parametertable paramTab1, parametertable paramTab3
where FdnTab.id = 52787
and paramTab1.id = FdnTab.id
and paramTab3.id = FdnTab.id
and paramTab3.attr_value = FdnTab2.fdn
and paramTab1.attr_name = 'harqUsersMax'
and paramTab1.attr_value <> 'DEFAULT'
and exists (select ParamTab2.attr_name
from parametertable ParamTab2, templaterelationtable TemplateTab2
where TemplateTab2.id = FdnTab.id
and ParamTab2.id = TemplateTab2.template_id
and ParamTab2.id = FdnTab2.id
and ParamTab2.attr_name = paramTab1.attr_name)
==> EXECUTION TIME: 10 milliseconds
Can any one suggest what could be the real problem?
Regards,
Purushotham
Sorry,
The right query and the explain plan:
select /*+rule*/ FdnTab2.fdn, paramTab3.attr_name
from fdnmappingtable FdnTab, fdnmappingtable FdnTab2, parametertable paramTab1, parametertable paramTab3
where FdnTab.id = 52787
and paramTab1.id = FdnTab.id
and paramTab3.id = FdnTab.id
and paramTab3.attr_value = FdnTab2.fdn
and paramTab1.attr_name = 'harqUsersMax'
and paramTab1.attr_value <> 'DEFAULT'
and exists (select ParamTab2.attr_name
from parametertable ParamTab2, templaterelationtable TemplateTab2
where TemplateTab2.id = FdnTab.id
and ParamTab2.id = TemplateTab2.template_id
and ParamTab2.id = FdnTab2.id
and ParamTab2.attr_name = paramTab1.attr_name)
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 651267974
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
|* 1 | FILTER | |
| 2 | NESTED LOOPS | |
| 3 | NESTED LOOPS | |
| 4 | NESTED LOOPS | |
|* 5 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
|* 6 | TABLE ACCESS BY INDEX ROWID| PARAMETERTABLE |
|* 7 | INDEX UNIQUE SCAN | PRIMARY_KY_PARAMETERTABLE |
| 8 | TABLE ACCESS BY INDEX ROWID | PARAMETERTABLE |
|* 9 | INDEX RANGE SCAN | PRIMARY_KY_PARAMETERTABLE |
| 10 | TABLE ACCESS BY INDEX ROWID | FDNMAPPINGTABLE |
|* 11 | INDEX UNIQUE SCAN | SYS_C005695 |
| 12 | NESTED LOOPS | |
|* 13 | INDEX UNIQUE SCAN | PRIMARY_KY_PARAMETERTABLE |
|* 14 | INDEX UNIQUE SCAN | PRIMARY_KEY_TRTABLE |
Predicate Information (identified by operation id):
1 - filter( EXISTS (SELECT 0 FROM "TEMPLATERELATIONTABLE"
"TEMPLATETAB2","PARAMETERTABLE" "PARAMTAB2" WHERE
"PARAMTAB2"."ATTR_NAME"=:B1 AND "PARAMTAB2"."ID"=:B2 AND
"PARAMTAB2"."ID"="TEMPLATETAB2"."TEMPLATE_ID" AND
"TEMPLATETAB2"."ID"=:B3))
5 - access("FDNTAB"."ID"=52787)
6 - filter("PARAMTAB1"."ATTR_VALUE"<>'DEFAULT')
7 - access("PARAMTAB1"."ID"="FDNTAB"."ID" AND "PARAMTAB1"."ATTR_NAME"='harqUsersMax')
9 - access("PARAMTAB3"."ID"="FDNTAB"."ID")
11 - access("PARAMTAB3"."ATTR_VALUE"="FDNTAB2"."FDN")
13 - access("PARAMTAB2"."ID"=:B1 AND "PARAMTAB2"."ATTR_NAME"=:B2)
14 - access("TEMPLATETAB2"."ID"=:B1 AND
"PARAMTAB2"."ID"="TEMPLATETAB2"."TEMPLATE_ID")
Note
- rule based optimizer used (consider using cbo)
43 rows selected.
WITH INDEX HINT:
select /*+INDEX(fdnmappingtable[PRIMARY_KY_FDNMAPPINGTABLE],parametertable[PRIMARY_KY_PARAMETERTABLE])*/ FdnTab2.fdn, paramTab3.attr_name
from fdnmappingtable FdnTab, fdnmappingtable FdnTab2, parametertable paramTab1, parametertable paramTab3
where FdnTab.id = 52787
and paramTab1.id = FdnTab.id
and paramTab3.id = FdnTab.id
and paramTab3.attr_value = FdnTab2.fdn
and paramTab1.attr_name = 'harqUsersMax'
and paramTab1.attr_value <> 'DEFAULT'
and exists (select ParamTab2.attr_name
from parametertable ParamTab2, templaterelationtable TemplateTab2
where TemplateTab2.id = FdnTab.id
and ParamTab2.id = TemplateTab2.template_id
and ParamTab2.id = FdnTab2.id
and ParamTab2.attr_name = paramTab1.attr_name);
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 2924316070
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 916 | 6 (0)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | NESTED LOOPS | | 1 | 916 | 4 (0)| 00:00:01 |
| 3 | NESTED LOOPS | | 1 | 401 | 3 (0)| 00:00:01 |
| 4 | NESTED LOOPS | | 1 | 207 | 2 (0)| 00:00:01 |
|* 5 | TABLE ACCESS BY INDEX ROWID| PARAMETERTABLE | 1 | 194 | 1 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | PRIMARY_KY_PARAMETERTABLE | 1 | | 1 (0)| 00:00:01 |
|* 7 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE | 1 | 13 | 1 (0)| 00:00:01 |
| 8 | TABLE ACCESS BY INDEX ROWID | PARAMETERTABLE | 1 | 194 | 1 (0)| 00:00:01 |
|* 9 | INDEX RANGE SCAN | PRIMARY_KY_PARAMETERTABLE | 1 | | 1 (0)| 00:00:01 |
| 10 | TABLE ACCESS BY INDEX ROWID | FDNMAPPINGTABLE | 1 | 515 | 1 (0)| 00:00:01 |
|* 11 | INDEX UNIQUE SCAN | SYS_C005695 | 1 | | 1 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 1 | 91 | 2 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | PRIMARY_KEY_TRTABLE | 1 | 26 | 1 (0)| 00:00:01 |
|* 14 | INDEX UNIQUE SCAN | PRIMARY_KY_PARAMETERTABLE | 1 | 65 | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter( EXISTS (SELECT /*+ */ 0 FROM "TEMPLATERELATIONTABLE" "TEMPLATETAB2","PARAMETERTABLE" "PARAMTAB2"
WHERE "PARAMTAB2"."ATTR_NAME"=:B1 AND "PARAMTAB2"."ID"=:B2 AND
"TEMPLATETAB2"."TEMPLATE_ID"=:B3 AND "TEMPLATETAB2"."ID"=:B4))
5 - filter("PARAMTAB1"."ATTR_VALUE"<>'DEFAULT')
6 - access("PARAMTAB1"."ID"=52787 AND "PARAMTAB1"."ATTR_NAME"='harqUsersMax')
7 - access("FDNTAB"."ID"=52787)
9 - access("PARAMTAB3"."ID"=52787)
11 - access("PARAMTAB3"."ATTR_VALUE"="FDNTAB2"."FDN")
13 - access("TEMPLATETAB2"."ID"=:B1 AND "TEMPLATETAB2"."TEMPLATE_ID"=:B2)
14 - access("PARAMTAB2"."ID"=:B1 AND "PARAMTAB2"."ATTR_NAME"=:B2)
Note
- dynamic sampling used for this statement
39 rows selected. -
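The 20 s versus 10 ms difference in the thread above comes from the hint forcing a different access path (and, with /*+rule*/, from bypassing the cost-based optimizer entirely). Pinning an index can be sketched with SQLite's INDEXED BY clause, a rough analogue of Oracle's /*+ INDEX(...) */ hint (the table, index and data below are invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE parametertable "
            "(id INTEGER, attr_name TEXT, attr_value TEXT)")
cur.execute("CREATE INDEX idx_param ON parametertable (id, attr_name)")
cur.executemany("INSERT INTO parametertable VALUES (?, ?, ?)",
                [(i, "harqUsersMax", str(i)) for i in range(500)])

# INDEXED BY pins the named index; SQLite raises an error if it cannot
# use it, which makes a forced access path easy to audit.
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM parametertable INDEXED BY idx_param "
    "WHERE id = 42 AND attr_name = 'harqUsersMax'").fetchall()
print(plan)
```

A hint that helps today can hurt after the data changes, so it is usually better to fix statistics than to hard-code access paths.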
How does Index fragmentation and statistics affect the sql query performance
Hi,
How does Index fragmentation and statistics affect the sql query performance
Thanks
Shashikala
How does index fragmentation and statistics affect SQL query performance?
Very simple answer: outdated statistics will lead the optimizer to create bad plans, which in turn will require more resources, and this will impact performance. If an index is fragmented (mainly the clustered index, though this holds true for nonclustered indexes as well), the time spent finding a value will be greater, because the query has to search the fragmented index to look for the data; the additional space increases search time.
-
SELECT QUERY BASED ON SECONDARY INDEX
Hi all,
Can anyone tell me how to write a SELECT query based on a secondary index, and in what way it improves performance?
I know that when creating a secondary index I need to give an index number - it can be any number, right?
I have to list all the primary keys first and then the field for which I am creating the secondary index, right?
Let's say I have 2 primary keys and I want to create a secondary index for 2 fields - do I need to create a separate secondary index for each of those fields, or is one enough?
Pls let me know if I am wrong.
Hi,
If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clauses, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
You create secondary indexes using the ABAP Dictionary. There you can create its columns and define it as UNIQUE. However, you should not create secondary indexes to cover all possible combinations of fields.
Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. <b>As a rule, secondary indexes should not contain more than four fields</b>, <b>and you should not have more than five indexes for a single database table</b>.
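The full-table-scan versus secondary-index effect described above is database-level behaviour, so it can be sketched outside ABAP; here with SQLite from Python (the table and field names are invented stand-ins for an ABAP Dictionary table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Primary key on (mandt, docnum); city is a non-key field.
cur.execute("CREATE TABLE orders (mandt TEXT, docnum INTEGER, city TEXT, "
            "PRIMARY KEY (mandt, docnum))")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [("100", i, "CITY%03d" % (i % 20)) for i in range(1000)])

def plan_for(sql):
    return str(cur.execute("EXPLAIN QUERY PLAN " + sql).fetchall())

# Selecting by a non-key field without an index: full table scan.
before = plan_for("SELECT * FROM orders WHERE city = 'CITY007'")

# With a secondary index the same query becomes an index search.
cur.execute("CREATE INDEX orders_city ON orders (city)")
after = plan_for("SELECT * FROM orders WHERE city = 'CITY007'")
print(before)
print(after)
```

As the reply says, each extra index also has to be maintained on every insert and update, so only add one where the read pattern justifies it.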
<b>What to Keep in Mind for Secondary Indexes:</b>
http://help.sap.com/saphelp_nw04s/helpdata/en/cf/21eb2d446011d189700000e8322d00/content.htm
http://www.sap-img.com/abap/quick-note-on-design-of-secondary-database-indexes-and-logical-databases.htm
Regards
Sudheer -
SQL query performance issues.
Hi All,
I worked on this query a month ago and the fix worked for me in the test instance but failed in production. Following is the URL for the previous thread.
SQL query performance issues.
Following is the tkprof file.
CURSOR_ID:76 LENGTH:2383 ADDRESS:f6b40ab0 HASH_VALUE:2459471753 OPTIMIZER_GOAL:ALL_ROWS USER_ID:443 (APPS)
insert into cos_temp(
TRX_DATE, DEPT, PRODUCT_LINE, PART_NUMBER,
CUSTOMER_NUMBER, QUANTITY_SOLD, ORDER_NUMBER,
INVOICE_NUMBER, EXT_SALES, EXT_COS,
GROSS_PROFIT, ACCT_DATE,
SHIPMENT_TYPE,
FROM_ORGANIZATION_ID,
FROM_ORGANIZATION_CODE)
select a.trx_date,
g.segment5 dept,
g.segment4 prd,
m.segment1 part,
d.customer_number customer,
b.quantity_invoiced units,
-- substr(a.sales_order,1,6) order#,
substr(ltrim(b.interface_line_attribute1),1,10) order#,
a.trx_number invoice,
(b.quantity_invoiced * b.unit_selling_price) sales,
(b.quantity_invoiced * nvl(price.operand,0)) cos,
(b.quantity_invoiced * b.unit_selling_price) -
(b.quantity_invoiced * nvl(price.operand,0)) profit,
to_char(to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS'),'DD-MON-RR') acct_date,
'DRP',
l.ship_from_org_id,
p.organization_code
from ra_customers d,
gl_code_combinations g,
mtl_system_items m,
ra_cust_trx_line_gl_dist c,
ra_customer_trx_lines b,
ra_customer_trx_all a,
apps.oe_order_lines l,
apps.HR_ORGANIZATION_INFORMATION i,
apps.MTL_INTERCOMPANY_PARAMETERS inter,
apps.HZ_CUST_SITE_USES_ALL site,
apps.qp_list_lines_v price,
apps.mtl_parameters p
where a.trx_date between to_date('2010/02/01 00:00:00','yyyy/mm/dd HH24:MI:SS')
and to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS')+0.9999
and a.batch_source_id = 1001 -- Sales order shipped other OU
and a.complete_flag = 'Y'
and a.customer_trx_id = b.customer_trx_id
and b.customer_trx_line_id = c.customer_trx_line_id
and a.sold_to_customer_id = d.customer_id
and b.inventory_item_id = m.inventory_item_id
and m.organization_id
= decode(substr(g.segment4,1,2),'01',5004,'03',5004,
'02',5003,'00',5001,5002)
and nvl(m.item_type,'0') <> '111'
and c.code_combination_id = g.code_combination_id+0
and l.line_id = b.interface_line_attribute6
and i.organization_id = l.ship_from_org_id
and p.organization_id = l.ship_from_org_id
and i.org_information3 <> '5108'
and inter.ship_organization_id = i.org_information3
and inter.sell_organization_id = '5108'
and inter.customer_site_id = site.site_use_id
and site.price_list_id = price.list_header_id
and product_attr_value = to_char(m.inventory_item_id)
call count cpu elapsed disk query current rows misses
Parse 1 0.47 0.56 11 197 0 0 1
Execute 1 3733.40 3739.40 34893 519962154 11 188 0
total 2 3733.87 3739.97 34904 519962351 11 188 1
| Rows Row Source Operation
| ------------ ---------------------------------------------------
| 188 HASH JOIN (cr=519962149 pr=34889 pw=0 time=2607.35)
| 741 .TABLE ACCESS BY INDEX ROWID QP_PRICING_ATTRIBUTES (cr=519939426 pr=34889 pw=0 time=2457.32)
| 254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
| 254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
| 741 ....NESTED LOOPS (cr=50042 pr=7230 pw=0 time=11.37)
| 741 .....NESTED LOOPS (cr=48558 pr=7229 pw=0 time=11.35)
| 741 ......NESTED LOOPS (cr=47815 pr=7223 pw=0 time=11.32)
| 3237 .......NESTED LOOPS (cr=41339 pr=7223 pw=0 time=12.42)
| 3237 ........NESTED LOOPS (cr=38100 pr=7223 pw=0 time=12.39)
| 3237 .........NESTED LOOPS (cr=28296 pr=7139 pw=0 time=12.29)
| 1027 ..........NESTED LOOPS (cr=17656 pr=4471 pw=0 time=3.81)
| 1027 ...........NESTED LOOPS (cr=13537 pr=4404 pw=0 time=3.30)
| 486 ............NESTED LOOPS (cr=10873 pr=4240 pw=0 time=0.04)
| 486 .............NESTED LOOPS (cr=10385 pr=4240 pw=0 time=0.03)
| 486 ..............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_ALL (cr=9411 pr=4240 pw=0 time=0.02)
| 75253 ...............INDEX RANGE SCAN RA_CUSTOMER_TRX_N5 (cr=403 pr=285 pw=0 time=0.38)
| 486 ..............TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS (cr=974 pr=0 pw=0 time=0.01)
| 486 ...............INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 (cr=488 pr=0 pw=0 time=0.01)
| 486 .............INDEX UNIQUE SCAN HZ_PARTIES_U1 (cr=488 pr=0 pw=0 time=0.01)
| 1027 ............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_LINES_ALL (cr=2664 pr=164 pw=0 time=1.95)
| 2063 .............INDEX RANGE SCAN RA_CUSTOMER_TRX_LINES_N2 (cr=1474 pr=28 pw=0 time=0.22)
| 1027 ...........TABLE ACCESS BY INDEX ROWID RA_CUST_TRX_LINE_GL_DIST_ALL (cr=4119 pr=67 pw=0 time=0.54)
| 1027 ............INDEX RANGE SCAN RA_CUST_TRX_LINE_GL_DIST_N1 (cr=3092 pr=31 pw=0 time=0.20)
| 3237 ..........TABLE ACCESS BY INDEX ROWID MTL_SYSTEM_ITEMS_B (cr=10640 pr=2668 pw=0 time=15.35)
| 3237 ...........INDEX RANGE SCAN MTL_SYSTEM_ITEMS_B_U1 (cr=2062 pr=40 pw=0 time=0.33)
| 3237 .........TABLE ACCESS BY INDEX ROWID OE_ORDER_LINES_ALL (cr=9804 pr=84 pw=0 time=0.77)
| 3237 ..........INDEX UNIQUE SCAN OE_ORDER_LINES_U1 (cr=6476 pr=47 pw=0 time=0.43)
| 3237 ........TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=3239 pr=0 pw=0 time=0.04)
| 3237 .........INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=2 pr=0 pw=0 time=0.01)
| 741 .......TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=6476 pr=0 pw=0 time=0.10)
| 6474 ........INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=3239 pr=0 pw=0 time=0.03)
Please help.
Regards
Ashish
| 254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
| 254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
There is no way the optimizer should choose to process that many rows using nested loops.
Either the statistics are not up to date, the data values are skewed, or you have some optimizer parameter set to a non-default value to force index access.
Please post explain plan and optimizer* parameter settings. -
Can I refactor this query to use an index more efficiently?
I have a members table with fields such as id, last name, first name, address, join date, etc.
I have a unique index defined on (last_name, join_date, id).
This query will use the index for a range scan, no sort required since the index will be in order for that range ('Smith'):
SELECT members.*
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate, id
Is there any way I can get something like the following to use the index (with no sort) as well:
SELECT members.*
FROM members
WHERE last_name like 'S%'
ORDER BY joindate, id
I understand the difficulty: even if it does a range scan on every last name 'S%' (assuming it can), the rows are not necessarily in order. Case in point:
Last_Name: JoinDate:
Smith 2/5/2010
Smuckers 1/10/2010
An index range scan of 'S%' would return them in the above order, which is not ordered by joindate.
So is there any way I can refactor this (query or index) such that the index can be range scanned (using LIKE 'x%') and return rows in the correct order without performing a sort? Or is that simply not possible?
Come on. Index column order does matter. "LIKE 'x%'" effectively scans the entire 'x%' range of the index: the db engine accesses contiguous index entries and then uses the ROWID values in the index to retrieve the table rows.
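The core point of this thread - an equality predicate on the leading index column preserves the (joindate, id) order, while a range predicate does not - can be demonstrated with SQLite from Python (a sketch only; plan wording and LIKE handling differ by engine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE members (id INTEGER, last_name TEXT, joindate TEXT)")
cur.execute("CREATE UNIQUE INDEX members_idx "
            "ON members (last_name, joindate, id)")
cur.executemany("INSERT INTO members VALUES (?, ?, ?)",
                [(1, "Smith", "2010-02-05"), (2, "Smuckers", "2010-01-10")])

def plan_for(sql):
    return str(cur.execute("EXPLAIN QUERY PLAN " + sql).fetchall())

# Equality on last_name: index order already matches ORDER BY, no sort.
eq = plan_for("SELECT * FROM members WHERE last_name = 'Smith' "
              "ORDER BY joindate, id")

# Range predicate: rows from different last names interleave, so the
# planner has to add an explicit sort (a temp B-tree in SQLite).
rng = plan_for("SELECT * FROM members WHERE last_name LIKE 'S%' "
               "ORDER BY joindate, id")
print(eq)
print(rng)
```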
Hello gurus,
We do use RSRT, ST03(expert mode) to check if a query is running slowly. what actually do we check there and how?
Can anyone please explain me how its done.
Thanks in advance
S N
Hi SN,
You need to check the DB time taken by the queries and the summarization ratio for the queries. As a rule of thumb, if the summarization ratio is > 10 and the database time is > 30%, then creating an aggregate will improve the query performance.
By summarization ratio I mean the ratio between the number of records read from the cube and the number of records displayed in the report.
Thx,
Soumya -
How to improve the query performance
ALTER PROCEDURE [SPNAME]
@Portfolio INT,
@Program INT,
@Project INT
AS
BEGIN
--DECLARE @StartDate DATETIME
--DECLARE @EndDate DATETIME
--SET @StartDate = '11/01/2013'
--SET @EndDate = '02/28/2014'
IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
DROP TABLE #Dates
IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
DROP TABLE #DailyTasks
CREATE TABLE #Dates(WorkDate DATE)
--CREATE INDEX IDX_Dates ON #Dates(WorkDate)
;WITH Dates AS
(
SELECT (@StartDate) DateValue
UNION ALL
SELECT DateValue + 1
FROM Dates
WHERE DateValue + 1 <= @EndDate
)
INSERT INTO #Dates
SELECT DateValue
FROM Dates D
LEFT JOIN tb_Holidays H
ON H.HolidayOn = D.DateValue
AND H.OfficeID = 2
WHERE DATEPART(dw,DateValue) NOT IN (1,7)
AND H.UID IS NULL
OPTION(MAXRECURSION 0)
SELECT TSK.TaskID,
TR.ResourceID,
WC.WorkDayCount,
(TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
D.WorkDate,
TSK.ProjectID,
RES.ResourceName
INTO #DailyTasks
FROM Tasks TSK
INNER JOIN TasksResource TR
ON TSK.TaskID = TR.TaskID
INNER JOIN tb_Resource RES
ON TR.ResourceID=RES.UID
OUTER APPLY (SELECT COUNT(*) WorkDayCount
FROM #Dates
WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
INNER JOIN #Dates D
ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
-------WHERE TSK.ProjectID = @Project-----
SELECT D.ResourceID,
D.WorkDayCount,
SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
D.WorkDate,
T.TaskID,
D.ResourceName
FROM #DailyTasks D
OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
FROM #DailyTasks DA
WHERE D.WorkDate = DA.WorkDate
AND D.ResourceID = DA.ResourceID
FOR XML PATH('')) AS TaskID) T
LEFT JOIN tb_Project PRJ
ON D.ProjectID=PRJ.UID
INNER JOIN tb_Program PR
ON PRJ.ProgramID=PR.UID
INNER JOIN tb_Portfolio PF
ON PR.PortfolioID=PF.UID
WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
AND (@Program = -1 or PR.UID = @Program)
AND (@Project = -1 or PRJ.UID = @Project)
GROUP BY D.ResourceID,
D.WorkDate,
T.TaskID,
D.WorkDayCount,
D.ResourceName
HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
Hi,
My SP is as above.
I connected this SP to a dataset in an SSRS report. As per my logic, a Portfolio contains many Programs and a Program contains many Projects.
When I selected the ALL value for the parameters Program and Project, I was unable to get output,
but when I select values for all 3 parameters I get output. I took default values for the parameters also,
so I commented out the WHERE condition in the SP as shown above:
--------where TSK.ProjectID=@Project-------------
Now I'm getting output when selecting the ALL value for the parameters.
But here the issue is performance: it takes 10 seconds to retrieve data for a single project when I execute the SP.
How can I create an index on the temp table in this SP, and how can I improve the query performance?
Please help.
Thanks in advance,
lucky
Didn't I provide you a solution in the other thread?
ALTER PROCEDURE [SPNAME]
@Portfolio INT,
@Program INT,
@Project INT
AS
BEGIN
--DECLARE @StartDate DATETIME
--DECLARE @EndDate DATETIME
--SET @StartDate = '11/01/2013'
--SET @EndDate = '02/28/2014'
IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
DROP TABLE #Dates
IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
DROP TABLE #DailyTasks
CREATE TABLE #Dates(WorkDate DATE)
--CREATE INDEX IDX_Dates ON #Dates(WorkDate)
;WITH Dates AS
(
SELECT (@StartDate) DateValue
UNION ALL
SELECT DateValue + 1
FROM Dates
WHERE DateValue + 1 <= @EndDate
)
INSERT INTO #Dates
SELECT DateValue
FROM Dates D
LEFT JOIN tb_Holidays H
ON H.HolidayOn = D.DateValue
AND H.OfficeID = 2
WHERE DATEPART(dw,DateValue) NOT IN (1,7)
AND H.UID IS NULL
OPTION(MAXRECURSION 0)
SELECT TSK.TaskID,
TR.ResourceID,
WC.WorkDayCount,
(TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
D.WorkDate,
TSK.ProjectID,
RES.ResourceName
INTO #DailyTasks
FROM Tasks TSK
INNER JOIN TasksResource TR
ON TSK.TaskID = TR.TaskID
INNER JOIN tb_Resource RES
ON TR.ResourceID=RES.UID
OUTER APPLY (SELECT COUNT(*) WorkDayCount
FROM #Dates
WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
INNER JOIN #Dates D
ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
WHERE (TSK.ProjectID = @Project OR @Project = -1)
SELECT D.ResourceID,
D.WorkDayCount,
SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
D.WorkDate,
T.TaskID,
D.ResourceName
FROM #DailyTasks D
OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
FROM #DailyTasks DA
WHERE D.WorkDate = DA.WorkDate
AND D.ResourceID = DA.ResourceID
FOR XML PATH('')) AS TaskID) T
LEFT JOIN tb_Project PRJ
ON D.ProjectID=PRJ.UID
INNER JOIN tb_Program PR
ON PRJ.ProgramID=PR.UID
INNER JOIN tb_Portfolio PF
ON PR.PortfolioID=PF.UID
WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
AND (@Program = -1 or PR.UID = @Program)
AND (@Project = -1 or PRJ.UID = @Project)
GROUP BY D.ResourceID,
D.WorkDate,
T.TaskID,
D.WorkDayCount,
D.ResourceName
HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
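The calendar-building step of the procedure above (the #Dates temp table) is a recursive CTE that expands a date range and then filters out weekends and holidays. The same pattern, minus the holiday join, can be sketched with SQLite from Python (the date range is arbitrary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# The recursive CTE expands the range day by day; strftime('%w') is 0 for
# Sunday and 6 for Saturday, mirroring DATEPART(dw, ...) NOT IN (1, 7).
rows = cur.execute("""
    WITH RECURSIVE dates(d) AS (
        SELECT '2013-11-01'
        UNION ALL
        SELECT date(d, '+1 day') FROM dates WHERE d < '2013-11-10'
    )
    SELECT d FROM dates
    WHERE strftime('%w', d) NOT IN ('0', '6')
""").fetchall()
print(rows)  # the six working days between 2013-11-01 and 2013-11-10
```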
-
How can I know if my query is using the index?
Hello...
How can I know whether my query is using the index of the table or not?
I'm using SET AUTOTRACE ON... but is there another way to do it?
thanks!
Alessandro Falanque.
Hi,
You can use Explain Plan to check whether your query is using the proper index or not. First you need to check whether PLAN_TABLE is installed in your database. If it is not there, then the script will be like this:
CREATE TABLE PLAN_TABLE (
STATEMENT_ID VARCHAR2 (30),
TIMESTAMP DATE,
REMARKS VARCHAR2 (80),
OPERATION VARCHAR2 (30),
OPTIONS VARCHAR2 (30),
OBJECT_NODE VARCHAR2 (128),
OBJECT_OWNER VARCHAR2 (30),
OBJECT_NAME VARCHAR2 (30),
OBJECT_INSTANCE NUMBER,
OBJECT_TYPE VARCHAR2 (30),
OPTIMIZER VARCHAR2 (255),
SEARCH_COLUMNS NUMBER,
ID NUMBER,
PARENT_ID NUMBER,
POSITION NUMBER,
COST NUMBER,
CARDINALITY NUMBER,
BYTES NUMBER,
OTHER_TAG VARCHAR2 (255),
PARTITION_START VARCHAR2 (255),
PARTITION_STOP VARCHAR2 (255),
PARTITION_ID NUMBER,
OTHER LONG,
DISTRIBUTION VARCHAR2 (30))
TABLESPACE SYSTEM NOLOGGING
PCTFREE 10
PCTUSED 40
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 10240
NEXT 10240
PCTINCREASE 50
MINEXTENTS 1
MAXEXTENTS 121
FREELISTS 1 FREELIST GROUPS 1 )
NOCACHE;
After that write the following command in the SQL prompt.
Explain plan for (Select statement);
Select level, SubStr( lpad(' ',2*(Level-1)) || operation || ' ' ||
object_name || ' ' || options || ' ' ||
decode(id, null , ' ', decode(position, null,' ', 'Cost = ' || position) ),1,100)
|| ' ' || nvl(other_tag, ' ') Operation
from PLAN_TABLE
start with id = 0
connect by
prior id = parent_id;
This will show how the query is being executed, which indexes it is using, and so on.
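Beyond reading the plan by eye (or with SET AUTOTRACE ON), the same check can be automated in a test. Here is a sketch with SQLite from Python, where the plan is queried programmatically (the table, index and predicate values are invented):

```python
import sqlite3

def uses_index(cur, sql, index_name):
    """Return True if EXPLAIN QUERY PLAN mentions the given index."""
    plan = cur.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    # The last column of each plan row is the human-readable detail text.
    return any(index_name in row[-1] for row in plan)

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (empno INTEGER, ename TEXT)")
cur.execute("CREATE INDEX emp_ename_idx ON emp (ename)")

print(uses_index(cur, "SELECT * FROM emp WHERE ename = 'KING'",
                 "emp_ename_idx"))
print(uses_index(cur, "SELECT * FROM emp WHERE empno = 7839",
                 "emp_ename_idx"))
```

Wiring a check like this into a regression suite catches plan changes before they reach production.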
Cheers.
Samujjwal Basu -
Improve data load performance using ABAP code
Hi all,
I want to improve my load performance using ABAP code; how do I do this? If I write ABAP code in SE38, how can I call it on the BW side? Sample code to improve load performance would be useful. Please guide me.
There are several points that can improve the performance of your ABAP code:
1. Avoid the SELECT...ENDSELECT construct; use SELECT ... INTO TABLE instead.
2. Use a WHERE clause in your SELECT statement to restrict the volume of data retrieved.
3. Use FOR ALL ENTRIES in your SELECT statement to retrieve all matching records in one shot.
4. Avoid nested SELECTs and SELECT statements within LOOPs.
5. Avoid INTO CORRESPONDING FIELDS OF; use INTO TABLE instead.
6. Avoid SELECT * and select only the fields you need from the table.
7. Avoid executing the same SELECT multiple times in the program.
8. Avoid nested loops when working with large internal tables.
9. When using READ TABLE on a sorted table, add BINARY SEARCH to speed up the lookup.
10. Use FIELD-SYMBOLS instead of a work area when an internal table has many entries (say, more than 200) and some fields are being modified.
11. Use MOVE with individual field moves instead of MOVE-CORRESPONDING.
12. Use CASE instead of IF/ENDIF whenever possible.
13. Transaction SE30 can be used to measure the application's runtime performance.
14. Transaction ST05 can be used to analyse the SQL trace and measure the performance of the program's SELECT statements.
15. Start routines can be used when a transformation is needed at the data-package level. Field/individual routines can be used for a simple formula or calculation. End routines are used to populate data not present in the source but present in the target.
16. Always use a WHERE clause with DELETE statements. To delete records for multiple values, use SELECT-OPTIONS.
17. Always use IS INITIAL instead of comparing with '', because the initial value depends on the type: space for a character field, but 0 for an integer.
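To see why point 9 matters, here is a small, hedged sketch of the idea in Python (not ABAP): READ TABLE ... BINARY SEARCH corresponds to a bisection lookup on a sorted table, which is O(log n) instead of the O(n) linear scan of a plain READ TABLE. The sample data is invented for illustration:

```python
import bisect

# A sorted "internal table" of document numbers (hypothetical sample data).
docs = sorted([1003, 1001, 1010, 1007, 1005])

def read_table_binary(table, key):
    """Analogue of READ TABLE ... BINARY SEARCH: O(log n) on a sorted table."""
    i = bisect.bisect_left(table, key)
    if i < len(table) and table[i] == key:
        return i
    return -1  # roughly, sy-subrc <> 0 in ABAP terms

print(read_table_binary(docs, 1007))  # position of 1007 in the sorted table
print(read_table_binary(docs, 9999))  # -1: not found
```

The speedup only holds if the table really is sorted by the key; BINARY SEARCH on an unsorted table silently returns wrong results, in ABAP as in this sketch.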
Hope it helps. -
How to improve query performance at the report level and designer level
How can I improve query performance at the report level and the designer level?
Please let me know in detail.
First, it all depends on the design of the database, the universe, and the report.
At the universe level, check your contexts carefully to get the optimal performance out of the universe, and also your joins; keeping joins on key fields will give you the best performance.
At the report level, try to make the reports as dynamic as you can (parameters and so on),
and when you create a parameter, try to match it with a key field in the database.
Good luck,
Amr -
How to write a SQL Query without using group by clause
Hi,
Can anyone help me find out whether there is an approach to building a SQL query without using a GROUP BY clause?
Please cite an example if there is one.
Regards
I hope this example can illuminate danepc's problem:
CREATE or replace TYPE MY_ARRAY AS TABLE OF INTEGER
CREATE OR REPLACE FUNCTION GET_ARR return my_array
as
arr my_array;
begin
arr := my_array();
for i in 1..10 loop
arr.extend;
arr(i) := i mod 7;
end loop;
return arr;
end;
select column_value
from table(get_arr)
order by column_value;
select column_value,count(*) occurences
from table(get_arr)
group by column_value
order by column_value;
And the output should be something like this:
SQL> CREATE or replace TYPE MY_ARRAY AS TABLE OF INTEGER
2 /
Type created.
SQL>
SQL> CREATE OR REPLACE FUNCTION GET_ARR return my_array
2 as
3 arr my_array;
4 begin
5 arr := my_array();
6 for i in 1..10 loop
7 arr.extend;
8 arr(i) := i mod 7;
9 end loop;
10 return arr;
11 end;
12 /
Function created.
SQL>
SQL>
SQL> select column_value
2 from table(get_arr)
3 order by column_value;
COLUMN_VALUE
0
1
1
2
2
3
3
4
5
6
10 rows selected.
SQL>
SQL> select column_value,count(*) occurences
2 from table(get_arr)
3 group by column_value
4 order by column_value;
COLUMN_VALUE OCCURENCES
0 1
1 2
2 2
3 2
4 1
5 1
6 1
7 rows selected.
SQL>
Bye, Alessandro
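The aggregation in the example above (counting how often each value of i mod 7 appears for i = 1..10) can be cross-checked with a few lines of Python; collections.Counter plays the role of SELECT column_value, COUNT(*) ... GROUP BY:

```python
from collections import Counter

# Reproduce the GET_ARR values: i mod 7 for i = 1..10, as in the PL/SQL function.
values = [i % 7 for i in range(1, 11)]

# Counter gives the same result as the GROUP BY query in the transcript.
occurrences = dict(Counter(values))
print(sorted(occurrences.items()))
# Mirrors the GROUP BY output: 0, 4, 5, 6 appear once; 1, 2, 3 appear twice.
```

This is only a sanity check on the expected output, of course; the SQL question itself is about expressing that aggregation in the database.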
Why is this query not using the index?
check out this query:-
SELECT CUST_PO_NUMBER, HEADER_ID, ORDER_TYPE, PO_DATE
FROM TABLE1
WHERE STATUS = 'N'
and here's the explain plan:-
-------------------------------------------------------------------------------------
| Id | Operation         | Name   | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |        | 2735K |  140M | 81036  (2) |
|* 1 | TABLE ACCESS FULL | TABLE1 | 2735K |  140M | 81036  (2) |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("STATUS"='N')
There is already an index on this column, as is shown below:-
INDEX_NAME   INDEX_TYPE  UNIQUENESS  TABLE_NAME  COLUMN_NAME  COLUMN_POSITION
TABLE1_IDX2  NORMAL      NONUNIQUE   TABLE1      STATUS       1
TABLE1_IDX   NORMAL      NONUNIQUE   TABLE1      HEADER_ID    1
So why is this query not using the index on the 'STATUS' Column?
I've already tried using optimizer hints and regathering the stats on the table, but the execution plan still remains the same, i.e. it still uses a FTS.
I have tried this command also:-
exec dbms_stats.gather_table_stats('GECS','GEPS_CS_SALES_ORDER_HEADER',method_opt=>'for all indexed columns size auto',cascade=>true,degree=>4);
inspite of this, the query is still using a full table scan.
The table has around 5.5 million (55 lakh) rows, across 60 columns. Because of the FTS, the query is taking a long time to execute. How do I make it use the index?
Please help.
Edited by: user10047779 on Mar 16, 2010 6:55 AM
If the cardinality is really as skewed as that, you may want to look at putting a histogram on the column (it sounds like one would be in order, and that you don't have one).
create table skewed_a_lot
as
select
  case when mod(level, 1000) = 0 then 'N' else 'Y' end as flag,
  level as col1
from dual connect by level <= 1000000;

create index skewed_a_lot_i01 on skewed_a_lot (flag);

exec dbms_stats.gather_table_stats(user, 'SKEWED_A_LOT', cascade => true, method_opt => 'for all indexed columns size auto');

is an example.
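The skew in that example (one 'N' per 1000 rows) can also be reproduced outside Oracle. As a hedged sketch using Python's sqlite3 module (not Oracle, and with invented names mirroring the example), an index on such a column pays off exactly for the rare value, which is the case the histogram is meant to expose to the optimizer:

```python
import sqlite3
from collections import Counter

# Reproduce the skew from the example: 1 row in 1000 has flag = 'N'.
rows = [("N" if i % 1000 == 0 else "Y", i) for i in range(1, 100001)]
counts = Counter(flag for flag, _ in rows)
print(counts["N"], counts["Y"])  # heavily skewed: 100 vs 99900

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE skewed_a_lot (flag TEXT, col1 INTEGER)")
conn.executemany("INSERT INTO skewed_a_lot VALUES (?, ?)", rows)
conn.execute("CREATE INDEX skewed_a_lot_i01 ON skewed_a_lot (flag)")

# For the rare value, an index lookup beats reading 100,000 rows;
# the query plan names the index it will use.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT col1 FROM skewed_a_lot WHERE flag = 'N'"
).fetchall()
print(plan[0][-1])
```

For the common value 'Y' (99.9% of the rows) a full scan is genuinely cheaper, which is why, without a histogram, an optimizer that assumes uniform distribution can reasonably refuse the index for both values.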
Hi ABAPers,
I am creating a secondary index for the database table MSEG. How do I write a SELECT query that uses the secondary index I have created?
Regards,
Ramya
How to create a secondary index in tables:
https://forums.sdn.sap.com/click.jspa?searchID=933015&messageID=2971112
Guidelines to create secondary index
https://forums.sdn.sap.com/click.jspa?searchID=933015&messageID=2009801
Secondary index;
http://help.sap.com/saphelp_47x200/helpdata/en/cf/21eb2d446011d189700000e8322d00/content.htm
This will definitely help you. Don't forget to award points if you found it helpful.