Reg Usage of Index
Hi all,
Can anybody tell me how to use database table indexes in programs?
Regards,
Kevin Nick.
Hi,
Just see these links:
Re: select statement : Secondary index
how to use secondary index
http://help.sap.com/saphelp_nw04s/helpdata/en/cf/21eb2d446011d189700000e8322d00/content.htm
http://www.sap-img.com/abap/quick-note-on-design-of-secondary-database-indexes-and-logical-databases.htm
SELECT QUERY BASED ON SECONDARY INDEX
Reward points if useful.
Suresh......
Similar Messages
-
Basic questions on usage of indexes
Hi All,
I have the following questions on the usage of indexes. I would be glad if you could answer the same.
1) Will using two different indexes for a comparison reduce performance? For example, if a query compares columns of two different tables, where one column has a non-unique index and the other a unique index, will the query perform poorly? If so, does that mean that for optimum performance we need to compare columns that have the same kind of index?
2) Does deleting records from a table remove the index entries for those records? If not, is the space occupied by those index entries overwritten by new entries, or do the entries simply remain without mapping to any row?
3) Does the order of conditions in the WHERE clause matter, i.e. should the columns that have an index come first, so that the columns without an index come last in the WHERE clause?
4) Are indexes used optimally when we compare in the WHERE clause using LIKE, or only when we compare using =, >, <, <= or >=?
5) If I have four columns C1, C2, C3 and C4 and create a composite index on all four in that order (C1, C2, C3, C4), but then write a query using only C2, C3 and C4 (excluding C1), is that a correct usage of the composite index? If not, why?
6) If I have three columns C1, C2 and C3 with a composite index on all three, but a query uses only C1 and C2, or only C1 and C3, will the index be used optimally? If not, does it make sense to create two composite indexes, (C1, C2) and (C2, C3), to get optimum index usage?
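Questions 5 and 6 are easy to check empirically on most databases. Below is a rough sketch using SQLite as a stand-in (the table and column names are made up for illustration), showing the leading-column rule with EXPLAIN QUERY PLAN: the composite index can be seeked only when the leading column C1 appears in the WHERE clause.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (c1 INT, c2 INT, c3 INT, c4 INT)")
conn.execute("CREATE INDEX idx_comp ON t (c1, c2, c3, c4)")

def seeks_index(where_clause: str) -> bool:
    """True if the optimizer seeks (rather than scans) via idx_comp for this WHERE clause."""
    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT c1 FROM t WHERE " + where_clause
    ).fetchall()
    # The last column of each plan row is the human-readable detail string.
    return any(d.startswith("SEARCH") and "idx_comp" in d for *_, d in plan)

print(seeks_index("c1 = 1 AND c2 = 2 AND c3 = 3"))  # leading column present
print(seeks_index("c2 = 2 AND c3 = 3 AND c4 = 4"))  # leading column missing (question 5)
print(seeks_index("c1 = 1 AND c3 = 3"))             # partial prefix: c1 is still usable (question 6)
```

The exact behavior differs per database and optimizer version, but the leading-column principle is the same on Oracle B-tree indexes.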
Thanks in advance.
Most of your queries are answered (directly or indirectly) on
http://richardfoote.wordpress.com/
For the questions which remain unanswered, I am afraid you need to check them yourself, as you have access to your environment. We can only give you hints here. -
Hi All,
Could anybody please let me know about the creation and maintenance of indexes?
Any screenshots or a step-by-step procedure would help.
It's quite urgent.
Regards,
abc xyz
Hi,
1. In transaction SE11, display your table.
2. Press the 'Indexes...' button in the toolbar; it will ask whether you want to create a new one.
3. Give the index a name.
4. Give a short text description.
5. In the table at the bottom, enter the field names on which you want the index.
6. Save and activate.
Check this:
http://www.oreilly.com/catalog/sapadm/chapter/ch01.html
To create Secondary Index --> http://help.sap.com/saphelp_nw2004s/helpdata/en/cf/21eb47446011d189700000e8322d00/content.htm
Also check the links below.
http://help.sap.com/saphelp_erp2005/helpdata/en/1c/252640632cec01e10000000a155106/frameset.htm
http://help.sap.com/saphelp_erp2005/helpdata/en/c7/55833c4f3e092de10000000a114027/frameset.htm.
Regards,
Sesh -
Dear Experts,
I am using the FM "SO_NEW_DOCUMENT_ATT_SEND_API1" to send PDF attachment data via Outlook mail. Here I need to pass commit_work = 'X'. It works fine when I execute in dialog mode / foreground.
But when I execute in background / batch mode, I get a short dump: "Pre-commit check required".
When I do not set commit_work = 'X', I get the message "Still no entry in queue" in transaction SOST.
Is there any other way I can solve this? Kindly help me resolve the issue in background.
Any help would be appreciated.
Regards,
Ramesh Manoharan
Hi,
Kindly find below the code that dumps when run in background:
LOOP AT lit_pdf INTO lwa_pdf.
  TRANSLATE lwa_pdf USING '~'.
  CONCATENATE lfd_buffer lwa_pdf INTO lfd_buffer.
ENDLOOP.

TRANSLATE lfd_buffer USING '~'.

REFRESH: lit_record.
DO.
  lwa_record = lfd_buffer.
  APPEND lwa_record TO lit_record.
  CLEAR lwa_record.
  SHIFT lfd_buffer LEFT BY 255 PLACES.
  IF lfd_buffer IS INITIAL.
    EXIT.
  ENDIF.
ENDDO.

git_objbin[] = lit_record[].
REFRESH lit_record.

* Create message body title and description
gwa_objtxt = lfd_pspid.
APPEND gwa_objtxt TO git_objtxt.
CLEAR gwa_objtxt.

CLEAR: gwa_doc_chng, lfd_lines_txt, lfd_lines_bin.
gwa_doc_chng-obj_name   = 'Project ID'.
gwa_doc_chng-expiry_dat = sy-datum + 10.
gwa_doc_chng-obj_descr  = 'Project ID'.
gwa_doc_chng-sensitivty = 'F'.
gwa_doc_chng-proc_type  = 'R'.
gwa_doc_chng-proc_name  = sy-repid.

DESCRIBE TABLE git_objtxt LINES lfd_lines_txt.
READ TABLE git_objtxt INTO gwa_objtxt INDEX lfd_lines_txt.
IF sy-subrc IS INITIAL.
  gwa_doc_chng-doc_size = ( lfd_lines_txt - 1 ) * 255 + STRLEN( gwa_objtxt ).
ENDIF.

* Main text
CLEAR gwa_objpack.
gwa_objpack-transf_bin = ' '.
gwa_objpack-head_start = 1.
gwa_objpack-head_num   = 0.
gwa_objpack-body_start = 1.
gwa_objpack-body_num   = lfd_lines_txt.
gwa_objpack-doc_type   = 'RAW'.
APPEND gwa_objpack TO git_objpack.
CLEAR gwa_objpack.

* Attachment (PDF)
gwa_objpack-transf_bin = 'X'.
gwa_objpack-head_start = 1.
gwa_objpack-head_num   = 1.
gwa_objpack-body_start = 1.
DESCRIBE TABLE git_objbin LINES lfd_lines_bin.
READ TABLE git_objbin INTO gwa_objbin INDEX lfd_lines_bin.
IF lfd_lines_bin > 0.
  gwa_objpack-doc_size = lfd_lines_bin * 255.
  gwa_objpack-body_num = lfd_lines_bin.
ENDIF.
gwa_objpack-doc_type  = 'PDF'.
gwa_objpack-obj_name  = 'Project ID'.
gwa_objpack-obj_descr = 'Project_ID.PDF'.
APPEND gwa_objpack TO git_objpack.
CLEAR gwa_objpack.

READ TABLE git_usr21 INTO gwa_usr21
     WITH KEY bname = lwa_usr_spool-bname
     BINARY SEARCH.
IF sy-subrc IS INITIAL.
  READ TABLE git_adr6 INTO gwa_adr6
       WITH KEY addrnumber = gwa_usr21-addrnumber
                persnumber = gwa_usr21-persnumber
       BINARY SEARCH.
  IF sy-subrc IS INITIAL.
    gwa_reclist-receiver = gwa_adr6-smtp_addr.
    gwa_reclist-rec_type = 'U'.
    gwa_reclist-com_type = 'INT'.
    APPEND gwa_reclist TO git_reclist.
    CLEAR gwa_reclist.
  ENDIF.
ENDIF.

IF NOT git_reclist[] IS INITIAL.
* SAPoffice: send new document with attachments using RFC
  CALL FUNCTION 'SO_NEW_DOCUMENT_ATT_SEND_API1'
    EXPORTING
      document_data              = gwa_doc_chng
      put_in_outbox              = 'X'
      commit_work                = 'X'
    IMPORTING
      new_object_id              = lwa_obj_id
    TABLES
      packing_list               = git_objpack
      contents_bin               = git_objbin
      contents_txt               = git_objtxt
      receivers                  = git_reclist
    EXCEPTIONS
      too_many_receivers         = 1
      document_not_sent          = 2
      document_type_not_exist    = 3
      operation_no_authorization = 4
      parameter_error            = 5
      x_error                    = 6
      enqueue_error              = 7
      OTHERS                     = 8.

  REFRESH: git_objpack, git_objbin, git_objtxt, git_reclist,
           lit_pdf.
  CLEAR: gwa_doc_chng, lfd_buffer.
ENDIF.
Any help to resolve the issue?
Regards,
Ramesh Manoharan
Edited by: ramesh.manoharan on Apr 7, 2010 12:39 PM -
Basic Question - Update - Usage of index
Gurus,
I have a basic question. As I understand it, an index speeds up data retrieval when we are selecting data. For DML operations (especially UPDATE), will an index also speed up the operation when the indexed column appears in the WHERE condition?
Regards
Edited by: Sarma12 on Apr 17, 2012 5:59 AM
Have you tried setting up a test scenario? For example:
SQL> CREATE TABLE test AS SELECT 1 num FROM DUAL CONNECT BY LEVEL <= 1000;
Table created.
SQL> UPDATE test SET NUM = 99 WHERE ROWNUM = 1;
1 row updated.
SQL> COMMIT;
Commit complete.
SQL> UPDATE /*+gather_plan_statistics*/
2 test
3 SET num = 2
4 WHERE num = 99
5 ;
1 row updated.
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(null,null,'ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
SQL_ID 96wt4ddwy0sa5, child number 0
UPDATE /*+gather_plan_statistics*/ test SET num = 2 WHERE
num = 99
Plan hash value: 3859524075
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
| 0 | UPDATE STATEMENT | | 1 | | 0 |00:00:00.01 | 7 |
| 1 | UPDATE | TEST | 1 | | 0 |00:00:00.01 | 7 |
|* 2 | TABLE ACCESS FULL| TEST | 1 | 1 | 1 |00:00:00.01 | 4 |
Predicate Information (identified by operation id):
2 - filter("NUM"=99)
Note
- dynamic sampling used for this statement (level=2)
24 rows selected.
SQL> rollback;
Rollback complete.
SQL> CREATE INDEX test_x1 ON test(num);
Index created.
SQL> UPDATE /*+gather_plan_statistics*/
2 test
3 SET num = 2
4 WHERE num = 99
5 ;
1 row updated.
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(null,null,'ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
SQL_ID 96wt4ddwy0sa5, child number 0
UPDATE /*+gather_plan_statistics*/ test SET num = 2 WHERE
num = 99
Plan hash value: 734435536
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
| 0 | UPDATE STATEMENT | | 1 | | 0 |00:00:00.01 | 9 |
| 1 | UPDATE | TEST | 1 | | 0 |00:00:00.01 | 9 |
|* 2 | INDEX RANGE SCAN| TEST_X1 | 1 | 1 | 1 |00:00:00.01 | 2 |
Predicate Information (identified by operation id):
2 - access("NUM"=99)
Note
- dynamic sampling used for this statement (level=2)
24 rows selected.
So yes, an index may be used in DML. -
Hi,
A question about secondary indexes:
Suppose I have a table and I define fields f3, f4, f5 as secondary index 1; f6, f7, f8 as secondary index 2; and f9, f10, f11 as secondary index 3, each used for selections in a different program.
Here I have some doubts:
Will the table be affected performance-wise by doing this? If yes, how? And what would then be the solution for using those fields?
Please suggest.
thanks
Points will be awarded.
Hi everyone,
I agree with Rob, in the sense that I would not create any secondary indexes on standard tables if possible (unless recommended by an SAP note).
On Z tables, the performance of SELECT statements on those fields certainly improves. But every INSERT/UPDATE/DELETE on the table takes longer, because the database has to write the data to the table and also update every index. So be careful, and check the performance on your test systems before transporting to the productive environment.
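The trade-off can be seen in miniature with any database. Here is a rough sketch using SQLite as a stand-in (the table and field names are made up): each secondary index serves its own SELECT, but every INSERT must maintain all of them, so write time grows with the number of indexes.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ztab (f1 INTEGER PRIMARY KEY, "
    "f3 INT, f4 INT, f5 INT, f6 INT, f7 INT, f8 INT)"
)
rows = [(i, i % 7, i % 11, i % 13, i % 17, i % 19, i % 23) for i in range(20000)]

# Time the inserts before any secondary index exists.
t0 = time.perf_counter()
conn.executemany("INSERT INTO ztab VALUES (?, ?, ?, ?, ?, ?, ?)", rows)
plain = time.perf_counter() - t0

# Add two secondary indexes and time the same inserts again.
conn.execute("DELETE FROM ztab")
conn.execute("CREATE INDEX idx1 ON ztab (f3, f4, f5)")
conn.execute("CREATE INDEX idx2 ON ztab (f6, f7, f8)")
t0 = time.perf_counter()
conn.executemany("INSERT INTO ztab VALUES (?, ?, ?, ?, ?, ?, ?)", rows)
indexed = time.perf_counter() - t0

print(f"insert without secondary indexes: {plain:.4f}s, with two: {indexed:.4f}s")

# The SELECT side of the trade-off: the matching index is used for the lookup.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT f1 FROM ztab WHERE f3 = 1 AND f4 = 2"
).fetchall()
print(plan[0][-1])
```

The absolute numbers are meaningless; the point is the ratio between the two insert timings, which is exactly the maintenance overhead the post above describes.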
I hope it helps. Best regards,
Alvaro -
Hi,
I have the following query
SELECT *
FROM
(SELECT
/*+ FIRST_ROWS(200) */
a.*,
ROWNUM rnum
FROM
(SELECT DISTINCT t0.ENTITYID AS a1,
t0.ENTITYCLASS AS a2,
t0.STARTDATE AS a3,
t0.NATIVEEMSSERVICESTATE AS a4,
t0.ENDDATE AS a5,
t0.LASTMODIFIEDUSER AS a6,
t0.ID AS a7,
t0.PARTITION AS a8,
t0.DESCRIPTION AS a9,
t0.NAME AS a10,
t0.PERMISSIONS AS a11,
t0.CREATEDDATE AS a12,
t0.ACTIVITY AS a13,
t0.ENTITYVERSION AS a14,
t0.NOSPEC AS a15,
t0.NATIVEEMSNAME AS a16,
t0.ADMINSTATE AS a17,
t0.OWNER AS a18,
t0.LASTMODIFIEDDATE AS a19,
t0.OBJECTSTATE AS a20,
t0.NATIVEEMSADMINSERVICESTATE AS a21,
t0.PHYSICALLOCATION AS a22,
t0.CREATEDUSER AS a23,
t0.SPECIFICATION AS a24 FROM LogicalDevice t0,LogicalDeviceConsumer t12
WHERE ((( NOT EXISTS
(SELECT 1
FROM LogicalDeviceCondition t4,
LogicalDeviceConsumer t3,
LogicalDeviceConsumer t2,
LogicalDevice t1
WHERE ((((t3.ENTITYID = t2.ENTITYID)
AND (t2.ENDDATE > SYSDATE))
AND (t4.TYPE = 'BLOCKED'))
AND (((t4.ENTITYID = t3.ENTITYID)
AND (t3.ENTITYCLASS = 'LogicalDeviceConditionDAO'))
AND (t1.ENTITYID = t2.LOGICALDEVICE)))
AND NOT EXISTS
(SELECT 1
FROM LogicalDeviceConsumer t7,
LogicalDeviceConsumer t6,
LogicalDevice t5,
LogicalDeviceReservation t8
WHERE ((((t7.ENTITYID = t6.ENTITYID)
AND (t6.ENDDATE > SYSDATE))
AND NOT ((t8.RESERVATIONTYPE IS NULL)))
AND (((t8.ENTITYID = t7.ENTITYID)
AND (t7.ENTITYCLASS = 'LogicalDeviceReservationDAO'))
AND (t5.ENTITYID = t6.LOGICALDEVICE)))
AND ((t0.OBJECTSTATE = 'ACTIVE')
OR (t0.OBJECTSTATE = 'INACTIVE')))
AND (t0.ENTITYCLASS = 'LogicalDeviceDAO')
AND (t0.ENTITYID = t12.LOGICALDEVICE)
AND (((t12.ADMINSTATE = 'ASSIGNED')
OR (t12.ADMINSTATE = 'PENDING_ASSIGN'))
OR (t12.ADMINSTATE = 'PENDING_UNASSIGN')))
ORDER BY t0.ID ASC
) a
WHERE ROWNUM <= 200
)
WHERE rnum > 0
It takes 500 seconds to execute on a high-volume database.
Here is the explain plan of that query
Plan hash value: 2643227695
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 200 | 326K| | 5119K (1)| 00:02:56 | | |
|* 1 | VIEW | | 200 | 326K| | 5119K (1)| 00:02:56 | | |
|* 2 | COUNT STOPKEY | | | | | | | | |
| 3 | VIEW | | 49M| 76G| | 5119K (1)| 00:02:56 | | |
|* 4 | SORT ORDER BY STOPKEY | | 49M| 9173M| 11G| 5119K (1)| 00:02:56 | | |
| 5 | HASH UNIQUE | | 49M| 9173M| 11G| 3018K (1)| 00:01:44 | | |
|* 6 | FILTER | | | | | | | | |
|* 7 | HASH JOIN | | 49M| 9173M| 1274M| 917K (1)| 00:00:32 | | |
| 8 | PART JOIN FILTER CREATE | :BF0000 | 49M| 708M| | 67818 (1)| 00:00:03 | | |
|* 9 | INDEX FAST FULL SCAN | POONAM_ADMNSTT_LD_EID | 49M| 708M| | 67818 (1)| 00:00:03 | | |
| 10 | PARTITION HASH JOIN-FILTER | | 49M| 8535M| | 334K (1)| 00:00:12 |:BF0000|:BF0000|
|* 11 | TABLE ACCESS FULL | LOGICALDEVICE | 49M| 8535M| | 334K (1)| 00:00:12 |:BF0000|:BF0000|
| 12 | NESTED LOOPS | | 1 | 113 | | 8 (0)| 00:00:01 | | |
| 13 | NESTED LOOPS | | 1 | 107 | | 7 (0)| 00:00:01 | | |
| 14 | NESTED LOOPS | | 1 | 83 | | 4 (0)| 00:00:01 | | |
|* 15 | TABLE ACCESS FULL | LOGICALDEVICERESERVATION | 1 | 40 | | 2 (0)| 00:00:01 | | |
| 16 | PARTITION HASH ITERATOR | | 1 | 43 | | 2 (0)| 00:00:01 | KEY | KEY |
|* 17 | TABLE ACCESS BY GLOBAL INDEX ROWID| LOGICALDEVICECONSUMER | 1 | 43 | | 2 (0)| 00:00:01 | ROWID | ROWID |
|* 18 | INDEX RANGE SCAN | IDX_LDCNSM_EID | 1 | | | 2 (0)| 00:00:01 | KEY | KEY |
| 19 | PARTITION HASH ITERATOR | | 1 | 24 | | 3 (0)| 00:00:01 | KEY | KEY |
|* 20 | TABLE ACCESS BY GLOBAL INDEX ROWID | LOGICALDEVICECONSUMER | 1 | 24 | | 3 (0)| 00:00:01 | ROWID | ROWID |
|* 21 | INDEX RANGE SCAN | IDX_LDCNSM_EID | 1 | | | 2 (0)| 00:00:01 | KEY | KEY |
|* 22 | INDEX UNIQUE SCAN | SYS_C0020125 | 1 | 6 | | 1 (0)| 00:00:01 | | |
| 23 | NESTED LOOPS | | 1 | 113 | | 6 (0)| 00:00:01 | | |
| 24 | NESTED LOOPS | | 1 | 107 | | 5 (0)| 00:00:01 | | |
| 25 | NESTED LOOPS | | 1 | 83 | | 2 (0)| 00:00:01 | | |
|* 26 | INDEX RANGE SCAN | RAMA_LDC_TYPE_EID | 1 | 40 | | 0 (0)| 00:00:01 | | |
| 27 | PARTITION HASH ITERATOR | | 1 | 43 | | 2 (0)| 00:00:01 | KEY | KEY |
|* 28 | TABLE ACCESS BY GLOBAL INDEX ROWID| LOGICALDEVICECONSUMER | 1 | 43 | | 2 (0)| 00:00:01 | ROWID | ROWID |
|* 29 | INDEX RANGE SCAN | IDX_LDCNSM_EID | 1 | | | 2 (0)| 00:00:01 | KEY | KEY |
| 30 | PARTITION HASH ITERATOR | | 1 | 24 | | 3 (0)| 00:00:01 | KEY | KEY |
|* 31 | TABLE ACCESS BY GLOBAL INDEX ROWID | LOGICALDEVICECONSUMER | 1 | 24 | | 3 (0)| 00:00:01 | ROWID | ROWID |
|* 32 | INDEX RANGE SCAN | IDX_LDCNSM_EID | 1 | | | 2 (0)| 00:00:01 | KEY | KEY |
|* 33 | INDEX UNIQUE SCAN | SYS_C0020125 | 1 | 6 | | 1 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
1 - filter("RNUM">0)
2 - filter(ROWNUM<=200)
4 - filter(ROWNUM<=200)
6 - filter( NOT EXISTS (SELECT 0 FROM "LOGICALDEVICERESERVATION" "T8","LOGICALDEVICE" "T5","LOGICALDEVICECONSUMER"
"T6","LOGICALDEVICECONSUMER" "T7" WHERE "T8"."ENTITYID"="T7"."ENTITYID" AND "T7"."ENTITYCLASS"='LogicalDeviceReservationDAO' AND
"T7"."ENTITYID"="T6"."ENTITYID" AND "T6"."ENDDATE">SYSDATE@! AND "T5"."ENTITYID"="T6"."LOGICALDEVICE" AND "T8"."RESERVATIONTYPE" IS NOT
NULL) AND NOT EXISTS (SELECT 0 FROM "LOGICALDEVICE" "T1","LOGICALDEVICECONSUMER" "T2","LOGICALDEVICECONSUMER"
"T3","LOGICALDEVICECONDITION" "T4" WHERE "T4"."TYPE"='BLOCKED' AND "T4"."ENTITYID"="T3"."ENTITYID" AND
"T3"."ENTITYCLASS"='LogicalDeviceConditionDAO' AND "T3"."ENTITYID"="T2"."ENTITYID" AND "T2"."ENDDATE">SYSDATE@! AND
"T1"."ENTITYID"="T2"."LOGICALDEVICE"))
7 - access("T0"."ENTITYID"="T12"."LOGICALDEVICE")
9 - filter("T12"."ADMINSTATE"='ASSIGNED' OR "T12"."ADMINSTATE"='PENDING_ASSIGN' OR "T12"."ADMINSTATE"='PENDING_UNASSIGN')
11 - filter("T0"."ENTITYCLASS"='LogicalDeviceDAO' AND ("T0"."OBJECTSTATE"='ACTIVE' OR "T0"."OBJECTSTATE"='INACTIVE'))
15 - filter("T8"."RESERVATIONTYPE" IS NOT NULL)
17 - filter("T7"."ENTITYCLASS"='LogicalDeviceReservationDAO')
18 - access("T8"."ENTITYID"="T7"."ENTITYID")
20 - filter("T6"."ENDDATE">SYSDATE@!)
21 - access("T7"."ENTITYID"="T6"."ENTITYID")
22 - access("T5"."ENTITYID"="T6"."LOGICALDEVICE")
26 - access("T4"."TYPE"='BLOCKED')
28 - filter("T3"."ENTITYCLASS"='LogicalDeviceConditionDAO')
29 - access("T4"."ENTITYID"="T3"."ENTITYID")
31 - filter("T2"."ENDDATE">SYSDATE@!)
32 - access("T3"."ENTITYID"="T2"."ENTITYID")
33 - access("T1"."ENTITYID"="T2"."LOGICALDEVICE")
I changed the query as below
SELECT *
FROM
(SELECT
/*+ FIRST_ROWS(200) */
a.*,
ROWNUM rnum
FROM
(SELECT t0.entityId FROM LogicalDevice t0,LogicalDeviceConsumer t12
WHERE ((( NOT EXISTS
(SELECT 1
FROM LogicalDeviceCondition t4,
LogicalDeviceConsumer t3,
LogicalDeviceConsumer t2,
LogicalDevice t1
WHERE ((((t3.ENTITYID = t2.ENTITYID)
AND (t2.ENDDATE > SYSDATE))
AND (t4.TYPE = 'BLOCKED'))
AND (((t4.ENTITYID = t3.ENTITYID)
AND (t3.ENTITYCLASS = 'LogicalDeviceConditionDAO'))
AND (t1.ENTITYID = t2.LOGICALDEVICE)))
AND NOT EXISTS
(SELECT 1
FROM LogicalDeviceConsumer t7,
LogicalDeviceConsumer t6,
LogicalDevice t5,
LogicalDeviceReservation t8
WHERE ((((t7.ENTITYID = t6.ENTITYID)
AND (t6.ENDDATE > SYSDATE))
AND NOT ((t8.RESERVATIONTYPE IS NULL)))
AND (((t8.ENTITYID = t7.ENTITYID)
AND (t7.ENTITYCLASS = 'LogicalDeviceReservationDAO'))
AND (t5.ENTITYID = t6.LOGICALDEVICE)))
AND ((t0.OBJECTSTATE = 'ACTIVE')
OR (t0.OBJECTSTATE = 'INACTIVE')))
AND (t0.ENTITYCLASS = 'LogicalDeviceDAO')
AND (t0.ENTITYID = t12.LOGICALDEVICE)
AND (((t12.ADMINSTATE = 'ASSIGNED')
OR (t12.ADMINSTATE = 'PENDING_ASSIGN'))
OR (t12.ADMINSTATE = 'PENDING_UNASSIGN')))
ORDER BY t0.ID ASC
) a
WHERE ROWNUM <= 200
)
WHERE rnum > 0
This query takes less than 3 seconds to execute.
Explain plan for this query is
Plan hash value: 337125913
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 200 | 5200 | 1034 (0)| 00:00:01 | | |
|* 1 | VIEW | | 200 | 5200 | 1034 (0)| 00:00:01 | | |
|* 2 | COUNT STOPKEY | | | | | | | |
| 3 | VIEW | | 202 | 2626 | 1034 (0)| 00:00:01 | | |
|* 4 | FILTER | | | | | | | |
| 5 | NESTED LOOPS | | | | | | | |
| 6 | NESTED LOOPS | | 202 | 11312 | 1020 (0)| 00:00:01 | | |
|* 7 | TABLE ACCESS BY GLOBAL INDEX ROWID | LOGICALDEVICE | 49M| 1955M| 209 (0)| 00:00:01 | ROWID | ROWID |
|* 8 | INDEX RANGE SCAN | IDX_LD_CLS_ID_EID | 204 | | 5 (0)| 00:00:01 | | |
| 9 | PARTITION HASH ITERATOR | | 1 | | 2 (0)| 00:00:01 | KEY | KEY |
|* 10 | INDEX RANGE SCAN | IDX_1316713511427 | 1 | | 2 (0)| 00:00:01 | KEY | KEY |
|* 11 | TABLE ACCESS BY GLOBAL INDEX ROWID | LOGICALDEVICECONSUMER | 1 | 15 | 4 (0)| 00:00:01 | ROWID | ROWID |
| 12 | NESTED LOOPS | | 1 | 113 | 8 (0)| 00:00:01 | | |
| 13 | NESTED LOOPS | | 1 | 107 | 7 (0)| 00:00:01 | | |
| 14 | NESTED LOOPS | | 1 | 83 | 4 (0)| 00:00:01 | | |
|* 15 | TABLE ACCESS FULL | LOGICALDEVICERESERVATION | 1 | 40 | 2 (0)| 00:00:01 | | |
| 16 | PARTITION HASH ITERATOR | | 1 | 43 | 2 (0)| 00:00:01 | KEY | KEY |
|* 17 | TABLE ACCESS BY GLOBAL INDEX ROWID| LOGICALDEVICECONSUMER | 1 | 43 | 2 (0)| 00:00:01 | ROWID | ROWID |
|* 18 | INDEX RANGE SCAN | IDX_LDCNSM_EID | 1 | | 2 (0)| 00:00:01 | KEY | KEY |
| 19 | PARTITION HASH ITERATOR | | 1 | 24 | 3 (0)| 00:00:01 | KEY | KEY |
|* 20 | TABLE ACCESS BY GLOBAL INDEX ROWID | LOGICALDEVICECONSUMER | 1 | 24 | 3 (0)| 00:00:01 | ROWID | ROWID |
|* 21 | INDEX RANGE SCAN | IDX_LDCNSM_EID | 1 | | 2 (0)| 00:00:01 | KEY | KEY |
|* 22 | INDEX UNIQUE SCAN | SYS_C0020125 | 1 | 6 | 1 (0)| 00:00:01 | | |
| 23 | NESTED LOOPS | | 1 | 113 | 6 (0)| 00:00:01 | | |
| 24 | NESTED LOOPS | | 1 | 107 | 5 (0)| 00:00:01 | | |
| 25 | NESTED LOOPS | | 1 | 83 | 2 (0)| 00:00:01 | | |
|* 26 | INDEX RANGE SCAN | RAMA_LDC_TYPE_EID | 1 | 40 | 0 (0)| 00:00:01 | | |
| 27 | PARTITION HASH ITERATOR | | 1 | 43 | 2 (0)| 00:00:01 | KEY | KEY |
|* 28 | TABLE ACCESS BY GLOBAL INDEX ROWID| LOGICALDEVICECONSUMER | 1 | 43 | 2 (0)| 00:00:01 | ROWID | ROWID |
|* 29 | INDEX RANGE SCAN | IDX_LDCNSM_EID | 1 | | 2 (0)| 00:00:01 | KEY | KEY |
| 30 | PARTITION HASH ITERATOR | | 1 | 24 | 3 (0)| 00:00:01 | KEY | KEY |
|* 31 | TABLE ACCESS BY GLOBAL INDEX ROWID | LOGICALDEVICECONSUMER | 1 | 24 | 3 (0)| 00:00:01 | ROWID | ROWID |
|* 32 | INDEX RANGE SCAN | IDX_LDCNSM_EID | 1 | | 2 (0)| 00:00:01 | KEY | KEY |
|* 33 | INDEX UNIQUE SCAN | SYS_C0020125 | 1 | 6 | 1 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
1 - filter("RNUM">0)
2 - filter(ROWNUM<=200)
4 - filter( NOT EXISTS (SELECT 0 FROM "LOGICALDEVICERESERVATION" "T8","LOGICALDEVICE" "T5","LOGICALDEVICECONSUMER"
"T6","LOGICALDEVICECONSUMER" "T7" WHERE "T8"."ENTITYID"="T7"."ENTITYID" AND "T7"."ENTITYCLASS"='LogicalDeviceReservationDAO'
AND "T7"."ENTITYID"="T6"."ENTITYID" AND "T6"."ENDDATE">SYSDATE@! AND "T5"."ENTITYID"="T6"."LOGICALDEVICE" AND
"T8"."RESERVATIONTYPE" IS NOT NULL) AND NOT EXISTS (SELECT 0 FROM "LOGICALDEVICE" "T1","LOGICALDEVICECONSUMER"
"T2","LOGICALDEVICECONSUMER" "T3","LOGICALDEVICECONDITION" "T4" WHERE "T4"."TYPE"='BLOCKED' AND
"T4"."ENTITYID"="T3"."ENTITYID" AND "T3"."ENTITYCLASS"='LogicalDeviceConditionDAO' AND "T3"."ENTITYID"="T2"."ENTITYID" AND
"T2"."ENDDATE">SYSDATE@! AND "T1"."ENTITYID"="T2"."LOGICALDEVICE"))
7 - filter("T0"."OBJECTSTATE"='ACTIVE' OR "T0"."OBJECTSTATE"='INACTIVE')
8 - access("T0"."ENTITYCLASS"='LogicalDeviceDAO')
10 - access("T0"."ENTITYID"="T12"."LOGICALDEVICE")
11 - filter("T12"."ADMINSTATE"='ASSIGNED' OR "T12"."ADMINSTATE"='PENDING_ASSIGN' OR "T12"."ADMINSTATE"='PENDING_UNASSIGN')
15 - filter("T8"."RESERVATIONTYPE" IS NOT NULL)
17 - filter("T7"."ENTITYCLASS"='LogicalDeviceReservationDAO')
18 - access("T8"."ENTITYID"="T7"."ENTITYID")
20 - filter("T6"."ENDDATE">SYSDATE@!)
21 - access("T7"."ENTITYID"="T6"."ENTITYID")
22 - access("T5"."ENTITYID"="T6"."LOGICALDEVICE")
26 - access("T4"."TYPE"='BLOCKED')
28 - filter("T3"."ENTITYCLASS"='LogicalDeviceConditionDAO')
29 - access("T4"."ENTITYID"="T3"."ENTITYID")
31 - filter("T2"."ENDDATE">SYSDATE@!)
32 - access("T3"."ENTITYID"="T2"."ENTITYID")
33 - access("T1"."ENTITYID"="T2"."LOGICALDEVICE")
The only key difference is that the second SQL selects just the entityId instead of all the details. Could this alone make so much difference in performance? The explain plan now also uses indexes properly instead of full table scans. What is helping the optimizer here? Any thoughts?
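One way to see the effect of the narrower projection, sketched here with SQLite as a stand-in (simplified table and index, made-up column set): when every selected column is contained in the index, the query can be answered from the index alone (a "covering" access path), and the table itself never has to be visited.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE logicaldevice (entityid INT, entityclass TEXT, payload TEXT)"
)
conn.execute("CREATE INDEX idx_cls_eid ON logicaldevice (entityclass, entityid)")

def plan(sql: str) -> str:
    """Return the optimizer's plan detail strings, joined for readability."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return "; ".join(r[-1] for r in rows)

# Narrow projection: entityid and entityclass are both in the index.
narrow = plan(
    "SELECT entityid FROM logicaldevice WHERE entityclass = 'LogicalDeviceDAO'"
)
# Wide projection: 'payload' is not in the index, so each index hit
# also costs a lookup into the table.
wide = plan(
    "SELECT * FROM logicaldevice WHERE entityclass = 'LogicalDeviceDAO'"
)

print(narrow)  # plan mentions a COVERING INDEX
print(wide)    # plain index search, table rows still fetched
```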
Thanks,
Rama
Hi,
1) One thing to know about indexes: by default, all B-tree indexes are stored in ascending order of their key. So in your case there is no extra sorting work for the ORDER BY when the index is used.
2) The really expensive part is the DISTINCT. In the first query, after all the hash joining and filtering, 49 million records are sent to the HASH UNIQUE operation, and after the UNIQUE the number is still 49M, i.e. time is wasted sorting only to discover that nothing gets eliminated. That step costs 3018K, roughly 4 times the HASH JOIN, and uses 11 GB of temporary space, so a great deal of I/O is spent on the TEMP tablespace as well:
5 HASH UNIQUE 49M 9173M 11G 3018K (1) 00:01:44
* 6 FILTER
* 7 HASH JOIN 49M 9173M 1274M 917K (1) 00:00:32
3) The second plan shows very clearly that the index IDX_LD_CLS_ID_EID was used:
* 8 INDEX RANGE SCAN IDX_LD_CLS_ID_EID 204 5 (0) 00:00:01
for the purpose of
8 - access("T0"."ENTITYCLASS"='LogicalDeviceDAO') -
Reg: Exporting the index file
Hi,
Is it possible to export the indexes?
Hi,
Yes. For example, you can put all indexes in one tablespace and export only that tablespace.
What do you want to do?
--sgc -
Reg: statistics on indexes
Hi,
Need to know how to build the statistics on the primary index.
Command : brconnect -u / -c -f stats -m +I
But I don't know how to build the statistics for the primary index.
Hello Ambarish,
> But now when checked in production the query is picking up the index that is primary index and in quality system it is picking up the same
So how did you reach the conclusion that an index rebuild will fix the issue? Do the index sizes differ that much? What is the clustering factor? What are the wait events of the SQL statement in the quality system?
First you need to understand the root cause of the performance problem before you can solve it. Many questions are still open, and an index rebuild is just one "little piece of the big Oracle mosaic".
> Anyone please help. How do I rebuild the primary key index?
I have already posted the brconnect call to rebuild the primary index.
Regards
Stefan -
Reg: Recreation of Indexes
Hi all,
We are facing some performance issues. Following an SAP suggestion, we are planning to recreate some indexes, but I have never created an index before. Can anybody help with this? Please suggest the proper way to recreate indexes.
Thanks in advance.
With regards,
Harish
Hi,
To create an index:
1. Go to DB02 and click on "Missing Indexes". Select the index in the next screen and click "Create in DB".
2. If you know the table name for which you want to create the index, go to SE14 -> table name -> Indexes -> in the next screen select "Create".
3. Or use transaction SE11 in the development system: enter the database table name, press Display -> Indexes -> Create, enter the index name, choose "Maintain logon language",
and enter a short description and the index fields. -
Hi All,
We need to build an interface from PI to a database using the JDBC adapter, to send emails to business users about the exception records created in the table. Please help with using XSLT mapping for this interface.
Thanks in advance.
Hi,
You can build your scenario as follows:
1) If the emails have to be sent from PI, then you need to get back a response from the database. If it is a failure message, initiate your send-email process. The whole flow can be implemented using a BPM: Source -> PI -> JDBC -> PI -> Email.
2) You need to use a receiver mail adapter to send emails
3) Refer to this blog by Michal, which describes the XSLT mapping and configuration for sending the emails:
/people/michal.krawczyk2/blog/2005/11/23/xi-html-e-mails-from-the-receiver-mail-adapter
You can also refer: http://www.riyaz.net/blog/xipi-sending-emails-using-xi-mail-adapter/
For more clarification you can also refer my answer in this thread:
Re: xml in mail
The XSLT mapping that you need will be only in the PI ---> Email scenario...
Make sure that you download the desired email structure from the Service Marketplace before you start with the scenario. Do let me know if my answer helps.
Regards,
Abhishek. -
Reg: usage of NWDS 7.1
Hi experts
I am now working with NWDS 7.1, but when I try to deploy I get the error below.
Error: Firefox can't establish a connection to the server at h2syss.vctl.ad.
Moreover, I have specified the properties under Window -> Preferences -> SAP AS Java.
But when I entered the instance name and instance number and registered the SAP system, the table column shows two instance hosts and numbers:
h2syss 1
h2syss.vctl.ad 0
Can anyone help me out with how to resolve this issue?
Thanks & Regards,
Deepika
Hi Deepika,
The possible errors are:
1. The user ID and password you are using may not have deployment rights.
2. The network might be unstable.
3. The instance could have changed, i.e. the server was restarted and the port changed.
It is always suggested to add the server instance just before you deploy, and again after the server has restarted.
Regards
Piyas Kumar Das -
Reg. usage of BAPI_DOCUMENT_GETOBJECTDOCS
Dear Experts,
My requirement is to pass OBJECTTYPE as 'MARA' with OBJECTKEY as the material number if it is a material BOM,
and OBJECTTYPE as 'STPO_DOC' with OBJECTKEY as the GUIDX used in BOM comparison if it is a BOM item, in order to fetch the document info record attached to it. Can anyone explain this with sample code?
Thanks,
Ramesh Manoharan
Hi Raj,
Thanks for your immediate reply, but the link you sent did not help me. I need to understand material BOMs versus BOM items, and to pass OBJECTTYPE as 'STPO_DOC' for a BOM item and 'MARA' for a material BOM. How do I differentiate between a material BOM and a BOM item?
Thanks,
Ramesh Manoharan -
REG: Usage of PGP(pretty good privacy) encryption
Hi all,
I need to use PGP encryption in XI. Can you suggest whether it is possible or not? If yes, can you tell me how it can be done?
Hi,
PGP Encryption is used to support the transmission of sensitive data to / from third party systems via XI.
Adapter modules are developed to encrypt the file using PGP.
We had a similar requirement where we used PGP encryption. The module was developed using Cryptix OpenPGP, a Java implementation of the OpenPGP standard. When the module is called in the adapter, it uses the PGP key provided by the party that will receive the encrypted message. This module should be called before the SAP adapter.
Logic flow / processing:
1. Read the XML payload and message to get the needed data.
2. Read the key to be used in the encryption; log the key being used and the start of encryption.
3. Call the PGP encryption and compression method.
4. Log whether encryption was successful.
5. Set the encrypted message content, plus the principal data, as the payload.
6. If any error occurs, log an exception in the PGP adapter module along with the error reason.
7. Return the message.
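As a rough illustration only (the real module was written in Java on Cryptix OpenPGP), the processing flow above can be sketched like this, with a base64 stand-in marking where the actual PGP encrypt-and-compress call would go:

```python
import base64
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pgp_module")

def pgp_encrypt(payload: bytes, recipient_key: str) -> bytes:
    """Stand-in for step 3: a real module would call an OpenPGP library here."""
    return base64.b64encode(payload)

def process(payload: bytes, recipient_key: str) -> bytes:
    """Adapter-module-style flow following steps 1-7 above."""
    try:
        log.info("read payload (%d bytes)", len(payload))                       # step 1
        log.info("using recipient key %s; starting encryption", recipient_key)  # step 2
        encrypted = pgp_encrypt(payload, recipient_key)                          # step 3
        log.info("encryption successful")                                        # step 4
        return encrypted                                                         # steps 5 and 7
    except Exception as exc:
        log.error("PGP module error: %s", exc)                                   # step 6
        raise

print(process(b"<order/>", "partner-public-key"))
```

The function names, key handling, and logging calls here are hypothetical; only the step structure mirrors the flow described above.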
regards
kummari -
Reg usage of break point in debug mode
Dear experts,
I want to know how to jump to a particular line in debugging mode. For example, if a program contains a main program and includes, how can I go to a particular line inside one of the includes while debugging? Kindly help me.
Thanks.
Hi Prabhakar,
Using breakpoints you can stop execution at that breakpoint.
If you only step through with F5, F6 and F7 while debugging, you don't need breakpoints.
But if you want to stop at a particular statement while running the report (with F8), you should set a breakpoint there. Of course, you can also use watchpoints.
Watchpoint: if you set a watchpoint, execution stops at the statement where the watchpoint's value condition is reached.
Reward if helpful.
Regards,
Sasidhar Reddy Matli.