Performance issue - Select statement
Hi, I have 10 lakh (1,000,000) records in the KONP table. If I try to display all the records in SE11, it terminates with the short dump TSV_TNEW_PAGE_ALLOC_FAILED. I know this happens because the server runs out of memory. Is there another way to get the data? How can I optimise the SELECT statement below when the table holds this much data?
i_condn_data contains 8 lakh (800,000) records.
SELECT knumh kznep valtg valdt zterm
FROM konp
INTO TABLE i_condn_data_b
FOR ALL ENTRIES IN i_condn_data
WHERE knumh = i_condn_data-knumh
AND kschl = p_kschl.
Please suggest .
Hi,
try using "UP TO n ROWS" to limit the amount of data selected in each loop pass.
Something like this:
DATA: flag       TYPE c LENGTH 1,
      lines      TYPE i,
      last_knumh TYPE konp-knumh,
      pkg_size   TYPE i VALUE 1000000.

flag = 'X'.
CLEAR last_knumh.
WHILE flag = 'X'.
  SELECT knumh kznep valtg valdt zterm
    FROM konp
    UP TO pkg_size ROWS
    INTO TABLE i_condn_data_b
    WHERE knumh > last_knumh
      AND kschl = p_kschl
    ORDER BY knumh.                 "needed so the > restart logic does not skip rows
  DESCRIBE TABLE i_condn_data_b LINES lines.
  IF lines > 0.
    READ TABLE i_condn_data_b INDEX lines.
    last_knumh = i_condn_data_b-knumh.
*   ...your logic for table i_condn_data_b
  ENDIF.
  IF lines < pkg_size.
    CLEAR flag.
  ENDIF.
ENDWHILE.
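Alternatively, you can let the database cursor do the chunking with PACKAGE SIZE. This is only a sketch, assuming i_condn_data_b and p_kschl are declared as in the original post (note the FOR ALL ENTRIES restriction from the first select is dropped here; the chunks are filtered by kschl only):

```abap
DATA: lt_chunk LIKE i_condn_data_b.

SELECT knumh kznep valtg valdt zterm
  FROM konp
  INTO TABLE lt_chunk PACKAGE SIZE 100000
  WHERE kschl = p_kschl.
* process lt_chunk here - each pass holds at most 100,000 rows,
* so the memory footprint stays bounded
ENDSELECT.
```

Each package is processed and discarded before the next one is fetched, which avoids the TSV_TNEW_PAGE_ALLOC_FAILED dump.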
Regards
Similar Messages
-
What happens when a select statement is issued
hi,
Can anyone please explain what happens inside an instance when we issue a select statement?
please give me info.
thanks,
Sanjeev.
Please read "Overview of SQL Processing" at http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/sqllangu.htm#CHDFCAGA
-
Performance issue with statement
This is the same as my other thread but with everything formatted.
I'm having a lot of issues trying to tune this statement. I have added some new indexes and even moved existing indexes to a 32k tablespace. The execution plan has improved but when I execute the statement the data never returns. I see where my bottle-neck is but I'm lost on what else I can do to improve the performance.
STATEMENT:
SELECT DISTINCT c.oprclass, a.business_unit, i.descr, a.zsc_load,
b.ship_to_cust_id, b.zsc_load_status, f.ld_cnt,
b.zsc_mill_release, b.address_seq_num, d.name1,
e.address1 || ' - ' || e.city || ', ' || e.state || ' '
|| e.postal
FROM ps_zsc_ld a,
ps_zsc_ld_seq b,
ps_sec_bu_cls c,
ps_customer d,
ps_set_cntrl_group g,
ps_rec_group_rec r,
ps_bus_unit_tbl_fs i,
(SELECT business_unit, zsc_load, COUNT (*) AS ld_cnt
FROM ps_zsc_ld_seq
GROUP BY business_unit, zsc_load) f,
(SELECT *
FROM ps_cust_address ca
WHERE effdt =
(SELECT MAX (effdt)
FROM ps_cust_address ca1
WHERE ca.setid = ca1.setid
AND ca.cust_id = ca1.cust_id
AND ca.address_seq_num = ca1.address_seq_num
AND ca1.effdt <= SYSDATE)) e
WHERE a.business_unit = b.business_unit
AND a.zsc_load = b.zsc_load
AND r.recname = 'CUSTOMER'
AND g.rec_group_id = r.rec_group_id
AND g.setcntrlvalue = a.business_unit
AND d.setid = g.setid
AND b.ship_to_cust_id = d.cust_id
AND e.setid = g.setid
AND b.ship_to_cust_id = e.cust_id
AND b.address_seq_num = e.address_seq_num
AND a.business_unit = f.business_unit
AND a.zsc_load = f.zsc_load
AND a.business_unit = c.business_unit
AND a.business_unit = i.business_unit;
EXECUTION PLAN:
Plan
SELECT STATEMENT CHOOSECost: 1,052 Bytes: 291 Cardinality: 1
25 SORT UNIQUE Cost: 1,052 Bytes: 291 Cardinality: 1
24 SORT GROUP BY Cost: 1,052 Bytes: 291 Cardinality: 1
23 FILTER
19 NESTED LOOPS Cost: 1,027 Bytes: 291 Cardinality: 1
17 NESTED LOOPS Cost: 1,026 Bytes: 279 Cardinality: 1
15 NESTED LOOPS Cost: 1,025 Bytes: 263 Cardinality: 1
12 NESTED LOOPS Cost: 1,024 Bytes: 227 Cardinality: 1
10 NESTED LOOPS Cost: 1,023 Bytes: 28,542 Cardinality: 134
7 HASH JOIN Cost: 60 Bytes: 134,101 Cardinality: 803
5 NESTED LOOPS Cost: 49 Bytes: 5,175 Cardinality: 45
3 NESTED LOOPS Cost: 48 Bytes: 1,230,725 Cardinality: 12,955
1 TABLE ACCESS FULL SYSADM.PS_CUST_ADDRESS Cost: 20 Bytes: 3,465 Cardinality: 45
2 INDEX RANGE SCAN UNIQUE SYSADM.TEST3 Cost: 1 Bytes: 5,130 Cardinality: 285
4 INDEX UNIQUE SCAN UNIQUE SYSADM.PS_REC_GROUP_REC Bytes: 20 Cardinality: 1
6 INDEX FAST FULL SCAN NON-UNIQUE SYSADM.PS0CUSTOMER Cost: 10 Bytes: 252,460 Cardinality: 4,855
9 TABLE ACCESS BY INDEX ROWID SYSADM.PS_ZSC_LD_SEQ Cost: 2 Bytes: 46 Cardinality: 1
8 INDEX RANGE SCAN UNIQUE SYSADM.TEST7 Cost: 1 Cardinality: 1
11 INDEX UNIQUE SCAN UNIQUE SYSADM.PS_ZSC_LD Bytes: 14 Cardinality: 1
14 TABLE ACCESS BY INDEX ROWID SYSADM.PS_BUS_UNIT_TBL_FS Cost: 2 Bytes: 36 Cardinality: 1
13 INDEX UNIQUE SCAN UNIQUE SYSADM.PS_BUS_UNIT_TBL_FS Cardinality: 1
16 INDEX FULL SCAN UNIQUE SYSADM.PS_SEC_BU_CLS Cost: 2 Bytes: 96 Cardinality: 6
18 INDEX RANGE SCAN UNIQUE SYSADM.PS_ZSC_LD_SEQ Cost: 1 Bytes: 12 Cardinality: 1
22 SORT AGGREGATE Bytes: 31 Cardinality: 1
21 FIRST ROW Cost: 2 Bytes: 31 Cardinality: 1
20 INDEX RANGE SCAN (MIN/MAX) UNIQUE SYSADM.PS_CUST_ADDRESS Cost: 2 Cardinality: 5,364
TRACE INFO:
call count cpu elapsed disk query current rows
Parse 1 0.22 0.24 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 1208.24 1179.86 92 221319711 0 0
total 3 1208.46 1180.11 92 221319711 0 0
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 81
Rows Row Source Operation
0 SORT UNIQUE (cr=0 r=0 w=0 time=0 us)
0 SORT GROUP BY (cr=0 r=0 w=0 time=0 us)
0 FILTER (cr=0 r=0 w=0 time=0 us)
0 NESTED LOOPS (cr=0 r=0 w=0 time=0 us)
0 NESTED LOOPS (cr=0 r=0 w=0 time=0 us)
0 NESTED LOOPS (cr=0 r=0 w=0 time=0 us)
0 NESTED LOOPS (cr=0 r=0 w=0 time=0 us)
0 NESTED LOOPS (cr=0 r=0 w=0 time=0 us)
0 HASH JOIN (cr=0 r=0 w=0 time=0 us)
2717099 NESTED LOOPS (cr=221319646 r=92 w=0 time=48747178172 us)
220447566 NESTED LOOPS (cr=872143 r=92 w=0 time=10965565169 us)
4590 TABLE ACCESS FULL OBJ#(15335) (cr=99 r=92 w=0 time=58365 us)
220447566 INDEX RANGE SCAN OBJ#(2684506) (cr=872044 r=0 w=0 time=2533034831 us)(object id 2684506)
2717099 INDEX UNIQUE SCAN OBJ#(583764) (cr=220447568 r=0 w=0 time=23792811449 us)(object id 583764)
0 INDEX FAST FULL SCAN OBJ#(15319) (cr=0 r=0 w=0 time=0 us)(object id 15319)
0 TABLE ACCESS BY INDEX ROWID OBJ#(735431) (cr=0 r=0 w=0 time=0 us)
0 INDEX RANGE SCAN OBJ#(2684517) (cr=0 r=0 w=0 time=0 us)(object id 2684517)
0 INDEX UNIQUE SCAN OBJ#(550855) (cr=0 r=0 w=0 time=0 us)(object id 550855)
0 TABLE ACCESS BY INDEX ROWID OBJ#(11041) (cr=0 r=0 w=0 time=0 us)
0 INDEX UNIQUE SCAN OBJ#(582984) (cr=0 r=0 w=0 time=0 us)(object id 582984)
0 INDEX FULL SCAN OBJ#(583859) (cr=0 r=0 w=0 time=0 us)(object id 583859)
0 INDEX RANGE SCAN OBJ#(2684186) (cr=0 r=0 w=0 time=0 us)(object id 2684186)
0 SORT AGGREGATE (cr=0 r=0 w=0 time=0 us)
0 FIRST ROW (cr=0 r=0 w=0 time=0 us)
0 INDEX RANGE SCAN (MIN/MAX) OBJ#(15336) (cr=0 r=0 w=0 time=0 us)(object id 15336)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
db file scattered read 14 0.00 0.00
direct path write 3392 0.00 0.06
db file sequential read 8 0.00 0.00
I had an index on that table, but that was still not where my bottleneck showed up, so I removed it. I have since added the index back, and it has clearly helped the execution plan.
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 1 | 291 | 1035 (1)|
| 1 | SORT UNIQUE | | 1 | 291 | 1035 (1)|
| 2 | SORT GROUP BY | | 1 | 291 | 1035 (1)|
| 3 | FILTER | | | | |
| 4 | NESTED LOOPS | | 1 | 291 | 1010 (1)|
| 5 | NESTED LOOPS | | 1 | 279 | 1009 (1)|
| 6 | NESTED LOOPS | | 1 | 243 | 1008 (1)|
| 7 | NESTED LOOPS | | 1 | 227 | 1006 (0)|
| 8 | NESTED LOOPS | | 135 | 28755 | 1005 (0)|
| 9 | HASH JOIN | | 805 | 131K| 39 (0)|
| 10 | HASH JOIN | | 45 | 5175 | 28 (0)|
| 11 | TABLE ACCESS FULL | PS_CUST_ADDRESS | 45 | 3465 | 20 (0)|
| 12 | NESTED LOOPS | | 3398 | 126K| 7 (0)|
| 13 | INDEX FAST FULL SCAN | PS_REC_GROUP_REC | 1 | 20 | 5 (0)|
| 14 | INDEX RANGE SCAN | TEST11 | 3398 | 61164 | 3 (0)|
| 15 | INDEX FAST FULL SCAN | PS0CUSTOMER | 4855 | 246K| 10 (0)|
| 16 | TABLE ACCESS BY INDEX ROWID| PS_ZSC_LD_SEQ | 1 | 46 | 2 (0)|
| 17 | INDEX RANGE SCAN | PS0ZSC_LD_SEQ | 1 | | 1 (0)|
| 18 | INDEX UNIQUE SCAN | PS_ZSC_LD | 1 | 14 | |
| 19 | INDEX FULL SCAN | PS_SEC_BU_CLS | 3 | 48 | 2 (0)|
| 20 | TABLE ACCESS BY INDEX ROWID | PS_BUS_UNIT_TBL_FS | 1 | 36 | 2 (50)|
| 21 | INDEX UNIQUE SCAN | PS_BUS_UNIT_TBL_FS | 1 | | |
| 22 | INDEX RANGE SCAN | PS_ZSC_LD_SEQ | 1 | 12 | 1 (0)|
| 23 | SORT AGGREGATE | | 1 | 31 | |
| 24 | FIRST ROW | | 1 | 31 | 2 (0)|
| 25 | INDEX RANGE SCAN (MIN/MAX) | PS_CUST_ADDRESS | 5364 | | 2 (0)|
------------------------------------------------------------------------------------------------
-
Performance Issue: Select From BSEG & BKPF
Hi experts,
Performance issue on the select statements; how can I improve the performance?
Select Company Code (BUKRS)
Accounting Document Number (BELNR)
Document Type (BLART)
Posting Date in the Document (BUDAT)
Document Status (BSTAT)
Reversal Document or Reversed Document Indicator (XREVERSAL)
From Accounting Document Header (BKPF)
Into I_BKPF
Where BKPF-BUKRS = I_VBAK-BUKRS_VF
BKPF-BLART = KI
BKPF-BUDAT >= SY-DATUM - 2 (i.e. within the last 2 days)
BKPF-BSTAT = Initial
BKPF-XREVERSAL <> '1' AND BKPF-XREVERSAL <> '2'
Select Company Code (BUKRS)
Accounting Document Number (BELNR)
Assignment Number (ZUONR)
Sales Document (VBEL2)
Sales Document Item (POSN2)
P & L Statement Account Type (GVTYP)
From Accounting Document Segment (BSEG)
Into I_BSEG
Where BSEG-BUKRS = I_VBAK-BUKRS
BSEG-VBELN = I_VBAK-VBEL2
BSEG-POSN2 = I_VBAP-POSNR
BSEG-BELNR = I_BKPF-BELNR
P & L Statement Account Type (GVTYP) = X
Hi,
to improve the performance, you can read from the secondary index tables instead of BSEG, viz. BSIK/BSAK (vendor items), BSID/BSAD (customer items) and BSIS/BSAS (G/L account items).
Hope this helps.
Best Regards, Murugesh AS -
[Performance Issue] Select from MSEG
Hi experts,
Need your help on how to improve the performance in the select from MSEG, it takes about 30 minutes to just finish the select. Thanks!
SELECT mblnr
mjahr
zeile
bwart
matnr
werks
lgort
charg
shkzg
menge
ummat
lgpla
FROM mseg
INTO CORRESPONDING FIELDS OF TABLE i_mseg2
FOR ALL ENTRIES IN i_likp
WHERE bwart IN ('601','602','653','654')
AND matnr IN s_matnr
AND werks IN s_werks
AND lgort IN s_sloc
AND lgpla EQ i_likp-vbeln.
Store all the vbeln values in a ranges table:
ranges:r_vbeln for i_likp-vbeln.
r_vbeln-option = 'EQ'.
r_vbeln-sign = 'I'.
loop at i_likp.
r_vbeln-low = i_likp-vbeln.
append r_vbeln.
endloop.
sort r_vbeln ascending.
delete adjacent duplicates from r_vbeln.
Then modify the fetch as below.
Do not use a loop to fetch the data from MSEG row by row.
SELECT mblnr mjahr zeile bwart matnr werks lgort charg shkzg menge ummat lgpla
FROM mseg INTO CORRESPONDING FIELDS OF TABLE i_mseg2
WHERE bwart IN ('601', '602', '653', '654')
AND matnr IN s_matnr
AND werks IN s_werks
AND lgort IN s_sloc
AND lgpla IN r_vbeln.
There is another table where you can get this data; I'll let you know shortly.
Try this and see if it helps.
Reward points if useful.
How to improve Performance for Select statement
Hi Friends,
Can you please help me in improving the performance of the following query:
SELECT SINGLE MAX( policyterm ) startterm INTO (lv_term, lv_cal_date) FROM zu1cd_policyterm WHERE gpart = gv_part GROUP BY startterm.
Thanks and Regards,
Johny
Long lists cannot be produced with a SELECT SINGLE, and there is also nothing to group.
But I guess the SINGLE is a bug
SELECT MAX( policyterm ) startterm
INTO (lv_term, lv_cal_date)
FROM zu1cd_policyterm
WHERE gpart = gv_part
GROUP BY startterm.
How many records are in zu1cd_policyterm ?
Is there an index starting with gpart?
If first answer is 'large' and second 'no' => slow
What is the meaning of gpart? How many different values can it assume?
If many different values then an index makes sense, if you are allowed to create
an index.
Otherwise you must be patient.
Siegfried -
Performance Issue: Update Statement
Hi Team,
My current environment is Oracle 11g Rac...
My application team executes an update statement (of course, it is their daily activity).
It updates about 1 lakh (100,000) rows and normally takes 3-4 minutes to run.
But today it is taking much longer, i.e. more than 8 hours.
So I generated the explain plan of the update statement and found that it is doing a full table scan.
Kindly assist me in fixing the issue by letting me know where and how to look for the problem.
**Note: statistics gathering is up to date
Thanks in advance.
Regards
If you notice, there are no indexes used by the update statement below -
UPDATE REMEDY_JOURNALS_FACT SET JNL_CREATED_BY_IDENTITY_KEY = ?, JNL_CREATED_BY_HR_KEY = ?, JNL_CREATED_BY_NTWRK_KEY = ?, JNL_MODIFIED_BY_IDENTITY_KEY = ?, JNL_MODIFIED_BY_HR_KEY = ?, JNL_MODIFIED_BY_NTWRK_KEY = ?, JNL_ASSGN_TO_IDENTITY_KEY = ?, JNL_ASSGN_TO_HR_KEY = ?, JNL_ASSGN_TO_NTWRK_KEY = ?, JNL_REMEDY_STATUS_KEY = ?, JOURNALID = ?, JNL_DATE_CREATED = ?, JNL_DATE_MODIFIED = ?, ENTRYTYPE = ?, TMPTEMPDATETIME1 = ?, RELATEDFORMNAME = ?, RELATED_RECORDID = ?, RELATEDFORMKEYWORD = ?, TMPRELATEDRECORDID = ?, ACCESS_X = ?, JOURNAL_TEXT = ?, DATE_X = ?, SHORTDESCRIPTION = ?, TMPCREATEDBY = ?, TMPCREATE_DATE = ?, TMPLASTMODIFIEDBY = ?, TMPMODIFIEDDATE = ?, TMPJOURNALID = ?, JNL_JOURNALTYPE = ?, COPIEDTOWORKLOG = ?, PRIVATE = ?, RELATEDKEYSTONEID = ?, URLLOCATION = ?, ASSIGNEEGROUP = ?, LAST_UPDATE_DT = ? WHERE REMEDY_JOURNALS_KEY = ?
Explain Plan -
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | UPDATE STATEMENT | | | | 1055 (100)| | | | | | |
| 1 | UPDATE | REMEDY_JOURNALS_FACT | | | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 784 | 1055 (1)| 00:00:05 | | | Q1,00 | P->S | QC (RAND) |
| 4 | PX BLOCK ITERATOR | | 1 | 784 | 1055 (1)| 00:00:05 | 1 | 10 | Q1,00 | PCWC | |
|* 5 | TABLE ACCESS STORAGE FULL| REMEDY_JOURNALS_FACT | 1 | 784 | 1055 (1)| 00:00:05 | 1 | 10 | Q1,00 | PCWP | |
Predicate Information (identified by operation id):
5 - storage(:Z>=:Z AND :Z<=:Z AND "REMEDY_JOURNALS_KEY"=:36) filter("REMEDY_JOURNALS_KEY"=:36)
Note
- automatic DOP: skipped because of IO calibrate statistics are missing
Edited by: GeetaM on Aug 17, 2012 2:18 PM -
Performance of SELECT Statements
Hi All,
I have a small confusion.
Consider these 3 statements :
A.
SELECT * from table1 where {conditions based on primary key}.
exit.
Endselect.
B.
SELECT SINGLE * from table1 where {conditions based on primary key}.
C.
SELECT * FROM table1 UP TO 1 ROWS WHERE {conditions based on primary key}.
ENDSELECT.
Considering that table1 has a primary key consisting of more than one field, which statement gives the best performance when
1.All the keys are used in the where condition.
2.Only a few keys are used in the where condition
Thanks in advance.
Hari
Hi,
If you are interested in only one record then
1.All the keys are used in the where condition.
Ans B
2.Only a few keys are used in the where condition
Ans C
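In other words (a sketch, using a hypothetical table ZTAB with key fields K1 and K2 and a work area WA):

```abap
* Full key known -> SELECT SINGLE (direct unique read):
SELECT SINGLE * FROM ztab INTO wa
  WHERE k1 = lv_k1
    AND k2 = lv_k2.

* Only part of the key known -> UP TO 1 ROWS; the database
* stops after the first hit instead of checking uniqueness:
SELECT * FROM ztab INTO wa UP TO 1 ROWS
  WHERE k1 = lv_k1.
ENDSELECT.
```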
santhosh -
Hi,
I have recently been asked to write a program that reports on payroll postings to FI. This involves creating a giant select statement on the PPOIX table to gather all the postings. My select statement is as follows:
SELECT pernr "EE Number
seqno "Sequential number
actsign "Indicator: Status of record
runid "Number of posting run
postnum "Number
tslin "Line number of data transfer
lgart "Wage Type
betrg "Amount
waers "Currency
anzhl "Number
meins "Base unit of measure
spprc "Special processing of posting items
momag "Transfer to FI/CO:EE grouping for acct determi
komok "Transfer to FI/CO: Symbolic account
mcode "Matchcode search term
koart "Account assignment type
auart "Expenditure type
nofin "Indicator: Expenditure type is not funded
INTO CORRESPONDING FIELDS OF TABLE i_ppoix
FROM ppoix
FOR ALL ENTRIES IN run_doc_xref
WHERE runid = run_doc_xref-runid
AND tslin = run_doc_xref-linum
AND spprc <> 'A'
AND lgart IN s_lgart
AND pernr in s_pernr.
where s_pernr is a select-option that holds personnel numbers and s_lgart is a select-option that holds wage types. This statement works fine for a certain number of personnel numbers and wage types, but once you exceed a certain limit the database does not allow you to perform a select statement this large. Is there a better way to perform such a large select (e.g. a function module, or some other method I am not aware of)? This select statement comes from the standard SAP-delivered cost center admin report, and that report dumps as well when too much data is passed to it.
any ideas would be much appreciated.
thanks.
The problem here is with the select-options.
For a select statement, you cannot pass more than a certain amount of data in the WHERE clause.
The problem with your select becomes complex because of the FOR ALL ENTRIES, the huge s_pernr, and the 40 million records :(.
I am guessing that the s_lgart will be small.
How many entries do you have in internal table "run_doc_xref"?
If there are not that many, then I would suggest this:
TYPES:
BEGIN OF ty_temp_ppoix,
pernr TYPE ppoix-pernr,
lgart TYPE ppoix-lgart,
seqno TYPE ppoix-seqno,
actsign TYPE ppoix-actsign,
runid TYPE ppoix-runid,
postnum TYPE ppoix-postnum,
tslin TYPE ppoix-tslin,
betrg TYPE ppoix-betrg,
spprc TYPE ppoix-spprc,
END OF ty_temp_ppoix.
DATA:
i_temp_ppoix TYPE SORTED TABLE OF ty_temp_ppoix
WITH NON-UNIQUE KEY pernr lgart
INITIAL SIZE 0
WITH HEADER LINE.
DATA:
v_pernr_lines TYPE sy-tabix,
v_lgart_lines TYPE sy-tabix.
IF NOT run_doc_xref[] IS INITIAL.
DESCRIBE TABLE s_pernr LINES v_pernr_lines.
DESCRIBE TABLE s_lgart LINES v_lgart_lines.
IF v_pernr_lines GT 800 OR
v_lgart_lines GT 800.
* There is an index on runid and tslin. This should be ok
* ( still bad because of the huge table :( )
SELECT pernr lgart seqno actsign runid postnum tslin betrg spprc
* Selecting into sorted TEMP table here
INTO TABLE i_temp_ppoix
FROM ppoix
FOR ALL ENTRIES IN run_doc_xref
WHERE runid = run_doc_xref-runid
AND tslin = run_doc_xref-linum
AND spprc <> 'A'.
* The sorted table should make the delete faster
* (OR, not AND: keep only rows matching both select-options)
DELETE i_temp_ppoix WHERE NOT pernr IN s_pernr
OR NOT lgart IN s_lgart.
* Now populate the actual target
LOOP AT i_temp_ppoix.
MOVE: i_temp_ppoix-pernr TO i_ppoix-pernr.
* and the rest of the fields
APPEND i_ppoix.
DELETE i_temp_ppoix.
ENDLOOP.
ELSE.
SELECT pernr seqno actsign runid postnum tslin lgart betrg spprc
* Selecting into your ACTUAL target here
INTO TABLE i_ppoix
FROM ppoix
FOR ALL ENTRIES IN run_doc_xref
WHERE runid = run_doc_xref-runid
AND tslin = run_doc_xref-linum
AND spprc <> 'A'
AND pernr IN s_pernr
AND lgart IN s_lgart.
ENDIF.
ELSE.
* Error message because of no entries in run_doc_xref?
* Please answer this so a new solution can be implemented here
* if it is NOT an error
ENDIF.
Hope this helps.
Regards,
-Ramesh -
Hey everyone,
First of all, yes, I have been looking through the 8.5 database schema guide. As I have been reviewing the schema, I have been developing some ideas on how to collect the desired data. However, if anyone has already developed or found the SQL statements (which I'm sure someone has), it would help me by minimizing bugs in my data collection program.
All of these statistics need to be grouped by CSQ and selected for a certain time range (<start time> and <stop time>), i.e. 1-hour increments. I have no problem getting a list of results and then performing calculations to get the desired end result. Also, if I need to perform multiple select statements to essentially join two tables, please include both statements. Finally, I saw the RtCSQsSummary table, but I have to collect data for the past, not just at the current moment.
1. Total calls presented per CSQ
2. Total calls answered per CSQ
3. Total calls abandoned per CSQ
4. Percentage of calls abandoned per CSQ (if this is not stored in the database, I'm thinking: <calls abandoned>/<calls presented>)
5. Average abandon time in seconds (if this is not stored in the db, I'm thinking: sum(<abandon time>)/<calls abandoned>)
6. Service Level - % of calls answered within 90 seconds by a skill set (I saw metServiceLevel in table ContactQueueDetail; however, I would have to find out how to configure this threshold for the application)
7. Average speed of answer per CSQ
8. Average call talk time per CSQ
9. Aggregate logged in time of CSQ resources/agents
10. Aggregate ready time of CSQ resources/agents
I realize that some of these should be easy to find (as I am still digging through the db schema guide), but I was reading how a new record is created for every call leg so I could easily see how I could get inaccurate information without properly developed select statements.
Any help will be greatly appreciated.
Brendan
Hi,
kindly use the below link
http://www.cisco.com/en/US/docs/voice_ip_comm/cust_contact/contact_center/crs/express_8_5/user/guide/uccx85dbschema.pdf
it is the data base schema for UCCX 8.5.
If you want to connect to the DB, go to page 123; it shows you how to connect. It is for UCCX 9, not 8.5, but it is worth a try:
http://www.cisco.com/en/US/docs/voice_ip_comm/cust_contact/contact_center/crs/express_9_02/programming/guide/UCCX_BK_CFD16E30_00_cisco-unified-contact-center-express.pdf
HTH
Anas
please rate all helpful posts -
Performance Issue in Select Statement (For All Entries)
Hello,
I have a report where i have two select statement
First Select Statement:
Select A B C P Q R
from T1 into Table it_t1
where ....
Internal Table it_t1 is populated with 359801 entries through this select statement.
Second Select Statement:
Select A B C X Y Z
from T2 into Table it_t2 For All Entries in it_t1
where A eq it_t1-A
and B eq it_t1-B
and C eq it_t1-C
Now table T2 contains more than 10 lakh (1,000,000) records, and at the end of the select statement it_t2 is populated with 844,003 entries, but the second select statement takes a long time (15-20 min) to execute.
Can this code be optimized?
Also i have created respective indexes on table T1 and T2 for the fields in Where Condition.
Regards,
If you have completed all the steps mentioned by others in the above thread and you are still facing issues, then.....
Use a select within select, fetching T1 in packages:
First Select Statement:
SELECT a b c p q r
FROM t1 INTO TABLE it_t1 PACKAGE SIZE 5000
WHERE ....
Second Select Statement (executed once per package, inside the loop):
SELECT a b c x y z
FROM t2 INTO TABLE it_t2
FOR ALL ENTRIES IN it_t1
WHERE a EQ it_t1-a
AND b EQ it_t1-b
AND c EQ it_t1-c.
do processing........
ENDSELECT.
This way, while using for all entries on T2, your it_t1, will have limited number of entries and thus the 2nd select will be faster.
Thanks,
Juwin -
Performance Issue in select statements
The following statements take too much time to execute. Is there a better way to write these selection statements? int_report_data is my final internal table.
select f~splant f~vplant f~rplant f~l1_sto p~l1_delivery p~l1_gr p~l2_sto p~l2_delivery p~err_msg
into (dochdr-swerks, dochdr-vwerks, dochdr-rwerks, dochdr-l1sto, docitem-l1xblnr, docitem-l1gr, docitem-l2sto, docitem-l2xblnr, docitem-err_msg)
from zdochdr as f inner join zdocitem as p on f~l1_sto = p~l1_sto
where f~splant in s_werks
and f~vplant in v_werks
and f~rplant in r_werks
and p~l1_delivery in l1_xblnr
and p~l1_gr in l1_gr
and p~l2_delivery in l2_xblnr.
move : dochdr-swerks to int_report_data-i_swerks,
dochdr-vwerks to int_report_data-i_vwerks,
dochdr-rwerks to int_report_data-i_rwerks,
dochdr-l1sto to int_report_data-i_l1sto,
docitem-l1xblnr to int_report_data-i_l1xblnr,
docitem-l1gr to int_report_data-i_l1gr,
docitem-l2sto to int_report_data-i_l2sto,
docitem-l2xblnr to int_report_data-i_l2xblnr,
docitem-err_msg to int_report_data-i_errmsg.
append int_report_data.
endselect.
Goods receipt
loop at int_report_data.
select single ebeln from ekbe into l2gr where ebeln = int_report_data-i_l2sto and bwart = '101' and bewtp = 'E' and vgabe = '1'.
if sy-subrc eq 0.
move l2gr to int_report_data-i_l2gr.
modify int_report_data.
endif.
endloop.
First billing document (I have to check fkart = 'ZRTY' for the second billing document - how can I write that statement?)
select vbeln from vbfa into (tabvbfa-vbeln) where vbelv = int_report_data-i_l2xblnr or vbelv = int_report_data-i_l1xblnr.
select single vbeln from vbrk into tabvbrk-vbeln where vbeln = tabvbfa-vbeln and fkart = 'IV'.
if sy-subrc eq 0.
move tabvbrk-vbeln to int_report_data-i_l2vbeln.
modify int_report_data.
endif.
endselect.
Thanks in advance,
Yad
Hi!
Which of your selects is slow? Make a SQL-trace, check which select(s) is(are) slow.
For EKBE and VBFA you are selecting first key field - in general that is fast. If your z-tables are the problem, maybe an index might help.
Instead of looping and making a lot of select singles, one select 'for all entries' can help, too.
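For example, the EKBE loop above could be replaced by one array fetch plus a sorted read. This is only a sketch based on the field names in the original post (ty_ekbe and lt_ekbe are names introduced here):

```abap
TYPES: BEGIN OF ty_ekbe,
         ebeln TYPE ekbe-ebeln,
       END OF ty_ekbe.
DATA: lt_ekbe TYPE TABLE OF ty_ekbe.

IF int_report_data[] IS NOT INITIAL.
  SELECT ebeln FROM ekbe
    INTO TABLE lt_ekbe
    FOR ALL ENTRIES IN int_report_data
    WHERE ebeln = int_report_data-i_l2sto
      AND bwart = '101'
      AND bewtp = 'E'
      AND vgabe = '1'.
ENDIF.
SORT lt_ekbe BY ebeln.

LOOP AT int_report_data.
  READ TABLE lt_ekbe WITH KEY ebeln = int_report_data-i_l2sto
       TRANSPORTING NO FIELDS BINARY SEARCH.
  IF sy-subrc = 0.
*   the selected ebeln equals i_l2sto because of the WHERE condition
    int_report_data-i_l2gr = int_report_data-i_l2sto.
    MODIFY int_report_data.
  ENDIF.
ENDLOOP.
```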
Please analyze further and give feedback.
Regards,
Christian -
Performance problem(ANEA/ANEP table) in Select statement
Hi
I am using below select statement to fetch data.
Does the below where statement have performance issue?
can you Pls suggest.
1)In select of ANEP table, i am not using all the Key field in where condition. will it have performance problem?
2)does the order of where condition should be same as in table, if any one field order change also will have effect performance
SELECT bukrs
anln1
anln2
afabe
gjahr
peraf
lnran
bzdat
bwasl
belnr
buzei
anbtr
lnsan
FROM anep
INTO TABLE o_anep
FOR ALL ENTRIES IN i_anla
WHERE bukrs = i_anla-bukrs
AND anln1 = i_anla-anln1
AND anln2 = i_anla-anln2
AND afabe IN s_afabe
AND bzdat <= p_date
AND bwasl IN s_bwasl.
SELECT bukrs
anln1
anln2
gjahr
lnran
afabe
aufwv
nafal
safal
aafal
erlbt
aufwl
nafav
aafav
invzv
invzl
FROM anea
INTO TABLE o_anea
FOR ALL ENTRIES IN o_anep
WHERE bukrs = o_anep-bukrs
AND anln1 = o_anep-anln1
AND anln2 = o_anep-anln2
AND gjahr = o_anep-gjahr
AND lnran = o_anep-lnran
AND afabe = o_anep-afabe.
Moderator message: Please Read before Posting in the Performance and Tuning Forum
Edited by: Thomas Zloch on Aug 9, 2011 9:37 AM
1. Yes. If you use only a few of the primary key fields in your WHERE condition, that does affect performance. But sometimes the requirement itself is that way: we may not know all the primary key values to supply in the WHERE condition. If you do know the values, provide them without fail.
2. Yes. It is better to always follow the table's field sequence in the WHERE condition, and also in the list of fields being fetched.
One important point: whenever you use FOR ALL ENTRIES IN, make sure that the itab IS NOT INITIAL, i.e. that the itab has actually been filled. So place that condition before both SELECT queries, like:
IF i_anla[] IS NOT INITIAL.
SELECT bukrs
anln1
anln2
afabe
gjahr
peraf
lnran
bzdat
bwasl
belnr
buzei
anbtr
lnsan
FROM anep
INTO TABLE o_anep
FOR ALL ENTRIES IN i_anla
WHERE bukrs = i_anla-bukrs
AND anln1 = i_anla-anln1
AND anln2 = i_anla-anln2
AND afabe IN s_afabe
AND bzdat <= p_date
AND bwasl IN s_bwasl.
ENDIF.
IF o_anep[] IS NOT INITIAL.
SELECT bukrs
anln1
anln2
gjahr
lnran
afabe
aufwv
nafal
safal
aafal
erlbt
aufwl
nafav
aafav
invzv
invzl
FROM anea
INTO TABLE o_anea
FOR ALL ENTRIES IN o_anep
WHERE bukrs = o_anep-bukrs
AND anln1 = o_anep-anln1
AND anln2 = o_anep-anln2
AND gjahr = o_anep-gjahr
AND lnran = o_anep-lnran
AND afabe = o_anep-afabe.
ENDIF. -
Performance Issue on Select Condition on KNA1 table
Hi,
I am facing a problem when selecting from the table KNA1 for a given account group and attribute 9; it is taking a lot of time.
Please suggest the select query or any other feasible soln to solve this problem
select
kunnr
kotkd
from kna1
where kunnr eq parameter value
and kotkd eq 'ZPY'
and ( katr9 = 'L' or katr9 = 'T' ).
At first I used IN for katr9, but I removed it due to the performance issue and used READ instead. Please suggest a further performance solution.
Hi,
The select should be like:
select
kunnr
kotkd
from kna1
where kunnr eq parameter value
and kotkd eq 'ZPY'
and katr9 in r_katr9. " 'L' or 'T'.
create a range for katr9 like r_katr9 with L or T.
Because while selecting the entries from KNA1, it will check for KATR9 = L and then KATR9 = T.
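Filling the range could look like this (a sketch; r_katr9 is the name suggested above):

```abap
RANGES: r_katr9 FOR kna1-katr9.

r_katr9-sign   = 'I'.
r_katr9-option = 'EQ'.
r_katr9-low    = 'L'.
APPEND r_katr9.
r_katr9-low    = 'T'.
APPEND r_katr9.
```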
Hope the above statement is useful for you.
Regards,
Shiva. -
Performance issue with view selection after migration from oracle to MaxDb
Hello,
After the migration from oracle to MaxDb we have serious performance issues with a lot of our tableview selections.
Does anybody know about this problem and how to solve it ??
Best regards !!!
Gert-Jan
Hello Gert-Jan,
most probably you need additional indexes to get better performance.
Using the command monitor you can identify the long running SQL statements and check the optimizer access strategy. Then you can decide which indexes might help.
If this is about an SAP system, you can find additional information about performance analysis in SAP notes 725489 and 819641.
SAP Hosting provides the so-called service 'MaxDB Migration Support' to help you in such cases. The service description can be found here:
http://www.saphosting.de/mediacenter/pdfs/solutionbriefs/MaxDB_de.pdf
http://www.saphosting.com/mediacenter/pdfs/solutionbriefs/maxDB-migration-support_en.pdf.
Best regards,
Melanie Handreck