Performance issue and functional question regarding updates on tables
A person at my site wrote some code to update a custom field on the MARC table with a value copied from the MARA table. Here is what I would have expected the code to look like. Assume that both versions of the code have a parameter called p_werks, which is the plant in question.
data: commit_count type i.
select matnr zfield from mara into (wa_marc-matnr, wa_marc-zfield).
  update marc set zfield = wa_marc-zfield
    where werks = p_werks and matnr = wa_marc-matnr.
  commit work and wait.
endselect.
I would have committed every 200 rows instead of after every single row, but here is the actual code, and my question isn't about the commits but about something else. In this case an internal table was built with two elements, MATNR and WERKS; that could have been done above too, but that's not my question.
DO.
  " Lock the record that needs to be updated with the material creation date
  CALL FUNCTION 'ENQUEUE_EMMARCS'
    EXPORTING
      mode_marc      = 'S'
      mandt          = sy-mandt
      matnr          = wa_marc-matnr
      werks          = wa_marc-werks
    EXCEPTIONS
      foreign_lock   = 1
      system_failure = 2
      OTHERS         = 3.
  IF sy-subrc <> 0.
    " Wait and retry if the record could not be locked
    CALL FUNCTION 'RZL_SLEEP'.
  ELSE.
    EXIT.
  ENDIF.
ENDDO.
" Update the record in table MARC with the material creation date
UPDATE marc SET zzdate = wa_mara-zzdate
  WHERE matnr = wa_mara-matnr AND
        werks = wa_marc-werks. " IN s_werks.
IF sy-subrc EQ 0.
  " Commit the change to database table MARC
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
    EXPORTING
      wait   = 'X'
    IMPORTING
      return = wa_return.
  wa_log-matnr   = wa_marc-matnr.
  wa_log-werks   = wa_marc-werks.
  wa_log-type    = 'S'.
  " text-010 - 'Material creation date has been updated'
  wa_log-message = text-010.
  wa_log-zzdate  = wa_mara-zzdate.
  APPEND wa_log TO tb_log.
  CLEAR: wa_return, wa_log.
ELSE.
  " Roll back the change if any issue occurs
  CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'
    IMPORTING
      return = wa_return.
  wa_log-matnr   = wa_marc-matnr.
  wa_log-werks   = wa_marc-werks.
  wa_log-type    = 'E'.
  " text-011 - 'Material creation date was not updated'
  wa_log-message = text-011.
  wa_log-zzdate  = wa_mara-zzdate.
  APPEND wa_log TO tb_log.
  CLEAR: wa_return, wa_log.
ENDIF.
" Release the lock
CALL FUNCTION 'DEQUEUE_EMMARCS'
  EXPORTING
    mode_marc = 'S'
    mandt     = sy-mandt
    matnr     = wa_marc-matnr
    werks     = wa_marc-werks.
ENDIF.
Here's the question: why did this person enqueue and dequeue explicit locks like this? They claimed it was to prevent issues - what issues? Is there something special about updating tables that we don't know about? We've actually seen the system run out of these ENQUEUE locks.
Before you all go off the deep end and ask why not just do a single mass update, keep in mind that you don't want to update a million-plus rows and then do one commit either - that locks up the entire table!
The ENQUEUE lock ensures that a program run by another user will not update the same data at the same time, preventing loss of database consistency. Without it, another user in a standard SAP transaction could have read and locked the record; when that transaction updates it, your modifications would be lost, or you could overwrite modifications made in another LUW.
You cannot use a COMMIT WORK inside a SELECT ... ENDSELECT loop, because COMMIT WORK closes each and every open database cursor, so your first idea would dump after the first update. (That is why the internal table is mandatory.)
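To make that concrete, here is a minimal sketch of the packaged-commit variant the original poster had in mind, rewritten around an internal table so that no database cursor is open when COMMIT WORK runs. The names zfield and p_werks follow the question; the structure ty_key is illustrative.

```abap
TYPES: BEGIN OF ty_key,
         matnr  TYPE mara-matnr,
         zfield TYPE mara-zfield,   " custom field copied from MARA (per the question)
       END OF ty_key.
DATA: lt_keys      TYPE STANDARD TABLE OF ty_key,
      ls_key       TYPE ty_key,
      commit_count TYPE i.

" Read everything first - the SELECT cursor is closed before any commit
SELECT matnr zfield FROM mara INTO TABLE lt_keys.

LOOP AT lt_keys INTO ls_key.
  UPDATE marc SET zfield = ls_key-zfield
    WHERE werks = p_werks AND matnr = ls_key-matnr.
  commit_count = commit_count + 1.
  IF commit_count >= 200.        " commit every 200 rows
    COMMIT WORK AND WAIT.
    CLEAR commit_count.
  ENDIF.
ENDLOOP.
COMMIT WORK AND WAIT.            " commit the final partial package
```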
Go through some documentation like [Updates in the R/3 System (BC-CST-UP)|http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCCSTUP/BCCSTUP_PT.pdf]
Regards
Similar Messages
-
ABAP performance issues and improvements
Hi All,
Please give me the ABAP performance issues and improvement points.
Regards,
Hema
Performance tuning for data selection statements
For all entries
The FOR ALL ENTRIES addition creates a WHERE clause in which all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
The plus
Large amount of data
Mixing processing and reading of data
Fast internal reprocessing of data
Fast
The Minus
Difficult to program/understand
Memory could be critical (use FREE or PACKAGE size)
Some steps that might make FOR ALL ENTRIES more efficient:
Removing duplicates from the driver table
Sorting the driver table
If possible, convert the data in the driver table to ranges so that a BETWEEN condition is used instead of an OR condition:
FOR ALL ENTRIES IN i_tab
WHERE mykey >= i_tab-low and
mykey <= i_tab-high.
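A sketch of the driver-table preparation (the table lt_driver and its field matnr are hypothetical). Note also the classic pitfall: FOR ALL ENTRIES with an empty driver table ignores the condition entirely.

```abap
" Prepare the driver table: sort and remove duplicates first
SORT lt_driver BY matnr.
DELETE ADJACENT DUPLICATES FROM lt_driver COMPARING matnr.

" Guard against an empty driver table - FOR ALL ENTRIES with an
" empty table drops the condition and reads ALL rows!
IF lt_driver[] IS NOT INITIAL.
  SELECT matnr werks FROM marc
    INTO TABLE lt_marc
    FOR ALL ENTRIES IN lt_driver
    WHERE matnr = lt_driver-matnr.
ENDIF.
```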
Nested selects
The plus:
Small amount of data
Mixing processing and reading of data
Easy to code - and understand
The minus:
Large amounts of data
When mixed processing isn't needed
Performance killer no. 1
Select using JOINS
The plus
Very large amount of data
Similar to Nested selects - when the accesses are planned by the programmer
In some cases the fastest
Not so memory critical
The minus
Very difficult to program/understand
Mixing processing and reading of data not possible
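For illustration, a sketch of such a join over the standard language table T002 and message table T100 (the target table lt_msg is a hypothetical name); it replaces a nested SELECT with a single database round trip:

```abap
" One joined SELECT instead of a nested SELECT over T002 and T100
SELECT t002~spras t100~arbgb t100~msgnr t100~text
  INTO TABLE lt_msg
  FROM t002 INNER JOIN t100
    ON t100~sprsl = t002~spras
  WHERE t100~arbgb = '00'
    AND t100~msgnr = '999'.
```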
Use the selection criteria
SELECT * FROM SBOOK.
CHECK: SBOOK-CARRID = 'LH' AND
SBOOK-CONNID = '0400'.
ENDSELECT.
SELECT * FROM SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
ENDSELECT.
Use the aggregated functions
C4A = '000'.
SELECT * FROM T100
WHERE SPRSL = 'D' AND
ARBGB = '00'.
CHECK: T100-MSGNR > C4A.
C4A = T100-MSGNR.
ENDSELECT.
SELECT MAX( MSGNR ) FROM T100 INTO C4A
WHERE SPRSL = 'D' AND
ARBGB = '00'.
Select with view
SELECT * FROM DD01L
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T
WHERE DOMNAME = DD01L-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
SELECT * FROM DD01V
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
Select with index support
SELECT * FROM T100
WHERE ARBGB = '00'
AND MSGNR = '999'.
ENDSELECT.
SELECT * FROM T002.
SELECT * FROM T100
WHERE SPRSL = T002-SPRAS
AND ARBGB = '00'
AND MSGNR = '999'.
ENDSELECT.
ENDSELECT.
Select Into table
REFRESH X006.
SELECT * FROM T006 INTO X006.
APPEND X006.
ENDSELECT.
SELECT * FROM T006 INTO TABLE X006.
Select with selection list
SELECT * FROM DD01L
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
ENDSELECT.
SELECT DOMNAME FROM DD01L
INTO DD01L-DOMNAME
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
ENDSELECT.
Key access to multiple lines
LOOP AT TAB.
CHECK TAB-K = KVAL.
ENDLOOP.
LOOP AT TAB WHERE K = KVAL.
ENDLOOP.
Copying internal tables
REFRESH TAB_DEST.
LOOP AT TAB_SRC INTO TAB_DEST.
APPEND TAB_DEST.
ENDLOOP.
TAB_DEST[] = TAB_SRC[].
Modifying a set of lines
LOOP AT TAB.
IF TAB-FLAG IS INITIAL.
TAB-FLAG = 'X'.
ENDIF.
MODIFY TAB.
ENDLOOP.
TAB-FLAG = 'X'.
MODIFY TAB TRANSPORTING FLAG
WHERE FLAG IS INITIAL.
Deleting a sequence of lines
DO 101 TIMES.
DELETE TAB_DEST INDEX 450.
ENDDO.
DELETE TAB_DEST FROM 450 TO 550.
Linear search vs. binary
READ TABLE TAB WITH KEY K = 'X'.
READ TABLE TAB WITH KEY K = 'X' BINARY SEARCH.
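One caveat worth spelling out (using the same illustrative table TAB and field K as above): BINARY SEARCH only returns correct results when the table is sorted by the search key.

```abap
SORT tab BY k.                                   " prerequisite for BINARY SEARCH
READ TABLE tab WITH KEY k = 'X' BINARY SEARCH.
IF sy-subrc = 0.
  " row found; sy-tabix points to the matching line
ENDIF.
```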
Comparison of internal tables
DESCRIBE TABLE: TAB1 LINES L1,
TAB2 LINES L2.
IF L1 <> L2.
TAB_DIFFERENT = 'X'.
ELSE.
TAB_DIFFERENT = SPACE.
LOOP AT TAB1.
READ TABLE TAB2 INDEX SY-TABIX.
IF TAB1 <> TAB2.
TAB_DIFFERENT = 'X'. EXIT.
ENDIF.
ENDLOOP.
ENDIF.
IF TAB_DIFFERENT = SPACE.
ENDIF.
IF TAB1[] = TAB2[].
ENDIF.
Modify selected components
LOOP AT TAB.
TAB-DATE = SY-DATUM.
MODIFY TAB.
ENDLOOP.
WA-DATE = SY-DATUM.
LOOP AT TAB.
MODIFY TAB FROM WA TRANSPORTING DATE.
ENDLOOP.
Appending two internal tables
LOOP AT TAB_SRC.
APPEND TAB_SRC TO TAB_DEST.
ENDLOOP.
APPEND LINES OF TAB_SRC TO TAB_DEST.
Deleting a set of lines
LOOP AT TAB_DEST WHERE K = KVAL.
DELETE TAB_DEST.
ENDLOOP.
DELETE TAB_DEST WHERE K = KVAL.
Tools available in SAP to pin-point a performance problem
The runtime analysis (SE30)
SQL Trace (ST05)
Tips and Tricks tool
The performance database
Optimizing the load of the database
Using table buffering
Using buffered tables improves performance considerably. Note that some statements cannot be used with a buffered table; when these statements are used, the buffer is bypassed. These statements are:
SELECT DISTINCT
ORDER BY / GROUP BY / HAVING clauses
Any WHERE clause that contains a subquery or an IS NULL expression
JOINs
SELECT ... FOR UPDATE
If you want to explicitly bypass the buffer, use the BYPASSING BUFFER addition in the SELECT statement.
Use the ABAP SORT Statement Instead of ORDER BY
The ORDER BY clause is executed on the database server, while the ABAP SORT statement is executed on the application server. The database server is usually the bottleneck, so it is often better to move the sort from the database server to the application server.
If you are not sorting by the primary key (i.e. not using ORDER BY PRIMARY KEY) but by another key, it can be better to use the ABAP SORT statement to sort the data in an internal table. Note, however, that for very large result sets this might not be feasible and you would want to let the database server do the sort.
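A minimal sketch of moving the sort to the application server (table MARC is used only as an example; the plant filter '1000' is illustrative):

```abap
" Fetch without ORDER BY, then sort in the internal table
DATA lt_marc TYPE STANDARD TABLE OF marc.

SELECT * FROM marc INTO TABLE lt_marc
  WHERE werks = '1000'.

SORT lt_marc BY matnr.   " executed on the application server
```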
Avoid the SELECT DISTINCT Statement
As with the ORDER BY clause, it can be better to avoid SELECT DISTINCT if some of the fields are not part of an index. Instead, use ABAP SORT plus DELETE ADJACENT DUPLICATES on an internal table to delete the duplicate rows. -
Questions regarding update function module
Hello experts,
I am at a customer site to help them investigate one issue: they have a background job which runs periodically.
In the report, database table A is changed first (new entries are inserted), then an update function module is called via CALL FUNCTION ... IN UPDATE TASK.
Inside the function module, database table B is updated (existing entries are updated).
Customer issue:
Sometimes they find that A is updated as expected while B remains unchanged.
The customer could not find exact steps to reproduce the issue; however, the issue does exist and occurs from time to time.
The issue can only be reproduced in their production system; everything works perfectly well in the dev and Q systems, and it is difficult to debug in production for troubleshooting.
After analyzing the related code, I have one doubt: according to the ABAP help on CALL FUNCTION ... IN UPDATE TASK, the function module is executed in an update work process. I wonder whether this issue might be caused by the update function module failing to be called at all (perhaps due to heavy system load, so that no free update work process could serve the table B update?).
If an update function module fails to execute, is there any system utility that records this? That is to say, will it be recorded somewhere such as SM13 or SM21?
Looking forward to your expertise on this topic!
Best regards,
Jerry
Hello friends,
Thanks a lot for your interests on this issue. I update all my findings:
1. Issue background: this issue occurs in the SAP CRM Channel Management solution, software component CRM-CHM-POS.
2. Due to some limitations, the table CMSD_CI_HISTORY and the history table are not updated in the same LUW. Instead, the first is updated in a normal work process while the other is updated in an update work process. Since I am not the original developer, I didn't know the whole complex scenario (I did see this is done deliberately in note 1764006 - CMS: Sell In Release creating PB with zero available quantity).
So for the moment we have to accept this design.
3. During our testing, we ensured COMMIT WORK was always called.
4. So why does the first table update sometimes fail with no hint at all in the system, such as in ST22 or SM21? (Forget SM13, since the table is updated in a normal work process.)
The root cause is a flaw in the SAP code below.
The code intends to raise an exception if the insertion fails because of duplicate records.
Unfortunately, if you use "INSERT db FROM TABLE xxx" to insert records and some record already exists with the same key, the statement terminates while SY-SUBRC is STILL 0. Compare this with a single-record insertion using "INSERT db FROM <work area>": in the same error situation processing does not terminate, and SY-SUBRC is set to 4.
As a result, even when the insertion fails, line 29 is never executed, since SY-SUBRC is always 0. Because the insertion fails and the exception is swallowed without any notification, the customer suffered without knowing what had happened.
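For reference, the mass-insert behavior Jerry describes can be avoided with the ACCEPTING DUPLICATE KEYS addition, which discards the duplicate rows, inserts the rest, and sets SY-SUBRC to 4 instead of terminating (ztab and lt_rows are placeholder names):

```abap
INSERT ztab FROM TABLE lt_rows ACCEPTING DUPLICATE KEYS.
IF sy-subrc <> 0.
  " At least one row already existed - log or handle it here
  " instead of silently losing the update.
ENDIF.
```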
Best regards,
Jerry -
Performance issue and data getting interchanged in BO Webi report + SAP BW
Hi,
We are using SAP BW queries as the source for creating some BO reports.
Environments :
SAP - SAP BI 7.1
BO - BO XI 3.1
Issues:
The reports were working fine in Dev and Q with less data, but when we pointed the universes to BW production (where we have much more data), the reports take quite a long time to refresh and get timed out. The query has some key figures with customer exits defined to show only one month of data, and BW Accelerator indexes are maintained for the InfoCubes behind this query. The BO report returns data if we apply a filter in the Webi 'Query Panel' to show only the current month's dates, but then the values get interchanged for many objects. For example, for the two objects ABS version and Market region, the values are interchanged at the BO level.
Please let us know if anything needs to be done in BO or BW to fix this issue, if anyone has faced the same.
Also, please let us know whether customer exits and accelerators work fine with BO.
Thanks
Sivakami
Hi,
Thanks Roberto. We'll check the notes
@Ingo,
We were able to solve the performance issue by removing unused key figures and dimensions from the query, but the column value interchange issue still persists.
The build version is 12.3.0.
Query stripping:
Where should we enable query stripping? Some documentation says it is enabled automatically from XI 3.1 SP3. Can you please confirm whether that is the case and what we need to do to enable it?
The column interchange happens when we use dimensions in a certain order: when Product type is used along with Market region, Market region also shows the values of Product type in the Webi report.
Thanks & Regards,
Sivakami -
Performance issue: CALL FUNCTION inside a Loop.
Hi Friends
I have a performance issue: inside a LOOP ... ENDLOOP, a CALL FUNCTION is used which gets data from another database table, and the result is finally appended to another internal table. Please see this:
LOOP AT i_mdkp.
  REFRESH lt_mdtbx.
  CLEAR lt_mdtbx.
  CALL FUNCTION 'READ_MRP_LIST'
    EXPORTING
      idtnum = i_mdkp-dtnum
      icflag = 'X'
    TABLES
      mdtbx  = lt_mdtbx.
  APPEND LINES OF lt_mdtbx TO i_mdtb.
ENDLOOP.
This happens for each record in i_mdkp. Suppose i_mdkp has around 50,000 records; the function module is called that many times.
So, I want to split it. Can I?
Please give me your valuable suggestions.
Regards,
Senthil
If internal table i_mdkp has 50,000 records, that does not mean you need to run 50,000 iterations. You only need dtnum from i_mdkp, so the number of iterations should equal the number of unique dtnum values in the internal table. Sort the internal table by dtnum and delete adjacent duplicates comparing dtnum before looping. Look at the code below.
DATA i_mdkp_tmp LIKE TABLE OF i_mdkp.
IF NOT i_mdkp[] IS INITIAL.
  i_mdkp_tmp[] = i_mdkp[].
  SORT i_mdkp BY dtnum.
  DELETE ADJACENT DUPLICATES FROM i_mdkp COMPARING dtnum.
  REFRESH i_mdtb.
  LOOP AT i_mdkp.
    CALL FUNCTION 'READ_MRP_LIST'
      EXPORTING
        idtnum = i_mdkp-dtnum
        icflag = 'X'
      TABLES
        mdtbx  = lt_mdtbx.
    APPEND LINES OF lt_mdtbx TO i_mdtb.
    REFRESH lt_mdtbx.
  ENDLOOP.
  i_mdkp[] = i_mdkp_tmp[].
ENDIF. -
ODI and Essbase - question about updating structure (temp otls)
Hi,
versions:
ODI 11.1.1
essbase 11.1.2.2.1 (linux)
I'm running an interface intended to update a dimension structure with data from the corresponding dimension in an Oracle relational source. It is actually a pretty simple interface using the KM "IKM SQL to Hyperion Essbase (METADATA)". The execution runs fine with no errors and the structure is updated as expected; however, we noticed that .otl files are being created in a tmp folder and are never deleted. This folder is on the Essbase server (/tmp). If I run this interface many times in a day, I'll have as many files in this folder as executions. So my question is whether anybody knows why those files are being created and why they are not removed when the interface execution ends.
Thinking ahead, I'll have to create a shell script to clean up this folder in order to never have storage issues with those temporary .otl files.
Thanks in advance for any contributions
Eduardo
Agreed.
I also think those files are created by the Java APIs. Just for testing purposes, I ran the rule manually and the .otl files were not created, which is one more reason for me to believe this theory.
However, what makes me wonder is: has no one else run into this issue? What are you all doing with those files?
Maybe there is some setup I have to do so that those files are no longer created.
Thanks,
Eduardo -
Huge performance issue and RSRT
Hi BW gurus,
We are using a BCS cube for our consolidation queries and reports, and there is a huge performance problem.
I need to know what the appropriate size of the global cache should be compared to the local cache. My local cache size is 100 MB and my global cache size is 200 MB.
Also, in the RSRT properties:
Read mode is H: query reads data when you navigate or expand hierarchies.
Cache mode is 4: persistent cache across each application server.
Persistence mode is 3: transparent table (BLOB).
Do I have to change these settings? Please give your suggestions.
Will appreciate with lots of points.
Thanks
Hi folks,
Could you please tell me where exactly to put the breakpoint? I will paste my code. I ran SE30 and the ListCube extraction simultaneously, and it gave me the message "error generating the test frame":
FUNCTION RSSEM_CONSOLIDATION_INFOPROV3.
*" Local interface:
*" IMPORTING
*" REFERENCE(I_INFOPROV) TYPE RSINFOPROV
*" REFERENCE(I_KEYDATE) TYPE RSDRC_SRDATE
*" REFERENCE(I_TH_SFC) TYPE RSDD_TH_SFC
*" REFERENCE(I_TH_SFK) TYPE RSDD_TH_SFK
*" REFERENCE(I_TSX_SELDR) TYPE RSDD_TSX_SELDR
*" REFERENCE(I_FIRST_CALL) TYPE RS_BOOL
*" REFERENCE(I_PACKAGESIZE) TYPE I
*" EXPORTING
*" REFERENCE(E_T_DATA) TYPE STANDARD TABLE
*" REFERENCE(E_END_OF_DATA) TYPE RS_BOOL
*" REFERENCE(E_T_MSG) TYPE RS_T_MSG
*" EXCEPTIONS
*" ERROR_IN_BCS
  statics:
* UT begin:
* this flag is switched in order to record data returned by the current query in UT;
* it can only be switched on/off in debug mode.
    s_record_mode type rs_bool,
    s_qry_memo type char256, " at the moment, for query name
* package no, UUID, for unit testing
    s_packageno type i,
    s_guid type guid_22,
* UT end.
s_first_call like i_first_call,
s_destination type rfcdest,
s_basiccube type rsinfoprov,
s_dest_back type rfcdest,
s_report type programm,
s_bw_local type rs_bool,
sr_data type ref to data,
sr_data_p type ref to data,
st_sfc type t_sfc,
st_sfk type t_sfk,
st_range type t_seqnr_range,
st_hienode type t_seqnr_hienode,
st_hienodename type t_seqnr_hienodename,
st_seltype type t_seqnr_seltype,
st_datadescr type T_DATADESCR,
    s_end_of_data type rs_bool.
  data:
l_ucr_data_read_3 type funcname value 'UCR_DATA_READ_3',
l_packagesize like i_packagesize,
lt_message type t_message,
ls_message like line of e_t_msg,
l_xstring type xstring,
l_nr type i.
field-symbols:
<ls_message> type s_message,
<lt_data> type standard table,
<ls_data> type any,"nos100804
<lt_data_p> type hashed table."nos100804
clear: e_t_data, e_end_of_data, e_t_msg.
* react on packagesize -1
if i_packagesize le 0. "nos050705
l_packagesize = rssem_cs_integer-max.
else.
l_packagesize = i_packagesize.
endif.
if i_first_call = rs_c_true.
s_first_call = rs_c_true.
clear s_end_of_data.
* begin "nos100804
  data:
    lo_structdescr type ref to cl_abap_structdescr
   ,lo_tabledescr  type ref to cl_abap_tabledescr
   ,lo_typedescr   type ref to cl_abap_typedescr.
  data:
    lt_key type table of abap_compname.
field-symbols <ls_component> type abap_compdescr.
create data sr_data_p like line of e_t_data.
assign sr_data_p->* to <ls_data>.
CALL METHOD CL_ABAP_STRUCTDESCR=>DESCRIBE_BY_DATA
EXPORTING
P_DATA = <ls_data>
RECEIVING
P_DESCR_REF = lo_typedescr.
lo_structdescr ?= lo_typedescr.
* collect all key components to lt_key
loop at lo_structdescr->components assigning <ls_component>.
insert <ls_component>-name into table lt_key.
if <ls_component>-name = '&KEYEND'.
exit.
endif.
endloop.
data ls_sfk like line of i_th_sfk.
data l_key type abap_compname.
loop at i_th_sfk into ls_sfk.
l_key = ls_sfk-kyfnm.
if l_key is not initial.
delete table lt_key from l_key.
endif.
l_key = ls_sfk-value_returnnm.
if l_key is not initial.
delete table lt_key from l_key.
endif.
endloop.
  create data sr_data_p like hashed table of <ls_data>
    with unique key (lt_key).
* create data sr_data_p like e_t_data.
  create data sr_data like e_t_data.
* end "nos100804
perform determine_destinations using i_infoprov
changing s_destination
s_dest_back
s_report
s_basiccube.
perform is_bw_local changing s_bw_local.
*--> convert the selection, enhance non-SID values.
*--> handle fiscper7
data:
lt_SFC TYPE RSDRI_TH_SFC
,lt_sfk TYPE RSDRI_TH_SFK
,lt_range TYPE RSDRI_T_RANGE
,lt_RANGETAB TYPE RSDRI_TX_RANGETAB
,lt_HIER TYPE RSDRI_TSX_HIER
   ,lt_adj_hier type t_sfc. "nos290704
  statics: so_convert type ref to lcl_sid_no_sid
   , sx_seldr_fp34 type xstring
   , s_fieldname_fp7 type RSALIAS
   , st_sfc_fp34 TYPE RSDD_TH_SFC.
create object so_convert type lcl_sid_no_sid
exporting i_infoprov = i_infoprov.
* Transform SIDs...
perform convert_importing_parameter
using i_th_sfc
i_th_sfk
i_tsx_seldr
so_convert
e_t_data
changing lt_sfc
lt_sfk
lt_range
lt_rangetab
lt_hier
sx_seldr_fp34
"Complete SELDR as XSTRING
st_sfc_fp34
"SFC of a selection with
"FISCPER3/FISCYEAR
s_fieldname_fp7
"Name of Field for 0FISCPER
"(if requested)
* This is the old routine, but ST_HIENODE and ST_HIENODENAME can
* be neglected, since they are not used at all.
perform prepare_selections
using lt_sfc
lt_sfk
lt_range
lt_rangetab
lt_hier
changing st_sfc
st_sfk
st_range
st_hienode
st_hienodename
st_seltype.
endif.
assign sr_data->* to <lt_data>.
assign sr_data_p->* to <lt_data_p>.
describe table <lt_data_p> lines l_nr.
while l_nr < l_packagesize and s_end_of_data is initial.
if s_dest_back is initial and s_bw_local = rs_c_true.
* Local call
call function l_UCR_DATA_READ_3
EXPORTING
IT_SELTYPE = sT_SELTYPE
IT_HIENODE = sT_HIENODE "not used
IT_HIENODENAME = sT_HIENODENAME "not used
IT_RANGE = sT_RANGE
I_PACKAGESIZE = i_packagesize
I_KEYDATE = i_Keydate
IT_SFC = sT_SFC
IT_SFK = sT_SFK
i_infoprov = i_infoprov
i_rfcdest = s_destination
ix_seldr = sx_seldr_fp34
it_bw_sfc = st_sfc_fp34
it_bw_sfk = i_th_sfk
i_fieldname_fp7 = s_fieldname_fp7
IMPORTING
ET_DATA = <lT_DATA>
E_END_OF_DATA = s_END_OF_DATA
ET_MESSAGE = lT_MESSAGE
et_adj_hier = lt_adj_hier "nos290704
CHANGING
c_first_call = s_first_call.
elseif s_dest_back is initial and s_bw_local = rs_c_false.
* !!! Error !!! No SEM-BCS destination registered for InfoProvider!
if 1 = 2.
message e151(rssem) with i_infoprov.
endif.
ls_message-msgty = 'E'.
ls_message-msgid = 'RSSEM'.
ls_message-msgno = '151'.
ls_message-msgv1 = i_infoprov.
insert ls_message into table e_t_msg.
else.
* remote call to SEM-BCS
* Call UCR_DATA_READ_3 ...
      if s_first_call is not initial.
* get the data description to create the requested return structure
* in the RFC system.
perform get_datadescr
using <lt_data>
          changing st_datadescr.
endif.
call function 'UCR_DATA_READ_4'
destination s_dest_back
exporting i_infoprov = i_infoprov
i_rfcdest = s_destination
i_first_call = s_first_call
i_packagesize = i_packagesize
i_keydate = i_keydate
ix_seldr = sx_seldr_fp34
it_bw_sfc = st_sfc_fp34
it_bw_sfk = i_th_sfk
it_datadescr = st_datadescr
i_fieldname_fp7 = s_fieldname_fp7
importing c_first_call = s_first_call
e_end_of_data = s_end_of_data
e_xstring = l_xstring
tables it_seltype = st_seltype
it_range = st_range
it_hienode = st_hienode "not used
it_hienodename = st_hienodename "not used
it_sfc = st_sfc
it_sfk = st_sfk
et_message = lt_message
et_adj_hier = lt_adj_hier. "nos290704.
clear <lt_data>.
if lt_message is initial.
call function 'RSSEM_UCR_DATA_UNWRAP'
EXPORTING
i_xstring = l_xstring
CHANGING
ct_data = <lt_data>.
endif.
endif.
* convert the returned data (SID & hierarchy).
call method so_convert->convert_nosid2sid
exporting it_adj_hier = lt_adj_hier[] "nos290704
CHANGING
ct_data = <lt_data>.
e_t_data = <lt_data>.
* Begin "nos100804
data l_collect type sy-subrc.
l_collect = 1.
if <lt_data_p> is initial and
<lt_data> is not initial.
call function 'ABL_TABLE_HASH_STATE'
exporting
itab = <lt_data>
IMPORTING
HASH_RC = l_collect "returns 0 if hash key exist.
endif.
if l_collect is initial.
<lt_data_p> = <lt_data>.
else.
loop at <lt_data> assigning <ls_data>.
collect <ls_data> into <lt_data_p>.
endloop.
endif.
* append lines of <lt_data> to <lt_data_p>.
* End "nos100804
* messages
loop at lt_message assigning <ls_message>.
move-corresponding <ls_message> to ls_message.
insert ls_message into table e_t_msg.
endloop.
if e_t_msg is not initial.
raise error_in_bcs.
endif.
describe table <lt_data_p> lines l_nr.
endwhile.
if l_nr <= l_packagesize.
e_t_data = <lt_data_p>.
clear <lt_data_p>.
e_end_of_data = s_end_of_data.
else.
* Begin "nos100804
<lt_data> = <lt_data_p>.
append lines of <lt_data> to l_packagesize to e_t_data.
data l_from type i.
l_from = l_packagesize + 1.
clear <lt_data_p>.
insert lines of <lt_data> from l_from into table <lt_data_p>.
clear <lt_data>.
* End "nos100804
endif.
* UT begin: start to record data
if s_record_mode = rs_c_true.
if i_first_call = rs_c_true.
clear: s_guid, s_packageno.
perform prepare_unit_test_rec_param
using
e_end_of_data
i_infoprov
i_keydate
i_th_sfc
i_th_sfk
i_tsx_seldr
i_packagesize
lt_key
e_t_data
s_qry_memo
changing
s_guid.
endif.
add 1 to s_packageno.
perform prepare_unit_test_rec_data
using
s_guid
s_packageno
e_t_data
i_infoprov
e_end_of_data.
endif. "s_record_mode = rs_c_true
* UT end.
  if not e_end_of_data is initial.
* clean-up
clear: s_first_call, s_destination, s_report, s_bw_local,
st_sfc, st_sfk, st_range, st_hienode, s_basiccube,
st_hienodename, st_seltype, s_dest_back, sr_data,
so_convert , s_end_of_data, sr_data_p."nos100804
free: <lt_data> , <lt_data_p>.
endif.
endfunction.
* Stores the query parameters in a cluster table
form prepare_unit_test_rec_param using i_end_of_data type rs_bool
i_infoprov type rsinfoprov
i_keydate type rrsrdate
i_th_sfc type RSDD_TH_SFC
i_th_sfk type RSDD_TH_SFk
i_tsx_seldr type rsdd_tsx_seldr
i_packagesize type i
it_key type standard table
it_retdata type standard table
i_s_memo type char256
changing c_guid type guid_22.
data:
ls_key type g_rssem_typ_key,
ls_cluster type rssem_rfcpack,
l_timestamp type timestampl.
* get GUID, ret component type
call function 'GUID_CREATE'
importing
ev_guid_22 = c_guid.
ls_key-idxrid = c_guid.
clear ls_key-packno.
* cluster record
get time stamp field l_timestamp.
ls_cluster-infoprov = i_infoprov.
ls_cluster-end_of_data = i_end_of_data.
ls_cluster-system_time = l_timestamp.
ls_cluster-username = sy-uname.
* return data type
data:
lo_tabtype type ref to cl_abap_tabledescr,
lo_linetype type ref to cl_abap_structdescr,
lt_datadescr type t_datadescr,
ls_datadescr like line of lt_datadescr,
lt_retcomptab type abap_compdescr_tab,
ls_retcomptab like line of lt_retcomptab,
lt_rangetab type t_seqnr_range.
lo_tabtype ?= cl_abap_typedescr=>describe_by_data( it_retdata ).
lo_linetype ?= lo_tabtype->get_table_line_type( ).
lt_retcomptab = lo_linetype->components.
* call the sub procedure to use the external format of C instead of the internal format (Unicode);
* otherwise, when the data type is created from the internal format, it won't have the same length as stored in the cluster.
PERFORM get_datadescr USING it_retdata
CHANGING lt_datadescr.
loop at lt_datadescr into ls_datadescr.
move-corresponding ls_datadescr to ls_retcomptab.
append ls_retcomptab to lt_retcomptab.
endloop.
* range, excluding
* record param
export p_infoprov from i_infoprov
p_keydate from i_keydate
p_th_sfc from i_th_sfc
p_th_sfk from i_th_sfk
p_txs_seldr from i_tsx_seldr
p_packagesize from i_packagesize
p_t_retcomptab from lt_retcomptab
p_t_key from it_key
p_memo from i_s_memo
to database rssem_rfcpack(ut)
from ls_cluster
client sy-mandt
id ls_key.
endform.
* Stores the return data in a cluster table
form prepare_unit_test_rec_data using
i_guid type guid_22
i_packageno type i
it_retdata type standard table
i_infoprov type rsinfoprov
i_end_of_data type rs_bool.
data:
l_lines type i,
ls_key type g_rssem_typ_key,
ls_cluster type rssem_rfcpack,
l_timestamp type timestampl.
ls_key-idxrid = i_guid.
ls_key-packno = i_packageno.
describe table it_retdata lines l_lines.
if l_lines = 0.
clear it_retdata.
endif.
* cluster record
get time stamp field l_timestamp.
ls_cluster-infoprov = i_infoprov.
ls_cluster-end_of_data = i_end_of_data.
ls_cluster-system_time = l_timestamp.
ls_cluster-username = sy-uname.
export p_t_retdata from it_retdata
to database rssem_rfcpack(ut)
from ls_cluster
client sy-mandt
id ls_key.
endform.
form convert_importing_parameter
using i_th_sfc TYPE RSDD_TH_SFC
i_th_sfk TYPE RSDD_TH_SFK
i_tsx_seldr TYPE RSDD_TSX_SELDR
io_convert type ref to lcl_sid_no_sid
i_t_data type any table
changing et_sfc TYPE RSDRI_TH_SFC
et_sfk TYPE RSDRI_TH_SFK
et_range TYPE RSDRI_T_RANGE
et_rangetab TYPE RSDRI_TX_RANGETAB
et_hier TYPE RSDRI_TSX_HIER
ex_seldr type xstring
e_th_sfc TYPE RSDD_TH_SFC
             e_fieldname_fp7 type rsalias.
  data lt_seldr TYPE RSDD_TSX_SELDR.
data ls_th_sfc type RRSFC01.
* 0) rename 0BCS_REQUID -> 0REQUID
data l_tsx_seldr like i_tsx_seldr.
data l_th_sfc like i_th_sfc.
data l_th_sfc2 like i_th_sfc. "nos070605
l_tsx_seldr = i_tsx_seldr.
l_th_sfc = i_th_sfc.
data ls_sfc_requid type RRSFC01.
data ls_seldr_requid type RSDD_SX_SELDR.
ls_sfc_requid-chanm = '0BCS_REQUID'.
read table l_th_sfc from ls_sfc_requid into ls_sfc_requid.
if sy-subrc = 0.
delete table l_th_sfc from ls_sfc_requid.
ls_sfc_requid-chanm = '0REQUID'.
insert ls_sfc_requid into table l_th_sfc.
endif.
ls_seldr_requid-chanm = '0BCS_REQUID'.
read table l_tsx_seldr from ls_seldr_requid into ls_seldr_requid.
if sy-subrc = 0.
delete table l_tsx_seldr from ls_seldr_requid.
ls_seldr_requid-chanm = '0REQUID'.
field-symbols: <ls_range> like line of ls_seldr_requid-range-range.
loop at ls_seldr_requid-range-range assigning <ls_range>.
check <ls_range>-keyfl is not initial. "jhn190106
if <ls_range>-sidlow is initial and <ls_range>-low is not initial.
<ls_range>-sidlow = <ls_range>-low.
clear <ls_range>-low.
endif.
if <ls_range>-sidhigh is initial and <ls_range>-high is not initial.
<ls_range>-sidhigh = <ls_range>-high.
clear <ls_range>-high.
endif.
clear <ls_range>-keyfl. "jhn190106
endloop.
insert ls_seldr_requid into table l_tsx_seldr.
endif.
*1) Convert SIDs..., so that all parameter look like the old ones.
call method io_convert->convert_sid2nosid
EXPORTING
it_sfc = l_th_sfc
it_sfk = i_th_sfk
it_seldr = l_tsx_seldr
it_data = i_t_data
IMPORTING
et_sfc = et_sfc
et_sfk = et_sfk
et_range = et_range
et_rangetab = et_rangetab
      e_th_sfc = l_th_sfc2. "nos070605
* Ignore the old hierarchy information:
clear et_hier.
delete et_range where chanm = '0REQUID'.
delete table et_sfc with table key chanm = '0REQUID'.
* 2) Eliminate FISCPER7 from the new structures:
lt_seldr = i_tsx_seldr. "nos131004
e_th_sfc = l_th_sfc.
* fiscper7 can be deleted completely from the SID selection, because
* it is also treated within et_range...
clear e_fieldname_fp7.
delete lt_seldr where chanm = cs_iobj_time-fiscper7."nos131004
* Begin "nos131004
* Ensure that there is no gap in the seldr.
data:
ls_seldr like line of lt_seldr
,l_fems_act like ls_seldr-fems
   ,l_fems_new like ls_seldr-fems.
loop at l_tsx_seldr into ls_seldr
where chanm ne cs_iobj_time-fiscper7.
if ls_seldr-fems ne l_fems_act.
l_fems_act = ls_seldr-fems.
add 1 to l_fems_new.
endif.
ls_seldr-fems = l_fems_new.
insert ls_seldr into table lt_seldr.
endloop.
* end "nos131004
e_th_sfc = l_th_sfc2. "nos070605
* Is fiscper7 in the query? (BCS always requires two fields)
read table e_th_sfc with key chanm = cs_iobj_time-fiscper7
into ls_th_sfc.
if sy-subrc = 0.
* ==> YES
* --> change the SFC, so that FISCPER3 and FISCYEAR are requested.
* The table ET_RANGE also contains the selection for
* FISCPER3/FISCYEAR.
* But since E_FIELDNAME_FP7 is also transferred to BCS, the
* transformation of the data back to FISCPER7 is done on the BCS side.
e_fieldname_fp7 = ls_th_sfc-KEYRETURNNM.
"begin nos17060
if e_fieldname_fp7 is initial.
e_fieldname_fp7 = ls_th_sfc-sidRETURNNM.
translate e_fieldname_fp7 using 'SK'.
endif.
"end nos17060
delete table e_th_sfc from ls_th_sfc.
ls_th_sfc-chanm = cs_iobj_time-fiscper3.
ls_th_sfc-keyreturnnm = ls_th_sfc-chanm.
insert ls_th_sfc into table e_th_sfc.
ls_th_sfc-chanm = cs_iobj_time-fiscyear.
ls_th_sfc-keyreturnnm = ls_th_sfc-chanm.
insert ls_th_sfc into table e_th_sfc.
endif.
* Store the SELDR in an XSTRING and unpack it just before selecting data
* in BW. It is not interpreted in BCS!
export t_seldr = lt_seldr
* Store also the SFC, because the BW systems might be different rel./SP.
t_bw_sfc = e_th_sfc to data buffer ex_seldr compression on.
endform. "convert_importing_parameter
*&---------------------------------------------------------------------*
*&      Form  get_datadescr
*&---------------------------------------------------------------------*
*       text
*      -->IT_DATA       text
*      -->ET_DATADESCR  text
*----------------------------------------------------------------------*
form get_datadescr
using it_data type any table
changing et_datadescr type t_datadescr.
data: lr_data type ref to data
, lo_descr TYPE REF TO CL_ABAP_TYPEDESCR
, lo_elemdescr TYPE REF TO CL_ABAP_elemDESCR
, lo_structdescr TYPE REF TO CL_ABAP_structDESCR
, lt_components type abap_component_tab
, ls_components type abap_componentdescr
, ls_datadescr type s_datadescr.
field-symbols: <ls_data> type any
, <ls_components> type abap_compdescr.
clear et_datadescr.
create data lr_data like line of it_data.
assign lr_data->* to <ls_data>.
CALL METHOD CL_ABAP_STRUCTDESCR=>DESCRIBE_BY_DATA
EXPORTING
P_DATA = <ls_data>
RECEIVING
P_DESCR_REF = lo_descr.
lo_structdescr ?= lo_descr.
CALL METHOD lo_structdescr->GET_COMPONENTS
RECEIVING
P_RESULT = lt_components.
loop at lo_structdescr->components assigning <ls_components>.
move-corresponding <ls_components> to ls_datadescr.
if ls_datadescr-type_kind = cl_abap_elemdescr=>typekind_char
or ls_datadescr-type_kind = cl_abap_elemdescr=>typekind_num.
read table lt_components with key name = <ls_components>-name
into ls_components.
if sy-subrc = 0.
lo_elemdescr ?= ls_components-type.
ls_datadescr-length = lo_elemdescr->output_length.
endif.
endif.
append ls_datadescr to et_datadescr.
endloop.
endform. "get_datadescr
Any inputs would be appreciated.
thanks -
Downloaded Trial - An issues and some questions
I just downloaded the trial and have some issues and questions I hope someone here can help with.
My problem:
I am unable to use the Media Encoder to create any output files. I get the error, "The source and output color space are not compatible or a conversion does not exist". This happens when I attempt to composite video clips. If I use a single video clip, I don't have this issue.
Using the Export - Movie menu option works (for the same composite), but the amount of time it takes (an hour and 20 minutes for a 4-minute movie) is too much, and I can't see myself going too far or doing too much work in this fashion. I believe my hardware specs are well above the required specs. Is this normal?
Questions:
1. I can't import MPEG video. Is this by design? If so, why?
2. I can't export to WMV. Is this by design? If so, why?
3. What is the true minimum hardware spec for someone intending to edit/create HDV? In other words, what is someone out there using that works? I'm really not looking to have to spend a ton of money on a new machine :).

John,
Thank you for the info. So I take it I can use mpeg as my source in the purchased product?
Do you know why I am not able to use the media encoder options to generate any output? What does the error message mean?
Jeron,
I'm not sure what your reply is in response to (since I didn't ask about HDV). :)
I would like to hear from anyone about the hardware they're using for HDV work.
Are the minimum specs for HDV cited in the documentation really a viable editing experience? -
Performance issue when using select count on large tables
Hello Experts,
I have a requirement where I need to get a count of data from a database table. Later on, I need to display the count in ALV format.
As per my requirement, I have to use this SELECT COUNT inside nested loops.
Below is the count snippet:
LOOP at systems assigning <fs_sc_systems>.
LOOP at date assigning <fs_sc_date>.
SELECT COUNT( DISTINCT crmd_orderadm_i~header )
FROM crmd_orderadm_i
INNER JOIN bbp_pdigp
ON crmd_orderadm_i~client EQ bbp_pdigp~client "MANDT is referred to as client
AND crmd_orderadm_i~guid EQ bbp_pdigp~guid
INTO w_sc_count
WHERE crmd_orderadm_i~created_at BETWEEN <fs_sc_date>-start_timestamp
AND <fs_sc_date>-end_timestamp
AND bbp_pdigp~zz_scsys EQ <fs_sc_systems>-sys_name.
endloop.
endloop.
In the above code snippet,
<fs_sc_systems>-sys_name is having the system name,
<fs_sc_date>-start_timestamp is having the start date of month
and <fs_sc_date>-end_timestamp is the end date of month.
Also the data in tables crmd_orderadm_i and bbp_pdigp is very large and it increases every day.
Now, the above select query is taking a lot of time to return the count, due to which I am facing performance issues.
Can anyone please help me optimize this code?
Thanks,
Suman

Hi Choudhary Suman,
Try this:
SELECT crmd_orderadm_i~header
INTO TABLE it_header "internal table
FROM crmd_orderadm_i
INNER JOIN bbp_pdigp
ON crmd_orderadm_i~client EQ bbp_pdigp~client
AND crmd_orderadm_i~guid EQ bbp_pdigp~guid
" Note: FOR ALL ENTRIES does not allow BETWEEN on fields of the FAE
" table, so drive the FAE on the systems table and restrict created_at
" via a ranges table (r_created) filled from the date table beforehand.
FOR ALL ENTRIES IN systems
WHERE crmd_orderadm_i~created_at IN r_created
AND bbp_pdigp~zz_scsys EQ systems-sys_name.
SORT it_header BY header.
DELETE ADJACENT DUPLICATES FROM it_header
COMPARING header.
describe table it_header lines v_lines.
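If the per-system, per-month breakdown from the original nested loops is still needed, another option is to read the join result once and derive each bucket's distinct count in ABAP. This is a sketch only, not verified in a system; `lt_keys`, `gv_min_ts`, and `gv_max_ts` (an overall window covering all rows of the `date` table) are assumed names:

```abap
* Sketch: one database read, then per-system / per-month distinct
* counts in ABAP instead of one SELECT COUNT per combination.
TYPES: BEGIN OF ty_key,
         zz_scsys   TYPE bbp_pdigp-zz_scsys,
         header     TYPE crmd_orderadm_i-header,
         created_at TYPE crmd_orderadm_i-created_at,
       END OF ty_key.
DATA: lt_keys       TYPE SORTED TABLE OF ty_key
                    WITH NON-UNIQUE KEY zz_scsys header created_at,
      l_prev_header TYPE crmd_orderadm_i-header.
FIELD-SYMBOLS: <ls_key> TYPE ty_key.

SELECT bbp_pdigp~zz_scsys crmd_orderadm_i~header
       crmd_orderadm_i~created_at
  FROM crmd_orderadm_i
  INNER JOIN bbp_pdigp
    ON crmd_orderadm_i~client EQ bbp_pdigp~client
   AND crmd_orderadm_i~guid   EQ bbp_pdigp~guid
  INTO TABLE lt_keys
  WHERE crmd_orderadm_i~created_at BETWEEN gv_min_ts AND gv_max_ts.

LOOP AT systems ASSIGNING <fs_sc_systems>.
  LOOP AT date ASSIGNING <fs_sc_date>.
    CLEAR: w_sc_count, l_prev_header.
    " within one system, rows with the same header are adjacent in the
    " sorted table, so a changing header means a new distinct value
    LOOP AT lt_keys ASSIGNING <ls_key>
         WHERE zz_scsys    = <fs_sc_systems>-sys_name
           AND created_at >= <fs_sc_date>-start_timestamp
           AND created_at <= <fs_sc_date>-end_timestamp.
      IF <ls_key>-header <> l_prev_header.
        ADD 1 TO w_sc_count.
        l_prev_header = <ls_key>-header.
      ENDIF.
    ENDLOOP.
    " w_sc_count now holds the distinct header count for this bucket
  ENDLOOP.
ENDLOOP.
```

The single fetch trades memory for database round trips, which usually pays off when the two nested loops produce many combinations.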
Hope this information helps you.
Regards,
José -
Bapi Or Function Module for Updating a table
Can you please let me know whether there is any BAPI or function module to update a few fields of a standard table using an internal table?
Hi Shiva Kumar Tirumalasetty ,
There is no FM/BAPI to update any SAP table directly. SAP does not recommend developing an FM/BAPI like that, as it can cause data inconsistency problems.
We have to search for requirement-specific BAPIs/FMs; if none exists, we need to think about the alternate options (LSMW/BDC etc.).
Hope this answers your question.
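For illustration, the recommended pattern looks roughly like this. The BAPI name below is a placeholder, not a real function module; find the actual BAPI for your business object (e.g. via transaction BAPI) first:

```abap
* Pattern sketch only: 'BAPI_XXX_CHANGE' is a placeholder name.
* Update through the object-specific BAPI, check its messages,
* then commit or roll back explicitly.
DATA: lt_return TYPE STANDARD TABLE OF bapiret2,
      ls_return TYPE bapiret2.

CALL FUNCTION 'BAPI_XXX_CHANGE'     "placeholder: object-specific BAPI
  TABLES
    return = lt_return.

READ TABLE lt_return INTO ls_return WITH KEY type = 'E'.
IF sy-subrc <> 0.
  " no error message returned: make the update permanent
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
    EXPORTING
      wait = 'X'.
ELSE.
  CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
ENDIF.
```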
Thanks,
Greetson -
Hi, Bapi or function module to update RBCO table from an internal table.
I have a requirement to update the RBCO table from an internal table. Is there any BAPI or function module, or any other method other than the UPDATE and MODIFY statements?
Moderator message: Welcome to SCN!
Moderator message: please do more research before asking, show what you have done yourself when asking.
[Rules of engagement|http://wiki.sdn.sap.com/wiki/display/HOME/RulesofEngagement]
[Asking Good Questions in the Forums to get Good Answers|/people/rob.burbank/blog/2010/05/12/asking-good-questions-in-the-forums-to-get-good-answers]
Edited by: Thomas Zloch on Jul 12, 2011 12:28 PM

I don't know if any FM exists for your requirement. But you may like to copy it into a custom table and modify it according to your enterprise needs.
-
Function module for updating COBRB table
Hi,
Is there any function module to update the entries in COBRB table.
I have already tried the following but it's not updating ithe entries:
1) k_settlement_rule_fill : get the objnr from PRPS and use it for COBRA and COBRB
2) k_settlement_rule_delete : using objnr only
3) k_posting_rule_insert
Regards
Prabhat

Hi,
Please check this FM K_SETTLEMENT_RULES_UPDATE.
Check FM AUC_SETTLEMENT_POST for sample codes.
Regards,
Ferry Lianto -
Performance manager sql action rule for updating metric table
Hi, I need to update the metric stop_date using a SQL action rule (Performance Manager execute SQL action rule). My problem is that I can't update stop_date in the PM Repository Database. The SQL action database connection is properly set, but when I set the SQL for executing an update on table ci_probe and schedule the rule, the system doesn't seem to connect to the database (the rule runs successfully, but the table ci_probe is not updated). I don't understand whether the problem is the database connection or wrong SQL code.
Can Anyone help me with suggestions or sql action rule samples?
Thanks
Luigi
Edited by: Luigi Oliva on Jun 13, 2008 1:32 PM

Hi, it's working now. The problem was in repeat_interval.
I changed
repeat_interval => 'FREQ=DAILY;BYSECOND=10',
to
repeat_interval => 'FREQ=SECONDLY;BYSECOND=10',
Thanks,
Edited by: NSK2KSN on Jul 26, 2010 11:14 AM -
Bapi function module to update PRPS table
Hi ,
Presently I have a requirement to update some data from the ZIOS table into the PRPS table. Can anyone tell me which BAPI function module updates data in the PRPS table?
<REMOVED BY MODERATOR - REQUEST OR OFFER POINTS ARE FORBIDDEN>
Thanks,
Satish Raju
Edited by: Alvaro Tejada Galindo on Jan 12, 2010 11:46 AM

These ZZ fields are specific to your application; use the EXTENSION parameters.
Look in the BAPI_PS_INITIALIZATION documentation; there is an explanation of how to fill specific fields.
For the BAPIs used to create and change project definitions, WBS
elements, networks, activities, and activity elements, you can
automatically fill the fields of the tables PROJ, PRPS, AUFK, and AFVU
that have been defined for customer enhancements in the standard system.
For this purpose, help structures that contain the respective key
fields, as well as the CI include of the table are supplied. The BAPIs
contain the parameter ExtensionIN in which the enhancement fields can be
entered and also provide BAdIs in which the entered values can be
checked and, if required, processed further.
CI Include Help Structure Key
CI_PRPS BAPI_TE_WBS_ELEMENT WBS_ELEMENT
Procedure for Filling Standard Enhancements
Before you call the BAPI for each object that is to be created or
changed, for which you want to enter customer-specific table enhancement
fields, add a data record to the container ExtensionIn:
o STRUCTURE: Name of the corresponding help structure
o VALUEPART1: Key of the object + start of the data part
o VALUEPART2-4: If required, the continuation of the data part
VALUEPART1 to VALUEPART4 are therefore filled consecutively, first with
the keys that identify the table rows and then with the values of the
customer-specific fields. By structuring the container in this way, it
is possible to transfer its content with one MOVE command to the
structure of the BAPI table extension.
Note that when objects are changed, all fields of the enhancements are
overwritten (as opposed to the standard fields, where only those fields
for which the respective update indicator is set are changed).
Therefore, even if you only want to change one field, all the fields
that you transfer in ExtensionIn must be filled.
You have to use these parameters in BAPI_BUS2054_GETDATA as well as in BAPI_BUS2054_CHANGE_MULTI.
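The container filling described above can be sketched as follows. ZZ_MYFIELD and the WBS key value are hypothetical; substitute the fields your CI_PRPS include actually contains:

```abap
* Sketch: fill ExtensionIn for a WBS-element customer enhancement.
* ZZ_MYFIELD is a hypothetical CI_PRPS field; 'P-0001' is a dummy key.
DATA: lt_extension TYPE STANDARD TABLE OF bapiparex,
      ls_extension TYPE bapiparex,
      ls_te_wbs    TYPE bapi_te_wbs_element.

ls_te_wbs-wbs_element = 'P-0001'.     "key that identifies the table row
ls_te_wbs-zz_myfield  = 'VALUE'.      "customer-specific field

ls_extension-structure  = 'BAPI_TE_WBS_ELEMENT'.
* one MOVE transfers key + data part into the VALUEPART fields
ls_extension-valuepart1 = ls_te_wbs.
APPEND ls_extension TO lt_extension.
* pass lt_extension as ExtensionIn to BAPI_BUS2054_CHANGE_MULTI
```

Remember the caveat above: on change, every enhancement field you transfer is written, so fill all of them even when only one should differ.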
Regards -
Any Bapi or function Module to update standard table
Can you please let me know whether there is any BAPI or function module to update a few fields of a standard table using an internal table?
I don't know if any FM exists for your requirement. But you may like to copy it into a custom table and modify it according to your enterprise needs.