BAPI_SAG_CHANGE - ITEM_COND_VALIDITY - Update VALID_TO & VALID_FROM
Hi all, I'm using BAPI_SAG_CHANGE to update scheduling agreement conditions (price and validity dates), but I can't get it to work.
I can update prices, but when I try to update either date of the ITEM_COND_VALIDITY table (condition table A016), the BAPI creates a new condition record (new KNUMH).
If I make the same change from ME32L, the dates are updated correctly and the KNUMH does not change.
I'm trying to update only one condition period.
This is the A016 data:
KAPPL  KSCHL  EVRTN       EVRTP  DATBI       DATAB       KNUMH
M      ZP00   6200000200  00001  31.12.9999  01.01.0001  0000479323
This is my SE37 input data:
PURCHASINGDOCUMENT   6200000200
ITEM_COND_VALIDITY   00001  0000479323  01.07.2011  01.08.2011   (item, serial id, valid from, valid to)
ITEM_COND_VALIDITYX  00001  0000479323  X  X
ITEM_CONDITION       00001  0000479323  01  ZP00  U   (item, serial id, cond. count, cond. type, change id)
ITEM_CONDITIONX      00001  0000479323  01  X  X  X
After that I call BAPI_TRANSACTION_COMMIT in the same LUW.
Thanks all!!
Sebastián Soto
Hello Juan,
Just a quick question: did you get this to work? I am also using BAPI_SAG_CHANGE to add new prices, and I would like to know how you did it.
Regards,
Jamiros
Similar Messages
-
Problem: pricing conditions are not updated by BAPI_SAG_CHANGE
Hello friends,
We have run into a problem with the standard BAPI BAPI_SAG_CHANGE.
We pass item-level data for 10 items to BAPI_SAG_CHANGE, and the BAPI successfully updates all 10 items in the scheduling agreement.
The problem is that the pricing conditions are not updated for the individual items in the scheduling agreement.
Please suggest a solution.
Thanks,
Kumar
Hi Jons,
Thanks for responding to my request. Please find my code below:
CALL FUNCTION 'BAPI_SAG_CHANGE'
  EXPORTING
    purchasingdocument     = st_header-number
    header                 = st_header
    headerx                = st_headerx
*   vendor_address         =
*   head_export_import     =
*   head_export_importx    =
*   testrun                =
*   technical_data         =
* IMPORTING
*   exp_header             =
*   exp_head_export_import =
  TABLES
    return                 = st_return
* Begin of modification by kirankumar 07/12/2009
*   item                   = st_item
    item                   = st_item[]
* End of modification by kirankumar 07/12/2009
    itemx                  = st_itemx[]
    account                = st_account
    accountprofitsegment   = st_accountprofitsegment
    accountx               = st_accountx
    schedule               = st_schedule
    schedulex              = st_schedulex
    sc_component           = st_sc_component
    sc_componentx          = st_sc_componentx
    shipping               = st_shipping[]
    shippingx              = st_shippingx[]
    shipping_exp           = st_shipping_exp
    delivery_address       = st_delivery_address
    item_cond_validity     = st_item_cond_validity[]
    item_cond_validityx    = st_item_cond_validityx[]
    item_condition         = st_item_condition[]
    item_conditionx        = st_item_conditionx[]
    item_cond_scale_value  = st_item_cond_scale_value
    item_cond_scale_quan   = st_item_cond_scale_quan
    export_import          = st_export_import
    export_importx         = st_export_importx
    item_text              = st_item_text
    header_text            = st_header_text
    head_cond_validity     = st_head_cond_validity
    head_cond_validityx    = st_head_cond_validityx
    head_condition         = st_head_condition
    head_conditionx        = st_head_conditionx
    head_cond_scale_val    = st_head_cond_scale_val
    head_cond_scale_quan   = st_head_cond_scale_quan
    partner                = st_partner[]
    partnerx               = st_partnerx[]
    extensionin            = st_extensionin
    extensionout           = st_extensionout.
-
RE: Customized Auditing of Database Tables
Dear experts
I really need your help.
We have a requirement to do auditing on custom database tables. I know there is a flag on the table to keep a change log, which can then be checked with SCU3, but this uses a lot of resources that we cannot afford. We need to design our own custom table that records any data changes to any of our custom tables.
This table needs to show us the user, the date and time, the old value of the field, the new value of the field, the field name, and the table name. There will also be a report to check who changed, edited, added or deleted which entries in any of the custom tables. My problem is that when I update my custom table that holds the data, the logical flow of the data does not make sense. Please see the example below:
Z_SELLING_PRICE table (the client does not want to use standard pricing, which is why we use custom tables):
MANDT  MATNR  PRICE  VALID_TO  VALID_FROM  CURRENCY
100    TYRES  200    20100201  20100228    ZAR    (user changes price to 100)
100    TYRES  250    20100301  20100331    ZAR    (user changes VALID_TO to 20100302)
100    RIMS   150    20100301  20100331    ZAR
Z_AUDIT table:
MANDT  TABLE            FIELD     OLDVALUE  NEWVALUE  CHANGE_TYPE  USER   DATE      TIME
100    Z_SELLING_PRICE  PRICE     200       100       Modified     PETER  20100202  121216
100    Z_SELLING_PRICE  VALID_TO  20100301  20100302  Modified     JANE   20100301  154553
My problem here is: how will my report know that the price that was changed was the price on Tyres, as opposed to Rims?
Maybe my logic is too complicated. And if I save all the fields, regardless of whether they were changed, so that a report over Z_AUDIT can make logical sense, how will I know how the field names combine to make up the record that was changed?
Please help.
Kind regards
Hey Thomas,
Thanks for your quick response. Yes, the resources (in my opinion) would probably be about the same, but unfortunately we have a couple of Basis consultants who convinced the business otherwise. I get the idea that they are wary that, if they open the system to this function and someone adds a log to a standard table, the system might not have sufficient memory.
So the business decided that they will not take this "risk" and asked the ABAP team to design our own updates.
Another option that was presented was to add USER, DATE and TIME fields to every custom table that needs logging. But in some cases we cannot allow duplicate keys, and if I make these fields part of the key, that logic will be bypassed: the same user could enter a "duplicate key" the next day, because it would not be seen as a duplicate entry in the database since the date differs.
So by adding a section to the user exit behind SM30 and calling the same function module (which still has to be written), we would be able to do this "the ABAP way". I was just wondering if there is (hopefully) someone who had a similar situation, so that we can maybe duplicate the logic?
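One way to keep each audit row attributable is to store the changed record's key fields in the audit table itself, alongside the field-level old/new values. A minimal Python sketch of that comparison logic, with illustrative field names (this is not the actual Z-table design):

```python
def build_audit_rows(table, key, old_rec, new_rec, user, date, time):
    """Compare the old and new versions of one record and emit one
    audit row per changed field. Every row carries the record's key
    (e.g. MATNR), so a report can tie each change back to its record."""
    rows = []
    for field, old_value in old_rec.items():
        new_value = new_rec.get(field)
        if old_value != new_value:
            rows.append({
                "TABLE": table, "KEY": key, "FIELD": field,
                "OLDVALUE": old_value, "NEWVALUE": new_value,
                "CHANGE_TYPE": "Modified",
                "USER": user, "DATE": date, "TIME": time,
            })
    return rows

# Peter changes the price of the TYRES record from 200 to 100:
old = {"PRICE": "200", "VALID_TO": "20100228"}
new = {"PRICE": "100", "VALID_TO": "20100228"}
audit = build_audit_rows("Z_SELLING_PRICE", "TYRES", old, new,
                         "PETER", "20100202", "121216")
```

With the record key stored per audit row, the report no longer has to guess which record a PRICE change belongs to.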
Kind regards -
Changing a BOM Item several times
Hi,
Programmatically, I want to update a BOM several times; specifically, I want to update its VALID_FROM and VALID_TO dates several times. For this I am using the function module CSAP_MAT_BOM_MAINTAIN. The problem is that I am only able to change the
VALID_FROM / VALID_TO dates (using the CHANGE_NO and CHG_NO_TO fields) a single time. Once the CHANGE_NO and CHG_NO_TO fields are populated, it does not allow me any further changes to the VALID_FROM and VALID_TO fields.
So, how can I change the VALID_FROM and VALID_TO dates of a BOM several times?
Regards,
Tan
Hi,
Try using the BAPI BAPI_MATERIAL_BOM_GROUP_CREATE.
Regards
Raj.D -
Restrict the report to a time interval containing current date
hi all,
We have a time-dependent master data object for which there are two master data records with different validity intervals, for example:
COMP  VALID_TO    VALID_FROM  AMOUNT
ab00  10/12/2006  01/01/100   100
ab00  31/12/9999  11/12/2006  200
Now we want to show only the record whose interval (VALID_FROM to VALID_TO) contains the current date, i.e. only the second record should appear in the report.
How can we achieve this at query level?
any help will be appreciated.
thanks,
Rk
Hi RK,
Use the query key date (set it to the current date); the query will then pick only the data falling into that particular interval (VALID_FROM to VALID_TO).
You can add the key date at Query Properties --> show variable.
Please check : [Query Properties|http://help.sap.com/saphelp_nw04/Helpdata/EN/07/0ab63c71f41d5ce10000000a114084/content.htm]
Hope it Helps
Srini -
hey friends,
I need to update custom fields and the VALID_FROM field by executing a BAPI directly in SE37. Could anyone help me with the correct process for testing this with a commit?
Thanks
Tina
Hi,
Use the EXTENSIONIN internal table of the BAPI to change the custom fields.
Then, in the SE37 test sequence, call the function module BAPI_TRANSACTION_COMMIT to commit the changes.
Thanks,
Naren
Message was edited by: Narendran Muthukumaran -
Unique constraint on valid_to and valid_from dates
Suppose I have a table:
create table test (name varchar2(40), valid_from date, valid_to date);
insert into test values ('fred', to_date('01/04/2008', 'dd/mm/yyyy'), to_date('01/04/2008', 'dd/mm/yyyy'));
insert into test values ('fred', to_date('04/04/2008', 'dd/mm/yyyy'), to_date('06/04/2008', 'dd/mm/yyyy'));
insert into test values ('fred', to_date('08/04/2008', 'dd/mm/yyyy'), to_date('09/04/2008', 'dd/mm/yyyy'));
How can I enforce uniqueness such that, at any one point in time, only one row exists in the table for each name?
e.g. insert into test values ('fred', to_date('02/04/2008', 'dd/mm/yyyy'), to_date('03/04/2008', 'dd/mm/yyyy')); -- success!
insert into test values ('fred', to_date('02/04/2008', 'dd/mm/yyyy'), to_date('05/04/2008', 'dd/mm/yyyy')); -- fail!
insert into test values ('fred', to_date('01/04/2008', 'dd/mm/yyyy'), to_date('03/04/2008', 'dd/mm/yyyy')); -- fail!
insert into test values ('fred', to_date('05/04/2008', 'dd/mm/yyyy'), to_date('05/04/2008', 'dd/mm/yyyy')); -- fail!
insert into test values ('fred', to_date('07/04/2008', 'dd/mm/yyyy'), to_date('11/04/2008', 'dd/mm/yyyy')); -- fail!
Is there a method using FBIs or unique constraints? I'd really rather avoid triggers and PL/SQL if I can, but I can't think of a way...
Message was edited by:
Boneist
Added some extra test conditions
How about this pair of indexes:
CREATE UNIQUE INDEX test_fromdate_idx ON test(name,valid_from);
CREATE UNIQUE INDEX test_todate_idx ON test(name,valid_to);
Here is the test:
SQL> create table test (name varchar2(40), valid_from date, valid_to date);
Table created.
SQL> insert into test values ('fred', to_date('01/04/2008', 'dd/mm/yyyy'), to_date('01/04/2008', 'dd/mm/yyyy'));
1 row created.
SQL> insert into test values ('fred', to_date('04/04/2008', 'dd/mm/yyyy'), to_date('06/04/2008', 'dd/mm/yyyy'));
1 row created.
SQL> insert into test values ('fred', to_date('08/04/2008', 'dd/mm/yyyy'), to_date('09/04/2008', 'dd/mm/yyyy'));
1 row created.
SQL> CREATE UNIQUE INDEX test_fromdate_idx ON test(name,valid_from);
Index created.
SQL> CREATE UNIQUE INDEX test_todate_idx ON test(name,valid_to);
Index created.
SQL> insert into test values ('fred', to_date('02/04/2008', 'dd/mm/yyyy'), to_date('03/04/2008', 'dd/mm/yyyy'));
1 row created.
SQL> insert into test values ('fred', to_date('02/04/2008', 'dd/mm/yyyy'), to_date('05/04/2008', 'dd/mm/yyyy'));
insert into test values ('fred', to_date('02/04/2008', 'dd/mm/yyyy'), to_date('05/04/2008', 'dd/mm/yyyy'))
ERROR at line 1:
ORA-00001: unique constraint (TEST_USER.TEST_FROMDATE_IDX) violated
SQL> insert into test values ('fred', to_date('01/04/2008', 'dd/mm/yyyy'), to_date('03/04/2008', 'dd/mm/yyyy'));
insert into test values ('fred', to_date('01/04/2008', 'dd/mm/yyyy'), to_date('03/04/2008', 'dd/mm/yyyy'))
ERROR at line 1:
ORA-00001: unique constraint (TEST_USER.TEST_FROMDATE_IDX) violated
SQL> spool off;
Hope this helps! -
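For reference, the requirement "only one row per name at any one point in time" is an interval-overlap condition, not just uniqueness of the endpoint dates. A small Python sketch of the check such a constraint would have to enforce (dates abbreviated to sortable MMDD strings for brevity):

```python
def overlaps(new_from, new_to, row_from, row_to):
    """True if the closed ranges [new_from, new_to] and
    [row_from, row_to] share at least one day."""
    return new_from <= row_to and new_to >= row_from

# fred's existing rows: 01/04-01/04, 04/04-06/04, 08/04-09/04
existing = [("0401", "0401"), ("0404", "0406"), ("0408", "0409")]

def can_insert(new_from, new_to):
    """An insert is allowed only if it overlaps no existing row."""
    return not any(overlaps(new_from, new_to, f, t) for f, t in existing)
```

This reproduces the success/fail pattern of the examples above: 02/04-03/04 is allowed, while 02/04-05/04, 01/04-03/04, 05/04-05/04 and 07/04-11/04 all collide with an existing range.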
Problem in updating the GR processing time using BAPI 'BAPI_SAG_CHANGE'
Hi,
I am using the BAPI BAPI_SAG_CHANGE to update the GR processing time (BAPIMEOUTITEM-GR_PR_TIME) of scheduling agreements.
Issue: when no value is maintained yet for the GR processing time of a particular scheduling agreement, the BAPI updates it successfully. But when some value already exists, the BAPI does not update the field, even though the RETURN parameter reports 'successfully updated'.
Can somebody suggest what the reason could be?
OR
Is there another way/FM to update scheduling agreements?
Thanks,
Ravindra
This appears to be an old-style BAPI, with ...X tables. Did you populate the field in the X version of the OUTITEM structure for the column(s) to be changed? Did you call the COMMIT BAPI after the update?
-
Unique constraint violation while updating a non PK column
Hi,
I seem to have found this strange error.
What I try to do is bulk-fetch a cursor into some table arrays, with a limit of 1000.
Then, using a FORALL with SAVE EXCEPTIONS at the end,
I update a table with the values from one of the table arrays.
The column I update is not part of a PK.
I catch the error message: ORA-24381
by using PRAGMA exception_init(dml_errors, -24381);
and later on :
WHEN dml_errors THEN
errors := SQL%BULK_EXCEPTIONS.COUNT;
FOR i IN 1..sql%BULK_EXCEPTIONS.count LOOP
lr_logging.parameters:= 'index = ' || sql%BULK_EXCEPTIONS(i).error_index || 'error = ' ||Sqlerrm(-sql%BULK_EXCEPTIONS(i).error_code) ;
END LOOP;
I insert these errors in another table. and i get 956 errors
first one is :
index = 3error = ORA-00001: unique constraint (.) violated
last one is
index = 1000error = ORA-00001: unique constraint (.) violated
How can this be, since I don't update a PK column?
FULL CODE IS:
PROCEDURE Update_corr_values( as_checkdate_from IN VARCHAR2,
as_checkdate_until IN VARCHAR2,
as_market IN VARCHAR2)
IS
LS_MODULE_NAME CONSTANT VARCHAR2(30) := 'update_values';
lr_logging recon_logging.logrec;
CURSOR lc_update IS
SELECT /*+ORDERED*/c.rowid,c.ralve_record_id,d.value,c.timestamp,f.value
FROM rcx_allocated_values a,
rcx_allocated_values b,
meter_histories e,
rcx_allocated_lp_value c,
rcx_allocated_lp_value d,
counter_values f
WHERE a.slp_type NOT IN ('S89', 'S88', 'S10', 'S30') --AELP
AND b.slp_type IN ('S89', 'S88') --residu
AND a.valid_from >= to_date(as_checkdate_from,'DDMMYYYY HH24:MI')
AND a.valid_to <= to_date(as_checkdate_until,'DDMMYYYY HH24:MI')
AND a.market = as_market
AND a.market = b.market
AND a.ean_sup = b.ean_sup
AND a.ean_br = b.ean_br
AND a.ean_gos = b.ean_gos
AND a.ean_dgo = b.ean_dgo
AND a.direction = b.direction
AND a.valid_from = b.valid_from
AND a.valid_to = b.valid_to
AND c.ralve_record_id = a.record_id
AND d.ralve_record_id = b.record_id
AND c.TIMESTAMP = d.TIMESTAMP
AND e.ASSET_ID = 'KCF.SLP.' || a.SLP_TYPE
--AND f.timestamp between to_date(gs_checkdate_from,'ddmmyyyy') and to_Date(as_checkdate_until,'ddmmyyyy')
AND e.SEQ = f.MHY_SEQ
AND f.TIMESTAMP =c.timestamp - 1/24
ORDER BY c.rowid;
TYPE t_value IS TABLE OF RCX_ALLOCATED_LP_VALUE.VALUE%TYPE;
TYPE t_kcf IS TABLE OF COUNTER_VALUES.VALUE%TYPE;
TYPE t_timestamp IS TABLE OF RCX_ALLOCATED_LP_VALUE.TIMESTAMP%TYPE;
TYPE t_ralverecord_id IS TABLE OF RCX_ALLOCATED_LP_VALUE.RALVE_RECORD_ID%TYPE;
TYPE t_row IS TABLE OF UROWID;
ln_row t_row :=t_row();
lt_value t_value := t_Value();
lt_kcf t_kcf := t_kcf();
lt_timestamp t_timestamp := t_timestamp();
lt_ralve t_ralverecord_id := t_ralverecord_id();
v_bulk NUMBER := 1000;
val number;
kcf number;
ralve number;
times date;
dml_errors EXCEPTION;
errors NUMBER;
PRAGMA exception_init(dml_errors, -24381);
BEGIN
--setting arguments for the logging record
lr_logging.module := LS_MODULE_NAME;
lr_logging.context := 'INFLOW_ALL_VALUES_PARTS';
lr_logging.logged_by := USER;
lr_logging.parameters := 'Date time started: ' || TO_CHAR(sysdate,'DD/MM/YYYY HH24:MI');
-- log debugs
recon_logging.set_logging_env (TRUE, TRUE);
recon_logging.log_event(lr_logging,'D');
OPEN lc_update;
LOOP
FETCH lc_update BULK COLLECT INTO ln_row,lt_ralve,lt_value,lt_timestamp,lt_kcf LIMIT v_bulk;
FORALL i IN NVL(lt_value.first,1)..NVL(lt_value.last,0) SAVE EXCEPTIONS
UPDATE RCX_ALLOCATED_LP_VALUE
SET VALUE = VALUE * lt_value(i) * lt_kcf(i)
WHERE rowid =ln_row(i);
COMMIT;
lt_value.delete;
lt_timestamp.delete;
lt_ralve.delete;
lt_kcf.delete;
ln_row.delete;
EXIT WHEN lc_update%NOTFOUND;
END LOOP;
CLOSE lc_update;
recon_logging.log_event(lr_logging,'D');
lr_logging.parameters := 'Date time ended: ' || TO_CHAR(sysdate,'DD/MM/YYYY HH24:MI');
recon_logging.log_event(lr_logging,'D');
--to be sure
COMMIT;
EXCEPTION
WHEN dml_errors THEN
recon_logging.set_logging_env(TRUE,TRUE);
lr_logging.module := 'updatevalues';
lr_logging.context := 'exception';
lr_logging.logged_by := USER;
lr_logging.parameters := 'in dml_errors';
recon_logging.log_event(lr_logging);
errors := SQL%BULK_EXCEPTIONS.COUNT;
lr_logging.parameters:=errors;
recon_logging.log_event(lr_logging);
lr_logging.parameters :=('Number of errors is ' || errors);
--DBMS_OUTPUT.PUT_LINE('Number of errors is ' || errors);
FOR i IN 1..sql%BULK_EXCEPTIONS.count LOOP
lr_logging.parameters:= 'index = ' || sql%BULK_EXCEPTIONS(i).error_index || 'error = ' ||Sqlerrm(-sql%BULK_EXCEPTIONS(i).error_code) ;
recon_logging.log_event(lr_logging);
END LOOP;
--recon_logging.set_logging_env(TRUE,TRUE);
--recon_logging.log_event(lr_logging);
commit;
WHEN OTHERS THEN
lr_logging.module := 'updatevalues';
lr_logging.context := 'exception';
lr_logging.logged_by := USER;
recon_logging.set_logging_env(TRUE,TRUE);
lr_logging.parameters := 'in others error=' || SQLERRM;
recon_logging.log_event(lr_logging);
commit;--to look which is truly the last (else only commit after 1000)
--raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
END Update_corr_values;
Hi,
No, I didn't update a unique-constrained column. But I found out that there is a trigger that causes the unique constraint violation while updating the table.
Silly mistake; I didn't know there was a trigger there.
Thanks anyway.
Greetz -
Conditions not updated in Contract Creation using BAPI_CONTRACT_CREATE
Hi all,
The conditions for the contract do not get updated, even though the contract is created.
BAPI used: BAPI_CONTRACT_CREATE
I am using the following code:
conitem_cond_validity-item_no = con_data-ebelp.
conitem_cond_validity-plant = con_data-werks.
CONCATENATE con_headerdata-kdatb+6(4) con_headerdata-kdatb+3(2) con_headerdata-kdatb+0(2)
INTO conitem_cond_validity-valid_from.
CONCATENATE con_headerdata-kdate+6(4) con_headerdata-kdate+3(2) con_headerdata-kdate+0(2)
INTO conitem_cond_validity-valid_to.
APPEND conitem_cond_validity.
CLEAR conitem_cond_validity.
The respective conitem_cond_validityx table has been filled with the value 'X'.
conitem_condition-item_no = con_data-ebelp.
conitem_condition-cond_type = kschl_1.
conitem_condition-cond_value = kbetr_1.
conitem_condition-currency = konwa_1.
conitem_condition-cond_unit = kmein_1.
conitem_condition-cond_p_unt = kpein_1.
conitem_condition-numerator = kumza_1.
conitem_condition-denominator = kumne_1.
conitem_condition-change_id = 'I'.
APPEND conitem_condition.
CLEAR conitem_condition.
The respective conitem_conditionx table has been filled with the value 'X'.
Please help to solve the issue.
Thanks and Regards,
Raj
Hi Paromita Roy Chowdhury,
Without knowing your exact situation (code), I can see one possible reason: if it fails at exactly the 1000th entry, there may be a counter field of length 3 that overflows when 1 is added to 999.
Regards,
Clemens -
Update email address in vendor master using report RPRAPA00
Hi All ,
My requirement is to update the email address in the communication tab of the vendor master.
There is a report, RPRAPA00, running in the background that updates the vendor master, except for the email address.
I could not see anything for updating the email address in the user exits available in report RPRAPA00.
Is there any way to update the email address using this report, or any alternative way?
Please help.
Thanks and regards ,
Sijin KP.
You can use a couple of function modules from function group SZA0 to update the email address.
Attached is a simple test program that inserts a new email address for a vendor whose address number is known (from LFA1-ADRNR):
DATA: comtab TYPE STANDARD TABLE OF adsmtp WITH HEADER LINE.

comtab-consnumber = '001'.
comtab-flgdefault = 'X'.
comtab-smtp_addr  = 'email_address_comes_here'.
comtab-dft_receiv = 'X'.
comtab-valid_from = '00010101'.
comtab-valid_to   = '99991231'.
comtab-updateflag = 'I'.
APPEND comtab.

CALL FUNCTION 'ADDR_COMM_MAINTAIN'
  EXPORTING
    address_number    = '0000141306'
    table_type        = 'ADSMTP'
  TABLES
    comm_table        = comtab
  EXCEPTIONS
    parameter_error   = 1
    address_not_exist = 2
    internal_error    = 3
    OTHERS            = 4.
IF sy-subrc NE 0.
  WRITE sy-subrc.
ELSE.
  CALL FUNCTION 'ADDR_MEMORY_SAVE'
    EXCEPTIONS
      address_number_missing = 1
      person_number_missing  = 2
      internal_error         = 3
      database_error         = 4
      reference_missing      = 5
      OTHERS                 = 6.
  IF sy-subrc NE 0.
    WRITE sy-subrc.
  ENDIF.
ENDIF. -
Dear Sir,
I have tables PLAN and PROJECTS and a spatial table called ROADS. The first time, I stored data into the ROADS table together with projects and plan. The next time, I want to update a road and keep both the old road data and the new road data. How do I maintain and display both the old and new data of that road? Can anyone please help me in this matter?
I want to maintain the roads' updated information in the PROJECTS table together with the PLAN table.
Thanks,
Kabeer
Kabeer,
There are many ways to do this.
I add DATE columns to the table that contain VALID_FROM and VALID_TO values.
It really depends on what you want to be able to do.
Regards.
Ivan -
Hi,
I have a database table with the following structure:
MANDT      (PK)
PLANT      (PK)
MATNR      (PK)
VALID_TO   (PK)
VALID_FROM
The table contains a record such as the following:
400  3330  material1  31/12/9999  28/06/2008
What I am trying to do programmatically is to replace the above record with the following 2 records (CONTAINED IN AN INTERNAL TABLE PT_PRICE_LEVEL): -
400 3330 material1 28/06/2008 28/06/2008
400 3330 material1 31/12/9999 29/06/2008
In order to achieve this, I created a function module and called it in update mode. The function module contains the following code:
DELETE FROM table_name
  WHERE plant    = '3330'
    AND matnr    = 'material1'
    AND valid_to = '31/12/9999'.   "deletes the existing record
INSERT zptp_rcd_pricelv FROM TABLE pt_price_level.   "inserts the new records
This should have worked, but instead it generates an error message saying that duplicate records can't be inserted.
I think I know the reason: I need to commit after the delete operation and then insert the new records. But this code is in an update FM, and I don't think I am allowed to call COMMIT within an update FM. Can anyone suggest how an existing record's primary key can be updated? My approach certainly doesn't seem to work.
Any help would be appreciated.
Regards,
Divyaman
Hi,
Thank you all for your suggestions, but the real cause of the problem was the following:
The primary key was defined on the following four fields:
MANDT, PLANT, MATNR, VALID_TO
However, a unique index had also been created (something I wasn't aware of) that included only the first three fields. As a consequence, when I inserted two rows with the same combination of MANDT, PLANT and MATNR but a different value for VALID_TO, I got an error message stating that duplicate records can't be inserted.
Regards,
Divyaman -
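The effect described above can be reproduced on any database: a unique index over a subset of the key fields rejects rows that differ only in the remaining key field. A small sketch using SQLite (table and column names mirror the post, but this is only an illustration, not the actual SAP dictionary objects):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Primary key on all four fields, as in the table definition...
con.execute("""CREATE TABLE zptp_rcd_pricelv (
                 mandt TEXT, plant TEXT, matnr TEXT,
                 valid_to TEXT, valid_from TEXT,
                 PRIMARY KEY (mandt, plant, matnr, valid_to))""")
# ...but an extra unique index covering only the first three of them.
con.execute("""CREATE UNIQUE INDEX zuniq
                 ON zptp_rcd_pricelv (mandt, plant, matnr)""")
con.execute("INSERT INTO zptp_rcd_pricelv VALUES "
            "('400', '3330', 'material1', '20080628', '20080628')")
try:
    # Same MANDT/PLANT/MATNR, different VALID_TO: the primary key
    # allows it, but the extra unique index rejects it as a duplicate.
    con.execute("INSERT INTO zptp_rcd_pricelv VALUES "
                "('400', '3330', 'material1', '99991231', '20080629')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

The second insert fails even though the full primary key is distinct, which is exactly the symptom reported.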
Hello Experts,
I've the below scenario,
I created a new org-dependent Z set type and assigned it to a category.
And I'm using the API - COM_PRODUCT_GETDETAIL_API to get the details about the material via the export parameter ES_MATERIAL_DATA which has the structure of COMT_PROD_MAT_MAINTAIN_API.
My doubt:
Will the newly created Z set type structure be automatically appended to COMT_PROD_MAT_MAINTAIN_API? (In my case this has not happened.)
I would expect the system to append this structure automatically.
If not, how is this done? Are there any settings to be made?
Alternatively, can I append the structure manually, and would that affect anything elsewhere?
Kindly help me on this regard.
Thanks in Advance!
Regards,
Senthil.
Also, Senthil, I am pasting some sample code for your help; this should solve your issue.
Also make sure the set type structures are appended in the COM_PRODUCT_MAINTAIN_API structure, otherwise the update logic doesn't work.
DATA: ls_detail1     TYPE zmsr_detail_maintain,
      gs_set_typ     TYPE comt_prod_mat_maintain_api,
      gt_product_api TYPE comt_product_maintain_api_tab,
      gs_product_api TYPE comt_product_maintain_api,
      gs_text1       TYPE comt_prlgtext_maintain.
* fill the new comm_product
gs_product_api-com_product-product_guid = g_product_guid.
gs_product_api-com_product-product_type = c_product_type.
gs_product_api-com_product-logsys = g_logsys.
gs_product_api-com_product-valid_from = g_valid_from.
gs_product_api-com_product-valid_to = g_valid_to.
gs_product_api-categories = gt_categories.
gs_text1-data-stxh-mandt = sy-mandt.
gs_text1-data-stxh-tdobject = c_product. "'PRODUCT'.
gs_text1-data-stxh-tdid = c_base. "'BASE'.
gs_text1-data-stxh-tdspras = c_en. "'EN'.
ls_line-tdline = gs_upload1-lg_text.
APPEND ls_line TO lt_lines.
CLEAR ls_line.
ls_longtxt-lines = lt_lines.
gs_text1-data-lines = ls_longtxt-lines.
APPEND gs_text1 TO gs_product_api-long_texts.
gs_text-data-langu = c_en. "'EN'.
gs_text-data-short_text = gs_upload1-msr_desc.
gs_text-data-valid_from = g_valid_from.
gs_text-data-valid_to = g_valid_to.
gs_text-data-logsys = g_logsys.
APPEND gs_text TO gs_product_api-short_texts.
CLEAR gs_text.
APPEND gs_product_api TO gt_product_api.
ls_detail1-data-zz0010 = gs_upload1-zz0010_1.
ls_detail1-data-zz0011 = gs_upload1-zz0011_1.
ls_detail1-data-zz0012 = gs_upload1-zz0012_1.
ls_detail1-data-zz0013 = gs_upload1-zz0013_1.
ls_detail1-data-zz0014 = gs_upload1-zz0014_1.
ls_detail1-data-zz0015 = gs_upload1-zz0015_1.
ls_detail1-data-zz0016 = gs_upload1-zz0016_1.
ls_detail1-data-zz0017 = gs_upload1-zz0017_1.
ls_detail1-data-zz0018 = gs_upload1-zz0018_1.
ls_detail1-data-zz0019 = gs_upload1-zz0019_1.
ls_detail1-data-zz0020 = gs_upload1-zz0020_1.
ls_detail1-data-zz0021 = gs_upload1-zz0021_1.
ls_detail1-data-zz0022 = gs_upload1-zfil_nbr.
ls_detail1-data-product_guid = g_product_guid.
ls_detail1-data-logsys = g_logsys.
ls_detail1-data-valid_from = g_valid_from.
ls_detail1-data-valid_to = g_valid_to.
APPEND ls_detail1 TO gs_set_typ-zmsr_detail.
CLEAR ls_detail1.
APPEND gs_set_typ TO gt_set_typ.
CLEAR gs_set_typ.
CALL FUNCTION 'IB_CONVERT_INTO_TIMESTAMP'
EXPORTING
i_datlo = sy-datum
i_timlo = sy-uzeit
IMPORTING
e_timestamp = g_valid_from.
APPEND gs_set_typ TO gt_set_typ.
CLEAR gs_set_typ.
*-------Mainten. Functions for Materials
CALL FUNCTION 'COM_PROD_MATERIAL_MAINTAIN_API'
EXPORTING
it_product = gt_product_api
it_set = gt_set_typ
iv_commit_work = 'X'
iv_update_task = 'X'
iv_enqueue_mode = 'E'
IMPORTING
et_bapireturn = gt_bapireturn.
IF sy-subrc = 0.
* commit the BAPI
  COMMIT WORK.
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.
ENDIF.
Hope this helps! -
Update with correlated subquery
Hi,
I have the following SQL update statement:
update temp_input_sms b
set (b.provider_id, b.region_id) =
  (select provider_id, region_id
   from
     (select a.provider_id, a.region_id,
             row_number() over (order by fraud) rn
      from icdm.smsc_lookup a
      where sysdate >= a.valid_from and sysdate < a.valid_to
        and b.user_summarisation like a.smsc_code_like)
   where rn = 1)
The problem is that there are two nested subqueries on the right-hand side of the SET clause, and in the inner subquery the b.user_summarisation column is not visible.
Could anyone suggest a workaround for this problem?
Thanks,
Viktor
Possible workaround:
update temp_input_sms c
set (c.provider_id, c.region_id) =
(select d.provider_id, d.region_id
from (select a.provider_id, a.region_id, smsc_code_like,
row_number () over
(partition by a.smsc_code_like
order by a.fraud) rn
from smsc_lookup a, temp_input_sms b
where sysdate >= a.VALID_FROM
and sysdate < a.VALID_TO
and b.user_summarisation like a.SMSC_CODE_LIKE) d
where rn=1
and c.user_summarisation like d.smsc_code_like)
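The correlation rule at work here can be sketched in SQLite: a column of the row being updated is visible one subquery level down, which is why the workaround pushes the lookup join into an inline view instead of nesting twice. A minimal sketch with made-up data (ORDER BY fraud LIMIT 1 stands in for the ROW_NUMBER filter, and the validity-date predicates are omitted):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE temp_input_sms (user_summarisation TEXT,
                             provider_id TEXT, region_id TEXT);
CREATE TABLE smsc_lookup (smsc_code_like TEXT, provider_id TEXT,
                          region_id TEXT, fraud INTEGER);
INSERT INTO temp_input_sms VALUES ('SMSC01', NULL, NULL);
INSERT INTO smsc_lookup VALUES ('SMS%', 'P1', 'R1', 1);
INSERT INTO smsc_lookup VALUES ('S%',   'P2', 'R2', 2);
""")
# One level of nesting: the outer row's user_summarisation is visible,
# so the LIKE against the lookup pattern can be correlated directly.
con.execute("""
UPDATE temp_input_sms SET
  provider_id = (SELECT a.provider_id FROM smsc_lookup a
                 WHERE temp_input_sms.user_summarisation LIKE a.smsc_code_like
                 ORDER BY a.fraud LIMIT 1),
  region_id   = (SELECT a.region_id FROM smsc_lookup a
                 WHERE temp_input_sms.user_summarisation LIKE a.smsc_code_like
                 ORDER BY a.fraud LIMIT 1)
""")
row = con.execute(
    "SELECT provider_id, region_id FROM temp_input_sms").fetchone()
```

Both patterns match the row; ordering by fraud picks the P1/R1 lookup entry, mirroring the rn = 1 filter in the Oracle workaround.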