Issue while using Collect statement in program
Hi friends,
I have an issue with the COLLECT statement.
I have created a structure, an internal table, and a work area of the same type; you can see them in the code below. I want to collect the ENMNG field value for rows whose MATNR, CHARG, movement type, AUFNR, and RSNUM are the same but whose RSPOS and LGORT differ. Instead of summing ENMNG, my code appends the entire row. Can anyone tell me what I am doing wrong?
types: begin of ty_resb,
        rsnum type resb-rsnum, "Reservation number
        rspos type resb-rspos, "Item position
        xloek type resb-xloek, "Item deleted if = 'X'
        matnr type resb-matnr, "Material number
        lgort type resb-lgort, "Storage location 'REJT'
        charg type resb-charg, "Batch number
        enmng type resb-enmng, "Quantity withdrawn, should be greater than 0
        enwrt type resb-enwrt, "Value withdrawn
        aufnr type resb-aufnr, "Order
        bwart type resb-bwart, "Movement type
      end of ty_resb.
data: tt_resb  type table of ty_resb,
      tt_resb1 type table of ty_resb,
      tt_resb2 type standard table of ty_resb,
* work areas
      ts_resb  type ty_resb,
      ts_resb2 type ty_resb,
      ts_resb1 type ty_resb.
**move data to another internal table
tt_resb1[] = tt_resb[].
sort tt_resb by rsnum rspos matnr charg bwart lgort aufnr .
sort tt_resb1 by rsnum rspos matnr charg bwart lgort aufnr.
DELETE adjacent duplicates from tt_resb comparing rsnum rspos matnr charg bwart lgort aufnr .
DELETE adjacent duplicates from tt_resb1 comparing rsnum rspos matnr charg bwart lgort aufnr.
loop at tt_resb into ts_resb .
clear:ts_resb1,ts_resb2.
read table tt_resb1 into ts_resb1 with key rsnum = ts_resb-rsnum rspos = ts_resb-rspos matnr = ts_resb-matnr
charg = ts_resb-charg aufnr = ts_resb-aufnr binary search.
if ts_resb1-bwart = '261' .
move ts_resb1-aufnr to ts_resb2-aufnr.
move ts_resb1-matnr to ts_resb2-matnr.
move ts_resb1-charg to ts_resb2-charg.
move ts_resb1-bwart to ts_resb2-bwart.
move ts_resb1-enwrt to ts_resb2-enwrt.
move ts_resb1-lgort to ts_resb2-lgort.
move ts_resb1-enmng to ts_resb2-enmng.
collect ts_resb2 into tt_resb2.
endif.
clear ts_resb.
endloop.
Regards,
Shaikh Khalid.
Hi Shaikh,
I have added new declarations, highlighted below, and new lines within your loop.
Execute your program in debug mode and watch what happens to the internal table tt_collect I added. The COLLECT will sum the ENMNG field for all entries that have the same values in MATNR, CHARG, movement type (BWART), AUFNR, and RSNUM.
A question from me: what do you want to do with the collected/summed-up entries?
types: begin of ty_resb,
        rsnum type resb-rsnum, "Reservation number
        rspos type resb-rspos, "Item position
        xloek type resb-xloek, "Item deleted if = 'X'
        matnr type resb-matnr, "Material number
        lgort type resb-lgort, "Storage location 'REJT'
        charg type resb-charg, "Batch number
        enmng type resb-enmng, "Quantity withdrawn, should be greater than 0
        enwrt type resb-enwrt, "Value withdrawn
        aufnr type resb-aufnr, "Order
        bwart type resb-bwart, "Movement type
      end of ty_resb.
data: tt_resb  type table of ty_resb,
      tt_resb1 type table of ty_resb,
      tt_resb2 type standard table of ty_resb.
types: begin of ty_collect,
        rsnum type resb-rsnum,
        matnr type resb-matnr,
        charg type resb-charg,
        bwart type resb-bwart,
        aufnr type resb-aufnr,
        enmng type resb-enmng,
      end of ty_collect.
data: tt_collect type table of ty_collect,
      ts_collect type ty_collect.
* work areas
data: ts_resb  type ty_resb,
      ts_resb2 type ty_resb,
      ts_resb1 type ty_resb.
**move data to another internal table
tt_resb1[] = tt_resb[].
sort tt_resb by rsnum rspos matnr charg bwart lgort aufnr .
sort tt_resb1 by rsnum rspos matnr charg bwart lgort aufnr.
DELETE adjacent duplicates from tt_resb comparing rsnum rspos matnr charg bwart lgort aufnr .
DELETE adjacent duplicates from tt_resb1 comparing rsnum rspos matnr charg bwart lgort aufnr.
loop at tt_resb into ts_resb .
clear:ts_resb1,ts_resb2.
read table tt_resb1 into ts_resb1 with key rsnum = ts_resb-rsnum rspos = ts_resb-rspos matnr = ts_resb-matnr
charg = ts_resb-charg aufnr = ts_resb-aufnr binary search.
if ts_resb1-bwart = '261' .
move ts_resb1-aufnr to ts_resb2-aufnr.
move ts_resb1-matnr to ts_resb2-matnr.
move ts_resb1-charg to ts_resb2-charg.
move ts_resb1-bwart to ts_resb2-bwart.
move ts_resb1-enwrt to ts_resb2-enwrt.
move ts_resb1-lgort to ts_resb2-lgort.
move ts_resb1-enmng to ts_resb2-enmng.
collect ts_resb2 into tt_resb2.
move-corresponding ts_resb1 to ts_collect.
collect ts_collect into tt_collect.
endif.
clear ts_resb.
clear ts_collect.
endloop.
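A side note on why the original version appended whole rows: COLLECT builds its key from every character-like field of the work area and only sums the numeric fields. Since ts_resb2 still carried LGORT, rows differing only in storage location got separate entries. A minimal Python sketch of this rule (illustrative only, not ABAP; field names taken from the thread):

```python
# Model COLLECT: the character-like fields form the key, numeric fields are summed.
def collect(rows, key_fields, sum_field):
    totals = {}
    for row in rows:
        key = tuple(row[f] for f in key_fields)
        totals[key] = totals.get(key, 0) + row[sum_field]
    return totals

rows = [
    {"matnr": "M1", "charg": "B1", "bwart": "261", "aufnr": "A1",
     "rsnum": "R1", "lgort": "0001", "enmng": 10},
    {"matnr": "M1", "charg": "B1", "bwart": "261", "aufnr": "A1",
     "rsnum": "R1", "lgort": "0002", "enmng": 5},
]

# With LGORT in the work area (as in ts_resb2), the two rows stay separate:
with_lgort = collect(rows, ["matnr", "charg", "bwart", "aufnr", "rsnum", "lgort"], "enmng")
# Without LGORT (as in ty_collect), ENMNG is summed into a single line:
without_lgort = collect(rows, ["matnr", "charg", "bwart", "aufnr", "rsnum"], "enmng")
print(len(with_lgort), len(without_lgort))  # 2 1
```

Dropping LGORT (and RSPOS) from the collect structure, as ty_collect does, is exactly what makes the quantities merge.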
Similar Messages
-
Hi everybody,
How do I use the COLLECT statement to get the total amount paid across different vendor payments?
data : begin of wa occurs 0,
bukrs type bsak-bukrs,
lifnr type bsak-lifnr,
land1 type lfa1-land1,
name1 like lfa1-name1,
dmbtr like bsak-dmbtr,
count type i value 0,
tot_vend type i,
vend type i.
data :end of wa.
data : itab like table of wa.
select distinct bukrs lifnr waers from bsak into
corresponding fields of wa
where bukrs in s_bukrs
and lifnr in s_lifnr
and bschl in s_bschl.
I want the total amount paid per vendor. I am using the approach below, but it is not working:
loop at itab into wa.
wa-dmbtr = bsak-dmbtr.
collect wa-dmbtr into itab.
modify itab from wa transporting dmbtr.
I am unable to get it working.
Can anybody help me with this, if possible with an example?
Thanks in advance,
Regards,
Venu.
Hi Venu,
types: BEGIN OF ty,
NAME(20),
SALES TYPE I,
END OF ty.
data : itab type standard table of ty,
itab1 type standard table of ty,
wa type ty,
wa1 type ty.
wa-NAME = 'Duck'. wa-SALES = 10.
append wa to itab.
wa-NAME = 'Tiger'. wa-SALES = 20.
append wa to itab.
wa-NAME = 'Duck'. wa-SALES = 30.
append wa to itab.
loop at itab into wa.
wa1 = wa.
collect wa1 into itab1.
endloop.
loop at itab1 into wa1.
write : / wa1-name , wa1-sales.
endloop.
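For reference, in the example above the two 'Duck' rows collapse into a single summed entry. The same aggregation, expressed as a rough Python sketch (illustrative only, not ABAP):

```python
# Simulate COLLECT on the Duck/Tiger example: NAME is the key, SALES is summed.
itab = [("Duck", 10), ("Tiger", 20), ("Duck", 30)]
itab1 = {}
for name, sales in itab:
    itab1[name] = itab1.get(name, 0) + sales
print(itab1)  # {'Duck': 40, 'Tiger': 20}
```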
COLLECT is used to create unique or compressed datasets. The key fields are the default key fields of the internal table itab.
If you use only COLLECT to fill an internal table, COLLECT makes sure that the internal table does not contain two entries with the same default key fields.
<b>If, besides its default key fields, the internal table contains number fields (see also ABAP/4 number types ), the contents of these number fields are added together if the internal table already contains an entry with the same key fields.</b>
If the default key of an internal table processed with COLLECT is blank, all the values are added up in the first table line.
In the program you mentioned yesterday, I am not able to follow the logic since many lines are commented out. -
In Oracle 10g Error while using COLLECT
I am getting an error while using COLLECT in 10g:
SQL> ed
Wrote file afiedt.buf
1 SELECT deptno
2 , COLLECT(ename) AS emps
3 FROM emp
4 GROUP BY
5* deptno
SQL> /
, COLLECT(ename) AS emps
ERROR at line 2:
ORA-00932: inconsistent datatypes: expected NUMBER got -
Please give me the solution.
You are using an old version of SQL*Plus. If you use a later version, it will give you the correct result.
Edited by: unus on Mar 14, 2010 4:25 AM -
How to use collect statement for below
data : begin of itab,
n(3) type c,
n1 type n,
k(5) type c,
end of itab.
select n n1 from /zteest into table itab.
*internal table has
n n1 k
gar 100 uji
hae 90 iou
gar 90 uji
hae 87 iou
I want
gar 190
hae 177
How can I use the COLLECT statement here, given that N1 is of type N?
Let me know.
Thanks
Try this:
DATA: BEGIN OF itab OCCURS 0,
        n(3) TYPE c,
        n1(3) TYPE p DECIMALS 2,
        k(5) TYPE c,
      END OF itab.
itab-n = 'gar'. itab-n1 = 100. itab-k = 'uji'.
COLLECT itab. CLEAR itab.
itab-n = 'hae'. itab-n1 = 90. itab-k = 'iou'.
COLLECT itab. CLEAR itab.
itab-n = 'gar'. itab-n1 = 90. itab-k = 'uji'.
COLLECT itab. CLEAR itab.
itab-n = 'hae'. itab-n1 = 87. itab-k = 'iou'.
COLLECT itab. CLEAR itab. -
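The key point in the answer above is that COLLECT only adds up fields of a numeric type (i, p, f); a field of type n is character-like and would become part of the table key, which is why N1 was redeclared as TYPE p. The resulting grouping, modeled as a Python sketch (illustrative only):

```python
# Group by the character-like fields (n, k) and sum the numeric field n1.
rows = [("gar", 100, "uji"), ("hae", 90, "iou"),
        ("gar", 90, "uji"), ("hae", 87, "iou")]
totals = {}
for n, n1, k in rows:
    totals[(n, k)] = totals.get((n, k), 0) + n1
print(totals)  # {('gar', 'uji'): 190, ('hae', 'iou'): 177}
```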
Issue while using SUBPARTITION clause in the MERGE statement in PLSQL Code
Hello All,
I am using the below code to update specific sub-partition data using oracle merge statements.
I am getting the sub-partition name and passing this as a string to the sub-partition clause.
The MERGE statement is failing, stating that the specified sub-partition does not exist, but the sub-partition does exist for the table.
We are using Oracle 11gr2 database.
Below is the code which I am using to populate the data.
declare
ln_min_batchkey PLS_INTEGER;
ln_max_batchkey PLS_INTEGER;
lv_partition_name VARCHAR2 (32767);
lv_subpartition_name VARCHAR2 (32767);
begin
FOR m1 IN ( SELECT (year_val + 1) AS year_val, year_val AS orig_year_val
FROM ( SELECT DISTINCT
TO_CHAR (batch_create_dt, 'YYYY') year_val
FROM stores_comm_mob_sub_temp
ORDER BY 1)
ORDER BY year_val)
LOOP
lv_partition_name :=
scmsa_handset_mobility_data_build.fn_get_partition_name (
p_table_name => 'STORES_COMM_MOB_SUB_INFO',
p_search_string => m1.year_val);
FOR m2
IN (SELECT DISTINCT
'M' || TO_CHAR (batch_create_dt, 'MM') AS month_val
FROM stores_comm_mob_sub_temp
WHERE TO_CHAR (batch_create_dt, 'YYYY') = m1.orig_year_val)
LOOP
lv_subpartition_name :=
scmsa_handset_mobility_data_build.fn_get_subpartition_name (
p_table_name => 'STORES_COMM_MOB_SUB_INFO',
p_partition_name => lv_partition_name,
p_search_string => m2.month_val);
DBMS_OUTPUT.PUT_LINE('The lv_subpartition_name => '||lv_subpartition_name||' and lv_partition_name=> '||lv_partition_name);
IF lv_subpartition_name IS NULL
THEN
DBMS_OUTPUT.PUT_LINE('INSIDE IF => '||m2.month_val);
INSERT INTO STORES_COMM_MOB_SUB_INFO T1 (
t1.ntlogin,
t1.first_name,
t1.last_name,
t1.job_title,
t1.store_id,
t1.batch_create_dt)
SELECT t2.ntlogin,
t2.first_name,
t2.last_name,
t2.job_title,
t2.store_id,
t2.batch_create_dt
FROM stores_comm_mob_sub_temp t2
WHERE TO_CHAR (batch_create_dt, 'YYYY') = m1.orig_year_val
AND 'M' || TO_CHAR (batch_create_dt, 'MM') =
m2.month_val;
ELSIF lv_subpartition_name IS NOT NULL
THEN
DBMS_OUTPUT.PUT_LINE('INSIDE ELSIF => '||m2.month_val);
MERGE INTO (SELECT *
FROM stores_comm_mob_sub_info
SUBPARTITION (lv_subpartition_name)) T1 --> Issue Here
USING (SELECT *
FROM stores_comm_mob_sub_temp
WHERE TO_CHAR (batch_create_dt, 'YYYY') =
m1.orig_year_val
AND 'M' || TO_CHAR (batch_create_dt, 'MM') =
m2.month_val) T2
ON (T1.store_id = T2.store_id
AND T1.ntlogin = T2.ntlogin)
WHEN MATCHED
THEN
UPDATE SET
t1.postpaid_totalqty =
(NVL (t1.postpaid_totalqty, 0)
+ NVL (t2.postpaid_totalqty, 0)),
t1.sales_transaction_dt =
GREATEST (
NVL (t1.sales_transaction_dt,
t2.sales_transaction_dt),
NVL (t2.sales_transaction_dt,
t1.sales_transaction_dt)),
t1.batch_create_dt =
GREATEST (
NVL (t1.batch_create_dt, t2.batch_create_dt),
NVL (t2.batch_create_dt, t1.batch_create_dt))
WHEN NOT MATCHED
THEN
INSERT (t1.ntlogin,
t1.first_name,
t1.last_name,
t1.job_title,
t1.store_id,
t1.batch_create_dt)
VALUES (t2.ntlogin,
t2.first_name,
t2.last_name,
t2.job_title,
t2.store_id,
t2.batch_create_dt);
END IF;
END LOOP;
END LOOP;
COMMIT;
end;
Much appreciate your inputs here.
Thanks,
MK.
(Sorry to post the same question twice.)
Edited by: Maddy on May 23, 2013 10:20 PM
Duplicate question.
-
Error while using threads in Pro*C program
Hi,
I am getting the error "fetched column value NULL (-1405)" in a Pro*C program while using threads.
The execution of the program is as follows.
Tot_Threads = 5 (Total threads, totally 5 records with value instance names)
No_Of_threads = 1 (No Of threads executed at the same time)
Example :
INSTANCE_NAME Link1, link2, link3, link4, link5 (All different Databases)
NO_OF_THREADS - 5
Threading Logic:
Based on the maintanence NO_OF_THREADS, the program will process.
If (NO_OF_THREADS == 0)
Process_Sequence
else
if NO_OF_THREADS == TOT_THREADS
Process_Type1
else
Process_Type2
In a loop for all different instances,
New context area will be created and allocated.
New oracle session will be created.
New structure will be created and all global parameters are assigned to that structure.
For each instance, a new thread will be created (thr_create) and all the threads will call
the same(MainProcess) function which takes the structure as parameter.
At the end of every session, the corresponding oracle session will logged out.
Process_Type1 logic :
/* For Loop for all threads in a loop */
for(Cnt=0;Cnt < Tot_Threads;Cnt++)
/* Allocating new contect for every different thread */
EXEC SQL CONTEXT ALLOCATE :ctx[Cnt];
/* Connected to new oracle session */
logon(ctx[Cnt],ConnStr);
/* Assigning all the global parameters to the structure and then passing it to InsertBatching function */
DataSet[Cnt].THINDEX=Cnt;
DataSet[Cnt].sDebug=DebugMode;
strcpy(DataSet[Cnt].THNAME,(char *)InsNameArr[Cnt].arr);
DataSet[Cnt].ctx=ctx[Cnt];
/* creating new threads for time in a loop */
RetVal = thr_create(NULL,0,InsertBatching,&DataSet[Cnt],THR_BOUND,&threads[Cnt]);
sprintf(LocalStr1,"\nCreated thread %d", Cnt);
DebugMessage(mptr,LocalStr1,DebugMode);
for(Cnt=0;Cnt <Tot_Threads;Cnt++)
/* Waiting for threads to complete */
if (thr_join(threads[Cnt],NULL,NULL))
printf("\nError in thread Finish \n");
exit(-1);
/* Logout from the specific oracle session */
logoff(ctx[Cnt]);
/* Free the context area after usage */
EXEC SQL CONTEXT FREE :ctx[Cnt];
used functions:
thr_create with thr_suspend option
thr_join to wait for the thread to complete
Process_Type2 logic :
Here the idea is: if the load is heavy, we can change the NO_OF_THREADS maintenance value (e.g. to 2) so that only two threads execute at the same time while the others wait. Once the first two threads complete, the next two are started, and so on in the same manner for all the threads.
The parameters passing and the structure passing are same as above.
Here all threads will be created in suspended mode, and then only No_Of_threads(2) will
be started, others will be in suspended mode, once the first two is completed then the
other two thread will be started and in the same manner other threads.
used functions:
thr_create with thr_suspend option
thr_continue to start the suspended thread
thr_join to wait for the thread to complete
Process_Sequence logic :
Here the idea is to run the program for all the instances without creating threads. In the for loop the instances are processed one by one, calling the same processing function each time with a different value. The parameter and structure passing are the same as above.
The InsertBatching function will prepare the cursor and pick all records for batching and then , it will call other individual functions for processing. For all the functions the structure variable will be passed as parameter which holds all the neccessary values.
Here in all the sub functions , we have used
EXEC SQL CONTEXT USE :Var;
Var is corresponding context allocated for that thread, which we assume that the corresponding context is used in all sub functions.
EXEC SQL INCLUDE SQLCA;
This statement we have given in InsertBatching Function not in all sub functiosn
Example for the Sub functions used in the program :-
/* File pointer fptr and dptr are general file pointers , to write the debub messages, DataStruct will hold all global parameters and also context area .
int Insert(FILE fptr,FILE dptr,DataStruct d1)
EXEC SQL BEGIN DECLARE SECTION;
VARCHAR InsertStmt[5000];
EXEC SQL END DECLARE SECTION;
char LocalStr[2000];
EXEC SQL CONTEXT USE :d1.ctx;
InsertStmt will hold insert statement
EXEC SQL EXECUTE IMMEDIATE :InsertStmt;
if (ERROR)
sprintf(LocalStr,"\nError in Inserting Table - %s",d1.THNAME);
DebugMessage(dptr,LocalStr,d1.sDebug);
sprintf(LocalStr,"\n %d - %s - %s",sqlca.sqlcode,ERROR_MESG,d1.THNAME);
DebugMessage(dptr,LocalStr,d1.sDebug);
return 1;
return 0;
I get this error occasionally, not always. I am also getting this error while preparing the SQL statement.
The code contains calls to some stored procedures also.
Thanks in advance.
In every SELECT, NVL is handled, and this error is occurring while preparing statements as well.
-
How to use collect statement properly
From the PBID table I will get 4 rows for a material, along with the corresponding BDZEI values from the same table. I then pass these BDZEI values into the PBHI table, which returns some 200 rows of data. I have to sum up the field PLNMG, grouped by the LAEDA field of PBHI. In short, I need the sum of PBHI-PLNMG for a particular PBHI-LAEDA. I know one way to do it, but I want to know how to use the COLLECT statement for this purpose. My output should contain only one line per material.
PBID table
Matnr BDZEI
p4471 457
1002
2309
2493
PBHI table
BDZEI LAEDA PLNMG
1002 06.08.2004 0.000
1002 06.08.2004 83.000
457 07.08.2004 12.000
457 07.08.2004 24.000
Reqd O/p
MATNR LAEDA PLNMG
p4471 06.08.2004 83
p4471 07.08.2004 36
Hope you understood my situation. Please help me out.
Thanking you in advance,
Shankar
REPORT zppr_zpipr NO STANDARD PAGE HEADING LINE-SIZE 150 LINE-COUNT 63.
TABLES: pbid,
pbhi,
makt,
mseg,
mkpf.
DATA: BEGIN OF it_pbid OCCURS 0,
matnr LIKE pbid-matnr, " Material
status TYPE c LENGTH 4, " For distinguishing materials from pbid and pbim .. will contain space for PBID and 'PBIM' for PBIM
bdzei LIKE pbid-bdzei,
laeda LIKE pbhi-laeda,
end of it_pbid.
DATA: BEGIN OF it_pbim OCCURS 0,
matnr LIKE pbid-matnr, " Material
status TYPE c LENGTH 4, " For distinguishing materials from pbid and pbim .. will contain space for PBID and 'PBIM' for PBIM
bdzei LIKE pbim-bdzei,
laeda LIKE pbhi-laeda,
end of it_pbim.
DATA: BEGIN OF it_pbid_pbim OCCURS 0,
matnr LIKE pbid-matnr, " Material
laeda LIKE pbhi-laeda, " Reduction Date
dbmng LIKE pbhi-dbmng, " Planned quantity in the data base
plnmg LIKE pbhi-plnmg, " Planned quantity
status TYPE c LENGTH 4, " For distinguishing materials from pbid and pbim .. will contain space for PBID and 'PBIM' for PBIM
mblnr LIKE mseg-mblnr, " Material Doc Number
pbfnr LIKE pbid-pbdnr, " Plan Number
maktx LIKE makt-maktx, " Matl Desc
aenam LIKE pbhi-aenam, " User Changed
erfmg LIKE mseg-erfmg, " Qty Invoiced
budat LIKE mkpf-budat, " Invoice date
bdzei LIKE pbid-bdzei, " Independent requirements pointer
werks LIKE pbid-werks, " plant
pirrednqty TYPE i, " PIR Reduction Quantity = pbih-plnmg - pbih-dbmng
diff TYPE i, " Difference
slno TYPE i, " Sl No
END OF it_pbid_pbim.
DATA: BEGIN OF it_allrows OCCURS 0.
INCLUDE STRUCTURE it_pbid_pbim.
DATA: END OF it_allrows.
*DATA: BEGIN OF it_final OCCURS 0.
INCLUDE STRUCTURE it_pbid_pbim.
*DATA: END OF it_final.
DATA: BEGIN OF line,
matnr LIKE pbid-matnr, " Material
laeda LIKE pbhi-laeda, " Reduction Date
plnmg LIKE pbhi-plnmg, " Planned quantity
maktx LIKE makt-maktx, " Matl Desc
dbmng LIKE pbhi-dbmng, " Planned quantity in the data base
mblnr LIKE mseg-mblnr, " Material Doc Number
pbfnr LIKE pbid-pbdnr, " Plan Number
aenam LIKE pbhi-aenam, " User Changed
erfmg LIKE mseg-erfmg, " Qty Invoiced
budat LIKE mkpf-budat, " Invoice date
bdzei LIKE pbid-bdzei, " Independent requirements pointer
werks LIKE pbid-werks, " plant
pirrednqty TYPE i, " PIR Reduction Quantity = pbih-plnmg - pbih-dbmng
diff TYPE i, " Difference
slno TYPE i, " Sl No
status TYPE c LENGTH 4, " For distinguishing materials from pbid and pbim .. will contain space for PBID and 'PBIM' for PBIM
END OF line.
DATA Itfinal1 LIKE STANDARD TABLE
OF LINE
WITH DEFAULT KEY.
DATA ITfinal LIKE ITfinal1.
DATA: BEGIN OF it_dates OCCURS 0,
date TYPE sy-datum,
END OF it_dates.
DATA: l_slno TYPE i.
DATA: l_zebra TYPE c.
SELECT-OPTIONS:
s_werks FOR pbid-werks obligatory.
SELECT-OPTIONS:
s_matnr FOR pbid-matnr.
SELECT-OPTIONS:
s_pbdnr FOR pbid-pbdnr.
SELECT-OPTIONS:
s_laeda FOR pbhi-laeda obligatory.
parameter:
c_print type checkbox.
SELECT matnr bdzei FROM pbid INTO (it_pbid-matnr, it_pbid-bdzei) WHERE werks IN s_werks AND matnr IN s_matnr AND pbdnr IN s_pbdnr.
it_pbid-status = 'PBID'.
APPEND it_pbid.
move-corresponding it_pbid to it_pbid_pbim.
Append it_pbid_pbim.
ENDSELECT.
SELECT matnr bdzei FROM pbim INTO (it_pbim-matnr,it_pbim-bdzei) WHERE werks IN s_werks AND matnr IN s_matnr AND pbdnr IN s_pbdnr.
APPEND it_pbim.
Append it_pbid_pbim.
ENDSELECT.
*break-point.
START-OF-SELECTION.
LOOP AT s_laeda.
it_dates-date = s_laeda-low.
APPEND it_dates.
ENDLOOP.
DATA: l_startdate LIKE sy-datum.
IF s_laeda-high EQ space.
ELSE.
l_startdate = s_laeda-low + 1.
DO.
IF l_startdate <= s_laeda-high.
it_dates-date = l_startdate.
APPEND it_dates.
ELSE.
it_dates-date = sy-datum.
EXIT.
ENDIF.
l_startdate = l_startdate + 1.
ENDDO.
ENDIF.
*break-point.
LOOP AT it_pbim.
LOOP AT it_dates.
it_pbim-laeda = it_dates-date.
it_pbim-status = 'PBIM'.
MODIFY it_pbim TRANSPORTING laeda status.
MOVE-CORRESPONDING it_pbim TO it_pbid_pbim.
APPEND it_pbid_pbim.
ENDLOOP.
ENDLOOP.
break-point.
l_zebra = 'X'.
DATA: l_toterfmg LIKE mseg-erfmg.
DATA: l_erfmg LIKE mseg-erfmg.
data: l_totpir type i.
**************************************PBID*************************************
LOOP AT it_pbid.
move-corresponding it_pbid to it_pbid_pbim.
append it_pbid_pbim.
SELECT SINGLE maktx FROM makt INTO (it_pbid_pbim-maktx) WHERE matnr = it_pbid-matnr.
MODIFY table it_pbid_pbim TRANSPORTING maktx.
SELECT aenam laeda FROM pbhi INTO (it_pbid_pbim-aenam,it_pbid_pbim-laeda) WHERE bdzei = it_pbid-bdzei AND aenam = 'Abbau-'.
MODIFY TABLE it_pbid_pbim TRANSPORTING aenam laeda. " debug here
select single sum( dbmng ) sum( plnmg ) FROM pbhi INTO (it_pbid_pbim-dbmng,it_pbid_pbim-plnmg) WHERE bdzei = it_pbid-bdzei AND aenam = 'Abbau-' and laeda = it_pbid_pbim-laeda.
MODIFY TABLE it_pbid_pbim TRANSPORTING dbmng plnmg. " debug here
endselect.
it_pbid_pbim-pirrednqty = it_pbid_pbim-dbmng - it_pbid_pbim-plnmg.
MODIFY table it_pbid_pbim TRANSPORTING pirrednqty.
IF ( it_pbid_pbim-laeda IN s_laeda AND it_pbid_pbim-aenam EQ 'Abbau-' ).
MOVE-CORRESPONDING it_pbid_pbim TO it_allrows.
append it_allrows.
ELSEIF NOT it_pbid_pbim-laeda IN s_laeda.
delete it_pbid_pbim index sy-tabix. " debug here
ENDIF.
ENDSELECT.
ENDLOOP.
**************************************PBIM*************************************
LOOP AT it_pbim.
move-corresponding it_pbim to it_pbid_pbim.
append it_pbid_pbim.
SELECT SINGLE maktx FROM makt INTO (it_pbid_pbim-maktx) WHERE matnr = it_pbim-matnr.
MODIFY table it_pbid_pbim TRANSPORTING maktx.
SELECT aenam laeda FROM pbhi INTO (it_pbid_pbim-aenam,it_pbid_pbim-laeda) WHERE bdzei = it_pbim-bdzei AND aenam = 'Abbau-'.
MODIFY TABLE it_pbid_pbim TRANSPORTING aenam laeda. " debug here
select single sum( dbmng ) sum( plnmg ) FROM pbhi INTO (it_pbid_pbim-dbmng,it_pbid_pbim-plnmg) WHERE bdzei = it_pbim-bdzei AND aenam = 'Abbau-' and laeda = it_pbid_pbim-laeda.
MODIFY TABLE it_pbid_pbim TRANSPORTING dbmng plnmg. " debug here
it_pbid_pbim-pirrednqty = it_pbid_pbim-dbmng - it_pbid_pbim-plnmg.
MODIFY table it_pbid_pbim TRANSPORTING pirrednqty.
IF ( it_pbid_pbim-laeda IN s_laeda AND it_pbid_pbim-aenam EQ 'Abbau-' ).
MOVE-CORRESPONDING it_pbid_pbim TO it_allrows.
append it_allrows.
ELSEIF NOT it_pbid_pbim-laeda IN s_laeda.
delete it_pbid_pbim index sy-tabix. " debug here
ENDIF.
ENDSELECT.
ENDLOOP.
sort it_allrows by matnr laeda status.
**********************************ALL ROWS************************
loop at it_allrows.
line-matnr = it_allrows-matnr.
line-laeda = it_allrows-laeda.
line-plnmg = it_allrows-plnmg.
line-dbmng = it_allrows-dbmng.
line-mblnr = it_allrows-mblnr.
line-pbfnr = it_allrows-pbfnr.
line-maktx = it_allrows-maktx.
line-aenam = it_allrows-aenam.
line-erfmg = it_allrows-erfmg.
line-budat = it_allrows-budat.
line-bdzei = it_allrows-bdzei.
line-werks = it_allrows-werks.
line-pirrednqty = it_allrows-pirrednqty.
line-diff = it_allrows-diff.
line-slno = it_allrows-slno.
line-status = it_allrows-status.
collect line into itfinal1.
endloop.
loop at itfinal1 into line.
collect line into itfinal.
write: / line-matnr, line-plnmg, line-dbmng, line-laeda, line-status.
endloop.
skip 4.
loop at itfinal into line.
write: / line-matnr, line-plnmg, line-dbmng, line-laeda, line-status.
endloop.
break-point. -
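Setting the report mechanics aside, the original requirement is a lookup from PBID (MATNR to BDZEI) followed by a sum of PBHI-PLNMG per MATNR and LAEDA. A compact Python model of that aggregation, using the sample rows from the question (illustrative only, not ABAP):

```python
# PBID: material -> list of independent requirement pointers (BDZEI)
pbid = {"p4471": ["457", "1002", "2309", "2493"]}
# PBHI rows: (bdzei, laeda, plnmg)
pbhi = [("1002", "06.08.2004", 0.0), ("1002", "06.08.2004", 83.0),
        ("457", "07.08.2004", 12.0), ("457", "07.08.2004", 24.0)]

totals = {}  # (matnr, laeda) -> sum of plnmg, i.e. what COLLECT would build
for matnr, bdzeis in pbid.items():
    for bdzei, laeda, plnmg in pbhi:
        if bdzei in bdzeis:
            key = (matnr, laeda)
            totals[key] = totals.get(key, 0) + plnmg
print(totals)  # {('p4471', '06.08.2004'): 83.0, ('p4471', '07.08.2004'): 36.0}
```

This reproduces the required output from the question: p4471 / 06.08.2004 / 83 and p4471 / 07.08.2004 / 36.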
Issue while using SUNOPSIS MEMORY ENGINE (High Priority)
Hi Gurus,
While using SUNOPSIS MEMORY ENGINE to generate a .csv file using the database table as a source it is throwing an error in the operator like.
ODI-1228: Task SrcSet0 (Loading) fails on the target SUNOPSIS ENGINE connection SUNOPSIS MEMORY ENGINE.
Caused By: java.sql.SQLException: unknown token
(LKM used : LKM Sql to Sql.
IKM used : IKM Sql to File Append.)
Can you please help me with this ASAP, as it has become a showstopper preventing me from proceeding further?
Any Help will be greatly Appreciable.
Many Thanks,
Pavan
Edited by: Pavan. on Jul 11, 2012 10:22 AM
Hi All,
The issue got resolved successfully.
The solution is
We need to change the work table prefixes E$_, I$_, J$_, ... to E_, I_, J_, ... (i.e. remove the '$' symbol) in the PHYSICAL SCHEMA of the SUNOPSIS MEMORY ENGINE, as per the information given below.
When running interfaces and using a XML or Complex File schema as the staging area, the "Unknown Token" error appears. This error is caused by the updated HSQL version (2.0). This new version of HSQL requires that table names containing a dollar sign ($) are surrounded by quotes. Temporary tables (Loading, Integration, and so forth) that are created by the Knowledge Modules do not meet this requirement on Complex Files and HSQL technologies.
As a workaround, edit the Physical Schema definitions to remove the dollar sign ($) from all the Work Tables Prefixes. Existing scenarios must be regenerated with these new settings.
It worked fine for me.
Thanks ,
Pavan Kumar -
Mapping issues while using MapwithDefault Node function for Idoc
Hi Experts,
We are facing issues while trying to generate nodes on the receiver side even though they do not exist on the sender side.
We are using MATMAS05 and we want the nodes E1MARCM and E1MARDM to be generated in the target structure.
the structure is like this
Matmas05
idoc
E1maram
segment
E1marcm
segment
msgfn
.....other fields
E1mardm (can be many segments)
segment
lgort
..other fields
E1mpgdm
segment
...e1mpgdm fields
the mapping has been done like:
e1marcm -mapwithdefault-e1marcm
constant -segment
werks-mapwithdefault-werks
other fields(one to one mapping)
e1mardm-mapwith default-e1mardm
lgort-mapwithdefault(value 1000)-lgort
other fields -one to one mapping
Now the problem we are facing is that when E1MARDM does not exist in the source structure, the values from another E1MARDM node (which does exist) get written into it.
We want only LGORT to be 1000, and the segment should be populated on the target side, like this:
e1mardm
segment
lgort value 1000
Could you please assist in solving this issue and give your valuable suggestions as how could we handle it.
Thanks and regards,
Jyoti
You will achieve this using a UDF. Map it like below:
e1mardm-mapwith default(Value Constant)-e1mardm
lgort (A) ---
e1mardm-mapwith default(Value Exit)u2014(EqualsS) (B)- (UDF1)- lgort
Constant(Value Exit)---
Constant(Value 1000)(C)---
Other one to one mapping fields also you should use one more UDF.
Other one to one fields (A) ---
e1mardm-mapwith default(Value Exit)u2014(EqualsS) (B) - (UDF2)- Target
Constant(Value Exit)---
UDF1:
Arguments A, B, C; select queue.
// write your code here
int j = 0;
for (int i = 0; i < a.length; i++) {
    if (b[j].equals("true")) {
        result.addValue(c[0]);
        i--;
        result.addValue(ResultList.CC);
    } else {
        result.addValue(a[i]);
    }
    j++;
}
UDF2:
Arguments A, B; select queue.
// write your code here
int j = 0;
for (int i = 0; i < a.length; i++) {
    if (b[j].equals("true")) {
        i--;
        result.addValue(ResultList.CC);
    } else {
        result.addValue(a[i]);
    }
    j++;
}
-
Authorization issue while using business content objects
Hi all,
I am getting an authorization error while loading from DSO to cube (for standard business content objects only). I am also not able to access the data in BEx reports from the standard cubes or DSOs.
The user is assigned SAP_ALL.
But there is no such issue while accessing user-defined DSOs or cubes.
Any solutions will be helpful.
Regards,
Varma
Hi,
Have you checked the error through transaction SU53? There you should see which authorization the user is missing.
Regards, Federico -
I am trying to learn HANA on my own. I have product ID, product name, delivery date, and gross amount in my calculation view. I am trying to create calculated columns where I need the gross amount in two columns based on the delivery date. The delivery date values are 2012 and 2013, so I have created two columns, GROSSAMOUNT_2012 and GROSSAMOUNT_2013. If the delivery date is 4 Dec 2012, I want the gross amount value in column GROSSAMOUNT_2012, and GROSSAMOUNT_2013 should be blank. I have written an expression like this:
if("Deliverydate" <= longdate(2012-12-04),"Grossamount","0")
It looks like this is wrong: I am getting the text Grossamount rather than the values for that field in my output. Can anyone help me, please?
Hi Chandra,
I am trying to get the same result using SQL script and CE functions. I have written the following code:
select A."PRODUCTID",
E."TEXT" as "PRODUCTNAME",
C."COUNTRY",
D."DELIVERYDATE",
Sum(D."GROSSAMOUNT") as "GROSSAMOUNT"
from "SAP_HANA_DEMO"."sap.hana.democontent.epm.data::EPM.MasterData.Products" as A
inner join "SAP_HANA_DEMO"."sap.hana.democontent.epm.data::EPM.MasterData.BusinessPartner" as B
on A."SUPPLIERID" = B."PARTNERID"
inner join "SAP_HANA_DEMO"."sap.hana.democontent.epm.data::EPM.MasterData.Addresses" as C
on B."ADDRESSID" = C."ADDRESSID"
inner join "SAP_HANA_DEMO"."sap.hana.democontent.epm.data::EPM.Purchase.Item" as D
on A."PRODUCTID" = D."PRODUCTID"
inner join "SAP_HANA_DEMO"."sap.hana.democontent.epm.data::EPM.Util.Texts" as E
on A."NAMEID" = E."TEXTID"
GROUP BY A."PRODUCTID",E."TEXT",C."COUNTRY",D."DELIVERYDATE";
This is working fine, but I want to split the gross amount based on current year and last year. Any idea how to do this?
In a calculation view using script, can we use IF and CASE statements?
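On the year-split question: what is wanted is a conditional aggregation, one output column per year that each sums GROSSAMOUNT only for rows of that year (in SQLScript this would typically be a CASE expression inside SUM). The logic, sketched in Python with made-up rows (illustrative only; column names taken from the post):

```python
from datetime import date

# (deliverydate, grossamount) rows -- invented sample data
rows = [(date(2012, 12, 4), 100.0), (date(2013, 3, 1), 50.0),
        (date(2012, 6, 30), 25.0)]

# Conditional aggregation: route each amount into a per-year bucket.
grossamount_2012 = sum(amt for d, amt in rows if d.year == 2012)
grossamount_2013 = sum(amt for d, amt in rows if d.year == 2013)
print(grossamount_2012, grossamount_2013)  # 125.0 50.0
```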
PLSQL Error while using collections dATABASE:10G
Hi,
I am getting the below error while compiling the code below:
Error: DML statement without BULK In-BIND cannot be used inside FORALL
Could you suggest a fix?
create or replace PROCEDURE V_ACCT_MTH ( P_COMMIT_INTERVAL NUMBER DEFAULT 10000)
is
CURSOR CUR_D_CR_ACCT_MTH
IS
SELECT * FROM D_ACCT_MTH;
TYPE l_rec_type IS TABLE OF CUR_D_CR_ACCT_MTH%ROWTYPE
INDEX BY PLS_INTEGER;
v_var_tab l_rec_type;
v_empty_tab l_rec_type;
v_error_msg VARCHAR2(80);
v_err_code VARCHAR2(30);
V_ROW_CNT NUMBER :=0;
--R_DATA NUMBER :=1;
BEGIN
OPEN CUR_D_CR_ACCT_MTH;
v_var_tab := v_empty_tab;
LOOP
FETCH CUR_D_CR_ACCT_MTH BULK COLLECT INTO v_var_tab LIMIT P_COMMIT_INTERVAL;
EXIT WHEN v_var_tab.COUNT=0;
FORALL R_DATA IN 1..v_var_tab.COUNT
INSERT INTO ACCT_F_ACCT_MTH
DATE_KEY
,ACCT_KEY
,P_ID
,ORG_KEY
,FDIC_KEY
,BAL
,BAL1
,BAL2
,BAL3
,BAL4
,BAL5
,BAL6
,BAL7
,BAL8
,BAL9
,BAL10
,BAL11
,BAL12
,BAL13
,BAL14
,BAL15
VALUES
DATE_KEY(R_DATA)
,ACCT_KEY(R_DATA)
,P_ID(R_DATA)
,ORG_KEY(R_DATA)
,FDIC_KEY(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
,BAL(R_DATA)
COMMIT;
END LOOP;
CLOSE CUR_D_CR_ACCT_MTH;
EXCEPTION
WHEN OTHERS THEN
v_error_msg:=substr(sqlerrm,1,50);
v_err_code :=sqlcode;
DBMS_OUTPUT.PUT_LINE(v_error_msg,v_err_code);
END V_ACCT_MTH;
931832 wrote:
Here I am using the above method with FORALL because of the large volume of data.
That is a FLAWED approach. Always.
FORALL is not suited to "move/copy" large amounts of data from one table to another.
Any suggestion?
Use only SQL. It is faster. It has fewer overheads. It can execute in parallel.
So execute it in parallel to move/copy that data. You can roll this manually via the DBMS_PARALLEL_EXECUTE interface. Simplistic example:
declare
taskName varchar2(30) default 'PQ-task-1';
parallelSql varchar2(1000);
begin
--// create trask
DBMS_PARALLEL_EXECUTE.create_task( taskName );
--// chunk the table by rowid ranges
DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(
task_name => taskName,
table_owner => user,
table_name => 'D_ACCT_MNTH',
by_row => true,
chunk_size => 100000
--// create insert..select statement to copy a chunk of rows
parallelSql := 'insert into acct_f_acct_mth select * from d_acct_mnth
where rowid between :start_id and :end_id';
--// run the task using 5 parallel processes
DBMS_PARALLEL_EXECUTE.Run_Task(
task_name => taskName,
sql_stmt => parallelSql,
language_flag => DBMS_SQL.NATIVE,
parallel_level => 5 );
--// wait for it to complete
while DBMS_PARALLEL_EXECUTE.task_status( taskName ) != DBMS_PARALLEL_EXECUTE.Finished loop
DBMS_LOCK.Sleep(10);
end loop;
--// remove task
DBMS_PARALLEL_EXECUTE.drop_task( taskName );
end;
/
Details in the Oracle® Database PL/SQL Packages and Types Reference guide.
For 10g, the EXACT SAME approach can be used - by determining the rowid chunks/ranges via SQL and then manually running parallel processes as DBMS_JOB. See {message:id=1108593} for details. -
Network speed issues while using osx
this may seem a little bit strange, and i can't make any sense out of it but if anyone can figure out what my problem might be it would be greatly appreciated.
I have an older dual-OS (Win XP, Leopard) MBP and have had a lot of problems with hardware/software (OS X) in the past, and have ended up, to my dismay, almost exclusively using Windows.
Anyway, just yesterday I decided it was time to give OS X another chance. Besides it hanging constantly while doing simple tasks, I have noticed my network speed is drastically reduced.
I ran a test on speedtest.net and ended up with a 500 kbps average download speed and 70 kbps upload speed, with the closest server in my town at 800 ms ping.
I straight away restarted into Windows and ran the test again, and achieved 13500 kbps download and 600 kbps upload, with a ping of 66 ms.
NOTHING changed between the two tests. The router was left as it was, no other computers were switched on, and nothing was downloading.
In the past when using OS X I have had frequent connection-dropping issues, but as I run wireless I always just attributed it to my router, although since using Windows exclusively this has also ceased.
I have looked through AirPort settings and cannot find anything that looks like it may be the problem, although I'm not as experienced at getting into the back-end settings on OS X.
The part that confuses me the most is the ping. Latency generally increases with distance, not so much with speed. I don't understand how the same server situated nearby can give 66 ms in Windows and 800 ms in OS X.
If anyone has any ideas, I thank you in advance.
P.S. I have OS X 10.5.4 and was updating to 10.5.5, but the slow download speeds caused me to create this post. The update notes for 10.5.5 do mention an increase in network performance; if that solves this then I apologize, although I doubt this drastic speed difference could have existed for so long if everything were set up correctly.
You are certainly not alone. I have the same problem. Download speed seems to be limited to 500 KB/s (any program, any service) versus 2.18 MB/s using Windows. I tried two different routers with the same results. Even downloading the same files from RapidShare led to this. Very strange.
This has nothing to do with the DNS server. My connection speed is pretty fast. With and without the recommended OpenDNS server settings. I just re-installed Leopard and updated to 10.5.6. Nothing changed.
Seems like something is throttling the max download speed. Odd! -
UIImagePickerController Custom Overlay Black Screen issue while using Video trimming controller
Hello Everyone ,
I have a problem while opening the camera in recording mode using UIImagePickerController with a custom overlay; it opens properly the first time. But when I push to the next view controller and return again to open the camera, the camera displays a black screen for some time.
The logic inside that view controller is for video trimming, for which I have used the "SAVideoRangeSlider" control. SAVideoRangeSlider gets all the thumbnail images from the video I have assigned. It uses the AVAssetImageGenerator class provided by Apple to get image frames from videos, and it runs in the background using its handler method.
Handler Method ::
[self.imageGenerator generateCGImagesAsynchronouslyForTimes:times
    completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime,
                        AVAssetImageGeneratorResult result, NSError *error) {
        // ... thumbnail frames are collected here ...
    }];
I have been nilling all the objects related to the image generator class when I come back to the camera screen; still the black-screen issue occurs while opening the camera.
This issue occurs in iOS 7 but not in iOS 6.
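For reference, a hedged sketch of one thing to try on top of nilling the generator: AVAssetImageGenerator exposes cancelAllCGImageGeneration, so cancelling any in-flight thumbnail requests before re-presenting the camera may free the capture pipeline sooner (the method name is real; placing it in viewWillDisappear: is an assumption about this app's flow):

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch: stop outstanding thumbnail generation before leaving the
// trimming screen, rather than only setting the generator to nil.
- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    [self.imageGenerator cancelAllCGImageGeneration]; // cancel pending async requests
    self.imageGenerator = nil;                        // then release the generator
}
```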
Can anyone have any idea about this? Please reply ASAP. -
Issue while using FM L_TO_CREATE_POSTING_CHANGE
Hi All,
We are facing an issue when creating a transfer order using the FM
L_TO_CREATE_POSTING_CHANGE.
We have developed a custom program to change the status (Zfield) of a
specific SU in LQUA and update its stock category. The program creates
the material document and posting change document in the background.
During this process, the badi MB_DOCUMENT_BADI~MB_DOCUMENT_UPDATE gets
triggered where we call the above FM in the background task. During TO
creation, we have also implemented the badi LE_WM_LE_QUANT to update
the quant with the specific SU which was changed by the user in the
custom program. (We get this from the custom data base table).
The TO is created for mvt types 321, 322, 343, 344, 349 and 350.
We do not pass the t_LUBQU table in the above FM so that the program
gets the updated quant from the badi. The issue is that the FM
sometimes does not process properly and ends up without creating the
TO. We are not able to get the exact scenario for which it is
failing. When we debug it, we get message S178 - Foreground processing
is required.
Can anyone let me know what is the problem here?
Thanks in advance,
Chetan
As mentioned by Madhan,
you are trying to do a chicken-and-egg thing...
You have to commit the MM posting of the BAPI to create the posting change in WM before you can convert the posting change using the FM...
In other words:
call BAPI_GOODSMVT_CREATE
COMMIT WORK AND WAIT
call L_TO_CREATE_POSTING_CHANGE
Be aware that you still might get locking problems; COMMIT WORK AND WAIT does not always handle that correctly...
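The sequence above can be sketched in ABAP. The BAPI parameter names are the standard ones; the warehouse number, GM code and posting change number below are placeholders for this scenario, and the FM's import parameters should be checked in SE37 against your release:

```abap
DATA: ls_header TYPE bapi2017_gm_head_01,
      ls_code   TYPE bapi2017_gm_code,
      lt_items  TYPE STANDARD TABLE OF bapi2017_gm_item_create,
      lt_return TYPE STANDARD TABLE OF bapiret2,
      lv_mblnr  TYPE bapi2017_gm_head_ret-mat_doc.

ls_code-gm_code = '04'.                  "transfer posting (placeholder)

* 1) Post the MM document
CALL FUNCTION 'BAPI_GOODSMVT_CREATE'
  EXPORTING
    goodsmvt_header  = ls_header
    goodsmvt_code    = ls_code
  IMPORTING
    materialdocument = lv_mblnr
  TABLES
    goodsmvt_item    = lt_items
    return           = lt_return.

* 2) Commit so the posting change notice exists in WM
COMMIT WORK AND WAIT.

* 3) Only now convert the posting change into a transfer order
CALL FUNCTION 'L_TO_CREATE_POSTING_CHANGE'
  EXPORTING
    i_lgnum = 'WH1'                      "warehouse number (placeholder)
    i_lbnum = '0000000001'.              "posting change notice (placeholder)
```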
bjorn