How to display the data monthwise in the following report

Please see the following report; I will give you the code below.
If we go to FD10N we can see the month-wise debit, credit and outstanding amount. I want to display the data month-wise in my report, i.e. if I select the first month it should display only April's debit, credit and outstanding amount.
If anyone knows the answer, please help me.
Thanks.
Dinesh
*& Report  ZCUSTOMER_GROUP
REPORT  ZCUSTOMER_GROUP NO STANDARD PAGE HEADING.
TABLES: KNA1,KNVV,KNB1,T151T,T685T,KNVI,TSKDT,BSID,T001,KNC1,KNC3.
TYPE-POOLS: SLIS.
TYPES: BEGIN OF IT_FINAL,
         BZIRK    LIKE KNVV-BZIRK,
         VKBUR    LIKE KNVV-VKBUR,
         KDGRP    LIKE KNVV-KDGRP,
         KUNNR    LIKE KNVV-KUNNR,
         VKORG    LIKE KNVV-VKORG,
         VTWEG    LIKE KNVV-VTWEG,
         SPART    LIKE KNVV-SPART,
         KTGRD    LIKE KNVV-KTGRD,
         NAME1    LIKE KNA1-NAME1,
         ORT01    LIKE KNA1-ORT01,
         STCEG    LIKE KNA1-STCEG,
         REGIO    LIKE KNA1-REGIO,
         STRAS    LIKE KNA1-STRAS,
         TELF1    LIKE KNA1-TELF1,
         WERKS    LIKE KNA1-WERKS,
         SPRAS    TYPE SPRAS,
         GFORM    LIKE KNA1-GFORM,
         LAND1    LIKE KNA1-LAND1,
         ALTKN    LIKE KNB1-ALTKN,
         AKONT    LIKE KNB1-AKONT,
         VZSKZ    TYPE KNB1-VZSKZ,
         KTEXT    LIKE T151T-KTEXT,
         TATYP    TYPE KNVI-TATYP,
         TAXKD    TYPE KNVI-TAXKD,
         VTEXT    TYPE TSKDT-VTEXT,
         UMSKZ    LIKE BSID-UMSKZ,
         DMBTR    TYPE BSID-DMBTR,
         COLL     LIKE BSID-DMBTR,
         COLL1    LIKE BSID-DMBTR,
         COLL2    LIKE BSID-DMBTR,
         COLL3    LIKE BSID-DMBTR,
         ADVANCE  LIKE BSID-DMBTR,
         DEPOSIT  LIKE BSID-DMBTR,
         AMOUNT   TYPE BSID-DMBTR,
         SHKZG    TYPE BSID-SHKZG,
         BLART    TYPE BSID-BLART,
         BUDAT    TYPE BSID-BUDAT,
         BLDAT    TYPE BSID-BLDAT,
         GJAHR    LIKE KNC1-GJAHR,
         OUTSTAND LIKE BSID-DMBTR,
         DEBIT    LIKE BSID-DMBTR,
         CREDIT   LIKE BSID-DMBTR,
         DOWN     LIKE BSID-DMBTR,
         UM01S    LIKE KNC1-UM01S,
         UM02S    LIKE KNC1-UM02S,
         UM03S    LIKE KNC1-UM03S,
         UM04S    LIKE KNC1-UM04S,
         UM05S    LIKE KNC1-UM05S,
         UM06S    LIKE KNC1-UM06S,
         UM07S    LIKE KNC1-UM07S,
         UM08S    LIKE KNC1-UM08S,
         UM09S    LIKE KNC1-UM09S,
         UM10S    LIKE KNC1-UM10S,
         UM11S    LIKE KNC1-UM11S,
         UM12S    LIKE KNC1-UM12S,
         UM13S    LIKE KNC1-UM13S,
         UM14S    LIKE KNC1-UM14S,
         UM15S    LIKE KNC1-UM15S,
         UM16S    LIKE KNC1-UM16S,
         UM01H    LIKE KNC1-UM01H,
         UM02H    LIKE KNC1-UM02H,
         UM03H    LIKE KNC1-UM03H,
         UM04H    LIKE KNC1-UM04H,
         UM05H    LIKE KNC1-UM05H,
         UM06H    LIKE KNC1-UM06H,
         UM07H    LIKE KNC1-UM07H,
         UM08H    LIKE KNC1-UM08H,
         UM09H    LIKE KNC1-UM09H,
         UM10H    LIKE KNC1-UM10H,
         UM11H    LIKE KNC1-UM11H,
         UM12H    LIKE KNC1-UM12H,
         UM13H    LIKE KNC1-UM13H,
         UM14H    LIKE KNC1-UM14H,
         UM15H    LIKE KNC1-UM15H,
         UM16H    LIKE KNC1-UM16H,
         UMSAV    LIKE KNC1-UMSAV,
         SHBKZ    LIKE KNC3-SHBKZ,
         SALDV    LIKE KNC3-SALDV,
         SOLLL    LIKE KNC3-SOLLL,
         HABNL    LIKE KNC3-HABNL,
       END OF IT_FINAL.
TYPES : BEGIN OF IT_KNVV ,
            BZIRK LIKE KNVV-BZIRK,
            VKBUR LIKE KNVV-VKBUR,
            KDGRP LIKE KNVV-KDGRP,
            KUNNR LIKE KNVV-KUNNR,
            VKORG LIKE KNVV-VKORG,
            VTWEG LIKE KNVV-VTWEG,
            SPART LIKE KNVV-SPART,
            KTGRD LIKE KNVV-KTGRD,
END OF IT_KNVV.
TYPES: BEGIN OF IT_KNA1 ,
            NAME1 LIKE KNA1-NAME1,
            ORT01 LIKE KNA1-ORT01,
            STCEG LIKE KNA1-STCEG,
            REGIO LIKE KNA1-REGIO,
            STRAS LIKE KNA1-STRAS,
            TELF1 LIKE KNA1-TELF1,
            WERKS LIKE KNA1-WERKS,
            KUNNR LIKE KNA1-KUNNR,
            GFORM LIKE KNA1-GFORM,
            SPRAS TYPE SPRAS,
            LAND1 LIKE KNA1-LAND1,
END OF IT_KNA1.
TYPES: BEGIN OF IT_KNB1 ,
        ALTKN LIKE KNB1-ALTKN,
        AKONT LIKE KNB1-AKONT,
        VZSKZ TYPE KNB1-VZSKZ,
        KUNNR LIKE KNB1-KUNNR,
END OF IT_KNB1.
TYPES: BEGIN OF IT_T151T ,
       KTEXT LIKE T151T-KTEXT,
       KDGRP LIKE T151T-KDGRP,
       SPRAS TYPE SPRAS,
  END OF IT_T151T.
DATA: BEGIN OF IT_T685T OCCURS 0,
      VTEXT LIKE T685T-VTEXT,
      spras type spras,
END OF IT_T685T.
TYPES: BEGIN OF IT_KNVI ,
      TAXKD TYPE KNVI-TAXKD,
      TATYP TYPE KNVI-TATYP,
      KUNNR TYPE KNVI-KUNNR,
END OF IT_KNVI.
TYPES: BEGIN OF IT_TSKDT,
       TAXKD TYPE TSKDT-TAXKD,
       TATYP TYPE TSKDT-TATYP,
       VTEXT TYPE TSKDT-VTEXT,
       SPRAS TYPE SPRAS,
END OF IT_TSKDT.
TYPES: BEGIN OF IT_BSID ,
      UMSKZ TYPE BSID-UMSKZ,
      DMBTR TYPE BSID-DMBTR,
      COLL LIKE BSID-DMBTR,
      COLL1 LIKE BSID-DMBTR,
      COLL2 LIKE BSID-DMBTR,
      COLL3 LIKE BSID-DMBTR,
      ADVANCE LIKE BSID-DMBTR,
      DEPOSIT LIKE BSID-DMBTR,
       SHKZG TYPE BSID-SHKZG,
       KUNNR TYPE BSID-KUNNR,
       BUDAT TYPE BSID-BUDAT,
       BLART TYPE BSID-BLART,
       OUTSTAND LIKE BSID-DMBTR,
       GJAHR LIKE BSID-GJAHR,
   END OF IT_BSID.
TYPES : BEGIN OF IT_KNC1,
           UM01S LIKE KNC1-UM01S,
           UM02S LIKE KNC1-UM02S,
           UM03S LIKE KNC1-UM03S,
           UM04S LIKE KNC1-UM04S,
           UM05S LIKE KNC1-UM05S,
           UM06S LIKE KNC1-UM06S,
           UM07S LIKE KNC1-UM07S,
           UM08S LIKE KNC1-UM08S,
           UM09S LIKE KNC1-UM09S,
           UM10S LIKE KNC1-UM10S,
           UM11S LIKE KNC1-UM11S,
           UM12S LIKE KNC1-UM12S,
           UM13S LIKE KNC1-UM13S,
           UM14S LIKE KNC1-UM14S,
           UM15S LIKE KNC1-UM15S,
           UM16S LIKE KNC1-UM16S,
           DEBIT LIKE KNC1-UM01S,
           UM01H LIKE KNC1-UM01H,
           UM02H LIKE KNC1-UM02H,
           UM03H LIKE KNC1-UM03H,
           UM04H LIKE KNC1-UM04H,
           UM05H LIKE KNC1-UM05H,
           UM06H LIKE KNC1-UM06H,
           UM07H LIKE KNC1-UM07H,
           UM08H LIKE KNC1-UM08H,
           UM09H LIKE KNC1-UM09H,
           UM10H LIKE KNC1-UM10H,
           UM11H LIKE KNC1-UM11H,
           UM12H LIKE KNC1-UM12H,
           UM13H LIKE KNC1-UM13H,
           UM14H LIKE KNC1-UM14H,
           UM15H LIKE KNC1-UM15H,
           UM16H LIKE KNC1-UM16H,
           CREDIT TYPE STRING,
        UMSAV LIKE KNC1-UMSAV,
        KUNNR LIKE KNC1-KUNNR,
        GJAHR LIKE KNC1-GJAHR,
END OF IT_KNC1.
TYPES : BEGIN OF IT_KNC3,
        SHBKZ LIKE KNC3-SHBKZ,
        SALDV LIKE KNC3-SALDV,
        SOLLL LIKE KNC3-SOLLL,
        HABNL LIKE KNC3-HABNL,
        KUNNR LIKE KNC3-KUNNR,
        GJAHR LIKE KNC3-GJAHR,
        DOWN TYPE STRING,
        DEPOSIT TYPE STRING,
END OF IT_KNC3.
*INTERNAL TABLE DECLARATION
DATA: I_FINAL TYPE STANDARD TABLE OF IT_FINAL,
      I_KNVV  TYPE STANDARD TABLE OF IT_KNVV,
      I_KNA1  TYPE STANDARD TABLE OF IT_KNA1,
      I_KNB1  TYPE STANDARD TABLE OF IT_KNB1,
      I_T151T  TYPE STANDARD TABLE OF IT_T151T,
      I_KNVI  TYPE STANDARD TABLE OF IT_KNVI,
      I_TSKDT  TYPE STANDARD TABLE OF IT_TSKDT,
      I_BSID  TYPE STANDARD TABLE OF IT_BSID,
      I_KNC1  TYPE STANDARD TABLE OF IT_KNC1,
      I_KNC3  TYPE STANDARD TABLE OF IT_KNC3.
*DECLARATION FOR THE WORK AREA
DATA: WA_FINAL TYPE IT_FINAL,
       WA_KNVV TYPE IT_KNVV,
       WA_KNA1 TYPE IT_KNA1,
       WA_KNB1 TYPE IT_KNB1,
       WA_T151T TYPE IT_T151T,
       WA_KNVI TYPE IT_KNVI,
       WA_TSKDT TYPE IT_TSKDT,
       WA_BSID TYPE IT_BSID,
       WA_KNC1 TYPE IT_KNC1,
       WA_KNC3 TYPE IT_KNC3.
* DECLARATION FOR THE FIELD CATALOG.
DATA: FIELDCATALOG TYPE SLIS_T_FIELDCAT_ALV WITH HEADER LINE,
      WA_FIELDCATALOG TYPE SLIS_FIELDCAT_ALV,
      GD_LAYOUT TYPE SLIS_LAYOUT_ALV.
DATA  GD_REPID LIKE SY-REPID.
SELECTION-SCREEN BEGIN OF BLOCK B1 WITH FRAME TITLE TEXT-001.
SELECT-OPTIONS: CUST_GRP FOR KNVV-KDGRP.
SELECT-OPTIONS: SALES_OF FOR KNVV-VKBUR.
SELECT-OPTIONS: REGION FOR KNA1-REGIO.
SELECT-OPTIONS: DATE FOR BSID-BLDAT.
SELECT-OPTIONS: YEAR FOR KNC1-GJAHR.
SELECT-OPTIONS: CUST_NO FOR KNVV-KUNNR.
SELECTION-SCREEN END OF BLOCK B1.
* DECLARATION FOR THE SUBROUTINES FOR THE BASIC LIST.
START-OF-SELECTION.
  PERFORM F_GET_I_KNVV_DATA.
  PERFORM F_GET_I_KNA1_DATA.
  PERFORM F_GET_I_KNB1_DATA.
  PERFORM F_GET_I_T151T_DATA.
*  PERFORM F_GET_IT_T685T_DATA.
  PERFORM F_GET_I_KNVI_DATA.
  PERFORM F_GET_I_TSKDT_DATA.
  PERFORM F_GET_I_BSID_DATA.
  PERFORM F_GET_I_KNC1_DATA.
  PERFORM F_GET_I_KNC3_DATA.
  PERFORM F_GET_I_FINAL_DATA.
  PERFORM DATA_BUILD.
  PERFORM DATA_CALL.
FORM F_GET_I_KNVV_DATA.
SELECT BZIRK KDGRP KTGRD KUNNR SPART VKBUR VKORG VTWEG
FROM KNVV
INTO CORRESPONDING FIELDS OF TABLE I_KNVV
WHERE KDGRP IN CUST_GRP AND VKBUR IN SALES_OF AND KUNNR IN CUST_NO.
ENDFORM.
FORM F_GET_I_KNA1_DATA.
SELECT KUNNR NAME1 ORT01 REGIO STCEG STRAS TELF1 WERKS  GFORM SPRAS LAND1
FROM KNA1
INTO CORRESPONDING FIELDS OF TABLE I_KNA1 FOR ALL ENTRIES IN I_KNVV
  WHERE KUNNR = I_KNVV-KUNNR AND  REGIO IN REGION AND LAND1 = 'IN' .
ENDFORM.
FORM F_GET_I_KNB1_DATA.
SELECT ALTKN AKONT VZSKZ KUNNR
FROM KNB1
INTO CORRESPONDING FIELDS OF TABLE I_KNB1 FOR ALL ENTRIES IN I_KNVV
WHERE KUNNR = I_KNVV-KUNNR.
  ENDFORM.
FORM F_GET_I_T151T_DATA.
SELECT KTEXT KDGRP SPRAS
FROM T151T
INTO CORRESPONDING FIELDS OF TABLE I_T151T FOR ALL ENTRIES IN I_KNVV
WHERE KDGRP = I_KNVV-KDGRP AND SPRAS = 'E' .
ENDFORM.
*FORM F_GET_IT_T685T_DATA.
*select vtext spras
*from t685t
*into CORRESPONDING FIELDS OF TABLE it_t685t for all entries in it_kna1
*where spras = it_kna1-spras and spras = 'E'.
*endform.
FORM F_GET_I_KNVI_DATA.
SELECT TAXKD TATYP KUNNR
FROM KNVI
INTO CORRESPONDING FIELDS OF TABLE I_KNVI FOR ALL ENTRIES IN I_KNVV
WHERE KUNNR = I_KNVV-KUNNR
AND ( TAXKD = '8' OR TAXKD = '3' OR TAXKD = '0' )
AND ( TATYP = 'Z1AU' OR TATYP = 'ZCST' ).
ENDFORM.
FORM F_GET_I_TSKDT_DATA.
SELECT VTEXT TAXKD SPRAS FROM
TSKDT
INTO CORRESPONDING FIELDS OF TABLE I_TSKDT FOR ALL ENTRIES IN I_KNVI
WHERE TAXKD = I_KNVI-TAXKD AND SPRAS = 'E'.
ENDFORM.
FORM F_GET_I_BSID_DATA.
*DATA: DMBTR TYPE BSID-DMBTR,
*      OUTSTAND LIKE BSID-DMBTR.
SELECT UMSKZ DMBTR SHKZG KUNNR BLART GJAHR BUDAT BLDAT
FROM BSID
INTO CORRESPONDING FIELDS OF TABLE I_BSID
FOR ALL ENTRIES IN I_KNVV
WHERE KUNNR = I_KNVV-KUNNR AND BLDAT IN DATE.
*AND  ( BLART = 'HR' OR BLART = 'DZ' OR BLART = 'P1'  )
*and umskz = 'A' or umskz = 'L' OR UMSKZ = ''
*AND SHKZG = 'S' OR SHKZG = 'H'.
ENDFORM.
FORM F_GET_I_KNC1_DATA.
SELECT UM01S UM02S UM03S UM04S UM05S UM06S UM07S UM08S UM09S UM10S UM11S UM12S UM13S UM14S UM15S UM16S
       UM01H UM02H UM03H UM04H UM05H UM06H UM07H UM08H UM09H UM10H UM11H UM12H UM13H UM14H UM15H UM16H
       UMSAV KUNNR GJAHR
FROM KNC1
INTO CORRESPONDING FIELDS OF TABLE I_KNC1
FOR ALL ENTRIES IN I_KNVV
WHERE KUNNR = I_KNVV-KUNNR AND GJAHR IN YEAR .
ENDFORM.
FORM F_GET_I_KNC3_DATA.
SELECT KUNNR GJAHR SHBKZ SALDV SOLLL HABNL
FROM KNC3
INTO CORRESPONDING FIELDS OF TABLE I_KNC3
FOR ALL ENTRIES IN I_KNVV
WHERE KUNNR = I_KNVV-KUNNR
AND ( SHBKZ = 'A' OR SHBKZ = 'L' ).
ENDFORM.
FORM F_GET_I_FINAL_DATA.
*BREAK-POINT.
LOOP AT I_KNVV INTO WA_KNVV.
CLEAR WA_FINAL.
WA_FINAL-BZIRK = WA_KNVV-BZIRK.
WA_FINAL-VKBUR = WA_KNVV-VKBUR.
WA_FINAL-KDGRP = WA_KNVV-KDGRP.
WA_FINAL-KUNNR = WA_KNVV-KUNNR.
WA_FINAL-VKORG = WA_KNVV-VKORG.
WA_FINAL-VTWEG = WA_KNVV-VTWEG.
WA_FINAL-SPART = WA_KNVV-SPART.
WA_FINAL-KTGRD = WA_KNVV-KTGRD.
READ TABLE I_KNA1 INTO WA_KNA1 WITH KEY KUNNR = WA_KNVV-KUNNR.
IF SY-SUBRC = 0.
  WA_FINAL-NAME1 = WA_KNA1-NAME1.
  WA_FINAL-ORT01 = WA_KNA1-ORT01.
  WA_FINAL-STCEG = WA_KNA1-STCEG.
  WA_FINAL-REGIO = WA_KNA1-REGIO.
  WA_FINAL-STRAS = WA_KNA1-STRAS.
  WA_FINAL-TELF1 = WA_KNA1-TELF1.
  WA_FINAL-WERKS = WA_KNA1-WERKS.
  WA_FINAL-GFORM = WA_KNA1-GFORM.
  WA_FINAL-LAND1 = WA_KNA1-LAND1.
ENDIF.
READ TABLE I_KNB1 INTO WA_KNB1 WITH KEY KUNNR = WA_KNVV-KUNNR.
IF SY-SUBRC = 0.
WA_FINAL-ALTKN = WA_KNB1-ALTKN.
WA_FINAL-AKONT = WA_KNB1-AKONT.
WA_FINAL-VZSKZ = WA_KNB1-VZSKZ.
ENDIF.
READ TABLE I_T151T INTO WA_T151T WITH KEY KDGRP = WA_KNVV-KDGRP .
IF SY-SUBRC = 0.
WA_FINAL-KTEXT = WA_T151T-KTEXT.
ENDIF.
READ TABLE I_KNVI INTO WA_KNVI  WITH KEY KUNNR = WA_KNVV-KUNNR.
IF SY-SUBRC = 0.
WA_FINAL-TAXKD = WA_KNVI-TAXKD.
WA_FINAL-TATYP = WA_KNVI-TATYP.
ENDIF.
READ TABLE I_TSKDT INTO WA_TSKDT WITH KEY TAXKD = WA_KNVI-TAXKD.
IF SY-SUBRC = 0.
  WA_FINAL-VTEXT = WA_TSKDT-VTEXT.
ENDIF.
*UP TO THIS POINT REPORT IS OK
*read table I_bsid INTO WA_BSID with key kunnr = WA_kna1-kunnr
*                           UMSKZ = 'A' .
*if sy-subrc = 0.
*WA_final-DOWN = WA_bsid-dmbtr * -1.
**modify i_final.
**clear it_bsid.
*endif.
*read table i_bsid INTO WA_BSID with key kunnr = WA_kna1-kunnr
*                           UMSKZ = 'L'  .
*if sy-subrc = 0.
*WA_final-deposit = WA_bsid-dmbtr * -1.
*endif.
*read table i_bsid INTO WA_BSID with key kunnr = WA_kna1-kunnr
*                           blart = 'DZ' .
**IF BLART = 'HR' OR BLART = 'DZ' OR BLART = 'P1'.
*if sy-subrc = 0.
*WA_final-coll1 = WA_bsid-dmbtr * -1.
*ENDIF.
*read table i_bsid INTO WA_BSID with key kunnr = WA_kna1-kunnr
*                           blart = 'HR' .
**IF BLART = 'HR' OR BLART = 'DZ' OR BLART = 'P1'.
*if sy-subrc = 0.
*WA_final-coll2 = WA_bsid-dmbtr * -1.
*ENDIF.
*read table i_bsid INTO WA_BSID with key kunnr = WA_kna1-kunnr
*                           blart = 'P1' .
**IF BLART = 'HR' OR BLART = 'DZ' OR BLART = 'P1'.
*if sy-subrc = 0.
*WA_final-coll3 = WA_bsid-dmbtr * -1.
*ENDIF.
*read table i_bsid INTO WA_BSID with key kunnr = WA_kna1-kunnr.
**IF BLART = 'HR' OR BLART = 'DZ' OR BLART = 'P1'.
*if sy-subrc = 0.
*WA_final-coll = ( WA_final-coll1 + WA_final-coll2 + WA_final-coll3 ) - ( wa_final-deposit ) * -1.
*ENDIF.
*read table i_bsid INTO WA_BSID with key kunnr = WA_kna1-kunnr
*                           UMSKZ = 'L'  .
*if sy-subrc = 0.
*WA_final-deposit = WA_bsid-dmbtr * -1.
*endif.
READ TABLE I_KNC1 INTO WA_KNC1 WITH KEY KUNNR = WA_KNVV-KUNNR.
IF SY-SUBRC = 0.
  WA_FINAL-UM01S = WA_KNC1-UM01S.
  WA_FINAL-UM02S = WA_KNC1-UM02S.
  WA_FINAL-UM03S = WA_KNC1-UM03S.
  WA_FINAL-UM04S = WA_KNC1-UM04S.
  WA_FINAL-UM05S = WA_KNC1-UM05S.
  WA_FINAL-UM06S = WA_KNC1-UM06S.
  WA_FINAL-UM07S = WA_KNC1-UM07S.
  WA_FINAL-UM08S = WA_KNC1-UM08S.
  WA_FINAL-UM09S = WA_KNC1-UM09S.
  WA_FINAL-UM10S = WA_KNC1-UM10S.
  WA_FINAL-UM11S = WA_KNC1-UM11S.
  WA_FINAL-UM12S = WA_KNC1-UM12S.
  WA_FINAL-UM13S = WA_KNC1-UM13S.
  WA_FINAL-UM14S = WA_KNC1-UM14S.
  WA_FINAL-UM15S = WA_KNC1-UM15S.
  WA_FINAL-UM16S = WA_KNC1-UM16S.
  WA_FINAL-UM01H = WA_KNC1-UM01H.
  WA_FINAL-UM02H = WA_KNC1-UM02H.
  WA_FINAL-UM03H = WA_KNC1-UM03H.
  WA_FINAL-UM04H = WA_KNC1-UM04H.
  WA_FINAL-UM05H = WA_KNC1-UM05H.
  WA_FINAL-UM06H = WA_KNC1-UM06H.
  WA_FINAL-UM07H = WA_KNC1-UM07H.
  WA_FINAL-UM08H = WA_KNC1-UM08H.
  WA_FINAL-UM09H = WA_KNC1-UM09H.
  WA_FINAL-UM10H = WA_KNC1-UM10H.
  WA_FINAL-UM11H = WA_KNC1-UM11H.
  WA_FINAL-UM12H = WA_KNC1-UM12H.
  WA_FINAL-UM13H = WA_KNC1-UM13H.
  WA_FINAL-UM14H = WA_KNC1-UM14H.
  WA_FINAL-UM15H = WA_KNC1-UM15H.
  WA_FINAL-UM16H = WA_KNC1-UM16H.
  WA_FINAL-DEBIT  = WA_KNC1-UM01S + WA_KNC1-UM02S + WA_KNC1-UM03S + WA_KNC1-UM04S +
                    WA_KNC1-UM05S + WA_KNC1-UM06S + WA_KNC1-UM07S + WA_KNC1-UM08S +
                    WA_KNC1-UM09S + WA_KNC1-UM10S + WA_KNC1-UM11S + WA_KNC1-UM12S +
                    WA_KNC1-UM13S + WA_KNC1-UM14S + WA_KNC1-UM15S + WA_KNC1-UM16S.
  WA_FINAL-CREDIT = WA_KNC1-UM01H + WA_KNC1-UM02H + WA_KNC1-UM03H + WA_KNC1-UM04H +
                    WA_KNC1-UM05H + WA_KNC1-UM06H + WA_KNC1-UM07H + WA_KNC1-UM08H +
                    WA_KNC1-UM09H + WA_KNC1-UM10H + WA_KNC1-UM11H + WA_KNC1-UM12H +
                    WA_KNC1-UM13H + WA_KNC1-UM14H + WA_KNC1-UM15H + WA_KNC1-UM16H.
ENDIF.
READ TABLE I_KNC3 INTO WA_KNC3 WITH KEY KUNNR = WA_KNVV-KUNNR
                                        GJAHR = WA_KNC1-GJAHR
                                        SHBKZ = 'A'.
IF SY-SUBRC = 0.
  WA_FINAL-SALDV = WA_KNC3-SALDV.
  WA_FINAL-SOLLL = WA_KNC3-SOLLL.
  WA_FINAL-HABNL = WA_KNC3-HABNL.
  WA_FINAL-DOWN  = WA_KNC3-SALDV + WA_KNC3-SOLLL + ( WA_KNC3-HABNL * -1 ).
ENDIF.
READ TABLE I_KNC3 INTO WA_KNC3 WITH KEY KUNNR = WA_KNVV-KUNNR
                                        GJAHR = WA_KNC1-GJAHR
                                        SHBKZ = 'L'.
IF SY-SUBRC = 0.
  WA_FINAL-SALDV   = WA_KNC3-SALDV.
  WA_FINAL-SOLLL   = WA_KNC3-SOLLL.
  WA_FINAL-HABNL   = WA_KNC3-HABNL.
  WA_FINAL-DEPOSIT = WA_KNC3-SALDV + WA_KNC3-SOLLL + ( WA_KNC3-HABNL * -1 ).
ENDIF.
READ TABLE I_KNC3 INTO WA_KNC3 WITH KEY KUNNR = WA_KNVV-KUNNR
                                        GJAHR = WA_KNC1-GJAHR.
IF SY-SUBRC = 0.
  WA_FINAL-COLL = WA_FINAL-DOWN - WA_FINAL-DEPOSIT.
ENDIF.
APPEND WA_FINAL TO I_FINAL.
ENDLOOP.
ENDFORM.
FORM DATA_BUILD.
  CLEAR WA_FIELDCATALOG.
  WA_FIELDCATALOG-FIELDNAME = 'NAME1'.
  WA_FIELDCATALOG-SELTEXT_M = 'NAME'.
  WA_FIELDCATALOG-COL_POS   = 1.
  WA_FIELDCATALOG-OUTPUTLEN = 15.
  WA_FIELDCATALOG-EMPHASIZE = 'X'. " COLOR OF THIS COLUMN
  APPEND WA_FIELDCATALOG TO FIELDCATALOG.
  CLEAR WA_FIELDCATALOG.
  WA_FIELDCATALOG-FIELDNAME = 'ORT01'.
  WA_FIELDCATALOG-SELTEXT_M = 'CITY'.
  WA_FIELDCATALOG-COL_POS   = 2.
  WA_FIELDCATALOG-OUTPUTLEN = 15.
  WA_FIELDCATALOG-EMPHASIZE = 'X'. " COLOR OF THIS COLUMN
  APPEND WA_FIELDCATALOG TO FIELDCATALOG.
  CLEAR WA_FIELDCATALOG.
  WA_FIELDCATALOG-FIELDNAME = 'STCEG'.
  WA_FIELDCATALOG-SELTEXT_M = 'VAT. REG. NO.'.
  WA_FIELDCATALOG-COL_POS   = 3.
  WA_FIELDCATALOG-OUTPUTLEN = 15.
  WA_FIELDCATALOG-EMPHASIZE = 'X'. " COLOR OF THIS COLUMN
  APPEND WA_FIELDCATALOG TO FIELDCATALOG.
  CLEAR WA_FIELDCATALOG.
  WA_FIELDCATALOG-FIELDNAME = 'REGIO'.
  WA_FIELDCATALOG-SELTEXT_M = 'REGION'.
  WA_FIELDCATALOG-COL_POS   = 4.
  WA_FIELDCATALOG-OUTPUTLEN = 15.
  WA_FIELDCATALOG-EMPHASIZE = 'X'. " COLOR OF THIS COLUMN
  APPEND WA_FIELDCATALOG TO FIELDCATALOG.
  CLEAR WA_FIELDCATALOG.
  WA_FIELDCATALOG-FIELDNAME = 'STRAS'.
  WA_FIELDCATALOG-SELTEXT_M = 'STREET'.
  WA_FIELDCATALOG-COL_POS   = 5.
  WA_FIELDCATALOG-OUTPUTLEN = 15.
  WA_FIELDCATALOG-EMPHASIZE = 'X'. " COLOR OF THIS COLUMN
  APPEND WA_FIELDCATALOG TO FIELDCATALOG.
  CLEAR WA_FIELDCATALOG.
  WA_FIELDCATALOG-FIELDNAME = 'TELF1'.
  WA_FIELDCATALOG-SELTEXT_M = 'TELEPHONE'.
  WA_FIELDCATALOG-COL_POS   = 6.
  WA_FIELDCATALOG-OUTPUTLEN = 15.
  WA_FIELDCATALOG-EMPHASIZE = 'X'. " COLOR OF THIS COLUMN
  APPEND WA_FIELDCATALOG TO FIELDCATALOG.
  CLEAR WA_FIELDCATALOG.
  WA_FIELDCATALOG-FIELDNAME = 'WERKS'.
  WA_FIELDCATALOG-SELTEXT_M = 'PLANT'.
  WA_FIELDCATALOG-COL_POS   = 7.
  WA_FIELDCATALOG-OUTPUTLEN = 15.
  WA_FIELDCATALOG-EMPHASIZE = 'X'. " COLOR OF THIS COLUMN
  APPEND WA_FIELDCATALOG TO FIELDCATALOG.
  CLEAR WA_FIELDCATALOG.
  WA_FIELDCATALOG-FIELDNAME = 'BZIRK'.
  WA_FIELDCATALOG-SELTEXT_M = 'SALES DISTRICT'.
  WA_FIELDCATALOG-COL_POS   = 8.
  WA_FIELDCATALOG-OUTPUTLEN = 15.
  WA_FIELDCATALOG-EMPHASIZE = 'X'. " COLOR OF THIS COLUMN
  APPEND WA_FIELDCATALOG TO FIELDCATALOG.
  CLEAR WA_FIELDCATALOG.
  WA_FIELDCATALOG-FIELDNAME = 'VKBUR'.
  WA_FIELDCATALOG-SELTEXT_M = 'SALES OFFICE'.

Hi Dinesh,
Please use INTO TABLE instead of INTO CORRESPONDING FIELDS OF TABLE, and use FOR ALL ENTRIES based on relevance. When I set a breakpoint on the SELECT queries they took a long time to execute, because you did not use FOR ALL ENTRIES in the select query. Also, your READ statement
READ TABLE I_KNA1 INTO WA_KNA1 WITH KEY KUNNR = WA_KNVV-KUNNR.
is reading the wrong KUNNR value, and it also takes a long time to execute.
Use SORT wherever required.
Use FOR ALL ENTRIES, and check whether the driving table is initial or not before the SELECT.
Use a SY-SUBRC check wherever required.
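Regarding the original month-wise requirement itself, one possible approach is sketched below: KNC1 already holds the period totals UM01S..UM16S (debit) and UM01H..UM16H (credit), so a posting-period input could be used to pick the matching component dynamically. This is only a rough sketch; the parameter P_PERIO, LV_FNAME, LV_DEBIT, LV_CREDIT and <LV_AMOUNT> are illustrative names that are not part of the original program, and the ASSIGN part would sit inside the loop in F_GET_I_FINAL_DATA after WA_KNC1 has been read.
PARAMETERS: P_PERIO(2) TYPE N DEFAULT '01'.      " posting period 01..16

DATA: LV_FNAME(10) TYPE C,
      LV_DEBIT     LIKE KNC1-UM01S,
      LV_CREDIT    LIKE KNC1-UM01H.
FIELD-SYMBOLS: <LV_AMOUNT> TYPE ANY.

* Debit of the selected period: component UMnnS of the KNC1 work area
CONCATENATE 'UM' P_PERIO 'S' INTO LV_FNAME.
ASSIGN COMPONENT LV_FNAME OF STRUCTURE WA_KNC1 TO <LV_AMOUNT>.
IF SY-SUBRC = 0.
  LV_DEBIT = <LV_AMOUNT>.
ENDIF.

* Credit of the selected period: component UMnnH of the KNC1 work area
CONCATENATE 'UM' P_PERIO 'H' INTO LV_FNAME.
ASSIGN COMPONENT LV_FNAME OF STRUCTURE WA_KNC1 TO <LV_AMOUNT>.
IF SY-SUBRC = 0.
  LV_CREDIT = <LV_AMOUNT>.
ENDIF.

* The outstanding amount for the selected period could then be derived as
* LV_DEBIT - LV_CREDIT, analogous to WA_FINAL-DEBIT / WA_FINAL-CREDIT.
The same period value could also drive the field catalog, so that only the selected month's columns appear in the ALV output.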

Similar Messages

  • How to display a date also in the alternate calendar format beside the main region calendar format?

    In case I defined an alternate calendar and I want to display the date from the main calendar and also from the alternate calendar, how can I do it?
    keren tsur

    Hi,
    First convert the date to YYYYMMDD using the FM
    CONVERSION_EXIT_PDATE_INPUT
    and then pass the date to this FM:
    CONVERSION_EXIT_SDATE_OUTPUT
    Import parameters               Value
    INPUT                           20070220
    Export parameters               Value
    OUTPUT                          20.FEB.2007
    regards
    Shiva
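    A minimal sketch of the call pattern described above (variable names are illustrative; check which conversion exits are available in your system):
    DATA: LV_DATE_INT TYPE SY-DATUM VALUE '20070220',   " internal YYYYMMDD
          LV_DATE_EXT(12) TYPE C.                       " e.g. 20.FEB.2007

    CALL FUNCTION 'CONVERSION_EXIT_SDATE_OUTPUT'
      EXPORTING
        input  = LV_DATE_INT
      IMPORTING
        output = LV_DATE_EXT.

    WRITE: / LV_DATE_EXT.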

  • How to display group data only when the particular group is clicked

    Hi friends,
    I want to design my report as follows:
    Data is grouped by country, and for each country it is showing details for that country. I need to find out a way to display all group names first.  E.g.
    Argentina
    Aruba
    Australia
    And on click of particular country name it should display its details below it
    e.g.
    -Argentina
         BBB            Mendoza          123456
    +Aruba
    +Australia
    Has anyone done that before? Is it possible to achieve it through Crystal Reports Designer (2008)? If yes, then how?
    A prompt reply would be appreciated as I need this information urgently.
    Thanks.

    Thanks Jehanzeb,
    The sample you suggested did not solve my problem since it opens the group data in a new window.
    My question is: can we show/hide group data by clicking on that particular group (under that group name)?
    e.g.
    ->(initial display - only groups)
    + Australia
    + America
    + Bhutan
    ->(on clicking a group)
    + Australia
    \- America
    abc    xyx    12213213    wqe9090
    dsd    dcv     90eur90e    ifjjdioifdoi
    + Bhutan
    In short, I am looking for on-demand display of records grouped by some field and the expansion of data must be done in the same page.

  • I scanned an old photo but the date displayed is today, I want the date to be the date the photo was taken.  How do I change the date of the photo in iPhoto 11?

    I scanned an old photo but the date displayed is today, I want the date to be the date the photo was taken.  How do I change the date of the photo in iPhoto 11?

    There are two kinds of metadata involved when you consider jpeg or other image file.
    One is the file data. This is what the Finder shows. This tells you nothing about the contents of the file, just the File itself.
    The problem with File metadata is that it can easily change as the file is moved from place to place or exported, e-mailed, uploaded etc.
    Photographs also have both Exif and IPTC metadata. The date and time that your camera snapped the photograph is recorded in the Exif metadata. Regardless of what the file date says, this is the actual time recorded by the camera.
    Photo applications like iPhoto, Aperture, Lightroom, Picasa, Photoshop etc get their date and time from the Exif metadata.
    When you export from iPhoto to the Finder, a new file is created containing your photo (and its Exif). The file date is - quite accurately - reported as the date of export.
    However, the Photo Date doesn't change.
    The problem is that the Finder doesn't work with Exif.
    So, your photo has the correct date, and so does the file, but they are different things. To sort on the Photo date you'll need to use a photo app.

  • How to display the data according to the search criteria

    Hi,
    I want to display the order data in my application.
    I have certain search criteria in my application, like order number, date, etc.
    If I click the submit button without giving any criteria, all the data is displayed properly in the table.
    But if I want to display the data according to the order number, all the orders present are displayed, whereas it should display only the order for that particular order number.
    The code is like: onActionSearch()
    wdThis.wdComponentController().executeGetlist_Input();
    Can anyone tell me how to display the orders depending upon the search criteria?
    Ideas & suggestions are truly welcome.
    Thanks & Regards
    Neha Mahanty

    Hi Neha,
    Design your RFC in such a way that it returns the result according to your search criteria.
    If you are searching on "Order Number", pass the search criterion along with an indicator to the RFC (if the order number is not empty, pass 'X' as the indicator) so that it returns the result accordingly. The same applies to the date.
    Thanks and Regards,
    Jhansi Miryala
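    On the ABAP side, such an RFC could look roughly like the sketch below. The function name, its parameters and the use of VBAK are assumptions for illustration only, not an existing interface:
    FUNCTION Z_GET_ORDER_LIST.
      " Assumed interface: IMPORTING IV_ORDER TYPE VBELN OPTIONAL,
      "                    IV_ORDER_FLAG TYPE CHAR1 OPTIONAL ('X' = filter),
      "                    TABLES ET_ORDERS STRUCTURE VBAK.
      IF IV_ORDER_FLAG = 'X'.
        " Search criterion supplied: return only the requested order
        SELECT * FROM VBAK INTO TABLE ET_ORDERS
          WHERE VBELN = IV_ORDER.
      ELSE.
        " No criterion supplied: return the list (restrict further as needed)
        SELECT * FROM VBAK INTO TABLE ET_ORDERS UP TO 100 ROWS.
      ENDIF.
    ENDFUNCTION.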

  • How to display multi-channel image in the 'proxy'?

    There're many examples to show how to display composite channels in the 'proxy'. But I don't find any example to show how to display multi-channel image in the 'proxy'. I found that I can use PSPixelOverlay to display alpha channel data like this:
    int nSpotChannel = gChannelCount - 4;
    PSPixelOverlay* overlay = new PSPixelOverlay[nSpotChannel];
    for (int i = 0; i < nSpotChannel; i++) {
        if (i != (nSpotChannel - 1))
            overlay[i].next = overlay + i + 1;
        else
            overlay[i].next = NULL;
        overlay[i].data = gChannelData + (4 + i) * nPlaneBytes;
        overlay[i].rowBytes = gProxyRect.Width() * gDocDesc->depth / 8;
        overlay[i].colBytes = 1;
        overlay[i].r = 230;
        overlay[i].g = 161;
        overlay[i].b = 174;
        overlay[i].opacity = 255;
        overlay[i].overlayAlgorithm = kStandardAlphaOverlay;
    }
    pixels.pixelOverlays = overlay;
    Then, looking at the part marked in red, it triggers a new problem: how can the plug-in itself get the color value of the alpha channel? It seems that no channel color value info is in FilterRecord.
    If you have other solution, please tell me. Many thanks!

    This is what I've been doing - was just curious if there was a way to see a more cohesive image.
    If the individual EQ plugins are in fact the answer, is there any way to smooth how the Analyzer displays? The image I posted above, all of the tonal curves are very smooth. The analyzer tool shows a lot of peaks and valleys within the overall curve and it's hard to pinpoint each instrument's "sweet spot." Vocals for example are very hard to spot.
    - Morgan

  • How to get ONLY the data shown in the plot area (Chart)

    My chart history length is 120000. Often I don't need to save the whole
    buffer, just the data shown in the plot area.
    Richard Pettersen

    Hi Richard,
    you haven't said which version of LabVIEW you're using, so I've assumed 6.1, although this should be fine for older versions too.
    I put down a chart with a scrollbar, so I could go through the old data, and put an indicator attached to the x-scale->Range->minimum and x-scale->Range->maximum. These followed the displayed elements in the chart as I scrolled through the data. You can therefore link in the chart history (history data property of the chart) with a array subset to retrieve the appropriate portion.
    Be aware though. The minimum and maximum that the chart is showing on the x-scale may not be a part of the history data. E.g. if I set the history length to 1000 points, and produce on the first run 1000 points, then my minimum and maximum are at, say, 899 and 999. I then produce a second set of data, 1000 points long, and my min and max move to 1899 and 1999. As my history data is only 1000 long, I can scroll back to 1000 OK, but my min and max used as indexes don't relate anymore. The array of data is from 0 to 999, but my indexes can run from 1000 to 1999.
    So I have to add in a check that sees how much data is available before attempting to index, and then it's down to the flow of the program to try to work out where the data actually is (I'll leave that one to you - you'll need to keep track of the history data compared to the maximum, which is shown just after a point is added, though this then makes a mess of your data! It is possible to keep track of the new minimum when adding new data: the maximum (which will show once new data goes on) minus the history length gives the minimum that can be scrolled to - subtract this from all max's and min's to get the real offset into the data - the history length can be got from the history data and an array size sub .vi).
    Obviously there's no problem if you're clearing the graph everytime you either plot new data, or reach maximum capacity on the history data as the indexes return to zero.
    Hope that helps
    S.
    // it takes almost no time to rate an answer

  • I am receiving data through RS232 in LabVIEW and I have to store the data into a Word file only if there is a change in the data, and I have to scan the data continuously. How can I do that?

    I am receiving data through RS232 in LabVIEW and I have to store the data into a Word or text file only if there is a change in the data. I have to scan the data continuously. How can I do that? I was able to store the data into the text or Word file, but could not make it write only on a change. I am getting the data from RS232 in terms of 0 or 1, and I have to print it only if there is a change in the data from 0 to 1. If I use an if-loop, the data gets printed every time a 0 or 1 is present. I don't know how to do this program; please help me if anybody knows the answer.

    I have attached the VI. Here it receives the data from RS232 as a string and converts it into binary, and it is indicated on LEDs as well; normally, if a 1 comes, the LEDs will be off. Suppose a 0 comes, the corresponding data status is written into the text file. But here the problem is that the same data gets printed many times, so I have to make it print only once when there is a transition from 1 to 0. How do I do that? I have been working on this for a few weeks; please reply if you know the answer.
    Thanking you.
    Attachments:
    MOTORTESTJIG.vi ‏729 KB

  • How to add the date field in the dso and info cube

    Hi all,
    I am new to BI 7. In the earlier version we had a button to add the date field, but in BI 7 there is no such option, so can anybody tell me how to add the date field in the data targets?
    Thanks & Regards
    KK

    My problem is solved.
    KK

  • How can I import data in to the digital word generator in Multisim?

    How can I import data in to the digital word generator in Multisim?
    I just received this comment from a friend, a RADAR engineer, who has just downloaded Multisim.  He has been using HP/Agilent software.  He has a workaround using a piecewise linear voltage waveform with data imported from Excel, but this is not really a good solution.  It would also be helpful to import data from Mathcad or equivalent.
    "I thought I was about to be impressed with MultiSim but it ended only in disappointment. There is a word generator in the simulation instrument panel which can drive the DAC with a waveform and it can have thousands of lines of values. I opened Excel, wrote the formula to generate the time and voltage points for a chirp, converted to DAC values in Hex and then went back to the word generator in MultiSim to load the values only to find that you have to enter each value manually. It doesn’t even allow you to paste in a list of values from a text file. I’m not going to type 5000 values by hand. If you get the chance to give feedback to National Instruments please ask them if the paste option can be added to the word generator. MultiSim is useful in many regards, but in this case, it left me with the impression that it is considerably limited in capability compared to what I’m used to."

    Hi,
    You can load your data automatically in the Multisim word generator. Follow these steps:
    - Save your data file (in Excel .xlsx or .csv format) on your computer
    - Change the extension of the file to ".dp"
    - Double-click the word generator in Multisim and click on Set...
    - In the Settings dialog box, click on Load and then Accept
    - This will prompt you to select the .dp file you have on your computer, select it and you're good to go
    However, in Multisim you have the option of creating your own custom simulation analysis and instrument.
    I will try creating the instrument and send it back to you but it might take some time.
    Multisim and LabVIEW are very powerful in test automation, with the custom instruments you create for Multisim you don't need to export your data file into excel from LabVIEW (or MathCAD or other tools) and then reload it into Multisim. The test procedure is automated instead.
    Please check this reference design about automated simulation
    http://zone.ni.com/devzone/cda/tut/p/id/7825
    Here is how you can create your own custom measurement tool in Multisim and LabVIEW, but as I mentioned, I will create the word generator and come back to you anyways
    http://zone.ni.com/devzone/cda/tut/p/id/5635
    Let me know if you have any questions.
    Mahmoud W
    National Instruments

  • How to display properties of dimension on the input form and report in bpc

    Dear Expert,
    Please tell me how to display properties of dimension on the input form and report in bpc. I can only display dimension number.
    thanks so much
    hungth

    Hi Hungth,
    If you are using BPC 10 with the EPM add-in client, you can use the function:
    EPMDimensionProperty ()
    This function retrieves the properties of a specified dimension in a specified cell range.
    Thanks, Safa

  • How to select the data efficiently from the table

    Hi everyone,
    I need some help in selecting data from the FAGLFLEXA table. I have to select many amounts from different groups of G/L accounts
    (the groups are predefined here and each contains a set of G/L account numbers).
    If I select separately for each group it will be a performance issue. To avoid that, what should I do? Can anyone suggest a method or a sample query so that I can perform the task efficiently?

    Hi,
    1. Select once and keep the data in an internal table.
    2. Avoid SELECT inside LOOP ... ENDLOOP.
    3. Try to use FOR ALL ENTRIES.
    Check the details below.
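    A minimal sketch of that approach for the FAGLFLEXA case (the group table, the variable names and the use of the amount field HSL are illustrative assumptions):
    TYPES: BEGIN OF TY_GRP_ACCT,
             GRPID(10) TYPE C,                " your own group key
             RACCT     TYPE FAGLFLEXA-RACCT,  " G/L account
           END OF TY_GRP_ACCT,
           BEGIN OF TY_AMOUNT,
             RACCT TYPE FAGLFLEXA-RACCT,
             HSL   TYPE FAGLFLEXA-HSL,
           END OF TY_AMOUNT.
    DATA: LT_GRP_ACCT TYPE STANDARD TABLE OF TY_GRP_ACCT,
          LT_AMOUNTS  TYPE STANDARD TABLE OF TY_AMOUNT.
    " Fill LT_GRP_ACCT once with all (group, account) pairs, then select once:
    IF LT_GRP_ACCT IS NOT INITIAL.          " mandatory check before FOR ALL ENTRIES
      SELECT RACCT HSL
        FROM FAGLFLEXA
        INTO TABLE LT_AMOUNTS
        FOR ALL ENTRIES IN LT_GRP_ACCT
        WHERE RACCT = LT_GRP_ACCT-RACCT.
    ENDIF.
    " Split the single result set per group in ABAP afterwards
    " (e.g. LOOP AT LT_GRP_ACCT and COLLECT the matching amounts per GRPID).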
    Hi Praveen,
    Performance Notes
    1.Keep the Result Set Small
    You should aim to keep the result set small. This reduces both the amount of memory used in the database system and the network load when transferring data to the application server. To reduce the size of your result sets, use the WHERE and HAVING clauses.
    Using the WHERE Clause
    Whenever you access a database table, you should use a WHERE clause in the corresponding Open SQL statement. Even if a program containing a SELECT statement with no WHERE clause performs well in tests, it may slow down rapidly in your production system, where the data volume increases daily. You should only dispense with the WHERE clause in exceptional cases where you really need the entire contents of the database table every time the statement is executed.
    When you use the WHERE clause, the database system optimizes the access and only transfers the required data. You should never transfer unwanted data to the application server and then filter it using ABAP statements.
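    For example, a short sketch of the difference, using a table from this thread (illustrative):
    DATA: LT_KNA1 TYPE STANDARD TABLE OF KNA1.
    " Good: the database filters and only the required rows are transferred
    SELECT * FROM KNA1 INTO TABLE LT_KNA1
      WHERE LAND1 = 'IN'.
    " Avoid: selecting the whole table and then deleting unwanted rows in ABAP,
    " e.g. DELETE LT_KNA1 WHERE LAND1 <> 'IN'.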
    Using the HAVING Clause
    After selecting the required lines in the WHERE clause, the system then processes the GROUP BY clause, if one exists, and summarizes the database lines selected. The HAVING clause allows you to restrict the grouped lines, and in particular, the aggregate expressions, by applying further conditions.
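    For example, restricting grouped lines with HAVING (illustrative values):
    DATA: LV_KUNNR TYPE BSID-KUNNR,
          LV_SUM   TYPE BSID-DMBTR.
    SELECT KUNNR SUM( DMBTR ) FROM BSID
      INTO (LV_KUNNR, LV_SUM)
      GROUP BY KUNNR
      HAVING SUM( DMBTR ) > 10000.
      WRITE: / LV_KUNNR, LV_SUM.
    ENDSELECT.
    " Only customers whose summed amount exceeds 10000 are returned by the database.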
    Effect
    If you use the WHERE and HAVING clauses correctly:
    • There are no more physical I/Os in the database than necessary
    • No unwanted data is stored in the database cache (it could otherwise displace data that is actually required)
    • The CPU usage of the database host is minimized
    • The network load is reduced, since only the data that is required by the application is transferred to the application server.
    Minimize the Amount of Data Transferred
    Data is transferred between the database system and the application server in blocks. Each block is up to 32 KB in size (the precise size depends on your network communication hardware). Administration information is transported in the blocks as well as the data.
    To minimize the network load, you should transfer as few blocks as possible. Open SQL allows you to do this as follows:
    Restrict the Number of Lines
    If you only want to read a certain number of lines in a SELECT statement, use the UP TO <n> ROWS addition in the FROM clause. This tells the database system only to transfer <n> lines back to the application server. This is more efficient than transferring more lines than necessary back to the application server and then discarding them in your ABAP program.
    If you expect your WHERE clause to return a large number of duplicate entries, you can use the DISTINCT addition in the SELECT clause.
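    For example (illustrative):
    DATA: LT_BSID TYPE STANDARD TABLE OF BSID.
    " Transfer at most 100 rows instead of discarding surplus rows in ABAP
    SELECT * FROM BSID INTO TABLE LT_BSID
      UP TO 100 ROWS
      WHERE BUKRS = '1000'.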
    Restrict the Number of Columns
    You should only read the columns from a database table that you actually need in the program. To do this, list the columns in the SELECT clause. Note here that the INTO CORRESPONDING FIELDS addition in the INTO clause is only efficient with large volumes of data, otherwise the runtime required to compare the names is too great. For small amounts of data, use a list of variables in the INTO clause.
    Do not use * to select all columns unless you really need them. However, if you list individual columns, you may have to adjust the program if the structure of the database table is changed in the ABAP Dictionary. If you specify the database table dynamically, you must always read all of its columns.
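    For example, reading only the required columns into a matching structure (illustrative):
    TYPES: BEGIN OF TY_CUST,
             KUNNR TYPE KNA1-KUNNR,
             NAME1 TYPE KNA1-NAME1,
           END OF TY_CUST.
    DATA: LT_CUST TYPE STANDARD TABLE OF TY_CUST.
    SELECT KUNNR NAME1 FROM KNA1
      INTO TABLE LT_CUST            " field order matches the target structure
      WHERE LAND1 = 'IN'.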
    Use Aggregate Functions
    If you only want to use data for calculations, it is often more efficient to use the aggregate functions of the SELECT clause than to read the individual entries from the database and perform the calculations in the ABAP program.
    Aggregate functions allow you to find out the number of values and find the sum, average, minimum, and maximum values.
    Following an aggregate expression, only its result is transferred from the database.
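    For example, letting the database do the arithmetic (illustrative customer number):
    DATA: LV_COUNT TYPE I,
          LV_TOTAL TYPE BSID-DMBTR.
    SELECT COUNT( * ) SUM( DMBTR ) FROM BSID
      INTO (LV_COUNT, LV_TOTAL)
      WHERE KUNNR = '0000001000'.
    " Only the two aggregated values are transferred, not the line items.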
    Data Transfer when Changing Table Lines
    When you use the UPDATE statement to change lines in the table, you should use the WHERE clause to specify the relevant lines, and then SET statements to change only the required columns.
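    For example (illustrative values, shown only for the syntax):
    " Change only the payment terms of the selected line,
    " instead of overwriting whole rows from a work area.
    UPDATE KNB1 SET ZTERM = '0001'
      WHERE KUNNR = '0000001000' AND BUKRS = '1000'.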
    When you use a work area to overwrite table lines, too much data is often transferred. Furthermore, this method requires an extra SELECT statement to fill the work area.
    Minimize the Number of Data Transfers
    In every Open SQL statement, data is transferred between the application server and the database system. Furthermore, the database system has to construct or reopen the appropriate administration data for each database access. You can therefore minimize the load on the network and the database system by minimizing the number of times you access the database.
    Multiple Operations Instead of Single Operations
    When you change data using INSERT, UPDATE, and DELETE, use internal tables instead of single entries. If you read data using SELECT, it is worth using multiple operations if you want to process the data more than once; otherwise, a simple select loop is more efficient.
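    For example, one array operation instead of many single updates (illustrative):
    DATA: LT_KNB1 TYPE STANDARD TABLE OF KNB1.
    " ... fill or modify LT_KNB1 in ABAP ...
    UPDATE KNB1 FROM TABLE LT_KNB1.   " one database call for all lines
    " Likewise: INSERT dbtab FROM TABLE itab, DELETE dbtab FROM TABLE itab.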
    Avoid Repeated Access
    As a rule you should read a given set of data once only in your program, and using a single access. Avoid accessing the same data more than once (for example, SELECT before an UPDATE).
    Avoid Nested SELECT Loops
    A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. You should therefore only use nested SELECT loops if the selection in the outer loop contains very few lines.
    However, using combinations of data from different database tables is more the rule than the exception in the relational data model. You can use the following techniques to avoid nested SELECT statements:
    ABAP Dictionary Views
    You can define joins between database tables statically and systemwide as views in the ABAP Dictionary. ABAP Dictionary views can be used by all ABAP programs. One of their advantages is that fields that are common to both tables (join fields) are only transferred once from the database to the application server.
    Views in the ABAP Dictionary are implemented as inner joins. If the inner table contains no lines that correspond to lines in the outer table, no data is transferred. This is not always the desired result. For example, when you read data from a text table, you want to include lines in the selection even if the corresponding text does not exist in the required language. If you want to include all of the data from the outer table, you can program a left outer join in ABAP.
    The links between the tables in the view are created and optimized by the database system. Like database tables, you can buffer views on the application server. The same buffering rules apply to views as to tables. In other words, it is most appropriate for views that you use mostly to read data. This reduces the network load and the amount of physical I/O in the database.
    Joins in the FROM Clause
    You can read data from more than one database table in a single SELECT statement by using inner or left outer joins in the FROM clause.
    The disadvantage of using joins is that redundant data is read from the hierarchically-superior table if there is a 1:N relationship between the outer and inner tables. This can considerably increase the amount of data transferred from the database to the application server. Therefore, when you program a join, you should ensure that the SELECT clause contains a list of only the columns that you really need. Furthermore, joins bypass the table buffer and read directly from the database. For this reason, you should use an ABAP Dictionary view instead of a join if you only want to read the data.
    The runtime of a join statement is heavily dependent on the database optimizer, especially when it contains more than two database tables. However, joins are nearly always quicker than using nested SELECT statements.
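    For example, an inner join on two tables from this thread instead of a nested SELECT (illustrative):
    TYPES: BEGIN OF TY_JOIN,
             KUNNR TYPE KNA1-KUNNR,
             NAME1 TYPE KNA1-NAME1,
             BZIRK TYPE KNVV-BZIRK,
           END OF TY_JOIN.
    DATA: LT_JOIN TYPE STANDARD TABLE OF TY_JOIN.
    SELECT K~KUNNR K~NAME1 V~BZIRK
      FROM KNA1 AS K INNER JOIN KNVV AS V
        ON K~KUNNR = V~KUNNR
      INTO TABLE LT_JOIN
      WHERE K~LAND1 = 'IN'.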
    Subqueries in the WHERE and HAVING Clauses
    Another way of accessing more than one database table in the same Open SQL statement is to use subqueries in the WHERE or HAVING clause. The data from a subquery is not transferred to the application server. Instead, it is used to evaluate conditions in the database system. This is a simple and effective way of programming complex database operations.
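    For example, a correlated subquery in the WHERE clause (illustrative):
    DATA: LT_KNA1 TYPE STANDARD TABLE OF KNA1.
    " Customers that have at least one open item in BSID
    SELECT * FROM KNA1 INTO TABLE LT_KNA1
      WHERE EXISTS ( SELECT * FROM BSID
                       WHERE KUNNR = KNA1~KUNNR ).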
    Using Internal Tables
    It is also possible to avoid nested SELECT loops by placing the selection from the outer loop in an internal table and then running the inner selection once only using the FOR ALL ENTRIES addition. This technique stems from the time before joins were allowed in the FROM clause. On the other hand, it does prevent redundant data from being transferred from the database.
    Using a Cursor to Read Data
    A further method is to decouple the INTO clause from the SELECT statement by opening a cursor using OPEN CURSOR and reading data line by line using FETCH NEXT CURSOR. You must open a new cursor for each nested loop. In this case, you must ensure yourself that the correct lines are read from the database tables in the correct order. This usually requires a foreign key relationship between the database tables, and that they are sorted by the foreign key.
    Minimize the Search Overhead
    You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
    Database Indexes
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
    If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
    You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not. However, you should not create secondary indexes to cover all possible combinations of fields.
    Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
    If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column’s selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
    Formulating Conditions for Indexes
    You should bear in mind the following when formulating conditions for the WHERE and HAVING clauses so that the system can use a database index and does not have to use a full table scan.
    Check for Equality and Link Using AND
    The database index search is particularly efficient if you check all index fields for equality (= or EQ) and link the expressions using AND.
    Use Positive Conditions
    The database system only supports queries that describe the result in positive terms, for example, EQ or LIKE. It does not support negative expressions like NE or NOT LIKE.
    If possible, avoid using the NOT operator in the WHERE clause, because it is not supported by database indexes; invert the logical expression instead.
    Using OR
    The optimizer usually stops working when an OR expression occurs in the condition. This means that the columns checked using OR are not included in the index search. An exception to this are OR expressions at the outside of conditions. You should try to reformulate conditions that apply OR expressions to columns relevant to the index, for example, into an IN condition.
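    For example, reformulating OR into IN so the index can still be used (illustrative document types taken from this thread):
    DATA: LT_BSID TYPE STANDARD TABLE OF BSID.
    " Instead of: WHERE BLART = 'DZ' OR BLART = 'HR' OR BLART = 'P1'
    SELECT * FROM BSID INTO TABLE LT_BSID
      WHERE KUNNR = '0000001000'
        AND BLART IN ('DZ', 'HR', 'P1').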
    Using Part of the Index
    If you construct an index from several columns, the system can still use it even if you only specify a few of the columns in a condition. However, in this case, the sequence of the columns in the index is important. A column can only be used in the index search if all of the columns before it in the index definition have also been specified in the condition.
    Checking for Null Values
    The IS NULL condition can cause problems with indexes. Some database systems do not store null values in the index structure. Consequently, this field cannot be used in the index.
    Avoid Complex Conditions
    Avoid complex conditions, since the statements have to be broken down into their individual components by the database system.
    Reduce the Database Load
    Unlike application servers and presentation servers, there is only one database server in your system. You should therefore aim to reduce the database load as much as possible. You can use the following methods:
    Buffer Tables on the Application Server
    You can considerably reduce the time required to access data by buffering it in the application server table buffer. Reading a single entry from table T001 can take between 8 and 600 milliseconds, while reading it from the table buffer takes 0.2 - 1 milliseconds.
    Whether a table can be buffered or not depends its technical attributes in the ABAP Dictionary. There are three buffering types:
    • Resident buffering (100%) The first time the table is accessed, its entire contents are loaded in the table buffer.
    • Generic buffering In this case, you need to specify a generic key (some of the key fields) in the technical settings of the table in the ABAP Dictionary. The table contents are then divided into generic areas. When you access data with one of the generic keys, the whole generic area is loaded into the table buffer. Client-specific tables are often buffered generically by client.
    • Partial buffering (single entry) Only single entries are read from the database and stored in the table buffer.
    When you read from buffered tables, the following happens:
    1. An ABAP program requests data from a buffered table.
    2. The ABAP processor interprets the Open SQL statement. If the table is defined as a buffered table in the ABAP Dictionary, the ABAP processor checks in the local buffer on the application server to see if the table (or part of it) has already been buffered.
    3. If the table has not yet been buffered, the request is passed on to the database. If the data exists in the buffer, it is sent to the program.
    4. The database server passes the data to the application server, which places it in the table buffer.
    5. The data is passed to the program.
    When you change a buffered table, the following happens:
    1. The database table is changed and the buffer on the application server is updated. The database interface logs the update statement in the table DDLOG. If the system has more than one application server, the buffer on the other servers is not updated at once.
    2. All application servers periodically read the contents of table DDLOG, and delete the corresponding contents from their buffers where necessary. The granularity depends on the buffering type. The table buffers in a distributed system are generally synchronized every 60 seconds (parameter: rsdisp/bufreftime).
    3. Within this period, users on non-synchronized application servers will read old data. The data is not recognized as obsolete until the next buffer synchronization. The next time it is accessed, it is re-read from the database.
    You should buffer the following types of tables:
    • Tables that are read very frequently
    • Tables that are changed very infrequently
    • Relatively small tables (few lines, few columns, or short columns)
    • Tables where delayed update is acceptable.
    Once you have buffered a table, take care not to use any Open SQL statements that bypass the buffer.
    The SELECT statement bypasses the buffer when you use any of the following:
    • The BYPASSING BUFFER addition in the FROM clause
    • The DISTINCT addition in the SELECT clause
    • Aggregate expressions in the SELECT clause
    • Joins in the FROM clause
    • The IS NULL condition in the WHERE clause
    • Subqueries in the WHERE clause
    • The ORDER BY clause
    • The GROUP BY clause
    • The FOR UPDATE addition
    Furthermore, all Native SQL statements bypass the buffer.
    Avoid Reading Data Repeatedly
    If you avoid reading the same data repeatedly, you both reduce the number of database accesses and reduce the load on the database. Furthermore, a "dirty read" may occur with database tables other than Oracle. This means that the second time you read data from a database table, it may be different from the data read the first time. To ensure that the data in your program is consistent, you should read it once only and then store it in an internal table.
    Sort Data in Your ABAP Programs
    The ORDER BY clause in the SELECT statement is not necessarily optimized by the database system or executed with the correct index. This can result in increased runtime costs. You should only use ORDER BY if the database sort uses the same index with which the table is read. To find out which index the system uses, use SQL Trace in the ABAP Workbench Performance Trace. If the indexes are not the same, it is more efficient to read the data into an internal table or extract and sort it in the ABAP program using the SORT statement.
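    For example (illustrative):
    DATA: LT_BSID TYPE STANDARD TABLE OF BSID.
    SELECT * FROM BSID INTO TABLE LT_BSID
      WHERE KUNNR = '0000001000'.   " no ORDER BY sent to the database
    SORT LT_BSID BY BUDAT BLART.    " sort in the ABAP program instead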
    Use Logical Databases
    SAP supplies logical databases for all applications. A logical database is an ABAP program that decouples Open SQL statements from application programs. Logical databases are optimized for the best possible database performance. However, it is important that you use the right logical database: the hierarchy of the data you want to read must reflect the structure of the logical database, otherwise it can have a negative effect on performance. For example, if you want to read data from a table right at the bottom of the hierarchy of the logical database, it has to read at least the key fields of all tables above it in the hierarchy. In this case, it is more efficient to use a SELECT statement.
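    As a sketch, a report that reads customer data through a logical database could look like the lines below. The logical database itself (for example DDF for customer master data; check transaction SE36) is assigned in the program attributes, not in the code, and the node name used here is an assumption:
    REPORT zcustomer_via_ldb.
    * NODES declares the work areas that the logical database supplies.
    NODES: kna1.
    * The GET event is triggered by the logical database for every KNA1
    * record that matches the selection screen it generates.
    GET kna1.
      WRITE: / kna1-kunnr, kna1-name1.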
    Work Processes
    Work processes execute the individual dialog steps in R/3 applications. The next two sections describe firstly the structure of a work process, and secondly the different types of work process in the R/3 System.
    Structure of a Work Process
    Work processes execute the dialog steps of application programs. They are components of an application server. The following diagram shows the components of a work process:
    Each work process contains two software processors and a database interface.
    Screen Processor
    In R/3 application programming, there is a difference between user interaction and processing logic. From a programming point of view, user interaction is controlled by screens. As well as the actual input mask, a screen also consists of flow logic. The screen flow logic controls a large part of the user interaction. The R/3 Basis system contains a special language for programming screen flow logic. The screen processor executes the screen flow logic. Via the dispatcher, it takes over the responsibility for communication between the work process and the SAPgui, calls modules in the flow logic, and ensures that the field contents are transferred from the screen to the flow logic.
    ABAP Processor
    The actual processing logic of an application program is written in ABAP - SAP’s own programming language. The ABAP processor executes the processing logic of the application program, and communicates with the database interface. The screen processor tells the ABAP processor which module of the screen flow logic should be processed next. The following diagram illustrates the interaction between the screen and the ABAP processors when an application program is running.
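    As an illustration, a dialog screen 0100 could be wired up as sketched below; the screen number, GUI status, and module names are only examples:
    * Flow logic of screen 0100 (interpreted by the screen processor):
    *   PROCESS BEFORE OUTPUT.
    *     MODULE status_0100.
    *   PROCESS AFTER INPUT.
    *     MODULE user_command_0100.
    * Corresponding modules in the ABAP program (executed by the ABAP processor):
    MODULE status_0100 OUTPUT.
      SET PF-STATUS 'MAIN'.
    ENDMODULE.
    MODULE user_command_0100 INPUT.
      CASE sy-ucomm.
        WHEN 'BACK'.
          LEAVE TO SCREEN 0.
      ENDCASE.
    ENDMODULE.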
    Database Interface
    The database interface provides the following services:
    • Establishing and terminating connections between the work process and the database.
    • Access to database tables
    • Access to R/3 Repository objects (ABAP programs, screens and so on)
    • Access to catalog information (ABAP Dictionary)
    • Controlling transactions (commit and rollback handling)
    • Table buffer administration on the application server.
    The following diagram shows the individual components of the database interface:
    The diagram shows that there are two different ways of accessing databases: Open SQL and Native SQL.
    Open SQL statements are a subset of Standard SQL that is fully integrated in ABAP. They allow you to access data irrespective of the database system that the R/3 installation is using. Open SQL consists of the Data Manipulation Language (DML) part of Standard SQL; in other words, it allows you to read (SELECT) and change (INSERT, UPDATE, DELETE) data. The tasks of the Data Definition Language (DDL) and Data Control Language (DCL) parts of Standard SQL are performed in the R/3 System by the ABAP Dictionary and the authorization system. These provide a unified range of functions, irrespective of database, and also contain functions beyond those offered by the various database systems.
    Open SQL also goes beyond Standard SQL to provide statements that, in conjunction with other ABAP constructions, can simplify or speed up database access. It also allows you to buffer certain tables on the application server, saving excessive database access. In this case, the database interface is responsible for comparing the buffer with the database. Buffers are partly stored in the working memory of the current work process, and partly in the shared memory for all work processes on an application server. Where an R/3 System is distributed across more than one application server, the data in the various buffers is synchronized at set intervals by the buffer management. When buffering the database, you must remember that data in the buffer is not always up to date. For this reason, you should only use the buffer for data which does not often change.
    Native SQL is only loosely integrated into ABAP, and allows access to all of the functions contained in the programming interface of the respective database system. Unlike Open SQL statements, Native SQL statements are not checked and converted, but instead are sent directly to the database system. Programs that use Native SQL are specific to the database system for which they were written. R/3 applications contain as little Native SQL as possible. In fact, it is only used in a few Basis components (for example, to create or change table definitions in the ABAP Dictionary).
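    The difference is easiest to see side by side; a minimal sketch (the customer number is a placeholder):
    DATA: lv_kunnr TYPE kna1-kunnr VALUE '0000001000',
          lv_name1 TYPE kna1-name1.
    * Open SQL: database-independent, checked by the syntax check,
    * processed by the database interface (client handling is automatic).
    SELECT SINGLE name1 FROM kna1
      INTO lv_name1
      WHERE kunnr = lv_kunnr.
    * Native SQL: passed to the database unchanged; the syntax is database-specific
    * and the client field MANDT has to be handled explicitly.
    EXEC SQL.
      SELECT name1 INTO :lv_name1 FROM kna1
        WHERE mandt = :sy-mandt AND kunnr = :lv_kunnr
    ENDEXEC.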
    The database-dependent layer in the diagram serves to hide the differences between database systems from the rest of the database interface. You choose the appropriate layer when you install the Basis system. Thanks to the standardization of SQL, the differences in the syntax of statements are very slight. However, the semantics and behavior of the statements have not been fully standardized, and the differences in these areas can be greater. When you use Native SQL, the function of the database-dependent layer is minimal.
    Types of Work Process
    Although all work processes contain the components described above, they can still be divided into different types. The type of a work process determines the kind of task for which it is responsible in the application server. It does not specify a particular set of technical attributes. The individual tasks are distributed to the work processes by the dispatcher.
    Before you start your R/3 System, you determine how many work processes it will have, and what their types will be. The dispatcher starts the work processes and only assigns them tasks that correspond to their type. This means that you can distribute work process types to optimize the use of the resources on your application servers.
    The following diagram shows again the structure of an application server, but this time, includes the various possible work process types:
    The various work processes are described briefly below. Other parts of this documentation describe the individual components of the application server and the R/3 System in more detail.
    Dialog Work Process
    Dialog work processes deal with requests from an active user to execute dialog steps.
    Update Work Process
    Update work processes execute database update requests. Update requests are part of an SAP LUW that bundles the database operations resulting from the dialog into a database LUW for processing in the background.
    Background Work Process
    Background work processes process programs that can be executed without user interaction (background jobs).
    Enqueue Work Process
    The enqueue work process administers a lock table in the shared memory area. The lock table contains the logical database locks for the R/3 System and is an important part of the SAP LUW concept. In an R/3 System, you may only have one lock table. You may therefore also only have one application server with enqueue work processes.
    Spool Work Process
    The spool work process passes sequential datasets to a printer or to optical archiving. Each application server may contain several spool work processes.
    The services offered by an application server are determined by the types of its work processes. One application server may, of course, have more than one function. For example, it may be both a dialog server and the enqueue server, if it has several dialog work processes and an enqueue work process.
    You can use the system administration functions to switch a work process between dialog and background modes while the system is still running. This allows you, for example, to switch an R/3 System between day and night operation, where you have more dialog than background work processes during the day, and the other way around during the night.
    ABAP Application Server
    R/3 programs run on application servers. They are an important component of the R/3 System. The following sections describe application servers in more detail.
    Structure of an ABAP Application Server
    The application layer of an R/3 System is made up of the application servers and the message server. Application programs in an R/3 System are run on application servers. The application servers communicate with the presentation components, the database, and also with each other, using the message server.
    The following diagram shows the structure of an application server:
    The individual components are:
    Work Processes
    An application server contains work processes, which are components that can run an application (that is, one dialog step each). Each work process is linked to a memory area containing the context of the application being run. The context contains the current data for the application program, which needs to be available in each dialog step. Further information about the different types of work process is contained later on in this documentation.
    Dispatcher
    Each application server contains a dispatcher. The dispatcher is the link between the work processes and the users logged onto the application server. Its task is to receive requests for dialog steps from the SAP GUI and direct them to a free work process. In the same way, it directs screen output resulting from the dialog step back to the appropriate user.
    Gateway
    Each application server contains a gateway. This is the interface for the R/3 communication protocols (RFC, CPI/C). It can communicate with other application servers in the same R/3 System, with other R/3 Systems, with R/2 Systems, or with non-SAP systems.
    The application server structure as described here aids the performance and scalability of the entire R/3 System. The fixed number of work processes and dispatching of dialog steps leads to optimal memory use, since it means that certain components and the memory areas of a work process are application-independent and reusable. The fact that the individual work processes work independently makes them suitable for a multi-processor architecture. The methods used in the dispatcher to distribute tasks to work processes are discussed more closely in the section Dispatching Dialog Steps.
    Shared Memory
    All of the work processes on an application server use a common main memory area called shared memory to save contexts or to buffer constant data locally.
    The resources that all work processes use (such as programs and table contents) are contained in shared memory. Memory management in the R/3 System ensures that the work processes always address the correct context, that is the data relevant to the current state of the program that is running. A mapping process projects the required context for a dialog step from shared memory into the address of the relevant work process. This reduces the actual copying to a minimum.
    Local buffering of data in the shared memory of the application server reduces the number of database reads required. This reduces access times for application programs considerably. For optimal use of the buffer, you can concentrate individual applications (financial accounting, logistics, human resources) into separate application server groups.
    Database Connection
    When you start up an R/3 System, each application server registers its work processes with the database layer, and receives a single dedicated channel for each. While the system is running, each work process is a user (client) of the database system (server). You cannot change the work process registration while the system is running. Neither can you reassign a database channel from one work process to another. For this reason, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. This has important consequences for the programming model explained below.
    Dispatching Dialog Steps
    The number of users logged onto an application server is often many times greater than the number of available work processes, and it is not restricted by the R/3 System architecture. Furthermore, each user can run several applications at once. The dispatcher therefore has the important task of distributing all dialog steps among the work processes on the application server.
    The following diagram is an example of how this might happen:
    1. The dispatcher receives the request to execute a dialog step from user 1 and directs it to work process 1, which happens to be free. The work process addresses the context of the application program (in shared memory) and executes the dialog step. It then becomes free again.
    2. The dispatcher receives the request to execute a dialog step from user 2 and directs it to work process 1, which is now free again. The work process executes the dialog step as in step 1.
    3. While work process 1 is still working, the dispatcher receives a further request from user 1 and directs it to work process 2, which is free.
    4. After work processes 1 and 2 have finished processing their dialog steps, the dispatcher receives another request from user 1 and directs it to work process 1, which is free again.
    5. While work process 1 is still working, the dispatcher receives a further request from user 2 and directs it to work process 2, which is free.
    From this example, we can see that:
    • A dialog step from a program is assigned to a single work process for execution.
    • The individual dialog steps of a program can be executed on different work processes, and the program context must be addressed for each new work process.
    • A work process can execute dialog steps of different programs from different users.
    The example does not show that the dispatcher tries to distribute the requests to the work processes such that the same work process is used as often as possible for the successive dialog steps in an application. This is useful, since it saves the program context having to be addressed each time a dialog step is executed.
    Dispatching and the Programming Model
    The separation of application and presentation layer made it necessary to split up application programs into dialog steps. This, and the fact that dialog steps are dispatched to individual work processes, has had important consequences for the programming model.
    As mentioned above, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. The contents of the database must be consistent at its beginning and end. The beginning and end of a database LUW are defined by a commit command to the database system (database commit). During a database LUW, that is, between two database commits, the database system itself ensures consistency within the database. In other words, it takes over tasks such as locking database entries while they are being edited, or restoring the old data (rollback) if a step terminates in an error.
    A typical SAP application program extends over several screens and the corresponding dialog steps. The user requests database changes on the individual screens that should lead to the database being consistent once the screens have all been processed. However, the individual dialog steps run on different work processes, and a single work process can process dialog steps from other applications. It is clear that two or more independent applications whose dialog steps happen to be processed on the same work process cannot be allowed to work with the same database LUW.
    Consequently, a work process must open a separate database LUW for each dialog step. The work process sends a commit command (database commit) to the database at the end of each dialog step in which it makes database changes. These commit commands are called implicit database commits, since they are not explicitly written into the application program.
    These implicit database commits mean that a database LUW can be kept open for a maximum of one dialog step. This leads to a considerable reduction in database load, serialization, and deadlocks, and enables a large number of users to use the same system.
    However, the question now arises of how this method (1 dialog step = 1 database LUW) can be reconciled with the demand to make commits and rollbacks dependent on the logical flow of the application program instead of the technical distribution of dialog steps. Database update requests that depend on one another form logical units in the program that extend over more than one dialog step. The database changes associated with these logical units must be executed together and must also be able to be undone together.
    The SAP programming model contains a series of bundling techniques that allow you to group database updates together in logical units. The section of an R/3 application program that bundles a set of logically associated database operations is called an SAP LUW. Unlike a database LUW, an SAP LUW includes all of the dialog steps in a logical unit, including the database update.
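    One of these bundling techniques is to register update function modules during the dialog steps and trigger them together with COMMIT WORK. A minimal sketch follows; the function module Z_UPDATE_CUSTOMER is hypothetical and would have to be created as an update module:
    * Dialog step: only registers the update request, no database change yet.
    CALL FUNCTION 'Z_UPDATE_CUSTOMER' IN UPDATE TASK
      EXPORTING
        is_kna1 = gs_kna1.
    * ... further dialog steps can register more update requests ...
    * End of the SAP LUW: all registered requests are executed together
    * by an update work process in one database LUW.
    COMMIT WORK.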
    Happy Reading...
    shibu

  • How to change the data type in the table ESLL for the field USERF2_NUM ?

    Hello Friends,
    I have a requirement in which one of the changes is to convert the data type of the field 'USERF2_NUM' in the table 'ESLL' from 'QUAN' to 'CHAR'.
    How do I do it if I have access to change it? I think I should also check the impact of the change once it is done.
    Kindly tell me, as my requirement starts with this small change.
    Regards,
    Rajesh Kumar

    Thanks for the reply Sowmya.
    I would like to know 2 things.
    1. Is it OK to change the data type of the field 'USERF2_NUM', which is in the table ESLL, from QUAN to CHAR?
    2. The table ESLL already has entries. If we change the data type from QUAN to CHAR, what is the effect on the existing entries of the table?
    Kindly reply me back.
    Thanks & Regards,
    Rajesh Kumar

  • How to provide the data directly for the report using web service

    Hi all,
    I'm trying to execute a report from the web service API (using BIP version 10.1.3.4) and want to provide the report with a pre-fetched data set. According to the Dev Guide, I should use the element /ReportRequest/reportData for this, but I can't find a proper example illustrating this use case. The problem is that I don't know how to put my data set into the element. My data set is XML (text), while the data type of the reportData element is base64Binary.
    I've tried something like the following, but without any success:
    ...<reportData>
           <ns1:rowset>
               <ns1:row>
                   <ns1:emp>
                      <ns1:name>
                   </ns1:emp>
               </ns1:row>
           </ns1:rowset>
       </reportData>
    ...Any help is highly appreciated.
    Thanks in advance,
    H

    Hello,
    Thanks for your answer, the situation is a little different from what you describe.
    We call BI Publisher via a web service (PublicReportService), using the ReportRequest operation.
    The report we call contains a query to a database.
    When we do not include XML in the reportData field, the report data is obtained from the database and the report result is returned via the web service.
    [screenshot: report output generated from the database query]
    When, however, we include XML data in the reportData field, then we see that ONLY the XML data in the reportData field is used, and NOT data from the Database.
    [screenshot: report output generated only from the XML supplied in reportData]
    Is there a possibility to combine the two?
    Regards Léon

  • How to Find the Data Type of the Field In a Database Table

    Hi Experts,
    I'm currently working on a program which needs to find out the data type of a given field in a database table. In addition to accessing DD03L directly, is there any other workaround, such as a function module, to help me achieve this? It would be helpful if a demo example could be provided.
    Thanks a lot.

    Hi,
    Use this..
    DATA: dobj TYPE kna1-name1,  "any data object whose type you want to check
          typ  TYPE c.           "receives the ABAP type
    DESCRIBE FIELD dobj TYPE typ.
    WRITE: / typ.
    typ will contain the ABAP data type of the data object (for example 'C' for a character field).
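    If you need the Dictionary data type of a table field directly from the table and field names, the function module DDIF_FIELDINFO_GET is often used; a minimal sketch is below (verify the parameter names in SE37 on your release):
    DATA: ls_dfies TYPE dfies.
    CALL FUNCTION 'DDIF_FIELDINFO_GET'
      EXPORTING
        tabname        = 'KNA1'
        fieldname      = 'NAME1'
      IMPORTING
        dfies_wa       = ls_dfies
      EXCEPTIONS
        not_found      = 1
        internal_error = 2
        OTHERS         = 3.
    IF sy-subrc = 0.
      WRITE: / ls_dfies-datatype,  "Dictionary type, for example 'CHAR'
               ls_dfies-leng.      "field length
    ENDIF.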
    and check this thread also....
    Re: How to get datatype of fields in dynamic structures
    Cheers,
    Simha.
    Reward all the helpful answers..
