LIS structure in a converted Unicode system
Hello all,
We are in the process of converting to Unicode. We have successfully created a Unicode test system and converted our data and programs. We have set the Unicode flag on most of our "Z" programs, and for the most part things are working well. However, we are getting the following error:
In program "RMCX0002 ", the following syntax error occurred
in the Include "RMCX0002 " in line 0:
"The program "RMCX0002" is not Unicode-compatible, according to its program attributes."
It looks to me like RMCX0002 is a generated program from an LIS info structure (whatever that is???). The documentation block looks like this:
* Generated report for statistics update LIS *
* Info-Structure: S510 *
* Update group..: 000001 *
* DDIC-Structure: S510 *
* Client........: 000 *
* Author........: DDIC *
* Date..........: 07.07.2002 08:21:36 *
* Do not change this report ! *
When looking at the attributes of the program I see that, indeed, the Unicode attribute is not set. However, I am reluctant to set this flag myself because it is an SAP-generated program.
Is there a way to get this attribute set by regenerating these LIS structures or something like that? If so, how do I do that? And does it need to be done in a Unicode system, or can we do it in our non-Unicode DEV system and transport it to the Unicode system?
BTW, I have no idea what an LIS structure is or what it is used for. I just know it's not working in our Unicode system.
Thanks,
Larry Browning
ABAP developer
Baldor Electric Co.
Message was edited by:
Larry Browning
I have a similar issue, and I was told that if I regenerated the info structure in MC25 it would solve the dump. And it did.
Now what I want to know is: how did the regeneration solve the Unicode issue for the RMCX* update program of an info structure? I read before that regeneration updates the info structures with the latest numbers to be used by BW.
I'd like to know how the info structure regeneration and the Unicode attribute of the update program are connected.
Thanks!
Similar Messages
-
Errors in PDF converter for Unicode systems
Hello experts,
I have some problems with the PDF conversion in a Unicode system.
I have to convert a smartform to PDF using CONVERT_OTF. I have implemented SAP notes 812821 and 999712.
The problem is that the special characters (diacritics, language-specific characters) are overlapped in the generated PDF document. Does anyone know what the problem is? I don't know what to do anymore...

Hi,
Try the code below:
*& Report ZTEST_NREDDY1
REPORT ztest_nreddy1 NO STANDARD PAGE HEADING.

DATA: it_otf                  TYPE STANDARD TABLE OF itcoo,
      it_docs                 TYPE STANDARD TABLE OF docs,
      it_lines                TYPE STANDARD TABLE OF tline,
      st_job_output_info      TYPE ssfcrescl,
      st_document_output_info TYPE ssfcrespd,
      st_job_output_options   TYPE ssfcresop,
      st_output_options       TYPE ssfcompop,
      st_control_parameters   TYPE ssfctrlop,
      v_len_in                TYPE so_obj_len,
      v_language              TYPE sflangu VALUE 'E',
      v_e_devtype             TYPE rspoptype,
      v_bin_filesize          TYPE i,
      v_name                  TYPE string,
      v_path                  TYPE string,
      v_fullpath              TYPE string,
      v_filter                TYPE string,
      v_uact                  TYPE i,
      v_guiobj                TYPE REF TO cl_gui_frontend_services,
      v_filename              TYPE string,
      v_fm_name               TYPE rs38l_fnam.

CONSTANTS c_formname TYPE tdsfname VALUE 'ZTEST'.

* Determine a device type suitable for the form language
CALL FUNCTION 'SSF_GET_DEVICE_TYPE'
  EXPORTING
    i_language    = v_language
    i_application = 'SAPDEFAULT'
  IMPORTING
    e_devtype     = v_e_devtype.

st_output_options-tdprinter = v_e_devtype.
*st_output_options-tdprinter = 'locl'.
st_control_parameters-no_dialog = 'X'.
st_control_parameters-getotf    = 'X'.

*............... GET SMARTFORM FUNCTION MODULE NAME ..................
CALL FUNCTION 'SSF_FUNCTION_MODULE_NAME'
  EXPORTING
    formname           = c_formname
  IMPORTING
    fm_name            = v_fm_name
  EXCEPTIONS
    no_form            = 1
    no_function_module = 2
    OTHERS             = 3.
IF sy-subrc <> 0.
  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
          WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.

*.......................... CALL SMARTFORM ...........................
CALL FUNCTION v_fm_name
  EXPORTING
    control_parameters   = st_control_parameters
    output_options       = st_output_options
  IMPORTING
    document_output_info = st_document_output_info
    job_output_info      = st_job_output_info
    job_output_options   = st_job_output_options
  EXCEPTIONS
    formatting_error     = 1
    internal_error       = 2
    send_error           = 3
    user_canceled        = 4
    OTHERS               = 5.
IF sy-subrc <> 0.
  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
          WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ELSE.
*........................ CONVERT OTF TO PDF .........................
  CALL FUNCTION 'CONVERT_OTF_2_PDF'
    IMPORTING
      bin_filesize           = v_bin_filesize
    TABLES
      otf                    = st_job_output_info-otfdata
      doctab_archive         = it_docs
      lines                  = it_lines
    EXCEPTIONS
      err_conv_not_possible  = 1
      err_otf_mc_noendmarker = 2
      OTHERS                 = 3.
  IF sy-subrc <> 0.
    MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
            WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
  ENDIF.
Regards
jana -
Conversion of a TYPE x structure to a char field in a Unicode system
Hi all,
please refer to the following code. I need to assign a TYPE x structure to a single char field:
SRTFDHIGH = xsrtfdhigh
The Unicode system gives a conversion error.
DATA: BEGIN OF xsrtfdhigh,
        pernr        LIKE pc2b0-pernr,
        restkey1(16) TYPE x VALUE 'FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF',
        restkey2(16) TYPE x VALUE 'FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF',
      END OF xsrtfdhigh.
DATA: srtfdhigh LIKE pcl2-srtfd VALUE
      '9999999999999999999999999999999999999999'.
PCL2-SRTFD is type CHAR 40.
PC2B0-PERNR is type NUMC 8.
Is there any solution to convert this?
Edited by: Vikram shirole on Feb 23, 2008 8:52 AM
Edited by: Vikram shirole on Feb 23, 2008 8:54 AM

First read some documentation like [Character String Processing|http://help.sap.com/saphelp_nw2004s/helpdata/en/79/c554d9b3dc11d5993800508b6b8b11/frameset.htm]. Hexadecimal fields usually contain control characters (line feed, carriage return, tabulations and the like), so use the constants of class [CL_ABAP_CHAR_UTILITIES|http://help.sap.com/abapdocu_70/en/ABENCL_ABAP_CHAR_UTILITIES.htm].
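Building on that advice, here is a hedged sketch (field names from the original post; the offset variable is an assumption) that replaces the TYPE x components with character fields filled with CL_ABAP_CHAR_UTILITIES=>MAXCHAR, so the structure becomes purely character-like and the assignment to SRTFDHIGH is allowed under the Unicode conversion rules:

```abap
DATA: BEGIN OF xsrtfdhigh,
        pernr        LIKE pc2b0-pernr,
        restkey1(16) TYPE c,
        restkey2(16) TYPE c,
      END OF xsrtfdhigh.

DATA: srtfdhigh LIKE pcl2-srtfd,
      lv_off    TYPE i.

" Fill the rest keys character by character with the highest
" character value instead of hex 'FF...' bytes.
DO 16 TIMES.
  lv_off = sy-index - 1.
  xsrtfdhigh-restkey1+lv_off(1) = cl_abap_char_utilities=>maxchar.
  xsrtfdhigh-restkey2+lv_off(1) = cl_abap_char_utilities=>maxchar.
ENDDO.

" A purely character-like structure may be assigned to a char field.
srtfdhigh = xsrtfdhigh.
```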
Regards,
Raymond -
Convert TYPE string to TYPE p in a UNICODE system
Hi-
I've got a string that contains an amount, and I want to convert it into a TYPE p in my program to store it as a proper amount in the database.
I can't do a simple move or use FMs like HMRC_AMOUNT_STRING_CONVERT, as I am in a Unicode system and it errors if I try these.
How can I do it?
Example.
parameters p_amount type string.
start-of-selection.
data l_amount TYPE p decimals 2.
MOVE p_amount TO l_amount.
=> compile error.

Hi Tristan.....
In PARAMETERS we can only have predefined data types.
The data types valid for parameters include the built-in ABAP types c, d, i, n, p, t, and x.
You cannot use data type f, references, or aggregate types.
But you are using TYPE string, which is not allowed,
so you are getting a compiler error.
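Since the parameter cannot be a string anyway, one hedged workaround (variable names are illustrative; it assumes '.' as the decimal separator and no thousands separators in the input) is to declare the parameter as a character field and convert it manually:

```abap
PARAMETERS p_amount(20) TYPE c.

DATA l_amount TYPE p DECIMALS 2.

START-OF-SELECTION.
  " Remove embedded blanks; a character field can then be
  " converted to TYPE p by plain assignment, also on Unicode.
  CONDENSE p_amount NO-GAPS.
  l_amount = p_amount.
  WRITE: / l_amount.
```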
Suresh..... -
Export / Import to memory Id 2 . ( Unicode System)
Hi all,
we get a Unicode syntax error for the statement
EXPORT d1200 TO MEMORY ID 2.
The error description is:
"2" must be a character-type field (data type C, N, D, or T). "INTERFACE".
We get a similar error in the IMPORT statement as well.
Please refer to this code:
DATA: BEGIN OF d1200,
        form                  LIKE mard-matnr,
        formbez               LIKE makt-maktx,
        labst(10),
        zeichnr               LIKE drad-doknr,
        zeichnr1              LIKE drad-doknr,
        produktion_brutto(08) TYPE p,            " (16) type c
        reparatur(08)         TYPE p,
        zaehlerstand(16),
        maxprod(08)           TYPE p,
        kapazitaet(06)        TYPE p DECIMALS 2,
        hoehe(07),
        gewicht(07),
        netto-gesamt(07)      TYPE p,
        ausschuss-gesamt(07)  TYPE p,
        brutto-gesamt(07)     TYPE p,
*       restkey replaced, see change comment below
*       restmenge(07)         TYPE p,
*mico20000430
*       necessary for correct display in the print list
        restmenge(08)         TYPE p,
        datenuebernahme(16),
        menge(08)             TYPE p DECIMALS 0,
        maxprod2(08)          TYPE p DECIMALS 0,
        umwertung(08)         TYPE p DECIMALS 0,
        ohnefa(08)            TYPE p DECIMALS 0,
        charge                LIKE mchb-charg,
        fhmnr                 LIKE mara-matnr,
      END OF d1200.
EXPORT d1200 TO MEMORY ID 2.

Unicode systems have a problem in that you can't define stuff as TYPE x. Structures also have to be properly "bounded". You can also no longer rely on code such as field1+7(9).
You could define your structure as a STRING and delimit the fields with, say, a tab character (use the attribute in CL_ABAP_CHAR_UTILITIES to get the tab character in a Unicode system).
Export the string to memory.
On the import end, use the SPLIT command on the string to re-create your data elements.
You will then have to manage the decimal arithmetic yourself.
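The steps above can be sketched as follows (a minimal example using only the character-like fields of d1200; numeric TYPE p components would first have to be converted to character fields, and the memory ID is an assumption):

```abap
DATA: lv_buffer TYPE string,
      lv_tab    TYPE c.

lv_tab = cl_abap_char_utilities=>horizontal_tab.

" Export side: build one tab-delimited string and export it.
CONCATENATE d1200-form d1200-formbez d1200-labst
  INTO lv_buffer SEPARATED BY lv_tab.
EXPORT buffer = lv_buffer TO MEMORY ID 'D1200'.

" Import side: read the string back and split it into the fields.
IMPORT buffer = lv_buffer FROM MEMORY ID 'D1200'.
SPLIT lv_buffer AT lv_tab
  INTO d1200-form d1200-formbez d1200-labst.
```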
It's a pain, but that's largely the point of Unicode: everything is treated as TYPE C.
Structures containing anything other than type C data are also a real problem if you want to upload / download data. Handling Binary files is a nightmare.
There are some SAP-supplied utilities / notes / documentation on converting to a Unicode system.
Cheers
jimbo -
Need to assign a multi datatype struct to a single field in Unicode system
Hi all,
I had posted an earlier question related to this to understand what the issue is, and a few responses helped. I have now gone through the conversion rules in Unicode systems related to assigning a structure (with different data types) to a single field, and I am looking for a solution for my case. The current code written in 4.6 says
l_bapiparex-valuepart1 = l_bape_vbap
where
l_bape_vbap is a structure of type bape_vbap and
l_bapiparex of type bapiparex.
So according to the rules, when I do this kind of an assignment in ECC, the first field's length (i.e. VBELN) must be at least 240 characters long. Only then will this assignment be error-free, am I right?
Or is this an inadmissible assignment situation?
If not, is there any other way I can achieve the same result by fixing this issue? I am not able to think of any! Has anyone come across this kind of issue during an upgrade? Please do suggest.
Just for your reference I have pasted the rule details from SAP help..
Conversion in Unicode Programs
The following rules apply in Unicode programs when converting a flat structure to a single field and vice versa:
If a structure is purely character-like, it is processed during conversion like a data object of the type c (Casting). The single field can have any elementary data type.
If the structure is not purely character-like, the single field must have the type c and the structure must begin with a character-like fragment that is at least as long as the single field. The assignment takes place only between this fragment and the single field. The character-like fragment of the structure is treated like a data object of the type c (Casting) in the assignment. If the structure is the target field, the remaining character-like fragments are filled with blanks and all other components with the initial value that corresponds to their type.
No conversion rule is defined for any other cases, so that assignment is not possible.
Note
If a syntax error occurs due to an inadmissible assignment between flat structures and single fields, you can display the fragment view of the corresponding structure when displaying the syntax error in the ABAP Editor by choosing the pushbutton with the information icon.
thanks

Try this way:
CALL METHOD cl_abap_container_utilities=>fill_container_c
  EXPORTING
    im_value               = l_bape_vbap
  IMPORTING
    ex_container           = l_bapiparex-valuepart1
  EXCEPTIONS
    illegal_parameter_type = 1
    OTHERS                 = 2.
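For the way back (reading VALUEPART1 into the structure again), the same class offers READ_CONTAINER_C; a hedged sketch using the variables from the post:

```abap
CALL METHOD cl_abap_container_utilities=>read_container_c
  EXPORTING
    im_container           = l_bapiparex-valuepart1
  IMPORTING
    ex_value               = l_bape_vbap
  EXCEPTIONS
    illegal_parameter_type = 1
    OTHERS                 = 2.
```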
a® -
TABLE_ENTRIES_GET_VIA_RFC in unicode system
Hi all,
I know this is going to be a long initial post, but please please take the time to read it. Otherwise there would be many unnecessary questions.
We are using a middleware for mobile devices that connects to SAP to read table data (via the above-mentioned RFC) and to post RFCs/BAPIs.
Now we are trying to connect to a Unicode SAP system (6.20). The statement
SELECT * FROM (TABNAME) INTO TABLE TABENTRY WHERE (SEL_TAB)
where TABENTRY is a table of type char(2048) does not work any more, as in Unicode systems the structure of the DB table and the internal table have to be the same.
So we found the FM CRM_CODEX_GET_TABLE_VIA_RFC in CRM, which is built from a copy of the above-mentioned FM and solves this problem by
1.) dynamically creating an internal table of the same type as the DB table,
2.) selecting the data into this new internal table,
3.) looping over the internal table, converting each field of its structure to a char variable and appending it to the char(2048) result.
Theoretically everything's ok. The fm works now and returns correct data. But there's still one problem, the middleware doesn't convert the data correctly, as the values of fields of type 'p' are passed differently.
non unicode (standard fm):
1000000000000280401011000COMPDL 200408 ###E8##############################
unicode (changed fm):
1000000000000280401016000COMPDL 200504 10.000 0.000 0.000 0.000
As you can see, the select statement from the top of this post just puts the data into the string without actually converting the numbers in fields of type p (or QUAN, CURR in db).
The changed fm with converting every field also converts the number values, now they appear as char fields.
The middleware tries to convert the number values, but always returns 0 (I can only see the results, as the actual programming is a black box for me).
Has anyone any idea how to solve this problem? (Besides getting help from the middleware vendor, which is difficult, as there is a new release working with Unicode systems; but we will stay on the old release for some months from now...)

Hi Raja,
thanks for your answer.
I had already searched the forum and found your document about RFC_READ_TABLE which I think is quite interesting and a good solution.
But unfortunately, I cannot change the middleware's RFC logic, e. g. change the BAPI or make changes to the in-/output streams.
I now live with a workaround:
I modified the RFC to convert all p type fields to character fields and also changed the metadata RFCs accordingly, which works OK.
For all RFCs I use to post data to SAP, I write a wrapper RFC with character only structures and convert them to the internal RFCs inside SAP.
This is not my preferred solution, but I am very short of time and it works pretty well.
Regards,
Hans -
Translate string using hex(0020) in Unicode system.
Hello all,
We are facing a problem of the "translate" statement in the Unicode system.
The original statement goes as follows:
TRANSLATE BULOG USING WS_STRING1.
Here BULOG is a structure and ws_string1 is declared as follows:
DATA: ws_string1(2) TYPE x VALUE '0020'.
In the new system, which is Unicode-enabled, the above-mentioned statement removes all the '#' placed in the structure.
We tried the following statement instead of the original TRANSLATE statement.
TRANSLATE BULOG USING SPACE.
But this statement leaves the '#' unchanged.
We have already REPLACE statement after converting the structure contents into a string.
We have also tried the convert methods of the classes CL_ABAP_CONV_IN_CE and CL_ABAP_CONV_OUT_CE.
Hoping to receive a fast response.
Thanks in advance.
Zankruti.

You might want to read the ABAP Help on TRANSLATE:
Addition 2
... USING pattern
Effect
If you specify USING, the characters in text are converted according to the rule specified in pattern.
pattern must be a character-type data object whose contents are interpreted as a sequence of character pairs.
Your option 1 is not working because TYPE x is a "byte field"; technically it is not a character type. Your option 2 is not working because <pattern> must be a sequence of character pairs (you had just SPACE).
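A hedged sketch of a working pattern (it assumes BULOG is purely character-like and that '#' is the character to blank out):

```abap
" Pattern is a sequence of character pairs: old char, new char.
DATA ws_pattern(2) TYPE c.
ws_pattern+0(1) = '#'.     " character to replace (assumption)
ws_pattern+1(1) = space.   " replacement character
TRANSLATE bulog USING ws_pattern.
```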
In Option 1 just change the definition from X to C or STRING. -
Client Copy between unicode system and non-unicode system
Hello,
we have to build up a new system for Japan. It is planned to install a new 4.7Ext200 Unicode system and then make a client copy from a non-Unicode 4.7Ext200 system (languages: German, English) to the Unicode system. I don't think that this is a possible way, but I can't find information regarding client copies from non-Unicode to Unicode. I would advise the project to convert the existing non-Unicode system to Unicode, make a system copy for the new system, and then install the Japanese languages. Any information which can help me is welcome.
Regards,
Alexander

Hi,
without conversion, a client copy is not possible.
Have a look at the following:
Re: Client copy between unicode and Non-unicode
regards,
kaushal -
GUI_DOWNLOAD problems with CR+LF when transfering from unicode system
Hi,
I have successfully used FM GUI_DOWNLOAD in non-Unicode systems for years. Lately I faced the challenge of rewriting my code for a Unicode system. The configuration is:
- SAP R/3 unicode system;
- data to be downloaded at presentation server in a non-unicode codepage (cp 9504).
I have successfully used the GUI_DOWNLOAD parameter CODEPAGE, and the data is translated correctly when checking the local file, but for some reason CR+LF is replaced with '#' (which is the default value of the REPLACEMENT parameter of this function); that means at the end of each row I have '##' instead of CR+LF.
My question is: how can I force correct behaviour of GUI_DOWNLOAD in order to get my output file at presentation server with CR+LF?
Any help would be highly appreciated.
Many thanks in advance.
Regards,
Ivaylo Mutafchiev
SAP/ABAP consultant
VBS Ltd.
P.S. In order to find some other way to fix my problem I'm still playing with the instantiation of a CL_ABAP_CONV_OBJ and its methods create & convert, but without success for now; the resulting strings are not as expected.

Hi,
in fact, I never placed CR+LF in my lines before your suggestion; the rest was done by the FM GUI_DOWNLOAD. It works fine even when I use a Unicode file as output: I get my CR+LF at the end of the record IN MY OUTPUT FILE ONLY, but not in my internal table; I never placed CR+LF in there.
The problem occurs when I try to use GUI_DOWNLOAD with parameter CODEPAGE = '9504' (some non-Unicode codepage) while the original data (my internal table) is in Unicode. Then (in my opinion) this function doesn't translate the Unicode CR+LF into non-Unicode ones (if that's possible at all, I can't be sure), and the result is '##' in the output file.
I checked the value of CL_ABAP_CHAR_UTILITIES=>CR_LF by getting it in my variable - and it is '##'.
Whet should I put into this class-attribute in order to get it working in this scenario? I have no idea...
The attribute type is ABAP_CR_LF - which is char 2.
What next?
Thanks,
Ivaylo -
Reading TemSe files in unicode system showing greek letters
Hi Gurus,
I am facing an issue with the newly converted Unicode (cp 4102) spool files. I am using the standard functions RSTS_OPEN_RLC, RSTS_READ and RSTS_CLOSE to read TemSe files and create output files on the application server using OPEN DATASET and TRANSFER.
After conversion to Unicode, these function calls are returning some Greek characters, with the size of the itab doubled (due to the double-byte UTF-16 character set).
The program generates
Runtime Error CONVT_CODEPAGE
Except. CX_SY_CONVERSION_CODEPAGE
"While a text was being converted from code page '4102' to '1100', one of the following occurred:
- a character was discovered that could not be represented in one of the two code pages;
- the system established that this conversion is not supported."
the statement
TRANSFER TEMSE_DATA TO ACT_FILE
generates this error.
Can someone tell me what is missing ?
TIA
Wasim

The extra characters are pasted below:
㤀 ㈀㈀㐀 ㈀㜀㐀㐀㐀 㤀 ㈀ 㘀 㤀㈀ 圀 㤀㐀 唀⸀匀⸀ 䈀䄀一䬀
These kind of characters are also visible in SP11 contents for spool files.
Again the question: which font do you use for display in your SAPGUI?
The spool files are generated through language EN and should not show any non-english characters.
That doesn't tell us anything. One advantage of Unicode is that you can enter, display and modify all languages/characters with an English logon.
Another question: What technology is generating those spool files? Sapscript? Smartforms? Self written program?
Markus -
PDF conversion for chineese characters in Unicode system
I am facing a problem while converting the SAP Script Output to PDF format for Chinese characters.
I am working on ECC (5.0) Unicode system.
Scenario:
After saving a Purchase Order an E-mail is sent to the customer - attaching the
PO output in PDF format. The e-mail was received successfully by the receiver, but on opening the PDF all the Chinese characters were displayed as junk characters, while all the English characters are properly displayed. I tried to open the PDF file in Acrobat Reader versions 6.0, 7.0 and 8.0, but with no result. I used the CONVERT_OTF function module for converting the OTF format to PDF format. I also tried using the font CNSONG.
I tried executing the standard program RSTXPDFT4 for converting to PDF by giving the spool. In the spool it shows the Chinese characters perfectly, but in the PDF the Chinese characters were showing as junk.
Can you please help and advise how to see the Chinese characters in PDF in Unicode systems.
Thanks in advance.

> Juraj Danko wrote:
> Hi,
> I have a similar problem to yours ... how have you solved it?
> thanks
> Juraj
I found a solution, but I am not sure if it was for this problem or for an output problem with, for example, PL in non-Unicode systems.
I created the input for CONVERT_OTF with CALL FUNCTION 'PRINT_TEXT'.
PRINT_TEXT has to be called with DEVICE = 'PRINTER',
DEVICE = 'ABAP' uses internally the wrong code page.
You have also to set otf_options-tdprinter to a valid printer,
if it is empty, the default printer from user settings is used.
You can use code example from SAP note 413295.
Before you call CONVERT_OTF, you can also check entries with 'FC' in OTF input.
The font (see description of OTF format in SAP help) must be set like described in SAP note 144718.
/Tibor
Edited by: Tibor Gerke on Jan 13, 2011 10:29 AM -
Convert_otf in unicode system
Hello,
we have switched a CRM 4.0 system from non-Unicode to Unicode. In this context we encountered two problems with the result of the function module 'CONVERT_OTF'. We call the function module as follows:
call function 'CONVERT_OTF'
  exporting
    format        = 'PDF'
    max_linewidth = 132
    copynumber    = 0
  importing
    bin_filesize  = lv_bin_filesizepdf
  tables
    otf           = it_otfdata
    lines         = lt_lines_pdf.
The OTF table is generated by a Smartform.
Noticeably, the PDF table in the Unicode system has only about half as many rows as in the non-Unicode system. (The parameter FLATE_COMPR_OFF in report RSTXPDF3 is always on.)
1. Problem
We send the PDF table (lt_lines_pdf) via RFC to a non-Unicode R/3 system and save it there on the file system. After switching to Unicode, the document in the RFC system is damaged and cannot be processed.
2. Problem
We transform the lt_lines_pdf for an archive in the following way.
data: ls_data   type char1024,
      lv_134    type char134,
      ls_lines  type tline,
      lt_data   type standard table of tbl1024,
      lv_count  type i,   " counters were not declared in the original snippet
      lv_offset type i,
      lv_begin  type i.
field-symbols: <fs> type tbl1024.

assign ls_data to <fs> casting.
lv_count = 0.
loop at lt_lines_pdf into ls_lines.
  lv_134 = ls_lines.
  lv_offset = lv_count.
  lv_count = lv_count + 134.
  if lv_count < 1024.
    ls_data+lv_offset(134) = lv_134.
  else.
    lv_count = lv_count - 1024.
    lv_begin = 134 - lv_count.
    ls_data+lv_offset(lv_begin) = lv_134(lv_begin).
    append <fs> to lt_data.
    clear ls_data.
    if lv_count > 0.
      ls_data = lv_134+lv_begin.
    endif.
  endif.
endloop.
if ls_data is not initial.
  append <fs> to lt_data.
endif.
We need lt_data for the FM 'SCMS_AO_TABLE_CREATE'.
I hope somebody can help me; I have already invested much time in this problem!

Hi,
CONVERT_OTF packs two bytes into one character of i-tab LINES on Unicode systems. If you pass this via RFC to non-Unicode systems, data is lost. The table contents are bytes even though the data type is character. The conversion routines between Unicode and non-Unicode do not know about this.
However, the function module contains an export parameter BIN_FILE. This parameter is of type XSTRING, which always has a byte representation. This data type will not be converted in RFC calls.
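Based on that, a hedged sketch of the call (variable names follow the post; check the parameter list of CONVERT_OTF in your release):

```abap
DATA: lv_pdf_xstring     TYPE xstring,
      lv_bin_filesizepdf TYPE i.

CALL FUNCTION 'CONVERT_OTF'
  EXPORTING
    format                = 'PDF'
    max_linewidth         = 132
  IMPORTING
    bin_filesize          = lv_bin_filesizepdf
    bin_file              = lv_pdf_xstring  " byte representation, RFC-safe
  TABLES
    otf                   = it_otfdata
    lines                 = lt_lines_pdf
  EXCEPTIONS
    err_max_linewidth     = 1
    err_format            = 2
    err_conv_not_possible = 3
    OTHERS                = 4.

" Pass lv_pdf_xstring via RFC; XSTRING is not code-page converted.
```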
Greetings -
How to convert Unicode characters to Non-unicode
Hi! Gurus,
How can I convert a unicode character into a non-unicode character?
We have an ABAP program in 4.5B that writes a file with a 15-character fixed length that is passed to another application. When we upgraded to mySAP we encountered an issue with the length of the strings when Japanese or Chinese characters are included in the string: it exceeded the 15-character length.

Hi,
Try this link
[Conversion of Non Unicode to Unicode system]
Regards,
Surinder -
Field lengths in unicode system.
Hi gurus,
I am creating a dynamic internal table. To get the structure of a DDIC table at run time I use the following code to create the field catalog:
DATA: idetails TYPE abap_compdescr_tab,
      xdetails TYPE abap_compdescr.
DATA: xfc TYPE lvc_s_fcat,
      ifc TYPE lvc_t_fcat.

ref_table_des ?=
  cl_abap_typedescr=>describe_by_name( p_ztab_name ).
idetails[] = ref_table_des->components[].
LOOP AT idetails INTO xdetails.
  CLEAR xfc.
  xfc-fieldname = xdetails-name.
  xfc-datatype  = xdetails-type_kind.
  xfc-inttype   = xdetails-type_kind.
  xfc-intlen    = xdetails-length.
  xfc-decimals  = xdetails-decimals.
  APPEND xfc TO ifc.
ENDLOOP.
The xdetails-length is doubled in a Unicode system,
although it matches the field length of the table in a non-Unicode system.
I then write the data from the internal table to a file using
OPEN DATASET ld_fullpath FOR OUTPUT IN TEXT MODE ENCODING UTF-8.
For example, a numeric field is written to the file as 00 when its actual value in the table is 0.
Please suggest how to solve this problem?
Should i build field catalog in some other way.
Assured points for helpful answers.
Regards,
Abhishek

Hi,
use this function module to get the field info:
DATA: lta_dfies TYPE dfies OCCURS 0 WITH HEADER LINE.

CALL FUNCTION 'DDIF_FIELDINFO_GET'
  EXPORTING
    tabname   = table
  TABLES
    dfies_tab = lta_dfies.

LOOP AT lta_dfies.
  IF (condition to select the required fields).
    PERFORM build_fcat.
  ENDIF.
ENDLOOP.

FORM build_fcat.
  CLEAR xfc.
  xfc-fieldname = lta_dfies-fieldname.
  xfc-datatype  = lta_dfies-datatype.
  xfc-inttype   = lta_dfies-inttype.
  xfc-intlen    = lta_dfies-intlen.
  xfc-decimals  = lta_dfies-decimals.
  APPEND xfc TO ifc.
ENDFORM.
rgds,
bharat.
Message was edited by:
Bharat Kalagara