Short dump - internal table size issue
Hi,
I get the following message in the short dump analysis for a report.
No storage space available for extending table "IT_920".
You attempted to extend an internal table, but the required space was not available.
Error Analysis:
The internal table "IT_920" could not be enlarged further.
To extend the internal table, 9696 bytes of storage space was
needed, but none was available. At this point, the table "IT_920" has
1008240 entries.
It's an old report, and I saw the internal table declared with the OCCURS clause:
DATA: BEGIN OF itab OCCURS 100,
        " ... field list ...
      END OF itab.
I tried changing it to OCCURS 0, but the issue still persists.
Any help would be highly appreciated.
CM
Hello CMV,
This is a basic problem with SAP internal tables. Memory is allocated for every internal table (max. 256 KB); once you cross the memory limit of the internal table, it results in a short dump.
The only way to overcome this problem is to handle a limited number of records at a time.
Please refer to the following sample code, which will help you avoid a short dump while processing a large number of records:
SORT TAB_RESULT.
DESCRIBE TABLE TAB_RESULT LINES W_RECORDS.

W_LOW = 1.
W_UP  = 1000.

* Split the records from TAB_RESULT into packages of 1000 at a time
* to avoid a short dump in case of a large number of records.
WHILE W_LOW <= W_RECORDS.
  R_PKUNWE-SIGN   = 'I'.
  R_PKUNWE-OPTION = 'EQ'.
  R_WERKS-SIGN    = 'I'.
  R_WERKS-OPTION  = 'EQ'.

  LOOP AT TAB_RESULT FROM W_LOW TO W_UP.
    MOVE TAB_RESULT-PKUNWE TO R_PKUNWE-LOW.
    MOVE TAB_RESULT-WERKS  TO R_WERKS-LOW.
    APPEND R_PKUNWE.
    APPEND R_WERKS.
  ENDLOOP.

* Fetch sold-to party
  SELECT KUNNR NAME1
    FROM KNA1
    APPENDING CORRESPONDING FIELDS OF TABLE TAB_KNA1
    WHERE KUNNR IN R_PKUNWE.

* Fetch plant
  SELECT WERKS NAME1
    FROM T001W
    APPENDING CORRESPONDING FIELDS OF TABLE TAB_T001W
    WHERE WERKS IN R_WERKS.

  REFRESH: R_PKUNWE,
           R_WERKS.

  W_LOW = W_LOW + 1000.
  W_UP  = W_UP  + 1000.
ENDWHILE.
Hope this will help you solve the problem.
Cheers,
Nilesh
Similar Messages
-
Internal table size - short dump
Hi all,
I am getting following error:
'Following a SELECT statement, the data read could not be placed in
the output area.
A conversion may have been intended that is not supported by the
system, or the output area may be too small.'
Actually I am extracting data into an internal table. I have implemented my own logic to fetch the data in packets, and after each packet I free the internal table using the FREE command. After fetching 16 data packets, it gives me this error.
Please help me with this.
Thanks and regards,
Ridhima

It gives a short dump when selecting data from ETXDCJ:
*-- Local Data
DATA: t_etxdch TYPE STANDARD TABLE OF tp_etxdch,
t_etxdci TYPE STANDARD TABLE OF tp_etxdci,
t_etxdcj TYPE STANDARD TABLE OF tp_etxdcj,
t_docnr TYPE STANDARD TABLE OF tp_docnr,
wa_etxdch TYPE tp_etxdch,
wa_etxdci TYPE tp_etxdci,
wa_etxdcj TYPE tp_etxdcj,
wa_docnr TYPE tp_docnr,
wa_data TYPE zoxdev0388,
l_lines TYPE sy-tabix,
l_line1 TYPE sy-tabix,
l_times TYPE sy-index,
l_div TYPE sy-index,
l_cnt1 TYPE sy-tabix,
l_cnt2 TYPE sy-tabix.
*-- Clean Up
CLEAR: t_etxdch[],
t_etxdci[],
t_etxdcj[],
t_docnr[].
*-- Get the document numbers selected in extraction
LOOP AT c_t_data.
CLEAR wa_docnr.
wa_docnr = c_t_data-docnr.
APPEND wa_docnr TO t_docnr.
ENDLOOP.
DATA: t_doc TYPE STANDARD TABLE OF tp_docnr,
wa_doc TYPE tp_docnr.
DESCRIBE TABLE t_docnr LINES l_lines.
CHECK l_lines IS NOT INITIAL.
WHILE l_cnt2 LE l_lines.
l_cnt1 = l_cnt2 + 1.
l_cnt2 = l_cnt1 + 200.
APPEND LINES OF t_docnr FROM l_cnt1 TO l_cnt2 TO t_doc.
SORT t_doc BY docnr.
DELETE ADJACENT DUPLICATES FROM t_doc COMPARING docnr.
*-- Get Header data
SELECT docnr
client
currency
gl_currency
FROM etxdch
INTO TABLE t_etxdch
FOR ALL ENTRIES IN t_doc
WHERE docnr = t_doc-docnr.
*-- Success
IF sy-subrc IS INITIAL.
SORT t_etxdch BY docnr.
ENDIF.
*-- Get Item data
SELECT docnr
itemnr
quantity
unit
amount
gross_amount
freight_am
exempt_amt
taxpcov
taxamov
gl_taxpcov
gl_taxamov
FROM etxdci
INTO TABLE t_etxdci
FOR ALL ENTRIES IN t_doc
WHERE docnr = t_doc-docnr.
*-- Success
IF sy-subrc IS INITIAL.
SORT t_etxdci BY docnr
itemnr.
ENDIF.
*-- Get Jurisdiction data
SELECT docnr
itemnr
txjlv
taxpct
taxamt
taxbas
examt
gl_taxpct
gl_taxamt
gl_taxbas
FROM etxdcj
INTO TABLE t_etxdcj
FOR ALL ENTRIES IN t_doc
WHERE docnr = t_doc-docnr.
*-- Success
IF sy-subrc IS INITIAL.
SORT t_etxdcj BY docnr
itemnr
txjlv.
ENDIF.
*-- Populate the enhanced fields
CLEAR wa_data.
LOOP AT c_t_data INTO wa_data FROM l_cnt1 TO l_cnt2.
*-- Populate Header fields
CLEAR wa_etxdch.
READ TABLE t_etxdch INTO wa_etxdch WITH KEY docnr = wa_data-docnr
BINARY SEARCH.
IF sy-subrc IS INITIAL.
wa_data-yyclient = wa_etxdch-client.
wa_data-yycurrency = wa_etxdch-currency.
wa_data-yygl_currency = wa_etxdch-gl_currency.
ENDIF.
*-- Populate Item fields
CLEAR wa_etxdci.
READ TABLE t_etxdci INTO wa_etxdci WITH KEY docnr = wa_data-docnr
itemnr = wa_data-itemnr
BINARY SEARCH.
IF sy-subrc IS INITIAL.
wa_data-yyquantity = wa_etxdci-quantity.
wa_data-yyunit = wa_etxdci-unit.
wa_data-yyamount = wa_etxdci-amount.
wa_data-yygross_amount = wa_etxdci-gross_amount.
wa_data-yyfreight_am = wa_etxdci-freight_am.
wa_data-yyexempt_amt = wa_etxdci-exempt_amt.
wa_data-yytaxpcov = wa_etxdci-taxpcov.
wa_data-yytaxamov = wa_etxdci-taxamov.
wa_data-yygl_taxpcov = wa_etxdci-gl_taxpcov.
wa_data-yygl_taxamov = wa_etxdci-gl_taxamov.
ENDIF.
*-- Populate Jurisdiction fields
CLEAR wa_etxdcj.
READ TABLE t_etxdcj INTO wa_etxdcj WITH KEY docnr = wa_data-docnr
itemnr = wa_data-itemnr
txjlv = wa_data-txjlv
BINARY SEARCH.
IF sy-subrc IS INITIAL.
wa_data-yytaxpct = wa_etxdcj-taxpct.
wa_data-yytaxamt = wa_etxdcj-taxamt.
wa_data-yytaxbas = wa_etxdcj-taxbas.
wa_data-yyexamt = wa_etxdcj-examt.
wa_data-yygl_taxpct = wa_etxdcj-gl_taxpct.
wa_data-yygl_taxamt = wa_etxdcj-gl_taxamt.
wa_data-yygl_taxbas = wa_etxdcj-gl_taxbas.
ENDIF.
*-- Update values in Extract structure
MODIFY c_t_data FROM wa_data.
CLEAR wa_data.
ENDLOOP.
CLEAR: t_doc[].
FREE : t_etxdch,
t_etxdci,
t_etxdcj.
ENDWHILE.
-
Short dump due to performance issue
Hi all,
I am facing a performance issue in my sandbox. Below is the content of the short dump.
Short text
Unable to fulfil request for 805418 bytes of memory space.
What happened?
Each transaction requires some main memory space to process
application data. If the operating system cannot provide any more
space, the transaction is terminated.
What can you do?
Try to find out (e.g. by targetted data selection) whether the
transaction will run with less main memory.
If there is a temporary bottleneck, execute the transaction again.
If the error persists, ask your system administrator to check the
following profile parameters:
o ztta/roll_area (1.000.000 - 15.000.000)
Classic roll area per user and internal mode
usual amount of roll area per user and internal mode
o ztta/roll_extension (10.000.000 - 500.000.000)
Amount of memory per user in extended memory (EM)
o abap/heap_area_total (100.000.000 - 1.500.000.000)
Amount of memory (malloc) for all users of an application
server. If several background processes are running on
one server, temporary bottlenecks may occur.
Please help me to resolve this issue.
Regards,
Kalyani
Edited by: kalyani usa on Jan 9, 2008 9:04 PM

Hi Rob Burbank,
I am pasting the transaction I found in the dump
Transaction......... "SESSION_MANAGER "
Transactions ID..... "4783E5B027A73C1EE10000000A200A17"
Program............. "SAPMSYST"
Screen.............. "SAPMSYST 0500"
Screen line......... 16
Also I am pasting the ST02 statistics:
Nametab (NTAB)
Buffer            HitRatio %  Alloc[KB]  Freesp[KB]  %Free   DirSize  FreeDirEnt  %FreeDir   Swaps   DB Acc
Table definition       99,22      6.799       3.591   62,97   20.000      12.591     62,96       0    8.761
Field definition       99,06     31.563         345    1,15   20.000      13.305     66,53     244    7.420
Short NTAB             99,22      3.625       2.590   86,33    5.000       3.586     71,72       0    1.414
Initial records        52,50      6.625       3.408   56,80    5.000         249      4,98     817    5.568

program                99,58    300.000       1.212    0,42   75.000      67.561     90,08   7.939   46.575
CUA                    99,08      3.000         211    8,84    1.500       1.375     91,67  23.050      846
Screen                 99,46      4.297       1.842   45,00    2.000       1.816     90,80      81      963
Calendar              100,00        488         401   85,14      200         111     55,50       0       89
OTR                   100,00      4.096       3.281  100,00    2.000       2.000    100,00       0

Tables
Generic Key            99,69     29.297       2.739    9,87    5.000         177      3,54      57   56.694
Single record          89,24     10.000          63    0,64      500         468     93,60     241  227.134

Export/import          76,46     50.000      40.980   83,32    2.000       2.676
Exp./ Imp. SHM         97,82      4.096       3.094   94,27    2.000       1.999     99,95       0

SAP Memory       Curr.Use %  CurUse[KB]  MaxUse[KB]  In Mem[KB]  OnDisk[KB]  SAPCurCach  HitRatio %
Roll area              0,16         432      18.672     131.072     131.072  IDs              98,11
Page area              0,19         496     187.616      65.536     196.608  Statement        95,00
Extended memory        9,89     151.552   1.531.904   1.531.904           0                    0,00
Heap memory               0           0   1.953.045           0                                0,00
Regards,
Kalyani -
Short Dump in table maintenance events
Dear Experts,
I am getting a short dump when I am trying to maintain my Z table. The error is: 'The statement "MOVE src TO dst" requires that the operands "dst" and "src" are convertible. Since this statement is in a Unicode program, the special conversion rules for Unicode programs apply. In this case, these rules were violated.'
The actual short dump appears at "MOVE total TO fs_zsdslsbud".
The ZSDSLSBUD table has a NUMC 4 field, but in the TOTAL internal table it appears as 0000####. I guess this is the error. I tried to replicate the same code where there are only CHAR-type fields, and I don't have any issues with the code below.
My Z table also has a currency field. How do I handle the Unicode conversion in this case? Can I use TRY and CATCH statements? A piece of code would be appreciated.
Thanks in advance.
FORM f_trigger_before_save.
DATA: wf_index TYPE sy-tabix. "Index to note the lines found
DATA: BEGIN OF fs_zsdslsbud.
INCLUDE STRUCTURE zsdslsbud.
INCLUDE STRUCTURE vimtbflags.
DATA: END OF fs_zsdslsbud.
LOOP AT total.
IF <action> = neuer_eintrag .
READ TABLE extract WITH KEY <vim_xtotal_key>.
IF sy-subrc EQ 0.
wf_index = sy-tabix.
ELSE.
CLEAR wf_index.
ENDIF.
* (make desired changes to the line total)
MOVE total TO fs_zsdslsbud.
fs_zsdslsbud-ernam = sy-uname.
fs_zsdslsbud-erdat = sy-datum.
fs_zsdslsbud-erzet = sy-uzeit.
total = fs_zsdslsbud.
MODIFY total.
CHECK wf_index GT 0.
extract = total.
MODIFY extract INDEX wf_index.
ELSEIF <action> = aendern.
READ TABLE extract WITH KEY <vim_xtotal_key>.
IF sy-subrc EQ 0.
wf_index = sy-tabix.
ELSE.
CLEAR wf_index.
ENDIF.
* (make desired changes to the line total)
MOVE total TO fs_zsdslsbud.
fs_zsdslsbud-aenam = sy-uname.
fs_zsdslsbud-aedat = sy-datum.
fs_zsdslsbud-aezet = sy-uzeit.
total = fs_zsdslsbud.
MODIFY total.
CHECK wf_index GT 0.
extract = total.
MODIFY extract INDEX wf_index.
ENDIF.
ENDLOOP.
sy-subrc = 0.
ENDFORM. "f_trigger_before_save
Thanks,
Rajesh.

Hi,
Use this FM instead of the MOVE statement. In Unicode environments the left and right operands should be of the same data type; to overcome this you can use the FM below:
CALL FUNCTION 'HR_99S_COPY_STRUC1_STRUC2'
EXPORTING
P_STRUCT1 = total
IMPORTING
P_STRUCT2 = fs_zsdslsbud. -
Internal table Memory Issue Exception TSV_TNEW_PAGE_ALLOC_FAILED
Hi experts,
I am working on a conversion program. This program deals with 4 input files.
Each of these files has more than 50,000 records. I am reading the corresponding application server files to fill the internal tables related to these files.
The files are being read properly and the internal tables are being filled.
However, when I try to assign any of these 4 internal tables to other temporary internal tables in the program (a requirement), I get the dump TSV_TNEW_PAGE_ALLOC_FAILED.
The dump is related to a memory issue.
I think the memory available to the program at this point is not sufficient for the table assignment.
Please suggest any alternatives where I can save memory.
Changing the basis settings is not an option.
Regards,
Abhishek Kokate

Hi Kiran,
I don't agree with you; I agree with Hermann.
When writing the file, restrict each pass to at most 5,000 to 10,000 records, and don't keep that much data in the internal table during processing.
Refresh the internal table after every use, and declare tables only where necessary.
But do try to avoid the copy cost.
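The "avoid the copy cost" advice can be sketched as follows (the record type and table name are placeholders, not from the original posts): instead of copying a large internal table into a temporary one, process it in place through a field symbol, so a second copy of the 50,000+ rows is never built.

```abap
* Sketch: work on the rows in place instead of duplicating the table.
TYPES: BEGIN OF ty_rec,
         line TYPE c LENGTH 200,
       END OF ty_rec.
DATA itab TYPE STANDARD TABLE OF ty_rec.
FIELD-SYMBOLS <fs_row> TYPE ty_rec.

LOOP AT itab ASSIGNING <fs_row>.
  " modify or evaluate <fs_row> directly; no copy of the row is created
ENDLOOP.
```

If a separate work set really is needed, move only a bounded packet of rows at a time and release each packet with FREE before building the next one.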
Rgds
Ravi Lanjewar -
I have an internal table i_final in program 1, and this internal table's data needs to be input to another program. How do I pass the i_final data to the other program?
this is the code in program 1:
ENDIF.
* Appending work area to internal table.
APPEND wa_final TO i_final.
ENDLOOP.
EXPORT i_final to memory id 'zfinal'.
this is the code in program 2:
TYPES: BEGIN OF t_final,
bukrs TYPE bkpf-bukrs,
belnr TYPE bkpf-belnr,
blart TYPE bkpf-blart,
xblnr TYPE bkpf-xblnr,
ebeln TYPE ekbe-ebeln,
ebelp TYPE ekbe-ebelp,
belnr1 TYPE ekbe-belnr,
xblnr1 TYPE ekbe-xblnr,
ebeln1 TYPE ekko-ebeln,
exnum TYPE ekko-exnum,
lands TYPE ekko-lands,
stceg_l TYPE ekko-stceg_l,
END OF t_final.
DATA: i_final TYPE STANDARD TABLE OF t_final WITH HEADER LINE.
DATA: wa_final TYPE t_final.
IMPORT i_final from memory ID 'zfinal'.
write: i_final.
It's giving a dump.
-
Hi All,
I have a logical issue related to internal table manipulation.
I have one internal table :
I_DAT - This is related to Loading/Unloading of Goods.
for example with 3 fields
VSTEL, KUNNA, KMMANG.
Now suppose my data looks like this after sorting:
VSTEL   KUNNA   KMMANG
100     -       -
200     -       -
300     -       -
400     -       -
-       500     X
-       600     X
-       700     X
-       800     X
Here 100,200,300,400 are Loading points.
ANd 500,600,700,800 are unloading points.
Now what I want is: for the loading & unloading points I need to pick up the address and print one after the other.
But how they need to be printed is:
FOR INITIAL LOADING OF
ADDRESS- For 100
FIRST STOP: FOR LOADING OF
ADDRESS- For 200
SECOND STOP: FOR LOADING OF
ADDRESS- For 300
Etc .....
Then
FOR UNLOADING OF:
ADDRESS- For 400
FIRST STOP: FOR UNLOADING OF
etc.
FINAL STOP: FOR FINAL UNLOADING OF
We might get any number of records.
Printing the address is not a problem, but above every address we need to print FIRST STOP, SECOND STOP, etc.
For this I need the logic.
Can anybody give the solution?
Thanks in advance.
Thanks & Regards,
Prasad.

Try this. I think you want output like this:
DATA: BEGIN OF LINE,
CARRID TYPE SBOOK-CARRID,
CONNID TYPE SBOOK-CONNID,
FLDATE TYPE SBOOK-FLDATE,
CUSTTYPE TYPE SBOOK-CUSTTYPE,
CLASS TYPE SBOOK-CLASS,
BOOKID TYPE SBOOK-BOOKID,
END OF LINE.
DATA ITAB LIKE SORTED TABLE OF LINE WITH UNIQUE KEY TABLE LINE.
SELECT CARRID CONNID FLDATE CUSTTYPE CLASS BOOKID
FROM SBOOK INTO CORRESPONDING FIELDS OF TABLE ITAB.
LOOP AT ITAB INTO LINE.
AT FIRST.
WRITE / 'List of Bookings'.
ULINE.
ENDAT.
AT NEW CARRID.
WRITE: / 'Carrid:', LINE-CARRID.
ENDAT.
AT NEW CONNID.
WRITE: / 'Connid:', LINE-CONNID.
ENDAT.
AT NEW FLDATE.
WRITE: / 'Fldate:', LINE-FLDATE.
ENDAT.
AT NEW CUSTTYPE.
WRITE: / 'Custtype:', LINE-CUSTTYPE.
ENDAT.
WRITE: / LINE-BOOKID, LINE-CLASS.
AT END OF CLASS.
ULINE.
ENDAT.
ENDLOOP.
This is also helpful:
LOOP AT <itab>.
AT FIRST. ... ENDAT.
AT NEW <f1>. ...... ENDAT.
AT NEW <f2 >. ...... ENDAT.
<single line processing>
AT END OF <f2>. ... ENDAT.
AT END OF <f1>. ... ENDAT.
AT LAST. .... ENDAT.
ENDLOOP.
Regards
Abhishek -
Abap dump: internal table too small, condense non-character like fields
Hi there,
I created a dynamic internal table by:
CALL FUNCTION 'LVC_FIELDCATALOG_MERGE'
CALL METHOD CL_ALV_TABLE_CREATE=>CREATE_DYNAMIC_TABLE
ASSIGN IT_EP_TABLE->* TO <IT_DBTABLE>.
SELECT * FROM (P_TABLE_NAME)
  INTO TABLE <IT_DBTABLE>.
It gave the error "internal table too small" (SAPSQL_SELECT_TAB_TOO_SMALL), so I removed the error by using INTO CORRESPONDING FIELDS OF TABLE <IT_DBTABLE>.
But now it is not creating a TXT file from the internal table records via the function module, because the CONDENSE statement cannot be executed and the dump OBJECTS_NOT_CHARLIKE occurs: 'The current statement only supports character-type data objects', 'In statement "CONDENSE" the argument "<F_SOURCE>" can only take a character-type data object'.
It only happens for table AFPO. I think this table has fields (e.g. currency) which cannot be treated as characters. Is the move-corresponding approach OK? Or how else can I create a text file with these records?

Try creating a dynamic table in the image of the source table and then move-corresponding to the original one. Something like this:
DATA dref    TYPE REF TO data.
DATA dref2   TYPE REF TO data.
DATA tabdref TYPE REF TO data.
FIELD-SYMBOLS <dyn_struc> TYPE any.
FIELD-SYMBOLS <struc>     TYPE any.
FIELD-SYMBOLS <tab>       TYPE table.

* Create a table of your choice
CREATE DATA tabdref TYPE TABLE OF (P_TABLE_NAME).
ASSIGN tabdref->* TO <tab>.

* Create a line variable for the above table
CREATE DATA dref LIKE LINE OF <tab>.
ASSIGN dref->* TO <struc>.

* Create a line variable for the dynamically created table
* (a second reference is used so <struc> stays valid)
CREATE DATA dref2 LIKE LINE OF <IT_DBTABLE>.
ASSIGN dref2->* TO <dyn_struc>.

SELECT * FROM (P_TABLE_NAME)
  INTO TABLE <tab>.

* MOVE-CORRESPONDING works on the line, not on the table itself
LOOP AT <tab> ASSIGNING <struc>.
  MOVE-CORRESPONDING <struc> TO <dyn_struc>.
  APPEND <dyn_struc> TO <IT_DBTABLE>.
ENDLOOP.
-
Short dump in table maintenance generator
hi all,
In SM30 I entered my table name and clicked on Maintain, and it goes to a short dump.
What may be the reason?
regards,
siri.

hi,
I think that when regenerating the table maintenance after a deletion, you have to exit completely (back to SE11) and then open the table maintenance generator again and re-create it. You may not have done this, and the dump is the result, as the changes were not reflected.
Regards,
Santosh -
Dynamic internal table column issue
Hi
I have an ALV report with a dynamic internal table. After I build the internal table and the field catalog I have a problem: when the grid is displayed, one of the column values appears in the next column. I populated COL_POS in the field catalog, and in debug mode the data is populated correctly for the respective columns in the field catalog and the dynamic internal table. But when it is displayed I have this problem.
Any inputs on this?

Hi Moorthy,
Did you perform an ALV consistency check?
Check the below given links as well.
The Consistency Check - ALV Grid Control (BC-SRV-ALV) - SAP Library
SAP ALV Consistency Check
Regards,
Philip. -
Hi,
I have got 3 tables:
it_extnout (old)
Fld1  Fld2  Fld3  Fld4  Fld5  Fld6  Fld7
ABC   123   70    JKL   5.00  A     Q

it_extnin (new)
Fld1  Fld2  Fld3  Fld4  Fld5  Fld6  Fld7
ABC   123   99    LMN

it_extnin_x (update flags for new)
Fld1  Fld2  Fld3  Fld4  Fld5  Fld6  Fld7
ABC   123   X     X           X

So now... my requirement is as follows:
If an update flag is set for any field in it_extnin_x, then the new value should get updated in the it_extnout table.
Here fld3, fld4 and fld6 are set for update, so finally my it_extnout should look like:
Fld1  Fld2  Fld3  Fld4  Fld5  Fld6  Fld7
ABC   123   99    LMN   5.00        Q
Also, any field from fld3 to fld7 could be marked for an update, so it is dynamic.
I do not want to write a READ statement for each column, like:
READ TABLE itab WITH KEY fld3 = 'X' (if sy-subrc is 0, then do some processing)
or READ TABLE itab WITH KEY fld4 = 'X', and so on.
What is the optimum way to achieve the same?
Any useful input is deeply appreciated!
Thanks
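One hedged way to avoid a READ statement per column (a sketch only; the component layout and the 'X'-flag convention are taken from the example above, and the work-area names and field lengths are assumptions) is to walk the components dynamically with ASSIGN COMPONENT:

```abap
* Sketch: for components 3..7, copy the value from the new row to the
* old row whenever the corresponding update flag is 'X'.
TYPES: BEGIN OF ty_ext,
         fld1 TYPE c LENGTH 10,
         fld2 TYPE c LENGTH 10,
         fld3 TYPE c LENGTH 10,
         fld4 TYPE c LENGTH 10,
         fld5 TYPE c LENGTH 10,
         fld6 TYPE c LENGTH 10,
         fld7 TYPE c LENGTH 10,
       END OF ty_ext.
DATA: wa_out TYPE ty_ext,   " row from it_extnout (old)
      wa_new TYPE ty_ext,   " matching row from it_extnin (new)
      wa_flg TYPE ty_ext.   " matching row from it_extnin_x (flags)
FIELD-SYMBOLS: <flag> TYPE any,
               <new>  TYPE any,
               <old>  TYPE any.
DATA l_idx TYPE i VALUE 3.

DO 5 TIMES.                 " components fld3 .. fld7
  ASSIGN COMPONENT l_idx OF STRUCTURE wa_flg TO <flag>.
  IF sy-subrc = 0 AND <flag> = 'X'.
    ASSIGN COMPONENT l_idx OF STRUCTURE wa_new TO <new>.
    ASSIGN COMPONENT l_idx OF STRUCTURE wa_out TO <old>.
    <old> = <new>.
  ENDIF.
  l_idx = l_idx + 1.
ENDDO.
```

This handles one matched row pair; in the full solution you would first read the matching rows by the key fields Fld1/Fld2.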
Follow the rules of engagement, Don't use multiple user accounts for posting the question
If you repeat this your user will be locked and deleted
Edited by: Vijay Babu Dudla on Apr 20, 2011 10:19 AM
-
Short dump when extending internal table memory
Hi All,
I have an internal table with 10 million records. While appending records to this internal table I am getting the dump "No storage space available for extending the internal table." I declared the internal table with OCCURS 0. How can I avoid this dump?

Hi,
The problem seems to be an overflow of the internal table memory allocation, whose limit is set through profile parameters by the Basis team. For example, if the available memory is restricted to, say, 1024 KB and we try to push more data than this, it will throw such an error.
Please try to split the data into several smaller internal tables. Also try to restrict the number of records selected, if they are not all really required.
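Splitting the selection as suggested above is commonly done with PACKAGE SIZE, so that only one packet sits in memory at a time. The table and field names below (MARA, MATNR, MTART) are placeholders for illustration, not from the original post:

```abap
* Sketch: fetch and process the data in packets of 10.000 rows.
DATA lt_packet TYPE STANDARD TABLE OF mara.

SELECT matnr mtart
  FROM mara
  INTO CORRESPONDING FIELDS OF TABLE lt_packet
  PACKAGE SIZE 10000.

  " process LT_PACKET here (e.g. write it out or aggregate it),
  " then release it before the next packet arrives
  CLEAR lt_packet.
ENDSELECT.
```

Each pass of the SELECT loop replaces the contents of LT_PACKET, so the memory footprint stays at one packet instead of 10 million rows.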
Regards,
Ferry Lianto -
Internal table data 1E2 automatically converts to scientific format
Dear all,
I have been searching the forums for a solution for months and tried all possible methods, but still have no way to solve the problem above. I found a way to solve it partially for us, and it may be very helpful for others who meet a case similar to mine, so I am posting it here.
My problem is that when I export my internal table data to Excel, a cell containing 1E2 automatically becomes 1.00E02, and 1E8 becomes 1.00E08; we need them to stay 1E2 and 1E8 in Excel.
You can recreate my problem as follows:
1. Input 1E2 into Microsoft Excel, then press Enter; it will automatically change into scientific format, which we do not want.
2. In any SAP system, open any table that has a CHAR (>3) field. Add entries in that field of the form "any amount (<15) of digits 1 to 9, then 'E', then one or two digits 1 to 9", such as 123E2 or 1234E12. Then save this table's data to a local file as a spreadsheet, or use any FM to download it to an Excel file. When you open this file in Excel, cells of the above form display as scientific notation; but if you put three or more digits after the "E", such as 123E123, they display correctly.
what I have done:
I searched in SCN for similar thread:
Export to Excel 2007 - item number problem
Exceding the limit of numbers in Excel at target side
Excel download cell format problem
Formating as Text in excel through SAP
Converting of amount field into excel file through GUI DOWNLOAD
Data downloaded to excel gets converted to exponential format.
Problem with Excel download and scientific number
Re: Issue in displaying numbers in Excel?
CSV Flat File Data Problem (Number converting to Scientific Notation)
I tested accordingly, but none of these works in our case, because the ultimate receiver of the email attachment will be an external third party; we cannot ask them to change anything in their Excel.
I searched Microsoft's help about Excel, http://support.microsoft.com/kb/214233, and it says this "Automatic Number Formatting" is normal Excel behaviour with no way to turn it off; the work-around Microsoft provides is not suitable for our case.
We tested CL_IXML recently according to the weblog http://wiki.sdn.sap.com/wiki/display/Snippets/FormattedExcelasEmailAttachment and it successfully controlled the format, so this could be a solution for others whose internal table is small. But our 2 MB internal table becomes 6 MB when converted to an XML file attachment, which cannot be received by our end user's mailbox; it is too big.
So please advise your ideas.
Many thanks in advance!
Peter Ding
Thank you very much for your time!

Hi,
You can achieve this by describing the spreadsheet in XML with the help of the DOM classes.
The later releases of Excel can read and save spreadsheets as XML, providing your release supports this you can achieve it.
Check out the following Wiki
[Excel - XML|https://www.sdn.sap.com/irj/sdn/wiki?path=/display/abap/exporting%2bdata%2bto%2bexcel%2b-%2bxml%2bto%2bthe%2brescue]
Regards,
Darren -
How to delete the short dump list in st22
Hi all,
I want to clear the runtime error list in ST22.
How do I clear the list of short dumps in ST22?
Is there any T-code? Kindly suggest.
Thanks in advance,
Sundar.c

Dear Sundar,
By default, short dumps are stored in the system for 14 days. The transaction for managing short dumps is ST22. You can delete short dumps in accordance with a time specification using the Reorganize function, which you can call by choosing
Goto → Reorganize in ST22. You can save a short dump without a time limit using the Keep function, which you can choose under Short Dump → Keep/Release in ST22.
The short dumps are stored in table SNAP. If there are too many entries in table SNAP, problems can occur during the reorganization (see SAP Note 11838). There are different options for deleting short dumps from table SNAP:
1. Call transaction ST22. Choose Goto → Reorganize. You can now specify that all short dumps older than n days (default 7) are to be deleted. If a very large number of records are deleted at once during the reorganization,
the ORACLE error ORA-1562, "failed to extend rollback segment ...", can occur. In this case, see SAP Note 6328.
2. Dropping and recreating table SNAP with the database utility (transaction SE14): you can use this transaction to drop and recreate the table SNAP. This means that all short dumps are deleted.
3. The reorganization program RSSNAPDL deletes old short dumps in pieces (to avoid database problems) from table SNAP. It deletes short dumps that are not flagged for retention and are older than seven days old. Schedule this program at a time of low workload, as large changes in the database are to be expected. The program RSNAPJOB performs a standard scheduling: it starts the program RSSNAPDL every day at 1:00 a.m.
4. Table SNAP is also automatically reorganized. At every short dump that occurs in dialog (the dump is displayed immediately after it is created), a maximum of 20 short dumps that are older than 7 days are deleted from SNAP. This reorganization should be sufficient in normal production operation.
Hope this information resolves your error and is also useful in the future.
Thanks
Kishore -
Populating dynamic internal table
Hi All,
I've created a dynamic internal table. The issue is that the data to be entered into it comes from 2 different tables, so...
is there any way to read the internal table's field names, or any other way to populate data in it?

hi
check this link
http://www.****************/Tutorials/ABAP/DynamicInternaltable/DynamicInternalTable.htm
thanks
sitaram