Table Memory

Can anyone tell me how to calculate the internal database memory required per record of a given table? A rough estimate would do. My thinking is that adding up the memory required by each field gives the memory required by a single record, and going further, the memory of each field can be determined by its data type. But how do I find out how many bytes of memory a given data type takes? Please help. I will assign points to helpful posts. Thanks!

Hi Sameer,
Consider this sample code: it gives the approximate length of a single record and the table size based on the number of records.
Just copy the code and execute it; enter any table name on the selection screen and it will return the details.
REPORT zes_sample.
*--Give any Table Name
PARAMETERS : tabname TYPE dd02l-tabname.
DATA:  prc_line_len TYPE i VALUE 0,
       lv_tablename TYPE string,
       lv_dref TYPE REF TO data,
       tlen   TYPE i VALUE 0,
       rsize(50),
       tsize(50).
lv_tablename = tabname.
*--Dynamically creating the Internal table
CREATE DATA lv_dref TYPE TABLE OF (lv_tablename).
FIELD-SYMBOLS: <fs> TYPE STANDARD TABLE,
               <f> TYPE ANY.
*--Assigning the dynamically created Internal table to Field Symbol
ASSIGN lv_dref->* TO <fs>.
*--Selecting a single row for calculating the record length
SELECT *
  FROM (tabname)
  INTO TABLE <fs> UP TO 1 ROWS.
*--Calculating the record length
READ TABLE <fs> ASSIGNING <f> INDEX 1.
IF sy-subrc <> 0.
  WRITE: / 'Table', tabname, 'contains no records.'.
  EXIT.  "nothing to measure
ENDIF.
DESCRIBE FIELD <f> LENGTH prc_line_len IN BYTE MODE.
*--Getting the No of records in the Table
SELECT COUNT( * ) FROM (tabname).
*--Calculating the Approx Table size
IF sy-subrc EQ 0.
  tlen =   prc_line_len * sy-dbcnt.
ENDIF.
tsize = tlen.
rsize =   prc_line_len .
WRITE : / 'Single Record Size : ' , rsize, 'Bytes'.
SKIP 2.
WRITE : / 'Approximate size of table ' , tabname, 'with', sy-dbcnt, 'records is :',
       /  tsize, 'Bytes'.
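If you also want the byte count of each individual field, as asked in the original question, here is a minimal sketch that could be appended to the report above. It reuses the structure <f> that is already assigned and reports only the in-memory byte length per component; getting the field names as well would need RTTS or DD03L, which this sketch deliberately skips.
*--Optional follow-up: byte length of every single field of the record
DATA: comp_len TYPE i,
      comp_idx TYPE i VALUE 0.
FIELD-SYMBOLS: <comp> TYPE any.
DO.
  comp_idx = comp_idx + 1.
  ASSIGN COMPONENT comp_idx OF STRUCTURE <f> TO <comp>.
  IF sy-subrc <> 0.
    EXIT.  "no more components
  ENDIF.
  DESCRIBE FIELD <comp> LENGTH comp_len IN BYTE MODE.
  WRITE: / 'Field', comp_idx, 'occupies', comp_len, 'Bytes'.
ENDDO.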
Regards,
AS

Similar Messages

  • Increase internal table memory

    Dear Techies,
    How do I increase the size of an internal table? While executing, my report ends in a DUMP.
    I checked with the Basis consultant and the heap memory is more than sufficient, but my report still does not finish executing and keeps showing the dump.
    Will anybody please tell me whether it is possible to increase the size of an internal table? As of now I am using "OCCURS 0"; would using "OCCURS 100" help in any way?
    Please suggest.
    Regards
    Santosh

    Hi Shree,
    The dump shows:
    Line width: 1700
    Number of lines: 106904
    Allocated lines: 106904
    New no. of requested lines: 8 (in 1 blocks)
    The full dump text is given below:
    You attempted to extend an internal table, but the required space was
    not available.
    The internal table "\PROGRAM=ZFIGLN0002\DATA=IT_LINEITEM[]" could not be    
    further extended. To enable                                                
    error handling, the table had to be delete before this log was written.     
    As a result, the table is displayed further down or, if you branch to       
    the ABAP Debugger, with 0 rows.                                                                               
    At the time of the termination, the following data was determined for       
    the relevant internal table:                                                                               
    Memory location: "Session memory"                                           
    Row width: 197                                                              
    Number of rows: 43248                                                       
    Allocated rows: 43248                                                       
    Newly requested rows: 64 (in 1 blocks)                                      
    Last error logged in SAP kernel                                                                               
    Component............ "EM"                                                  
    Place................ "SAP-Server TCPRDSAP_TCP_00 on host TCPRDSAP (wp 4)"  
    Version.............. 37                                                    
    Error code........... 7                                                     
    Error text........... "Warning: EM-Memory exhausted: Workprocess gets PRIV "
    Description.......... " "                                                   
    System call.......... " "                                                   
    Module............... "emxx.c"                                              
    Line................. 1881                                                                               
    The error reported by the operating system is:                              
    Error number..... " " 
    Error text....... " " 
    The amount of storage space (in bytes) filled at termination time was:                                                                               
    Roll area...................... 1508832                                    
    Extended memory (EM)........... 383623632                                  
    Assigned memory (HEAP)......... 369003312                                  
    Short area..................... " "                                        
    Paging area.................... 24576                                      
    Maximum address space.......... " "                                                                               
    If the error occurs in a non-modified SAP program, you may be able to
    find an interim solution in an SAP Note.                                   
    If you have access to SAP Notes, carry out a search with the following     
    keywords:                                                                               
    "TSV_TNEW_PAGE_ALLOC_FAILED" " "                                           
    "ZFIGLN0002" or "ZFIGLN0002"                                               
    "POPULATE_SUMMARY_INFO"                                                                               
    If you cannot solve the problem yourself and want to send an error         
    notification to SAP, include the following information:                                                                               
    1. The description of the current problem (short dump)                                                                               
    To save the description, choose "System->List->Save->Local File         
    (Unconverted)".                                                                               
    2. Corresponding system log                                                                               
    Display the system log by calling transaction SM21.                     
    Restrict the time interval to 10 minutes before and five minutes after the short dump.
    The above is the error shown in the dump analysis.
    I checked the heap memory parameter with the Basis consultant and it is more than sufficient compared to this.

  • Internal table Memory Issue Exception TSV_TNEW_PAGE_ALLOC_FAILED

    Hi experts,
    I am working on a conversion program. This program deals with 4 input files.
    Each of these files has more than 50,000 records. I am reading the corresponding application server files to fill
    the internal tables related to these files.
    The files are being read properly and the internal tables are being filled.
    However, when I try to assign any of these 4 internal tables to other temporary internal tables in the program (a requirement),
    I get the dump TSV_TNEW_PAGE_ALLOC_FAILED.
    The dump is related to a memory issue.
    I think the memory available to the program at this point is not sufficient for the table assignment.
    Please suggest any alternatives that would let me save memory.
    Changing the Basis settings is not an option.
    Regards,
    Abhishek Kokate

    Hi Kiran,
    I do not agree with you; I agree with Hermann.
    When processing the files, restrict each batch to a maximum of 5,000 to 10,000 records and process it, so that you never hold that much data in the internal table at once.
    Refresh the internal table after every batch has been used, and declare tables only where necessary; a rough sketch follows below.
    Beyond that, you can try to avoid the cost of copying the tables.
    Rgds
    Ravi Lanjewar
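    To make the advice above concrete, here is a rough sketch of reading an application server file in batches of 10,000 records. The report name, file path and processing routine are placeholders, not part of the original posts.
    REPORT z_read_in_batches.
    DATA: lv_file  TYPE string VALUE '/tmp/input.dat',  "placeholder path
          lv_line  TYPE string,
          lt_batch TYPE STANDARD TABLE OF string,
          lv_lines TYPE i.
    OPEN DATASET lv_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    IF sy-subrc = 0.
      DO.
        READ DATASET lv_file INTO lv_line.
        IF sy-subrc <> 0.
          EXIT.  "end of file reached
        ENDIF.
        APPEND lv_line TO lt_batch.
        DESCRIBE TABLE lt_batch LINES lv_lines.
        IF lv_lines >= 10000.
          PERFORM process_batch TABLES lt_batch.  "work on the current batch
          FREE lt_batch.  "give the memory back before the next batch
        ENDIF.
      ENDDO.
      IF NOT lt_batch[] IS INITIAL.
        PERFORM process_batch TABLES lt_batch.  "handle the last partial batch
      ENDIF.
      CLOSE DATASET lv_file.
    ENDIF.
    FORM process_batch TABLES pt_batch.
      " placeholder: convert, post or write the current block here
    ENDFORM.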

  • Internal Table Memory Allocation

    Hello all,
    I understand the difference between an internal table declared with OCCURS 0 and an internal table declared with a type declaration.
    Correct me if I am wrong: an OCCURS 0 declaration occupies 8 KB of memory and the header line 256 bytes.
    But what I could not work out is this:
    Where can I view the runtime memory usage of such an internal table? Do I have to check it in some transaction?
    If so, which transaction should I look at? Can I view it in debugging mode? I tried GOTO -> STATUS DISPLAY -> MEMORY USE and I have even tried SETTINGS -> MEMORY MONITORING -> MEMORY DISPLAY ON. Nothing worked.
    When I go to GOTO -> STATUS DISPLAY -> MEMORY USE in debugging,
    the memory allocated seems to be the same for an internal table with OCCURS 0 and an internal table with a type declaration.
    I have searched a lot about this on SDN but could not come to a conclusion.
    Unfortunately, I don't have authorization here for DBG_MEMORY_DIFFTOOL or S_MEMORY_INSPECTOR.
    Waiting for your replies.

    Hi Jagannathan,
    You can view this in the Debugger from ECC 6.0 onwards.
    To find out how much memory internal tables occupy, choose Goto --> Display Condition --> Memory Usage.
    Choose Change Settings to display a window in which you can choose the Internal Tables button.
    Hope this assists you in your quest.
    Regards,
    -Syed.

  • Where can I check the internal table memory that the system has?

    Hi
    I have some jobs that cancel with this error: No storage space available for extending an internal table.
    Where can I check the amount of memory the system has, and can I resolve this problem by adding a COMMIT WORK or something like that?
    The dump occurs when I execute this import:
    IMPORT TIMES DHIST3 SUM3 FROM DATABASE MONI(DB) ID MONIKEY.
    Thanks for the help!

    Hi,
    Go to:
    SM04 -> Goto -> Memory
    A.

  • Internal table memory

    Hi,
    Can anyone say how to identify how much memory is associated with an internal table that we are using in our program?

    Hi Prince,
    With OCCURS 0, the memory allocation is done by the system. With OCCURS 100, memory for 100 records is allocated initially; if the number of records exceeds that, the system allocates more memory.
    Each additional allocation is known as an extent, and its size depends on settings maintained by the Basis administrator.
    You can use the OCCURS <n> addition to reserve memory space for <n> table lines when you declare the table object.
    When this initial area is full, the system makes twice as much extra space available, up to a limit of 8 KB. Further memory areas of 12 KB each are then allocated.
    You can usually leave it to the system to work out the initial memory requirement. The first time you fill the table, little memory is used. The space occupied, depending on the line width, is 16 <= <n> <= 100.
    It only makes sense to specify a concrete value of <n> if you can specify a precise number of table entries when you create the table and need to allocate exactly that amount of memory (exception: appending table lines to ranked lists). This can be particularly important for deep-structured internal tables where the inner table has only a few entries (fewer than 5, for example).
    To avoid excessive requests for memory, large values of <n> are treated as follows: the largest possible value of <n> is 8 KB divided by the length of the line. If you specify a larger value of <n>, the system calculates a new value so that <n> times the line width is around 12 KB.
    OCCURS 0 is better than OCCURS 100 for the following reason:
    OCCURS 100: if you use OCCURS 100, memory is allocated for 100 records up front; if at some point you have only 20 records, the memory space for the remaining 80 records is wasted.
    REAL-LIFE EXAMPLE: A BUS
    A bus has 57 seats in total; sometimes only 20 people turn up, but the bus still has to go, so the remaining seats are wasted.
    OCCURS 0: the memory is not allocated in advance, which is the optimistic approach; if you have 20 records, memory is taken for only those 20 records.
    1. When can we use OCCURS 0?
    When declaring internal tables whose expected number of records we are not sure of, or are not bothered about.
    It is normally better to use OCCURS 0 rather than OCCURS 10, 100, 90, etc.
    2. If we use OCCURS 0, how will the memory be allocated?
    a. Whether we use 0, 10, or any other figure, there is no restriction on the expansion of the internal table; the 0, 10, etc. is just an indicative figure for the expected number of records.
    b. When using 0, the internal table is expanded in blocks of 8 KB.
    A minimal declaration sketch is given below.
    Reward if helpful.
    Thanks
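    For illustration only, here is a minimal declaration sketch of the styles discussed above; the structure and the figures are made up, and the OCCURS value only sets the initial allocation, so both tables can still grow until memory runs out.
    TYPES: BEGIN OF ty_row,
             matnr(18) TYPE c,
             menge     TYPE p DECIMALS 3,
           END OF ty_row.
    DATA: itab_occ0   TYPE ty_row OCCURS 0,    "initial size chosen by the system
          itab_occ100 TYPE ty_row OCCURS 100.  "reserves space for about 100 rows up front
    * Newer equivalent without the obsolete OCCURS addition:
    DATA: itab_new TYPE STANDARD TABLE OF ty_row INITIAL SIZE 100.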

  • Table memory is huge even though the number of records is very low

    Dear team,
    When I check one table, it has the following number of records:
    select count(*) from table1
    4980092
    but the space allocated for this table is:
    select sum(bytes) from user_segments where segment_name = 'table1';
    SUM(BYTES)
    2361712640
    I'm surprised by this size.
    When I looked for the cause, I found that if we delete records the memory does not get freed, so how can I free up the memory for this table?
    Deletes happen on this table frequently, on a daily basis.

    user11081688 wrote:
    Dear team,
    When I'm checking one table it has below number of records.
    select count(*) from table1
    4980092
    but the space allocated for this table
    select sum(bytes) from user_segments where segment_name = 'table1';
    SUM(BYTES)
    2361712640
    I'm surprised with this size.
    why?
    When I looked for the cause, I found that if we delete records the memory does not get freed,
    correct
    then how can I free up the memory for this table?
    there is no need to do so, since the space will be reused by new rows.
    Deletes happen on this table frequently, on a daily basis.
    if DELETE occurs daily, why is the number of rows close to zero?
    how many rows get INSERTed daily?
    what is the average ROW LENGTH?
    SQL> select 2361712640/4980092 from dual;
    2361712640/4980092
            474.230725
    SQL

  • Short dump when extending internal table memory

    Hi All,
    I have an internal table with 10 million records. While appending records to this internal table I am getting the dump "No storage space available for extending the internal table." I declared the internal table with "OCCURS 0". How can I avoid this dump?

    Hi,
    The problem seems to be related to an overflow of the memory available for internal tables, the limit for which is set by the Basis people. For example, if the available size is restricted to, say, 1024 KB and we try to push more data than this into the table, it will throw such an error.
    Please try to split the data into several smaller internal tables and process it in blocks rather than all at once; a hedged sketch follows below. Also try to restrict the number of records selected if they are not really required.
    Regards,
    Ferry Lianto
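    One hedged way to process the data in smaller portions instead of one huge internal table is to let the database deliver it in blocks with PACKAGE SIZE. In this minimal sketch, VBAK and the block size of 10,000 are only examples.
    DATA: lt_chunk TYPE STANDARD TABLE OF vbak.
    * Each pass of the SELECT loop replaces lt_chunk with the next block of up
    * to 10,000 rows, so the whole result set is never in memory at once.
    SELECT * FROM vbak
             INTO TABLE lt_chunk
             PACKAGE SIZE 10000.
      " process the current block here (aggregate, write to a file, pass on via RFC, ...)
    ENDSELECT.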

  • How many new tables can we create in the available memory?

    Hi Friends,
    First of all, I am not a technical guy; I am an SD functional person.
    I would like to know whether there is any constraint on creating n number of Z-tables.
    Is the memory space that belongs to tables the same as the data storage memory,
    or are the two memories separate?
    How much space do we normally have?
    How much space does a normal table (3 columns, 500 rows) occupy?
    Regards
    Pardha Saradhi B

    Hi,
    There is no such limitation on the creation of Z-tables.
    Regarding memory, it depends on the server.
    There is an initial size defined for a table in the table attributes;
    if that size is reached, the system assigns the same amount of memory again for that particular table.
    For a normal table, the memory depends on its size category attribute in the technical settings of the table.
    Regards,
    Sujit

  • Memory problems in internal table

    Hi,
    I am facing an issue with internal table memory.
    I am selecting data from the database and putting it into an internal table, but the volume is so large that it gives me memory errors, as the data can run into millions of rows.
    So we have decided to cut down the data held in the internal table. We have defined a variable that will hold, say, 30,000 records.
    The idea is to take records from the database based on the value in that variable, so that I take only 30,000 records each time.
    The issue is that the first time it will take records 1 to 30,000 if I specify up to 30,000 rows in the SELECT query, but how will I then take records 30,001 to 60,000 the next time? In other words, how do I keep incrementing the range in the SELECT query?
    Thanks in advance.
    Amit Kurup

    Hi,
    I've never used these statements, but perhaps they are useful for you.
    See the help for OPEN / CLOSE / FETCH CURSOR:
    DATA: C     TYPE CURSOR,
          WA    TYPE <TABLE>,         "<TABLE> is a placeholder for your table type
          COUNT TYPE I VALUE 0.
    DATA: ITAB LIKE STANDARD TABLE OF <TABLE>.
    OPEN CURSOR C FOR
      SELECT * FROM <TABLE> WHERE .......
    DO.
      FETCH NEXT CURSOR C INTO WA.
      IF SY-SUBRC <> 0.
        CLOSE CURSOR C.
        EXIT.
      ENDIF.
      APPEND WA TO ITAB.
      COUNT = COUNT + 1.
      IF COUNT = 30000.
        " do something with the 30,000 rows collected so far
        COUNT = 0.
        REFRESH ITAB.
      ENDIF.
    ENDDO.
    Max
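    A possible refinement of the cursor sketch above: FETCH can also fill the internal table in packages directly, which avoids the row-by-row APPEND and the manual counter. Again only a sketch, and <TABLE> stands for your table.
    DATA: C2    TYPE CURSOR,
          ITAB2 LIKE STANDARD TABLE OF <TABLE>.
    OPEN CURSOR C2 FOR
      SELECT * FROM <TABLE> WHERE .......
    DO.
      FETCH NEXT CURSOR C2 INTO TABLE ITAB2 PACKAGE SIZE 30000.
      IF SY-SUBRC <> 0.
        EXIT.  "no more data
      ENDIF.
      " ITAB2 now holds the next block of up to 30,000 rows - process it here
    ENDDO.
    CLOSE CURSOR C2.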

  • SQL Server 2014 In-Memory Table Limitations

    When I use the migration wizard to migrate a table into a memory-optimized table, I get serious limitations (see images below). It appears that, in practice, a table has to be an isolated staging table to be migrated.
    A frequently used table like Production.Product would, theoretically speaking, be a good candidate to be memory resident.
    What do I do?
    Bigger question: what if I want the entire OLTP database in memory? After all, memory capacities are expanding.
    Thanks.
    Kalman Toth Database & OLAP Architect
    Free T-SQL Scripts
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

    ... It appears that practically a table has to be an isolated staging table for migration.
    Bigger question: what if I want the entire OLTP database in memory? After all memory capacities are expanding.
    Hello
    Yes, there are quite a few barriers to migrating tables to memory-optimized tables.
    For a list of unsupported features, check this topic:
    Transact-SQL Constructs Not Supported by In-Memory OLTP
    and for data types check here: Supported Data Types
    You probably do NOT want to put a whole database into the new in-memory structures. Not all workloads actually profit from that. That is, the more updates you have, the less you will benefit from
    memory-optimized tables because of the version chains.
    You can read a bit here: Determining if a Table or Stored Procedure Should Be Ported to In-Memory OLTP
    These are also some of the topics you may want to read beforehand:
    Memory Optimization Advisor
    Requirements for Using Memory-Optimized Tables
    Memory-Optimized Tables
    Good luck
    Andreas Wolter (Blog |
    Twitter)
    MCM - Microsoft Certified Master SQL Server 2008
    MCSM - Microsoft Certified Solutions Master Data Platform, SQL Server 2012
    www.andreas-wolter.com |
    www.SarpedonQualityLab.com

  • Memory and performance when copying a sorted table to a standard table

    Hello,
    As you all probably know, it's not possible to use a sorted table as a tables parameter of a function module, but sometimes you want to use a sorted table in your function module for performance reasons, and at the end of the function module, you just copy it to a standard table to return to the calling program.
    The problem with this is that at that moment the contents of the table are in memory twice, which could result in the well-known STORAGE_PARAMETERS_WRONG_SET runtime exception.
    I've been looking for ways to do this without using an excessive amount of memory while still being performant. I tried several methods; all have their advantages and disadvantages, so I was hoping someone here could help me come up with the best way to do this. Both memory and performance are an issue.
    Requirements :
    - Memory usage must be as low as possible
    - Performance must be as high as possible
    - Method must work on all SAP versions from 4.6c and up
    So far I have tried 3 methods.
    I included a test report to this message, the output of this on my dev system is :
    Test report for memory usage of copying tables    
    table1[] = table2[]                                        
    Memory :    192,751  Kb                                    
    Runtime:    436,842            
    Loop using workarea (with delete from original table)      
    Memory :    196,797  Kb                                    
    Runtime:  1,312,839        
    Loop using field symbol (with delete from original table)  
    Memory :    196,766  Kb                                    
    Runtime:  1,295,009                                                                               
    The code of the program :
    I had some problems pasting the code here, so it can be found at http://pastebin.com/f5e2848b5
    Thanks in advance for the help.

    I've had another idea:
    Create an RFC-enabled function module like this (replace SOLI_TAB with your table type):
    FUNCTION Z_COPY_TABLE .
    *"*"Local Interface:
    *"  IMPORTING
    *"     VALUE(IT_IN) TYPE  SOLI_TAB
    *"  EXPORTING
    *"     VALUE(ET_OUT) TYPE  SOLI_TAB
    et_out[] = it_in[].
    ENDFUNCTION.
    and then try something like this in your program:
    DATA: gd_copy_done TYPE c LENGTH 1.
    DATA: gt_one TYPE soli_tab.
    DATA: gt_two TYPE soli_tab.
    PERFORM move_tables.
    FORM move_tables.
      CLEAR gd_copy_done.
      CALL FUNCTION 'Z_COPY_TABLE'
        STARTING NEW TASK 'ztest'
        PERFORMING copy_done ON END OF TASK
        EXPORTING
          it_in = gt_one[].
      CLEAR gt_one[].
      WAIT UNTIL gd_copy_done IS NOT INITIAL.
    ENDFORM.
    FORM copy_done USING ld_task TYPE clike.
      RECEIVE RESULTS FROM FUNCTION 'Z_COPY_TABLE'
       IMPORTING
         et_out        = gt_two[].
      gd_copy_done = 'X'.
    ENDFORM.
    Maybe this is a little bit faster than the Memory-Export?

  • Read Table Vs Loop at

    Dear All,
    Please let me know which of the two I should use to improve performance for tables containing a lot of data.
    Regards,
    Thanks in anticipation.
    Alok.

    Hi,
    In transaction SE30 you can look for the Tips & Tricks button on the application toolbar; apart from that, follow the conventions below.
    1) Remove CORRESPONDING FIELDS from the SELECT statement
    2) Remove * from the SELECT and list only the fields you need
    3) Select fields in the sequence in which they are defined in the database
    4) Avoid unnecessary SELECTs, i.e. check that the driving internal table is not initial before a FOR ALL ENTRIES
    5) Use FOR ALL ENTRIES and sort the table by its key fields
    6) Remove SELECTs from loops and use binary search
    7) Try to use a secondary index when you don't have the full key
    8) Use the TRANSPORTING option when modifying internal tables
    9) Avoid nested loops; use READ TABLE and then LOOP AT itab FROM sy-tabix (see the sketch after the example form below)
    10) Free internal table memory when a table is not required for further processing
    11) Follow the logic below:
    FORM SUB_SELECTION_AUFKTAB.
      IF NOT it_plant[] IS INITIAL.
        it_plant1[] = it_plant[].
        SORT it_plant1 BY werks.
        DELETE ADJACENT DUPLICATES FROM it_plant1 COMPARING werks.
        SELECT AUFNR KTEXT USER4 OBJNR
          INTO CORRESPONDING FIELDS OF TABLE I_AUFKTAB
          FROM AUFK
          FOR ALL ENTRIES IN it_plant1
          WHERE AUFNR IN S_AUFNR AND
                KTEXT IN S_KTEXT AND
                WERKS IN S_WERKS AND
                AUART IN S_AUART AND
                USER4 IN S_USER4 AND
                WERKS EQ it_plant1-werks.
        FREE it_plant1.
      ENDIF.
    ENDFORM.                    "SUB_SELECTION_AUFKTAB
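    Regarding point 9 above, here is a hedged sketch of the READ TABLE plus LOOP AT ... FROM sy-tabix technique; the structures and field names are invented for illustration, and both tables must be sorted by the common key.
    TYPES: BEGIN OF ty_head,
             vbeln(10) TYPE c,
           END OF ty_head,
           BEGIN OF ty_item,
             vbeln(10) TYPE c,
             posnr(6)  TYPE n,
           END OF ty_item.
    DATA: it_head TYPE STANDARD TABLE OF ty_head,
          it_item TYPE STANDARD TABLE OF ty_item,
          wa_head TYPE ty_head,
          wa_item TYPE ty_item.
    SORT: it_head BY vbeln,
          it_item BY vbeln.
    LOOP AT it_head INTO wa_head.
      " binary search finds the first matching item instead of scanning everything
      READ TABLE it_item INTO wa_item
           WITH KEY vbeln = wa_head-vbeln
           BINARY SEARCH.
      IF sy-subrc = 0.
        " continue from the found index and stop as soon as the key changes
        LOOP AT it_item INTO wa_item FROM sy-tabix.
          IF wa_item-vbeln <> wa_head-vbeln.
            EXIT.
          ENDIF.
          " process the matching item here
        ENDLOOP.
      ENDIF.
    ENDLOOP.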
    Regards
    Amole

  • Short dump-internal table size issue

    Hi,
    I get the following message in the short dump analysis for a report.
    No storage space available for extending table "IT_920".
    You attempted to extend an internal table, but the required space was not available.
    Error Analysis:
    The internal table "IT_920" could not be enlarged further.             
    To extend the internal table, 9696 bytes of storage space was          
    needed, but none was available. At this point, the table "IT_920" has  
    1008240 entries.
    It's an old report, and I saw that the internal table is declared using the OCCURS clause:
    begin of itab occurs 100.
    end of itab.
    I tried changing it to "OCCURS 0", but the issue persists.
    Any help would be highly appreciated.
    CM

    Hello CMV,
    This is a basic problem with SAP internal tables. Memory is allocated for every internal table (max. 256 KB); once you cross the memory size limit of the internal table, it results in a short dump.
    The only way to overcome this problem is to handle a limited number of records at a time.
    Please refer to the following sample code, which will help you avoid the short dump while processing a large number of records.
      SORT TAB_RESULT.
      DESCRIBE TABLE TAB_RESULT LINES W_RECORDS.
      W_LOW = 1.
      W_UP  = 1000.
    * Split the records from TAB_RESULT into packages of 1000 at a time
    * to avoid a short dump in the case of a large number of records
      WHILE W_LOW <= W_RECORDS.
        R_PKUNWE-SIGN   = 'I'.
        R_PKUNWE-OPTION = 'EQ'.
        R_WERKS-SIGN    = 'I'.
        R_WERKS-OPTION  = 'EQ'.
        LOOP AT TAB_RESULT FROM W_LOW TO W_UP.
          MOVE TAB_RESULT-PKUNWE TO R_PKUNWE-LOW.
          MOVE TAB_RESULT-WERKS  TO R_WERKS-LOW.
          APPEND R_PKUNWE.
          APPEND R_WERKS.
        ENDLOOP.
    *   Fetch the sold-to party
        SELECT KUNNR NAME1
          FROM KNA1
          APPENDING CORRESPONDING FIELDS OF TABLE TAB_KNA1
          WHERE KUNNR IN R_PKUNWE.
    *   Fetch the plant
        SELECT WERKS NAME1
          FROM T001W
          APPENDING CORRESPONDING FIELDS OF TABLE TAB_T001W
          WHERE WERKS IN R_WERKS.
        REFRESH: R_PKUNWE,
                 R_WERKS.
        W_LOW = W_LOW + 1000.
        W_UP  = W_UP  + 1000.
      ENDWHILE.
    Hope this helps you solve the problem.
    Cheers,
    Nilesh

  • Creating a view and using it as an internal table

    Dear community,
    My question is this: I am writing a SELECT query where the data volume is so huge that it cannot be inserted into an internal table (because of the memory limit for internal tables). So I am thinking of creating a view on the same database table, and, when writing the SELECT query, using PACKAGE SIZE to transfer the records into that view and then using this view in my program.
    Can the above logic work or not?
    Also, is the size limit of a view greater than the internal table memory limit?

    Writing the large volume of data to another table (and a view is just a view of a table or group of tables) is not going to solve the problem. You still have to select the data from the view (or table). As I said in your other threads, you need to read by package size and do something with each subset, whether that is summarizing it or writing it to another system via RFC.
    Regards,
    Rich Heilman
