Tuning Memory Structures

Guys,
My application runs on an Oracle database. To improve performance by reducing I/O, we have spread
the datafiles and redo log files across 4 different drives.
Now, when I tune the application by increasing the SGA, the buffer cache (to improve the buffer cache hit
ratio), or the shared pool, does that use up space on the default drive where Oracle is installed, which is the C drive?
For example, Oracle is installed on C.
If I increase the SGA size from 600 MB to 1 GB, does the change consume space on the C drive?
Thanks

The SGA does not map to a file on a hard disk; it is allocated in the memory (ideally physical RAM) of your computer. It may indirectly consume disk space if you don't have enough physical memory and paging happens. It makes no sense to increase your SGA if it does not fit into physical memory.
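Before growing the SGA, it can help to see where the current 600 MB actually goes and how much of it is unused. A sketch using the standard dynamic performance views (values in V$SGA are in bytes):

```sql
-- Total SGA broken down into its fixed, variable, buffer and redo components
SELECT name, ROUND(value / 1024 / 1024, 1) AS mb
  FROM v$sga;

-- Free memory still available inside each SGA pool
SELECT pool, ROUND(bytes / 1024 / 1024, 1) AS free_mb
  FROM v$sgastat
 WHERE name = 'free memory';
```

If the pools already show plenty of free memory, a bigger SGA will not help; and as noted above, whatever size you pick must fit in physical RAM, not on the C drive.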

Similar Messages

  • Tuning the memory structure

    Hi folks,
    I have lots of misses in library cache. Can someone tell me how I can identify or fix the problem?
    SQL> SELECT Sum(pins) AS "Executions",
                Sum(reloads) AS "Cache Misses while Executing",
                Round(Sum(reloads) / Sum(pins) * 100, 2) AS "Misses Ratio, %"
           FROM v$librarycache;

    Executions Cache Misses while Executing Misses Ratio, %
    ---------- ---------------------------- ---------------
       1389008                        17415            1.25

    I used to run the above SQL with the following result:

    Executions Cache Misses while Executing Misses Ratio, %
    ---------- ---------------------------- ---------------
         26558                          315            1.19
    Thanks,
    Amir

    Hello Amir,
    The stats that you have given in the thread are not so bad that you need to worry. I would recommend not changing much if your library cache hit ratio is around 99%. By adding more space to the library cache you can increase the hit ratio and reduce the misses, but having too big a library cache is also not recommended.
    To identify the problem, I would propose that you use a STATSPACK report and find the SQL that has a low hit ratio. You might find that the application has an unnamed block that is used repeatedly, or the same type of SQL implemented in different ways, e.g. not using bind variables.
    Using the STATSPACK report you will be better equipped to reduce the misses and increase the hit ratio.
    I have personally learnt not to worry too much about hit ratios or misses. They should just be okay; more important are blocking events, and again a STATSPACK report is a good way to show such database activity.
    Regards
    Sudhanshu
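    Before reaching for STATSPACK, the aggregate numbers above can be broken down by namespace to see where the reloads actually occur. A sketch using standard V$LIBRARYCACHE columns:

    ```sql
    -- Per-namespace view of library cache activity; high RELOADS on the
    -- SQL AREA namespace often points at literal SQL / missing bind variables
    SELECT namespace,
           gets,
           ROUND(gethitratio * 100, 2) AS get_hit_pct,
           pins,
           reloads,
           invalidations
      FROM v$librarycache
     ORDER BY reloads DESC;
    ```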

  • Tuning: Memory Abusers

    Hello,
    I'm working through the book Oracle Database 10g Performance Tuning Tips & Techniques by R. Niemiec. Chapter 15 has the topic - Top 10 "Memory Abusers" as a percent of all statements - which I do not understand correctly.
    Maybe someone has this book and can help me with it?
    I executed the SQL statement given in the book and it reports about 66 percent for the top ten statements; the example in the book shows 44 percent.
    I also don't understand the ranking given for the example: its result is 60 points. Why?
    I don't know if I'm allowed to paste the rating part here.
    Greets,
    Hannibal

    I customized the query a little on 10g:
    SELECT *
      FROM (SELECT rank() over(ORDER BY buffer_gets DESC) AS rank_bufgets,
                   (100 * ratio_to_report(buffer_gets) over()) pct_bufgets,
                   sql_text,
                   executions,
                   disk_reads,
                   cpu_time,
                   elapsed_time
              FROM v$sqlarea)
    WHERE rank_bufgets < 11;
    In my opinion the above query helps to find and prioritize the problematic queries run since the last database startup, so the magic number and the conditions are not so important :)
    If you are on 10g and have the extra-cost option licensed, an AWR report of the problematic period will assist you for this purpose; before 10g, a STATSPACK report does the same. Also, from 10gR2 on, V$SQLSTATS is very useful. Search and check these according to your version here: http://tahiti.oracle.com
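    On 10gR2 and later, the same top-10 ranking can be run against V$SQLSTATS, which is cheaper to query than V$SQLAREA; a sketch (same columns as the query above, which all exist in V$SQLSTATS):

    ```sql
    SELECT *
      FROM (SELECT rank() over(ORDER BY buffer_gets DESC) AS rank_bufgets,
                   (100 * ratio_to_report(buffer_gets) over()) pct_bufgets,
                   sql_text,
                   executions,
                   disk_reads,
                   cpu_time,
                   elapsed_time
              FROM v$sqlstats)
     WHERE rank_bufgets < 11;
    ```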

  • How tuning memory in Logical database 10.2.0.2 in Unix server?

    Hi
    I need your help.
    I have a logical standby database, version 10.2.0.2, on a Unix Itanium server with 16 GB of RAM.
    Recently my database has had issues because it is not refreshing the data that comes from the primary DB.
    I have to flush the SGA and shut down the database.
    I also decreased max_servers from 27 to 18 and it still fails.
    My SGA is 9 GB.
    How can I tune the size of the SGA, shared pool, etc. for better performance?
    Or what else do I need to review in order to get better performance?
    Thanks in advance for the help.
    Lorein

    Hello
    That's the issue: I don't receive any error, it just takes a long time to apply the archive files from the primary database.
    This is part of the alert file. Yesterday the file arch112366_1_652812906.arc started to be applied, but after 5 hours it hadn't finished, and I didn't receive any error. The logical standby kept receiving archive files from the primary but never finished loading this one. What I did was flush the SGA, shut down the database, and decrease max_servers. Then the database started applying the files again.
    LOGMINER: End mining logfile: /oracle/pexp/arch/sarchive/arch112365_1_652812906.arc
    Wed Feb 3 12:32:36 2010
    RFS[2]: No standby redo logfiles created
    RFS[2]: Archived Log: '/oracle/pexp/arch/sarchive/arch112366_1_652812906.arc'
    Wed Feb 3 12:32:38 2010
    RFS LogMiner: Registered logfile [oracle/pexp/arch/sarchive/arch112366_1_652812906.arc] to LogMiner session id [1]
    Wed Feb 3 12:32:38 2010
    LOGMINER: Begin mining logfile: /oracle/pexp/arch/sarchive/arch112366_1_652812906.arc
    Wed Feb 3 12:32:40 2010
    LSP0: rolling back apply server 4
    LSP0: apply server 4 rolled back
    Wed Feb 3 12:32:45 2010
    Thread 1 advanced to log sequence 1026
    Current log# 1 seq# 1026 mem# 0: /oradata/pexp/redoA/redo01.log
    Current log# 1 seq# 1026 mem# 1: /oradata/pexp/mirrorredoA/redo01b.log
    Wed Feb 3 12:32:46 2010
    LOGMINER: Log Auto Delete - deleting: /oracle/pexp/arch/sarchive/arch112352_1_652812906.arc
    Deleted file /oracle/pexp/arch/sarchive/arch112352_1_652812906.arc
    Wed Feb 3 12:32:46 2010
    LOGMINER: Log Auto Delete - deleting: /oracle/pexp/arch/sarchive/arch112353_1_652812906.arc
    Deleted file /oracle/pexp/arch/sarchive/arch112353_1_652812906.arc
    Wed Feb 3 12:32:46 2010
    Regards
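    Rather than guessing from the alert log, the apply lag can be measured directly. A sketch against the standard logical-standby views on 10.2 (run on the logical standby):

    ```sql
    -- How far SQL Apply has progressed versus the newest redo received:
    -- a large gap between APPLIED_SCN and NEWEST_SCN means apply is behind
    SELECT applied_scn, newest_scn
      FROM dba_logstdby_progress;

    -- SQL Apply internal statistics (coordinator state, transactions applied, etc.)
    SELECT name, value
      FROM v$logstdby_stats;
    ```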

  • Performance Tuning Memory

    I have an application where I expect to be pushing the bounds of the process memory limit on Windows. I have done a lot of optimization to make sure that I am using as little memory as possible. I expect it to be around 650 MB given my current data set.
    When I run the application, I have to bump the heap up to almost twice that size so that I don't run out of memory. I believe that the virtual machine is allocating memory for the different generations in order to prepare to garbage collect all of these objects. The objects will never need to be garbage collected, though, because they will live for the life of the app. I've looked through the garbage collection FAQs and documents and none of the VM parameters have enabled me to bring the size of the app down to the size I expect it to be.
    Any ideas?
    My last resort is to write it in C++ so I have complete control, but I would much rather have this written in Java.
    Michael Connor

    Hi,
    I don't have an answer to your problem, but a tip: go and look for "jvmstat", a GC and memory visualisation tool. It will help you find out what's going on in your VM.
    Then you can try to tweak your JVM with all those pretty -XX options.
    Documentation starting point: http://java.sun.com/docs/performance/
    Peter

  • Oracle 9i Memory structures

    Hi all,
    What information exactly goes into the runtime area of the PGA, and what goes into the persistent area of the PGA?

    Fortunately for us all, Oracle documents all this in the Rather Fine Concepts Manual.
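    While the Concepts Manual has the definitions, the actual sizes of these areas can be observed directly. A sketch using standard views available since 9i:

    ```sql
    -- Instance-wide PGA statistics (aggregate target, allocated, in use, ...)
    SELECT name, value, unit
      FROM v$pgastat;

    -- Per-process breakdown of PGA memory
    SELECT spid, pga_used_mem, pga_alloc_mem, pga_max_mem
      FROM v$process;
    ```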

  • How to find how much memory used by particular procedure or function.

    Hi,
    How can we find out the memory used by a particular procedure or function?
    If a procedure or function is called many times in a particular interval, will it be cached in memory,
    and how will that affect performance?
    What type of PL/SQL statement will take more time than a normal SQL statement?

    Hi
    There are several different memory issues to consider:
    - the code itself (stored in the shared pool)
    - simple variables defined in the code
    - complex variables (eg VARRAY, TABLE etc)
    There's a helpful note on PL/SQL profiling here - http://www.oratechinfo.co.uk/tuning.html - which mentions how to measure memory use (session PGA and UGA - that's program and user global areas)
    You can find out more about shared pool memory structures here - http://download-east.oracle.com/oowsf2005/003wp.pdf.
    Calling a function many times - yes, the function code will be cached (if possible). Session state (for a package) will also be retained (ie global package variables).
    If many users call the same function, there will be one copy of the code but many copies of the private state.
    Finally: PL/SQL statements that can take a long time include:
    - anything that does heavy processing inside a tight loop;
    - anything that waits (select for update; read from a pipe or dequeue from AQ etc)
    Probably the most common mistake is to use PL/SQL for relational processing that can be done from SQL itself (eg writing nested PL/SQL loops to join data that could have been queried in a single SQL statement). Try to minimise context switches between PL/SQL and SQL:
    - use bulk collect where possible
    - use set operations in SQL
    Good luck, HTH
    Regards Nigel
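    To put numbers on the session PGA and UGA mentioned above, you can snapshot your own session's statistics before and after calling the procedure; the deltas are roughly what the call cost in memory. A sketch using the standard V$MYSTAT / V$STATNAME views:

    ```sql
    -- Current session's PGA/UGA memory; run before and after the call and diff
    SELECT sn.name, st.value
      FROM v$mystat st, v$statname sn
     WHERE st.statistic# = sn.statistic#
       AND sn.name IN ('session pga memory', 'session pga memory max',
                       'session uga memory', 'session uga memory max');
    ```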

  • What is maximum amount of memory that oracle db can utilize ?

    Hi
    Thank you for reading my post
    What is the maximum amount of memory that an Oracle database can utilize for each of its memory structures, like the SGA, caches, and so on?
    Thanks

    PS, the following may help too:
    Very Large Memory (VLM) Configurations
    Oracle Database for Windows supports Very Large Memory (VLM) configurations in Windows 2000, Windows 2003, and Windows XP, which allows Oracle Database to access more than the 4 gigabytes (GB) of RAM traditionally available to Windows applications.
    Note:
    This feature is available on Windows 2000, Windows 2003, and Windows XP only with Intel Pentium II and above processors.
    Specifically, Oracle Database uses Address Windowing Extensions (AWE) built into Windows 2000, Windows 2003, and Windows XP to access more than 4 GB of RAM.
    The requirements for taking advantage of this support are:
    1. The computer on which Oracle Database is installed must have more than 4 GB of memory.
    2. The operating system must be configured to take advantage of Physical Address Extensions (PAE) by adding the /PAE switch in boot.ini. See Microsoft Knowledge Base article Q268363 for instructions on modifying boot.ini to enable PAE.
    3. It is advisable (though not necessary) to enable 4GT support by adding the /3GB parameter in boot.ini. See Microsoft Knowledge Base article Q171793 for additional requirements and instructions on modifying boot.ini to enable 4GT.
    4. The user account under which Oracle Database runs (typically the LocalSystem account), must have the "Lock memory pages" Windows 2000 and Windows XP privilege.
    5. USE_INDIRECT_DATA_BUFFERS=TRUE must be present in the initialization parameter file for the database instance that will use VLM support. If this parameter is not set, then Oracle Database 10g Release 1 (10.1) or later behaves in exactly the same way as previous releases.
    6. Initialization parameters DB_BLOCK_BUFFERS and DB_BLOCK_SIZE must be set to values you have chosen for Oracle Database.
    Note:
    The total number of bytes of database buffers (that is, DB_BLOCK_BUFFERS multiplied by DB_BLOCK_SIZE) is no longer limited to 3 GB.
    Dynamic SGA and multiple block size are not supported with VLM. When VLM is enabled, the following new buffer cache parameters are not supported:
    o DB_CACHE_SIZE
    o DB_2K_CACHE_SIZE
    o DB_4K_CACHE_SIZE
    o DB_8K_CACHE_SIZE
    o DB_16K_CACHE_SIZE
    o DB_32K_CACHE_SIZE
    To select the block size for the instance, use the initialization parameter DB_BLOCK_SIZE. The buffer cache size is set by the initialization parameter DB_BLOCK_BUFFERS.
    7. Registry parameter AWE_WINDOW_MEMORY must be created and set in the appropriate key for your Oracle home. This parameter is specified in bytes and has a default value of 1 GB. AWE_WINDOW_MEMORY tells Oracle Database how much of its 3 GB address space to reserve for mapping in database buffers.
    This memory comes from the 3 GB virtual address space in Oracle Database, so its value must be less than 3 GB. Setting this parameter to a large value has the effect of using more of the address space for buffers and using less AWE memory for buffers. However, since accessing AWE buffers is somewhat slower than accessing virtual address space buffers, Oracle recommends that you tune these parameters to be as large as possible without adversely limiting database operations.
    In general, the higher AWE_WINDOW_MEMORY is set, the fewer connections and memory allocations will be possible for Oracle Database. The lower AWE_WINDOW_MEMORY is set, the lower the performance.
    8. Once this parameter is set, Oracle Database can be started and will function exactly the same as before except that more database buffers are available to the instance. In addition, disk I/O may be reduced because more Oracle Database data blocks can be cached in the System Global Area (SGA).
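    For steps 2 and 3, the relevant boot.ini entry might look like the fragment below. This is an illustration only: the exact ARC path and OS description vary per machine, so follow the Microsoft KB articles cited above for your system.

    ```ini
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /PAE /3GB /fastdetect
    ```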
    Note:
    Registry parameter VLM_BUFFER_MEMORY, which enabled VLM configurations in earlier releases, is not supported in Oracle Database 10g Release 1 (10.1) or later.
    VLM Instance Tuning
    VLM configurations improve database performance by caching more database buffers in memory. This reduces disk I/O compared to configurations without VLM. VLM support in Oracle Database 10g Release 1 (10.1) or later has been re-written to integrate very closely with Windows. Compared to Oracle8i release 2 (8.1.6), VLM users should see better performance with the newer implementation.
    Tuning for VLM is no different than tuning for configurations without VLM. It is an iterative task that begins by selecting appropriate DB_BLOCK_SIZE and DB_BLOCK_BUFFERS initialization parameters for the application being supported.
    Note:
    Oracle Database 10g Release 1 (10.1) or later VLM configurations do not support multiple database block sizes.
    AWE_WINDOW_MEMORY, a new registry parameter specific to VLM, tells Oracle Database how much of its address space to reserve for mapping in database buffers. It defaults to a value of 1 GB, which should be suitable for most installations. If DB_BLOCK_SIZE is large, however, the default AWE_WINDOW_MEMORY value of 1 GB may not be sufficient to start the database.
    Increasing the value of AWE_WINDOW_MEMORY will improve performance, but it will also limit the amount of memory available for other Oracle Database threads (like foreground threads). Clients may see "out of memory" errors if this value is set too large. As a general guideline, increase the AWE_WINDOW_MEMORY registry value by 20 percent.
    For example, if DB_BLOCK_SIZE is set to 8 KB, AWE_WINDOW_MEMORY is set to 1 GB, and the number of LRU latches is set to 32 (16 processor computer), then database startup fails with out of memory errors 27102 and 34. Increasing the value of AWE_WINDOW_MEMORY to 1.2 GB fixes the problem.
    Having a large cache in a VLM configuration may also slow down database writer (DBWR) threads. Having more DBWR threads will distribute work required to identify and write buffers to disk and will distribute I/O loads among threads. Initialization parameter DB_WRITER_PROCESSES enables you to configure multiple database writer threads.
    A large cache can also introduce contention on the LRU (least recently used) latch. On symmetric multiprocessor (SMP) systems, Oracle Database sets the number of LRU latches to a value equal to one half the number of processors on the system. You can reduce contention on such configurations by increasing the number of LRU latches to twice (or four times) the number of processors on the system.
    See Also:
    Oracle Database Performance Tuning Guide for more information on instance tuning
    Windows 4 GB RAM Tuning (4GT)
    The following Windows operating systems include a feature called 4 GB RAM Tuning (4GT):
    · Windows Server 2003
    · Windows 2000 Advanced Server
    · Windows 2000 Datacenter Server
    This feature allows memory-intensive applications running on Oracle Database Enterprise Edition to access up to 3 GB of memory, as opposed to the standard 2 GB in previous operating system versions. 4GT provides a tremendous benefit: 50 percent more memory is available for database use, increasing SGA sizes or connection counts.
    Large User Populations
    Several features allow Oracle Database to support an increasingly large number of database connections on Windows:
    · Oracle Database Shared Server Process, which limits the number of threads needed in the Oracle Database process, supports over 10,000 simultaneous connections to a single database instance.
    · Oracle Net multiplexing and connection pooling features allow a large configuration to connect more users to a single database instance.
    · Oracle Real Application Clusters raises connection counts dramatically by allowing multiple server computers to access the same database files, increasing the number of user connections by tens of thousands, as well as increasing throughput.
    rgds
    alan

  • Applets and memory not being released by Java Plug-in

    Hi.
    I am experiencing a strange memory-management behavior of the Java Plug-in with Java Applets. The Java Plug-in seems not to release memory allocated for non-static member variables of the applet-derived class upon destroy() of the applet itself.
    I have built a simple "TestMemory" applet, which allocates a 55-megabyte byte array upon init(). The byte array is a non-static member of the applet-derived class. With the standard Java Plug-in configuration (64 MB of max JVM heap space), this applet executes correctly the first time, but it throws an OutOfMemoryException when I press the "Reload / Refresh" browser button, or when I press the "Back" and then the "Forward" browser buttons. In my opinion, this is not the expected behavior: when the applet is destroyed, the non-static byte array member should be automatically invalidated and recollected. Shouldn't it?
    Here is the complete applet code:
    // ===================================================
    import java.awt.*;
    import javax.swing.*;

    public class TestMemory extends JApplet {
      private JLabel label = null;
      private byte[] testArray = null;

      // Construct the applet
      public TestMemory() {
      }

      // Initialize the applet
      public void init() {
        try {
          // Initialize the applet's GUI
          guiInit();
          // Instantiate a 55 MB array
          // WARNING: with the standard Java Plug-in configuration (i.e., 64 MB of
          // max JVM heap space) the following line of code runs fine the FIRST time the
          // applet is executed. Then, if I press the "Back" button on the web browser,
          // then press "Forward", an OutOfMemoryException is thrown. The same result
          // is obtained by pressing the "Reload / Refresh" browser button.
          // NOTE: the OutOfMemoryException is not thrown if I add "testArray = null;"
          // to the destroy() applet method.
          testArray = new byte[55 * 1024 * 1024];
          // Do something on the array...
          for (int i = 0; i < testArray.length; i++) {
            testArray[i] = 1;
          }
          System.out.println("Test Array Initialized!");
        } catch (Exception e) {
          e.printStackTrace();
        }
      }

      // Component initialization
      private void guiInit() throws Exception {
        setSize(new Dimension(400, 300));
        getContentPane().setLayout(new BorderLayout());
        label = new JLabel("Test Memory Applet");
        getContentPane().add(label, BorderLayout.CENTER);
      }

      // Start the applet
      public void start() {
        // Do nothing
      }

      // Stop the applet
      public void stop() {
        // Do nothing
      }

      // Destroy the applet
      public void destroy() {
        // If the line below is uncommented, the OutOfMemoryException is NOT thrown
        // testArray = null;
      }

      // Get applet information
      public String getAppletInfo() {
        return "Test Memory Applet";
      }
    }
    // ===================================================
    Everything works fine if I set the byte array to "null" upon destroy(), but does this mean that I have to manually set to null all of the applet's member variables upon destroy()? I believe this should not be a requirement for non-static members...
    I am able to reproduce this problem on the following PC configurations:
    * Windows XP, both JRE v1.6.0 and JRE v1.5.0_11, both with MSIE and with Firefox
    * Linux (Sun Java Desktop), JRE v1.6.0, Mozilla browser
    * Mac OS X v10.4, JRE v1.5.0_06, Safari browser
    Your comments would be really appreciated.
    Thank you in advance for your feedback.
    Regards,
    Marco.

    Hi Marco,
    my guess as to why the JPI would keep references around, if it does keep them, is that it is probably an implementation side effect. A lot of things are cached in the name of performance and it is easy to leave things lying around in your cache. Maybe the page with the associated images/applets is kept in the browser cache until the browser needs some memory; if the browser memory manager is not cooperating with the JPI/JVM memory manager, the browser is not out of memory, and thus not releasing its caches, but the JVM may be out of memory. Thus the browser indirectly keeps a reference that it really does not need. This reference could be indirect, through some 'applet context' or whatever the browser uses to interact with the JPI; I don't really know any of these details, I'm just imagining what must or could be going on there. Browsers are amazingly complicated beasts.
    This behaviour that you are observing, whether its origin is something like I speculated or not, is not nice, but I would not expect it to be fixed even if you filed a bug report. I guess we are left with releasing all significant memory structures in destroy(). A simple way to code this is not to store anything in the member fields of the applet but in a separate class; then all one has to do is null that one reference from the applet to that class in the destroy method, and everything will be released when necessary. This way it is not easy to forget to release things.
    Hey, here is a simple, imaginary, way in which the browser could cause this problem:
    The browser, of course, needs a reference to the applet; call it m_Applet here. Presume the following helper function:
    Applet instantiateAndInit(Class appletClass) throws Exception {
        Applet applet = (Applet) appletClass.newInstance();
        applet.init();
        return applet;
    }
    When the browser sees the applet tag it instantiates and inits the new applet as follows:
    m_Applet = instantiateAndInit(appletClass);
    As you can readily see, the second time the instantiation occurs, m_Applet holds the reference to the old applet until after the new instance is created and initialized. This would not cause a memory leak, but it would require twice the memory needed by the applet to prevent OutOfMemory. I guess it is not fair to call this sort of thing a bug, but it is questionable design. In real life it is probably not this blatant, but it could happen. You could try, if you like, allocating less than 32 MB in your init. If you then do not run out of memory, it is an indication that there are at most two instances of your applet around, and thus it could well be something like I've speculated here.
    br Kusti

  • External memory allocation and management using C / LabVIEW 8.20 poor scalability

    Hi,
    I have multiple C functions that I need to interface. I need
    to support numeric scalars, strings and booleans and 1-4 dimensional
    arrays of these. The programming problem I try to avoid is that I have
    multiple different functions in my DLLs that all take as an input or
    return all these datatypes. Now I can create a polymorphic interface
    for all these functions, but I end-up having about 100 interface VIs
    for each of my C function. This was still somehow acceptable in LabVIEW
    8.0 but in LabVIEW 8.2 all these polymorphic VIs in my LVOOP project
    gets read into memory at project open. So I have close to 1000 VIs read into memory whenever I open my project. It now takes about ten minutes to
    open the project and some 150 MB of memory is consumed instantly. I
    still need to expand my C interface library and LabVIEW doesn't simply
    scale up to meet the needs of my project anymore.
    I now
    reserve my LabVIEW datatypes using DSNewHandle and DSNewPtr functions.
    I then initialize the allocated memory blocks correctly and return the
    handles to LabVIEW. LabVIEW complier interprets Call Library Function
    Node terminals of my memory block as a specific data type.
    So
    what I thought was following. I don't want LabVIEW compiler to
    interpret the data type at compile time. What I want to do is to return
    a handle to the memory structure together with some metadata describing
    the data type. Then all of my many functions would return this kind of
    handle. Let's call this a data handle. Then I can later convert this
    handle into a real datatype either by typecasting it somehow or by
    passing it back to C code and expecting a certain type as a return.
    This way I can reduce the number of needed interface VIs close to 100
    which is still acceptable (i.e. LabVIEW 8.2 doesn't freeze).
    So
    I practically need a similar functionality as variant has. I cannot use
    variants, since I need to avoid making memory copies and when I convert
    to and from variant, my memory consumption increases to three fold. I
    handle arrays that consume almost all available memory and I cannot
    accept that memory is consumed ineffectively.
    The question is,
    can I use DSNewPtr and DSNewHandle functions to reserve a memory block
    but not return a LabVIEW structure of that size. Does LabVIEW's
    garbage collection automatically decide to dispose of my block if I don't
    correctly return it from my C code immediately but only later at the next call
    to C code? Can I typecast a 1D U8 array to an array of any dimensionality and any numeric data type without a memory copy (i.e. does typecast work the way it works in C)?
    If I cannot find a solution to this LabVIEW 8.20 scalability issue, I will really have to consider transferring our project from LabVIEW to some other development environment like C++ or one of the .NET languages.
    Regards,
    Tomi
    Tomi Maila

    I have to answer myself since nobody else has answered me yet. I came up with one solution that relies on LabVIEW queues. Queues of different types are all referred to the same way and can also be typecast from one type to another. This means that one can use single-element queues as a kind of variant data type, which is quite safe. However, one copy of the data is made when you enqueue and dequeue the data.
    See the attached image for details.
    Tomi Maila
    Attachments:
    variant.PNG ‏9 KB

  • SAP Memory

    hi all,
    Please tell me the code for passing a particular flag value to SAP memory, then reading it at any other location and freeing the memory thereafter.
    Also, please let me know the code for disabling/enabling a particular screen field.
    Thanks ,
    Paul

    To fill the input fields of a called transaction with data from the calling program, you can use the SPA/GPA technique. SPA/GPA parameters are values that the system stores in the global, user-specific SAP memory. SAP memory allows you to pass values between programs. A user can access the values stored in the SAP memory during one terminal session for all parallel sessions. Each SPA/GPA parameter is identified by a 20-character code. You can maintain them in the Repository Browser in the ABAP Workbench. The values in SPA/GPA parameters are user-specific.
    SPA/GPA Parameters as Default Values
    The SPA/GPA Parameter Technique is a general procedure for filling the initial screen when a program is called. To use this technique for parameters on selection screens, you must link the parameter to an SPA/GPA parameter from the SAP memory as follows:
    PARAMETERS <p> ...... MEMORY ID <pid>......
    If you use this addition, the current value of SPA/GPA parameter <pid> from the global user-related SAP memory is assigned to parameter <p> as a default value. Description <pid> can contain a maximum of twenty characters and must not be enclosed in quotation marks.
    REPORT DEMO.
    PARAMETERS TEST(16) MEMORY ID RID.
    ABAP programs can access the parameters using the SET PARAMETER and GET PARAMETER statements.
    To fill one, use:
    SET PARAMETER ID <pid> FIELD <f>.
    This statement saves the contents of field <f> under the ID <pid> in the SAP memory. The code <pid> can be up to 20 characters long. If there was already a value stored under <pid>, this statement overwrites it. If the ID <pid> does not exist, double-click <pid> in the ABAP Editor to create a new parameter object.
    To read an SPA/GPA parameter, use:
    GET PARAMETER ID <pid> FIELD <f>.
    Passing Data Between Programs
    There are two ways of passing data to a called program:
    Passing Data Using Internal Memory Areas
    There are two cross-program memory areas to which ABAP programs have access (refer to the diagram in Memory Structures of an ABAP Program) that you can use to pass data between programs.
    SAP Memory
    SAP memory is a memory area to which all main sessions within a SAPgui have access. You can use SAP memory either to pass data from one program to another within a session, or to pass data from one session to another. Application programs that use SAP memory must do so using SPA/GPA parameters (also known as SET/GET parameters). These parameters can be set either for a particular user or for a particular program using the SET PARAMETER statement. Other ABAP programs can then retrieve the set parameters using the GET PARAMETER statement. The most frequent use of SPA/GPA parameters is to fill input fields on screens (see below).
    ABAP Memory
    ABAP memory is a memory area that all ABAP programs within the same internal session can access using the EXPORT and IMPORT statements. Data within this area remains intact during a whole sequence of program calls. To pass data to a program which you are calling, the data needs to be placed in ABAP memory before the call is made. The internal session of the called program then replaces that of the calling program. The program called can then read from the ABAP memory. If control is then returned to the program which made the initial call, the same process operates in reverse. For further information, refer to Data Clusters in ABAP Memory.
    Filling Input Fields on an Initial Screen
    Most programs that you call from other programs have their own initial screen that the user must fill with values. For an executable program, this is normally the selection screen. The SUBMIT statement has a series of additions that you can use to fill the input fields of the called program:
    Filling the Selection Screen of a Called Program
    You cannot fill the input fields of a screen using additions in the calling statement. Instead, you can use SPA/GPA parameters. For further information, refer to Filling an Initial Screen Using SPA/GPA Parameters.
    regards
    vinod

  • Running out of memory - not accessing Virtual Memory

    Hi there,
    I am working on a vision application, and trying to process very large images.
    Environment:
    LabVIEW 2014 (14.0f1) 64-bit
    Vision Development Module 2014
    Vision Acquisition Software 2014
    Windows 7 Professional SP1 64-bit
    Settings:
    Physical RAM = 8GB
    Virtual RAM Settings:
    Unchecked: "Automatically manage paging file size for all drives"
    Checked: "Custom Size"
    Initial Size (MB): 12094
    Maximum Size (MB): 24188
    I read in a sequence of images in the LabVIEW application and stitch them together. I observe (via Task Manager) the physical memory rise to approximately 7.6GB, and then I get the "Out of Memory" error.
    So it doesn't seem to be accessing the virtual memory I have allocated? Is there a setting for this that I am missing?
    Thanks
    Christopher Farmer
    Certified LabVIEW Architect
    Certified TestStand Developer
    http://wiredinsoftware.com.au

    LabVIEW does need a contiguous block of memory for every data structure it creates. In addition, its arrays are traditionally limited to 2^31-1 elements per dimension, since the count member of an array is treated as a signed 32-bit integer. This 2^31 limitation very likely doesn't come into play here, since IMAQ Vision uses its own internal memory structures and treats IMAQ Vision handles as pointers rather than LabVIEW data handles, to make it less prone to unintentional memory copies of the data.
    Basically, image memory only gets copied when explicitly calling IMAQ Vision functions or when passing an image to an image control for display in a window, and that is pretty much unavoidable. So if you create an image that uses 4GB of memory, you quickly end up consuming 2 or 3 blocks of contiguous 4GB memory. Due to memory fragmentation, even if your system still has 4 or more GB of memory free, calling a function that (temporarily) causes another copy in memory of that 4GB image may still fail, since there is no contiguous block of memory available anymore in the system. And yes, that data has to be in physical memory, as it is actively used at that moment.
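    To put rough numbers on this (the dimensions below are made up for illustration, not taken from the original post), a quick sketch of the arithmetic:

```java
public class ImageMemoryEstimate {
    public static void main(String[] args) {
        // Hypothetical stitched image: 32768 x 32768 pixels, 32-bit per pixel
        long width = 32768, height = 32768;
        int bytesPerPixel = 4;
        long bytes = width * height * bytesPerPixel;
        long gib = bytes / (1024L * 1024 * 1024);
        System.out.println("Image needs " + gib + " GiB in one contiguous block");
        // A single temporary copy during processing doubles the requirement:
        System.out.println("One extra copy: " + (2 * gib) + " GiB of contiguous blocks");
        // On an 8 GB machine that copy alone can exhaust physical memory,
        // regardless of how large the paging file is.
    }
}
```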
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • SQL Server 2014 In-Memory Table Limitations

    When I use the migration wizard to migrate a table into a memory-optimized table, I get serious limitations (see images below). It appears that practically a table has to be an isolated staging table for migration.
    A frequently used table like Production.Product would be a good candidate to be memory resident, theoretically speaking.
    What do I do? 
    Bigger question: what if I want the entire OLTP database in memory? After all memory capacities are expanding.
    Thanks.
    Kalman Toth Database & OLAP Architect
    Free T-SQL Scripts
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

    ... It appears that practically a table has to be an isolated staging table for migration.
    Bigger question: what if I want the entire OLTP database in memory? After all memory capacities are expanding.
    Hello
    Yes, there are quite a few barriers for migrating tables to memory optimized.
    For a list of unsupported features check this topic:
    Transact-SQL Constructs Not Supported by In-Memory OLTP
    and for datatypes check here: Supported Data Types
    You probably do NOT want to put a whole database into the new In-Memory structures. Not all workloads actually profit from that. That is, the more updates your workload performs, the less you will benefit from memory-optimized tables, because of the row version chains.
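    For reference, a minimal memory-optimized table in SQL Server 2014 might look like this (the table name and BUCKET_COUNT are illustrative, and the database must already have a MEMORY_OPTIMIZED_DATA filegroup):

```sql
CREATE TABLE dbo.ProductHot
(
    ProductID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    Name      NVARCHAR(50) NOT NULL,
    ListPrice MONEY NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```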
    You can read a bit here: Determining if a Table or Stored Procedure Should Be Ported to In-Memory OLTP
    And also those are some of the topics which you may want to have read beforehand:
    Memory Optimization Advisor
    Requirements for Using Memory-Optimized Tables
    Memory-Optimized Tables
    Good luck
    Andreas Wolter (Blog |
    Twitter)
    MCM - Microsoft Certified Master SQL Server 2008
    MCSM - Microsoft Certified Solutions Master Data Platform, SQL Server 2012
    www.andreas-wolter.com |
    www.SarpedonQualityLab.com

  • 7 cursors as out param , giving memory leak

    Hi,
    Please advise on the following problem i am facing:
    One of my stored procedures returns 7 cursors, and it is causing a memory leak. Please note that I am closing all the cursors and JDBC statements in my calling Java program, but it is still causing a memory leak.
    Could anybody please advise me on this problem.
    Thanks,
    PP

    Is this a memory leak on the client or on the server? 99% of the time, this sort of error is the result of not cleaning up some memory structure; perhaps there is a case where one or more ResultSets isn't being closed.
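    A common culprit is a ResultSet that escapes the cleanup path when an earlier close throws. One defensive pattern (sketched here with a stand-in AutoCloseable instead of a live JDBC connection) is try-with-resources, which closes every resource, in reverse order, even if the body or an earlier close fails:

```java
import java.util.ArrayList;
import java.util.List;

public class CursorCleanup {
    // Stand-in for a JDBC ResultSet/Statement; records whether close() ran.
    static class FakeCursor implements AutoCloseable {
        final String name;
        boolean closed = false;
        FakeCursor(String name) { this.name = name; }
        @Override public void close() { closed = true; }
    }

    static FakeCursor track(List<FakeCursor> list, String name) {
        FakeCursor c = new FakeCursor(name);
        list.add(c);
        return c;
    }

    public static void main(String[] args) {
        List<FakeCursor> cursors = new ArrayList<>();
        // try-with-resources guarantees close() on all 7 cursors,
        // even if processing in the body throws
        try (FakeCursor c1 = track(cursors, "c1");
             FakeCursor c2 = track(cursors, "c2");
             FakeCursor c3 = track(cursors, "c3");
             FakeCursor c4 = track(cursors, "c4");
             FakeCursor c5 = track(cursors, "c5");
             FakeCursor c6 = track(cursors, "c6");
             FakeCursor c7 = track(cursors, "c7")) {
            // ... process the 7 result sets here ...
        }
        for (FakeCursor c : cursors) {
            System.out.println(c.name + " closed: " + c.closed);
        }
    }
}
```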
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • ABAP Memory ID

    Hi All,
    Is the ABAP memory ID global?
    for example, I have FUNC1. In FUNC1, I export a value to memory ID 'ABC'. Is the value of 'ABC' valid for all other function calls?
    The problem I experienced is that FUNC1 may be called by many interfaces at the SAME time, and because the memory ID is "global", it may cause a problem.
    Thanks,

    If you look at the documentation of the 'EXPORT' statement, you will see that data is stored in the ABAP memory not in SAP global memory. See below for an extract of the documentation.
    Stores a data cluster in ABAP memory. The specified objects obj1 ... objn (fields, structures, complex structures, or tables) are stored as one cluster in ABAP memory.
    If you call a transaction, an executable program, or a dialog module in call mode ( CALL TRANSACTION, SUBMIT, CALL DIALOG), the ABAP memory is retained, even over several call levels. The called transaction can import the data from ABAP memory using IMPORT ... FROM MEMORY. Each new EXPORT ... TO MEMORY overwrites the old data in ABAP memory. You cannot therefore append to data already in the ABAP memory.
    When you leave the lowest level of the call chain, the ABAP memory is released.
    SAP and ABAP/4 Memory
    There is a difference between the cross-transaction SAP memory and the transaction-specific ABAP/4 memory.
    SAP memory
    The SAP memory, otherwise known as the global memory, is available to a user during the entire duration of a terminal session. Its contents are retained across transaction boundaries as well as external and internal sessions. The SET PARAMETER and GET PARAMETER statements allow you to write to, or read from, the SAP memory.
    ABAP/4 memory
    The contents of the ABAP/4 memory are retained only during the lifetime of an external session (see also Organization of Modularization Units). You can retain or pass data across internal sessions. The EXPORT TO MEMORY and IMPORT FROM MEMORY statements allow you to write data to, or read data from, the ABAP memory.
    Please consult Data Area and Modularization Unit Organization documentation as well.
    and Memory Structures of an ABAP Program
    So, just based on this, your problem may not be related to the export statement or the import statement.
    Are you making multiple function calls from the same program/transaction? In that case there is a chance of the memory getting overwritten. But if different people are using the transaction/program at the same time, and the function call occurs just once in the program/transaction, then you should not have this issue.
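    If the same program does call the function several times and the clusters must not collide, one defensive sketch (the ID construction here is purely illustrative) is to derive the memory ID from the caller instead of hard-coding 'ABC':

```abap
DATA: lv_memid TYPE char40.
* Scope the ID by the calling program so two callers in the same
* internal session do not overwrite each other's cluster
CONCATENATE 'ABC_' sy-cprog INTO lv_memid.
EXPORT lv_value TO MEMORY ID lv_memid.
* ... later ...
IMPORT lv_value FROM MEMORY ID lv_memid.
```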
    Srinivas
