SAP WM: Pre-Allocated Stock

Hi All
We have an HU-managed storage location in the warehouse, and the storage type is also SU-managed. I want to pre-allocate the materials that come in. After packing in the inbound delivery, when I try to create the TO, the system gives a warning that the material is pre-allocated, but then it simply proposes the bin per the putaway strategy (not per the pre-allocation table). The pre-allocation table (LT51) is maintained, and movement type 101 has "Consider pre-allocated stock" checked. The problem seems to be that on the TO preparation screen there is no button for "Pre-allocated stock", and even the menu path Goto --> Pre-allocated stock is grayed out. Please let me know how I can do pre-allocation in this case.
Thanks
Apurva

Hello Apurva,
Did you get an answer? I'm experiencing the same issue!
Regards

Similar Messages

  • Pre allocated stock process

    Hi Experts,
    I am trying to use the pre allocated stock process.
    It works fine up to the point where I move stock to the GI area for cost center directly from the goods receipt area.
    Now there is a stock of 100 PC in my GI area for cost center.
    When I post the goods issue to the cost center and try to convert the TR to a TO, the system gives an error that no source bin was found. But in this case the system should use that same bin as the source, because the stock already exists in that bin.
    The process works perfectly if I issue 10 PC instead of 100 PC.
    Can you please help me to solve this problem?
    Thanks,
    Navin

    As I understand it, the pre-allocated stock scenario finishes when the goods are transferred from the goods receipt interim area to the goods issue interim area (the first TO).
    When you post the GI (in Inventory Management), WM behaves as you configured it in the LE-IM interface. If you set automatic TR creation, it will create a TR even though that makes no sense in your example, since the goods are already in the proper bin and the GI consumes them from the interim storage bin.
    (Maybe SAP could not find a foolproof way to avoid creating such TRs.)
    So the second TR has nothing to do with the pre-allocated stock scenario.
    What you can do:
    - delete the unnecessary TR (Set "Final Delivery" in LB02)
    - do not create the unnecessary TR (e.g. you can set "No transfer requirement" in MIGO / MB1A)
    - develop your own solution
    Edited by: Csaba Szommer on Nov 21, 2010 5:06 PM

  • Pre-allocated Stock

    Dear Experts,
    The concept of pre-allocated stock is clear including the pre-requisites (flagging the movement type as relevant for pre-allocated stock + maintain pre-allocated stock quantities in transaction LT51).
    In my test case, I did the following:
    1. I made movement type 101 as relevant for pre-allocated stock
    2. In LT51 I put material = M1 with storage type = 911 and bin = TEST and quantity = 10 PC
    When I did a GR for 30 PC, the system alerts that there's pre-allocated stock. So, the TO will be split into two items:
    - 20 PC from storage type 902 -> 001
    - 10 PC from storage type 902 -> 911 (from GR area to GI area)
    So far so good. However, I have two questions:
    1. How do I now consume the 10 PC on storage type 911? If I do a GI for 10 PC, the system will create a TO from 001->911
    2. How do I automate the stock placement TO? It seems that when there's pre-allocated stock, the TO automation will not function.
    Regards,
    Hani
    Edited by: Hani Kobeissi on Feb 3, 2012 2:38 PM

    A)
    You described the issue perfectly, but you didn't provide an answer. I'll be really surprised if SAP doesn't have a solution for this. I'll try to describe the issue in more detail and with an example:
    I did answer; here you can find it again:
    I don't know the official answer, but it seems this is simply not solved. Maybe the reason is that this situation is meant to be exceptional ("urgent need for goods issue").
    Another thing might be that if you urgently need stock for GI, then even if the system creates a TR for the inventory management posting (per your IM/WM interface settings), you won't be able to create a TO from the TR because you don't have sufficient stock (i.e. you cannot move the goods-issued quantity from your normal storage type; if you can, then why are you using the pre-allocated stock scenario?).
    B)
    Say I have a reservation of 10 PC for a certain material M1. Since I have no stock of this material, I won't be able to do a GI. Instead, I enter in the pre-allocated stock table that I need 10 PC in storage type 916. Then my order for this material arrives with 50 PC. Normally, I've set up the system to automatically create a TO to put the 50 PC into storage type 001. With this pre-allocated stock:
    1. The TO is not automatically created. Why? Can I automate this?
    2. The system splits the TO into two items: the first item puts away the 10 PC straight from 902 to 916, and the remaining 40 PC are sent to 001 (so far so good, except for the automation).
    After I confirm the TOs, I'll end up with 10 PC in 916 and 40 PC in 001. However, I haven't done the GI yet. Then, when I do the GI against the reservation, the system automatically creates a TO of 10 PC from 001 to 916 and decreases the 10 PC in 916. So I end up with 0 PC in 916 and a new TO. If I confirm this new TO, I'll end up with 10 PC again in 916 and 30 PC in 001, and my reservation is closed.
    The big question is: how the hell will I consume these 10 PC that are now in 916? Any other GI will trigger another TO.
    1)
    If you have no automatic TO creation for your normal process (902 --> normal storage type), I don't think there will be automatic TO creation for the pre-allocated stock either.
    I can see this functionality for pre-allocated stock neither in the SAP documentation, nor am I aware of any such setting in the configuration. What makes you expect it to work this way?
    2)
    If you want to GI 10 PC, then the 10 PC that have been moved from 902 to 916 will be consumed; there is no need to create a TO to move 10 PC from your normal storage type to 916.
    If the GI creates a TR of 10 PC to move the goods from 001 to 916, you can "delete" the TR. Another option is to cancel the TO with LT15 before confirming it.
    If due to your settings you cannot avoid moving the 10 PC from 001 to 916, then you have to move them back manually (LT01 / LT10).
    This is how I see this and I can be easily mistaken. I hope someone from SAP also reads the thread and can provide an official answer.
    Edited by: Csaba Szommer on Feb 8, 2012 7:46 PM

  • Missing parts/pre-allocated stock

    Hi all,
    I got a Missing parts/pre-allocated stock table issue, maybe someone can help me out.
    This can be handled via LT51, which fills table T310 MANUALLY. This table is checked during creation of the transfer order (TO) if the check is activated for the WM movement type.
    Is there an automatic way to fill this table?
    Is it possible to only create TRs if the stock is not available and TOs automatically from TRs if stock is available?
    Cheers
    Mathias

    Hi Mathias,
    We are in the same situation. Your message was from April, so presumably you have solved the problem by now. Can you share your solution?
    Best Regards,

  • Maintain pre allocated stock

    Hi ,
    Can anybody tell me the exact use of 'Maintain pre-allocated stock', transaction LT51?
    I was trying to do cross-docking from 902 to 916. I have set the 'Consider pre-allocated stock' indicator for movement type 101.
    I created an outbound delivery > TO creation > confirmed the TO, for an item which has no stock in the warehouse.
    After that I did inbound delivery > TO creation, and during TO creation the system diverts some stock to 916 directly from 902. This part is fine.
    But my question is: is there any setting that marks the stock on the outbound delivery as pre-allocated once the availability check has run and found that no stock is available in the warehouse for that outbound delivery?
    What role exactly does LT51 play here?
    Thanks,
    Mono

    Hi,
    Pre-allocated stock is something similar to cross-docking.
    The process is triggered by creating an entry in the pre-allocated stock table.
    When you create a TO for putaway, the system checks the pre-allocated stock table and gives a message if there is an entry.
    You can go to the pre-allocated stock tab on the TO creation screen and add the quantity. Any remaining quantity can be put away with the normal bin search process. In this case you will have 2 line items in the TO.
    Once you confirm the TO, the system clears the pre-allocated stock table.
    Hope this helps.
    Navin

  • Pre-Allocated Stock - Auto Transfer Order

    I am looking at implementing a stock allocation process, but it seems to conflict with the automatic transfer order process that we are using in receiving.
    The allocation process works fine, but as soon as I use automatic transfer order creation, I get a failure message and the TO is not created.
    Is there a way to create the TO based on the allocation in the background without any message errors?
    Thank-you

    Could it be that you are trying to achieve something with this functionality for which it is not designed?
    Pre-allocated stock is an exception to normal business, and the user tells SAP about this exception by maintaining the table.
    So SAP knows about the exception and can act accordingly, e.g. pick the material directly from the receiving area rather than waiting until all stock is put away.
    http://help.sap.com/saphelp_470/helpdata/en/c6/f83fbe4afa11d182b90000e829fbfe/frameset.htm

  • Pre-allocated stock T310

    Hello Gurus,
    The table T310 is checked during creation of the transfer order (TO) if the check is activated for the WM movement type. I entered a record in table T310 MANUALLY via transaction LT51, due to the missing parts verified by production. Is there an automatic way to fill this table?
    Many thanks in advance
    Jari

    Could it be that you are trying to achieve something with this functionality for which it is not designed?
    Pre-allocated stock is an exception to normal business, and the user tells SAP about this exception by maintaining the table.
    So SAP knows about the exception and can act accordingly, e.g. pick the material directly from the receiving area rather than waiting until all stock is put away.
    http://help.sap.com/saphelp_470/helpdata/en/c6/f83fbe4afa11d182b90000e829fbfe/frameset.htm

  • CPO SAP Adapter pre-requisite Check Error

    TEO 2.3.5 SAP adapter pre-requisite error
    I am trying to set up an SAP ABAP connection. I satisfied the pre-requisite of SAP NCo (sapnco30dotnet40P_7-20007348), but still get the following error while checking the pre-requisites:
    Unable to check for prerequisites:
    A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
    This is what I see in the logs-
    /StackTrace><ExceptionString>System.ServiceModel.CommunicationObjectAbortedException: The communication object, System.ServiceModel.Channels.TransportReplyChannelAcceptor+TransportReplyChannel, cannot be used for communication because it has been Aborted.</ExceptionString></Exception></TraceRecord>
    ||735|2013/03/01 07:33:30.028|2084|PoolThread:57|||WCF: System.ServiceModel Error: 131075 :
    ||736|2013/03/01 07:33:30.029|2084|PoolThread:57|||WCF: <TraceRecord xmlns="
    http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord
    " Severity="Error"><TraceIdentifier>http://msdn.microsoft.com/en-US/library/System.ServiceModel.Diagnostics.ThrowingException.aspx</TraceIdentifier><Description>Throwing an exception.</Description><AppDomain>Tidal.Automation.Server.exe</AppDomain><Source>System.ServiceModel.Channels.TransportReplyChannelAcceptor+TransportReplyChannel/59871390</Source><Exception><ExceptionType>System.ServiceModel.CommunicationObjectAbortedException, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType><Message>The communication object, System.ServiceModel.Channels.TransportReplyChannelAcceptor+TransportReplyChannel, cannot be used for communication because it has been Aborted.</Message><StackTrace>   at System.ServiceModel.Channels.InputQueueChannel`1.EndDequeue(IAsyncResult result, TDisposable&amp;amp; item)
       at System.ServiceModel.Dispatcher.ErrorHandlingReceiver.EndTryReceive(IAsyncResult result, RequestContext&amp;amp; requestContext)
       at System.ServiceModel.Dispatcher.ChannelHandler.EndTryReceive(IAsyncResult result, RequestContext&amp;amp; requestContext)
       at System.ServiceModel.Dispatcher.ChannelHandler.AsyncMessagePump(IAsyncResult result)
       at System.Runtime.Fx.AsyncThunk.UnhandledExceptionFrame(IAsyncResult result)
       at System.Runtime.AsyncResult.Complete(Boolean completedSynchronously)
       at System.Runtime.InputQueue`1.AsyncQueueReader.Set(Item item)
       at System.Runtime.InputQueue`1.Shutdown(Func`1 pendingExceptionGenerator)
       at System.ServiceModel.Channels.CommunicationObject.Abort()
       at System.ServiceModel.Dispatcher.ListenerHandler.AbortChannels()
       at System.ServiceModel.Dispatcher.ListenerHandler.OnAbort()
       at System.ServiceModel.Channels.CommunicationObject.Abort()
       at System.ServiceModel.Dispatcher.ChannelDispatcher.OnAbort()
       at System.ServiceModel.Channels.CommunicationObject.Abort()
       at System.ServiceModel.ServiceHostBase.OnAbort()
       at System.ServiceModel.Channels.CommunicationObject.Abort()
       at Tidal.Automation.Server.WebService.WCFHost.StopServiceHost(Object state)
       at Tidal.Automation.Common.ThreadPool.OuterWaitCallBack(Object state)
       at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
       at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
       at System.Threading.ThreadPoolWorkQueue.Dispatch()
       at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()
    </StackTrace><ExceptionString>System.ServiceModel.CommunicationObjectAbortedException: The communication object, System.ServiceModel.Channels.TransportReplyChannelAcceptor+TransportReplyChannel, cannot be used for communication because it has been Aborted.</ExceptionString></Exception></TraceRecord>
    ||737|2013/03/01 07:33:30.164|2508|MainEngineThread|||Encountered an exception while attempting to kill adapter SAP Solution Manager Adapter (95151695-aacc-4310-af5f-c38abfcea384).  Proceeding.
    ||738|2013/03/01 07:33:30.164|2508|MainEngineThread|||EXCEPTION (Tidal.Automation.Common.Product.RhapsodyException): An attempt was made to stop the adapter named 95151695-aacc-4310-af5f-c38abfcea384 but it appears to have never been started.
    Stack Trace:   at Tidal.Automation.Engine.Core.AdapterStarter.KillAdapter(Guid adapterId)
       at Tidal.Automation.Engine.Core.AdapterLifecycleManager.DisableAdapterStartups()
    ||739|2013/03/01 07:33:30.165|2508|MainEngineThread|||Encountered an exception while attempting to kill adapter SAP ABAP Adapter (cf6f0f84-a364-4105-b11f-0fef62873b37).  Proceeding.
    ||740|2013/03/01 07:33:30.165|2508|MainEngineThread|||EXCEPTION (Tidal.Automation.Common.Product.RhapsodyException): An attempt was made to stop the adapter named cf6f0f84-a364-4105-b11f-0fef62873b37 but it appears to have never been started.
    Stack Trace:   at Tidal.Automation.Engine.Core.AdapterStarter.KillAdapter(Guid adapterId)
       at Tidal.Automation.Engine.Core.AdapterLifecycleManager.DisableAdapterStartups()
    ||741|2013/03/01 07:33:30.221|2508|MainEngineThread|||Encountered an exception while attempting to kill adapter SAP Solution Manager Adapter (95151695-aacc-4310-af5f-c38abfcea384).  Proceeding.
    ||742|2013/03/01 07:33:30.221|2508|MainEngineThread|||EXCEPTION (Tidal.Automation.Common.Product.RhapsodyException): An attempt was made to stop the adapter named 95151695-aacc-4310-af5f-c38abfcea384 but it appears to have never been started.
    Stack Trace:   at Tidal.Automation.Engine.Core.AdapterStarter.KillAdapter(Guid adapterId)
       at Tidal.Automation.Engine.Core.AdapterLifecycleManager.DisableAdapterStartups()
    ||743|2013/03/01 07:33:30.222|2508|MainEngineThread|||Encountered an exception while attempting to kill adapter SAP ABAP Adapter (cf6f0f84-a364-4105-b11f-0fef62873b37).  Proceeding.
    ||744|2013/03/01 07:33:30.222|2508|MainEngineThread|||EXCEPTION (Tidal.Automation.Common.Product.RhapsodyException): An attempt was made to stop the adapter named cf6f0f84-a364-4105-b11f-0fef62873b37 but it appears to have never been started.
    Stack Trace:   at Tidal.Automation.Engine.Core.AdapterStarter.KillAdapter(Guid adapterId)
       at Tidal.Automation.Engine.Core.AdapterLifecycleManager.DisableAdapterStartups()

    Please open a TAC case. They can not only get you past the missing prerequisite, but also raise the issue about the faulty prerequisite detection you are seeing, so we can get it fixed in the product.

  • EJB3 and Toplink pre-allocation size for Sequences

    Hi,
    How do I override the Toplink pre-allocation size for Sequences for a EJB3 Project?
    My problem in detail:
    I have a DB sequence which gets incremented by 1. I am using this sequence to generate primary keys for one of my tables. If I try to use the @GeneratedValue annotation for this primary key, I get TopLink validation exception 7027. I gather that this exception occurs because the default pre-allocation size (increment-by value) is 50 for TopLink while the DB sequence uses 1. Is there any way for me to override the default pre-allocation size in an EJB3 project?
    Currently I am leaving the primary key empty when I persist the entity and letting a DB trigger insert the sequence value into the DB.
    thanks,
    Chandru.

    Chandru,
    When defining a sequence generator in JPA you can also configure an allocation size, which should match the DB sequence's increment.
    Here is a simple example:
    @Entity
    @SequenceGenerator(name = "emp-seq", sequenceName = "EMP_SEQ", allocationSize = 1)
    public class Employee implements Serializable {
        @Id
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "emp-seq")
        private int id;
    }
    Doug

  • Pre-allocating Extents for Instances

    Using 10.2.0.4 Standard Edition RAC. I have a table that is very busy with inserts. Oracle waits on events such as "gc current block busy", "gc buffer busy release", etc. There seems to be contention for the blocks between instances. I pre-allocated an extent to each instance:
    alter table busy_table allocate extent (size 100m instance 1);
    alter table busy_table allocate extent (size 100m instance 2);
    But this does not seem to reduce the wait events. Is there anything more to do?
    Some additional questions:
    1) How do I know that an extent is allocated to a specific instance? DBA_EXTENTS has no instance information, and X$KTFBUE always shows the current instance.
    select INST_ID, KTFBUEFNO, KTFBUEBNO from X$KTFBUE
    where KTFBUESEGBNO = (
        select header_block from dba_segments
        where segment_name = 'BUSY_TABLE');
    This returned the same information on each instance, except that INST_ID differed.
    2) Does the pre-allocation affect INSERT statements only? I guess other operations are free to use any extent. Correct?
    3) We are using ASSM. So I am not supposed to be tuning FREELISTS AND FREELIST GROUPS. Correct?
    DB: 10.2.0.4
    OS: RHEL 5.3

    Just checked the documentation again (the INSTANCE integer parameter of the allocate_extent_clause, http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/clauses001.htm#g1053419):
    "If you are using automatic segment-space management, then the INSTANCE parameter of the allocate_extent_clause may not reserve the newly allocated space for the specified instance, because automatic segment-space management does not maintain rigid affinity between extents and instances." (How did I miss this??)
    So Oracle ignores the INSTANCE clause for ASSM tablespaces. Preparing a test with an MSSM tablespace...

  • Benefit of pre allocating String buffer size

    Hi,
    I have come across the following code while redesigning someone's application (simplified!):
    public class MyClass {
        private int approxLength = 1000;
        private void myMethod() {
            // Allocate the buffer with the right length, so it
            // is not reallocated all the time
            StringBuffer s = new StringBuffer(approxLength);
            while (/* do lots of times */) {
                s.append(someStuff);
            }
            approxLength = s.length();
        }
    }
    Now my point is: does pre-allocating the StringBuffer size really give much benefit? What is the default size when a StringBuffer is allocated using the no-arg constructor? Will it slow the app much if I create the buffer without an initial size?
    I think the code is messy and less readable with this "approxLength" idea, and would like to get rid of it if it won't cause any performance issues.

    JN + DrClap,
    After reading what I wrote I can see where my babble may have been difficult to decipher; I was thinking faster than I could type. And you were right, a StringBuffer does conserve memory reads and writes: each String concatenation allocates new memory segments (roughly n + 2 per concatenation, unless the resultant segment is rewritten; I don't know!). That alone is probably why StringBuffer was created, besides ease of use. But as the StringBuffer's size increases, the probability of fully utilizing the allocated capacity decreases for a randomized number of appended bytes (within the confines of the Java language's capabilities and constraints). It's like a numbers game: guessing a truly random number from 1 through 10 has a much higher probability than guessing a number from 1 through 1000000. So as the size of the StringBuffer increases, efficiency decreases for random-length appends, which ultimately leads to wasted memory.
    JP
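    The default-capacity question in the original post can be checked empirically. A minimal sketch (OpenJDK semantics; String.repeat needs Java 11+; the class name is only for illustration):

    ```java
    // Shows StringBuffer's default capacity and how it grows when exceeded.
    public class BufferCapacityDemo {
        public static void main(String[] args) {
            StringBuffer def = new StringBuffer();
            System.out.println(def.capacity());   // 16: the no-arg default

            StringBuffer sized = new StringBuffer(1000);
            System.out.println(sized.capacity()); // 1000: pre-allocated up front

            // Appending past the current capacity reallocates to (old * 2) + 2.
            def.append("x".repeat(17));
            System.out.println(def.capacity());   // 34 = 16 * 2 + 2
        }
    }
    ```

    Because the capacity roughly doubles on each reallocation, an unsized buffer that grows to length n performs only O(log n) reallocations, so pre-sizing mainly saves a few array copies and rarely matters outside hot loops.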

  • SAP Standard Report for Stock Balances

    Hi All,
    Do we have a standard SAP report that gives the output in this format (or something close to this)
    Date        Stock           Plant 1                                 Plant 2
                                Material 1   Material 2   Material 3    Material 1   Material 2   Material 3
    1/1/2008    Opening Stock
                GR
                GI
                Closing Stock
    1/2/2008    Opening Stock
                GR
                GI
                Closing Stock
    1/3/2008    Opening Stock
                GR
                GI
                Closing Stock
    Regards
    Tom

    Hi Jack,
    As per your report requirement, there is no standard transaction available in SAP.
    You have to develop a Z-report for your requirement.
    But keep in mind the following constraints:
    1) If there is a huge number of materials, the report will run for hours and may lead to performance issues, because it has to read huge amounts of data from different tables per your logic.
    2) The report is not suitable for printing, because it will run to many pages, since you are including opening balance, GR details, GI details and closing balance for each material.
    Thanks & Best Regards
    Girisha M S

  • Upload Customer Remittance Advice to SAP for auto allocation to invoices

    Hi,
    Here's the story: we receive rather large Excel files from customers, and I was hoping to put together a proposal with information on how we can automate the allocation of payments to the customer debtor accounts.
    I do not have access to any SAP sites and our IT department is giving me the run-around, so I have turned to all of you for some help.
    Please remember that I am in the receivables area, and all remittances (spreadsheets) are sent to us straight from the customer via e-mail, never from the bank.
    I am hoping this is enough info for you guys?!?!?
    Let me know if you need to know anything else.
    Oh and we are working with ECC6.
    Thanks
    Belinda

    There are a couple of options:
    1) You could implement the SAP lockbox functionality. This is a standard approach and offers automated file upload functions.
    2) You could create a custom upload function. I always try to use standard SAP before going custom. You could have a program developed that takes an Excel spreadsheet in your company layout and uploads it into the system. The customer could send files in your format, or you copy-paste into your formatted spreadsheet. This is done all the time.
    pls assign points to say thanks.

  • Warning when moving allocated stock to blocked stock

    Dear all.
    we are facing a problem with our quality / MM flow. We have materials that, while in stock, need regular quality inspections (every 72 hours).
    A batch is inspected, passes inspection, and is allocated to a sales order / delivery, but will only be shipped a few days later. Then the batch is inspected again and does not pass. The inspection people move the material from unrestricted to blocked status.
    However, the delivery is goods-issued and sent to the customer, because the quality inspection people get no warning that the batch is already allocated to a sales order when they make the goods movement into blocked stock.
    Does anybody have a solution?
    Regards,
    CvM

    Hi Lakshmipathi
    thanks for the info. However, if I understand correctly, this means that every batch needs quality inspection before PGI, and that is not the case.
    Every 72 hours the batch is quality-inspected. In most cases the delivery and the PGI take place within the first 72 hours. In some exceptional cases, however, the inspection takes place between delivery note creation and PGI (not on purpose, just the 72-hour boundary; the quality inspection people are not aware of the delivery).
    At that time the batch is already allocated to a delivery, yet during the quality inspection the batch fails the qualifications and is moved to blocked stock.
    The people who move the batch into blocked stock need a warning that this batch has been allocated to a delivery, so that it can be stopped before the trucks leave our property.
    Regards,
    CvM

  • [SOLVED] SGA_MAX_SIZE pre-allocated with Solaris 10?

    Hi all,
    I'm about to build a new production database to migrate an existing 8.1.7 database to 10.2.0.3. I'm in the enviable position of having a good chunk of memory to play with on the new system (compared with the existing one), so I was looking at a suitable size for the SGA... when something pinged in my memory about SGA_MAX_SIZE and memory allocation in the OS: on some platforms the OS allocates the entire SGA_MAX_SIZE rather than just SGA_TARGET.
    So I did a little test. Using Solaris 10 and Oracle 10.2.0.3 I've created a basic database with SGA_MAX_SIZE set to 400MB and SGA_TARGET 280MB
    $ sqlplus
    SQL*Plus: Release 10.2.0.3.0 - Production on Wed Jan 30 18:31:21 2008
    Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.
    Enter user-name: / as sysdba
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    SQL> show parameter sga
    NAME                                 TYPE        VALUE
    lock_sga                             boolean     FALSE
    pre_page_sga                         boolean     FALSE
    sga_max_size                         big integer 400M
    sga_target                           big integer 280M
    So I was expecting to see the OS pre-allocate 280MB of memory, but when I checked, the segment is actually the full 400MB (i.e. SGA_MAX_SIZE) (my database owner is 'ora10g'):
    $ ipcs -a
    IPC status from <running system> as of Wed Jan 30 18:31:36 GMT 2008
    T         ID      KEY        MODE        OWNER    GROUP  CREATOR   CGROUP CBYTES  QNUM QBYTES LSPID LRPID   STIME    RTIME    CTIME
    Message Queues:
    T         ID      KEY        MODE        OWNER    GROUP  CREATOR   CGROUP NATTCH      SEGSZ  CPID  LPID   ATIME    DTIME    CTIME
    Shared Memory:
    m         22   0x2394e4   rw-r---     ora10g   10gdba   ora10g    10gdba     20  419438592  2386  2542 18:31:22 18:31:28 18:28:18
    T         ID      KEY        MODE        OWNER    GROUP  CREATOR   CGROUP NSEMS   OTIME    CTIME
    Semaphores:
    s         23   0x89a070e8 ra-r---     ora10g   10gdba   ora10g    10gdba   154 18:31:31 18:28:18
    $
    I wasn't sure whether Solaris 10 was one of the OSs with truly dynamic memory for the SGA, but I had hoped it was... this seems to say otherwise. Really I'm just after some confirmation that I'm reading this correctly.
    Thanks.
    Joseph
    Message was edited by:
    Joseph Crofts
    Edited for clarity

    I don't want to get bogged down in too many details, as the links provided in previous posts have many details of SGA tests and the results of what happened. I just want to add a bit of explanation about the Oracle SGA and shared memory on UNIX and Solaris in particular.
    As you know Oracle's SGA is generally a single segment of shared memory. Historically this was 'normal' memory and could be paged out to the swap device. So a 500 MB SGA on a 1 GB physical memory system, would allocate 500 MB from the swap device for paging purposes, but might not use 500 MB of physical memory i.e. free memory might not decrease by 500 MB. How much physical memory depended on what pages in the SGA were accessed, and how frequently.
    At some point some people realised that this paging of the SGA was actually slowing performance of Oracle, as now some 'memory' accesses by Oracle could actually cause 'disk' accesses by paging in saved pages from the swap device. So some operating systems introduced a 'lock' option when creating a shared memory segment (shmat system call if memory serves me). And this was often enabled by a corresponding Oracle initialisation parameter, such as lock_sga.
    Now a 'locked' SGA did use up the full physical memory, and was guaranteed not to be paged out to disk. So Oracle SGA access was now always at memory speed, and consistent.
    Some operating systems took advantage of this 'lock' flag to shared memory segment creation to implement some other performance optimisations. One is not to allocate paging storage from swap space anyway, as it cannot be used by this shared memory segment. Another is to share the secondary page tables within the virtual memory sub-system for this segment over all processes attached to it i.e. one shared page table for the segment, not one page table per process. This can lead to massive memory savings on large SGAs with many attached shadow server processes. Another optimisation on this non-paged, contiguous memory segment is to use large memory pages instead of standard small ones. On Solaris instead of one page entry covering 8 KB of physical memory, it covers 8 MB of physical memory. This reduces the size of the virtual memory page table by a factor of 1,000 - another major memory saving.
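    The "factor of 1,000" above is easy to sanity-check with back-of-envelope arithmetic; a small sketch (the 500 MB SGA figure reuses the example earlier in this post):

    ```java
    // Counts the page-table entries needed to map a 500 MB SGA with
    // Solaris's standard 8 KB pages versus 8 MB large pages.
    public class PageTableMath {
        public static void main(String[] args) {
            long sgaBytes  = 500L << 20; // 500 MB SGA
            long smallPage = 8L << 10;   // 8 KB standard page
            long largePage = 8L << 20;   // 8 MB large page

            System.out.println(sgaBytes / smallPage); // 64000 entries with 8 KB pages
            System.out.println(sgaBytes / largePage); // 62 entries with 8 MB pages
            // 64000 / 62 is roughly 1032, i.e. the "factor of 1,000" reduction,
            // and without shared page tables that cost is paid per attached process.
        }
    }
    ```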
    These were some of the optimisations that the original Red Hat Enterprise Linux had to introduce, to play catch up with Solaris, and to not waste memory on large page tables.
    Due to these extra optimisations, Solaris chose to call this 'locking' of shared memory segments 'intimate shared memory', or ISM for short. And I think there was a corresponding Oracle parameter, use_ism. This is now the default setting in Oracle ports to Solaris.
    As a result, this is why when Oracle grabs its shared memory segment up front (SGA_MAX_SIZE), it results in that amount of real physical memory being allocated and used.
    With Oracle 9i and 10g when Oracle introduced the SGA_TARGET and other settings and could dynamically resize the SGA, this messed things up for Solaris. Because the shared memory segment was 'Intimate' by default, and was not backed up by paging space on the swap device, it could never shrink in size, or release memory as it could not be paged out.
    Eventually Sun wrote a work around for this problem, and called it Dynamic Intimate Shared Memory (DISM). This is not on by default in Oracle, hence you are seeing all your shared memory segments using the same amount of physical memory. DISM allows the 'lock' flag to be turned on and off on a shared memory segment, and to be done over various memory sizes.
    I am not sure of the details, and so am beginning to get vague here. But I remember that this was a workaround on Sun's part to still get the benefits of ISM and the memory savings from large virtual memory pages and shared secondary page tables, while allowing Oracle to manage the SGA size dynamically and be able to release memory back for use by other things. I'm not sure if DISM allows Oracle to mark memory areas as pageable or locked, or whether it allows Oracle to really grow and shrink the size of a single shared memory segment. I presumed it added yet more flags to the various shared memory system calls.
    Although DISM should work on normal, single Solaris systems, as you know it is not enabled by default, and requires a special initialisation parameter. Also be aware that there are issues with DISM on high end Solaris systems that support Domains (F15K, F25K, etc.) and in Solaris Zones or Containers. Domains have problems when you want to dynamically remove a CPU/Memory board from the system, and the allocations of memory on that board must be reallocated to other memory boards. This can break the rule that a locked shared memory segment must occupy contiguous physical memory. It took Sun another couple of releases of Solaris (or patches or quarterly releases) before they got DISM to work properly in a system with domains.
    I hope I am not trying to teach my granny to suck eggs, if you know what I mean. I just thought I'd provide a bit more background details.
    John
