Shared Memory Problem

Operating system: Windows 2003 Server
When I am trying to connect to the Oracle 10g database, we are facing the following error:
ORA-04031: unable to allocate 4096 bytes of shared memory ("shared pool","select /*+ rule */ bucket_cn...","Typecheck heap","kgghteInit")
But if I restart the server, the error disappears.
Please help me regarding this problem.

Can you give me any idea how much memory I have to set for the shared pool?

It all depends on the number of users, the amount of transactions, the nature of your database, the nature of the application, and a whole lot of other things. As a temporary solution, you must increase the size of the shared pool in order to get your DB up and running, at least for some time.
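
As a rough sketch of that temporary fix (the 300M figure is illustrative only, and SCOPE = BOTH assumes the instance runs on an spfile), you could first check how much free memory the shared pool has, then enlarge it:

    -- Check the shared pool's free memory (run as SYSDBA)
    SELECT pool, name, ROUND(bytes/1024/1024, 1) AS mb
      FROM v$sgastat
     WHERE pool = 'shared pool' AND name = 'free memory';

    -- Temporary workaround: enlarge the shared pool (size illustrative)
    ALTER SYSTEM SET shared_pool_size = 300M SCOPE = BOTH;

If ORA-04031 keeps returning until a restart, shared pool fragmentation rather than raw size may be the real issue, so treat the resize as a stopgap rather than a cure.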
hare krishna
Alok

Similar Messages

  • Is XML Publisher causing a shared memory problem?

    Hi Experts,
    Since this week, many of the Requisitions/POs have been erroring out with the errors below or ones similar to them:
    - ORA-04031: unable to allocate 15504 bytes of shared memory ("shared pool","PO_REQAPPROVAL_INIT1APPS","PL/SQL MPCODE","BAMIMA: Bam Buffer")
    ORA-06508: PL/SQL: could not find program unit being called.
    -Error Name WFENG_COMMIT_INSIDE
    3146: Commit happened in activity/function
    'CREATE_AND_APPROVE_DOC:LAUNCH_PO_APPROVAL/PO_AUTOCREATE_DOC.LAUNCH_PO_APPROVAL'
    Process Error: ORA-06508: PL/SQL: could not find program unit being called
    A few days back, we were getting a heap memory error for one of the XML Publisher reports.
    I heard that XML Publisher requires a lot of memory for its sources/features, so I want to know whether XML Publisher can be one of the causes of the memory problem, or whether this shared memory issue is not related to XML Publisher sources at all.
    Please advise.
    Many thanks..
    Suman
    Edited by: suman.g on 25-Nov-2009 04:03

    Hi Robert,
    Thanks for your quick reply...
    Apps version: 11.5.10.2
    database version: 9.2.0.8.0
    As I am a beginner in this, I don't know much about it. Can you please guide me on this?
    The DBAs have increased the shared memory and the problem is resolved, but I am more concerned with whether XML Publisher was, or can be, one of the causes of the shared memory problem. Is there any way to check that, or does this occur randomly so that we cannot check it?
    Please advise.
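
    One hedged way to investigate (a diagnostic sketch, not a definitive test): list which PL/SQL objects are holding the most shared pool memory; if the XML Publisher packages dominate the list, that at least makes them a plausible contributor:

        -- Largest objects cached in the shared pool (run as a DBA user;
        -- the 1 MB threshold is arbitrary)
        SELECT owner, name, type, sharable_mem
          FROM v$db_object_cache
         WHERE sharable_mem > 1048576
         ORDER BY sharable_mem DESC;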

  • SAP HA ASCS Instance service Shared Memory Problem

    Hi Gurus,
    I have completed the setup of the ASCS and ERS instances for SAP HA. The system was working fine, but yesterday when I tried to start the ASCS instance, it showed the following error in the dev_enqsrv log file:
    ShadowTable:attach: ShmCreate - pool doesn't exist
    Enqueue: EnqMemStartupAction Utc=1267507616
    Enqueue Info: replication enabled
    Enqueue Info: enque/replication_dll not set
    ShadowTable:attach: ShmCreate - pool doesn't exist
    ERROR => EnqRepAttachOldTable: failed to get information on old replication table: rc=-1 [enxxmrdt.h   393]
    EnqRepRestoreFromReplica: failed to attach to old replication table: rc=-1
    enque/backup_file disabled in enserver environment
    ***LOG GEZ=> Server start [encllog.cpp  493]
    To resolve this error, I have performed the following steps:
    1. cleanipc 04 remove (where 04 is the instance number of the ASCS instance)
    2. saposcol -k to stop the SAP OS collector and release the shared memory
    3. Also performed some steps like saposcol -d, kill, leave, quit
    After doing these steps, the same error is still generated in the dev_enqsrv log file. The ASCS instance does start, but when checking the log file available.log, it changes the status from available to unavailable every 30 seconds.
    Appreciate any response.
    Regards,
    Raj.

    Hi,
    It is important to know your NW kernel version and the setup of your file systems.
    The error message might not be a real error. It could just be an indication that there is no replicated enqueue table available, and therefore the enqueue server of the starting ASCS instance is not able to attach to any existing table; it would create a new one. But: if you start your ASCS on the node where the ERS was running and you see that error message, it indicates that the replicated enqueue table couldn't be attached by your starting enqueue server, which is a critical error in a failover scenario.
    Check the following help.sap.com page for details on how enqueue table replication works:
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/47/e023f3bf423c83e10000000a42189c/frameset.htm
    The available.log has these confusing entries if you have your /usr/sap/<SID>/<Instance>/work directory on a clustered file system which is present on both nodes. In that case, the sapstartsrv of the ASCS on the active node will write to the log file that the instance is available, whereas the sapstartsrv of the ASCS on the passive node will write that the instance is not available. It's a bug, but there are several possible workarounds.
    If you need documentation for SAP HA on SLES or RHEL, I can post the links to the whitepapers here.
    Best regards,
    Matthias

  • ORA-04031 & ORA-01280 errors - Shared Memory Problem

    Sir,
    I am using Oracle Streams for data replication.
    I am facing a problem while configuring the capture process. It is giving error ORA-04031, described as:
    ORA-04031: unable to allocate <n> bytes of shared memory ("Shared Pool","Unknown Object","Streams Pool","Internal low LCR")
    ORA-01280: Fatal LogMiner error
    I followed the steps given below:
         # Instance setup
              Database mode = archive
              shared_pool_size = 52M
              shared_pool_reserved_size = 5M (10% of the shared pool area)
              processes = 500
              parallel_max_servers = 35
              parallel_min_servers = 1
              job_queue_processes = 1
              aq_tm_processes = 1
              global_names = true
              Archive log mode = true
              log_archive_dest_1 = 'location=E:\oracle\archive1 reopen=30'
              log_archive_dest_2 = 'location=E:\oracle\archive2 reopen=30'
         # Streams administrator setup
         # LogMiner tablespace setup
         # Supplemental logging
         # Configure propagation process
         # Configure capture process
         # Configure instantiation SCN
         # Configure apply process
         # Start apply process
         # Start capture process
    Please give me a solution to overcome this problem.

    I suspect you are running 10.1 or an earlier version of the database? If that is the case, LogMiner has a memory leak that runs the SGA out of memory over time.
    Look in Metalink for LogMiner memory leak bugs; this forum has the exact bug if you search for it. It is patched in release 10.2.0.2.0. I just loaded that release and have other issues to fix before I can tell whether the bug appears again or not.
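
    In the meantime, a hedged configuration sketch (sizes are illustrative, an spfile is assumed, and a 52 MB shared pool is very small for Streams in any case): from 10.1 onwards, Streams can be given its own pool via streams_pool_size, which keeps LCR buffers from competing with the shared pool:

        -- Sketch only: dedicate a pool to Streams and grow the shared pool
        -- (values illustrative; restart needed for SPFILE-scope changes)
        ALTER SYSTEM SET streams_pool_size = 100M SCOPE = SPFILE;
        ALTER SYSTEM SET shared_pool_size = 150M SCOPE = SPFILE;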
    Hope this helps,
    J.

  • Shared memory problem: MaxDB

    Hi
    I am using the SAP ABAP trial version. It was working well, but now when I start the server it gives the error:
    ===================================================
    ============== Starting System NSP ================
    ===================================================
    =============== Starting database instance ...
    The requested service has already been started.
    More help is available by typing NET HELPMSG 2182.
    The MaxDB Database Starter, Version 7.6.02.14
    Copyright 2000-2007 by SAP AG
    Error! Connection failed to node (local) for database NSP:
    -24700,ERR_DBMSRV_NOSTART: Could not start DBM server.
    -24832,ERR_SHMNOTAVAILABLE: Shared memory not available
    -24686,ERR_SHMNOCLEANUP: Could not cleanup the DBM server Shared Memory
    -24827,ERR_SHMALLOCFAILED: ID E:\sapdb\data\wrk\NSP.dbm.shm, requested size 4287
    037658
    Error: Error while calling dbmcli
    "E:\sapdb\programs\pgm\dbmcli"  -d NSP -u , db_online
    ============== Start database failed !
    Press any key to continue . . .
    Can someone tell me the exact steps, with commands, to handle this issue? I am a novice in database administration.
    Thanks
    Vishal Kapoor

    Thanks, Mark, for the advice.
    After the file deletion and system restart as recommended, a new error is coming:
    I stopped every running MaxDB service, deleted the two given files, and rebooted the system.
    =============== Starting database instance ...
    The SAP DB WWW service is starting.
    The SAP DB WWW service was started successfully.
    The MaxDB Database Starter, Version 7.6.02.14
    Copyright 2000-2007 by SAP AG
    ERR
    -24988,ERR_SQL: SQL error
    -9022,System error: BD Corrupted datapage
    3,Database state: OFFLINE
    6,Internal errorcode, Errorcode 9163 "corrupted_datapage"
    20026,PAM::GetPage: Data page is not assigned.
    6,bd13GetNode, Errorcode 9163 "corrupted_datapage"
    20066,HistDir: registered files 50, max used are 50
    20017,RestartFilesystem failed with 'System error: BD Corrupted datapage'
    Error: Error while calling dbmcli
    "E:\sapdb\programs\pgm\dbmcli"  -d NSP -u , db_online
    ============== Start database failed !
    Press any key to continue . . .
    Thanks
    Vishal Kapoor

  • Shared memory problem - memory leak?

    I've got the following error after calling a stored procedure about 26,000 times. Does this mean the Oracle 8.1.6 thin driver has a memory leak in CallableStatement? Thanks.
    ORA-04031: unable to allocate 4096 bytes of shared memory ("shared pool","BEGIN checktstn(:1, :2); END;","PL/SQL MPCODE","BAMIMA: Bam Buffer")

    Me Too!
    java.sql.SQLException: ORA-04031: unable to allocate 744 bytes of shared memory ("unknown object","sga heap","library cache")
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:114)
    at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:208)
    at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:542)
    at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1311)
    at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteDescribe(TTC7Protocol.java:595)
    at oracle.jdbc.driver.OracleStatement.doExecuteQuery(OracleStatement.java:1600)
    at oracle.jdbc.driver.OracleStatement.doExecute(OracleStatement.java:1758)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1805)
    at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:410)
    Does this error pertain to memory leaks?
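
    Not necessarily a driver leak: ORA-04031 usually means the shared pool on the server is exhausted or fragmented. As a hedged diagnostic sketch (standard dynamic views; the thresholds are arbitrary), you can check from the database side what is filling the pool:

        -- How much free memory is left in the shared pool
        SELECT bytes FROM v$sgastat
         WHERE pool = 'shared pool' AND name = 'free memory';

        -- Statement families differing only in literals bloat the pool;
        -- grouping by a prefix is a crude way to spot them
        SELECT SUBSTR(sql_text, 1, 60) AS sql_prefix, COUNT(*) AS copies
          FROM v$sql
         GROUP BY SUBSTR(sql_text, 1, 60)
        HAVING COUNT(*) > 50
         ORDER BY copies DESC;

    Also make sure each CallableStatement is closed after use; tens of thousands of unclosed statements will keep cursors pinned in the pool.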

  • R9.2.0 on RH 7.3: Memory problem: Linux or the Installer?

    Having the following problem; perhaps someone has had the same. In that case, please help.
    I have a P4 with 1 GB of RAM and 2 GB of swap. No disk space problem, no shared memory problem. I installed the Oracle9i Client (9.2R2) on Red Hat 7.3 (standard installation as Server). I have already noted here that with the Oracle installer running, the system uses almost all the RAM available (99.5% of the total), but at least it can complete the installation. However, it never uses the swap (100% free).
    When I tried to install the Database (Enterprise or Standard), the installation continues as long as there is RAM available. Once 99.6% of the total is used, it stops responding (it hangs at 41% of the installation). Even here, no portion of the swap is used.
    Is there a problem with the kernel shipped with RH 7.3 in using swap? Does the Oracle installer really need so much RAM? Or am I missing something in the installation process of the Database? I am using jdk-1.3.1_04 downloaded from the Sun website.
    I would appreciate it if you cc'd your response also to [email protected]; I may have problems accessing the forum in the coming two days.
    Thank you in advance.
    Jama Musse Jama.

    Check the Support Matrix (if you can find it) and it will tell you that Red Hat and most Linux distributions are not supported. I bought Red Hat 7.3 Professional and a WinBook with lots of disk and speed, and tried to get 9.2 to run on it. No way. It's not certified, so it's not supported. I even explained that what I was trying to do was get rid of Windows XP and move to Red Hat 7.3, and asked whether they would send me Oracle 9.2 when it was available. I got the CDs 3 days later, but it won't work. Now I am left with the choice of buying the 9iAS package or going back to XP.
    I do not think that Oracle is living up to the spirit of Linux. They say Linux, but they mean a very specific subset. Who do they think they are, Bill Gates?

  • NOT ENOUGH SHARED MEMORY

    Hi,
    I am facing "NOT ENOUGH SHARED MEMORY " problem even with 1-2 users in portal 3.0.6.x.
    The init.ora settings are:
    shared_pool_size=104857600 large_pool_size=614400
    java_pool_size=20971520
    Is it due to hanging sessions?
    How to see the hanging sessions inside database?
    Can you tell me what is going wrong?
    Thanks
    Vikas
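
    On the hanging-sessions question, a hedged diagnostic sketch (the one-hour threshold is arbitrary): long-idle sessions can be listed from v$session:

        -- Sessions idle for over an hour (last_call_et is in seconds)
        SELECT sid, serial#, username, status, last_call_et
          FROM v$session
         WHERE username IS NOT NULL
           AND status = 'INACTIVE'
           AND last_call_et > 3600
         ORDER BY last_call_et DESC;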

    The earlier poster wrote:
    "Think I got it figured out. Oracle 10g XE doesn't have a DB_HANDLE initialization parameter. The problem is that the initialization parameters are located in $ORACLE_HOME/dbs/spfileXE.ora, but sqlplus is looking for initORCL.ora."
    You mean the instance is looking for initORCL.ora and not for the SPFILE, or ;-)
    "So does anyone besides Faust know how to configure sqlplus to look for spfileXE.ora instead of initORCL.ora? I can't find an SQL*Plus setting that will do this."
    Sorry, again me ;-) How to set the SPFILE, and ways around it, you can find here:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/create.htm#i1013934
    Cheers!
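
    A hedged aside on that workaround (file names as given in the thread; in Oracle parameter files a leading '?' expands to ORACLE_HOME): a one-line pfile can redirect a manual startup to the XE spfile:

        -- Contents of a one-line pfile, e.g. $ORACLE_HOME/dbs/initORCL.ora:
        --   SPFILE = '?/dbs/spfileXE.ora'
        -- Then start the instance with it from SQL*Plus as SYSDBA
        -- (substitute the real path to the pfile):
        STARTUP PFILE = '/path/to/initORCL.ora';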

  • SHARED MEMORY AND DATABASE MEMORY giving problems

    Hello Friends,
    I am facing a problem with EXPORT TO MEMORY and IMPORT FROM MEMORY.
    I have developed one program which EXPORTs an internal table and some variables to memory. This program calls another program via a background job; IMPORT is used in the other program to get the first program's data.
    This IMPORT command works perfectly in the foreground, but it is not working in the background.
    So I have reviewed a couple of forums and tried both SHARED MEMORY and DATABASE. But no use; the background run still gives the problem.
    When I remove the VIA JOB parameter in the SUBMIT statement it works, but I need to execute this program in the background via a background job. Please help me: what should I do?
    Please find my code below.
    Option 1:
    EXPORT TAB = ITAB
           TO DATABASE indx(Z1)
           FROM w_indx
           CLIENT sy-mandt
           ID 'XYZ'.
    Option 2:
    EXPORT ITAB FROM ITAB
           TO SHARED MEMORY indx(Z1)
           FROM w_indx
           CLIENT sy-mandt
           ID 'XYZ'.
    SUBMIT ZPROG2 TO SAP-SPOOL
           SPOOL PARAMETERS print_parameters
           WITHOUT SPOOL DYNPRO
           VIA JOB name NUMBER number
           AND RETURN.
    ===
    Hope everybody understood the problem.
    My sincere request: please post only relevant answers; do not post dummy answers for points.
    Thanks
    Raghu

    Hi.
    You cannot exchange data between your programs using ABAP memory, because this memory is only shared between programs within the same internal session.
    When you call your report using VIA JOB, a new session is created, so the exported data is not visible there.
    Instead of using EXPORT and IMPORT to memory, put both programs into the same function group and use global data objects of the _TOP include to exchange data.
    Another option is to use SPA/GPA parameters (SET PARAMETER ID / GET PARAMETER ID), because SAP memory is available between all open sessions. Of course, it depends on which type of data you want to export.
    Hope it was helpful,
    Kind regards.
    F.S.A.

  • Oracle 11g problem with creating shared memory segments

    Hi, I'm having some problems with the Oracle listener. When I try to start it or reload it, I get the following error messages:
    TNS-01114: LSNRCTL could not perform local OS authentication with the listener
    TNS-01115: OS error 28 creating shared memory segment of 129 bytes with key 2969090421
    My system is a: SunOS db1-oracle 5.10 Generic_144489-06 i86pc i386 i86pc (Total 64GB RAM)
    Current SGA is set to:
    Total System Global Area 5344731136 bytes
    Fixed Size 2233536 bytes
    Variable Size 2919238464 bytes
    Database Buffers 2399141888 bytes
    Redo Buffers 24117248 bytes
    prctl -n project.max-shm-memory -i process $$
    process: 21735: -bash
    NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
    project.max-shm-memory
    privileged 64.0GB - deny
    I've seen that a solution might be to "make sure that system resources like shared memory and heap memory are available for the LSNRCTL tool to execute properly."
    I'm not exactly sure how to check whether there are enough resources.
    I've also seen a solution stating:
    "Try adjusting the system-imposed limits such as the maximum number of allowed shared memory segments, or their maximum and minimum sizes. In other cases, resources need to be freed up first for the operation to succeed."
    I've tried to modify the "max-sem-ids" parameter and set it to the recommended 256 without any success, and I've kind of run out of ideas as to what the error can be.
    /Regards

    I see. I do have the max-shm-ids quite high already, so it shouldn't be a problem?
    user.oracle:100::oracle::process.max-file-descriptor=(priv,4096,deny);
    process.max-stack-size=(priv,33554432,deny);
    project.max-shm-memory=(priv,68719476736,deny)
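
    For completeness, a hedged sanity check from the database side (note the listener's failing segment is only 129 bytes, so a count limit such as max-shm-ids may matter more here than max-shm-memory): compare the SGA the instance actually requests with the project limit shown above:

        -- Total shared memory the instance allocates for its SGA
        SELECT ROUND(SUM(value)/1024/1024/1024, 2) AS sga_gb FROM v$sga;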

  • Shared memory lock problem

    Hi all,
    I have an application which uses XLA to connect to a TimesTen database; the application name is ISM. When I type the ttstatus command, it shows me the message below. Can anyone explain why "ISM LOCKED" shows in ttstatus and how to resolve this problem? If it is this C program, what's wrong in my code?
    TimesTen status report as of Mon Apr 14 17:32:32 2008
    Daemon pid 11987 port 16001 instance 6.04
    TimesTen server pid 11994 started on port 15005
    No TimesTen webserver running
    Data store /data/TimesTen/6.04/DataStore/usagecollector
    There are 10 connections to the data store
    Data store is in shared mode
    Shared Memory KEY 0x130282ba ID 654311539 (ISM LOCKED)
    Type PID Context Connection Name ConnID
    Process 12061 0x00000001001c8500 ism 6
    Replication 12037 0x000000010023d020 RECEIVER 5
    Replication 12037 0x000000010024d030 TRANSMITTER 4
    Replication 12037 0x0000000100258e60 REPHOLD 2
    Replication 12037 0x00000001002804d0 REPLISTENER 3
    Replication 12037 0x00000001002a8570 LOGFORCE 1
    Subdaemon 11993 0x0000000100176870 Worker 2044
    Subdaemon 11993 0x00000001001d5fa0 Flusher 2045
    Subdaemon 11993 0x00000001001fd610 Checkpoint 2047
    Subdaemon 11993 0x0000000100224c80 Monitor 2046
    RAM residence policy: Always
    Replication policy : Always
    Replication agent is running.
    Cache agent policy : Manual
    ------------------------------------------------------------------------

    Thanks for all your responses. Yeah, I also found this out on this forum: ISM means
    ISM = Intimate Shared Memory.
    I've resolved the bug in my XLA application: I called ttxladeletebookmark before calling ttxlastatus to unsubscribe from one table, and TimesTen just unsubscribed all applications from that table. :(
    Swapping the call order resolved the problem.

  • SAPOSCOL Problems with Shared Memory

    Has anyone had any problems either starting saposcol or keeping it running? We have noticed multiple problems for several years now, and are at a point where saposcol is not running at all on any of our servers, nor can we start it.
    We're seeing "SAPOSCOL not running? (Shared memory not available)". I am working with SAP through two different customer messages, trying to determine why we cannot start saposcol.
    Does anyone have any ideas?
    Thanks,
    Traci Wakefield
    CV Industries

    I do have entries in the os-collector log:
          SAPOSCOL version  COLL 20.89 640 - AS/400 SAPOSCOL Version 18 Oct 2005, 64 bit, single threaded, Non-Unicode
          compiled at   Nov 26 2005
          systemid      324 (IBM iSeries with OS400)
          relno         6400
          patch text    COLL 20.89 640 - AS/400 SAPOSCOL Version 18 Oct 2005
          patchno       102
          intno         20020600
          running on    CENTDB1 OS400 3 5 0010000A1E1B
    13:25:06 28.02.2007   LOG: Profile          : no profile used
    13:25:06 28.02.2007   LOG: Saposcol Version  : [COLL 20.89 640 - AS/400 SAPOSCOL Version 18 Oct 2005]
    13:25:06 28.02.2007   LOG: Working directory : /usr/sap/tmp
    13:26:01 28.02.2007   LOG: Shared Memory Size: 339972.
    13:26:01 28.02.2007   LOG: INFO: size = (1 + 60 + 3143) * 106 + 348.
    13:26:01 28.02.2007   LOG: Connected to existing shared memory.
    13:26:01 28.02.2007   LOG: Reused shared memory. Clearing contents.
    13:26:04 28.02.2007   LOG: Collector daemon started
    13:26:04 28.02.2007   LOG: read coll.put Wed Feb 28 13:22:01 2007
    13:26:04 28.02.2007   LOG: Collector PID: 2469
    13:27:05 28.02.2007   LOG: Set validation records.
    14:00:32 28.02.2007 WARNING: Out of int limits in pfas41c1.c line 1528
    12:58:37 10.03.2007   LOG: Stop Signal received.
    12:58:38 10.03.2007   LOG: ==== Starting to deactivate collector ==========
    12:59:01 10.03.2007   LOG: ==== Collector deactivated  ================
    Also, I have tried saposcol -d and the other commands below:
    saposcol -d
    kill
    clean
    leave
    quit
    and deleted the files coll.put and dev_coll.
    From my open SAP message:
    I have also done the following:
    1. Check the authorizations of SAPOSCOL as mentioned in SAP Notes:
    637174 SAPOSCOL cannot access Libraries of different SAP systems
    175852 AS/400: Authorization problems in SAPOSCOL
    2. Remove the shared memory (coll.put)
    (according to SAP Note 189072). You can find 'coll.put' in the path
    '/usr/sap/tmp'.
    3. End the following jobs in QSYSWRK:
    QPMASERV, QPMACLCT, QYPSPFRCOL and CRTPFRDTA
    4. Delete the temporary user space:
    WRKOBJ OBJ(R3400/PERFMISC) OBJTYPE(USRSPC)
    Afterwards you can start SAPOSCOL at operating system level.
    Just log on to the iSeries as <SID>OFR and run the following command:
    SBMJOB CMD(CALL PGM(<kernel_lib>/SAPOSCOL) PARM('-l'))
    JOB(SAPOSCOL) JOBQ(R3<SID>400/R3_<nn>)
    LOG(4 0 SECLVL) CPYENVVAR(YES)
    Thanks,
    Traci

  • Shared memory usage of Producer Consumer Problem

    Hi,
    I am able to write the producer-consumer problem as inter-thread communication, but I don't know how to write it using shared memory (i.e. I wish to start the producer and the consumer as different processes). Please help me; sorry for the temp-file idea.
    Thanks
    MOHAN

    I have read that in the shiny new JDK 1.4 you would be able to use shared memory, but I have to confess that I'm not sure whether I'm mixing up different pieces of information (lots of stuff read, you know?). Try reviewing the JDK 1.4 SE new features.
    Also, you might reconsider and use "traditional" approaches like RMI, sockets, and things like those.
    Regards.

  • Problem with EXPORT TO SHARED MEMORY statement.

    Hi, I am using the syntax:
          EXPORT w_netwr FROM w_netwr
                 w_name FROM w_name
            TO SHARED MEMORY indx(xy)
            FROM wa
            CLIENT sy-mandt
            ID 'Z_MID'.  
    and later importing them in a method, but I am getting wrong values on import. Can you please help: do we need any
    other addition to make this work correctly?

    Delete the memory before exporting and after importing, like a CLEAR statement we use in code.

  • Large SGA On Linux and Automatic Shared Memory Management problem

    Hello
    I use Oracle 10gR2 on 32-bit Linux, and I used the guide at http://www.oracle-base.com/articles/linux/LargeSGAOnLinux.php
    for a larger SGA. It works fine, but when I set the sga_target parameter to use Automatic Shared Memory Management,
    I receive this error:
    ERROR at line 1:
    ORA-02097: parameter cannot be modified because specified value is invalid
    ORA-00824: cannot set sga_target due to existing internal settings, see alert
    log for more information
    and in the alert log the following has been written:
    Cannot set sga_target with db_block_buffers set
    My question is: when db_block_buffers is set, can Automatic Shared Memory Management not be used?
    Is there any solution for using both a large SGA and Automatic Shared Memory Management?
    thanks
    Edited by: TakhteJamshid on Feb 14, 2009 3:39 AM

    TakhteJamshid wrote:
    "Does it mean that when we use a large SGA, using Automatic Shared Memory Management is impossible?"
    Yes, it's true. An attempt to do so will result in this:
    ORA-00825: cannot set DB_BLOCK_BUFFERS if SGA_TARGET or MEMORY_TARGET is set
    Cause: SGA_TARGET or MEMORY_TARGET set with DB_BLOCK_BUFFERS set.
    Action: Do not set SGA_TARGET or MEMORY_TARGET or use new cache parameters, and do not use DB_BLOCK_BUFFERS, which is an old cache parameter.
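
    For anyone hitting the same wall, a hedged sketch of the usual resolution (only viable if you can give up the db_block_buffers-based large-SGA setup; the 1500M value is illustrative and an spfile is assumed):

        -- Remove the legacy cache parameter, then enable ASMM
        -- (restart the instance afterwards)
        ALTER SYSTEM RESET db_block_buffers SCOPE = SPFILE SID = '*';
        ALTER SYSTEM SET sga_target = 1500M SCOPE = SPFILE;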
    HTH
    Aman....
