DBMS_LOB and memory management

Hello all,
When I declare a CLOB variable (myclob), do I always have to free it with dbms_lob.freetemporary(myclob) at the end of my procedure?
I know I have to free it if it was created with dbms_lob.createtemporary(myclob, ...), but what happens when I run statements like "SELECT to_clob('Test') INTO myCLOB FROM DUAL;"?
I can't find a good answer to this question in the Oracle documentation! Can someone explain the difference? I hope the memory is freed correctly without my having to do an explicit "freetemporary", but I'm not sure!
Thanks for your answers!

Thanks
But imagine you have something like:
DECLARE
   myCLOB CLOB;
BEGIN
   myCLOB := to_clob('test clob');
   -- Is this necessary?
   -- dbms_lob.freetemporary(myCLOB);
END;
/
1) Do I have to explicitly free "myCLOB", or is that done automatically by Oracle?
2) Does the declaration of myCLOB involve the creation of a new temporary LOB, or does that only happen when we create the LOB explicitly with dbms_lob.createtemporary(...)?
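To make the two cases concrete, here is a minimal sketch (the ISTEMPORARY test and the v$temporary_lobs view are just my way of observing what happens, not something I found documented for this exact case):
DECLARE
   myCLOB CLOB;
BEGIN
   -- Case 1: the SQL engine creates a temporary LOB implicitly
   SELECT to_clob('Test') INTO myCLOB FROM DUAL;
   IF DBMS_LOB.ISTEMPORARY(myCLOB) = 1 THEN
      -- optional early release; otherwise I assume it is freed
      -- when the variable goes out of scope at END
      DBMS_LOB.FREETEMPORARY(myCLOB);
   END IF;

   -- Case 2: explicit creation, where the documented contract is
   -- to pair CREATETEMPORARY with FREETEMPORARY yourself
   DBMS_LOB.CREATETEMPORARY(myCLOB, TRUE, DBMS_LOB.CALL);
   DBMS_LOB.WRITEAPPEND(myCLOB, 4, 'Test');
   DBMS_LOB.FREETEMPORARY(myCLOB);
END;
/
-- session-wide temporary LOB usage can be watched in v$temporary_lobs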
Regards

Similar Messages

  • How to save large amount of data and memory management

    hi guys,
    I prepared a vi to take measurements of temperature and pressure simultaneously from an Agilent 34970A with a 34901A. The sampling rate is 1 sample per second for each channel (20 temperature channels and 2 pressure (current) channels). The device has 20 channels, and the test will most probably last 2 days, which means 86,400 samples per channel per day. Could the attached vi do this task? I really do not know how "Write to Text File.vi" works and how it uses the PC's memory. And after collecting all the data, could I safely export it to Microsoft Office Excel for analysis? I also do not know whether I can run this vi without interruption - is that possible? I am open to all other suggestions relevant to the vi, too. And I wonder whether it is possible to build a standalone application to take measurements from DAQ devices (here the Agilent 34970A-34901A) on a PC without LabVIEW?
    Egemen
    Attachments:
    Agilent_Labview_v2.4.vi (85 KB)

    Hey,
    What I meant by check for an error while measuring is the following:
    you start the program
    If one of your Agilent VIs returns an error (maybe a bad initialisation or so), you will still try to continue to measure.
    Even if it is not working, you still continue measuring. Instead, you could also check at the end of each loop whether an error occurred, like this:
    this will only look for an error and stop right away. If you encounter typical errors with the apparatus, you might want to handle them. A sufficient way to do so might be a state machine that just re-initializes the setup if an error occurs and then continues with its task. Check out this article: http://www.ni.com/white-paper/3024/en
    Anyway, I would recommend just trying it out and seeing what problems arise, as you might otherwise spend hours improving code that already works well (:

  • Variables reset and memory management

        Hi,
        I'm writing an application that works on sophisticated
    data structures, and thus relies a lot on memory allocation.
    As I wanted to do as clean a job as possible, I divided
    the work into different modules, and I'm encountering a problem
    while working with two of them.
        I wrote a data library that provides the data structures the
    software needs to work, and the last one I wrote was an XML wrapper
    for the CVIXML library.
        Sadly, while testing those two libraries to debug and validate
    them, I encountered a 'strange' bug in the reading functions. As
    a concrete example is worth any explanation, here it is:
        First of all, I wrote several little functions like this one
    to save some local declarations and copy/paste in my code
    (there's xmlget_elt_val(), xmlget_attr_tag() and xmlget_attr_val()).
    [cf: xmlget_elt_tag() function below]
        I use them a lot throughout my code, but at one point, in that
    function (not even the first time I'm calling it...), I got quite
    illogical behaviour.
    Bernard Pratz
    IT Developer
    Engineering service
    Chelton Telecom & Microwave

        Even after deciding to work around it, I ran into similar
    trouble just one function later, at another call. For
    context, I'm calling the xmlextract_Banc() function several times
    in the code, to get different amounts of data.
        So, in the following function (called from xmlextract_Banc()) I
    discovered that one attribute was never read. Once more, using the
    debugger, I found out that this time the xmlget_attr_val() function
    is resetting the imax variable at the second iteration of the for loop.
    [cf: xmlextract_Appareil() below]
        Then I understood that any workaround I could do would only move the
    problem to another point further in the code, and it would not be resolved. I'm
    thinking this may be some kind of overflow of something, somewhere... but I
    can't tell what or where (...if I'm right).
        I'm using the 'data' library, which defines the structs and all the
    functions such as create_Type() or push_Type() used in those snippets, with
    the GUI and the measurement library without any trouble, but that doesn't
    mean it's not guilty...
        I'm really clueless about this problem, to the point that I'm trying to
    find other ways to get an XML library to work with the application. I'm
    currently trying to wrap a 'libxml++' DLL and include it in the
    project, instead of using the genuine LabWindows/CVI XML interface. But as I wrote
    almost 85-90% of the library before running into this trouble, it'd be really
    more than helpful if someone could help me find out what's wrong.
        Thanks for reading this post.
        I can give more information about the project if needed,
    and thanks again for any answer I could get.
    PS: I had to split the message in order to fit the 5000-character limit per post.
        I know there may be a reason for that limit to exist, but I preferred to
        ignore it and make the explanation of my problem more verbose.
    Bernard Pratz
    IT Developer
    Engineering service
    Chelton Telecom & Microwave

  • OpenGL Photoshop CS5 and memory management by OSX 10.6

    I was able to find out quite a bit about the secrets around this issue - see my post on "NVIDIA Geforce 9400M and Adobe Photoshop/CS5 question".
    What I still do not understand is the following Apple remark (in this case on the 17" PowerBook):
    NVIDIA GeForce GT 330M graphics processor with 512MB of GDDR3 memory
    Intel HD Graphics with 256MB of DDR3 SDRAM shared with main memory
    Memory available to Mac OS X may vary depending on graphics needs. Minimum graphics memory usage is 256MB.
    To me as a layman it reads: "You buy 512MB or 256MB of graphics memory, but whether your programs will get it is decided by OS X 10.6."
    I am also wondering whether there is a utility or a Terminal input that allows one to influence how much memory OpenGL gets from the regular DRAM.

    Since the posting I have not had a single problem with my 144MB of VRAM - though it is somewhat noticeable that more VRAM would accelerate things, such as when I use complex plugins from NIK or record video in parallel with EyeTV. What I avoid is working with Photoshop, exporting downloaded TV clips with EyeTV, AND watching live television at the same time.

  • What is the difference between 32-bit and 64-bit SQL Server memory management

    What is the difference between 32-bit and 64-bit SQL Server memory management?
    Thanks
    Shashikala

    This is the basic difference... check if it helps:
    A 32-bit CPU running 32-bit software (also known as the x86 platform) is so named because it is based on an architecture that can manipulate values that are up to 32 bits in length. This means that a 32-bit memory pointer can store a value between 0 and 4,294,967,295 to reference a memory address, which equates to a maximum addressable space of 4GB on 32-bit platforms.
    A 64-bit pointer, on the other hand, has a limit of 18,446,744,073,709,551,616. This number is so large that in memory/storage terminology it equates to 16 exabytes. You don't come across that term very often, so to help understand the scale, here is the value converted to more commonly used measurements:
    16 exabytes = 16,777,216 petabytes (16 million PB) = 17,179,869,184 terabytes (17 billion TB) = 17,592,186,044,416 gigabytes (17 trillion GB)
    As you can see, it is significantly larger than the 4GB virtual address space usable in 32-bit systems; it's so large, in fact, that any hardware capable of using it all is sadly restricted to the realm of science fiction. Because of this, processor manufacturers decided to implement only a 44-bit address bus, which provides a virtual address space on 64-bit systems of 16TB. This was regarded as more than enough address space for the foreseeable future, and logically it's split into an 8TB range for user mode and 8TB for kernel mode. Each 64-bit process running on an x64 platform is able to address up to 8TB of VAS.
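    If it helps to see this on a live instance, here is a hedged sketch (sys.dm_os_process_memory exists from SQL Server 2008 onwards; on 32-bit builds the reserved figure is capped by the 32-bit user-mode limit, while on x64 it can run into the terabytes):
    -- Virtual address space seen by this SQL Server process
    SELECT virtual_address_space_reserved_kb  / 1024 AS vas_reserved_mb,
           virtual_address_space_committed_kb / 1024 AS vas_committed_mb,
           physical_memory_in_use_kb          / 1024 AS physical_in_use_mb
    FROM sys.dm_os_process_memory;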
    Please click the Mark as answer button and vote as helpful if this reply solves your problem

  • Questions about db_keep_cache_size and Automatic Shared Memory Management

    Hello all,
    I'm coming upon a server that I'm needing to pin a table and some objects in, per the recommendations of an application support call.
    Looking at the database, which is a 5 node RAC cluster (11gr2), I'm looking to see how things are laid out:
    SQL> select name, value, value/1024/1024 value_MB from v$parameter
      2  where name in ('db_cache_size','db_keep_cache_size','db_recycle_cache_size','shared_pool_size','sga_max_size');
    NAME                   VALUE       VALUE_MB
    sga_max_size           1694498816  1616
    shared_pool_size       0           0
    db_cache_size          0           0
    db_keep_cache_size     0           0
    db_recycle_cache_size  0           0
    Looking at granularity level:
    SQL> select granule_size/value from v$sga_dynamic_components, v$parameter where name = 'db_block_size' and component like 'KEEP%';
    GRANULE_SIZE/VALUE
    2048
    Then... I looked, and I thought this instance was set up with Automatic Shared Memory Management... but I see that sga_target is not set:
    SQL> show parameter sga
    NAME TYPE VALUE
    lock_sga boolean FALSE
    pre_page_sga boolean FALSE
    sga_max_size big integer 1616M
    sga_target big integer 0
    So, I'm wondering first of all... would it be a good idea to switch to Automatic Shared Memory Management? If so, is it as simple as "alter system set sga_target = ..."? Again, this is a RAC system; is there a different way to do this than on a single instance?
    If that isn't the way to go...let me continue with the table size, etc....
    The table I need to pin is:
    SQL> select sum (blocks) from all_tables where table_name = 'MYTABLE' and owner = 'MYOWNER';
    SUM(BLOCKS)
    4858
    And block size is:
    SQL> show parameter block_size
    NAME TYPE VALUE
    db_block_size integer 8192
    So, the space I'll need in memory for pinning this is:
    4858 * 8192 / 1024 / 1024 = 37.95 MB....... which is well below my granularity mark of 2048
    So, would this be as easy as setting db_keep_cache_size = 2048 with an alter system call? Do I need to set db_cache_size first? What do I set that to?
    Thanks in advance for any suggestions and links to info on this.
    cayenne
    Edited by: cayenne on Mar 27, 2013 10:14 AM
    Edited by: cayenne on Mar 27, 2013 10:15 AM

    JohnWatson wrote:
    This is what you need: alter system set db_keep_cache_size=40M;
    I do not understand the arithmetic you do here:
    select granule_size/value from v$sga_dynamic_components, v$parameter where name = 'db_block_size' and component like 'KEEP%';
    It shows you the number of buffers per granule, which I would not think has any meaning.
    I'd been looking at some different sites while studying this, and what I got from them was that this granularity gave you the minimum you could set db_keep_cache_size to; that if you tried setting it below this value, it would be bumped up to it; and also that each bump you gave the keep cache would be in increments of the granularity number...?
    Thanks,
    cayenne
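    For what it's worth, a minimal sketch of the steps being discussed, using the sizes quoted in this thread (MYOWNER.MYTABLE and the 40M figure come from the posts above; v$sgainfo is one place to read the granule size directly):
    -- Read the granule size directly rather than deriving it
    SELECT name, bytes/1024/1024 AS mb FROM v$sgainfo WHERE name = 'Granule Size';

    -- Create the keep pool; Oracle rounds the size up to whole granules
    ALTER SYSTEM SET db_keep_cache_size = 40M SCOPE=BOTH SID='*';

    -- Point the table at the keep pool so its blocks stay cached there
    ALTER TABLE myowner.mytable STORAGE (BUFFER_POOL KEEP);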

  • A few questions and some things we found about Memory Management

    We have been developing a pretty complex RIA in Flash; true,
    we probably should have used Flex, but we are very heavily invested
    in Flash, and moving to Flex was not really an option due to time
    constraints and the learning curve associated with changing. That
    reminds me: are the Flex UI objects going to be made available for
    Flash CS3? I mean, that advanced datagrid would be nice.
    The application is an order processing and document
    management system that has around 150 swf's, and I am posting
    here to let people know what we found through our development and
    testing cycles, and also to ask a few questions that I am hoping
    someone will at least point me in the right direction on.
    We were kind of disappointed to find that you could no
    longer duplicate movies and were forced to reload them if we wanted
    different instances, but we got over that fast. Keeping this in
    mind, we decided to make as many dynamic screens as we could, but
    these turned out to be horribly time-consuming. I mean, sure, we
    could make the fields jump around like fleas on a dog's back (and it
    was a great joke), but not very practical. So back to the IDE and
    fla creation we went. This leads me to my first question: has anyone
    seen, heard of, or thought about a SWF file reader that would read the
    layout of a SWF and then generate the ActionScript as output?
    As we were developing, we found that it was more complex, and
    I think unnecessarily so, to get connected to a database. We use
    ColdFusion as our "middleware", so it wasn't too
    bad, and this leads me to my next question. Is there or has there
    been any indication that Flex Data Services, or I guess BlazeDS
    now, is available for Flash CS3?
    So we went on our merry little way, and everything was going
    great until testing. This is when we started to see what kind of
    memory Flash really eats up. In the beginning our app would take up
    a gig over the course of a day.
    For any class that we created, we added a CleanUp method to it.
    Yes, we used weak listeners; yes, we destroyed all references to
    the main movies; and yes, that was it, the main movies.
    Don't be fooled the same way we were. The garbage
    collection simulation at
    http://www.adobe.com/devnet/flashplayer/articles/garbage_collection.html
    does not really tell you the whole story. It suggests that if you cut
    the line from the root, the rest of the objects that you have
    created within that object will be marked and collected. Not so.
    After much testing we found that if we wanted a full recovery
    of memory, the object and its sub-objects, and the sub-sub-objects
    within arrays (also with sub-objects), must be deallocated and then
    assigned null to get full collection. I say full, but really mean
    between 90-95% - reminds me of my C programming days.


  • Difference between nio-file-manager  and nio-memory-manager

    Hi,
    what's the difference between nio-file-manager and nio-memory-manager? The documentation doesn't really discuss the differences, as far as I know. They both use nio to store data in memory-mapped files, don't they? What are the advantages/disadvantages of each?
    When to choose the first one and when the second when storing a large amount of data? Can both be used to query data with the Filter API? Are there size limits on both?
    Best regards
    Jan

    Hi Jan,
    The difference is that one uses a memory-mapped file and one uses direct nio memory (as part of the memory allocated by the JVM process) to store the data. Both allow storing cache data off-heap, making it possible to store more data with a single cache node (JVM) without long GC pauses.
    If you are using a 32-bit JVM, the JVM process will be limited to a total of ~3GB on Windows and 4GB on Linux/Solaris. This includes heap and off-heap memory allocation.
    Regarding the size limitations of the nio-file manager, please see the following doc for more information.
    With the release of 3.5 there is now the idea of a partitioned backing map, which helps create larger capacity (up to 8GB) for nio storage. Please refer to the following doc.
    Both can be used to query data, but it should be noted that the indexes will be stored on heap.
    hth,
    -Dave

  • Memory Management with NSString and synthesized properties

    I thought I understood memory management but now I'm getting some odd behavior and that's the only thing I can see that I might be doing incorrectly:
    I have a synthesized NSString property called displayText. At one point I attempt to set a label with the displayText property (label.text = [note displayText]). The first of the following works, but the second does not:
    [note setDisplayText:[note fileName]]; // - works
    [note setDisplayText:[self getCharacterInFileName:[note fileName]]]; // - does not work
    And here is getCharacterInFileName:
    // Given the fileName for the image, get the specific part of that fileName for the note letter
    - (NSString *)getCharacterInFileName:(NSString *)fileName
    {
        NSRange range = {1,1};
        NSString *characterInString = [[NSString alloc] initWithString:[fileName substringWithRange:range]];
        return characterInString;
    }
    Is there some pointer mismanagement going on here that I'm missing?
    Message was edited by: darkpegasus

    Nevermind, I'm an idiot. I forgot to close an if/else block with a brace and it screwed everything up.

  • Memory management in jelly bean xperia tablet s and performance issues

    The most used apps are always in memory,
    while of the recent apps only a few remain in memory until memory is full.
    This is how memory management is supposed to work -
    pretty much the same as Windows 7.
    But if this is true, then why are games lagging?
    http://www.youtube.com/watch?v=S3TwBlW5ibk
    When there is not enough memory, I suspect games lag.
    If there is not enough memory, recent apps should get killed to free up RAM.
    Why doesn't this happen?
    And why are there apps in memory that I never use?
    TV SideView: never use it, so why is it in memory?
    Socialife: again, never use it, and again it's using memory.
    Reader widget: again using RAM, and again I never use it at all.
    Music Unlimited: again in memory, never actually used.
    Like I said, the most used apps are in memory all the time while the rest is in cache.
    But my most used apps are never in RAM until I start them.
    Now that I've disabled a lot of apps, my tablet has become much faster and more responsive.
    Can Sony please fix memory management and stop saying you did?
    You maybe reduced random shutdowns by 90% or 95%, perhaps even 99.99%.
    Before, random shutdowns happened because of broken memory management, which is obviously still broken, as I see lag in games.
    Sony apps use way too much RAM; stop treating the tablet like a computer with an infinite amount of RAM.

    I rooted my tablet, and when looking at the processor speed in SetCPU, Sony uses a governor for power management that makes the tablet run jagged... I changed the power governor to ondemand, and got MUCH better smoothness and performance, while not sacrificing my battery life. You should try it out, maybe that'll help with games and such.

  • Memory Management comparison between Database 9208 and 11gR2 on Sun Solaris

    Hi All,
    Need some case studies which would help in understanding how memory management is done in 9.2.0.8 and 11gR2 on Sun Solaris SPARC.
    Also wanted some real-world data which shows that 11gR2 manages memory and CPU better than 9.2.0.8, and some comparison graphs between 9i and 11gR2.
    Any information will be of great help.
    Thanks everyone for your support.
    Thanks
    Abdul

    please see if below helps :
    http://www.oracle.com/global/de/upgradecommunity/artikel/upgrade11gr1_workshop2.pdf
    http://www.dba-oracle.com/oracle11g/oracle_11g_memory_target_parameter.htm
    Regards
    Rajesh
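    No graphs to offer, but as a rough illustration of what changed between the releases (parameter names are as in the linked articles; the sizes are placeholders, not recommendations): 9i sizes each SGA pool by hand, while 11g can drive everything from a single memory target.
    -- 9i style: every pool sized manually
    ALTER SYSTEM SET shared_pool_size = 400M SCOPE=SPFILE;
    ALTER SYSTEM SET db_cache_size    = 800M SCOPE=SPFILE;

    -- 11g style: one target, components resized automatically
    ALTER SYSTEM SET memory_target = 2G SCOPE=SPFILE;

    -- 11g view showing the automatic resize decisions
    SELECT component, current_size/1024/1024 AS mb
    FROM   v$memory_dynamic_components;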

  • Monitoring memory and thread management for Java 1.4.2_04

    Hi,
    I have an application which is using Java version 1.4.2_04. The application deployed at the customer site is having some memory issues: they are observing java.exe going to 100% in Task Manager. The same application (a higher version),
    which uses Java 1.5, is not seeing any issues.
    Now the customer needs the root cause of the issue, and I would like to check the code which is causing it. I came to know that we can monitor threads and memory using JConsole. Since we are using Java 1.4, I installed Java 1.6 on a remote machine and tried connecting to the problematic machine, but it failed to connect.
    I have added the option "-Dcom.sun.management.jmxremote.port=8880" when starting the JVM (I have provided this inside the ServletExec batch file; ServletExec is my web server).
    Any idea how to connect to the problematic machine remotely using JConsole? Or are there any other tools I can use on Java 1.4 to nail down the problem?
    Please provide pointers.
    I have another doubt about Java version 1.4: since there were many memory issues, am I hitting a defect in Java 1.4?

    hari.r wrote:
    Please provide pointers.
    You need to tell your sales/marketing/business requirements people that they must come up with an end-of-life policy for customers on older versions.
    Unless your company has a service contract with Sun, you can no longer ensure that the VM will remain secure nor even really keep running.
    See the following for the Java VMs that you are using:
    [http://java.sun.com/products/archive/eol.policy.html]

  • Data Backup 2.1 and Mac Memory Management?

    I'm trialing a backup program called Data Backup 2.1. It keeps versions of my files, which I need, as often I've had corruptions and have not noticed them till a few days after the fact. I've been using Retrospect but read a review that praised Data Backup. The thing I've noticed with it is that although it is very fast, like SuperDuper, it seems to affect my free memory dramatically. I've noticed that it will finish and, instead of having say 250 megs of active memory in use, I'll have 700 megs of active memory. Inactive will be low, whereas normally it's high. Free memory during its backup can drop to 20 megs (I have 1.5 gigs). The free memory, once you start to use your computer, seems to recover to around 500-700 megs. The one thing I have noticed of concern is that while it's running I get pageouts, which I never otherwise get, and my reading about Mac memory management is that you want to avoid pageouts, and if you get them you need more memory (for what I'm doing, 1.5 gigs should be plenty). I've asked the Data Backup people what's going on, and they don't think it's something to be concerned about, but they said it is probably something to do with the way they are caching.
    I'm just wondering: do you think this is something to be concerned about? I'd like to switch from Retrospect as, although I know it, I'm not sure how committed they are to the Mac market any longer, and it is way slower in terms of activities, but it does manage memory well. However, I don't want to get Data Backup if it is affecting RAM inappropriately.
    Kerry

    Synchronize! Pro X will maintain versioned archives and perform full, incremental, and bootable backups both to local and to network devices. I have found that SPX is just about as full-featured as Retrospect, with certain limitations: it cannot back up across multiple media (CDs, DVDs, tape), has no extensive browser windows like Retrospect, and has no backups without scanning (as SuperDuper does for its "fast" updating backup).
    SPX supports schedules, multiple-item backups (can select individual files and/or folders), extensive backup/synchronize customizations, can run as "root", can auto-mount devices (including network drives), and it's a universal binary.
    It's also nearly as expensive as Retrospect but in my opinion it's worth it.
    If you want a less costly backup solution without all the features of SPX, but with all the features of SuperDuper (and in my opinion better than SD) then try Deja Vu. Also a universal binary, it supports incremental archives, full, incremental, and bootable backups to local or network drives, supports scheduling and runs as a preference pane.
    Finally, for the truly cheap there are PsyncX and RsyncX - both freeware. They are GUI wrappers around the basic backup and synchronizing tools that are part of Unix (ditto, rsync, and psync).
    Download mentioned software from www.versiontracker.com or www.macupdate.com.

  • Question about 11gR2 Grid, RAC, /dev/shm and Automatic Memory Management

    Hello,
    i've recently installed grid and rdbms software 11.2.0.2 on a two node Oracle Linux cluster with 128gb ram each node.
    I'm using ASM to store data and ocr and I'm testing Automatic Memory Management.
    When I finished Grid+RDBMS installation I've seen that /dev/shm size is 64gb (half of my total RAM).
    I've created a database with dbca and when I was asked to choose if I wanted to use AMM I've noticed that I could
    allocate only about 60gb for Oracle. If I chose more than 90gb I got an error saying:
    Using Automatic Memory Management requires 60gb available in my two nodes.
    The current available space in the two nodes is only 30gb and 30gb.
    If you want to use AMM you should either free up some space in /dev/shm
    or reduce the memory allocated to Oracle
    I was wondering when (during the installation, or in the kernel parameter settings) I defined the size of /dev/shm?
    Since I have 128gb of RAM, wouldn't it be better to use more than 64gb of RAM for my /dev/shm tmpfs partition?
    Is there a limit, or a best-practice ratio between my RAM and /dev/shm?
    thanks in advance.

    user9051299 wrote:
    Is the "half of the RAM size" a kernel's default value or Oracle's?
    Neither. There are a number of unique factors that determine the best memory size and fit for Oracle - including just how much memory is effectively available (i.e. how much is needed for other services and processes).
    And from what I understand, I don't "break" any of Oracle's best practices by increasing /dev/shm, right?
    Correct. (At least none that I'm aware of, and none that I have read in Oracle's RAC Starter Kit documentation.)
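    A short sketch of the two sides of that check (the 100g figure is only a placeholder; the remount command is the usual Linux way to resize tmpfs, and the change should also be persisted in /etc/fstab to survive reboots):
    -- What AMM wants to use; on Linux this must fit inside /dev/shm
    SELECT name, value/1024/1024/1024 AS gb
    FROM   v$parameter
    WHERE  name IN ('memory_target', 'memory_max_target');

    -- OS side (as root), e.g.:
    --   mount -o remount,size=100g /dev/shm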

  • Large SGA On Linux and Automatic Shared Memory Management problem

    Hello
    I use Oracle 10gR2 on 32-bit Linux, and I followed the guide at http://www.oracle-base.com/articles/linux/LargeSGAOnLinux.php
    to get a larger SGA. It works fine, but when I set the sga_target parameter to use Automatic Shared Memory Management
    I receive this error:
    ERROR at line 1:
    ORA-02097: parameter cannot be modified because specified value is invalid
    ORA-00824: cannot set sga_target due to existing internal settings, see alert
    log for more information
    and in the alert log the following has been written:
    Cannot set sga_target with db_block_buffers set
    My question is: when using db_block_buffers, can't I use Automatic Shared Memory Management?
    Is there any solution for using both a large SGA and Automatic Shared Memory Management?
    thanks
    Edited by: TakhteJamshid on Feb 14, 2009 3:39 AM

    TakhteJamshid wrote:
    Does it mean that when we use a large SGA, using Automatic Shared Memory Management is impossible?
    Yes, it's true. An attempt to do so will result in this:
    ORA-00825: cannot set DB_BLOCK_BUFFERS if SGA_TARGET or MEMORY_TARGET is set
    Cause: SGA_TARGET or MEMORY_TARGET set with DB_BLOCK_BUFFERS set.
    Action: Do not set SGA_TARGET, MEMORY_TARGET or use new cache parameters, and do not use DB_BLOCK_BUFFERS, which is an old cache parameter.
    HTH
    Aman....
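    Following the Action text of that error, a minimal sketch of moving from the old parameter to ASMM (the 1500M value is a placeholder; db_block_buffers is static, hence the restart, and note this abandons the 32-bit large-SGA trick, which per the error cannot coexist with sga_target):
    -- Drop the old-style parameter from the spfile, then enable ASMM
    ALTER SYSTEM RESET db_block_buffers SCOPE=SPFILE SID='*';
    ALTER SYSTEM SET sga_target = 1500M SCOPE=SPFILE;
    -- Restart so the spfile changes take effect
    SHUTDOWN IMMEDIATE
    STARTUP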
