Confused about transaction, checkpoint, normal recovery.

After reading the documentation PDF, I became confused by its description.
Paraphrased from the paragraph in the transactions PDF:
"When database records are created, modified, or deleted, the modifications are represented in the BTree's leaf nodes. Beyond leaf node changes, database record modifications can also cause changes to other BTree nodes and structures"
"If your writes are transaction-protected, then every time a transaction is committed, the leaf nodes (and only the leaf nodes) modified by that transaction are written to JE log files on disk."
"Normal recovery, then, is the process of recreating the entire BTree from the information available in the leaf nodes."
According to the above description, I have the following concerns:
1. If I open a new environment and db, insert/modify/delete several million records, and never reopen the environment, then normal recovery is never run. Does that mean the BTree is, so far, incomplete? Will that affect query efficiency? Or, even worse, will it produce incorrect results?
2. If my thinking above is correct, then every time I finish committing transactions I need to let the checkpointer run in order to recreate the whole BTree. If my thinking is not correct, then I don't need to care about any of this: I just call transaction.commit() or db.sync() and let JE take care of all the details. (I hope this is true :>)
michael.

http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/chkpoint.html
Checkpoints are normally performed by the checkpointer background thread, which is always running. Like all background threads, it is managed using the je.properties file. Currently, the only checkpointer property that you may want to manage is je.checkpointer.bytesInterval. This property identifies how much JE's log files can grow before a checkpoint is run. Its value is specified in bytes. Decreasing this value causes the checkpointer thread to run checkpoints more frequently. This will improve the time that it takes to run recovery, but it also increases the system resources (notably, I/O) required by JE.
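For example (a sketch; check the exact value and the default against your JE release's documentation), the je.properties file in the environment home directory might contain:

```
# Run a checkpoint after roughly every 20 MB of log growth (value in bytes)
je.checkpointer.bytesInterval=20000000
```

A smaller value shortens recovery time at startup at the cost of more frequent checkpoint I/O during normal operation.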

Similar Messages

  • Some doubt about Bdb XML Normal Recovery

    Hi, everyone
    I have read the document "Getting Started with Transaction Processing for Java" shipped with Bdb XML 2.4.13. In the book, there is something about normal recovery:
    "Normal recovery is run only against those log files created since the time of your last checkpoint." To test this, I have designed a scenario as below:
    The ENVIRONMENT directory is under E:/bdb-xml/environment, and the BACKUP directory is under E:/bdb-xml/backup, the CONTAINER name is entry.dbxml, and there is already a document 1.xml in this container.
    1. run db_recover against ENVIRONMENT.
    2. copy entry.dbxml to BACKUP.
    3. create a document 2.xml.
    4. run checkpoint against ENVIRONMENT.
    5. modify document 1.xml.
    6. run checkpoint against ENVIRONMENT.
    7. copy log.0000000001 (there is only one log file in ENVIRONMENT) to BACKUP. Note that I didn't copy the entry.dbxml in ENVIRONMENT.
    8. run db_recover against BACKUP (now there are 2 files: entry.dbxml, log.0000000001). After that, I used BACKUP as the environment directory and tried to query 2.xml. I retrieved the document correctly, which I find very curious. As the document says, the last checkpoint was created by step 6, and no other modifications happened after it, so the modifications made at step 3 and step 5 should not take effect when db_recover is executed. Yet both changes had been committed to entry.dbxml.
    So, which is the last checkpoint? And what are "those log files created since the time of your last checkpoint"?
    I also want to know where the checkpoint is written: in the db files or in the log files.
    Thanks in advance.
    Regards,
    John Kao.

    John,
    You really do want to know the gory details don't you? :-)
    Running recovery in your backup directory will cause the container there to pick up all changes from the log file that it does not yet have. The checkpoint on the original container doesn't mean anything to the backup container.
    Let me point you to even more interesting documentation that is in the Berkeley DB documentation set. This page has all of the BDB documentation, including links that are not included in the BDB XML doc:
    http://www.oracle.com/technology/documentation/berkeley-db/db/index.html
    The "Getting Started with Transaction Processing" documents on that page have the sort of information you seem to want.
    Regards,
    George

  • Confusion about recovery

    Hi ,
    I am new to the DBA field and I have some confusion about recovery.
    My confusion is: if a database is in NOARCHIVELOG mode, can the database be recovered using committed changes that are still in the redo log files?
    If I provide the path name of the redo log files while using RECOVER DATABASE UNTIL CANCEL, will it work at all, given that the database is in NOARCHIVELOG mode?
    Please help to clear my doubts.

    Oracle can use the Online Redo Logs for Recovery. Normally this happens in the case of Instance Recovery (e.g. from a server crash or shutdown abort) -- where the datafiles are not restored from a prior backup.
    If you restore datafiles from a prior backup, you are doing a media recovery. In NOARCHIVELOG mode, you could not have run a backup with the database OPEN, so the backup would have been run with the database SHUTDOWN or MOUNTed. At the subsequent startup, transactions remain in the online redo logs only until LGWR does a "wrap around" and overwrites the first redo log used after the startup. It is only within this window that the transactions are in the redo logs.
    Remember that LGWR uses a "round-robin" algorithm to cycle through the online redo logs. So, if the online redo log that was CURRENT at the time of the backup has been overwritten, you cannot use the online redo logs for a recovery.
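The wrap-around window described above can be sketched with a toy round-robin simulation (illustrative only; the group count and switch numbers are hypothetical, not from any real database):

```python
def surviving_groups(n_groups: int, switches_since_backup: int) -> set:
    """Groups whose backup-time contents have NOT yet been overwritten.

    Group 0 is the group that was CURRENT when the backup was taken.
    Each log switch moves LGWR to the next group round-robin and
    overwrites that group's old contents.
    """
    overwritten = set()
    group = 0
    for _ in range(switches_since_backup):
        group = (group + 1) % n_groups  # LGWR advances to the next group...
        overwritten.add(group)          # ...and reuses (overwrites) it
    return set(range(n_groups)) - overwritten

# Hypothetical 3-group configuration: after 2 switches the old CURRENT
# group (0) still survives; the 3rd switch wraps around and reuses it,
# closing the recovery window.
print(surviving_groups(3, 2))  # {0}
print(surviving_groups(3, 3))  # set()
```

Once the set no longer contains group 0, the redo generated since the backup is gone and the online logs cannot drive a recovery.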
    You must also ensure that there are no NOLOGGING operations!
    One thing that you might trip up on is the behaviour of CTAS. A "CREATE TABLE AS SELECT" is, by default, a LOGGING operation in an ARCHIVELOG database. However, it is automatically a direct-path operation in a NOARCHIVELOG database! So the blocks for such a table would be "corrupt" if you attempt a recovery from the online redo logs, as the row inserts are not captured.
    Hemant K Chitale
    Edited by: Hemant K Chitale on Oct 10, 2011 11:43 AM

  • Checkpoint lsn and  Normal recovery

    Are there any conditions which result in a "normal recovery" to do the redo/undo processing beyond the "checkpoint lsn"
    We have noticed that sometimes the recovery takes more than 5 minutes to complete and we are checkpointing every 30 seconds.
    Edited by: user11188564 on May 21, 2009 6:52 AM

    Hi,
    user11188564 wrote:
    "We have noticed that sometimes the recovery takes more than 5 minutes to complete and we are checkpointing every 30 seconds."
    Are you sure it is normal recovery, and not catastrophic recovery, that is taking more than 5 minutes? I see no reason for normal recovery to complete only "sometimes" in more than 5 minutes if you are checkpointing the environment every 30 seconds. Also, when the recovery takes so long, the first thing I would do is double-check with db_printlog (http://www.oracle.com/technology/documentation/berkeley-db/db/utility/db_printlog.html) that the checkpoints are actually in the log files.
    Bogdan

  • Confuse about the document

    Hi, all. From the document, I got confused about the following.
    Automatic Undo Management in Oracle RAC
    url >> http://docs.oracle.com/cd/B19306_01/rac.102/b28759/adminrac.htm#CHDGAIFJ
    Oracle automatically manages undo segments within a specific undo tablespace that is assigned to an instance. Only the instance assigned to the undo tablespace can modify the contents of that tablespace. However, each instance can read the undo data blocks created by any instance. Also, when performing transaction recovery, any instance can update any undo tablespace, as long as that undo tablespace is not currently being used by another instance for undo generation or transaction recovery
    What is the meaning of the part of the above that is shown in bold in the document?

    Say you're running a 2-node RAC and node 2 dies. The services which were running on node 2 now get re-located to node 1. It is then possible that node 1 will perform transaction rollback/recovery and, when it does so, it will need to be able to read from node 2's undo tablespace (and maybe update the undo segment headers in node 2's undo tablespace, too).

  • Confused about standby redo log groups

    hi masters,
    I am a little bit confused about creating redo log groups for a standby database. As per the documentation, the number of standby redo log groups depends on the following equation:
    (maximum number of logfiles for each thread + 1) * maximum number of threads
    But I don't know where to find the threads. Actually, I would like to understand threads in depth.
    How do I find the current thread?
    thanks and regards
    VD
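The equation quoted above is simple to evaluate once the thread count is known (a single-instance database has one redo thread; each RAC instance adds another, visible for example in V$LOG.THREAD#). A worked sketch with hypothetical numbers:

```python
# Recommended number of standby redo log groups, per the formula quoted
# from the docs:
#   (maximum number of logfiles for each thread + 1) * maximum number of threads
def standby_redo_log_groups(logfiles_per_thread: int, threads: int) -> int:
    return (logfiles_per_thread + 1) * threads

# Hypothetical single-instance database (1 thread) with 3 online log groups:
print(standby_redo_log_groups(3, 1))  # 4
# Hypothetical 2-node RAC (2 threads) with 4 log groups per thread:
print(standby_redo_log_groups(4, 2))  # 10
```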

    "Is it really possible that we can install standby and primary on the same host?"
    Yes, it's possible, and I have done it many times on the same machine.
    As for your confusion about the spfile: I agree the documentation recommends you use an spfile, but that matters for DG Broker handling, only if you go with DG Broker in the future.
    Using an spfile is not an integral step for primary and standby database implementation; you can go with a pfile, but it is good practice to use an spfile. In any case, always keep the pfile from which you created the spfile. I said to make the entry in the pfile and then mount your standby database with that pfile, or create an spfile from the pfile after adding these parameters to it, because otherwise you might be adding the parameters from the SQL prompt.
    1. Logs are not getting transferred (even though I configured the listener using Net Manager).
    2. Logs are not getting archived to the standby directory.
    3. ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION never completes its recovery.
    4. When I tried to open the database, it always said the system datafile is not from a sufficiently old backup.
    5. I also tried ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL.
    Read your alert log file and paste the latest log here.
    Khurram

  • Confusion about Kodo and JCA

    Hi,
    I'm a bit confused about Kodo's Connection Architecture strategy. It is my understanding that
    PMF's can be built to use the connection architecture. Along this line, one would configure the
    ConnectionFactory or ConnectionFactoryName, and possibly the ConnectionFactory2 and
    ConnectionFactory2Name properties in a PMF. The result of the PMF implementation supporting the
    connection architecture is nice integration with the application servers in terms of security,
    transaction, and connection management. One can lookup in JNDI a reference to a Kodo PMF that
    supports datastore transactions or to another one that supports optimistic transactions or to
    another one that supports NTR, and with proper settings of the transactional properties and suitable
    application code, one's session bean will work.
    But from what I can see of Kodo's JDOPersistenceManagerFactory class, it, itself, implements the
    ManagedConnectionFactory interface, meaning, I think, that this class is a resource adaptor. And that is
    the part that confuses me. Why would Kodo be a resource adaptor? I thought it used a resource
    adaptor, which I think is the same thing as a connection factory.
    Anyway, I'm puzzled, and I'm hoping that someone could straighten me out.
    David Ezzio

    David-
    The fact that Kodo can integrate into an application server as a
    Resource Adaptor, and the fact that section 3.2.2 of the specification says that
    the PersistenceManagerFactory should be able to utilize a Resource
    Adaptor to obtain connections to the data store, are two separate issues.
    We implement Kodo itself as a Resource Adaptor in order to provide ease
    of integration into recent application servers. Your confusion is
    understandable, since we do not actually yet support the use of Resource
    Adaptors as the Connection Factories as per section 3.2.2.
    Does that make sense?
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com
    Kodo Java Data Objects Full featured JDO: eliminate the SQL from your code

  • About transaction logs

    Can you tell me about transaction log space? How does it get full? How is it related to performance?

    Hi,
    Monitoring the SAP Log Disk
    Use
    The size of the transaction log must be checked regularly to work out how much free space is available on the log disk. There should always be enough free space to allow the next file extension. When the SAP system has been installed the autogrow increment is set. At least the size of this increment should be available on the log disk to permit the next file extension. If less space is available and the transaction log file fills up, the SAP system will come to a standstill.
    Ideally, the transaction log should never be filled to more than 60-70%. If the transaction log regularly exceeds this level between 2 transaction log backups, the transaction log must be saved at more frequent time intervals.
    The size of the log can be assessed on the basis of information given for completed backups in the SAP transaction for Backup and Restore Information.
    Procedure:
    1. To access the transaction for Backup/Restore Information, choose CCMS → DB Administration → Backup logs.
    Alternatively, enter the transaction code DB12.
    The initial screen of the monitor CCMS Monitoring Tool – DB12 (Backup Restore Information) appears.
    2. Choose Backup history and then Logs Backup.
    3. A result list appears. Find the largest transaction log backup of the past week. Select a row and then History info to find out the number of pages that were processed during the backup. To work out the amount of space used in the transaction log, multiply the number of dumped pages by 8 KB. You can then work out how much free space is left on the transaction log disk.
    If you use a RAID1 disk system exclusively for the SAP transaction log and create hourly log backups, you will rarely encounter space problems. The SAP log file is initially created with a size of 1 GB. The smallest disk normally has 9 GB space and the log file can therefore grow to 9 GB.
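The arithmetic from step 3 of the procedure above, as a small sketch (the page count is hypothetical):

```python
# Space used in the transaction log, per the procedure above:
# number of dumped pages from the backup history * 8 KB per page.
def log_space_used_mb(dumped_pages: int, page_size_kb: int = 8) -> float:
    return dumped_pages * page_size_kb / 1024.0  # KB -> MB

# Hypothetical backup that processed 131,072 pages:
print(log_space_used_mb(131072))  # 1024.0, i.e. 1 GB used in the log
```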
    Hope it Helps
    Srini

  • New bie confused about linking different things BP and org struc

    Dear Experts,
    This might sound like a very silly question, but as a fresher in a functional module I am a bit confused about all the different concepts.
    1. How do we actually start a CRM implementation? After blueprinting, what activities do we start first?
    For example, do we first configure master data, then the org structure, and then activities?
    What is the interlinking between these three? I have read CR100, but after reading it I am not able to link them together or tell which one comes first.
    And last but not least, please kindly guide me on how to practise the config.
    thanks and regards
    Neel

    Hi Neel,
    first go through with CRM Replication (B09) from building blocks
    http://help.sap.com/bp_crmv340/CRM_DE/BBLibrary/html/BBLibrary.htm
    and also
    http://help.sap.com/bp_crmv250/CRM_DE/index.htm
    technical information - building blocks library
    About transactions: whatever standard transactions exist in R/3 will be present in CRM, already mapped. If there are any Z-transactions, you have to create the same Z-transactions in CRM; for Z-transactions you need to create Z item categories, item category determination, and copy controls, everything manually, and you have to map them. It is not possible to download these from R/3 to CRM.
    About Business Partners:
    Whatever BPs you are willing to replicate from CRM to R/3 (you can also prevent some BPs from replicating to R/3), you have to make the proper settings in R/3 in the PIDE transaction; you can make the replication settings for both directions there.
    Go through the building block and you will understand.
    Regards
    Manohar
    Edited by: Manohar R on May 12, 2008 11:51 AM

  • Confused about conflicting disk guidelines

    This is my first time on the forum and I am learning a tremendous amount - thanks everyone. I am planning a machine with 3 disks and I'm a little confused about apparently conflicting guidelines issued by Harm. In the "Storage rules for an editing rig", he states in Rule 4 that "If you have OS & programs on disk C:, set your pagefile on another disk". However, in the "Guidelines for disk storage", he says that, for 3 disks, the OS, programs and pagefile should all be on drive C:. Which is right? Thanks

    You may be somewhat confused, as it appears to be contradictory, but let me try to explain.
    The basic rule for a fast disk system is to spread the load across as many disks as you have.
    Every disk can access only one location at a time, and if a program requires access to a number of files, it would be great to have all those files on different disks, because then all these files can be accessed in parallel, making it much faster than in a serial way. There are however practical limitations to doing it that way. It does not make sense to organize the location of your clips in such a way that all these clips are on different disks, your potential gains would be far outweighed by the effort to move your clips to different disks. It does not even make sense to allocate a different disk for different tracks, although from a disk performance view it may make a lot of sense.
    It has to make sense from a workflow view as well, so you have to compromise and try to follow a 'KISS' approach. As you can see from the Generic Guideline, the allocation of the pagefile can change, depending on the number of disks in the system. The reasoning behind it is to spread the load as much as possible but keep it as simple as possible from an organizational point of view.
    Remember, it is only a guideline, not a strict rule. Consider for instance a situation where a lot of stock footage is used in nearly all projects. It makes sense to put all this stock footage on a different disk than your normal clips, if you also add a lot of music to your projects, it makes sense to use a separate disk for your music files, because you spread access load across different disks and still keep it relatively simple to organize. Now, if this does not happen often and only for very limited parts of the timeline, this may not be worth the effort or the dedicated disks. So, the guideline is exactly that and can be influenced by your specific workflow and material. Someone may want to have all his HDV captures on one disk and all his AVCHD imports on another disk and maybe from a third camera on a third disk, thereby improving multicam performance.
    Hope this clears it up a bit.

  • Confused About Catch Up

    Hi everyone. I have just had Sky installed and I am a little confused about some aspects of it; I hope someone can give me some advice.
    The catch up service: Is it normal for there to be very little actual content in the catch up section? When I was looking at the various options on offer, one of the things that I really liked the look of was the '30 day catch up service'.
    I (possibly incorrectly) assumed this would mean I could access the vast majority (if not all) of the programmes shown on Sky for the last 30 days. When I look in the catch up section, there is very little choice there. For example, in the Sky channels section, sorted by day, for last Saturday there are only four shows. Is this a fault, or is catch up a whole lot less than it promises to be? I have the family package, by the way, and I realise that I can only access channels and catch up that I am subscribed to, but it seems very poor to me that the 'catch up' service amounts to just a handful of shows, most of which I am not interested in.
    Strangely, when I manually search in the search box for a specific show, I get on-demand results that I cannot find anywhere else in the EPG. Is there more on-demand content there, but only if you know exactly what you want to search for and use the search box? It seems odd to me that I have to know what I want to watch and search for it manually; I cannot just browse a list of available shows.
    Pushed on demand: The way I understand it, my 500GB hard drive has 250GB partitioned off for the Sky box to download content to overnight. Where do I access this content? I can't find any of it. I have read the manual and it says I will see a little play icon on content that is available to watch, which, I assume, refers to content automatically downloaded by the box. I have seen this icon on one show. That's it. Is 250GB of space being used for nothing? Can I get that space back for my own downloads and recordings? I know there is a setting to turn off 'pushed on demand', but does this free up the reserved space?
    Sorry if this has been covered elsewhere or if I am asking stupid questions, and thanks in advance for any replies.
    Broddr.

    Thanks for the reply Annie.
    I did as you suggested and still get no more.
    Pressing the green button and selecting channels just splits the five shows up into their specific channel.
    I have five shows total.
    I select Nat Geo channel and it shows the three of those five shows that are on Nat Geo.
    I select Nat Geo Wild and it shows the remaining two of those five shows that are on the Wild channel.
    I have been through all the system settings and every page on the box OS again and again and I am coming to the conclusion that I do not have a problem as such, rather the service is simply pants!
    Apparently my 'catch up' service for Nat Geo for the last four days consists of one programme!
    I suppose for an 'extra' service I can't really complain, but the touted '30 day catch up service' was one of the main reasons I signed up.
    I would argue they should not really call it 'catch up'.
    'Selected highlights' is probably more accurate.
    "Catch up on the best of sky over the last thirty days - as long as, for example, what you want to catch up on SyFy is Dark Matter and Defiance 'cos that's all there is."

  • I'm horribly confused about student licensing and commercial use

    As the title says I'm horribly confused about student licensing and using it for commercial use.
    I currently have a Student Licensing version of Adobe Creative Suite 4 that I purchased through my school's journeyEd portal.
    Seeing how CS5 is now out, I was browsing prices (why not upgrade while I'm still a student, right?), and while browsing I bumped into one source that says Student Licensing cannot be used for commercial purposes, and this is when the confusion started. I remembered reading before that we are able to use student licensing for commercial purposes. Okay, time to Google. I found one Adobe FAQ that says I can:
    http://www.adobe.com/education/students/studentteacheredition/faq.html
    " Can I use my Adobe Student and Teacher Edition software for commercial use?
    Yes. You may purchase a Student and Teacher Edition for personal as well as commercial use. "
    and I found this old thread;
    http://forums.adobe.com/thread/314304
    Where a poster listed as an employee of Adobe states:
    "There is no upgrade from the CS3 Educational Edition to the comparable CS3 editions sold in non-academic environments. If you have an educational version of for CS3 obtained legitimately (i.e., you qualified for the educational version when you obtained it), you may continue to use that software for the indefinite future, even for commercial use! You cannot sell or otherwise transfer that license, though! When the next version of the Creative Suite is released, you will have two choices: (1) If you still qualify for the educational version, you can buy a copy of that next version (there is no special upgrade pricing from one educational version to another; the price is already very low) or (2) you can upgrade from the educational version of CS3 to the full version of the next version of the Creative Suite as an upgrade from CS3 at the prices published at that time. "
    Okay, cool. Hmm, what's this? Adobe is asking me if I want to IM with a live customer service agent; sure, why not? The conversation started and I asked her my question about using my CS4 license for commercial use. She asked for my product code and email to verify my product, then informed me I could purchase the upgrade version of CS5 and use that commercially. Okay, great, but that's not really answering my question. I reworded it and gave her a link to that FAQ page; it went like this:
    "[CS Rep] : [My name], I would like to inform you that Adobe Student and Teacher Editions are not allowed for
    commercial use.
    [CS Rep] : However, you can upgrade your current software to a normal upgrade version, and you can continue
    using it for commercial purpose.
    [Me] : Then is the FAQ page mistaken? Because it is very misleading if it is. But thank you for the information.
    [CS Rep] : You are welcome.
    [CS Rep] : I apologize for the misleading information in the FAQ."
    ...And after that, I went back to being confused.
    So my questions are: Can I or can't I use my Adobe Creative Suite 4 student licensing for commercial purposes? And if I purchase a Student Licensing version of CS5, can I use that for commercial purposes as well?
    Sorry for the long post, I just want to be perfectly clear on what I can and cannot do with my purchase.

    The rules differ in various parts of the world. In North America you can use it for commercial work.
    There are no student/academic upgrades. The pricing is so low that in many cases you're better off buying another full student license but you are eligible for upgrade pricing for commercial versions once you're out of school.
    You may not transfer the student license in any way.
    Bob

  • Confusion about applet

    Sir,
    I am confused about applets: my applet compiles successfully, but on running it shows an exception message about "main", saying that no such method exists. Please help me out.

    The full text of the error message would make it easier for us to see what is wrong, BUT it sounds like you are trying to run the applet as an application from the command line rather than through an HTML tag in an HTML page loaded into your browser!
    Though you can make applets run as applications, it is not normal to do so.

  • Confusion about required_mirror_free_mb in asm

    Hi!
    i have confusion about required_mirror_free_mb in asm.
    i have 6 disks with normal redundancy and 4 failgroups.
    SQL> select STATE,TOTAL_MB,FREE_MB ,NAME,FAILGROUP from v$asm_disk;
    STATE    TOTAL_MB  FREE_MB  NAME      FAILGROUP
    NORMAL       2047     1421  ASMDISK1  FG1
    NORMAL       2047     1424  ASMDISK2  FG1
    NORMAL       2047     1424  ASMDISK3  FG2
    NORMAL       2047     1424  ASMDISK4  FG2
    NORMAL       2047     1423  ASMDISK5  ASMDISK5
    NORMAL       2047     1422  ASMDISK6  ASMDISK6
    6 rows selected.
    Almost all 6 disk have same space consumption.
    SQL> select GROUP_NUMBER,DISK_NUMBER,STATE,TOTAL_MB,FREE_MB ,NAME from v$asm_disk;
    GROUP_NUMBER  DISK_NUMBER  STATE   TOTAL_MB  FREE_MB  NAME
               1            0  NORMAL      2047     1421  ASMDISK1
               1            1  NORMAL      2047     1424  ASMDISK2
               1            2  NORMAL      2047     1424  ASMDISK3
               1            3  NORMAL      2047     1424  ASMDISK4
               1            4  NORMAL      2047     1423  ASMDISK5
               1            5  NORMAL      2047     1422  ASMDISK6
    6 rows selected.
    SQL>
    SQL> select name, type, total_mb, free_mb, required_mirror_free_mb,usable_file_mb from v$asm_diskgroup;
    NAME  TYPE    TOTAL_MB  FREE_MB  REQUIRED_MIRROR_FREE_MB  USABLE_FILE_MB
    DATA  NORMAL     12282     8538                     4094            2222
    SQL>
    My question is: how does ASM decide the value of REQUIRED_MIRROR_FREE_MB, i.e. 4 GB in the above case?
    Can anybody answer me, please?
    regards
    M.usman

    Thank you for your reply... now my confusion has increased even more.
    I have now changed the configuration a bit.
    STEp 1)
    i have 6 Disks each of 2 gb in size And 3 Failgroups FG1,FG2,FG3.
    FG1
    ASMDISK1
    ASMDISK2
    FG2
    ASMDISK3
    ASMDISK4
    FG3
    ASMDISK5
    ASMDISK6
    SQL> select GROUP_NUMBER,DISK_NUMBER,STATE,TOTAL_MB,FREE_MB ,NAME from v$asm_disk;
    GROUP_NUMBER  DISK_NUMBER  STATE   TOTAL_MB  FREE_MB  NAME
               1            0  NORMAL      2047     1420  ASMDISK1
               1            1  NORMAL      2047     1418  ASMDISK2
               1            2  NORMAL      2047     1420  ASMDISK3
               1            3  NORMAL      2047     1419  ASMDISK4
               1            4  NORMAL      2047     1421  ASMDISK5
               1            5  NORMAL      2047     1420  ASMDISK6
    6 rows selected.
    SQL> select name, type, total_mb, free_mb, required_mirror_free_mb,usable_file_mb from v$asm_diskgroup;
    NAME  TYPE    TOTAL_MB  FREE_MB  REQUIRED_MIRROR_FREE_MB  USABLE_FILE_MB
    DATA  NORMAL     12282     8518                     4094            2212
    required_mirror_free_mb = 4 GB: as you said, mirroring is per failgroup, so the largest failure that could occur is that a whole failgroup fails. A failgroup comprises two 2 GB disks, so that is 4 GB.
    STEp 2)
    I deleted two disks 5 and 6 in FG3
    now i have two failgroups FG1 and FG2
    FG1
    ASMDISK1
    ASMDISK2
    FG2
    ASMDISK3
    ASMDISK4
    SQL> select STATE,TOTAL_MB,FREE_MB ,NAME,FAILGROUP from v$asm_disk;
    STATE    TOTAL_MB  FREE_MB  NAME      FAILGROUP
    NORMAL       2047        0
    NORMAL       2047        0
    NORMAL       2047     1128  ASMDISK1  FG1
    NORMAL       2047     1127  ASMDISK2  FG1
    NORMAL       2047     1128  ASMDISK3  FG2
    NORMAL       2047     1127  ASMDISK4  FG2
    6 rows selected.
    SQL> select name, type, total_mb, free_mb, required_mirror_free_mb,usable_file_mb from v$asm_diskgroup;
    NAME  TYPE    TOTAL_MB  FREE_MB  REQUIRED_MIRROR_FREE_MB  USABLE_FILE_MB
    DATA  NORMAL      8188     4510                     2047            1231
    Still, the largest failure that could occur is that a whole failgroup fails, and a failgroup again comprises two 2 GB disks, so I expected 4 GB.
    But here it is showing 2 GB, and that is where I am confused.
    Maybe my concept is wrong. Please help, and please don't mind that I am asking questions again and again.
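Putting the two experiments together, the posted figures are internally consistent with a simple rule (a sketch of the arithmetic, not official documentation): for a NORMAL-redundancy disk group, USABLE_FILE_MB = (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2, rounded down, and REQUIRED_MIRROR_FREE_MB is ASM's estimate of the largest failure it should still be able to re-mirror after. When the largest failgroup holds two 2 GB disks, that is ≈ 4094 MB; with only two failgroups, losing a whole failgroup leaves nowhere to re-mirror to at all, so ASM appears to reserve only the largest single disk (2047 MB) — the exact rule is version-dependent. Checking against the query output in this thread:

```python
# NORMAL redundancy: every usable MB needs a mirror copy, and ASM keeps
# REQUIRED_MIRROR_FREE_MB spare to re-mirror after the largest tolerated
# failure, hence:
#   USABLE_FILE_MB = (FREE_MB - REQUIRED_MIRROR_FREE_MB) // 2
def usable_file_mb(free_mb: int, required_mirror_free_mb: int) -> int:
    return (free_mb - required_mirror_free_mb) // 2

# Figures taken from the v$asm_diskgroup output posted above:
print(usable_file_mb(8538, 4094))  # 2222 (first post, 4 failgroups)
print(usable_file_mb(8518, 4094))  # 2212 (step 1, 3 failgroups)
print(usable_file_mb(4510, 2047))  # 1231 (step 2, 2 failgroups)
```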

  • What is difference between enjoy transactions and Normal transactions

    What is the difference between Enjoy transactions and normal transactions?
    For example: ME22 vs. ME22N.
    What is the difference between these two?

    Hi,
    The transaction codes ending with 'N' are the Enjoy transactions, created using the object-oriented concept.
    In your case, ME22 is the obsolete one and ME22N is the t-code created with the object concept.
    Please reward helpful points.
    Thanks
    Siva
