Delete a physical bdb database file

Let's say I have a BDB database file x. Now, how do I delete this file? Of course, you can't just delete it via the command line, or there will be an inconsistency in the BDB logs.
I saw the remove API and used it in the following way, but the file is not deleted: http://www.oracle.com/technology/documentation/berkeley-db/xml/java/com/sleepycat/db/Database.html#remove(java.lang.String,%20java.lang.String,%20com.sleepycat.db.DatabaseConfig)
// Get a database handle for the database named "DbToDelete"
Database dbHandle = GetDataBaseHandle("DbToDelete");
// Delete this database
dbHandle.remove("DbToDelete", "DbToDelete", null);
It doesn't work - any ideas why? Is this the wrong API? Or does this API not remove the database from your system, but only update its log to "delete" it?
Thanks
Neha

Well, maybe I was using the API in an incorrect manner. Now I'm using it like this:
// Delete this database
dbHandle.remove(Absolute_Path_To_DbFile_To_Delete, null, null);
This does remove the file from the system. But I'm concerned: is that the right way? Shouldn't the BDB environment path somehow be specified in this API?
Neha
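
For reference, a hedged sketch of the environment-aware way to remove a database (com.sleepycat.db Java API; the environment path and flags below are illustrative, not taken from the original post). If the linked Javadoc is the core BDB Java API, Database.remove is a static method that operates on a database no handle is open on, and it knows nothing about your environment; removing via Environment.removeDatabase instead performs the removal inside the environment, so it is reflected in the environment's logs, which is exactly the consistency concern raised above:

import java.io.File;
import com.sleepycat.db.*;

public class RemoveDbSketch {
    public static void main(String[] args) throws Exception {
        // Open the (transactional) environment that owns the database file.
        EnvironmentConfig cfg = new EnvironmentConfig();
        cfg.setTransactional(true);
        cfg.setInitializeCache(true);
        cfg.setInitializeLocking(true);
        cfg.setInitializeLogging(true);
        cfg.setRunRecovery(true);
        Environment env = new Environment(new File("/path/to/env"), cfg);

        // All Database handles on "DbToDelete" must be closed before this call.
        // The file name is resolved relative to the environment's data directory;
        // pass a Transaction instead of null for a transactional remove.
        env.removeDatabase(null, "DbToDelete", null);
        env.close();
    }
}

The static Database.remove(path, null, null) used in the follow-up post also removes the file, but it operates on the file directly, outside any environment, so nothing about the removal lands in the environment's logs.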

Similar Messages

  • How to stop BDB from Mapping Database Files?

    We have a problem where the physical memory on Windows (NT kernel 6 and up, i.e. Windows 7, 2008R2, etc.) gets maxed out after some time when running our application. On an 8 GB machine, if you look at our process loading BDB, it's only around 1 GB. But when looking at the memory using RAMMap, you can see that the BDB database files (not the shared region files) are being mapped into memory, and that is where most of the memory consumption is taking place. I wouldn't care normally, as memory mapping can have performance and usability benefits, but the result is that the system comes to a screeching halt. This happens when we are inserting records at a high rate, e.g. tens of millions of records in a short time frame.
    I would attach a picture to this post, but for some reason the insert image is greyed out.
    Environment open flags: DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_TXN | DB_INIT_MPOOL | DB_THREAD | DB_LOCKDOWN | DB_RECOVER
    Database open flags: DB_CREATE | DB_AUTO_COMMIT

    An update for the community
    Cause
    We opened a support request (SR) to work with Oracle on the matter. The conclusion we came to was that the main reason for the memory consumption was the Windows System Cache. (For reference, see http://support.microsoft.com/kb/976618) When opening files in buffered mode, the equivalent of calling CreateFile without specifying FILE_FLAG_NO_BUFFERING, all I/O to a file goes through the Windows System Cache. The larger the database file, the more memory is used to back it. This is not the same as memory-mapped files, which Berkeley DB uses for the region files (i.e. the environment). Those also use memory, but because they are bounded in size, they will not cause an issue (need a bigger environment? just add more memory). The obvious reason to use the cache is for performance optimization, particularly in read-heavy workloads.
    The drawback, however, is that when there is a significant amount of I/O in a short amount of time, that cache can get really full, and the result is physical memory being close to 100% used. This has negative effects on the entire system.
    Time is important, because Windows needs time to transition active pages to standby pages, which reduces the amount of physical memory in use. What we found is that when our DB was installed on FLASH disk, we could generate a lot more I/O and our tests could run in a fraction of the time, but the memory would get close to 100%. If we ran those same tests on slower disk, the result was the same, i.e. 10 million records inserted into the database, but it takes a lot longer and the memory utilization does not come even close to 100%. Note that we also see the memory consumption happen when we use the hot backup in the BDB library. The reason for this is obvious: in a short amount of time we are reading the entire BDB database file, which makes Windows use the system cache for it. The total amount of memory might be a factor as well: on a system with 16 GB of memory, even with FLASH disk, we had a hard time reproducing the issue where the memory climbs.
    There is no Windows API that allows an application to control how much system cache is reserved, usable, or maximum for an individual file. Therefore, BDB does not have fine-grained control over this behavior on an individual-file basis. BDB can only turn buffering on or off in total for a given file.
    Workaround
    In Berkeley DB, you can turn off buffered I/O on Windows by specifying the DB_DIRECT_DB flag for the environment. This is the equivalent of calling CreateFile with FILE_FLAG_NO_BUFFERING: all I/O goes straight to the disk instead of memory, and all I/O must be aligned to a multiple of the underlying disk sector size. (The NTFS sector size is generally 512 or 4096 bytes and normal BDB page sizes are generally multiples of that, so for most people this shouldn't be a concern; but know that Berkeley DB will test the page size to ensure it is compatible, and if it is not, it will silently disable DB_DIRECT_DB.) What we found in our testing is that using the DB_DIRECT_DB flag had too much of a negative effect on performance with anything but FLASH disk, and therefore we cannot use it. We may consider it acceptable for FLASH environments where we generate significant I/O in short time periods. We could not reproduce the memory effect when the database was hosted on a SAN disk running 15K SAS, which is more typical, and therefore we are closing the SR.
    However, Windows does have an API that controls the total system-wide amount of system cache space to use, and we may experiment with this setting; see http://support.microsoft.com/kb/976618 We are also going to experiment with using multiple database partitions so that Berkeley DB spreads the load across those other files, possibly giving the system cache time to move active pages to standby.
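
    Assuming the Java API is also in play here (the post above quotes the C-style flags), the workaround can be sketched as follows. setDirectDatabaseIO is, to my understanding, the Java-side equivalent of DB_DIRECT_DB (in C: DB_ENV->set_flags(dbenv, DB_DIRECT_DB, 1)); the path and flag set are illustrative, not the poster's configuration:

    import java.io.File;
    import com.sleepycat.db.*;

    public class DirectIoSketch {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig cfg = new EnvironmentConfig();
            cfg.setAllowCreate(true);         // DB_CREATE
            cfg.setTransactional(true);       // DB_INIT_TXN
            cfg.setInitializeCache(true);     // DB_INIT_MPOOL
            cfg.setInitializeLocking(true);   // DB_INIT_LOCK
            cfg.setInitializeLogging(true);   // DB_INIT_LOG
            cfg.setRunRecovery(true);         // DB_RECOVER
            // DB_DIRECT_DB: open database files unbuffered, bypassing the
            // Windows system cache (silently ignored if the page size is
            // incompatible with the disk sector size).
            cfg.setDirectDatabaseIO(true);
            Environment env = new Environment(new File("/path/to/env"), cfg);
            env.close();
        }
    }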

  • Deleting archivelog files once applied to physical standby database

    Hi,
    Is there a procedure for automatically deleting the archivelog files after they have been applied to the physical standby database?

    Also, please see: DataGuard: auto delete redo logs after applied to physical standby?

  • Forward physical standby archivelog files to a logical standby database

    Hi,
    We have a production database (1) and we have a physical standby database (2) for it.
    Is it possible to forward the archivelog files from (2) to a logical standby database (3)? We want to use (3) as a UAT / ad-hoc read-only database.
    Thanks.

    Hi,
    The following Data Guard configurations using cascaded destinations are supported.
    A. Primary Database > Physical Standby Database with cascaded destination > Physical Standby Database
    B. Primary Database > Physical Standby Database with cascaded destination > Logical Standby Database
    A physical standby database can support a maximum of nine (30 as of Version 11.2) remote destinations.
    Physical Standby Forwarding Redo to a Logical Standby :
    Advantages :
    1. Users can connect to the logical standby database and run queries against it.
    2. Reporting workloads can use the logical standby database instead of querying the primary database,
    3. without any additional overhead on your primary system, and without consuming any additional transatlantic bandwidth.
    Disadvantages :
    The following data types are not supported in a logical standby database; check your application before implementing a logical standby:
    a. BFILE
    b. Collections (including VARRAYS and nested tables)
    c. Multimedia data types (including Spatial, Image, and Oracle Text)
    d. ROWID, UROWID
    e. User-defined types
    f. LOBs stored as SecureFiles
    g. XMLType stored as Object Relational
    h. Binary XML
    Thanks
    LaserSoft

  • BDB - rename of a database file

    If I rename a DB file (via an OS system call) after closing it, can this corrupt the environment? What is the difference between renaming a (closed) file using the BDB interface and using the OS system call?
    Any help with these questions will be appreciated.

    user11188564 wrote:
    a subsequent db_put or db_open was returning an error - error 2, i.e. ENOENT.
    Did you try to open the same database? Was the put for a different database file?
    Did you close all the DBcursor handles before closing the database?
    Do you use CDS groups?
    Bogdan
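
    To make the difference concrete, a hedged sketch of renaming through the BDB interface (com.sleepycat.db Java API; environment path and file names are illustrative). Unlike an OS-level rename, which happens behind BDB's back, Environment.renameDatabase (DB_ENV->dbrename in the C API) performs the rename through the environment, so locking and logging stay consistent:

    import java.io.File;
    import com.sleepycat.db.*;

    public class RenameDbSketch {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig cfg = new EnvironmentConfig();
            cfg.setTransactional(true);
            cfg.setInitializeCache(true);
            cfg.setInitializeLocking(true);
            cfg.setInitializeLogging(true);
            cfg.setRunRecovery(true);
            Environment env = new Environment(new File("/path/to/env"), cfg);

            // All handles on "old.db" must be closed before this call;
            // null selects the whole file rather than a named sub-database.
            env.renameDatabase(null, "old.db", null, "new.db");
            env.close();
        }
    }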

  • When I try to start outlook for mac, i get the message that there is an old version of the Microsoft Database Daemon that needs to be deleted from the start up files. Where do I find the start up files???

    In the Users & Groups pane of System Preferences.

  • Why did the latest update (31.1.2) delete database files for primary email account in profile?

    After restarting TB to apply the 31.1.2 update, all of the database files for my primary email account were apparently deleted. Restarting TB again got it to start syncing with the IMAP server again, but it has to download around 100,000 messages again because of this. Was this a design decision or is there a problem with the update itself? And why did it only affect my primary email account and leave the other 4 (much smaller ones) unscathed?

    1. Norton Internet Security 21.5.0.19
    2. Application Basics
    Name: Thunderbird
    Version: 31.1.2
    User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.1.2
    Profile Folder: Show Folder
    (Local drive)
    Application Build ID: 20140923203151
    Enabled Plugins: about:plugins
    Build Configuration: about:buildconfig
    Memory Use: about:memory
    Mail and News Accounts
    account1:
    INCOMING: account1, , (imap) imap.googlemail.com:993, SSL, passwordCleartext
    OUTGOING: smtp.googlemail.com:465, SSL, passwordCleartext, true
    account2:
    INCOMING: account2, , (none) Local Folders, plain, passwordCleartext
    account3:
    INCOMING: account3, , (imap) imap.googlemail.com:993, SSL, passwordCleartext
    OUTGOING: smtp.googlemail.com:465, SSL, passwordCleartext, true
    account4:
    INCOMING: account4, , (imap) imap.mail.yahoo.com:993, SSL, passwordCleartext
    OUTGOING: smtp.mail.yahoo.com:465, SSL, passwordCleartext, true
    account5:
    INCOMING: account5, , (imap) imap-mail.outlook.com:993, SSL, passwordCleartext
    OUTGOING: smtp-mail.outlook.com:587, alwaysSTARTTLS, passwordCleartext, true
    account6:
    INCOMING: account6, , (imap) mail.centurylink.net:143, alwaysSTARTTLS, passwordCleartext
    OUTGOING: smtp.centurylink.net:587, alwaysSTARTTLS, passwordCleartext, true
    Crash Reports
    http://crash-stats.mozilla.com/report/index/bp-ef9e8cc4-d975-4a65-8417-652332140926 (9/25/2014)
    Extensions
    Important Modified Preferences
    Name: Value
    accessibility.typeaheadfind.flashBar: 0
    browser.cache.disk.capacity: 358400
    browser.cache.disk.smart_size.first_run: false
    browser.cache.disk.smart_size.use_old_max: false
    browser.cache.disk.smart_size_cached_value: 358400
    extensions.lastAppVersion: 31.1.2
    font.internaluseonly.changed: false
    font.name.monospace.el: Consolas
    font.name.monospace.tr: Consolas
    font.name.monospace.x-baltic: Consolas
    font.name.monospace.x-central-euro: Consolas
    font.name.monospace.x-cyrillic: Consolas
    font.name.monospace.x-unicode: Consolas
    font.name.monospace.x-western: Consolas
    font.name.sans-serif.el: Calibri
    font.name.sans-serif.tr: Calibri
    font.name.sans-serif.x-baltic: Calibri
    font.name.sans-serif.x-central-euro: Calibri
    font.name.sans-serif.x-cyrillic: Calibri
    font.name.sans-serif.x-unicode: Calibri
    font.name.sans-serif.x-western: Calibri
    font.name.serif.el: Cambria
    font.name.serif.tr: Cambria
    font.name.serif.x-baltic: Cambria
    font.name.serif.x-central-euro: Cambria
    font.name.serif.x-cyrillic: Cambria
    font.name.serif.x-unicode: Cambria
    font.name.serif.x-western: Cambria
    font.size.fixed.el: 14
    font.size.fixed.tr: 14
    font.size.fixed.x-baltic: 14
    font.size.fixed.x-central-euro: 14
    font.size.fixed.x-cyrillic: 14
    font.size.fixed.x-unicode: 14
    font.size.fixed.x-western: 14
    font.size.variable.el: 17
    font.size.variable.tr: 17
    font.size.variable.x-baltic: 17
    font.size.variable.x-central-euro: 17
    font.size.variable.x-cyrillic: 17
    font.size.variable.x-unicode: 17
    font.size.variable.x-western: 17
    gfx.direct3d.last_used_feature_level_idx: 0
    mail.openMessageBehavior.version: 1
    mail.winsearch.firstRunDone: true
    mailnews.database.global.datastore.id: 0bb49554-fa41-4ecf-af58-17c0753468e
    network.cookie.prefsMigrated: true
    places.database.lastMaintenance: 1411474192
    places.history.expiration.transient_current_max_pages: 104858
    plugin.importedState: true
    print.printer_Brother_HL-2270DW_series.print_bgcolor: false
    print.printer_Brother_HL-2270DW_series.print_bgimages: false
    print.printer_Brother_HL-2270DW_series.print_colorspace:
    print.printer_Brother_HL-2270DW_series.print_command:
    print.printer_Brother_HL-2270DW_series.print_downloadfonts: false
    print.printer_Brother_HL-2270DW_series.print_duplex: 0
    print.printer_Brother_HL-2270DW_series.print_edge_bottom: 0
    print.printer_Brother_HL-2270DW_series.print_edge_left: 0
    print.printer_Brother_HL-2270DW_series.print_edge_right: 0
    print.printer_Brother_HL-2270DW_series.print_edge_top: 0
    print.printer_Brother_HL-2270DW_series.print_evenpages: true
    print.printer_Brother_HL-2270DW_series.print_footercenter:
    print.printer_Brother_HL-2270DW_series.print_footerleft: &PT
    print.printer_Brother_HL-2270DW_series.print_footerright: &D
    print.printer_Brother_HL-2270DW_series.print_headercenter:
    print.printer_Brother_HL-2270DW_series.print_headerleft:
    print.printer_Brother_HL-2270DW_series.print_headerright:
    print.printer_Brother_HL-2270DW_series.print_in_color: true
    print.printer_Brother_HL-2270DW_series.print_margin_bottom: 0.5
    print.printer_Brother_HL-2270DW_series.print_margin_left: 0.5
    print.printer_Brother_HL-2270DW_series.print_margin_right: 0.5
    print.printer_Brother_HL-2270DW_series.print_margin_top: 0.5
    print.printer_Brother_HL-2270DW_series.print_oddpages: true
    print.printer_Brother_HL-2270DW_series.print_orientation: 0
    print.printer_Brother_HL-2270DW_series.print_page_delay: 50
    print.printer_Brother_HL-2270DW_series.print_paper_data: 1
    print.printer_Brother_HL-2270DW_series.print_paper_height: 11.00
    print.printer_Brother_HL-2270DW_series.print_paper_name:
    print.printer_Brother_HL-2270DW_series.print_paper_size_type: 0
    print.printer_Brother_HL-2270DW_series.print_paper_size_unit: 0
    print.printer_Brother_HL-2270DW_series.print_paper_width: 8.50
    print.printer_Brother_HL-2270DW_series.print_plex_name:
    print.printer_Brother_HL-2270DW_series.print_resolution: 0
    print.printer_Brother_HL-2270DW_series.print_resolution_name:
    print.printer_Brother_HL-2270DW_series.print_reversed: false
    print.printer_Brother_HL-2270DW_series.print_scaling: 1.00
    print.printer_Brother_HL-2270DW_series.print_shrink_to_fit: true
    print.printer_Brother_HL-2270DW_series.print_to_file: false
    print.printer_Brother_HL-2270DW_series.print_unwriteable_margin_bottom: 0
    print.printer_Brother_HL-2270DW_series.print_unwriteable_margin_left: 0
    print.printer_Brother_HL-2270DW_series.print_unwriteable_margin_right: 0
    print.printer_Brother_HL-2270DW_series.print_unwriteable_margin_top: 0
    Graphics
    Adapter Description: NVIDIA GeForce GTX 560
    Vendor ID: 0x10de
    Device ID: 0x1201
    Adapter RAM: 1024
    Adapter Drivers: nvd3dumx,nvwgf2umx,nvwgf2umx nvd3dum,nvwgf2um,nvwgf2um
    Driver Version: 9.18.13.4052
    Driver Date: 7-2-2014
    Direct2D Enabled: true
    DirectWrite Enabled: true (6.2.9200.16492)
    ClearType Parameters: ClearType parameters not found
    WebGL Renderer: false
    GPU Accelerated Windows: 2/2 Direct3D 10
    AzureCanvasBackend: direct2d
    AzureSkiaAccelerated: 0
    AzureFallbackCanvasBackend: cairo
    AzureContentBackend: direct2d
    JavaScript
    Incremental GC: 1
    Accessibility
    Activated: 0
    Prevent Accessibility: 0
    Library Versions (expected minimum version / version in use)
    NSPR: 4.10.6 / 4.10.6
    NSS: 3.16.2.1 Basic ECC / 3.16.2.1 Basic ECC
    NSS Util: 3.16.2.1 / 3.16.2.1
    NSS SSL: 3.16.2.1 Basic ECC / 3.16.2.1 Basic ECC
    NSS S/MIME: 3.16.2.1 Basic ECC / 3.16.2.1 Basic ECC
    Account5 is the one affected, and it is still attempting to sync. It also appears that my sync choices were altered without permission or notification. I always keep the Deleted folder and the POP subfolder from Hotmail accounts disabled, since they merely duplicate content in other folders. When re-checking the sync settings because syncing was taking so long, I found that the POP subfolder was selected for sync again.
    The crash report is from a later TB crash, the first one ever on this system.
    3. Windows 7 Ultimate Signature Edition, Cox Communications (shouldn't matter whatsoever), the affected account is Hotmail/Outlook.com, unaffected accounts: 2 GMail, 1 Yahoo, 1 CenturyLink

  • Getting `No such file or directory` error while trying to open bdb database

    I have four multi-threaded processes (2 writer and 2 reader processes) which make use of the Berkeley DB transactional data store. I have multiple environments, and the associated database files and log files are located in separate directories (please refer to the DB_CONFIG below). When all four processes start to open and close databases in the environments very quickly, one of the reader processes throws a `No such file or directory` error even though the file actually exists.
    I am making use of Berkeley DB 4.7.25 for testing out these applications.
    The four application names are as follows:
    Writer 1
    Writer 2
    Reader 1
    Reader 2
    The application description is as follows:
    Writer 1 owns 8 environments, each environment having 123 Berkeley databases created using the HASH access method. At any point in time, Writer 1 will be acting on 24 database files across the 8 environments (3 database files per environment) for write operations, whereas the reader processes will be accessing all 123 database files per environment (123 * 8 environments = 984 database files in total) for read activity. Writer 2 has a similar configuration - 8 separate environments and so on.
    Writer 1, Reader 1 and Reader 2 processes share the environments created by Writer 1
    Writer 2 and Reader 2 processes share the environments created by Writer 2
    My DB_CONFIG file is configured as follows
    set_cachesize 0 104857600 1   # 100 MB
    set_lg_bsize 2097152                # 2 MB
    set_data_dir ../../vol1/data/
    set_lg_dir ../../vol31/logs/SUBID/
    set_lk_max_locks 1500
    set_lk_max_lockers 1500
    set_lk_max_objects 1500
    set_flags db_auto_commit
    set_tx_max 200
    mutex_set_increment 7500
    Has anyone come across this problem before or is it something to do with the configuration?

    Hi Michael,
    I should have mentioned how we are making use of the DB_TRUNCATE flag in my previous reply. Sorry for that.
    From the writers, DB truncation happens periodically. During truncation the DB handle is not associated with any environment handle (i.e. it is a stand-alone DB), and the following parameters are passed to the db->open function call:
    DB->open(db,
             NULL,        /* txnid */
             file,        /* file: absolute DB file path */
             NULL,        /* database */
             DB_HASH,     /* type */
             DB_READ_UNCOMMITTED | DB_TRUNCATE | DB_CREATE | DB_THREAD,
             0);          /* mode */
    Also, the DB_DUP flag is set.
    As you have rightly pointed out, the `No such file or directory` error is occurring during truncation.
    While a database is being truncated it will not be found by others trying to open it. We simulated this by stopping the writer process (responsible for truncation) and opening and closing the databases repeatedly via the readers; the reader process did not crash. When readers and writers were run simultaneously, we got the `No such file or directory` error. Is there any solution to this problem? (In our case writers and readers run independently, so the readers won't learn about truncation from the writers.)
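    One possible direction, sketched under the assumption that truncation can be moved inside the shared environment instead of being done on a stand-alone handle (com.sleepycat.db Java API; names are illustrative and error handling is elided): Database.truncate under a transaction makes readers serialize on locks rather than observing the file disappear and reappear:

    import com.sleepycat.db.*;

    public class TruncateSketch {
        // Truncate an existing HASH database inside its (transactional)
        // environment; returns the number of records discarded.
        static int truncateDb(Environment env, String fileName) throws Exception {
            Transaction txn = env.beginTransaction(null, null);
            DatabaseConfig dbCfg = new DatabaseConfig();
            dbCfg.setType(DatabaseType.HASH);
            dbCfg.setTransactional(true);
            Database db = env.openDatabase(txn, fileName, null, dbCfg);
            int discarded = db.truncate(txn, true); // true: count the records
            txn.commit();
            db.close();
            return discarded;
        }
    }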
    Also, we are facing one more issue related to DB_TRUNCATE. Consider the following scenario:
    - The reader process holds a reference to the database (X) handle in environment 'Y' at time t1
    - The writer process opens the database (X) in DB_TRUNCATE mode at time t2 (where t2 > t1)
    - After truncation, the writer process closes the database and joins the environment 'Y'
    - After this, any writes to database X are not visible to the reader process
    - Once the reader process closes the database X and re-joins the environment, all the records inserted by the writer process are visible
    Is this the right behavior? If yes, how can we make writes visible to the reader process without closing and re-opening the database in the above scenario?
    Also, when db_set_errfile (http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/db_set_errfile.html) was set, we did not get any additional information in the error file.
    Thanks,
    Magesh

  • Error: is not a primary database file.

    Hello
    If I try to set a database online with:
    ALTER DATABASE mydb SET online
    this error occurs:
    Msg 5171, Level 16, State 1, Line 1
    E:\Data\mydb_log.ldf is not a primary database file.
    Msg 5171, Level 16, State 2, Line 1
    E:\Data\mydb.mdf is not a primary database file.
    File activation failure. The physical file name "E:\Data\mydb.mdf" may be incorrect.
    Msg 945, Level 14, State 2, Line 1
    Database 'mydb' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
    Msg 5069, Level 16, State 1, Line 1
    ALTER DATABASE statement failed.
    The path of ldf and mdf file is correct.
    And if I delete the database ("DROP DATABASE mydb") and attach the files again, then it works.
    Thanks for your help

    Cause
    This problem generally occurs if the MDF file of your SQL Server database has been damaged; the cause could be file header corruption or wrong information in the file header.
    All such situations lead to the inaccessibility of the MDF file and the need to repair and restore the database. It is important to work around this problem, as it may put your business data at risk.
    MDF repair is possible with the help of third-party applications known as SQL recovery software. Such software is intended to help in cases of MDF corruption and to allow you to perform MDF recovery.
    To perform SQL repair using such software, it is not necessary for the user to have deep technical knowledge, as MDF repair software comes with an interactive user interface. SQL recovery software does a systematic scan of corrupted MDF files, then repairs and restores them in the original format.
    Stellar Phoenix SQL Database Recovery is an advanced SQL recovery product which aims at effective MDF recovery in cases of corruption. This SQL repair software comes with an interactive, simple user interface.
    This MDF repair software is intended to carry out a systematic scan of the entire MDF file and extract the data from it. It claims to restore MDF objects including tables, reports, forms, macros, database constraints, stored procedures, triggers etc.
    Cheers, Sridhar

  • Problem in recover physical standby database(Data Guard) by rman

    Hello to all
    I have created a physical standby database. I want to make a backup of it with RMAN so that when I lose its datafiles I can restore them. Making the backup and restoring works fine, but during recovery I encounter some problems.
    The scenario is as follows:
    1- In RMAN I create a backup of the standby database with this command:
    backup database plus archivelog delete all input;
    2- I run this command in RMAN to recover the standby database:
    run{
    2> set until scn 1392701;
    3> restore database;
    4> recover database;
    5> }
    (1392701 is extracted from this query "SELECT MAX(NEXT_CHANGE#)+1 UNTIL_SCN FROM V$LOG_HISTORY LH,
    V$DATABASE DB WHERE LH.RESETLOGS_CHANGE#=DB.RESETLOGS_CHANGE# AND LH.RESETLOGS_TIME =
    DB.RESETLOGS_TIME;" "http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/rman.htm")
    but RMAN result is like this:
    executing command: SET until clause
    Starting restore at 13-DEC-08
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting datafile backupset restore
    channel ORA_DISK_1: specifying datafile(s) to restore from
    backup set
    restoring datafile 00001 to /u01/app/oracle/oradata/sari/system01.dbf
    restoring datafile 00002 to /u01/app/oracle/oradata/sari/undotbs01.dbf
    restoring datafile 00003 to /u01/app/oracle/oradata/sari/sysaux01.dbf
    restoring datafile 00004 to /u01/app/oracle/oradata/sari/users01.dbf
    restoring datafile 00005 to /u01/app/oracle/oradata/sari/example01.dbf
    restoring datafile 00006 to /u01/app/oracle/oradata/sari/users02.dbf
    channel ORA_DISK_1: reading from backup piece /home/oracle/backup/0ek24dt4_1_1
    channel ORA_DISK_1: restored backup piece 1
    piece handle=/home/oracle/backup/0ek24dt4_1_1
    tag=TAG20081213T042506
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:07
    Finished restore at 13-DEC-08
    Starting recover at 13-DEC-08
    using channel ORA_DISK_1
    starting media recovery
    archive log thread 1 sequence 116 is already on disk as file /u01/app/oracle/oradata/archive/1_116_666786084.arc
    archive log thread 1 sequence 117 is already on disk as file /u01/app/oracle/oradata/archive/1_117_666786084.arc
    archive log filename=/u01/app/oracle/oradata/archive/1_116_666786084.arc thread=1 sequence=116
    archive log filename=/u01/app/oracle/oradata/archive/1_117_666786084.arc thread=1 sequence=117
    unable to find archive log
    archive log thread=1 sequence=118
    RMAN-03002: failure of recover command at 12/13/2008 05:14:13
    RMAN-06054: media recovery requesting unknown log: thread 1
    seq 118 lowscn 1392700
    3- Then I decrease 1392701 to 1392700 and run this command:
    run{
    2> set until scn 1392700;
    3> restore database ;
    4> recover database;
    5> }
    executing command: SET until clause
    Starting restore at 13-DEC-08
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting datafile backupset restore
    channel ORA_DISK_1: specifying datafile(s) to restore from
    backup set
    restoring datafile 00001 to /u01/app/oracle/oradata/sari/system01.dbf
    restoring datafile 00002 to /u01/app/oracle/oradata/sari/undotbs01.dbf
    restoring datafile 00003 to /u01/app/oracle/oradata/sari/sysaux01.dbf
    restoring datafile 00004 to /u01/app/oracle/oradata/sari/users01.dbf
    restoring datafile 00005 to /u01/app/oracle/oradata/sari/example01.dbf
    restoring datafile 00006 to /u01/app/oracle/oradata/sari/users02.dbf
    channel ORA_DISK_1: reading from backup piece /home/oracle/backup/0ek24dt4_1_1
    channel ORA_DISK_1: restored backup piece 1
    piece handle=/home/oracle/backup/0ek24dt4_1_1 tag=TAG20081213T042506
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:08
    Finished restore at 13-DEC-08
    Starting recover at 13-DEC-08
    using channel ORA_DISK_1
    starting media recovery
    archive log thread 1 sequence 116 is already on disk as
    file /u01/app/oracle/oradata/archive/1_116_666786084.arc
    archive log thread 1 sequence 117 is already on disk as
    file /u01/app/oracle/oradata/archive/1_117_666786084.arc
    archive log filename=/u01/app/oracle/oradata/archive/1_116_666786084.arc thread=1
    sequence=116archive log
    filename=/u01/app/oracle/oradata/archive/1_117_666786084.arc
    thread=1 sequence=117Oracle Error:
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS
    would get error below
    ORA-01152: file 1 was not restored from a sufficiently old backup
    ORA-01110: data file 1: '/u01/app/oracle/oradata/sari/system01.dbf'
    media recovery complete, elapsed time: 00:00:10
    Finished recover at 13-DEC-08
    4- If I run
    run{
    restore database;
    recover database;
    }
    I will receive the error from step 2 (RMAN-06054: media recovery requesting unknown log: thread 1
    seq 118 lowscn 1392700).
    5- If I just restore the database, don't perform recovery with RMAN, and restart redo apply, everything seems fine,
    but on opening the database I'll receive the ORA-01152: file 1 was not restored from a sufficiently old backup
    ORA-01110: data file 1: '/u01/app/oracle/oradata/sari/system01.dbf' error.
    Do you know what the problem is?
    thanks
    Edited by: ARKH on Dec 12, 2008 11:06 PM

    hi
    I have found the solution myself. When I recover the standby database,
    the recovery runs, but at the end it raises the error (RMAN-06054: media recovery requesting unknown log: thread 1
    seq 118 lowscn 1392700). But if I begin redo apply before opening the database,
    and wait until all redo apply processes start and communication between the
    standby database and the primary database starts, then I can
    open the standby database and no error is raised.
    But if I open the database before restarting redo apply, I'll receive the
    ORA-01152: file 1 was not restored from a sufficiently old backup
    ORA-01110: data file 1: '/u01/app/oracle/oradata/sari/system01.dbf' error.
    thanks

  • Problem creating physical Standby database with RMAN

    Hi All
    I am trying to learn oracle dataguard and as part of the process learning creating standby database.
    Platform : Sun-Fire-V250 Sparc, Solaris 10
    Database Version - Oracle 11R2
    I am creating standby database on same server, so directory structure is different.
    Following the instructions on Oracle site I managed to create a functional physical standby database. But I am not able to create standby database using RMAN. These are the steps that I followed-
    1.Set up all necessary parameters on primary database as done while creating physical standby database manually, eg setting force logging, creating standby logs etc.
    2.Edited the parameter file on the primary database as done during manual physical standby database creation. Some of the changes made are-
    On Primary Database:
    *.FAL_CLIENT='orcl11020' #Primary database unique name
    *.FAL_SERVER='stdby_11' #Standby database unique name
    db_file_name_convert='/<dir>/oradata/stdby_11','/<dir>/oradata/orcl11020'
    log_file_name_convert='/<dir>/oradata/stdby_11','/<dir>/oradata/orcl11020','/<dir>/oradata/stdby_11/redo_mem','/<dir>/oradata/orcl11020/redo_mem'
    standby_file_management=auto
    *.log_archive_config='DG_CONFIG=(orcl11020,stdby_11)'
    *.log_archive_dest_1='LOCATION=/<dir>/flash_recovery_area/ORCL11020/archivelog
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES) db_unique_name=orcl11020'
    *.log_archive_dest_2='SERVICE=stdby_11 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=stdby_11'
    *.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
    *.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
    *.LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
    *.LOG_ARCHIVE_MAX_PROCESSES=30
    Copied same pfile for standby database and modified following-
    *.control_files='/<dir>/oradata/stdby_11/stdby_11.ctl','/<dir>/fra_stdby/stdby_11/stdby_11.ctl'
    *.db_name='orcl1102'
    *.db_unique_name='stdby_11'
    *.FAL_CLIENT='stdby_11'
    *.FAL_SERVER='orcl11020'
    db_file_name_convert='/<dir>/oradata/orcl11020','/<dir>/oradata/stdby_11'
    log_file_name_convert='/<dir>/oradata/orcl11020','/<dir>/oradata/stdby_11','/<dir>/oradata/orcl11020/redo_mem','/<dir>/oradata/stdby_11/redo_mem'
    standby_file_management=auto
    *.log_archive_dest_1='LOCATION=/<dir>/fra_stdby/STDBY_11/archivelog
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES) db_unique_name=stdby_11'
    *.log_archive_dest_2='SERVICE=orcl11020 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
    db_unique_name=orcl11020'
    3. Add relevant information in tnsnames.ora and listener.ora files and then restart listener.
    3. Created password file with same credential as primary database.
    4.Up-to-date RMAN backup of primary database available.
    5.Create standby controlfile with rman
    While the primary database is open (I tried with the primary database in mount mode as well)-
    $>rman catalog rman/paswd@rman target /
    RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY;
    6. Open a new terminal and startup standby database in nomount mode using parameter file created -
    $>ORACLE_SID=stdby_11
    $>export ORACLE_SID
    $>sqlplus / as sysdba
    SQL>STARTUP NOMOUNT pfile='<location>/initfilename.ora'
    SQL>quit
    $> rman AUXILIARY / target sys/passwd@orcl11020 catalog rman/passwd@rman
    RMAN>DUPLICATE TARGET DATABASE FOR STANDBY DORECOVER;
    RMAN finishes without error, but archive logs are not being transported. Looking at the log, the following caught my eye-
    Error 1017 received logging on to the standby
    Check that the primary and standby are using a password file
    and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
    and that the SYS password is same in the password files.
    returning error ORA-16191
    FAL[client, ARC2]: Error 16191 connecting to orcl11020 for fetching gap sequence
    Errors in file /<dir>/diag/rdbms/stdby_11/stdby_11/trace/stdby_11_arc2_24321.trc:
    ORA-16191: Primary log shipping client not logged on standby
    Errors in file /<dir>/diag/rdbms/stdby_11/stdby_11/trace/stdby_11_arc2_24321.trc:
    ORA-16191: Primary log shipping client not logged on standby
    So on both primary and standby I confirmed
    SQL> show parameter remote_login_passwordfile
    NAME TYPE VALUE
    remote_login_passwordfile string EXCLUSIVE
    To make doubly sure that the password files are the same, I shut down both databases, deleted the password files and recreated them with the same credentials.
    Password files are called - orapworcl11020 and orapwstdby_11
    Can someone guide me on where things are going wrong here, please?

    Not sure if I understood it clearly.
    SELECT * FROM V$ARCHIVE_GAP;
    returns no rows so there is no gap.
    But could you please explain the result of the previous query to me? To catch up again, on the standby when I check
    SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG
    SEQUENCE# APPLIED
    75 NO
    74 NO
    76 NO
    77 NO
    I understand that the archive files have been copied across but are not applied yet.
    On primary when I give your query -
    SELECT name as STANDBY,SEQUENCE#,applied, completion_time
    2 FROM v$archived_log
    3 where dest_id=2
    4 and sequence# BETWEEN 74 and 80;
    I get -
    STANDBY SEQUENCE# APPLIED COMPLETIO
    stdby_11 74 YES 28-JUN-11
    stdby_11 75 YES 28-JUN-11
    stdby_11 76 YES 29-JUN-11
    stdby_11 77 YES 29-JUN-11
    stdby_11 78 YES 29-JUN-11
    stdby_11 79 YES 29-JUN-11
    stdby_11 80 YES 29-JUN-11
    stdby_11 75 NO 07-JUL-11
    stdby_11 74 NO 07-JUL-11
    stdby_11 76 NO 07-JUL-11
    stdby_11 77 NO 07-JUL-11
    stdby_11 78 NO 07-JUL-11
    I have intentionally given
    sequence# BETWEEN 74 and 80
    because I know that in the current incarnation of the database, the max sequence is 78.
    So my understanding is that the rows between 28-29 June are from the previous incarnation; correct me if I am wrong.
    Archive files of the current incarnation (since I successfully created the standby database) are shipped but yet to be applied - am I right?
    Then my final question is, when will these archives be applied to standby database?
    I am sorry to ask too many questions but I am just trying to understand how it all works.
    Thanks for your help again

  • Database File Size Management

    Hello,
    I have an application that stores large records (capped at 1M each in size, but on average around 0.5 M), and can at any given time have hundreds of thousands of such records. We're using a BTree structure, and BDB has thus far acquitted itself rather well.
    One serious issue we are facing, however, is that the size of the database keeps growing. My expectation was that the database file would grow only on an as-needed basis, but would not grow if records have been deleted. Our application is transactional and replicated. We do have the txnNoSync flag set to true, and checkpoint every 60 seconds (this number is tunable). We have setReverseSplitOff set to true. Could this be the problem? Our page size is set to the maximum possible size, i.e., 65536 bytes.
    Thanks in advance,
    Prashanth

    Hi Prashanth,
    No, it has nothing to do with turning reverse splits off, the page size or anything else.
    It's just that Btree (and Hash) databases are grow-only. Although you free up space by deleting records within a Btree database, that space will not be returned to the filesystem; it is reused where possible. Here is more information on disk space considerations:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_misc/diskspace.html
    Also, to add to the information there, you could call DB->compact():
    http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/db_compact.html
    or use the db_dump and db_load utilities to do the compaction offline (that is, stopping all writes on the database). Note that if you use your own Btree comparison function you must modify the source code for the utilities so that they'll be aware of the order imposed.
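    Since the flag names in the question (txnNoSync, setReverseSplitOff) suggest the Java API, here is a hedged sketch of the compaction call in com.sleepycat.db (this assumes a release with DB->compact, i.e. 4.4 or later, and the stats getter name is inferred from the DB_COMPACT fields of the C API):

    import com.sleepycat.db.*;

    public class CompactSketch {
        static void compactInPlace(Database db) throws DatabaseException {
            CompactConfig cc = new CompactConfig();
            cc.setFreeSpace(true); // also try to return emptied pages to the filesystem
            CompactStats stats = db.compact(null,  // txn
                                            null,  // start key: beginning of tree
                                            null,  // stop key: end of tree
                                            null,  // out: page where compaction stopped
                                            cc);
            System.out.println("pages freed: " + stats.getPagesFree());
        }
    }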
    Let me know if you need more information on this.
    Regards,
    Andrei

  • Rman backup on physical standby database without cancelling MRP

    Hi all,
    Could anyone share whether it is possible to take an RMAN backup of a physical standby database without cancelling the MRP process?
    regards,

    Hi,
    On Standby Side:
    SQL> alter database mount;
    Database altered.
    SQL> alter database recover managed standby database using current logfile disconnect from session;
    Database altered.
    SQL> select max(Sequence#)  from v$archived_log;
    MAX(SEQUENCE#)
            405
    SQL> select max(Sequence#)  from v$archived_log where applied='YES';
    MAX(SEQUENCE#)
            404
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    [oracle@oel62-x64 Desktop]$ rman target /
    Recovery Manager: Release 11.2.0.3.0 - Production on Tue May 21 15:31:43 2013
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    connected to target database: ADMDB (DBID=4063877183, not open)
    RMAN> backup database plus archivelog delete all input;
    Starting backup at 21-MAY-13
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=32 device type=DISK
    channel ORA_DISK_1: starting archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=390 RECID=391 STAMP=815416638
    input archived log thread=1 sequence=391 RECID=392 STAMP=815421952
    input archived log thread=1 sequence=392 RECID=393 STAMP=815422343
    input archived log thread=1 sequence=393 RECID=394 STAMP=815422434
    input archived log thread=1 sequence=394 RECID=395 STAMP=815422570
    input archived log thread=1 sequence=395 RECID=396 STAMP=815476598
    input archived log thread=1 sequence=396 RECID=397 STAMP=815476615
    input archived log thread=1 sequence=397 RECID=398 STAMP=815476645
    input archived log thread=1 sequence=398 RECID=399 STAMP=815477471
    input archived log thread=1 sequence=399 RECID=400 STAMP=815477475
    input archived log thread=1 sequence=400 RECID=401 STAMP=815477628
    input archived log thread=1 sequence=401 RECID=403 STAMP=815584146
    input archived log thread=1 sequence=402 RECID=402 STAMP=815584137
    input archived log thread=1 sequence=403 RECID=405 STAMP=816017446
    input archived log thread=1 sequence=404 RECID=404 STAMP=816017444
    input archived log thread=1 sequence=405 RECID=406 STAMP=816017455
    channel ORA_DISK_1: starting piece 1 at 21-MAY-13
    channel ORA_DISK_1: finished piece 1 at 21-MAY-13
    piece handle=/u01/app/oracle/fast_recovery_area/stldb/STLDB/backupset/2013_05_21/o1_mf_annnn_TAG20130521T153202_8spm937d_.bkp tag=TAG20130521T153202 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
    channel ORA_DISK_1: deleting archived log(s)
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_14/o1_mf_1_390_8s48hfrp_.arc RECID=391 STAMP=815416638
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_14/o1_mf_1_391_8s4fohwb_.arc RECID=392 STAMP=815421952
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_14/o1_mf_1_392_8s4g1q0v_.arc RECID=393 STAMP=815422343
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_14/o1_mf_1_393_8s4g4l8z_.arc RECID=394 STAMP=815422434
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_14/o1_mf_1_394_8s4g8t9h_.arc RECID=395 STAMP=815422570
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_15/o1_mf_1_395_8s631622_.arc RECID=396 STAMP=815476598
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_15/o1_mf_1_396_8s631qjj_.arc RECID=397 STAMP=815476615
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_15/o1_mf_1_397_8s632od8_.arc RECID=398 STAMP=815476645
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_15/o1_mf_1_398_8s63whqc_.arc RECID=399 STAMP=815477471
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_15/o1_mf_1_399_8s63wly4_.arc RECID=400 STAMP=815477475
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_15/o1_mf_1_400_8s641d8j_.arc RECID=401 STAMP=815477628
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_16/o1_mf_1_401_8s9d21jk_.arc RECID=403 STAMP=815584146
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_16/o1_mf_1_402_8s9d1skv_.arc RECID=402 STAMP=815584137
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_21/o1_mf_1_403_8spm6p4h_.arc RECID=405 STAMP=816017446
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_21/o1_mf_1_404_8spm6mqj_.arc RECID=404 STAMP=816017444
    RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_21/o1_mf_1_405_8spm6yg0_.arc thread=1 sequence=405
    Finished backup at 21-MAY-13
    Starting backup at 21-MAY-13
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00001 name=/u01/app/oracle/oradata/stldb/system01.dbf
    input datafile file number=00002 name=/u01/app/oracle/oradata/stldb/sysaux01.dbf
    input datafile file number=00005 name=/u01/app/oracle/oradata/stldb/example01.dbf
    input datafile file number=00003 name=/u01/app/oracle/oradata/stldb/undotbs01.dbf
    input datafile file number=00006 name=/u01/app/oracle/oradata/stldb/appdata01.dbf
    input datafile file number=00004 name=/u01/app/oracle/oradata/stldb/users01.dbf
    channel ORA_DISK_1: starting piece 1 at 21-MAY-13
    channel ORA_DISK_1: finished piece 1 at 21-MAY-13
    piece handle=/u01/app/oracle/fast_recovery_area/stldb/STLDB/backupset/2013_05_21/o1_mf_nnndf_TAG20130521T153213_8spm9fnc_.bkp tag=TAG20130521T153213 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:02:15
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    including current control file in backup set
    including current SPFILE in backup set
    channel ORA_DISK_1: starting piece 1 at 21-MAY-13
    channel ORA_DISK_1: finished piece 1 at 21-MAY-13
    piece handle=/u01/app/oracle/fast_recovery_area/stldb/STLDB/backupset/2013_05_21/o1_mf_ncsnf_TAG20130521T153213_8spmfqxf_.bkp tag=TAG20130521T153213 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
    Finished backup at 21-MAY-13
    Starting backup at 21-MAY-13
    using channel ORA_DISK_1
    specification does not match any archived log in the repository
    backup cancelled because there are no files to backup
    Finished backup at 21-MAY-13
    RMAN> exit
    Recovery Manager complete.
    [oracle@oel62-x64 Desktop]$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Tue May 21 15:34:42 2013
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> select max(Sequence#) from  v$archived_log;
    MAX(SEQUENCE#)
            405
    SQL> select max(Sequence#) from v$archived_log where applied='YES';
    MAX(SEQUENCE#)
            404
    SQL>
    There is no problem backing up the database while MRP is running. But if you want to delete, you get RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process,
    and it will not delete that archived log, because it is needed for the standby or an upstream capture process.
    Updated
    When MRP is stopped:
    SQL> alter database recover managed standby database cancel;
    Database altered.
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    [oracle@oel62-x64 Desktop]$ rman target /
    Recovery Manager: Release 11.2.0.3.0 - Production on Tue May 21 15:46:07 2013
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    connected to target database: ADMDB (DBID=4063877183, not open)
    RMAN> backup database plus archivelog delete all input;
    Starting backup at 21-MAY-13
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=24 device type=DISK
    channel ORA_DISK_1: starting archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=405 RECID=406 STAMP=816017455
    channel ORA_DISK_1: starting piece 1 at 21-MAY-13
    channel ORA_DISK_1: finished piece 1 at 21-MAY-13
    piece handle=/u01/app/oracle/fast_recovery_area/stldb/STLDB/backupset/2013_05_21/o1_mf_annnn_TAG20130521T154617_8spn3s9w_.bkp tag=TAG20130521T154617 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    channel ORA_DISK_1: deleting archived log(s)
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_21/o1_mf_1_405_8spm6yg0_.arc RECID=406 STAMP=816017455
    Finished backup at 21-MAY-13
    Starting backup at 21-MAY-13
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00001 name=/u01/app/oracle/oradata/stldb/system01.dbf
    input datafile file number=00002 name=/u01/app/oracle/oradata/stldb/sysaux01.dbf
    input datafile file number=00005 name=/u01/app/oracle/oradata/stldb/example01.dbf
    input datafile file number=00003 name=/u01/app/oracle/oradata/stldb/undotbs01.dbf
    input datafile file number=00006 name=/u01/app/oracle/oradata/stldb/appdata01.dbf
    input datafile file number=00004 name=/u01/app/oracle/oradata/stldb/users01.dbf
    channel ORA_DISK_1: starting piece 1 at 21-MAY-13
    channel ORA_DISK_1: finished piece 1 at 21-MAY-13
    piece handle=/u01/app/oracle/fast_recovery_area/stldb/STLDB/backupset/2013_05_21/o1_mf_nnndf_TAG20130521T154618_8spn3v4f_.bkp tag=TAG20130521T154618 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:01:16
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    including current control file in backup set
    including current SPFILE in backup set
    channel ORA_DISK_1: starting piece 1 at 21-MAY-13
    channel ORA_DISK_1: finished piece 1 at 21-MAY-13
    piece handle=/u01/app/oracle/fast_recovery_area/stldb/STLDB/backupset/2013_05_21/o1_mf_ncsnf_TAG20130521T154618_8spn6779_.bkp tag=TAG20130521T154618 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
    Finished backup at 21-MAY-13
    Starting backup at 21-MAY-13
    using channel ORA_DISK_1
    specification does not match any archived log in the repository
    backup cancelled because there are no files to backup
    Finished backup at 21-MAY-13
    RMAN>
    The apply process is stopped and new redo is received from the primary.
    SQL> select max(Sequence#)  from v$archived_log;
    MAX(SEQUENCE#)
            407
    SQL> select max(Sequence#)  from v$archived_log where applied='YES';
    MAX(SEQUENCE#)
            405
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    [oracle@oel62-x64 Desktop]$ rman target /
    Recovery Manager: Release 11.2.0.3.0 - Production on Tue May 21 15:49:28 2013
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    connected to target database: ADMDB (DBID=4063877183, not open)
    RMAN> backup database plus archivelog delete all input;
    Starting backup at 21-MAY-13
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=32 device type=DISK
    channel ORA_DISK_1: starting archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=406 RECID=407 STAMP=816018527
    input archived log thread=1 sequence=407 RECID=408 STAMP=816018530
    channel ORA_DISK_1: starting piece 1 at 21-MAY-13
    channel ORA_DISK_1: finished piece 1 at 21-MAY-13
    piece handle=/u01/app/oracle/fast_recovery_area/stldb/STLDB/backupset/2013_05_21/o1_mf_annnn_TAG20130521T154937_8spnb1y3_.bkp tag=TAG20130521T154937 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    channel ORA_DISK_1: deleting archived log(s)
    RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_21/o1_mf_1_406_8spn8hkn_.arc thread=1 sequence=406
    RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process
    archived log file name=/u01/app/oracle/fast_recovery_area/stldb/STLDB/archivelog/2013_05_21/o1_mf_1_407_8spn8l69_.arc thread=1 sequence=407
    Finished backup at 21-MAY-13
    Starting backup at 21-MAY-13
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00001 name=/u01/app/oracle/oradata/stldb/system01.dbf
    input datafile file number=00002 name=/u01/app/oracle/oradata/stldb/sysaux01.dbf
    input datafile file number=00005 name=/u01/app/oracle/oradata/stldb/example01.dbf
    input datafile file number=00003 name=/u01/app/oracle/oradata/stldb/undotbs01.dbf
    input datafile file number=00006 name=/u01/app/oracle/oradata/stldb/appdata01.dbf
    input datafile file number=00004 name=/u01/app/oracle/oradata/stldb/users01.dbf
    channel ORA_DISK_1: starting piece 1 at 21-MAY-13
    channel ORA_DISK_1: finished piece 1 at 21-MAY-13
    piece handle=/u01/app/oracle/fast_recovery_area/stldb/STLDB/backupset/2013_05_21/o1_mf_nnndf_TAG20130521T154939_8spnb3f5_.bkp tag=TAG20130521T154939 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:01:15
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    including current control file in backup set
    including current SPFILE in backup set
    channel ORA_DISK_1: starting piece 1 at 21-MAY-13
    channel ORA_DISK_1: finished piece 1 at 21-MAY-13
    piece handle=/u01/app/oracle/fast_recovery_area/stldb/STLDB/backupset/2013_05_21/o1_mf_ncsnf_TAG20130521T154939_8spndhly_.bkp tag=TAG20130521T154939 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
    Finished backup at 21-MAY-13
    Starting backup at 21-MAY-13
    using channel ORA_DISK_1
    specification does not match any archived log in the repository
    backup cancelled because there are no files to backup
    Finished backup at 21-MAY-13
    RMAN>
    I think the output above is self-explanatory.
    Regards
    Mahir M. Quluzade
    Edited by: Mahir M. Quluzade on May 21, 2013 3:36 PM

  • SQL and database file

    Hello. (Sorry for my English.)
    I have a legacy system , ported to opencobol that uses Berkeley DB. I can successfully open the database files using java both in windows and linux. However i want to create a web front end for the application.
    Data is stored using strange COBOL structures (most numbers are stored as text or as 4 bits per digit) and other fancy stuff, so custom parsers have to be written. (I don't remember the name of the BDB classes for that.)
    I have been reading the Berkeley DB documentation for 12 hours total today but couldn't find answers to some questions...
    Is it possible to attach the database to the dbsql.exe (SQLite) shell? I tried to do "attach '..pathtofile' as NULL", but I get "multiple databases specified and not supported" and other unhelpful messages...
    I think that what I tried probably doesn't make sense, as columns aren't specified and nothing is in place for this to work. It is a simple key-value database.
    So I am a little stuck here.
    The only solution I can think of is to synchronize with a relational database. But I don't have triggers or anything that helps...
    Note that I want one-way synchronization. The "other way" will have very limited tasks.
    It seems to me that checking every X minutes whether something changed, using a cursor, is a demanding task. So I am wondering: is there a way to track changes?
    Thank you :)
    Edited by: 843912 on 12 Mar 2011 7:13 PM

    Hello,
    I am not sure I completely understand your question. If you are asking about importing
    and exporting data from a Berkeley DB database into an Oracle DB Table in Berkeley DB
    releases prior to 5.*, then this can be done with the Oracle OCI interface. If I am
    misunderstanding the question, please let me know.
    Thanks,
    Sandra

  • Finding whole mapping from database file - filesystems - logical volume manager - logical partitions

    Hello,
    I am trying to reverse-engineer the mapping from database files to their physical carriers on logical partitions (fdisk),
    and I am not able to trace the whole path from the filesystem down to the partitions through the intermediate logical volumes.
    1. select from dba_data_files ...
    2. df -k
    to get the listing of filesystems
    3. vgdisplay
    4. lvdisplay
    5. cat /proc/partitions
    6. fdisk /dev/sda -l
       fdisk /dev/sdb -l
    The problem I have is that I am not able to determine which partitions make up which logical volumes, and then which logical volumes make up which filesystem.
    Thank you for hint or direction.

    Hello Wadhah,
    Before starting the discussion, let me explain that I am a newcomer to Oracle Linux. My background is DBA experience with Oracle on IBM UNIX (AIX 6.1) and Oracle 11gR2.
    The first task is to get the complete picture of one database on Oracle Linux for future maintenance tasks, to make the database more flexible, and to prepare for more intensive work:
    -adding datafiles,
    -optimize/replace archive redolog files on separated filesystem from ORACLE_BASE
    - separating auditing log files from $ORACLE_BASE to own filesystem
    - separating diag directory on separated file system ( logging, tracing )
    - adding/enlarging TEMP ts
    - adding/enlarging undo
    - enlarging redo for a higher transaction rate (to reduce the number of log switches per unit time observed in alert_SID.log)
    - adding online redo and control files mirrors
    So in this context try to inspect content of the disk space from the highest logical level in V$, DBA views down to fdisk partitions.
    The idea was to go in these steps:
    1. select paths of present online redo groups, datafiles, controlfiles, temp, undo
       from V$, dba views
    2. For the paths got from the step 1
    locate the filesystems, and for those filesystems inspect which are on logical volumes and which are directly on partitions.
    3. For all used logical volumes locate the logical partitions and their disks /dev/sda, /dev/sdb, ...
