BerkeleyDB + Tomcat + large number of databases.

Hi all,
For my bioinformatics project, I'd like to transform a large number of SQL databases (see http://hgdownload.cse.ucsc.edu/goldenPath/hg18/database/ ) into a set of read-only BerkeleyDB JE databases.
In my web application, the Environment would be loaded in Tomcat, and one can imagine a servlet/JSP querying/browsing each database.
So I wonder: what are the best practices?
Should I open each JE Database for each HTTP request and close it at the end of the request?
Or should I just leave each Database open once it has been opened? Would it be a problem if all the primary and secondary databases were open at the same time? Can I share one Database handle across multiple threads?
Something else?
Many thanks in advance for your help,
Pierre

Hi Pierre,
Normally you should keep the Environment and all Databases open for the duration of the process, since opening and closing a database (and certainly an environment) per request is expensive and unnecessary. However, each open database takes some memory, so if you have an extremely large number of databases (thousands or more), you should consider opening and closing the databases at each request, or for better performance keeping a cache of open databases. Whether this is necessary depends on how much memory you have and how many databases.
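To make that concrete, here is a rough sketch of the pattern (not from Mark's reply; the class and constant names are my own): the Environment is opened once for the whole web application, and a small LRU cache keeps recently used Database handles open, closing the least recently used one when the cache is full.

import java.io.File;
import java.util.LinkedHashMap;
import java.util.Map;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseException;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

// Sketch only: one read-only Environment per web application plus a small
// LRU cache of open Database handles. JE Environment and Database handles
// can be shared between request threads.
public class JeHolder {
    private static final int MAX_OPEN_DBS = 100; // tune to available memory

    private final Environment env;
    private final Map<String, Database> cache =
        new LinkedHashMap<String, Database>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<String, Database> eldest) {
                if (size() > MAX_OPEN_DBS) {
                    try {
                        eldest.getValue().close(); // evict the least recently used handle
                    } catch (DatabaseException e) {
                        // log and continue; a real version must also make sure
                        // no request is still using the evicted handle
                    }
                    return true;
                }
                return false;
            }
        };

    public JeHolder(File envDir) throws DatabaseException {
        EnvironmentConfig ec = new EnvironmentConfig();
        ec.setReadOnly(true); // the converted UCSC tables are read-only
        env = new Environment(envDir, ec);
    }

    public synchronized Database getDatabase(String name) throws DatabaseException {
        Database db = cache.get(name);
        if (db == null) {
            DatabaseConfig dc = new DatabaseConfig();
            dc.setReadOnly(true);
            db = env.openDatabase(null, name, dc);
            cache.put(name, db);
        }
        return db;
    }

    public synchronized void close() throws DatabaseException { // e.g. from a ServletContextListener
        for (Database db : cache.values()) {
            db.close();
        }
        env.close();
    }
}

If the number of databases is small enough that they all fit in memory, you can drop the eviction logic and simply keep every handle open for the life of the web application.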
You'll find the answer to your multi-threading question in the getting started guide.
Please read the docs and also search the forum.
--mark

Similar Messages

  • Latency increase with large number of databases

    Hello,
    I'm testing how BDB performs on random updates as the number of databases increases. I'm seeing an increase in latency as the number of databases goes up, and strace shows that more time is spent in futex() when there are more databases. Is this expected behavior?
    Thanks!
    --Michi
    Set Up:
    - BDB version: db-5.1.19
    - Operating system: RHEL4
    - Number of records: 2.5M
    - Record key size: 32B
    - Record value size: 4KB
    - Page size: 32KB
    - Access method: BTREE
    - Records are inserted into a database based on the hash of the key (see the sketch below)
    - BDB is accessed via an RPC server
    - RPC server thread pool size: 64; all the threads share the same env and db handles
    - Number of RPC client processes: 64
    - Each client does random updates; throughput is throttled to about 250 requests/sec in total, which means each process does about 4 requests/sec
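    As an aside (an illustration, not part of the original setup description), the hash-based routing of a record to one of the N databases could look like this; the method and names are assumptions:
    import java.util.Arrays;

    public class Shards {
        // Pick which of the N databases a record belongs to, based on a stable hash of its key.
        public static int shardFor(byte[] key, int numDatabases) {
            int h = Arrays.hashCode(key);            // any stable hash of the key bytes works
            return (h & 0x7fffffff) % numDatabases;  // clear the sign bit, then take the modulo
        }
    }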
    Flags used for env and db:
    ENV flags: DB_THREAD | DB_RECOVER | DB_CREATE | DB_READ_COMMITTED | DB_INIT_TXN |
             DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL
    DB flags: DB_AUTO_COMMIT | DB_CREATE | DB_THREAD
    DB_CONFIG:
    mutex_set_max 2000000
    set_cachesize 1 0 1
    set_lk_max_objects 2000
    set_lk_max_locks 10000
    set_lk_max_lockers 10000
    set_lg_regionmax 8388608
    set_lk_detect DB_LOCK_DEFAULT
    set_thread_count 20000
    set_lg_max 1073741824
    Results:
    Number of databases: 256
    Average latency: 52 milliseconds
    strace output
    % time     seconds  usecs/call     calls    errors syscall
    64.74  121.199645        1032    117398           recvfrom
    17.21   32.217921         487     66104      9335 futex
      8.76   16.409370        2724      6024           pread
      2.47    4.630256         813      5694           fdatasync
      2.19    4.102371         105     39162           sendto
    Number of databases: 512
    Average latency: 123 milliseconds
    strace output
    % time     seconds  usecs/call     calls    errors syscall
    58.33  121.384230         933    130158     20045 futex
    23.06   47.977408         383    125146           recvfrom
      4.41    9.167276        1425      6431           pread
      3.11    6.472676         155     41716           sendto
      3.08    6.414824         366     17506           sched_yield
    Number of databases: 1024
    Average latency: 133 milliseconds
    strace output
    % time     seconds  usecs/call     calls    errors syscall
    70.46  102.981390        1077     95656     14884 futex
      9.88   14.435594         169     85256           recvfrom
      5.48    8.008787        1825      4389           pread
      3.13    4.576880         368     12436           sched_yield
      2.98    4.351077         153     28415           sendto
    ...


  • OutOfMemoryError with large number of databases

    Hey,
    I was wondering how the databases themselves are tracked in an environment, and whether caching is handled differently for this information. We have one environment with many (~16,000) databases, and each database has only a few entries. When we start our process with a 64MB heap size, it gets an OutOfMemoryError just opening the environment, even with maxMemoryPercent set to 10%. It seems like BDB is not handling the database info well. Any ideas would be helpful.
    thanks,
    -james

    Hi James,
    Presently, we never evict a DB after it's opened or encountered during recovery. Each DB takes about 2,000 bytes. So if you have 16K DBs you need approximately 32MB of memory, assuming all of them could be opened or recovered during a process lifetime. Unfortunately, even closing them does not cause them to be evicted. If you encounter them during recovery, this will also pull them into memory.
    We have an FAQ entry on this:
    http://www.oracle.com/technology/products/berkeley-db/faq/je_faq.html#37
    So you will need a bigger cache size. If you are encountering this during recovery, then you could try a more frequent checkpoint interval.
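    (For illustration only, not from this reply: one way to give JE a bigger cache and a more frequent checkpoint interval through EnvironmentConfig. The property names are real JE parameters, but the sizes are placeholders and assume the JVM heap is raised accordingly, e.g. -Xmx512m.)
    import java.io.File;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public class JeCacheTuning {
        public static Environment open(File envDir) throws DatabaseException {
            EnvironmentConfig ec = new EnvironmentConfig();
            // Explicit cache size instead of a percentage of a small heap.
            ec.setConfigParam("je.maxMemory", String.valueOf(256L * 1024 * 1024));                  // 256 MB
            // Checkpoint more often so recovery touches fewer databases.
            ec.setConfigParam("je.checkpointer.bytesInterval", String.valueOf(10L * 1024 * 1024));  // 10 MB
            return new Environment(envDir, ec);
        }
    }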
    I hope this is useful.
    Regards,
    Charles Lamb

  • Best way to delete large number of records but not interfere with tlog backups on a schedule

    I've inherited a system with multiple databases, and there are DB and tlog backups that run on schedules. There is a list of tables that need a lot of records purged from them. What would be a good approach for deleting the old records?
    I've been digging through old posts and reading best practices, but I'm still not sure of the best way to attack it.
    Approach #1
    A one-time delete that does everything: delete all the old records, in batches of say 50,000 at a time.
    After each run through all the tables for that DB, execute a tlog backup.
    Approach #2
    Create a job that does a similar process as above, except don't loop; only do the batch once. Have the job scheduled to start, say, on the half hour, assuming the tlog backups run every hour.
    Note:
    Some of these (well, most) are going to have relations on them.

    Hi shiftbit,
    According to your description, I have changed the type of this question to a discussion, so that more experts will focus on this issue and assist you. When deleting a large number of records from tables, you can use bulk (batched) deletions so that the transaction log does not keep growing and run out of disk space. If you can take the table offline for maintenance, a complete reorganization is always best, because it does the delete and places the table back into a pristine state.
    For more information about deleting a large number of records without affecting the transaction log, see:
    http://www.virtualobjectives.com.au/sqlserver/deleting_records_from_a_large_table.htm
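    (A rough sketch of the batched-delete idea, not taken from the linked article: delete in small chunks, each committed on its own, so each transaction stays short and the log can be cleared by the scheduled tlog backups. The table name, column, and batch size are placeholders.)
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Timestamp;

    public class BatchedPurge {
        public static void purge(String jdbcUrl, Timestamp cutoff) throws Exception {
            try (Connection con = DriverManager.getConnection(jdbcUrl)) {
                con.setAutoCommit(true);               // each batch commits on its own
                int deleted;
                do {
                    try (PreparedStatement ps = con.prepareStatement(
                            "DELETE TOP (50000) FROM dbo.SomeOldTable WHERE created_at < ?")) {
                        ps.setTimestamp(1, cutoff);
                        deleted = ps.executeUpdate(); // small batches keep each transaction short
                    }
                } while (deleted > 0);                 // repeat until nothing old remains
            }
        }
    }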
    Hope it can help.
    Regards,
    Sofiya Li
    Sofiya Li
    TechNet Community Support

  • Problem fetch large number of records

    Hi
    I want to fetch a large number of records from the database, and I use a secondary index database to improve performance. For example, my database has 100,000 records and a query fetches 10,000 of them. I use the secondary database as an index and move through it until I have fetched all of the records that match my condition, but the performance of this loop is terrible.
    I know that with DB_MULTIPLE I could fetch all of the information at once and performance would improve, but
    I read that I cannot use this flag when I use a secondary database as an index.
    Please tell me which flag, or what implementation, will fetch all of the matching records together so that I can manage the data in my language.
    Thanks a lot,
    regards
    saeed

    Hi Saeed,
    Could you post your source code here, compiled and ready to be executed, so we can take a look at the loop section?
    You won't be able to do bulk fetch, that is, retrieval with DB_MULTIPLE, given the fact that the records in the primary are unordered by master (you don't have 40K consecutive records with master='master1'). So the only way to do things in this situation would be to position a cursor in the secondary on the first record with the secondary key 'master1', retrieve all the duplicate data (the primary keys into the primary db) one by one, and do the corresponding gets in the primary database based on the retrieved keys.
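    (For illustration only: a rough sketch of that cursor walk written against the Berkeley DB Java Edition API, to match the other new examples on this page; the C API follows the same pattern with a cursor on the secondary and the DB_SET / DB_NEXT_DUP flags. Names outside the com.sleepycat.je package are assumptions.)
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.LockMode;
    import com.sleepycat.je.OperationStatus;
    import com.sleepycat.je.SecondaryCursor;
    import com.sleepycat.je.SecondaryDatabase;

    public class SecondaryScan {
        // Visit every record whose secondary key matches secKeyBytes (e.g. 'master1').
        public static void scan(SecondaryDatabase secDb, byte[] secKeyBytes)
                throws DatabaseException {
            DatabaseEntry secKey = new DatabaseEntry(secKeyBytes);
            DatabaseEntry priKey = new DatabaseEntry();
            DatabaseEntry data = new DatabaseEntry();
            SecondaryCursor cursor = secDb.openSecondaryCursor(null, null);
            try {
                // Position on the first duplicate for this secondary key; the secondary
                // cursor returns the primary key and the primary data together.
                OperationStatus status =
                    cursor.getSearchKey(secKey, priKey, data, LockMode.DEFAULT);
                while (status == OperationStatus.SUCCESS) {
                    process(priKey, data); // application-specific work
                    status = cursor.getNextDup(secKey, priKey, data, LockMode.DEFAULT);
                }
            } finally {
                cursor.close();
            }
        }

        private static void process(DatabaseEntry key, DatabaseEntry data) {
            // placeholder
        }
    }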
    Though, there may be another option worth considering, if you are willing to handle more work in your source code: have a database that acts as a secondary, in which you update the records manually to mirror the modifications performed in the primary db, without ever associating it with the primary database. This "secondary" would have <master> as key, and <std_id>, <name> (and other fields if you want to) as data. Note that for every modification you perform on the std_info database you'll have to perform the corresponding modification on this database as well. You'll then be able to do the DBC->c_get() calls on this database with the DB_MULTIPLE flag specified.
    > I have another question: is there any way to fetch information by record number? For example, fetch the information located at the third record of my database.
    I guess you're referring to logical record numbers, like a relational database's ROWID. Since your databases are organized as BTrees (without the DB_RECNUM flag specified) this is not possible directly. You could do it with a cursor, iterating through the records and stopping on the record whose number is the one you want (using an incrementing counter to keep track of the position). If your database had been configured to operate with logical record numbers (BTree with DB_RECNUM, Queue or Recno) this would have been possible directly:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/logrec.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/renumber.html
    Regards,
    Andrei

  • Approach to parse large number of XML files into the relational table.

    We are exploring the option of XML DB for processing a large number of files arriving on the same day.
    The objective is to parse each XML file and store the data in multiple relational tables. Once it is in the relational tables we do not care about the XML file.
    The files cannot be stored on the file server and need to be stored in a table before parsing, due to security issues. A third-party system will send the files and store them in XML DB.
    File sizes can be between 1MB and 50MB, and high performance is very much expected, otherwise the solution will be tossed.
    Although we do not have an XSD, the XML files are well structured. We are on 11g Release 2.
    Based on my reading, this is my approach:
    1. CREATE TABLE XML_DATA
    (xml_col XMLTYPE)
    XMLTYPE xml_col STORE AS SECUREFILE BINARY XML;
    2. Third party will store the data in XML_DATA table.
    3. Create XMLINDEX on the unique XML element
    4. Create views on XMLTYPE
    CREATE OR REPLACE FORCE VIEW V_XML_DATA (
       Stype,
       Mtype,
       MNAME,
       OIDT
    )
    AS
       SELECT x.Stype,
              x.Mtype,
              x.MNAME,
              x.OIDT
       FROM   xml_data t,
              XMLTABLE (
                 '/SectionMain'
                 PASSING t.xml_col
                 COLUMNS Stype VARCHAR2 (30) PATH 'Stype',
                         Mtype VARCHAR2 (3) PATH 'Mtype',
                         MNAME VARCHAR2 (30) PATH 'MNAME',
                         OIDT VARCHAR2 (30) PATH 'OID') x;
    5. Bulk load the parsed data into the staging tables based on the indexed column.
    Please comment on the above approach and on any suggestions that could improve performance.
    Thanks
    AnuragT

    Thanks for your response. It gives me more confidence.
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    Example XML
    <SectionMain>
    <SectionState>Closed</SectionState>
    <FunctionalState>CP FINISHED</FunctionalState>
    <CreatedTime>2012-08</CreatedTime>
    <Number>106</Number>
    <SectionType>Reel</SectionType>
    <MachineType>CP</MachineType>
    <MachineName>CP_225</MachineName>
    <OID>99dd48cf-fd1b-46cf-9983-0026c04963d2</OID>
    </SectionMain>
    <SectionEvent>
    <SectionOID>99dd48cf-2</SectionOID>
    <EventName>CP.CP_225.Shredder</EventName>
    <OID>b3dd48cf-532d-4126-92d2</OID>
    </SectionEvent>
    <SectionAddData>
    <SectionOID>99dd48cf2</SectionOID>
    <AttributeName>ReelVersion</AttributeName>
    <AttributeValue>4</AttributeValue>
    <OID>b3dd48cf</OID>
    </SectionAddData>
    <SectionAddData>
    <SectionOID>99dd48cf-fd1b-46cf-9983</SectionOID>
    <AttributeName>ReelNr</AttributeName>
    <AttributeValue>38</AttributeValue>
    <OID>b3dd48cf</OID>
    </SectionAddData>
    <BNCounter>
    <SectionID>99dd48cf-fd1b-46cf-9983-0026c04963d2</SectionID>
    <Run>CPFirstRun</Run>
    <SortingClass>84</SortingClass>
    <OutputStacker>D2</OutputStacker>
    <BNCounter>54605</BNCounter>
    </BNCounter>
    I was not aware of virtual columns, but it looks like we can use one and avoid creating views, by inserting directly into
    the staging table using the virtual column.
    Suppose OID is the unique identifier of each XML file and I create a virtual column:
    CREATE TABLE po_Virtual OF XMLTYPE
    XMLTYPE STORE AS BINARY XML
    VIRTUAL COLUMNS
    (OID_1 AS (XMLCAST(XMLQUERY('/SectionMain/OID'
    PASSING OBJECT_VALUE RETURNING CONTENT)
    AS VARCHAR2(30))));
    1. My question is: how do I then write this query WITHOUT USING the column XML_COL?
    SELECT x."SECTIONTYPE",
    x."MACHINETYPE",
    x."MACHINENAME",
    x."OIDT"
    FROM po_Virtual t,
    XMLTABLE (
    '/SectionMain'
    PASSING t.xml_col                          <--WHAT WILL PASSING HERE SINCE NO XML_COL
    COLUMNS SectionType VARCHAR2 (30) PATH 'SectionType',
    MachineType VARCHAR2 (3) PATH 'MachineType',
    MachineName VARCHAR2 (30) PATH 'MachineName',
    OIDT VARCHAR2 (30) PATH 'OID') x;
    2. Instead of creating the view, can I do:
    insert into STAGING_table_yyy (col1, col2, col3, col4)
    SELECT x."SECTIONTYPE",
    x."MACHINETYPE",
    x."MACHINENAME",
    x."OIDT"
    FROM xml_data t,
    XMLTABLE (
    '/SectionMain'
    PASSING t.xml_col                         <--WHAT WILL PASSING HERE SINCE NO XML_COL
    COLUMNS SectionType VARCHAR2 (30) PATH 'SectionType',
    MachineType VARCHAR2 (3) PATH 'MachineType',
    MachineName VARCHAR2 (30) PATH 'MachineName',
    OIDT VARCHAR2 (30) PATH 'OID') x
    where oid_1 = '99dd48cf-fd1b-46cf-9983';<--VIRTUAL COLUMN
    insert into STAGING_table_yyy (col1, col2, col3)
    SELECT x.SectionOID,
    x.EventName,
    x.OID
    FROM xml_data t,
    XMLTABLE (
    '/SectionMain'
    PASSING t.xml_col                         <--WHAT WILL PASSING HERE SINCE NO XML_COL
    COLUMNS SectionOID VARCHAR2 (30) PATH 'SectionOID',
    EventName VARCHAR2 (30) PATH 'EventName',
    OID VARCHAR2 (30) PATH 'OID'
    ) x
    where oid_1 = '99dd48cf-fd1b-46cf-9983';<--VIRTUAL COLUMN
    The same insert would be done for the other tables using the OID_1 virtual column.
    3. Finally, once done, how can I delete the XML document from the XML table?
    If I am using the virtual column then I believe it will be easy:
    DELETE FROM po_Virtual WHERE oid_1 = '99dd48cf-fd1b-46cf-9983';
    But if we cannot use the virtual column, how can we delete the data?
    Thanks in advance
    AnuragT

  • Oracle Error 01034 After attempting to delete a large number of rows

    I sent a command to delete a large number of rows from a table in an Oracle database (Oracle 10g / Solaris). The database files are located on the /dbo partition. Before the command, disk space utilization was at 84%; now it is at 100%.
    SQL Command I ran:
    delete from oss_cell_main where time < '30 jul 2009'
    If I try to connect to the database now I get the following error:
    ORA-01034: ORACLE not available
    df -h returns the following:
    Filesystem size used avail capacity Mounted on
    /dev/md/dsk/d6 4.9G 5.0M 4.9G 1% /db_arch
    /dev/md/dsk/d7 20G 11G 8.1G 59% /db_dump
    /dev/md/dsk/d8 42G 42G 0K 100% /dbo
    I tried to get the space back by deleting all the data in the table oss_cell_main:
    drop table oss_cell_main purge
    But there was no change in the df output.
    I have tried solving it myself but could not find sufficiently directed information. Even pointing me to the right documentation would be highly appreciated. I have already looked at the following:
    du -h :
    8K ./lost+found
    1008M ./system/69333
    1008M ./system
    10G ./rollback/69333
    10G ./rollback
    27G ./data/69333
    27G ./data
    1K ./inx/69333
    2K ./inx
    3.8G ./tmp/69333
    3.8G ./tmp
    150M ./redo/69333
    150M ./redo
    42G .
    I think it's the rollback folder that has increased in size immensely.
    SQL> show parameter undo
    NAME TYPE VALUE
    undo_management string AUTO
    undo_retention integer 10800
    undo_tablespace string UNDOTBS1
    select * from dba_tablespaces where tablespace_name = 'UNDOTBS1'
    TABLESPACE_NAME            UNDOTBS1
    BLOCK_SIZE                 8192
    INITIAL_EXTENT             65536
    NEXT_EXTENT
    MIN_EXTENTS                1
    MAX_EXTENTS                2147483645
    PCT_INCREASE
    MIN_EXTLEN                 65536
    STATUS                     ONLINE
    CONTENTS                   UNDO
    LOGGING                    LOGGING
    FORCE_LOGGING              NO
    EXTENT_MANAGEMENT          LOCAL
    ALLOCATION_TYPE            SYSTEM
    PLUGGED_IN                 NO
    SEGMENT_SPACE_MANAGEMENT   MANUAL
    DEF_TAB_COMPRESSION        DISABLED
    RETENTION                  NOGUARANTEE
    BIGFILE                    NO
    Note: I can reconnect to the database for short periods of time by restarting it. After some restarts it does connect, but only for a few minutes, not long enough to run exp.

    Check the alert log for errors.
    Select file_name, bytes from dba_data_files order by bytes;
    Try to shrink some datafiles to get space back.

  • DBA Reports large number of inactive sessions with 11.1.1.1

    All,
    We have installed System 11.1.1.1 on some 32 bit windows test machines running Windows Server 2003. Everything seems to be working fine, but recently the DBA is reporting that there are a large number of inactive sessions throwing alarms that we are reaching our Max Allowed Process on the Oracle Database server. We are running Oracle 10.2.0.4 on AIX.
    We also have some System 9.3.1 Development servers that point at separate schemas in this environment and we don't see the same high number of inactive connections?
    Most of the inactive connections are coming from Shared Services and Workspace. Anyone else see this or have any ideas?
    Thanks for any responses.
    Keith
    Just a quick update. Originally I said this was only happening with 11.1.1.1, but we see the same high number of inactive sessions in 9.3. Anyone else seeing a large number of inactive sessions? They show up in Oracle as JDBC_Connect_Client. Do Shared Services, Planning, Workspace, etc. use persistent connections, or do they just abandon sessions when the Windows service associated with an application is shut down? Any information or thoughts are appreciated.
    Edited by: Keith A on Oct 6, 2009 9:06 AM

    Hi,
    Not the answer you are looking for, but have you logged it with Oracle? You might not get many answers to this question on here.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Passing a large number of column values to an Oracle insert procedure

    I am quite new to the ODP space, so can someone please tell me the best and most efficient way to pass a large number of column values to an Oracle procedure from my C# program? Passing a small number of values as parameters seems OK, but when there are many, this seems inelegant.

    > Passing a small number of values as parameters seems OK but when there are many, this seems inelegant.
    Is it possible that your table with a staggering number of columns, or a method that collapses without so many inputs, is ultimately what is inelegant?
    I once did a database conversion from VAX RMS system with a "table" with 11,000 columns to a normalized schema in an Oracle database. That was inelegant.
    Michael O
    http://blog.crisatunity.com

  • Large number of FNDSM and FNDLIBR processes

    hi,
    description of my system
    Oracle EBS 11.5.10 + oracle 9.2.0.5 +HP UX 11.11
    problem: there are a large number of FNDSM, FNDLIBR and sh processes, around 300 during peak load, but even at no load these processes don't come down, though Oracle processes
    come down from 250 to 80. These apps processes just don't get killed automatically.
    Can I kill these processes manually?
    One more thing: even after stopping the applications with adstpall.sh, these processes don't get killed. Is that normal? So I just dismount the database in order to kill these processes.
    And under what circumstances should I run cmclean?

    Hi,
    > problem: there are a large number of FNDSM, FNDLIBR and sh processes, around 300 during peak load, but even at no load these processes don't come down, though Oracle processes come down
    This means there are lots of zombie processes running, and all of these need to be killed.
    Shut down your application and database and bounce the server, as there are too many zombie processes. I once faced an issue in which, because of these zombie processes, CPU utilization stayed at 100% continuously.
    Once you restart the server, start the database and listener, run cmclean, and then start the application services.
    > one more thing, even after stopping applications with adstpall.sh, these processes don't get killed, is it normal?? so I just dismount the database so as to kill these processes
    No, it's not normal and should not be neglected. I would also advise you to run the [Oracle Application Object Library Concurrent Manager Setup Test|https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=200360.1]
    > and under what circumstances should I run cmclean?
    [CMCLEAN.SQL - Non Destructive Script to Clean Concurrent Manager Tables|https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=134007.1]
    You can run cmclean if you find that, after starting the applications, the managers are not coming up or the actual processes are not equal to the target processes.
    Thanks,
    Anchorage :)

  • How large can a database be?

    Hi,
    just out of curiosity, something that came to mind . . .
    10 of the 10 largest databases in the world use Oracle :)
    How large are these databases?

    Exactly. My largest db is 5TB.
    The actual limitations are o/s and h/w based - if the o/s, for example, only supports 128 (or whatever) file devices (via HBA to a SAN), then you are limited to the max device size that the SAN can support.
    From an Oracle and ASM perspective, it can use whatever file systems and devices you assign to it.
    Again, there are o/s file system limits. Cooked file systems have a max size. There is a limit on the number of file handles a single o/s process can allocate. Oracle is subject to these - but Oracle itself does not really impose any limitation in this regard.

  • Large number of concurrent sessions

    What optimizations are used to provide a large number of concurrent sessions?

    Generally:
    1) Design so that clustering is easy - e.g. cache only read-only data, and cache it aggressively
    2) Keep replication requirements down - e.g. keep HTTP sessions small and turn off replication on stateful session beans
    3) Always load test with db shared = true so that you don't get a nasty surprise when clustering
    4) Don't hit the database more than necessary - generally the db scales the poorest
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    Clustering Weblogic? You're either using Coherence, or you should be!
    Download a Tangosol Coherence eval today at http://www.tangosol.com/
    "Priya Shinde" <[email protected]> wrote in message
    > What optimizations are used to provide a large number of concurrent sessions?

  • Large number of sequences on Oracle 8i

    One possible solution to an issue I am facing is to create a very large number (~20,000) of sequences in the database. I was wondering if anybody has any experience with this - whether it is a good idea or I should find another solution.
    Thanks.

    Why not use one (or certainly fewer than 20,000) sequence(s) and feed all your needs from it (them)? Do your tables absolutely require sequential numbers, or just unique ones?
    I had 6 applications a few years ago sharing the same database; about 80% of the tables in each application used sequences for primary key values, and I fed each system off of one sequence.
    All I was after was a unique id, so this worked fine. Besides, in the normal course of managing even an OLTP system, you're bound to have records deleted, so there will be "holes" in the numbering anyway.

  • Rman backup failure, and is generating a large number of files.

    I would appreciate some pointers on this if possible, as I'm a bit of an RMAN novice.
    Our RMAN backup log indicated a failure, and in the directory where it puts its files there appeared a large number of files dated the 18th, which was the date of the failure. Previous days' backups generated 5 files of moderate size; when it failed it generated between 30 and 40 GB of files (it looks like one for each database file).
    The full backup runs early Monday morning, and the rest are incremental.
    I have placed the RMAN log, the script and the full directory file listing here: http://www.tinshed.plus.com/rman/
    Thanks in advance - George
    -rw-r----- 1 oracle dba 1073750016 Jan 18 00:03 database_f734055071_s244_s1
    -rw-r----- 1 oracle dba 1073750016 Jan 18 00:03 database_f734055096_s245_s1
    -rw-r----- 1 oracle dba 1073750016 Jan 18 00:03 database_f734573008_s281_s1
    -rw-r----- 1 oracle dba 1073750016 Jan 18 00:03 database_f734055045_s243_s1
    -rw-r----- 1 oracle dba 524296192 Jan 18 00:03 database_f734055121_s246_s1
    -rw-r----- 1 oracle dba 1073750016 Jan 18 00:03 database_f734055020_s242_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054454_s233_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054519_s234_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054595_s235_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054660_s236_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054725_s237_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054790_s238_s1
    -rw-r----- 1 oracle dba 209723392 Jan 18 00:02 database_f734055136_s247_s1
    -rw-r----- 1 oracle dba 73408512 Jan 18 00:02 database_f734055143_s248_s1
    -rw-r----- 1 oracle dba 67117056 Jan 18 00:02 database_f734055146_s249_s1
    -rw-r----- 1 oracle dba 4194312192 Jan 18 00:02 database_f734054855_s239_s1
    -rw-r----- 1 oracle dba 2147491840 Jan 18 00:02 database_f734054975_s241_s1
    -rw-r----- 1 oracle dba 3221233664 Jan 18 00:02 database_f734054920_s240_s1
    drwxr-xr-x 2 oracle dba 4096 Jan 18 00:00 logs
    -rw-r----- 1 oracle dba 18710528 Jan 17 00:15 controlfile_c-1911789030-20110117-00
    -rw-r----- 1 oracle dba 1343488 Jan 17 00:15 database_f740621746_s624_s1
    -rw-r----- 1 oracle dba 2958848 Jan 17 00:15 database_f740621745_s623_s1
    -rw-r----- 1 oracle dba 6415990784 Jan 17 00:15 database_f740620829_s622_s1
    -rw-r----- 1 oracle dba 172391424 Jan 17 00:00 database_f740620814_s621_s1

    george3 wrote:
    Ok, perhaps it's my understanding of RMAN that is at fault. From the logs:
    Starting recover at 18-JAN-11
    channel m1: starting incremental datafile backup set restore
    channel m1: specifying datafile copies to recover
    recovering datafile copy file number=00001
    name=/exlibris1/rmanbackup/database_f734055020_s242_s1
    recovering datafile copy file number=00002
    name=/exlibris1/rmanbackup/database_f734055045_s243_s1
    it seems to make backup copies of the datafiles every night, so the creation of these large files is normal?
    The results above indicate that you have a full (incremental level 0) backup (the datafile copies) and that an incremental (level 1) backup is then applied to recover those copies. So the incremental backup */exlibris1/rmanbackup/database_f734055045_s243_s1* was applied to the full (level 0) datafile copies, and the size should be normal.
    > Why is it making copies of the datafiles even on days of incrementals?
    Because after taking the level 1 backup it needs to be applied, so every day one incremental backup is applied to the datafile copies.

  • Huge memory usage while parsing a large number of files

    hi,
    I use the ORACLE8i XML parser class in my app. The problem is that
    when I parse a large number of XML files, memory usage is huge, more than 100MB. Even worse, sometimes I see the "virtual
    memory low" dialog. Could anybody give me some ideas about this?
    Any replies are welcome. Thanks a lot.

    Hi Mike. Yes I do have this enabled but this should only be requesting login to view the normal (open) applications. What I end up getting, after I login, is all the applications restarting. In the case of the synchronising files the database has to rebuild itself every time (160,000 files), and nothing has been happening while logged out. Using the Caffeine app I have still enabled the screensaver using a hot corner and had to log back in after 10 minutes but all the applications were still running, including the synchronisation. I am pretty sure my laptop has not gone into sleep mode, or it wouldn't have to reopen the apps, but it also has not shutdown as I get a login box as soon as I press any key. I can't understand the problem, and what makes it even stranger is that it is happening on two separate MacBooks at the same time. Thanks for the suggestion – really appreciated.
    OldGnome - thanks for the suggestion for the other app. I chose Caffeine as this has really good user reviews but the other one doesn't seem to have any, but I might still try it. Again thanks for your help.
