Database file size

Hello, we are facing a very strange thing: there are about 60,000 links in the database, but the directory with the environment contains 435 files of 10 megabytes each.
Why is that, and could it affect performance somehow?

In fact, my database closing operation looks like this:

        domainsQueueDB.close();
        supersedeQueueDB.close();
        queueDB.close();
        tasksDB.close();
        recordIdSequence.close();
        systemDB.close();
        if (cleanup) {
            env.removeDatabase(null, supersede_domain_db_name);
            env.removeDatabase(null, supersede_queue_db_name);
            env.removeDatabase(null, queue_db_name);
            env.removeDatabase(null, tasks_db_name);
            env.removeDatabase(null, system_db_name);
            env.cleanLog();
        }
        env.close();

But I'm facing large log files while the application is running. Do we need to configure the cleaner somehow? I don't believe 50,000 records could consume 3 GB, since each record is nothing more than a URL, several short strings (MIME type etc.) and several integers representing parent pages. Even allowing for a parent-ID list containing several thousand integers, I still don't think a single record can consume 64 kilobytes of data.
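
For what it's worth, in Berkeley DB Java Edition the cleaner only reclaims a .jdb log file once its live-data utilization falls below je.cleaner.minUtilization, and a single cleanLog() call at shutdown may not finish the job. Below is a minimal sketch of a more aggressive configuration and shutdown sequence; the 75% threshold is an illustrative assumption, not a recommendation:

        import com.sleepycat.je.CheckpointConfig;
        import com.sleepycat.je.Environment;
        import com.sleepycat.je.EnvironmentConfig;

        public class CleanerShutdown {

            // Illustrative tuning: JE's default minUtilization is 50;
            // raising it makes the cleaner reclaim .jdb files sooner.
            public static EnvironmentConfig tunedConfig() {
                EnvironmentConfig config = new EnvironmentConfig();
                config.setConfigParam("je.cleaner.minUtilization", "75");
                return config;
            }

            // Run the cleaner until it has nothing left to do, then force a
            // checkpoint so the cleaned files can actually be deleted on close.
            public static void closeWithCleanup(Environment env) {
                boolean anyCleaned = false;
                while (env.cleanLog() > 0) {
                    anyCleaned = true;
                }
                if (anyCleaned) {
                    CheckpointConfig force = new CheckpointConfig();
                    force.setForce(true);
                    env.checkpoint(force);
                }
                env.close();
            }
        }

The DbSpace utility shipped in je.jar reports per-file utilization, which is a quick way to confirm whether uncleaned log files account for the 3 GB.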

Similar Messages

  • About database file size

    Hi, I have created a database with a size of 1000 MB and a log file of 1000 MB by default.
    After the database was created, my data load failed because the db size had exceeded 1 GB; I tried to increase the size, but without success.
    Then I tried to create a new database of 10000 MB, with a reserve size of 10000 MB as well. The total db file size is still 1 GB and the log file is also 1 GB.
    May I know: if I want to add more space later, let's say 100 GB after 3 years, can I increase the file size even though my reserve size is only 10000 MB?
    Thanks.

    You just need to add files to the DBSpace:
    Central / Dbspaces / Your_DB_SPACE / Files / Right Click / New File
    This increases the size of your db.

  • Database File Size Management

    Hello,
    I have an application that stores large records (capped at 1M each in size, but on average around 0.5 M), and can at any given time have hundreds of thousands of such records. We're using a BTree structure, and BDB has thus far acquitted itself rather well.
    One serious issue we are facing, however, is that the size of the database keeps growing. My expectation was that the database file would grow only on an as-needed basis, but would not grow if records have been deleted. Our application is transactional and replicated. We do have the txnNoSync flag set to true, and checkpoint every 60 seconds (this number is tunable). We have setReverseSplitOff set to true. Could this be the problem? Our page size is set to the maximum possible size, i.e., 65536 bytes.
    Thanks in advance,
    Prashanth

    Hi Prashanth,
    No, it has nothing to do with turning reverse splits off, the page size or anything else.
    It's just that Btree (and Hash) databases are grow-only. Although you free up space by deleting records within a Btree database, that space is not returned to the filesystem; it is reused where possible. Here is more information on disk space considerations:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_misc/diskspace.html
    Also, to add to the information there, you could call DB->compact():
    http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/db_compact.html
    or use the db_dump and db_load utilities to do the compaction offline (that is, stopping all writes on the database). Note that if you use your own Btree comparison function you must modify the source code for the utilities so that they'll be aware of the order imposed.
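    For illustration, here is a minimal sketch of an online compaction pass using the Berkeley DB Java API (com.sleepycat.db); the exact compact() signature has varied between releases, so treat this as an assumption to verify against your version's javadoc:

        import com.sleepycat.db.CompactConfig;
        import com.sleepycat.db.Database;
        import com.sleepycat.db.DatabaseEntry;
        import com.sleepycat.db.DatabaseException;

        public class Compactor {
            // Compact the whole Btree and return freed pages to the filesystem.
            public static void compactAll(Database db) throws DatabaseException {
                CompactConfig config = new CompactConfig();
                config.setFreeSpace(true); // shrink the file, not just defragment it
                // Null start/stop keys mean "compact the entire database"; the
                // fourth entry receives the key where compaction stopped.
                db.compact(null, null, null, new DatabaseEntry(), config);
            }
        }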
    Let me know if you need more information on this.
    Regards,
    Andrei

  • Database file size

    I am using Berkeley DB 5.1.19 with replication manager.
    I am seeing big differences in the size of the db files between the master and the client. Is that expected, and if so, what is the reason? This also has an impact on the size of the backup.
    On the master:
    [root@sde1 sandvine]# du -sh replica_data/*
    16K replica_data/__db.001
    29M replica_data/__db.002
    11M replica_data/__db.003
    2.9M replica_data/__db.004
    25M replica_data/__db.005
    12K replica_data/__db.006
    2.3M replica_data/__db.rep.db
    1.1M replica_data/__db.rep.diag00
    1.1M replica_data/__db.rep.diag01
    4.0K replica_data/__db.rep.egen
    4.0K replica_data/__db.rep.gen
    8.0K replica_data/__db.reppg.db
    8.0K replica_data/__db.rep.system
    11M replica_data/log.0000000158
    7.2M replica_data/log.0000000159
    8.0K replica_data/persistency_name_mapping.tbl
    8.0K replica_data/QM_KPI_NumManagedTable1_20111117T015214.012632_backup.db
    8.0K replica_data/QM_KPI_NumOverQuotaTable2_20111117T015214.074648_backup.db
    8.0K replica_data/QM_KPI_NumUnderQuotaTable3_20111117T015214.138377_backup.db
    8.0K replica_data/QM_KPI_NumUnmanagedTable4_20111117T015214.200234_backup.db
    8.0K replica_data/QmLastIpAddressTable5_20111117T015214.258221_backup.db
    12K replica_data/QmPolicyConfiguration6_20111117T015214.316379_backup.db
    13M replica_data/QmSubIdNameTable7_20111117T015214.375543_backup.db
    41M replica_data/QmSubscriberQuota_Daily8_20111117T015214.432662_backup.db
    41M replica_data/QmSubscriberQuota_PC_or_Monthly9_20111117T015214.506866_backup.db
    41M replica_data/QmSubscriberQuota_Roaming10_20111117T015214.570525_backup.db
    15M replica_data/QmSubscriberQuotaState12_20111117T015214.717594_backup.db
    41M replica_data/QmSubscriberQuota_Weekly11_20111117T015214.634982_backup.db
    On the client:
    [root@sde2 sandvine]# du -sh replica_data/*
    16K replica_data/__db.001
    146M replica_data/__db.002
    133M replica_data/__db.003
    3.3M replica_data/__db.004
    33M replica_data/__db.005
    12K replica_data/__db.006
    8.0K replica_data/__db.rep.db
    1.1M replica_data/__db.rep.diag00
    1.1M replica_data/__db.rep.diag01
    4.0K replica_data/__db.rep.egen
    4.0K replica_data/__db.rep.gen
    8.0K replica_data/__db.reppg.db
    8.0K replica_data/__db.rep.system
    7.2M replica_data/log.0000000159
    8.0K replica_data/persistency_name_mapping.tbl
    8.0K replica_data/QM_KPI_NumManagedTable1_20111117T015214.012632_backup.db
    8.0K replica_data/QM_KPI_NumOverQuotaTable2_20111117T015214.074648_backup.db
    8.0K replica_data/QM_KPI_NumUnderQuotaTable3_20111117T015214.138377_backup.db
    8.0K replica_data/QM_KPI_NumUnmanagedTable4_20111117T015214.200234_backup.db
    8.0K replica_data/QmLastIpAddressTable5_20111117T015214.258221_backup.db
    12K replica_data/QmPolicyConfiguration6_20111117T015214.316379_backup.db
    13M replica_data/QmSubIdNameTable7_20111117T015214.375543_backup.db
    41M replica_data/QmSubscriberQuota_Daily8_20111117T015214.432662_backup.db
    41M replica_data/QmSubscriberQuota_PC_or_Monthly9_20111117T015214.506866_backup.db
    41M replica_data/QmSubscriberQuota_Roaming10_20111117T015214.570525_backup.db
    15M replica_data/QmSubscriberQuotaState12_20111117T015214.717594_backup.db
    41M replica_data/QmSubscriberQuota_Weekly11_20111117T015214.634982_backup.db
    For example, the following two files are small on the master:
    29M replica_data/__db.002
    11M replica_data/__db.003
    while on the client the same files are:
    146M replica_data/__db.002
    133M replica_data/__db.003
    Thx in advance.

    The __db.00* files are not replicated database files. They are internal Berkeley DB files that back our shared memory regions and they are specific to each separate site's database. It is expected that they can be different sizes reflecting the different usage patterns and potentially different configuration options on the master and the client database. To read more about these files, please refer to the Programmer's Reference section titled "Shared memory regions".
    I am assuming that your replicated databases are the QM* and Qm* files. These look like they are the same size on the master and client, as we would expect.
    Paula Bingham
    Oracle

  • Syslog database file size is growing

    Hi,
    I have a CiscoWorks server (LMS version 2.6) which had an issue with the Syslog Severity Level Summary report: it hung whenever we ran a job, and the report job always failed. I also observed that the SyslogFirst.db, SyslogSecond.db and SyslogThird.db database files had grown to 90 GB each, due to which RME was very slow.
    I did an RME database reinitialization, and after that the Syslog Severity Level Summary report started working properly. The file sizes of SyslogFirst.db, SyslogSecond.db and SyslogThird.db were also reduced to almost 10 MB. But today I see the SyslogThird.db file has grown to 4 GB again.
    I need help finding what is causing these files (SyslogThird.db) to grow so fast. Is there any option in CiscoWorks I need to set to stop these files from growing so fast? Please help me with this issue.
    Thanks & Regards,
    Lalit

    Hi Joseph,
    Thanks for your reply. SyslogThird.db is not growing now, but my Severity-Wise Summary report has stopped again. If I check the status in the RME jobs, it says the Severity-Wise Summary report failed. I checked the SyslogThird.db file size and found it was 20 GB; after the RME reinitialization it was only 1 GB and the report was being generated. Is the report failing because of the 20 GB file size?
    Please share your valuable inputs. Thanks once again.
    Thanks & Regards,
    Lalit

  • Database file sizes

    Hi All,
    Are there any specific guidelines on the size of data files in MSSQL?
    The best-practices documents say you can maintain a number of data files equal to the number of processors. When installing the SAP system, it creates 3 data files by default. In our production systems the sizes of these 3 files are currently very high. So is it a good option to restrict the growth of these files, add another 3 new data files, and allow those files to grow?
    regards,
    dev

    Hi dev,
    there's a whitepaper published on Juergen Thomas' Blog ([http://blogs.msdn.com/b/saponsqlserver/archive/2009/06/24/new-sap-on-sql-server-2008-whitepaper-released.aspx]) that states the following:
    - Small sized systems, where 4 data files should be fine. These systems usually run on dedicated database servers that have around 4 cores.
    - Medium sized systems, where at least 8 data files are required. These systems usually run on dedicated database servers that have between 8 and 16 CPU cores.
    - Large sized systems where a minimum of 16 data files are required. These are usually systems that run on hardware that has between 16 and 32 CPU cores.
    - Xtra Large sized systems. Upcoming hardware over the next years will certainly support up to 256 CPU cores. However, we don't necessarily expect a lot of customers deploying this kind of hardware for one dedicated database server servicing one database of an SAP application. For XL systems we recommend 32 to 64 data files.
    For more information check out the whitepaper (it currently returns a 404, which should be fixed soon).

  • Lite 10g DB File Size Limit

    Hello, everyone !
    I know that Oracle Lite 5.x.x had a database file size limit of 4 MB per db file. There is a statement in the Oracle® Database Lite 10g Release Notes that the db file size limit is 4 GB, but that it is "... affected by the operating system. Maximum file size allowed by the operating system". Our company uses Oracle Lite on the Windows XP operating system. XP allows file sizes of more than 4 GB. So the question is: can the 10g Lite db file size exceed the 4 GB limit?
    Regards,
    Sergey Malykhin

    I don't know how Oracle Lite behaves on PocketPC, because we use it on the Win32 platform. But under Windows, when the .odb file reaches the maximum available size, the Lite database driver reports an I/O error after the next write operation (sorry, I just don't remember the exact error message number).
    Sorry, I'm not sure what you mean by "configure the situation" in this case...

  • DB file size

    Hi All,
    I installed Oracle 10g and had around 500 tables with huge amounts of data. Recently I moved the data to another server and purged the tables. But the database file size is still more than 3 GB, even though there are now only 20 tables with 50 rows each. If I take a backup using the command-line tool, it is just 200 KB. Please let me know how to shrink the database file. I tried using Application Express, but it made no great difference.
    Thank you.
    Bhargava Sriram A.

    ALTER DATABASE DATAFILE '<file_name>' RESIZE <integer> K;
    Also check the Metalink article below if you have issues while resizing the datafile:
    Note 130866.1 - How to Resolve ORA-03297 When Resizing a Datafile by Finding the Table Highwatermark

  • Is there a size limit on the iPod for the song database file?

    I have been running into the same issue for the last 2 weeks: Once I exceed 110 GB on my iPod Classic 160 GB, iTunes is no longer able to update the database file on the iPod.
    When clicking (on the iPod) on Settings/About, the iPod displays the wrong number of songs. Also, the iPod is no longer able to play any songs.
    Is there a size limit for the database file on the iPod?
    I am making excessive use of the 'comments' field in every song that I load to the iPod. This increases the size of the database file.
    Is there a way that I can manually update the database file on the iPod?
    Thanks for your help!

    Did you experience some crashing of the iPod as well? Do you know how many separate items you had?

  • Cannot open database due to incorrect file size

    Hi,
    we have an Oracle 8.0.5 database and it crashed yesterday. Now we cannot open the database (we can only mount it read-only) due to an incorrect file size of usr1orcl.ora. Is there a simple way to open the database?

    I'm sorry, but it wasn't in archivelog mode.
    The ORA error codes are:
    ORA-01122: datafile name - failed verification check
    Cause: The information in the datafile is inconsistent with information from the control file. This could be for any of the following reasons:
    The control file is from a time earlier than the datafiles.
    The datafile size does not match the size specified in the control file.
    The datafile is corrupted.
    Action: Make certain that the datafiles and control files are the correct files for this database, then retry the operation.
    ORA-01110: datafile name: str
    Cause: This message reports the filename involved with other messages.
    Action: See the associated messages for a description of the problem.
    ORA-01200: actual file size of num is smaller than correct size of num blocks
    Cause: The size of the file, as returned by the operating system, is smaller than the size of the file as indicated in the file header and the control file. Somehow the file has been truncated.
    Action: Restore a good copy of the datafile from a backup and perform recovery as needed.

  • Archive log file size is varying in RAC 10g database.

    Environment: Oracle 10g RAC, a 9-node cluster database, with 3 log groups for each node and a size of 500 MB for each redo log file.
    The question is why the archive log file size varies. I know that whenever there is a log file switch the redo log is archived, so since our redo log file size is 500 MB,
    shouldn't the archive log file size be the same 500 MB?
    Instead we are seeing archive log files varying from 20 MB to 500 MB, which means the redo log files are not using the entire 500 MB of space. What would be causing this, and how can we resolve it?
    Some init parameter values.(just for information)
    fast_start_mttr_target ----- 400
    log_checkpoint_timeout ----- 0
    log_checkpoint_interval ----- 0
    fast_start_io_target ----- 0

    There was a similar discussion a few days back,
    "log file switch before it filled up".
    The poster later claimed it was because of their log_buffer size. It still remains a mystery to me.

  • Database file sometimes gets to zero size and zero file flags

    I have a problem where, on my system, a single database file sometimes gets into a state where it has zero size and zero file flags (as if someone ran chmod 0 on the file).
    My database runs 24/7 and there are multiple agents running at the same time. My database files are backed up and removed from time to time to protect the stored data. So I guess this error could come up when two agents back up or recover the database file at the same time. Though it is hard to confirm that this is the cause, I'd like to ask whether anyone has stumbled on the same problem, where one database file ends up in a state of 0 flags and 0 size.

    Hello, does anyone have any tip about this issue?
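
    If this happens to be a Berkeley DB Java Edition environment (an assumption; the post does not say), one way to rule out a copy/cleaner race is to coordinate backups through the library's DbBackup helper instead of copying files directly. A minimal sketch, with the file-copy routine left as a stub:

        import com.sleepycat.je.Environment;
        import com.sleepycat.je.util.DbBackup;

        public class SafeBackup {
            // Pin the set of backup-eligible log files so they cannot be
            // deleted or mutated while the copy is in progress.
            public static void backup(Environment env) throws Exception {
                DbBackup backup = new DbBackup(env);
                backup.startBackup();
                try {
                    for (String name : backup.getLogFilesForBackup()) {
                        copyFile(env.getHome(), name); // hypothetical helper
                    }
                } finally {
                    backup.endBackup();
                }
            }

            private static void copyFile(java.io.File home, String name) {
                // Copy logic omitted; any whole-file copy works here because
                // files in the backup set are not modified while pinned.
            }
        }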

  • Database file: forms disappeared but size of file remained

    After saving a StarOffice database file, the computer shut down due to a power management setting. On opening the file again, the forms contained in it are invisible. How can I make them visible again?

    Yes, psadmin worked. My command was:
    psadmin register-portlet -u amadmin -f password.txt -p portal1 -g myapp.ear
    I think that bypasses the file size check, but I'm not completely sure. I found several files called web.xml and changed them all, but it did not make any difference when deploying the portlet in the console.
    So I think this is the right answer.
    Thanks

  • How do I find every database's log size and mdf file size?

    Hi experts,
    Could you share a query to find every database's log file size, mdf file size (including ndf files), and total db size, in both MB and GB?
    I have a task to collect the db sizes for around 300 dbs.
    Expected output columns: DB_Name, Log_file_size, mdf_file_size, Total_db_size (in MB and GB).
    Thanks,
    Vijay

    Use this, Vijay:

        set nocount on
        Declare @Counter int
        Declare @Sql nvarchar(1000)
        Declare @DB varchar(100)
        Declare @Status varchar(25)
        Declare @CaptureDate datetime
        Set @Status = ''
        Set @Counter = 1
        Set @CaptureDate = getdate()

        -- One row per database file, with sizes broken down in MB
        Create Table #Size
        (
            SizeId int identity,
            Name varchar(100),
            Size int,
            FileName varchar(1000),
            FileSizeMB numeric(14,4),
            UsedSpaceMB numeric(14,4),
            UnusedSpaceMB numeric(14,4)
        )

        Create Table #DB
        (
            Dbid int identity,
            Name varchar(100)
        )

        Create Table #Status
        (
            status sql_variant
        )

        Insert Into #DB
        Select Name
        From Sys.Databases

        While @Counter <= (Select Max(Dbid) From #DB)
        Begin
            Set @DB = (Select Name From #DB Where Dbid = @Counter)
            Set @Sql = 'SELECT DATABASEPROPERTYEX(''' + @DB + ''', ''Status'')'
            Insert Into #Status
            Exec (@Sql)
            Set @Status = (Select convert(varchar(25), status) From #Status)
            If @Status = 'ONLINE'
            Begin
                -- sysfiles reports sizes in 8 KB pages, hence the division by 128
                Set @Sql =
                'Use [' + @DB + ']
                Insert Into #Size
                Select ''' + @DB + ''', size, FileName,
                convert(numeric(10,2), round(size/128., 2)),
                convert(numeric(10,2), round(fileproperty(name, ''SpaceUsed'')/128., 2)),
                convert(numeric(10,2), round((size - fileproperty(name, ''SpaceUsed''))/128., 2))
                From sysfiles'
                Exec (@Sql)
            End
            Else
            Begin
                -- Offline databases get a row recording their status instead
                Set @Sql =
                'Insert Into #Size (Name, FileName)
                Select ''' + @DB + ''', ''' + @Status + ''''
                Exec (@Sql)
            End
            Delete From #Status
            Set @Counter = @Counter + 1
        End

        Select Name, Size, FileName, FileSizeMB, UsedSpaceMB, UnusedSpaceMB,
               right(rtrim(FileName), 3) as type, @CaptureDate as CaptureDate
        From #Size

        drop table #DB
        drop table #Status
        drop table #Size
        set nocount off
    Andre Porter

  • What is the impact on MDF file size if the database is changed to simple recovery mode?

    Hi,
    Currently I have a Database with 27GB MDF and 80GB LDF.
    If I change from Full recovery to Simple recovery mode, would the LDF information be transferred to the MDF file and make the MDF file size exceed 100 GB?

    Hi,
    May I know how to perform point-in-time recovery? Currently the only backup we perform, every 4 hours, is a server OS snapshot.
    Example:
    1. It is now 6pm and some erroneous transaction has occurred.
    2. We can restore the 3pm server OS snapshot of the mdf file. (We would lose 3 hours of data in this case.)
    3. Could we apply the ldf transaction log after the OS snapshot recovery and roll it forward till 5:50pm?
    You would be able to perform point-in-time recovery if you have:
    1. The database configured in full recovery mode.
    2. Transaction log backups being taken (along with a full backup, or perhaps differentials).
    In your scenario, applying the snapshot won't help you. What you have to do is have a full backup in place. If you had a full backup, say at 3 PM, you would restore it with NORECOVERY. After that, supposing you took transaction log backups every hour, you would restore the 1 PM, 2 PM and 3 PM log backups, all with NORECOVERY.
    I should have mentioned this first, but before restoring the full backup you can also take a tail-log backup; read this article:
    http://technet.microsoft.com/en-us/library/ms179314.aspx
    So, after the full backup and all log backups have been applied with NORECOVERY, apply the tail-log backup with RECOVERY, and it is quite possible that you will have no data loss, or in some scenarios a very small data loss (not 3 hours as you would have with the snapshot).
    Hope this helps.
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers.
