Database File Size Management

Hello,
I have an application that stores large records (capped at 1 MB each, but on average around 0.5 MB), and can at any given time have hundreds of thousands of such records. We're using a Btree structure, and BDB has thus far acquitted itself rather well.
One serious issue we are facing, however, is that the size of the database keeps growing. My expectation was that the database file would grow only on an as-needed basis, but would not grow if records have been deleted. Our application is transactional and replicated. We do have the txnNoSync flag set to true, and checkpoint every 60 seconds (this number is tunable). We have setReverseSplitOff set to true. Could this be the problem? Our page size is set to the maximum possible size, i.e., 65536 bytes.
Thanks in advance,
Prashanth

Hi Prashanth,
No, it has nothing to do with turning reverse splits off, the page size, or anything else.
It's just that Btree (and Hash) databases are grow-only. Although deleting records within a Btree database frees up space, that space is not returned to the filesystem; it is reused where possible. Here is more information on disk space considerations:
http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_misc/diskspace.html
Also, to add to the information there, you could call DB->compact():
http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/db_compact.html
or use the db_dump and db_load utilities to do the compaction offline (that is, with all writes to the database stopped). Note that if you use your own Btree comparison function, you must modify the source code for the utilities so that they are aware of the ordering it imposes.
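If you are coding against the Java API (the txnNoSync and setReverseSplitOff names you mention suggest you are), the equivalent call is Database.compact(). Here is a rough, untested sketch using the standard com.sleepycat.db classes; adapt it to your own setup:

    import com.sleepycat.db.CompactConfig;
    import com.sleepycat.db.CompactStats;
    import com.sleepycat.db.Database;

    // Compact an open Btree database in place. setFreeSpace(true) asks
    // BDB to also return emptied pages at the end of the file to the
    // filesystem, which is what actually shrinks the file on disk.
    static CompactStats compactWholeDatabase(Database db) throws Exception {
        CompactConfig config = new CompactConfig();
        config.setFreeSpace(true);
        // Passing null for txn/start/stop/end compacts the entire key range.
        return db.compact(null, null, null, null, config);
    }

Depending on your workload you may prefer to compact one key range at a time; see the db_compact documentation linked above for details.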
Let me know if you need more information on this.
Regards,
Andrei

Similar Messages

  • Database file size

    I am using Berkeley DB 5.1.19 with replication manager.
    I am seeing big differences between the sizes of the db files on the master and the client. Is that expected, and if so, what is the reason? This also has an impact on the size of the backup.
    On the master:
    [root@sde1 sandvine]# du -sh replica_data/*
    16K replica_data/__db.001
    29M replica_data/__db.002
    11M replica_data/__db.003
    2.9M replica_data/__db.004
    25M replica_data/__db.005
    12K replica_data/__db.006
    2.3M replica_data/__db.rep.db
    1.1M replica_data/__db.rep.diag00
    1.1M replica_data/__db.rep.diag01
    4.0K replica_data/__db.rep.egen
    4.0K replica_data/__db.rep.gen
    8.0K replica_data/__db.reppg.db
    8.0K replica_data/__db.rep.system
    11M replica_data/log.0000000158
    7.2M replica_data/log.0000000159
    8.0K replica_data/persistency_name_mapping.tbl
    8.0K replica_data/QM_KPI_NumManagedTable1_20111117T015214.012632_backup.db
    8.0K replica_data/QM_KPI_NumOverQuotaTable2_20111117T015214.074648_backup.db
    8.0K replica_data/QM_KPI_NumUnderQuotaTable3_20111117T015214.138377_backup.db
    8.0K replica_data/QM_KPI_NumUnmanagedTable4_20111117T015214.200234_backup.db
    8.0K replica_data/QmLastIpAddressTable5_20111117T015214.258221_backup.db
    12K replica_data/QmPolicyConfiguration6_20111117T015214.316379_backup.db
    13M replica_data/QmSubIdNameTable7_20111117T015214.375543_backup.db
    41M replica_data/QmSubscriberQuota_Daily8_20111117T015214.432662_backup.db
    41M replica_data/QmSubscriberQuota_PC_or_Monthly9_20111117T015214.506866_backup.db
    41M replica_data/QmSubscriberQuota_Roaming10_20111117T015214.570525_backup.db
    15M replica_data/QmSubscriberQuotaState12_20111117T015214.717594_backup.db
    41M replica_data/QmSubscriberQuota_Weekly11_20111117T015214.634982_backup.db
    On the client:
    [root@sde2 sandvine]# du -sh replica_data/*
    16K replica_data/__db.001
    146M replica_data/__db.002
    133M replica_data/__db.003
    3.3M replica_data/__db.004
    33M replica_data/__db.005
    12K replica_data/__db.006
    8.0K replica_data/__db.rep.db
    1.1M replica_data/__db.rep.diag00
    1.1M replica_data/__db.rep.diag01
    4.0K replica_data/__db.rep.egen
    4.0K replica_data/__db.rep.gen
    8.0K replica_data/__db.reppg.db
    8.0K replica_data/__db.rep.system
    7.2M replica_data/log.0000000159
    8.0K replica_data/persistency_name_mapping.tbl
    8.0K replica_data/QM_KPI_NumManagedTable1_20111117T015214.012632_backup.db
    8.0K replica_data/QM_KPI_NumOverQuotaTable2_20111117T015214.074648_backup.db
    8.0K replica_data/QM_KPI_NumUnderQuotaTable3_20111117T015214.138377_backup.db
    8.0K replica_data/QM_KPI_NumUnmanagedTable4_20111117T015214.200234_backup.db
    8.0K replica_data/QmLastIpAddressTable5_20111117T015214.258221_backup.db
    12K replica_data/QmPolicyConfiguration6_20111117T015214.316379_backup.db
    13M replica_data/QmSubIdNameTable7_20111117T015214.375543_backup.db
    41M replica_data/QmSubscriberQuota_Daily8_20111117T015214.432662_backup.db
    41M replica_data/QmSubscriberQuota_PC_or_Monthly9_20111117T015214.506866_backup.db
    41M replica_data/QmSubscriberQuota_Roaming10_20111117T015214.570525_backup.db
    15M replica_data/QmSubscriberQuotaState12_20111117T015214.717594_backup.db
    41M replica_data/QmSubscriberQuota_Weekly11_20111117T015214.634982_backup.db
    For example:
    The following two files are small on the master:
    29M replica_data/__db.002
    11M replica_data/__db.003
    whereas on the client, the same files are much larger:
    146M replica_data/__db.002
    133M replica_data/__db.003
    Thx in advance.

    The __db.00* files are not replicated database files. They are internal Berkeley DB files that back our shared memory regions and they are specific to each separate site's database. It is expected that they can be different sizes reflecting the different usage patterns and potentially different configuration options on the master and the client database. To read more about these files, please refer to the Programmer's Reference section titled "Shared memory regions".
    I am assuming that your replicated databases are the QM* and Qm* files. These look like they are the same size on the master and client, as we would expect.
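    To make the configuration point concrete: several of the __db.00* regions are sized by environment settings such as the cache size, so two sites configured differently will naturally show differently sized region files. For illustration only (the 64 MB value is arbitrary, and "replica_data" stands in for your environment home), using the Berkeley DB Java API:

        import java.io.File;
        import com.sleepycat.db.Environment;
        import com.sleepycat.db.EnvironmentConfig;

        EnvironmentConfig ec = new EnvironmentConfig();
        ec.setAllowCreate(true);
        ec.setInitializeCache(true);
        // The buffer cache is kept in one of the __db.00* region files, so
        // a site configured with a 64 MB cache and a site with a 512 MB
        // cache will show different region file sizes for the same data.
        ec.setCacheSize(64 * 1024 * 1024);
        Environment env = new Environment(new File("replica_data"), ec);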
    Paula Bingham
    Oracle

  • File Size Management

    Say I scan an image as a PDF, approx. 2.5 MB file size (36"x48" with lots of color at 150 ppi). I then open it in Elements 4.0, add some text lines, arrows or other elements, and try to save it, medium quality, with layers, as a PDF, and the file size gets very large very quickly, like 70 MB pretty easily, and 1 GB isn't unheard of. In addition to eating up our network server's memory, I sometimes get error messages like "not enough memory (RAM) to save" and can't update my work session.
    I know that if I flatten the image (combine layers) or save without layers it will be smaller, or that saving as a JPEG (losing layers) will work. However, I am looking for ways to retain the layer adjustment ability when I reopen the file.
    Is this a problem for other users? Would an upgrade to CS2 or CS3 be better at managing these issues? Any ideas would be great, thanks.

    > Say I scan an image as pdf, approx. 2.5 mb file size...
    Why PDF? I scanned a picture at 150 PPI, which resulted in a 2.5 MB file
    size (BMP). When saved as PDF, the size went to 4.0 MB. After adding some
    text and one adjustment layer, totaling three layers, I saved as PDF and
    PSD. The new file sizes were 9 and 5.2 MB respectively.
    Try going with PSD or TIFF format and see if that helps.
    Juergen

  • About database file size

    Hi, I have created a database with a size of 1000 MB and a log file of 1000 MB by default.
    After the database was created, my data load failed because the db size had exceeded 1 GB. I tried to increase the size, but without success.
    Then I tried to create a new database of 10000 MB, with a reserve size of 10000 MB as well. The total db file size is 1 GB and the log file is also 1 GB.
    May I know, if I want to add more space later, let's say 100 GB after 3 years, can I increase the file size? My reserve size is only 10000 MB.
    Thanks.

    You just need to add files to the dbspace:
    Central / Dbspaces / Your_DB_Space / Files / Right Click / New File
    This increases the size of your database.

  • Syslog database file size is growing

    Hi,
    I have a CiscoWorks server (LMS 2.6) which had an issue with the Syslog Severity Level Summary report: it would hang whenever we ran a job, and the report job always failed. I have also observed that the SyslogFirst.db, SyslogSecond.db, and SyslogThird.db database files have grown to 90 GB each, due to which RME was very slow.
    I did an RME database reinitialization, and after that the Syslog Severity Level Summary report started working properly. The file sizes of SyslogFirst.db, SyslogSecond.db, and SyslogThird.db were also reduced to almost 10 MB. But when I checked today, the SyslogThird.db file had grown to 4 GB again.
    I need help finding out what is causing these files (SyslogThird.db) to grow so fast. Is there any option in CiscoWorks that I need to look at to stop these files from growing so fast? Please help me with this issue.
    Thanks & Regds,
    Lalit

    Hi Joseph,
    Thanks for your reply. SyslogThird.db is not growing now, but my SeverityWise Summary report has stopped again. If I check the status in the RME jobs, it says the SeverityWise Summary report failed. I checked the SyslogThird.db file size and found it was 20 GB. Is the report failing because of the 20 GB file size?
    Please share your valuable inputs. Thanks once again. After the RME reinitialization the file was only 1 GB and the report was being generated.
    Thanks & Regds,
    Lalit

  • Database file sizes

    Hi All,
    Are there any specific guidelines on the size of data files in MSSQL?
    The best-practices documents say you can keep the number of data files equal to the number of processors. When installing the SAP system, it creates 3 data files by default. In our production systems, the size of these 3 files is currently very high. So is it a good option to restrict the growth of these files, add another 3 new data files, and allow those files to grow?
    regards,
    dev

    Hi dev,
    there's a whitepaper published on Juergen Thomas' blog (http://blogs.msdn.com/b/saponsqlserver/archive/2009/06/24/new-sap-on-sql-server-2008-whitepaper-released.aspx) that states the following:
    - Small sized systems, where 4 data files should be fine. These systems usually run on dedicated database servers that have around 4 cores.
    - Medium sized systems, where at least 8 data files are required. These systems usually run on dedicated database servers that have between 8 and 16 CPU cores.
    - Large sized systems where a minimum of 16 data files are required. These are usually systems that run on hardware that has between 16 and 32 CPU cores.
    - Xtra Large (XL) sized systems. Upcoming hardware over the next years certainly will support up to 256 CPU cores. However, we don't necessarily expect a lot of customers deploying this kind of hardware for one dedicated database server, servicing one database of an SAP application. For XL systems we recommend 32 to 64 data files.
    For more information check out the whitepaper (it currently returns a 404, which should be fixed soon).

  • Database files size

    Hello, we are facing a very strange thing: there are about 60,000 links in the database, but the directory with the environment contains 435 files of 10 megabytes each.
    Why is that, and could it affect performance somehow?

    In fact, my database closing operation looks like this:
        domainsQueueDB.close();
        supersedeQueueDB.close();
        queueDB.close();
        tasksDB.close();
        recordIdSequence.close();
        systemDB.close();
        if (cleanup) {
            env.removeDatabase(null, supersede_domain_db_name);
            env.removeDatabase(null, supersede_queue_db_name);
            env.removeDatabase(null, queue_db_name);
            env.removeDatabase(null, tasks_db_name);
            env.removeDatabase(null, system_db_name);
            env.cleanLog();
        }
        env.close();
    but I'm facing large log files while the application is working. Do we need to configure the cleaner somehow? I don't believe 50,000 records could consume 3 GB, since each record is nothing more than a URL, several short strings (mime type etc.) and several integers representing the parent pages. Even considering that the list of parent IDs could contain several thousand integers, I still don't think a record can consume 64 kilobytes of data.
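    For reference, this is the kind of cleaner configuration I have been experimenting with. It assumes Berkeley DB Java Edition's standard je.cleaner.minUtilization parameter; the value 60 and the "envHome" path are only placeholders, not recommendations:

        import java.io.File;
        import com.sleepycat.je.Environment;
        import com.sleepycat.je.EnvironmentConfig;

        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        // Clean a .jdb log file once less than 60% of its bytes are still
        // live data; a higher value makes the cleaner more aggressive.
        envConfig.setConfigParam(EnvironmentConfig.CLEANER_MIN_UTILIZATION, "60");
        Environment env = new Environment(new File("envHome"), envConfig);

        // cleanLog() returns the number of files cleaned in this pass, so
        // loop until no more files qualify; cleaned files are only removed
        // after the next checkpoint (env.close() performs one by default).
        while (env.cleanLog() > 0) {
            // keep cleaning
        }
        env.close();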

  • Huge File Sizes - Illustrator CS

    Lately my file sizes have been absolutely huge. All I was doing earlier was creating a simple poster using a JPEG picture (800 KB) and two logos (about 800 KB each), and the file size managed to reach 20 MB when saved as a PDF! What am I doing wrong here? Is it the pictures I'm using or the Illustrator file?
    Any ideas??

    A guy here had this problem with an Illy file that produced a huge PDF, and we haven't solved the problem yet. It was a simple A4 ad with very little text and one normal-sized linked EPS image at 300 dpi.
    The Illy file saved as a few KB. The linked picture was about 2 MB.
    We tried producing a PDF, both straight from Illy and also by distilling an EPS of the ad. The EPS was about 3 MB, but the resulting PDF (with the image at 300 dpi) was around 9 MB whichever way we tried to do things.
    This struck us all as abnormal, and we tried different methods on different machines, but no one could find what went wrong. We just sent the 9 MB PDF to press and everything was OK, but we still don't know why it was so big.
    Any ideas anyone?

  • Lite 10g DB File Size Limit

    Hello, everyone !
    I know that Oracle Lite 5.x.x had a database file size limit of 4 MB per db file. There is a statement in the Oracle® Database Lite 10g Release Notes that the db file size limit is 4 GB, but that it is "... affected by the operating system. Maximum file size allowed by the operating system". Our company uses Oracle Lite on the Windows XP operating system. XP allows file sizes of more than 4 GB. So the question is: can the 10g Lite db file size exceed the 4 GB limit?
    Regards,
    Sergey Malykhin

    I don't know how Oracle Lite behaves on PocketPC, because we use it on the Win32 platform. But under Windows, when the .odb file reaches the maximum available size, the Lite database driver reports an I/O error on the next write operation (sorry, I just don't remember the exact error message number).
    Sorry, I'm not sure what you mean by "configure the situation" in this case ...

  • DB file size

    Hi All,
    I installed Oracle 10g and had around 500 tables with huge amounts of data. Recently I moved the data to another server and purged the tables. But the database file size is still more than 3 GB, even though there are now only 20 tables with 50 rows each. If I take a backup using the command-line tool, the backup is just 200 KB. Please let me know how to shrink the database files. I tried using Application Express, but without much change.
    Thank you.
    Bhargava Sriram A.

    ALTER DATABASE DATAFILE '<file_name>' RESIZE <integer> K;
    Also check the Metalink article below if you have issues while resizing the datafile.
    Note 130866.1 How to Resolve ORA-03297 When Resizing a Datafile by Finding
    the Table Highwatermark

  • Database file: forms disapeared but size of file remained

    After saving a StarOffice database file, the computer shut down due to power management settings. On opening the file again, the forms contained in it are invisible. How can I make them visible again?

    Yes, psadmin worked. My command was:
    psadmin register-portlet -u amadmin -f password.txt -p portal1 -g myapp.ear
    I think that bypasses the file size limit, though I'm not completely sure. I found several files called web.xml and changed them all, but it did not make any difference when deploying the portlet on the console.
    So I think this is the right answer.
    Thanks

  • Enterprise manager log file sizes

    Hi,
    I was wondering if there is a way of managing the size of the emdb.nohup file in Enterprise Manager. Looking at the documentation, it looks as though you can control the emoms trace and log file sizes, but I can't find anything about the nohup log.
    Ideally I would like to be able to purge the log file.
    Thanks very much!

    Hi again,
    I found the emdb.nohup file in my log directory at the location you noted. It is apparently created when I stop and restart my dbconsole (emctl start dbconsole), and it is updated each time I make a connection to the database you are connecting to, and every time the page refreshes.
    I think the nohup suffix in the file name is probably intentional on Oracle's part, to indicate that this is a log file that is 'active'; in reality it is not a true nohup file in the sense of the unix nohup command (at least that is what I'm thinking).
    Sorry, I'm not knowledgeable enough on this to be sure of my theory, but that is basically what I theorize.
    According to man pages on nohup, it states nohup is "a utility immune to hangups".
    "nohup - run a command immune to hangups, with output to a non-tty"
    To answer your question, there is no problem purging or pruning this file.
    I just cleared it out by redirecting the output of date into the file, which empties it except for a new entry with the current date/timestamp.
    e.g., $ date > emdb.nohup
    Then, I reconnected to my OEM console for this database and it updated the file with new entries for the new connection. No problem....
    Wed Aug 6 09:46:53 EDT 2008
    08/08/06 09:47:07 ## oracle.sysman.db.adm.inst.SitemapController: event="doLoad"
    08/08/06 09:47:07 ## 1. newPage = /database/instance/sitemap/sitemap
    08/08/06 09:47:07 ## 2. newPage = /database/instance/sitemap/sitemap
    Ji Li

  • First time user questions (managing library, file size defaults, cropping,)

    I'm on my first Mac, which I am enjoying and am also using iPhoto '08 for the first time which is a great tool. It has really increased my efficiency in editing a lot of photos at one time. However, the ease of use makes a couple custom things I want to do difficult. I'd appreciate any feedback;
    1) I often want to get at my files for upload or transfer to another machine. When I access Photos with the Finder, I can only see "iPhoto Library" (or something like that), which does not show the individual files. Very annoying. I have found that I can browse to one of the menus and select "Open Library", and then I can see all the files.
    How can I make it default to this expanded setting? When I am uploading pictures via a web application, for instance, the file open tool does not usually give me the option to Open the library. By Default, I would like the library to always be expanded so I do not have to run iPhoto or select "open" to view the files.
    Basically, I just want easy manual control of my files in Finder and other applications.
    2) Where do I set the jpg size of an edited file? My camera will output 10MB files and after a simple straighten or crop, iPhoto will save them to 1MB files.
    Ignoring the debate on file size, is there a way to control the jpg compression so I can stay as close to what came out of the camera as possible?
    3) I crop all my photos to 5x7. If I do that once, it comes up the next time. However, once I straighten the photo and then choose crop, it always comes up with some other odd size by default (the largest rectangle that can fit in the new photo's size).
    While I know this may be useful for some people, it is time consuming when going through hundreds of photos to constantly have to choose "5x7" from the drop down list. Is there a way to make this the default?
    I'm sure I'll have some more questions, but thus far, I've been real happy with iPhoto.
    4) The next task will be sharing this Mac Pictures folder on my Wireless network so my XP PC can access it. I'm open to any tips on that one as well....
    Thanks!

    toddjb
    Welcome to the Apple Discussions.
    There are three ways (at least) to get files from the iPhoto Window.
    1. *Drag and Drop*: Drag a photo from the iPhoto Window to the desktop, there iPhoto will make a full-sized copy of the pic.
    2. *File -> Export*: Select the files in the iPhoto Window and go File -> Export. The dialogue will give you various options, including altering the format, naming the files and changing the size. Again, producing a copy.
    3. *Show File*: Right- (or Control-) Click on a pic and in the resulting dialogue choose 'Show File'. A Finder window will pop open with the file already selected.
    To upload to MySpace or any site that does not have an iPhoto Export Plug-in the recommended way is to Select the Pic in the iPhoto Window and go File -> Export and export the pic to the desktop, then upload from there. After the upload you can trash the pic on the desktop. It's only a copy and your original is safe in iPhoto.
    This is also true for emailing with Web-based services.
    If you use Apple's Mail, Entourage, AOL or Eudora you can email from within iPhoto.
    The format of the iPhoto library is this way because many users were inadvertently corrupting their library by browsing through it with other software or making changes in it themselves. If you're willing to risk database corruption, you can restore the older functionality simply by right clicking on the iPhoto Library and choosing 'Show Package Contents'. Then simply make an alias to the folders you require and put that alias on the desktop or where ever you want it. Be aware though, that this is a hack and not supported by Apple.
    Basically, I just want easy manual control of my files in Finder and other applications.
    What's above is what's on offer. Remember, iPhoto is NOT a file organiser, it's a photo organiser. If you want to organise files, then use a file organiser.
    Where do I set the jpg size of an edited file
    You don't. Though losing 9 MB off a 10 MB file is excessive. Where are you getting these file sizes?
    I crop all my photos to 5x7. If I do that once, it comes up the next time. However, once I straighten the photo and then choose crop, it always comes up with some other odd size by default (the largest rectangle that can fit in the new photo's size).
    Straightening also involves cropping. Best to straighten first, then crop.
    The next task will be sharing this Mac Pictures folder on my Wireless network so my XP PC can access it.
    If you use the hack detailed above be very careful, it's easy to corrupt the iPhoto Library, and making changes to the iPhoto Package File via the Finder or another app is the most popular way to go about it.
    Regards
    TD

  • Please can you tell me the default maximum file size for an attachment in Case Management v12 ?

    Hi,
    Please can you tell me the default maximum file size for an attachment in Case Management v12+? I am able to define a maximum attachment size but I am not able to see what the default is set to.
    Thank you
    Regards,
    Anthony

    Hi,
    The default max attachment size is 8MB.
    Regards.
    Mike

  • Is there a size limit on the iPod for the song database file ?

    I have been running into the same issue for the last 2 weeks: Once I exceed 110 GB on my iPod Classic 160 GB, iTunes is no longer able to update the database file on the iPod.
    When clicking (on the iPod) on Settings/About, the iPod displays the wrong number of songs. Also, the iPod is no longer able to play any songs.
    Is there a size limit for the database file on the iPod ?
    I am making extensive use of the 'comments' field in every song that I load onto the iPod. This increases the size of the database file.
    Is there a way, that I can manually update the database file on the iPod ?
    Thanks for your help !

    Did you experience some crashing of the iPod as well? Do you know how many separate items you had?
