Few interesting facts about database under UCM 10g

This data is obtained from one project. It should give the reader an idea of what a database under UCM can look like, which may help a DB administrator set one up.
The project uses very few custom metadata fields, and documents are stored in the database. At the time of the analysis, the database contained more than 40 million items, and the overall size of the database was over 19.2 TB.
The list of tables with a size over 1 GB:

Table Name       Records    Size      Meaning
FILESTORAGE      42 mil.    19.22 TB  stores documents
SCTACCESSLOG     368 mil.   113 GB    stores the audit trail from Content Tracker
IDCCOLL1         17 mil.    26 GB     stores full-text information
DOCMETA          42 mil.    10 GB     custom metadata fields of content items
REVISIONS        42 mil.    10 GB     metadata for each revision
DOCUMENTS        83 mil.    7 GB      assets within the Content Server
DOCUMENTHISTORY  41 mil.    3 GB      history of actions performed on a document
IDCCOLL2         1.8 mil.   3 GB      stores full-text information
Other core Content Server tables that could potentially become large:
ARCHIVEHISTORY - history of archived assets within the system (Archiver?)
WORKFLOWHISTORY - history of actions performed on an asset within a workflow
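A table list like the one above can be pulled from the data dictionary. A minimal sketch, assuming DBA privileges; the owner name is hypothetical, so adjust the filter to your installation:

```sql
-- List segments over 1 GB for one schema, largest first (run as a DBA).
SELECT segment_name,
       ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) AS size_gb
FROM   dba_segments
WHERE  owner = 'UCM_SCHEMA'   -- hypothetical owner, replace with yours
GROUP  BY segment_name
HAVING SUM(bytes) > POWER(1024, 3)
ORDER  BY size_gb DESC;
```

Note that record counts come from DBA_TABLES.NUM_ROWS (after gathering statistics), not from DBA_SEGMENTS.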

Hi Jiri,
Thanks for sharing.
That is a very small text index (as a proportion of the file storage). Is there a lot of 'image' type content stored?
Classically, I think the rule of thumb was to assume that the text index would be about 1/3 of the file storage. Obviously this will vary hugely depending on what you store.
I think it would be interesting to know the basic purpose for this system so that the great data you have provided can be put into a more useful context for other users.
I would guess this is a high volume imaging system where the majority of items will only have one revision.
Thanks
Tim

Similar Messages

  • Few basic questions about database administration

    Hello,
    I have a few basic questions about database administration.
    1. I switched one of my Oracle instances to archivelog mode. I just cannot locate the archive log files on my Windows system. The %ora_home%/ora92/database/archive directory is desperately empty...
    2.What is the tools01.dbf datafile used for?
    3.What is the undotbso1.dbf datafile used for?
    Thanks in advance,
    Julien.

    1. The archive log location needs to be specified in your init.ora file. By default, Oracle will place the archive files in either ORACLE_HOME/dbs or ORACLE_HOME/database.
    2. The tools01.dbf file belongs to the TOOLS tablespace, which should be set as the default tablespace for SYSTEM. Its primary purpose is to hold Oracle Forms and Reports database objects; however, it can also be used to hold other non-SYS database objects, such as the PERFSTAT (Statspack) schema or other third-party database schemas, e.g. Quest's SQLab.
    3. The undotbs01.dbf file belongs to the undo tablespace.
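To see where archived logs are actually going, it is easiest to ask the instance itself. A minimal sketch (the init.ora path below is a placeholder; the parameter name is as in Oracle 9i):

```sql
-- Run in SQL*Plus as SYSDBA:
ARCHIVE LOG LIST
SHOW PARAMETER log_archive_dest

-- To choose an explicit location, add a line like this to init.ora
-- (placeholder path) and restart the instance:
--   log_archive_dest = 'D:\oracle\ora92\database\archive'
```

ARCHIVE LOG LIST shows both the archiving mode and the destination currently in effect, which answers question 1 directly.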

  • Interesting facts about Nokia Map Loader 1.3 and 1...

    Hi there, I did some interesting tests yesterday after upgrading the Map Loader to version 1.3. First, I tried to connect my N95-1 (v.20.0.015) with a microSD card (4 GB) to my PC with Nokia Map Loader 1.3 running. The software asks to update to 1.2.1, then it tells me my map data is incompatible. Nothing new with this. Then I went to my wife's PC (with Map Loader v. 1.2.1 running) and found out that the older version also claims that the map data is incompatible! Well, then I put another microSD card (2 GB) into my N95, formatted it, and opened the Maps software. Now the Map Loader did work OK. I could download maps and voices. I then put my original microSD card (4 GB) back, and deleted (with WinXP's Windows Explorer) the folder "cities" and the hidden folder "private" completely. After this the Map Loader (1.2.1) worked all right with this 4 GB microSD card. I downloaded the maps and voices I needed and now the N95 works fine. I haven't, though, tried this with version 1.3 yet. But you will find version 1.2.1 at this Smart2go site: http://www.smart2go.com/en/help/maploader harrykaa

    OK, I have some more info on this topic now.
    Firstly, after having tried to open Map Loader 1.3 and getting the "incompatible" message, you do not have to format the SD card.
    Just delete the hidden "private" folder and the folder "cities". Then delete the qf file from the root directory of the SD card too.
    What is new is that I did this 3 days ago and then downloaded the maps with the older Map Loader (1.2.1).
    Then, yesterday, I deleted the folders and the qf file again and found out that the new Map Loader (1.3) works too.
    It even asks to update the maps before opening.
    I downloaded the maps of Europe with both versions, 1.2.1 and 1.3.
    It was interesting to find out that the Europe maps downloaded with 1.2.1 were 1,525 MB, but the maps downloaded with 1.3 were over 1,600 MB. So with the new version you get more recent maps.
    harrykaa

  • Database options under UCM

    With regard to this thread: Few interesting facts about database under UCM 10g - what Oracle database options can be effectively used under UCM? A comprehensive overview of the options can be found here: http://www.oracle.com/us/products/database/options/index.html
    Real Application Clusters* - this option can be used to increase the database performance and availability. It is fully transparent to applications.
    Partitioning* - this option can improve performance, enable hierarchical storage management (using cheaper hardware to store large amounts of data) and help with disaster recovery (backup/restore). I believe that if documents are stored in the database, this option is a must. Even if a project does not use HSM, partitioning of large tables such as FILESTORAGE will enable: a) faster backups - once a partition is "closed", it will not change, so future backups can work only with "open" partitions and unpartitioned data; b) faster restores - large tables can be only partially restored, e.g. the last few months, and the system can be running whilst the remaining data is being restored. Watch out for partitioning of the metadata tables, though (DOCMETA, REVISIONS, DOCUMENTS)! For one thing, there are no clear criteria for how these tables should be partitioned - and various checks and validations may actually require those tables to be fully restored before you can perform such basic operations as check-in.
    Advanced Security* and Database Vault* - these options may increase security when content is stored in the database (no one, not even administrators, can reach the content unless authorized). The only drawback is that even if content is stored in the database, in the initial phase it is stored in the filesystem (the vault) as well, and the minimum retention period there is 1 day.
    I will also mention two options that might look appetizing, but UCM probably does not benefit from them too much:
    Advanced Compression* - compresses data in the database. This, and the Hybrid Columnar Compression used in Exadata, can do real magic when working with structured data (just read a report from Turkcell, who compressed 600 TB down to 50 TB - a factor of 12). For unstructured data, such as PDF or JPEG, the effect might be very small, though. Still, if you have a chance, give it a try.
    Active Data Guard* - Data Guard is a technology for disaster recovery. The advantage of Active Data Guard is that it allows the secondary location to be used for read-only operations, rather than leaving it idle (standby); this means you might decrease the sizing of both locations. With UCM, also do not forget about Content Tracker (which might require a "write" operation even for otherwise read-only ones, such as DOC_INFO, GET_SEARCH_RESULTS, or retrieving a content item), but DB gurus know how to handle even that. Unfortunately, Active Data Guard cannot be used with UCM at the moment, because not all the data is stored in the database, and the secondary location might not be fully synchronized.
    In my opinion, other options are not so relevant for a UCM solution.
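The partitioning idea above could be sketched with range partitioning by date. The table and column names below are hypothetical - the real UCM schema differs, and FILESTORAGE would in practice be partitioned by criteria agreed with the DBA:

```sql
-- Sketch: range-partition a large content table by quarter (placeholder names).
CREATE TABLE filestorage_part (
  did       NUMBER NOT NULL,
  din_date  DATE   NOT NULL,
  content   BLOB
)
PARTITION BY RANGE (din_date) (
  PARTITION p2010q1 VALUES LESS THAN (TO_DATE('2010-04-01', 'YYYY-MM-DD')),
  PARTITION p2010q2 VALUES LESS THAN (TO_DATE('2010-07-01', 'YYYY-MM-DD')),
  PARTITION pmax    VALUES LESS THAN (MAXVALUE)
);
```

Once a quarter's partition is "closed", it no longer changes, so backups can skip it, and a restore can bring back the most recent partitions first.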

    Compression and Deduplication of SecureFiles LOBs, which is part of the Advanced Compression Option, can potentially deliver huge space savings, and performance benefits. If the content is primarily Office documents, or XML documents, or character-based (email?), then it will likely compress very well. Also, if the same file is stored multiple times, deduplication will cause the Oracle database to store only one copy, rather than storing the same document multiple times. There's more info on Advanced Compression here: http://www.oracle.com/us/products/database/options/advanced-compression/index.html
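As a sketch of the SecureFiles features mentioned above (this is 11g syntax, and the table and column names are placeholders, not the actual UCM schema):

```sql
-- Sketch: SecureFiles LOB storage with deduplication and compression.
CREATE TABLE doc_store (
  did     NUMBER,
  content BLOB
)
LOB (content) STORE AS SECUREFILE (
  DEDUPLICATE        -- identical documents are stored only once
  COMPRESS MEDIUM    -- transparent LOB compression
);
```

SecureFiles requires a tablespace with automatic segment space management, so check that prerequisite before trying this on an existing schema.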

  • Install IPM 11g on UCM 10g

    Dear All,
    I have already installed UCM 10g, and right now I have a requirement to install IPM 11g on that existing UCM 10g. Is it possible to install IPM 11g on UCM 10g? Which document do I need to look up for this case?
    Many Thanks
    Gumas

    See the installation manual: http://docs.oracle.com/cd/E21043_01/doc.1111/e14495/configipm.htm
    "You can use either Oracle UCM 11g or Oracle UCM 10g as the Oracle I/PM repository. For information about configuring Oracle UCM 11g, see Chapter 4, "Configuring Oracle Enterprise Content Management Suite," Chapter 5, "Configuring Oracle Universal Content Management," and Section 7.1.1.1, "Configuring Oracle Content Server 11g to Work with Oracle I/PM." For information about using Oracle UCM 10g as the repository, see Section 7.1.1.2, "Installing and Configuring Oracle UCM 10g to Work with Oracle I/PM.""

  • Questions about free Download Oracle 10g, Database and Developer suite

    Hi everyone, got some questions..
    1) Is it possible to download Oracle 10g and the Developer Suite for free? Is it a 30-day trial license or something like that?
    2) On Windows systems, what are the minimum requirements? For example, is a Pentium 4, 512 MB RAM, Windows XP Home Edition OK?
    3) Should I download Standard Edition? Personal?
    4) If I am trying to update my Oracle Developer knowledge (I was a developer in 1999 with Oracle 7.3 and Developer 2000), what products do I have to install? Oracle DB 10g, Developer Suite, Application Server too? What else?
    Thanks guys!
    J.

    You can find my answer here: Questions about free download Oracle 10g, Developer Suite

  • Interesting Stuff about Heat, Whining, and CPU

    This whining noise with my new MacBook Pro has had me (and many others) stumped. I still haven't found a solution, but I've gathered some interesting data with a great program found here: http://www.bresink.de/osx/TemperatureMonitor.html.
    First, for those more worried about the heat, new MacBook Pros do not seem to have a heat problem (at least mine doesn't). Mine runs cooler on my lap than on my desk for some weird reason, though: about 50 C on the desk and 40 or so on my lap. (I had posted in another topic that the laptop felt hotter on my lap, but after letting the unit sleep for a while, then running it on my desk and then on my lap, Bresink's temperature monitor confirms the temperatures, which could be due to the vent design or my owning a crappy desk lol.)
    Second, the System Information window shows my MacBook Pro as being manufactured on 4/14/06, so new revisions obviously did not fix the whining issue.
    Third, the System Information window reports that the CPU does NOT have variable speed (***?) and shows it running at 1.83 GHz all the time, regardless of power management settings or AC/battery power. Anyone else find this strange? Both the nominal CPU clock and the actual CPU clock are always at 1.83 GHz according to this program.

    Another interesting piece of information about the whine and why, for some of the users who have a whining MBP, it didn't occur on WinXP SP2:
    MS KB918005, "Battery power may drain more quickly than you expect on a Windows XP SP2-based portable computer" ( http://support.microsoft.com/kb/918005/en-us )
    After applying this fix, which addresses a power drain due to bad power management in WinXP SP2, a user reported on the onmac.net forum ( http://forum.onmac.net/showthread.php?t=1214 ) that his MacBook Pro started to whine on WinXP.
    This confirms what the former Intel engineer who posted on this forum at the beginning of the talks about this issue said. This fix will also likely make other notebook PCs whine.
    So apparently Mac OS X does manage the power correctly, and the whine didn't appear on WinXP SP2 because SP2 was not using the CPU power states in which the whine appears (the C4 power state, for example). In other words, the absence of the whine under WinXP SP2 on the MacBook Pro was due to a bug in WinXP SP2's CPU power management.
    So apparently the former Intel guy who posted here was right. And if we assume that everything he said was right, he also said that there is no hardware way to be sure that all computers produced are whine-free (because of the nature of the components used), but that, by using components of better quality, the whine can be reduced for the units that are unluckily not whine-free.
    Considering that, it's unlikely that a software update can remove the whine issue without reducing battery life, and as the latest user reports seem to indicate that the recent series of MBPs have a quieter whine, I HIGHLY RECOMMEND PEOPLE WHO DO HAVE A LOUD WHINING MBP TO SEND IT TO APPLE, IN ORDER TO GET THE LOGIC BOARD REPLACED BY A RECENT ONE!

  • Standby Database on Oracle 10g Standard Edition ?

    Hi All,
    Is it possible to do an Oracle standby database on Oracle 10g SE?
    I read somewhere in this group that we can do a standby database even in SE by writing a script that copies the archive logs to the secondary server.
    Is this approach reliable enough to be used in production?
    Does anybody have sample of the script ?
    Thank you for your help,
    xtanto

    Hi,
    Well, I think I'd better put the scripts right here. Remember, they have been tested under MY environment, using MY releases, MY OSes. It's up to you to check whether they're valid for you. They are provided as is, as samples.
    There are 4 files:
    . generic.sh : you can duplicate this file in order to set up as many standbys as needed, etc. It calls the other scripts in order to:
    . archivemove.sh : get the archived redo logs to the standby host
    . recover.sh : sync the standby
    . getrecid.sql : get the maximum progress point on the Manual Standby (used by archivemove.sh)
    These scripts are run from the standby host. Remember to thoroughly check them before relying on them in production.
    generic.sh
    #!/bin/sh
    # Be sure the environment variables are set. If not, then it might fail!
    # These environment variables are those for the Manual Physical Standby host
    export ORACLE_HOME=/logical/oracle/Ora9i
    export ORACLE_BASE=/logical/oracle
    export ORACLE_STANDBY=tns_alias
    export ORACLE_STANDBY_SYSDBA_PASS=change_on_install
    export PATH=$ORACLE_HOME/bin:$PATH
    export SOURCE_HOST=primary_host
    export SOURCE_DRIVE=/primary/absolute/path/to/archived/redo/logs
    export LOCAL_ARC_PATH=/path/to/logical/archive/dest
    # Check the date command usage depending on the platform
    dateexec=`date "+%Y-%m-%d-%H-%M"`
    # copy archived redo logs from the main database
    archivemove.sh > $dateexec.generic.log
    # recover/sync the Manual Standby Database
    recover.sh >> $dateexec.generic.log
    archivemove.sh
    #!/bin/sh
    echo ----------------------------------------------------------------
    echo ----------------------------------------------------------------
    echo Get what log has last been applied to: $ORACLE_STANDBY
    echo ----------------------------------------------------------------
    sqlplus /nolog @getrecid.sql $ORACLE_STANDBY
    echo ----------------------------------------------------------------
    maxval=`tail -1 recid.log`
    echo maxval=$maxval
    rm recid.log
    echo ----------------------------------------------------------------
    # Check the source drive to see what we're missing locally (source = primary)
    # Note the double quotes, so $SOURCE_DRIVE is expanded before remsh runs ls
    for filename in `remsh $SOURCE_HOST "ls $SOURCE_DRIVE" | sort`
    do
         # get the archive number.
         # WARNING: this relies on MY archived redo log name format! Put yours in the cut
         filename_parsed=`echo $filename | cut -c12-16`
         # Check if the number is after the last one applied to the standby
         if [ $filename_parsed -gt $maxval ]
         then
              # grab it!
              echo $filename
              rcp $SOURCE_HOST:$SOURCE_DRIVE/$filename $LOCAL_ARC_PATH
         fi
         fi
    done
    echo ----------------------------------------------------------------
    echo Removing old files
    echo ----------------------------------------------------------------
    # Check in local directory
    for filename in `ls $LOCAL_ARC_PATH | sort`
    do
         # WARNING again about filename format
         filename_parsed=`echo $filename | cut -c12-16`
         # Check the arc number...
         if [ $filename_parsed -lt `expr $maxval - 15` ]
         then
              # Delete it!
              echo $filename
              rm -f $LOCAL_ARC_PATH/$filename
         fi
    done
    echo ----------------------------------------------------------------
    echo end archivemove.sh
    echo ----------------------------------------------------------------
    recover.sh
    #!/bin/sh
    echo ----------------------------------------------------------------
    echo Processing standby database $ORACLE_STANDBY
    echo ----------------------------------------------------------------
    sqlplus /nolog << EOF
    connect sys/$ORACLE_STANDBY_SYSDBA_PASS@$ORACLE_STANDBY as sysdba
    SELECT MAX(RECID) "Log id now" FROM V\$LOG_HISTORY;
    RECOVER AUTOMATIC DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE
    CANCEL
    SELECT MAX(RECID) "Log id after recover" FROM V\$LOG_HISTORY;
    exit;
    EOF
    echo ----------------------------------------------------------------
    echo End of recovery process
    echo ----------------------------------------------------------------
    getrecid.sql
    connect sys/change_on_install@&1 as sysdba
    SET HEAD OFF FEEDBACK OFF VERIFY OFF TERMOUT ON ECHO OFF TRIMSPOOL ON SERVEROUTPUT OFF
    SPOOL recid.log
    SELECT MAX(RECID) FROM V$LOG_HISTORY;
    SPOOL OFF
    exit
    HTH building your own scripts.
    Yoann.
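One pitfall when adapting the scripts above: the cut -c12-16 in archivemove.sh assumes a fixed archived redo log name format. A quick sketch of checking the cut positions against your own format before trusting the comparison (the file name below is made up for illustration):

```shell
#!/bin/sh
# Example: extract the sequence number from an archived redo log file name.
# Assuming the hypothetical format arc_1_00042.arc, the zero-padded sequence
# number sits in character positions 7-11; adjust the positions to YOUR format.
filename="arc_1_00042.arc"
seq=`echo $filename | cut -c7-11`
echo $seq
```

Running this prints 00042; if your names put the sequence elsewhere, both the copy loop and the cleanup loop need the same adjusted positions.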

  • How to create new database on Oracle 10g

    Hi All,
    Can anyone tell me how to create a new database on Oracle 10g?
    Thanks in Advance for your help.

    again some confusion here.....
    You said you need a new database in your first post, and now you are saying you need a new schema.
    One database has many schemas (users)..... e.g. SCOTT, SYS and SYSTEM are a few of them.
    Now it depends. You may need a separate database for the test/dev environment - this is the case where you have many schemas under each database and many tables (objects) under each schema.
    OR
    You may just need a separate schema (in the same DB) for the test/dev environment, where you will have multiple tables in each schema. You need to know the DBA credentials of the DB to create a new schema.
    Ideally you would have different databases. You can create one without the SYS/SYSTEM (Oracle users) passwords, as these passwords are DB-dependent.
    What you need is access to a machine where the server is installed (it can be the same machine where you have your dev DB, or a different machine), and that will be the machine where your DB will be installed (you can do it through the Database Configuration Assistant); of course you will need Windows authentication for this.
    So you log in to the same machine, or access it from your machine using remote login.
    I hope that is clear. I hope I am not listing things that you already know - I just did it because of the confusion between DB and schema.
    Message was edited by:
    coolguy
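If a separate schema in the same DB is enough, as described above, it boils down to creating a user. A minimal sketch (the user name, password and tablespaces are placeholders; run with DBA credentials):

```sql
-- Sketch: create a schema (user) for a dev environment.
CREATE USER dev_app IDENTIFIED BY change_me
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp
  QUOTA UNLIMITED ON users;

-- Minimal privileges to connect and create objects.
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE SEQUENCE TO dev_app;
```

A whole separate database, in contrast, is created through the Database Configuration Assistant as the reply says.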

  • TDE and database copies in 10g

    We are looking into various encryption options for our 10g database running on AIX 5.3. As part of our development support, we regularly copy the production database files to other AIX servers, scrub the data, and create the resulting database under a different name. We are interested in what impact the encryption would have on this process, namely on bringing up a new database on a different server. We currently do not think we will want/need encryption on the copied, scrubbed databases, but we believe the database will actually come up in an encrypted state, although probably not usable because of the key/server relationship.
    Is anyone currently using TDE in an environment similar to this? If so, do you have any suggestions for things to look into further or for other options that should be considered?
    Thanks,
    Kathleen West
    ERS of Texas

    RMAN Cross-Platform Transportable Convert Database:
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/dbxptrn.htm#CHDCFFDI
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/ontbltrn.htm
    Regards
    Asif Kabir

  • Why i can't not create new database in oracle 10g express

    Why can't I create a new database in Oracle 10g Express?
    Should I use Oracle 11g Standard Edition?
    thanks

    In Oracle, a schema is what a 'database' is in SQL Server.
    And if you had been aware of the limitations of XE, you would have known that you can create only *1* (one) database using Oracle XE.
    However, you probably don't need a new database at all, and a schema will suffice.
    Sybrand Bakker
    Senior Oracle DBA

  • Manipulate databases in Oracle 10g XE R2

    Hi all,
    I need to create and manipulate databases in Oracle 10g XE R2 using perl DBI:1.607
    please can someone help me with some documentation
    thank you in advance

    user10984468 wrote:
    Hi all,
    I need to create and manipulate databases in Oracle 10g XE R2 using perl DBI:1.607
    please can someone help me with some documentation
    thank you in advance

    This does not appear to be an issue with the documentation (note the title of this forum); it does appear to be a request for documentation. Your question is best asked in the Database category - go to http://forums.oracle.com, scroll down to Databases, expand that category (click on More...), and look for Express Edition.
    The XE docs are at http://www.oracle.com/pls/xe102/homepage
    Much more info about XE is at http://www.oracle.com/technology/products/database/xe/index.html

  • Migrate Oracle 9i Database to Oracle 10g Solaris Platform

    Hi DBAs,
    Thank you very much, all of you - you helped me with the RMAN Duplicate Database. There was a problem in my backup set, and it has now been resolved.
    I now have the following task.
    I want to migrate an Oracle 9i database to Oracle 10g ASM; both are on the Solaris platform. Please provide me with a step-by-step solution. I have a very big database, about 250 GB.

    In the "Complete checklist for manual upgrades to 10gR2" is written:
    PREREQUISITES
    =============
    + Install Oracle 10g Release 2 in a new Oracle Home.
    + Install the latest available patchset from Metalink.
    + Install the latest available Critical Patch Update.
    ... so is the "Install the latest available Critical Patch Update" step for the Oracle 10g database Oracle Home or the Oracle 9i database Oracle Home? I think it is for the 10g database/Oracle Home ...
    After applying the CPU patch there are some "Post Installation Instructions":
    3.3.5 Post Installation Instructions, which requires a connection to the database:
    3. For each database instance running out of the ORACLE_HOME being patched, connect to the database using SQL*Plus as SYSDBA and run catcpu.sql as follows:
    cd %ORACLE_HOME%\CPU\CPUJan2007
    sqlplus /nolog
    SQL> CONNECT / AS SYSDBA
    SQL> STARTUP
    SQL> @catcpu.sql
    SQL> QUIT
    My question is: if I want to upgrade a 9i database to 10g and I apply this CPU (to the 10g Oracle Home) before the upgrade is done, how can I connect to an instance which does not yet exist in order to run this script? Is this step mandatory, or optional?
    Thanks,
    P.

  • IMP database for oracle 10G in window

    Hi Experts,
    I am trying to imp a database from an exp dump file.
    I am new to this.
    I created a blank database with Oracle 10g (the "create general purpose database" option during the Oracle installation).
    Now I want to imp a database about 250 GB in size.
    Do I need to create each tablespace in the target DB before the imp?
    I have a full exp database dump file.
    Can I directly imp a full exp dump file to copy the source database?
    Thanks
    Jim

    Thanks for your help.
    If I do not create the source DB's tablespaces, what kind of tablespace will be imported into the new database? The defaults from the source database?
    I have a full exp dump file.
    Also, you said that if I want to keep the same tablespaces as the source, I should just create all of the tablespaces. What about the data files? Can we change the data file size - is that true?
    Thanks
    JIM
    Edited by: user589812 on Feb 26, 2009 12:43 PM
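A full import from an exp dump file, as discussed above, is usually a single imp invocation. A hedged sketch (credentials, connect string and file names are placeholders; with FULL=Y, imp attempts to create the tablespaces itself, which only succeeds if the source data file paths are valid on the target):

```
imp system/manager@TARGETDB FULL=Y FILE=expdat.dmp LOG=imp_full.log IGNORE=Y
```

IGNORE=Y makes imp load data into objects (and tablespaces) that already exist instead of failing on the CREATE statements, which is why pre-creating the tablespaces with adjusted data file sizes also works.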

  • Where to learn about database tuning from?

    Hello,
    I need to learn more about database tuning - the practical aspect. Are there any sites/services that could help me? I can't use a production environment, of course; I need to prepare my own environments (Oracle DBs on Linux and on Windows) and my own workload. How do I simulate a workload from many users?
    Thanks in advance for help
    Aliq

    How do you learn to paint? You can read every book on the subject, attend lectures by famous artists on paints and brushes and styles and what not..
    None of this will turn you into an artist that can paint. Theory only goes that far.
    And this is as true in performance tuning as in painting. You need to run into that brick wall called experience over and over again - and each time learn hard lessons that no theory can ever teach.
    If performance tuning was that easy, we would have had fully automated tuning software in operating systems and database systems that could detect and fix all our performance woes on the fly.
    Does not work like that.
    Also, performance tuning is many times seen as an "after the fact" thing. Design the system. Code the software. Implement it. Then tune it.
    Wrong. Also does not work like that.
    Performance tuning begins at the very first workshop when brainstorming the basic design of the system. If performance and scalability are not part of that process, they cannot easily (if at all) be made to be part of the final system as a tuning exercise.
    If I had to give a single fundamental "uber-alles" principle for performance tuning - when dealing with it after the fact (as many of us do) - then it is:
    Identify The Problem
    Do not confuse symptoms as the actual problem.
    PS. Performance tuning is also many times (IMO) a situation where you have lost. Why? Because if the code had been designed and written correctly, there would not have been a performance issue. If the Oracle architecture had been understood correctly, there would not be a problem. Which makes the advice of the other 2 posters so important. Understand Oracle. Understand how to design and code in Oracle. If that is done well, what is there left to performance tune?
