Very Large Site

In my current environment I have the following setup: one content database assigned to a Web Application, which has a single site collection with a single site. The content database is 190 GB. Is there a process to split the content database into multiple smaller content databases? Any other recommendations for managing the large content database?
Thanks
MP

Normally this would be a fairly routine operation, except for one detail: it is all one big site collection. Ouch. Microsoft doesn't support splitting a single site collection across two content databases; if you've ever taken a peek inside a content DB you'll see why.
There are a couple of things you can do though...
The best solution would be to analyze your site structure and see if there are some logical groupings. For instance, if you have a document center as part of your site collection, that would be perfect to break out into its own site collection. If you find your sites are grouped into units like departments and projects, that would be another way to break up the site collections. Try to identify classifications of sites and set up a governance/managed path structure that allows you to group each site classification into its own site collection. Once they are broken out into separate site collections, you can move them around into separate content databases.
If grouping the sites into separate site collections is not an option, the next best thing I've found is to enable FILESTREAM and RBS (Remote BLOB Storage). This is a little tricky; it took me an hour or two to read through the documentation and practice on a test farm a couple of times. The default file-size threshold out of the box is 0 KB (meaning all BLOBs are externalized), but I've found it's generally better to set the break point around 100 KB. The issue with this approach is of course that backups and DR sites become just that much more complex, but it will significantly reduce your content DB. Of course, you may find that a combination of the two might be more useful: break out the site into separate site collections and then enable RBS on items like your document center.
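For reference, the SQL Server side of that RBS/FILESTREAM setup usually boils down to something like the sketch below. This is only an outline, not ieDaddy's exact steps: the database name WSS_Content, the password, and the c:\BlobStore path are placeholders, FILESTREAM also has to be enabled on the SQL Server service in Configuration Manager, and the actual size threshold (e.g. 100 KB) is set afterwards on the RBS provider rather than in T-SQL.

-- Sketch only: WSS_Content, the password, and c:\BlobStore are placeholders.

-- 1. Enable FILESTREAM access at the instance level.
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;
GO

-- 2. Prepare the content database for the FILESTREAM RBS provider.
USE [WSS_Content];
GO

IF NOT EXISTS (SELECT * FROM sys.symmetric_keys
               WHERE name = N'##MS_DatabaseMasterKey##')
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'<strong password here>';
GO

ALTER DATABASE [WSS_Content]
    ADD FILEGROUP RBSFilestreamProvider CONTAINS FILESTREAM;
GO

-- The FILENAME below is a directory that will hold the externalized BLOBs.
ALTER DATABASE [WSS_Content]
    ADD FILE (NAME = RBSFilestreamFile, FILENAME = 'c:\BlobStore')
    TO FILEGROUP RBSFilestreamProvider;
GO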
The last option is the possibility that you may not need to do anything at all. It's going to depend on how your site is used. While 200 GB is the recommended limit for collaborative sites, if the site is used largely for publishing static content, archiving data, or as a records center, then your I/O and usage patterns may still allow good performance at up to a terabyte of data.
ieDaddy
Blog: http://iedaddy.com
Twit: @iedaddy

Similar Messages

  • Grid Control Architecture for Very Large Sites: New Article published

    A new article on Grid Control was published recently:
    Grid Control Architecture for Very Large Sites
    http://www.oracle.com/technology/pub/articles/havewala-gridcontrol.html

    Oliver,
    Thanks for the comments. The article is based on practical experience. If one were to recommend a pool of 2 management servers for a large corporation with 1,000 servers, that would mean that if 1 server were brought down for any maintenance reason (e.g., applying an EM patch), all the EM workload would fall on the remaining management server. So it is better to have 3 management servers instead of 2 when the EM system is servicing so many targets. Otherwise, the DBAs would be a tad angry, since the single remaining management server would not be able to service them properly during the maintenance work on the first management server.
    The article ends with these words: "You can easily manage hundreds or even *thousands* of targets with such an architecture. The large corporate which had deployed this project scaled easily up to managing 600 to 700 targets with a pool of just three management servers, and the future plan is to manage *2,000 or more* targets which is quite achievable." The 2,000 or more is based on the same architecture of 3 management servers.
    So as per the best practice document, 2 management servers would be fine for 1,000 servers, although I would still advise 3 servers in practice.
    For your case of 200 servers, it depends on the level of monitoring you are planning to do and the type of database management activities that will be done by the DBAs. For example, if the DBAs are planning on creating standby databases now and then through Grid Control, running backups daily via Grid Control, cloning databases in Grid Control, patching databases in Grid Control, and so on, I would definitely advise a pool of 2 servers in your case. 2 is always better than 1.
    Regards,
    Porus.
    Edited by: Porushh on Feb 21, 2009 12:51 AM

  • "Fatal error: Allowed memory size of 33554432 bytes exhausted" I get this error message when I try to load very large threads at a debate site. What to do?

    "Fatal error: Allowed memory size of 33554432 bytes exhausted"
    I get this error message whenever I try to access very large threads at my favorite debate site using Firefox versions 4 or 5 on my desktop or laptop computers. I do not get the error using IE8.
    The only fixes I have been able to find are for servers that have WordPress and php.ini files.

    It works, thanks

  • All of the input fields in my browser, regardless of site, are very large while everything else is normal size. How do I fix this?

    All of the input fields in the browser are very large while other text is normal size. How do I fix this?
    In some cases sites' headers will run together and sometimes overlap. Other sites will show just a portion of the site in a vertical box, not the full width of the screen, and I can't adjust it.

    Applications -> Utilities -> Audio MIDI Setup; select the desired output device and click on the gear icon at the bottom, select 'Play alerts and sound effects through this device'.

  • How can we suggest a new DBA OCE certification for very large databases?

    How can we suggest a new DBA OCE certification for very large databases?
    What web site, or what phone number can we call to suggest creating a VLDB OCE certification.
    The largest databases that I have ever worked with were barely over 1 trillion bytes.
    Some people told me that the results of being a DBA totally change when you have a VERY LARGE DATABASE.
    I could guess that maybe some of the following configuration topics might be on it:
    * Partitioning
    * parallel
    * bigger block size - DSS vs OLTP
    * etc
    Where could I send in a recommendation?
    Thanks Roger
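    For a rough sense of what those topics look like in practice, here is a small, made-up illustration (the table, partition names, and dates are invented) of the partitioning and parallelism features listed above:
    -- Hypothetical VLDB-style fact table: range-partitioned by date and parallel-enabled.
    CREATE TABLE sales_fact (
        sale_id    NUMBER       NOT NULL,
        sale_date  DATE         NOT NULL,
        amount     NUMBER(12,2)
    )
    PARTITION BY RANGE (sale_date) (
        PARTITION p_2008 VALUES LESS THAN (TO_DATE('2009-01-01', 'YYYY-MM-DD')),
        PARTITION p_2009 VALUES LESS THAN (TO_DATE('2010-01-01', 'YYYY-MM-DD')),
        PARTITION p_max  VALUES LESS THAN (MAXVALUE)
    )
    PARALLEL 8;   -- default degree of parallelism for scans of this table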

    I wish there were some details about the OCE data warehousing.
    Look at the topics for 1Z0-515. Assume that the 'lightweight' topics will go (like Best Practices) and that there will be more technical topics added.
    Oracle Database 11g Data Warehousing Essentials | Oracle Certification Exam
    Overview of Data Warehousing
      Describe the benefits of a data warehouse
      Describe the technical characteristics of a data warehouse
      Describe the Oracle Database structures used primarily by a data warehouse
      Explain the use of materialized views
      Implement Database Resource Manager to control resource usage
      Identify and explain the benefits provided by standard Oracle Database 11g enhancements for a data warehouse
    Parallelism
      Explain how the Oracle optimizer determines the degree of parallelism
      Configure parallelism
      Explain how parallelism and partitioning work together
    Partitioning
      Describe types of partitioning
      Describe the benefits of partitioning
      Implement partition-wise joins
    Result Cache
      Describe how the SQL Result Cache operates
      Identify the scenarios which benefit the most from Result Set Caching
    OLAP
      Explain how Oracle OLAP delivers high performance
      Describe how applications can access data stored in Oracle OLAP cubes
    Advanced Compression
      Explain the benefits provided by Advanced Compression
      Explain how Advanced Compression operates
      Describe how Advanced Compression interacts with other Oracle options and utilities
    Data integration
      Explain Oracle's overall approach to data integration
      Describe the benefits provided by ODI
      Differentiate the components of ODI
      Create integration data flows with ODI
      Ensure data quality with OWB
      Explain the concept and use of real-time data integration
      Describe the architecture of Oracle's data integration solutions
    Data mining and analysis
      Describe the components of Oracle's Data Mining option
      Describe the analytical functions provided by Oracle Data Mining
      Identify use cases that can benefit from Oracle Data Mining
      Identify which Oracle products use Oracle Data Mining
    Sizing
      Properly size all resources to be used in a data warehouse configuration
    Exadata
      Describe the architecture of the Sun Oracle Database Machine
      Describe configuration options for an Exadata Storage Server
      Explain the advantages provided by the Exadata Storage Server
    Best practices for performance
      Employ best practices to load incremental data into a data warehouse
      Employ best practices for using Oracle features to implement high performance data warehouses

  • Safari crashes when opening a very large pdf

    I have a 1st generation iPad running 4.3.5.
    Every time I try to download a very large PDF in Safari (i.e. 175+ MB), it gets about three quarters of the way through, then Safari crashes and I lose my progress. I then restart Safari, and the process restarts and crashes again.
    I've set auto-lock to never, but that didn't help. Any ideas on how to get Safari to download this file?
    P.S. I've considered other methods of getting the PDF, but for this project I have to download it from a web site.
    Thanks for any help.

    Other apps can download PDFs. I don't know whether it can cope with a 175 MB download, but GoodReader can download files: http://www.goodreader.net/gr-man-tr-web.html

  • T3/RMI packet size very large

    In order to determine our bandwidth requirements, we recently placed a sniffer on our network to analyze our message packets. We've noticed that our packets are very large, and it appears that much of the overhead is in RMI or T3.
    Here are some sample numbers for a similar message between a single client and our WLS server:
    T3: 3500 bytes
    T3 w/ HTTP tunneling: 5500 bytes
    IIOP: 1250 bytes (using VisiBroker ORB talking to a Smalltalk ORB)
    As you can see, the T3 packet size is 65% larger than the same packet sent via CORBA/IIOP. It also appears that with RMI, all of the full class names and variable names are being passed along within the packet. Are we missing something, or is this an understood fact? Is there anything we can do to fix this problem? As it stands, our bandwidth requirements to support the larger T3 packet size are astronomical, and this would not be feasible in a production environment. Does anyone know what the typical percentage overhead increase per packet is? It appears to be about 400%.
    Our WLS environment is described below.
    Edwin Marcial
    Continental Power Exchange
    Weblogic Environment
    WLS Server
    WLS 4.51 w/ Service Pack 7
    NativeIO = true
    ExecuteThreadCount = 40
    readTimeoutMillis=5000
    readTimeoutMillisSSL=10000
    Dell Pentium III 600 w/ 512 MB memory
    JavaSoft 1.2.2
    -ms128 -mx350
    WLS Client
    Java Application
    t3s and https (using WLS RMI)
    JavaSoft 1.1.7b
    typically Pentium 200 MHz or better w/ 64MB or more

    I think you are kind of stuck with this. RMI is a heavyweight protocol in comparison to IIOP. If the message sizes really bother you that much, you may want to look into an EJB implementation that maps RMI to IIOP, such as the Inprise Application Server, which sits atop the VisiBroker ORB.
    -paul

  • Very-large-scale searching in J2EE

    I'm looking to solve a very-large-scale searching problem. I am creating a site
    where users can search a table with five million records, filtering and sorting
    independently on ten different columns. For example, the table might be five million
    customers, and the user might choose "S*" for the last name, and sort ascending
    on street name.
    I have read up on a number of patterns to solve this problem, but anticipate some
    performance issues. I'll explain below:
    1) "Page-by-Page Iterator" or "Value List Handler"
    In this pattern, it appears that all records that match the search criteria are
    retrieved from the database and cached on the application server. The client (JSP)
    can then access small pieces of the cached results at a time. Issues with this
    include:
    - If the customer record is 1KB, then wide search criteria (i.e. last name =
    S*) will cause 1 GB transfer from the database server to app server, and then
    1GB being stored on the app server, cached, waiting for the user (each user!)
    to ask for the next 10 or 100 records. This is inefficient use of network and
    memory resources.
    - 99% of the data transferred from the database server will not be used ... most
    users flip through a couple of pages and then choose a record or start a new search
    2) Requery the database each time and ask for a subset
    I haven't seen this formalized into a pattern yet, but the basic idea is this:
    If a client asks for records 1-100 first (i.e. page 1), only fetch that many
    records from the db. If the user asks for the next page, requery the database
    and use the JDBC API's ResultSet.absolute(int row) to start at record 101. Issue:
    The query is re-performed, causing the Oracle server to do another costly "execute"
    (bad on 5M records with sorting).
    To solve this, I've been trying to enhance the second strategy above by caching
    the ResultSet object in a stateful session bean. Unfortunately, this causes a
    "ResultSet already closed" SQLException, although I ensure that the Connection,
    PreparedStatement, and ResultSet are all stored in the EJB and not closed. I've
    seen this on newsgroups ... it appears that WebLogic is forcing the Connection
    closed. If this is how J2EE and pooled connections work, then that's fine ...
    there's nothing I can really do about it.
    Another idea is to use "explicit cursors" in Oracle. I haven't fully explored
    it yet, but it wouldn't be a great solution as it would be using Oracle-specific
    functionality (we are trying to be db-agnostic).
    More information:
    - BEA WebLogic Server 8.1
    - JDBC: Oracle's thin driver provided with WLS 8.1
    - Platform: Sun Solaris 5.8
    - Oracle 9i
    Any other ideas on how I can solve this issue?

    Hi. Fancy SQL to the rescue! If the table has a unique key, you can simply send a query per page, with iterative SQL that selects the next N rows beyond what was selected last time. E.g.:
    Let variable X be the highest key value you've seen so far. Initially it would be the lowest possible value.
    select * from mytable M
    where ... -- application-specific qualifications...
    and M.key >= X
    and 100 > (select count(*) from mytable MM where MM.key > X and MM.key < M.key and ...)
    In English, this says: select all the qualifying rows higher than what I last saw, but only those that have fewer than 100 qualifying rows between the last one I saw and them (i.e., the next 100).
    When processing this query, remember the highest key value you see, and use it for the next query.
    Joe
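    To make that concrete: under a hypothetical customers(cust_id PRIMARY KEY, last_name, street_name) schema (the table and column names here are invented, not from the original post), the same keyset idea can be written for Oracle with the standard ROWNUM top-N idiom, paging on the key instead of using a correlated count:
    -- Hypothetical schema: customers(cust_id PRIMARY KEY, last_name, street_name).
    -- :last_seen_id is the highest cust_id returned on the previous page (0 for page one).
    SELECT *
      FROM (SELECT cust_id, last_name, street_name
              FROM customers
             WHERE last_name LIKE 'S%'          -- application-specific qualifications
               AND cust_id > :last_seen_id      -- only rows beyond the last page
             ORDER BY cust_id)                  -- ordered inline view
     WHERE ROWNUM <= 100;                       -- take the next 100 rows
    Remember the largest cust_id from each page and feed it back as :last_seen_id on the next request, so the database never sorts or ships more than one page at a time. Note that this pages in key order; letting the user sort on an arbitrary column such as street name requires that column (plus the key as a tiebreaker) in both the ORDER BY and the "greater than the last row seen" condition.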

  • Why does the persona that I created get very large when I post it even though its the correct dimensions?

    Every time we try to upload our persona, it shows up on screen very large. It seems like the dimensions are oversized, and only about 1/4 of the full design shows on screen. It's like an extreme close-up view of what we built. We've built it to the specs provided on the site. If you want to check it out, the persona was created by Worth. Any suggestions on how to fix this?

    - Inspect the dock connector on the iPod for bent or missing contacts, foreign material, corroded contacts, broken, missing or cracked plastic.
    - Reset all settings      
    Go to Settings > General > Reset and tap Reset All Settings.
    All your preferences and settings are reset. Information (such as contacts and calendars) and media (such as songs and videos) aren’t affected.
    - Restore from backup. See:                                 
    iOS: How to back up           
    - Restore to factory settings/new iOS device.
    If still problem, make an appointment at the Genius Bar of an Apple store since it appears you have a hardware problem.
    Apple Retail Store - Genius Bar          
    Apple will exchange your iPod for a refurbished one for this price. They do not fix yours.
      Apple - iPod Repair price

  • Tips or tools for handling very large file uploads and downloads?

    I am working on a site that has a document repository feature. The documents are stored as BLOBs in an Oracle database, and for reasonably sized files it's no problem to stream the files out directly from the database. For file uploads, I am using the Struts module to get them on disk and am then putting the BLOB in the database.
    We are now being asked to support very large files of 250MB+. I am concerned about problems I've heard of with HTTP not being reliable for files over 256MB. I'd also like a solution that would give the user a status bar and allow for restarts of broken uploads or downloads.
    Does anyone know of an off-the-shelf module that might help in this regard? I suspect an ActiveX control or Applet on the client side would be necessary. Freeware or Commercial software would be ok.
    Thanks in advance for any help/ideas.

    Hi. There is nothing wrong with HTTP handling 250MB+ files (per se).
    However, connections can get reset.
    Consider offering the files via FTP. Most FTP clients are good about resuming transfers.
    Or if you want to keep using HTTP, try supporting chunked encoding. Then a user can use something like 'GetRight' to auto resume HTTP downloads.
    Hope that helps,
    Peter
    http://rimuhosting.com - JBoss EJB/JSP hosting specialists

  • HT5548 Is it possible to change the size of the icons shown in Launchpad? Mine seem very large.

    Is it possible to change the size of the icons in Launchpad? Mine seem very large.

    Thanks for the reply UKoC. You are close on the size. I would say they are about an inch to an inch and a half in width and only seven across the row. They just seem quite large on my display and I figured there would be an easy way. I kept thinking I just had to be missing the way for adjusting them since everything else seems to be so easy to personalize. Funny thing is that I have watched an online tutorial twice now on Apple's web site showing Launchpad and the icons shown there are smaller and more of them. I just thought I was missing the fix somehow. Thanks for confirming this.

  • Can I get a very large (72GB) SSD?

    Can I get a very large (72GB) SSD? Price is currently no issue. Speed and capacity are the two highly important factors.

    This question is not particularly related to OS X Server, so you probably won't get the widest audience for your storage hardware question here; Apple doesn't offer SSD storage arrays, and big FC SAN arrays aren't all that common on OS X and OS X Server systems.
    As for your question...
    The HP 3PAR StoreServ 10000 SSD array will support up to 3.2 petabytes (that's ~3,200 terabytes), but you're probably going to need somebody to test and qualify it for you, or you'll have to qualify it yourself — primary connection into that would be FC SAN via 8 Gbps FC SAN HBA.    Here are some smaller arrays; 3PAR 7450 220 TB and this 3PAR 7000 (not sure of the capacity there).
    Contact ATTO, Promise, or one of the other organizations that specialize in high-end storage for OS X if you want somebody who can work with you here to qualify and support this configuration, should you need that. They'll also be involved in establishing a connection from your Mac to the FC SAN, via Thunderbolt or via an FC SAN PCIe HBA or such.
    If price is an object, and if this question is related to this how much storage do I need? question, these 3PAR SSD arrays are very likely massive capacity and budget overkill — SSD arrays aren't yet cheap enough to provide archival storage for most places, and your OS X Systems probably can't even generate enough I/O to keep these 3PAR arrays busy.   (Enterprise-class gear such as these 3PAR arrays can be fairly gonzo in its bandwidth and I/O capabilities.)
    For typical backup configurations I work with, that's usually a decent-sized disk array, possibly with a mix of flash cache if necessary, and enough spindles on the disks to get you the I/O bandwidth necessary, then possibly with tape backups for longer-term and off-site storage.
    Most folks that really need I/O performance will use SSDs for secondary storage (after using main memory and processor caches for all they're worth), and then use bigger (and somewhat slower) disks for tertiary storage (with RAID-10 or virtual RAID across parallel disks for speed and hardware reliability — SATA disks are fairly common here, because they're big and cheap), and then out to LTO Ultrium tape (LTO-6 provides ~2.5 TB per cartridge uncompressed, and claims ~2.5:1 compression) or maybe uploaded to Amazon Glacier or similar cloud services for long-term and off-site storage requirements.   (I've yet to work at a site that's using SSD for classic backups, though there are sites that are using these arrays for database transaction logs and other backup-like purposes.)
    Also look at archiving the data to a remote site, if it's important data.  Either tapes or (less desirably) disks via courier, or transferred via a fat network link, if you can afford that.

  • Web pages in safari automatically expands very large

    When in Safari, web pages automatically expand very large. At first I thought it was from pushing on the Magic Mouse, but it happens even when the page is just sitting there....

    Lyons Den,
    You are inadvertently using the zoom shortcut...two taps on the Magic Mouse.

  • I received the error (in iCal on my iMac): "The server responded with an error". The error message is very large, and if there is a way to acknowledge and close it I can't find it. Because this error message is open, I can't do anything in iCal. Any sugge

    I received the error (in iCal on my iMac): "The server responded with an error". The error message is very large, and if there is a way to acknowledge and close it I can't find it. Because this error message is open, I can't do anything in iCal. Any suggestions on how I could kill this error message? Thanks.
    iMac, Mac OS X (10.7.2)
    Basically I tried to enter too much information into my calendar and it has crashed. Now I cannot get rid of the error message or use the calendar. Can anyone help please?

    Did you find out how to get rid of it? I can't.

  • What is the optimum core configuration for a new Mac Pro to process and manipulate very large (80 megapixel) images using PhotoShop and Camera Raw?

    Hello:
    I will be using creative techniques to process and manipulate a large number (hundreds) of very large (80 megapixel) images captured using a medium format digital back (Phase One IQ180).
    Final output will be digital fine art imagery printed using an Epson 11880 at large sizes (up to 60 inches x ?), retaining the highest possible quality and resolution. I will be using Adobe CC PhotoShop and Camera RAW as well as Capture One software. PhotoShop filters will be used extensively.
    The Mac Pro needs to be optimized for the above purpose and be useful for at least five years. I plan to max out all the other options (RAM, graphics cards, storage). Performance is more important than cost.
    The few discussions I have found that mention optimum core configurations seem to lean toward 6 or 8 cores (but likely are not taking into consideration my need to manipulate a large number of very large files), so I am looking to this forum for opinions.
    Thank you,
    Kent

    See if this helps
    http://macperformanceguide.com/index_topics.html#MacPro2013
