Best practice to handle content greater than 1 TB

Hello All,
I am using SharePoint 2010 and I need to know the best practice for handling content greater than 1 TB.
Specifics:
1) The content will be a collection of images (JPEG format); collectively the size can grow beyond 1 TB, up to 10 TB or more.
2) Images will be uploaded to SharePoint through a web service.
Are any of the options below suitable? If not, is there another option?
- Document Library
- Document Center
- Record Center
- Asset Library
- Picture Library
Thanks in advance ...

There are several aspects to this.
Large lists:
http://technet.microsoft.com/en-gb/library/cc262813%28v=office.14%29.aspx
A blog summarising large databases here:
http://blogs.msdn.com/b/pandrew/archive/2011/07/08/articles-about-scaling-sharepoint-to-large-content-database-capacity.aspx
Boundaries and limits:
http://technet.microsoft.com/en-us/library/cc262787%28v=office.14%29.aspx#ContentDB
If at all possible, make your web service clever enough to split content over multiple site collections, so that you can keep the individual databases smaller.
It can be done but you need to do a lot of reading on this to do it well. You'll also need a good DBA team to maintain the environment.
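A minimal sketch of such a routing layer, assuming a hypothetical fixed list of site collection URLs (the actual upload call to SharePoint's web service is omitted):

```python
import hashlib

# Hypothetical site collection URLs, each backed by its own content database.
SITE_COLLECTIONS = [
    "http://portal/sites/images01",
    "http://portal/sites/images02",
    "http://portal/sites/images03",
    "http://portal/sites/images04",
]

def route_upload(image_name: str) -> str:
    """Pick a site collection deterministically from the image name,
    spreading content evenly so no single content DB grows too large."""
    digest = hashlib.md5(image_name.encode("utf-8")).hexdigest()
    index = int(digest, 16) % len(SITE_COLLECTIONS)
    return SITE_COLLECTIONS[index]
```

Because the routing is deterministic, a later download request for the same image name resolves to the same site collection without any lookup table.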

Similar Messages

  • Best Practice for Handling Timeout

    Hi all,
    What is the best practice for handling timeouts in a web container?
    By the way, we have been using WebLogic 10.3.
    thanks

    Are you asking about this specific to web services, or just the JEE container in general for a web application?
    If a web application, Frank Nimphius recently blogged a JEE filter example that you could use too:
    http://thepeninsulasedge.com/frank_nimphius/2007/08/22/adf-faces-detecting-and-handling-user-session-expiry/
    Ignore the tailor-fitted ADF options; this should work for any app.
    CM.

  • Database Log File becomes very big, What's the best practice to handle it?

    The log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server. Can anybody give me advice on the best practice for handling this issue?
    Should I shrink the database?
    I know increasing the hard disk is needed in the long term.
    Thanks in advance.

    Hi Finke,
    Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and the log is cleared when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
    Follow these steps to get the transaction log file back into normal shape:
    1.) Take a transaction log backup.
    2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
          The above command will shrink the file to 10 GB (a recommended size for highly transactional systems).
    > Finke Xie wrote:
    > Should I shrink the database?
    Never shrink data files; shrink only the log file.
    3.) Schedule log backups every 15 minutes.
    Thanks
    Mush
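Mush's routine can be scripted. The sketch below only builds the T-SQL statements (the database, log file, and backup path are placeholders) and leaves execution to your SQL client:

```python
def log_maintenance_statements(db: str, log_file: str, backup_path: str,
                               target_mb: int = 10240) -> list:
    """Build the T-SQL for the routine above: back up the transaction
    log first, then shrink the log file only (never the data files)."""
    return [
        f"BACKUP LOG [{db}] TO DISK = N'{backup_path}'",
        f"USE [{db}]",
        f"DBCC SHRINKFILE(N'{log_file}', {target_mb})",
    ]

# Placeholder names for illustration only.
stmts = log_maintenance_statements("PRD", "PRD_log", r"E:\backup\PRD_log.trn")
```

Keeping the backup statement first in the list mirrors the key point: the shrink only works once the log has been backed up and its space can be reclaimed.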

  • What is the best practice to handle JPA methods in JSF app?

    I am building a JSF-JPA web app (no EJB).
    I have several methods that have JPA QL inside. I have to put those methods inside JSF beans to inject the EntityManagerFactory (am I right about this?), but I want to separate these methods from the regular JSF beans used by page authors, and I may need to use them in different JSF managed beans.
    My question is: what is the best practice for handling this?
    I. Write one or a few separate JSF beans and inject them into the regular beans?
    II. Write one or a few separate JSF beans and access them from the regular beans using FacesContext?
    III. Something else?
    Waiting to hear your opinions.

    You can create named queries on your Entities themselves then just call entityMgr.createNamedQuery("nameOfQuery");
    Normally, we put these named queries in the class of the entity which will be returned. This allows for all information pertaining to a given entity and all ways of accessing that entity (except em.find() and stuff, of course) to be in one place. As long as the entity is defined in your persistence.xml file, any named queries which reside on that entity will be available through the EntityManager.
    As for the EntityManagerFactory, we normally create an application scope bean which holds the factory itself (because this is a heavy-weight object) and then just get all EntityManager instances from that by injecting this bean into whatever needs it. For example, I might have:
    //emfBB is the injected application-scope bean which holds the EntityManagerFactory.
    private EmfBB emfBB;

    private void lookupSomeData() {
        EntityManager em = this.getEmfBB().getEmf().createEntityManager();
        // ... use em to run queries, then close it ...
    }
    I hope this answered your question?
    ~Zack
    Edited by: zmarr on Nov 6, 2008 1:29 PM

  • Best Practices for Handling queries for searching XML content

    Experts: We have a requirement to count, out of 4M rows, those where a specific XML tag's value begins with a given prefix. I have a text index created, but the query is extremely slow when I use the CONTAINS operator.
    select count(1) from employee
    where
    contains ( doc, 'scott% INPATH ( /root/element1/element2/element3/element4/element5)') >0
    What is Oracle's best-practice recommendation for indexing and querying such searches?
    Thanks

    Can you provide a test case that shows the structure of the data and how you've generated the index? Otherwise, the generic advice is going to be "use prefix indexing".

  • CUIC reporting: handled calls greater than answered calls

    Hello,
    We met the following problem in Cisco Cuic reporting 8.5.4 for cisco UCCE 8.5.3.
    We use only the stock reports. In call type historical all fields we met a strange situation for a simple inbound script:
    - the number of handled calls is greater than the number of answered calls. This is very unusual, because any call
    is first answered and only after that is it handled (or not) by the agent. So a handled call is supposed to have been answered first.
    Any ideas? What could be the cause of the additional handled calls?
    We also see a mismatch between the number of handled calls in the Agent Team Historical All Fields report and the number of handled calls
    in the Call Type Historical All Fields report. Should the number not be the same in both reports?
    The definitions for answered, handled, and offered are very brief, and they lack essential information about the situations
    they describe. The enterprise documentation often omits essential details, and the only source of real
    information is other people's knowledge and experience.
    Any help is welcome.
    Best regards,
    Marius

    Answered is incremented when the Agent picks up the call (or received the activity, in the case of EIM).
    Handled is incremented when the call (or EIM activity) completes.
    Handled will be incremented regardless of whether the call is transferred/conferenced/consulted or not.
    As others have mentioned, Answered and Handled counts can be incremented during different intervals depending on the length of the call. The only way I can think of where that would be the case here would be if your agents are handling EIM activities... They could have 5 open emails in their inbox at the end of the day - these 5 open emails would be considered Answered, but not Handled.
    If this isn't from non-voice activities, then perhaps you have some system issue causing the problem.
    -Jameson
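The interval effect Jameson describes can be illustrated with a toy count, assuming 30-minute reporting intervals and made-up call times:

```python
from collections import Counter

def interval(ts_minutes: int) -> int:
    """Map a timestamp (minutes since midnight) to a 30-minute interval."""
    return ts_minutes // 30

# (answer_time, end_time) in minutes; the second call spans an interval boundary.
calls = [(600, 610), (625, 645)]

# Answered is counted at pickup time, Handled at completion time.
answered = Counter(interval(a) for a, _ in calls)
handled = Counter(interval(e) for _, e in calls)
```

For the 10:00-10:30 interval this gives 2 answered but only 1 handled, and for 10:30-11:00 it gives 1 handled with 0 answered, so per-interval handled can exceed answered even though every call was answered first.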

  • Best practice on handling a datacontrol based on a changing webservice

    Is there any best practice for handling changes to a data control when a web service changes? It seems the information on
    port numbers, server names, etc. is placed in a number of files. An optimal solution would be for JDeveloper to have functionality
    to regenerate all relevant files based on changes to the WSDL, but this does not seem to be supported in JDeveloper 11g.
    Regards
    Ole Spielberg

    Hi,
    I think in this case you would be better off using a WS proxy, wrapping it in a POJO, and creating a data control from that. This allows you to set the port and host programmatically. I agree that there should be a better option to do the same in the WS data control.
    Frank

  • Best practice for handling errors in EOIO processing on AEX?

    Hi,
    I'm looking for resources and information describing the best practice for handling processing errors of asynchronous integrations with Exactly-Once-In-Order QoS on an AEX (7.31) Java-only installation. Most information I've found so far describes the monitoring and restart jobs on AS ABAP.
    Situation to solve:
    Multiple different SOAP messages are integrated using one queue with an RFC receiver. On error, the message status is set to Holding and all following messages are Waiting. Currently I need to manually watch over the processing, delete the message with status Holding, and restart the waiting ones.
    It seems I can set up component-based message alerting to trigger an email or some other alert. I still need to decide how to handle the error and resolve it (i.e., delete the erroneous message, correct the data at the sender, and trigger another update), and I still need to manually find the oldest entry with status Waiting and restart it. I've found a restart job under Background Jobs in Configuration and Monitoring Home, but it can only be scheduled at intervals of 1 hour or more.
    Is there something better?
    Thank you.
    Best regards,
    Nikolaus

    Hi Nikolaus -
    AFAIK, for EOIO you have to cancel the failed message and then process the next message in the sequence manually.
    The restart job only works on messages in error state, not in Holding state, so you have to push the message manually; there is no other alternative.
    But it should not be that difficult to identify the messages in a sequence.
    How to deal with stuck EOIO messages in the XI ... | SCN
    Though it was written for an older version, the procedure should be the same; you should be able to select additional columns such as sequence ID from the settings.
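The "find the oldest Waiting message" step can be sketched like this, with an in-memory stand-in for what message monitoring would return (the statuses and sequence numbers are made up; real code would query the AEX messaging API):

```python
# Illustrative stand-in for entries read from message monitoring:
# (sequence_number, status) pairs for one EOIO queue.
messages = [
    (101, "Delivered"),
    (102, "Holding"),
    (103, "Waiting"),
    (104, "Waiting"),
]

def next_to_restart(msgs):
    """After the Holding message has been cancelled, return the oldest
    Waiting message in the sequence -- the one to restart first so the
    rest of the queue can drain in order."""
    waiting = [seq for seq, status in msgs if status == "Waiting"]
    return min(waiting) if waiting else None
```

Restarting the lowest sequence number first matters because EOIO guarantees order: restarting a later message before the oldest one would violate the sequence.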

  • Best Practice for Delivering Content

    Hi SDN Experts,
    We have a scenario where we need to read/write an EDI-like, delimiter-separated file. The solution should be deployable; we are planning to provide the IR content alone and use integration scenarios for the ID.
    We have two options:
    1. Use file Content Conversion (EDI structure is very simple) and provide Communication Channel Templates with Content Conversion Parameters for Integration Scenario.
    2. Use java Map to read/write EDI (Communication Channels would be simple in this case ).
    What are the pros and cons of the two approaches?
    Regards,
    Sudharshan N A

    Hi Sudharsan,
    Pros and Cons:
    1. Use file Content Conversion (EDI structure is very simple) and provide Communication Channel Templates with Content Conversion Parameters for Integration Scenario.
    Ans: - Communication channel configuration is harder, because you have to set up content conversion in it. If you miss any parameter or provide a wrong value, it will cause an error.
    - Mapping will be very easy, perhaps one-to-one or as the requirement dictates; you can use graphical mapping.
    2. Use a Java mapping to read/write EDI (communication channels would be simple in this case).
    Ans: - You will resolve all conversion-related issues (such as the format in which data is expected at the receiver) in the Java mapping by programming, so the mapping is going to be a tough task.
    - No content conversion is required at the communication channel level, so it is easy to configure.
    Best practice depends on your situation:
    if you are a Java expert (in Java mapping), go for the second option; otherwise the default is to use content conversion.
    Hope this will help.
    Best regards,
    Alok
    Edited by: Alok Sharma on Jan 31, 2008 8:42 AM
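To make the trade-off concrete, here is a sketch of the kind of conversion the mapping (option 2) would have to perform itself when content conversion is skipped. The segment layout and delimiter below are assumptions for illustration, not a real EDI standard:

```python
def parse_edi_lines(text: str, delimiter: str = "*"):
    """Split an EDI-like, delimiter-separated payload into records of
    fields -- the work a mapping program takes on when file content
    conversion is not configured on the channel."""
    records = []
    for line in text.strip().splitlines():
        fields = line.split(delimiter)
        records.append({"segment": fields[0], "fields": fields[1:]})
    return records

# Made-up two-segment sample payload.
sample = "HDR*20080131*SENDER01\nITM*100*WIDGET"
parsed = parse_edi_lines(sample)
```

With content conversion (option 1), this splitting is declared in channel parameters instead of coded, which is exactly why a missing or wrong parameter there fails at runtime while a coded mapping fails at development time.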

  • Best Practice for trimming content in Sharepoint Hosted Apps?

    Hey there,
    I'm developing a SharePoint 2013 app that is set to be SharePoint-hosted. I have a section within the app that I'd like to be configuration-related, so I would like to allow only certain users or roles to access this content or even see that it exists (i.e., an Admin button, if you will). What is the best practice for accomplishing this in SharePoint 2013 apps? Thus far, I've been doing everything using jQuery and the REST API, and I'm hoping there's a standard within this that I should be using.
    Thanks in advance to anyone who can weigh in here.
    Mike

    Hi,
    According to this documentation, “You must configure a new name in Domain Name Services (DNS) to host the apps. To help improve security, the domain name should not be a subdomain of the domain that hosts the SharePoint sites. For example, if the SharePoint sites are at Contoso.com, consider ContosoApps.com instead of App.Contoso.com as the domain name”.
    More information:
    http://technet.microsoft.com/en-us/library/fp161237(v=office.15)
    For production hosting scenarios, you would still have to create a DNS routing strategy within your intranet and optionally configure your firewall.
    The link below will show how to create and configure a production environment for apps for SharePoint:
    http://technet.microsoft.com/en-us/library/fp161232(v=office.15)
    Thanks
    Patrick Liang
    Forum Support

  • Best practice on publishing content

    Hi all,
    I would like to know your best practices for an easy publishing process for unstructured data in an intranet environment. As standard, I have to do several steps: create the document, create an iView for the document, and include the iView in a role or workset.
    Ideally, I would create a folder in KM with subfolders and HTML files in them, and then some tool would automatically generate a workset with a menu structure based on the folder structure and include the HTML files in pages.
    Is such a tool out there, or do you have easy ways to do this with the current tools?

    Hi Pim,
    I would suggest you look at "Collaboration Rooms" as an option for deploying structured "unstructured" content (if that makes sense). This may be a good option for some of your scenarios.
    It may even make sense to define your own Collaboration Room templates to match your use cases. This feature automates the creation of persistence areas (CM stores), as well as the instances of the worksets and iViews. New support for Room Parts and extensions really reduces the overhead of creating special areas in which to deploy content and collaborate.
    Regards,
    Darin

  • Best Practice Exception Handling.

    Hi,
    Please consider two scenarios:
    Scenario 1:
    DECLARE
      l_emp   scott.emp.ename%TYPE;
      l_dname scott.dept.dname%TYPE;
    BEGIN
      BEGIN
        SELECT ename INTO l_emp FROM emp WHERE empno = 7934;
      EXCEPTION
        WHEN no_data_found THEN
          dbms_output.put_line('No big deal');
          NULL;
        WHEN too_many_rows THEN
          dbms_output.put_line('It is a big deal');
          RAISE;
        WHEN OTHERS THEN
          RAISE;
      END;
      BEGIN
        SELECT dname INTO l_dname FROM dept WHERE deptno = 1;
      EXCEPTION
        WHEN no_data_found THEN
          dbms_output.put_line('It is a big deal');
          RAISE;
        WHEN too_many_rows THEN
          dbms_output.put_line('It is a big deal');
          RAISE;
        WHEN OTHERS THEN
          RAISE;
      END;
    EXCEPTION
      WHEN OTHERS THEN
        RAISE;
    END;
    Scenario 2:
    DECLARE
      l_point_of_error NUMBER;
      l_emp            scott.emp.ename%TYPE;
      l_dname          scott.dept.dname%TYPE;
    BEGIN
      l_point_of_error := 1;
      -- now write some implicit cursors (SELECT ... INTO):
      SELECT ename INTO l_emp FROM emp WHERE empno = 7934;
      l_point_of_error := 2;
      SELECT dname INTO l_dname FROM dept WHERE deptno = 1;
    EXCEPTION
      WHEN no_data_found THEN
        CASE l_point_of_error
          WHEN 1 THEN
            dbms_output.put_line('No big deal');
            NULL;
          WHEN 2 THEN
            dbms_output.put_line('It is a big deal');
            RAISE;
          ELSE
            dbms_output.put_line('I have an idea which block this errored out on...but I want or HAVE TO raise the error');
            RAISE;
        END CASE;
        NULL;
      WHEN too_many_rows THEN
        CASE l_point_of_error
          WHEN 1 THEN
            dbms_output.put_line('It is a big deal');
            RAISE;
          WHEN 2 THEN
            dbms_output.put_line('It is a big deal');
            RAISE;
          ELSE
            dbms_output.put_line('I have an idea which block this errored out on...but I want or HAVE TO raise the error');
            RAISE;
        END CASE;
        NULL;
      WHEN OTHERS THEN
        RAISE;
    END;
    /
    What do you think is the right approach?
    The one thing I can say about Scenario 2 is that the CASE statements in the final exception handler will be a nightmare to maintain when there are more blocks to handle.
    Also, Scenario 2 uses ONE more variable that must be assigned (slightly more processing time, maybe?).
    I am also told that Scenario 2 is used extensively by Oracle Applications' PL/SQL APIs.
    So, which one do you think is the best practice? I would really appreciate your suggestions.
    Thank you,
    Rahul

    My bad, 3360:
    I didn't mention the error logging thing in the OP; I use it in my code all the time and forgot to include it.
    Anyway, I use Tom Kyte's who_am_i and who_called_me in my error logging to figure out the line number of the error.
    * "One failure was caused by the error log table running out of space because no one ever looked at it and didn't realize it had hundreds of thousands of exceptions in it." Yes, you are absolutely right: somebody (the developers) has to take care of the error logging table (to see what errors out and why).
    P.S.: I am going to stick with Scenario 1 (with the error logging, of course :) ) per Todd Barry's comments, and I have no idea why Oracle Applications uses Scenario 2. Can anybody tell me why (maybe it's just bad coding on Oracle Apps' part?).
    Thank you,
    Rahul.
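The same trade-off exists outside PL/SQL. A minimal Python analog of the two styles (made-up lookups, not Oracle code) shows why Scenario 1's per-statement handlers keep the context local, while Scenario 2's marker variable concentrates it in one handler:

```python
def lookup(table: dict, key):
    """Scenario 1 style: handle the 'no data found' case at the statement
    that can raise it, where the context is known locally."""
    try:
        return table[key]
    except KeyError:
        return None  # "No big deal" -- this particular lookup may miss

def lookup_both(emp: dict, dept: dict, empno, deptno):
    """Scenario 2 style: one handler at the end, with a marker variable
    recording which statement failed."""
    point_of_error = 1
    try:
        ename = emp[empno]
        point_of_error = 2
        dname = dept[deptno]
    except KeyError:
        if point_of_error == 1:
            return None  # no big deal
        raise  # a missing department is a big deal
    return ename, dname
```

As the question notes, the marker-variable dispatch grows with every additional statement, whereas the per-statement handlers stay the same size no matter how many blocks follow.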

  • Best Practices For Portal Content Objects Transport System

    Hi All,
    I am going to write some documentation on the transport system for portal content objects, following best practices.
    Please help me out and send me some documents related to SAP best practices for transport of portal content objects.
    Thanks,
    Iqbal Ahmad
    Edited by: Iqbal Ahmad on Sep 15, 2008 6:31 PM

    Hi Iqbal,
    Hope you are doing good
    Well, have a look at these links.
    http://help.sap.com/saphelp_nw04/helpdata/en/91/4931eca9ef05449bfe272289d20b37/frameset.htm
    This document, gives a detailed description.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f570c7ee-0901-0010-269b-f743aefad0db
    Hope this helps.
    Cheers,
    Sandeep Tudumu

  • Best practice: Fade out content when slide advances automatically?

    I have a course that uses the playbar to navigate, with the slides advancing (for the most part) automatically. Do you recommend setting the last content on the slide to fade out? It feels jumpy without the fade, but the fade also looks very sudden.
    Please share your best practices!

    This one of those "it depends" questions.
    For the very first introductory module of a course I generally do not have the final slide fade out because I usually have a graphic on that final slide that shows what they should click next in the LMS interface to get to the next lesson.  So I want that image to remain visible until they shut down the browser window.  Doing this has avoided issues with newbie users getting lost in the LMS and not knowing how to proceed.
    However, this ends up being unnecessary for all subsequent modules, so for them I usually have the Project End option set to fade out.

  • Best practices for handling elements and symbols (including preloading)

    I am trying to learn Edge Animate; I have not seen enough animations to know how this is typically handled, and I have searched the forum without finding an answer.
    If you have many different elements and symbols for a project, what is the best practice for having them appear, disappear, etc. on the timeline? I ask this question not only from a performance based perspective, but also keeping in mind the idea of preloading. This is a 2 part question:
    Part 1: Using elements and symbols later in the timeline:
    Since artwork is always imported directly to the stage in an "always on" status, should we place a visibility OFF on every item until we need it?
    or should they be opacity 0 until I need them?
    or should they be set to visibility hidden until I need them?
    Which of these is the best option if you don't want the element / symbol visible until later in the timeline? Does it matter?
    Part 2: Impact on page loading
    Does the above question have any impact upon page loading speed
    or is this something handled in preloading?
    or do you need to make a special preloader?
    Thanks for the help.

    Hi, escargo-
    Good questions!
    Part 1: Using elements and symbols later in the timeline:
    Since artwork is always imported directly to the stage in an "always on" status, should we place a visibility OFF on every item until we need it?
    or should they be opacity 0 until I need them?
    or should they be set to visibility hidden until I need them?
    Which of these is the best option if you don't want the element / symbol visible until later in the timeline? Does it matter?
    I would recommend that you set your visibility to "off" instead of simply changing the opacity.  The reason I suggest this is that when your visibility is set to off, your object's hit points also disappear.  If you have any type of interactivity, having the object still visible but with 0 opacity will interfere with anything you have underneath it in the display order.
    Part 2: Impact on page loading
    Does the above question have any impact upon page loading speed
    or is this something handled in preloading?
    or do you need to make a special preloader?
    Thanks for the help.
    No, none of this has any impact on page load.  As you already noticed, all of the assets of your project will load before it displays.  If you want only part of your composition to load, you may want to do what we call a multi-composition project.  There's a sample of that in the Edge Animate API in the Advanced section, and plenty of posts in the forums (and one in the team's blog) explaining how to do that.
    http://www.adobe.com/devnet-docs/edgeanimate/api/current/index.html
    https://blogs.adobe.com/edge/
    Hope that helps!
    -Elaine
