More than 1 TB of data in one site collection

Hi All,
Our client has a requirement where we need to upload more than 1 TB of documents into one of the subsites.
How do you suggest we achieve this without affecting performance, while staying within the content database limits?

It needs to follow specific performance and usage guidelines:
http://www.wictorwilen.se/Post/The-SharePoint-2010-4TB-content-database-limit-fine-prints-just-a-warning.aspx
http://technet.microsoft.com/en-us/library/cc262787.aspx#ContentDB
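Not a definitive recipe, but a minimal PowerShell sketch of the usual approach from the SharePoint Management Shell, assuming the large site collection gets its own content database (all database names and URLs below are hypothetical):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# A dedicated content database keeps the heavy site collection's growth
# from pushing other sites in a shared database past the supported limits.
New-SPContentDatabase -Name "WSS_Content_LargeDocs" -WebApplication "http://webapp"

# Move the site collection that will hold the >1 TB of documents into it
# (run iisreset afterwards, as Move-SPSite requires).
Move-SPSite "http://webapp/sites/largedocs" -DestinationDatabase "WSS_Content_LargeDocs"

# Track database size against the limits in the links above.
Get-SPContentDatabase | Select-Object Name,
    @{Name = "SizeGB"; Expression = { [math]::Round($_.DiskSizeRequired / 1GB, 1) }}

Remote BLOB Storage (RBS) is the other common lever for very large document stores, with its own trade-offs; the linked guidance covers the size thresholds that matter either way.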
Trevor Seward, MCC
Follow or contact me at...
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

Similar Messages

  • JDBC sender adapter polling data more than once within one polling interval

    Hi,
Our sender communication channel is polling data twice in one polling interval, hence triggering the corresponding BPM twice.
The polling interval for the channel is set to 30 minutes.
We don't have an update query and use <test> instead. Currently there is no provision to use an update query in the source system.
Intermittently the polling happens within an interval of a few milliseconds, which makes us doubt whether using an update query would solve our problem.
Additionally, in the BPM we have one channel which deletes the data from the source system after the BPM completes processing, but the second BPM is triggered before the deletion step executes in the first BPM.
Please advise how we can stop the channel from polling data twice.
    Thanks,
    Merrilly.

    Hi Merrilly,
Please try to set an advanced mode option in the table (for sender channel configurations) as follows:
The key name is "clusterSyncMode" and its values are either "scheduler" or "none" (both without quotes, case-sensitive).
The default value is "scheduler", which prevents two messages from being generated even across two server nodes. This holds even if you do not add anything to the advanced options table.
More information about this parameter of the file adapter, which is the same for the JDBC adapter, is in:
#801926 - XI 3.0 File Adapter: Additional Parameters
    Regards
    SK

  • Managed Metadata values scrambled when copying or moving documents from one Site collection to another.

    Hi
    What works: Copy or move a document library containing one or more MM columns *within* a Site Collection = all column managed metadata replicated accurately.
    What doesn't work: Copy or move a document library containing one or more MM columns *from one Site Collection to another* within the same Webapp. The issue is that some but not all MM values are totally wrong. By totally wrong I mean that the incorrect
    terms are associated with a TermSet that is not even referenced in this DL!
The same result for all methods tried:
1) Save the DL as a template (no contents) and then copy or move the documents in
2) Programmatically do the copy
3) Export the list via Central Administration and then re-import it using PowerShell
    This thread is very similar but his workaround has not helped in this instance:
    https://social.msdn.microsoft.com/Forums/office/en-US/6aca7b7c-97cb-4f1f-99f1-5e1fa0b1e3ed/content-type-hub-custom-library-template-managed-metadata-lostscrambled-when-moving-files?forum=sharepointdevelopment#65b52342-408f-47b4-a9f8-5d0b952acc12
    Has anyone else experienced this and found the cause/solution? Or can anyone do this without the issues that I have described?
    Thanks

    Hi Nuno,
I've got to the root of the issue, but in answer to your question first: the MM column is attached to a global content type - there is nothing local.
The fact that some are OK and others are not led me down the route of checking the columns at item level. Scripting this did not help - all rows returned are what I would expect, i.e. the same MMColName with the expected value (as appears on the source DL).
    On checking SharePoint manager however I have found what looks to be causing the issue.
    When checking the properties (ie SiteCollection/Lists/<ListName>/items/<itemName>/Properties) of the 'good' documents against the 'bad' documents I find the following:
Good Docs: the MMColName is displayed as expected at the item level:

Field Name          Internal Name             Item Properties     Ok?
Partnership Type    Partnership_x0020_Type    Partnership Type    Y

Bad Docs: the MMColName is NOT displayed as expected at the item level. Instead the internal name is displayed:

Field Name       Internal Name                  Item Properties                                    Ok?
Register Type    Type_x0020_of_x0020_Licence    Type_x0020_of_x0020_Licence                        N
Meeting Type     Meeting_x0020_Type             Meeting Type & Meeting_x0020_Type (both listed)    N
I should add that each of these three examples is taken from a different DL - hence the different fields.
So this looks to be a case of corruption - and unless we can think of any other explanation, we will have to rebuild the DLs while trying to maintain the metadata!
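Before rebuilding, it may be worth scoping how many items are affected. A hedged PowerShell sketch (the URL, library name, and column name are hypothetical) that compares each item's resolved field value with the raw property-bag entry SharePoint Manager exposed:

$web = Get-SPWeb "http://webapp/sites/target"
$list = $web.Lists["Documents"]                 # one of the affected DLs
$field = $list.Fields["Partnership Type"]       # the MM column to check

foreach ($item in $list.Items) {
    # The value SharePoint resolves for the field vs. the raw item property.
    $fieldValue = $item[$field.Id]
    $propValue  = $item.Properties[$field.InternalName]
    "{0}: field='{1}' property='{2}'" -f $item.Name, $fieldValue, $propValue
}
$web.Dispose()

Items where the two values disagree (or where both the display and internal names appear as properties) would be the ones needing metadata repair.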

How do you save data to more than one table using the NetBeans IDE with JSF?

    Hi,
    I am new to JSF.
I save / delete / update / create a master table using POJOs (Plain Old Java Objects), an Oracle database, and a TopLink persistence unit.
How do you save data to more than one table using the NetBeans IDE with JSF (I am using POJOs)?
Could you also recommend a reference book for JSF?
    Thanks in advance.
    regards,
    N.P.Siva

SivaNellai wrote:
"I am new to JSF. So, I am using NetBeans IDE 6.1 from Sun Microsystems. It is free software."
No, you don't drag and drop if you're new to JSF. Switch to source code mode and write the code manually, with the IDE's help only for speed.
"So, please guide me to reference books and articles. I need a basic understanding of JSF and the NetBeans IDE."
JSF: The Complete Reference (http://www.amazon.com/JavaServer-Faces-Complete-Reference/dp/0072262400) is a good book. The JSF specification document (http://jcp.org/aboutJava/communityprocess/final/jsr252/index.html) is also good reading to understand what JSF is and how it works. There are also the javadocs and tlddocs of Sun's JSF implementation, Mojarra.

iCal crashes when I go to a date more than one month forward.

iCal crashes when I go to a date more than one month forward.

The extension called Stop Autoplay isn't available from AMO any longer; it was probably pulled by the original author.
http://kb.mozillazine.org/Muting_browser
It looks like it is available here:
http://mac.softpedia.com/progDownload/Stop-Autoplay-Download-31767.html

How to extract audit log data from every document library in a site collection using PowerShell?

    Hi All,
I have n document libraries in one site collection.
My query is: how do I extract audit log data from every document library in a site collection using PowerShell?
Please suggest a solution as soon as possible.

Hi inguru,
SharePoint audit log data is aggregated at the site collection level, so there is no easy way to extract audit log data for a single document library.
As a workaround, you can export the site collection audit log data to a CSV file using PowerShell, then filter the document library audit log data in Excel; see the sketch after the link below.
    More information:
    SharePoint 2007 \ 2010 – PowerShell script to get SharePoint audit information:
    http://sharepointhivehints.wordpress.com/2014/04/30/sharepoint-2007-2010-powershell-script-to-get-sharepoint-audit-information/
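A minimal sketch of that workaround (the site URL and output path are hypothetical), run from the SharePoint Management Shell:

# Dump the whole site collection audit log to CSV.
$site = Get-SPSite "http://webapp/sites/yoursite"
$query = New-Object Microsoft.SharePoint.SPAuditQuery($site)
$entries = $site.Audit.GetEntries($query)

$entries | Select-Object Occurred, UserId, Event, DocLocation |
    Export-Csv -Path "C:\temp\SiteAudit.csv" -NoTypeInformation
$site.Dispose()

Filtering the DocLocation column in Excel then isolates each document library's entries.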
    Best Regards
    Zhengyu Guo
    TechNet Community Support

  • Search is not working for only one site collection

    Hi All,
I have an issue where users search for something on a site collection and nothing comes back. Search is working at the web application level, but not on this one site collection.
I checked the crawl log and found the below error for this site collection:

    Hi Aditya,
    From the error message, there might be several reasons:
    Configure search time-out settings (Search Server 2010):
    http://technet.microsoft.com/en-us/library/ee808892.aspx
Please check whether the user accounts created under the home group could push the ACL past its limit, which can produce this error message:
    http://www.sweendog.net/blogengine/post/2012/02/03/The-Filter-Daemon-has-Timed-Out.aspx & http://sharepoint.stackexchange.com/questions/26755/sharepoint-2010-search-server-not-crawling-content-due-to-filter-daemon-timeout
    Make sure the search service account has access to SearchIndex share:
    http://www.sharepointsecurity.com/sharepoint/sharepoint-development/fixing-the-filter-daemon-did-not-respond-within-the-timeout-limit-error/
If the links above don't help, please collect more error messages from the ULS log for troubleshooting.
    Regards,
    Rebecca Tu
    TechNet Community Support
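Both the time-out change and a crawl-log query can also be scripted. A hedged sketch (the site URL is hypothetical, and the GetCrawledUrls arguments are assumptions to adjust for your farm):

# Raise the search connection/acknowledgement time-outs (in seconds;
# 120 is only an example value).
Set-SPEnterpriseSearchService -ConnectionTimeout 120 -AcknowledgementTimeout 120

# Pull recent crawl-log entries for just the affected site collection:
# return rows (not a count), up to 100, matching the URL as a prefix,
# with the content-source and error filters left wide open (-1).
$ssa = Get-SPEnterpriseSearchServiceApplication
$log = New-Object Microsoft.Office.Server.Search.Administration.CrawlLog($ssa)
$log.GetCrawledUrls($false, 100, "http://webapp/sites/problemsite", $true,
    -1, -1, -1, [datetime]::MinValue, [datetime]::MaxValue)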

Multiple discussion boards within one site collection, each with their own email?

We use several list servs, and these have become an issue: the discussion data is effectively lost because you cannot collectively house and search the information and attachments.
    I have a few questions that maybe you could help in my research:
1) Is SharePoint capable of having multiple discussion boards in one site collection, each with its own email address, to replace the function of a list serv (so that, at a minimum, content can be organized, attachments are available, and replies can be seen/sent via email)?
2) From an attachment perspective, what is the best way to ensure virus blocking in the above scenario, if it can be done?
3) Is there OOTB functionality here, or is there a recommended third-party product? (I am not looking to build a custom solution if at all possible.)
    any thoughts or input here would be greatly appreciated!
    ~Kat

1) Yes. Discussion boards are just specially formatted lists, and each discussion board can have its own email address; a sketch of setting one up follows.
2) Blocked file types can be configured in Central Administration, and I believe SP will strip out any emails with blocked file types attached (not 100% on that, probably 95% sure).
3) All out of the box (assuming SP blocks the attachments as I assumed in #2).
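A hedged sketch of point 1, assuming incoming email is already enabled in Central Administration (the web URL, list title, and alias are hypothetical):

$web = Get-SPWeb "http://webapp/sites/discussions"

# Each discussion board is an SPList; giving it an EmailAlias makes that
# alias the local part of the board's incoming email address.
$board = $web.Lists["Team Discussion"]
$board.EmailAlias = "team-discussion"
$board.Update()
$web.Dispose()

Repeating this per board gives each its own address while keeping all the content searchable in one site collection.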

  • More than 1 SMP per site

Can we have more than one SMP per site? Suppose I have one primary site with two secondary sites; is it possible to have more than one SMP for USMT per site? One of our customers has around 40 distribution points and they expect to have the same DPs as SMPs as well,
i.e. 40 SMPs. Any pointers will be appreciated. Thanks
    regards,

To add on to Jason's reply, there has been a little change since R2 (at least according to the documentation):
    Prior to System Center 2012 R2 Configuration Manager, all site system roles at a secondary site must be located on the site server computer. The only exception is the distribution point. Secondary sites support installing distribution points on the site
    server computer and on remote computers.
Beginning with System Center 2012 R2 Configuration Manager, the state migration point can also be installed on the site server computer or on a remote computer, and can be co-located with a distribution point.
Reference:
    http://technet.microsoft.com/en-us/library/gg712282.aspx#Plan_Where_to_Install_Sites
    My Blog: http://www.petervanderwoude.nl/
    Follow me on twitter: pvanderwoude

  • Mapping one site collection to one record center

Hi,
I would like to map one site collection to one record center.
Is it possible?
Assume I have 10 site collections; if I create 10 record center site collections, how can I map those 10 site collections to the 10 record centers?
Help is appreciated!

    Hi  Benjamin,
Yes, you can map one site collection to one record center. Here are the steps you can refer to (a PowerShell sketch follows the link below):
    Configure Send to locations for your Records Centers.
    Allow for sending outside the site.
    Create Content Organizer Rule to send to a record center for your Site  Collections.
    For more information, you can refer to the blog:
    http://blogs.msdn.com/b/mcsnoiwb/archive/2010/03/04/route-records-to-other-site-collections.aspx
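A hedged sketch of the first step (the connection name and URLs are hypothetical; repeat once per record center):

$webApp = Get-SPWebApplication "http://webapp"

# One "Send To" connection pointing at a record center's Official File
# web service; a Content Organizer rule in the source site collection
# can then route documents to this connection.
$sendTo = New-Object Microsoft.SharePoint.SPOfficialFileHost
$sendTo.OfficialFileName = "Records Center 1"
$sendTo.OfficialFileUrl  = "http://webapp/sites/rc1/_vti_bin/officialfile.asmx"
$sendTo.ShowOnSendToMenu = $true
$webApp.OfficialFileHosts.Add($sendTo)
$webApp.Update()

With 10 such connections defined, each source site collection's Content Organizer rule simply targets its matching record center.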
    Best Regards,
    Eric
    Eric Tao
    TechNet Community Support

How to transfer more than 255 characters from an Excel sheet to an internal table

    Hello Experts,
I have a requirement where a text field in the Excel sheet holds more than 255 characters and needs to go into a text element. To do that I need to transfer the Excel sheet data to an internal table where one of the fields is longer than 255 characters.
The standard function module works only up to 255 characters. Can you help me find another way to get more than 255 characters from the Excel sheet into the internal table?
    Thanks in Advance.
    BR,
    RaJ.

Using .xls, it is not possible to transfer more than 255 characters from an Excel sheet. However, if the Excel sheet is saved in comma-delimited or tab-delimited format, then data of more than 255 characters can be transferred using the GUI_UPLOAD function module.
The file name should be: .csv (comma-delimited format) or .txt (tab-delimited format).
The file type still remains 'ASC' when calling function module GUI_UPLOAD.
Also, in the internal table, declare the field type as STRING or LCHAR.
Eg:
TYPES: BEGIN OF ty_file,
         col_a TYPE string,
       END OF ty_file.

DATA: i_file TYPE STANDARD TABLE OF ty_file WITH HEADER LINE.

CALL FUNCTION 'GUI_UPLOAD'
  EXPORTING
    filename = 'C:\test.csv'
    filetype = 'ASC'
  TABLES
    data_tab = i_file.

  • Archiving same data more than once due to overlapping variant values

    Hi all,
I had accidentally run two archiving jobs on the same data. For instance, job 1 archived company code IN (where the company code range ran from IN00 to ZZ00), which was the unwanted job. The second archive job archived data from IN99 to INZZ (not the whole IN company code range).
These two jobs failed because the log was full (the data was too huge to be archived); however, when I expand the jobs in the failed SARA session, the archive files are up to 100 MB in size.
Below are some of the problems that can occur if we archive the same data more than once (which I found in my online search):
- some archiving objects require that data exist only once in the archive, so duplicate data can lead to erroneous results in the totals of archived data
- archiving the data again will affect the checksum; a checksum is normally computed before and after the archiving process, and its purpose is to validate that the newly created archive files contain the same contents as the original data
Could anyone advise me on how to overcome this multiple archiving of the same data? Apart from the above stated impact, what are the other problems of archiving the same data multiple times?
The failed archive sessions are currently in "Incomplete Archiving Session", and in one week's time they will be processed by the archive delete jobs and moved to "Completed Archive Session". I would highly appreciate it if anyone could help.
    Source of finding:
    http://help.sap.com/saphelp_nw73/helpdata/en/4d/8c7890910b154ee10000000a42189e/content.htm
    http://help.sap.com/saphelp_nwpi71/helpdata/en/8d/3e4fc6462a11d189000000e8323d3a/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/8d/3e4fc6462a11d189000000e8323d3a/content.htm

    Hello,
    There are several issues here.  In this case it seems pretty clear cut that you did not want the first variant to be executed.  Hopefully none of the deletions have taken place for this archive run.
In cases where you have overlapping selection criteria and some of the deletions have been processed, you can be in a very difficult situation. The best advice that I have would be to check your archive info structure CATALOG definition and make sure that both the archive file and the offset fields are set to DISPLAY fields and not KEY fields.
    If your file and offset are key fields then when you use the archive info structure you would pull up more than one copy of the archived document.
    Example:  FI document 12345 was archived and deleted in archive run 1 and archive run 2.
The search on the archive info structure, when the file and offset are key fields, would return two results:
    12345 from run 1
    12345 from run 2
    If the CATALOG has the file and offset as display only fields you would only return one result
    12345 from (whichever deletion file was processed first)
    The second deletion process would have a warning message in the job log that not all records were inserted.
    Please note that any direct access of the data archive file that bypasses the archive info structure and goes directly to the data archiving files would still show two documents and not a single document.
    Regards,
    Kirby Arnold

  • Is there a way for a site owner to copy a custom calendar that resides on a site on one site collection to a site on another site collection?

    A user contacted me about copying and moving information from an old site that is being retired to a new site.
    The most important info is the department calendar.
    I don't see a way to use the "Manage content and structure" functionality to copy or move a calendar from one site collection to another.
    And site owners do not generally have enough permissions to create templates into the gallery.
    Is this just not possible for him to do on his own?
    Thank you for your help!

    I gave up and created templates for the site owner and created the corresponding list for them on the new site.
    I was startled to discover that even though I said I wanted to include all content, the 4 columns that were of type "people picker" did not, for the most part, transfer.
    So the columns will have to be manually entered. But at least the rest of the effort is done.
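For an admin with farm access, a hedged Export-SPWeb / Import-SPWeb sketch for moving just the calendar list (the site URLs, list URL, and file path are hypothetical):

# Export the calendar list, versions and user security included.
Export-SPWeb -Identity "http://webapp/sites/oldsite" `
    -ItemUrl "/sites/oldsite/Lists/DepartmentCalendar" `
    -Path "C:\temp\DeptCalendar.cmp" -IncludeVersions All -IncludeUserSecurity

# Import it into a site in the other site collection.
Import-SPWeb -Identity "http://webapp/sites/newsite" `
    -Path "C:\temp\DeptCalendar.cmp" -IncludeUserSecurity

In line with the experience above, people picker columns may still need to be re-entered by hand after the import.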

Query taking a long time (more than 24 hours) to extract data

Hi,
This query has been extracting data for more than 24 hours. Please find the query and explain plan details below; even though indexes are available on the tables, it goes for FULL TABLE SCANs. Please suggest.
SQL> explain plan for
select a.account_id,
       round(a.account_balance,2) account_balance,
       nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
       to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
       to_char(nvl(i.payment_due_date,
                   to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY') due_date,
       ah.current_balance - ah.previous_balance amount,
       decode(ah.invoice_id, null, 'A', 'I') transaction_type
  from account a, account_history ah, invoice i
 where a.account_id = ah.account_id
   and a.account_type_id = 1000002
   and round(a.account_balance,2) > 0
   and (ah.invoice_id is not null or ah.adjustment_id is not null)
   and ah.current_balance > ah.previous_balance
   and ah.invoice_id = i.invoice_id(+)
   and a.account_balance > 0
 order by a.account_id, ah.effective_start_date desc;

Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
    | 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
    |* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
    |* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
    |* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
    |* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
    | 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
    Predicate Information (identified by operation id):
    2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
    3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
    ROUND("A"."ACCOUNT_BALANCE",2)>0)
    4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
    5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
    IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
    22 rows selected.
Index Details:
    SQL> select INDEX_OWNER,INDEX_NAME,COLUMN_NAME,TABLE_NAME from dba_ind_columns where
    2 table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
    INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
    OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
    OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
    OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
    32 rows selected.
    Regards,
    Bathula
    Oracle-DBA

    I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
    Also, you do not need two lines for these conditions:
    and round(a.account_balance, 2) > 0
    AND a.account_balance > 0
    You can just use: and a.account_balance >= 0.005
So the formatted query is:

select a.account_id,
           round(a.account_balance, 2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
                   'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
    where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.CURRENT_BALANCE > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       AND a.account_balance >= .005
   order by a.account_id, ah.effective_start_date desc;

You will probably want to select:
    1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
    2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
    3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
Try the query above after creating the following composite indexes. The order of the columns is important:

create index account_composite_i on account(account_type_id, account_balance, account_id);
    create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
create index invoice_composite_i on invoice(invoice_id, payment_due_date);

All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so that we should not need to touch the tables at all to satisfy the query.
    Try the query after creating these indexes.
A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:

alter session set workarea_size_policy = manual;
    alter session set sort_area_size = 2147483647;
    alter session set hash_area_size = 2147483647;

Alerts do not work on one site collection - alerts have an EventTypeBitmask of -1 - do you think this is the problem?

    I have a farm with approx 200 site collections in it. 
We recently moved sites from one farm to another. All other site collections' alerts work, yet I have one site collection that just does not send alerts.
    Things I have done to attempt to get them to work:
    1. Turn Alerts off and back on
    stsadm -o setproperty -url "sitecollectionurl" -pn Alerts-enabled -pv False
    stsadm -o setproperty -url "sitecollectionurl" -pn Alerts-enabled -pv True
    2. Update the alerts template
stsadm -o updatealerttemplates -url "sitecollectionurl" -f "c:\program files\path to \AlertTemplates.xml"
3. Update all the siteurl properties of all alerts to the correct URL
4. Update all the mobileurl properties of all alerts to the correct URL
5. Update all the Alert.Titles
    None of these have worked for me to get the alerts to start working
    The only thing I see different is this EventTypeBitMask property.
    Does anyone have any other suggestions to get alerts working on this site collection?
    Thank you for any help

    Hi,
First, check whether all users fail to receive alert emails in this site collection.
Check whether the "Immediate Alerts" timer job is enabled for your web application; a quick sketch for that follows.
Try restarting the SharePoint Timer service and doing an IISReset.
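A quick way to check and kick that timer job from the SharePoint Management Shell (the web application URL is hypothetical):

# Look up the immediate alerts job for the web application and see
# whether it is disabled and when it last ran.
$job = Get-SPTimerJob job-immediate-alerts -WebApplication "http://webapp"
$job | Select-Object Name, IsDisabled, LastRunTime

# Optionally run it now instead of waiting for the schedule.
Start-SPTimerJob $job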
    Here are two articles about troubleshooting SharePoint alerts not working, you can use as a reference:
    http://www.sharepointdiary.com/2012/02/sharepoint-alerts-not-working-troubleshooting-checklist.html#ixzz2aSvOz3hB
    http://blogs.technet.com/b/steve_chen/archive/2009/11/20/alerts-in-sharepoint-troubleshooting-moss-wss.aspx
    Besides, here is a similar post, you can take a look at:
    https://social.technet.microsoft.com/Forums/en-US/895a5612-9237-4d66-8cb3-37e7eefe8dbb/alerts-not-working-for-a-single-site-collection?forum=sharepointadminlegacy
    Best Regards,
    Lisa Chen
    TechNet Community Support
