Growing TFS Tfs_DefaultCollection database

Hi all,
Our TFS 2013 Tfs_DefaultCollection database keeps growing.
It is now 149 GB and counting.
When I look at the Disk Usage by Top Tables report, there are two very big tables:
dbo.tbl_Content is 54 GB and dbo.tbl_BuildInformation2 is 19 GB.
We started with a fresh environment a year ago.
We have about 65 projects.
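For reference, per-table sizes can also be checked with a T-SQL query along these lines (an approximate, generic sketch of what the SSMS report does):

-- Approximate per-table size, similar to the "Disk Usage by Top Tables" report
SELECT TOP (10)
       t.name AS table_name,
       SUM(a.total_pages) * 8 / 1024 AS total_mb
FROM   sys.tables t
JOIN   sys.indexes i           ON i.object_id = t.object_id
JOIN   sys.partitions p        ON p.object_id = i.object_id AND p.index_id = i.index_id
JOIN   sys.allocation_units a  ON a.container_id = p.partition_id
GROUP  BY t.name
ORDER  BY total_mb DESC;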
Setup:
Database server:
OS:  Windows 2012 R2
SQL:  Microsoft SQL Server Enterprise Edition 11.0.5058
Application Server:
OS:  Windows 2012 R2
TFS:  12.0.31101.0 (TFS 2013 Update 4)
Best Regards Fabian Stans

Hi Robin,   
Thanks for your reply.
The solution above removes completed builds and all of their related data from the TFS collection database. You can run the TFSBuild delete command to bulk-delete builds by date. After the command executes, the build information is removed from the tbl_Build and tbl_BuildInformation2 tables immediately, but you then need to wait for the related data to be removed from the tbl_ContainerItem, tbl_File, and tbl_Content tables. Please refer to the discussions in this post:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/c4d21955-4cf7-4c7c-aae0-6bf7997edce8/how-to-delete-build-drops-created-by-build-with-copy-build-output-to-the-server-setting?forum=tfsbuild.
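For illustration, a hedged sketch of that delete step (the collection URL, definition name, and date are placeholders; check TFSBuild help delete for the exact switches and date format in your TFS version):

REM Delete all builds of one definition up to 1 June 2014, including drops, test results, labels, and symbols
TFSBuild delete /collection:http://tfsserver:8080/tfs/DefaultCollection /buildDefinition:"\MyTeamProject\Nightly" /dateRange:~6/1/2014 /deleteOptions:All /noPrompt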
Besides the build data in the TFS collection database, you also need to destroy deleted source files to reduce the collection database size. There are usually three ways to destroy deleted source files (see the sketch after this list):
Execute the tf destroy command on its own; the destroyed files are then deleted later by a job run by the TFS Background Job Agent, and this job usually runs only once a day!
Execute tf destroy with the /startcleanup switch, which immediately kicks off the cleanup job.
Run the cleanup stored procedures manually. If your tbl_Content table is large: exec prc_DeleteUnusedContent. Please refer to the discussions in this similar post:
https://social.msdn.microsoft.com/forums/vstudio/en-US/661c8f0f-61aa-4ddb-a044-cba530278aaf/cleaning-up-tfs-tfsdefaultcollection-database.
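A hedged sketch of the second and third options (server path, collection URL, and database name are placeholders; run the stored procedure in the collection database outside business hours):

REM Destroy deleted items under a folder and kick off the content cleanup job right away
tf destroy "$/MyTeamProject/RetiredFolder" /startcleanup /collection:http://tfsserver:8080/tfs/DefaultCollection

-- Or trigger the cleanup yourself in the collection database (T-SQL)
USE Tfs_DefaultCollection;
EXEC prc_DeleteUnusedContent;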
After the destroyed entries have been removed from the tables, back up your database and then shrink it.

Similar Messages

  • Attaching TFS sharepoint database to Sharepoint 2010

    Hello,
    I'm in the middle of upgrading from TFS 2008 (on one server) to TFS 2013 (on another server).  I was able to get the TFS portion of the upgrade working, but I'm encountering some difficulty with the SharePoint database portion.  Due to the specs of the server, I will need to use SharePoint 2010 instead of SharePoint 2013.  I have a copy of the SharePoint databases on a SQL Server 2012 instance (on a different server than the actual SharePoint software).  Should I just proceed to attach the TFS SharePoint database to the SharePoint 2010 instance?  If so, isn't that done with an stsadm.exe command?  I have also yet to configure the SharePoint 2010 instance, in case there are some special considerations when dealing with the TFS SharePoint databases.
    Thanks!

    check this one:
    http://blogs.msdn.com/b/tfssetup/archive/2014/05/15/migrating-team-foundation-server-databases.aspx
    Please remember to mark your question as answered and vote helpful if this solves your problem. Thanks -WS MCITP (SharePoint 2010, 2013) Blog: http://wscheema.com/blog

  • Two entries found in TFS DB for one user, how do I know which one to remove?

    We had an issue with an account where TFS reported that there are multiple users with the same display name. I searched the database and found two entries with the same 'displayname'.
    The first weird thing is that the account names differ only in capitalization (exactly "SIvakhno" and "sivakhno"). I thought TFS (and Microsoft) is not case sensitive, right? Why does it consider the two accounts different?
    My next question is: if I want to remove one to solve the 'multiple user' issue, which entry should I remove from the database? They have different 'Sid', 'Id', 'AccountName' (as above), 'UniqueUserId' (1 and 0, respectively), 'LastSync', and 'SequenceId'; all the other fields in the table are the same.
    Thanks

    Hi Peter, 
    Thanks for your reply.
    There are two users in your TFS Server databases, one is SIvakhno and the other is sivakhno, and they have different SIDs and login names in your TFS database, right?
    What are the login names of these two users? Which one's login name is D_Name\sivakhno2, and which one's is D_Name\sivakhno?
    TFS recognizes a user by SID value; every user has a unique SID value in AD even if they share the same display name. If the SIvakhno and sivakhno users have different SID values in the TFS database, TFS will treat them as two different users. But you received the same SID value for both users after executing the tfssecurity /imx command line, which is the issue in this scenario; please use the two users' login name values in the tfssecurity /imx command line, then check the SID result again.
    You said the same display name shows for the D_Name\sivakhno2 and D_Name\sivakhno users after executing tfssecurity /imx; which display name appears in the result?
    According to the "D_Name\sivakhno2 False False" result, TFS Server thinks D_Name\sivakhno2 is not available. Please check the D_Name\sivakhno2 user in your AD, ensure it is available, check its SID value, and compare it with the value in the database.
    As for the concern that changing a user's display name will affect other AD groups, please contact AD experts for a better answer.
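    For reference, a hedged sketch of that lookup by login name (the n: prefix selects an account name and the collection URL is a placeholder; check tfssecurity /? for the exact syntax):

    REM Resolve identity information, including the SID, for each login name
    tfssecurity /imx "n:D_Name\sivakhno" /collection:http://tfsserver:8080/tfs/DefaultCollection
    tfssecurity /imx "n:D_Name\sivakhno2" /collection:http://tfsserver:8080/tfs/DefaultCollection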

  • Problem with ApEx after upgrade of database to 10.2.0.2

    I could have posted this in the general Database forum, since I am not sure whether the problem is directly related to the HTMLDB installation; perhaps that is only where the symptoms show. But here it is.
    One of the customers I am currently developing HTMLDB applications for, recently updated their Oracle database to version 10.2.0.2. The HTMLDB installation is 2.0 (not sure which version exactly, I can't check it at the moment as I'm at another customer right now). Everything seems to work fine (just like it did before the upgrade), but since about two weeks, the HTMLDB application that is already in production (and used quite a lot) suddenly started spawning 404 errors.
    The problem is that this behaviour can not always be reproduced. The errors appeared on last Friday as well, after which the database was restarted, which stopped them. This week, it ran without any problems on Monday and on Tuesday, until around 4pm. Then, the search functionality of one page (very basic query) and switching back and forth using the tabs would give 404 errors.
    The customer has been in touch with Oracle Support about this (Severity 1 TAR) but so far they have not been able to come up with anything that could lead to a solution. I will, in short, describe what information we DO have at the moment, and I hope that maybe one of you has experienced the same (or similar) problem, or could help me find where to look for a solution. At the moment, I am wondering whether it is the HTMLDB installation that became corrupted (and may require a reinstall) But, since it works fine at times, there may be something else that is causing these problems.
    Here we go (sorry for the long introduction ;))
    The problem:
    Sudden 404 errors on specific pages, in a working application, but also in the HTMLDB environment itself. Yesterday I tried to import a new version of the application - it would keep giving the error, and also simple tasks such as adding a new Tab would give me the error. In the development environment, which runs on a different database (previous version), everything works fine.
    If the errors show up, they will keep on coming, until the database is restarted. At one point (last week) they seemed to stop by themselves around lunch time, but that only occurred at one time, as far as I know.
    The error:
    The error message that is shown in the Apache log file is:
    [Fri Mar 17 15:44:10 2006] [error] [client 10.100.60.2] [ecid: 1142606650:10.100.60.26:4432:4836:63,0] mod_plsql: /pls/vta/wwv_flow.accept HTTP-404 ORA-06550: line 22, column 3:\nPLS-00306: wrong number or types of arguments in call to 'ACCEPT'\nORA-06550: line 22, column 3:\nPL/SQL: Statement ignored\n
    Additional notes:
    So far it is unclear what is causing these errors, but the customer has been in contact with Allround Automations (of PL/SQL Developer) who experienced something similar (a known bug afaik, but I have no documentation) with the reuse of a parsed representation of a cursor, which could cause access violations. This should however have been fixed in the patchset used to upgrade to 10.2.0.2.
    Also, the number of invalid database objects seems to fluctuate a lot. At times there may be 6, or as many as 23 or 24; then, when queried again a short time later, only 4. No idea whether this is related to anything else described here.
    In short:
    To me, it seems like something is influencing the database, causing HTMLDB and its applications to crash. It does not seem to come from HTMLDB itself, because it works fine the rest of the time, and the errors start showing up without any change to the DAD, the application, or the procedures used. But I might be wrong, of course.
    Any help would be greatly appreciated.

    John,
    Thanks for your help with this. So far everyone's pretty stumped.
    It seems to be a problem that grows in the database, like something is leaking. We've been able to track it down to seeing that it starts happening on one particular call on a page we use for lookups, though if it is left alone it seems to grow until it encompasses anything on the HTMLDB server. We've got a SR open with Oracle, just waiting on some movement.
    The keepalives may be unrelated, but it's worth a look while we're waiting for a break. Thanks for the suggestion.
    I am still hoping that either Oracle can find the earlier SR and use it as a starting point, or someone involved from earlier checks back in.
    Thanks again for your help, I'll let you know what happens.
    Thanks,
    Justin

  • Update SQL Server 2008 R2 CLUSTER To SP1 With TFS 2010

    Hi,
    I have a clustered SQL Server 2008 R2 instance configured to store Team Foundation Server 2010 databases.
    I want to update this cluster to SP1 to support SharePoint databases.
    So I want to know whether this update can impact my TFS databases.
    Thanks
    vote if you think useful

    Hello,
    In which way is this related to "Database Design", the topic of this forum?
    You can install SP1 without any harm to your TFS 2010 databases; we host ours on SQL Server 2012 and that works as well.
    But instead of the old SP1 you should go with the latest SP3 for SQL Server 2008 R2.
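    For reference, a quick generic T-SQL check of the patch level before and after applying the service pack:

    -- Shows version, service-pack level (e.g. SP1, SP3), and edition
    SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
           SERVERPROPERTY('ProductLevel')   AS ProductLevel,
           SERVERPROPERTY('Edition')        AS Edition;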
    Olaf Helper
    [ Blog] [ Xing] [ MVP]
    Thank you for your reply.
    I want to test it in a lab environment first.
    vote if you think useful

  • TFS 2010 to 2013

    I am confused about the steps needed when the SQL Server instance and Report Server are located on a server separate from TFS.
    We are using new hardware for those as well, to be compatible with the TFS 2013 changeover.

    Hi BossHogg20,
    Thanks for your post.
    You can follow the steps in this
    document to back up and restore the TFS 2010 databases to the new TFS 2013 SQL Server, then run the Upgrade Wizard to configure your TFS 2013 server. The Upgrade Wizard will ask you to provide the right SQL Server instance, Reporting Services instance, etc.

  • Fast growing object in tablespace

    Hi Experts
    Can anyone tell me how to find the fastest-growing objects in the database? The database is growing very fast and I want to know which objects those are.
    thanks in advance
    anu

    I would change this query to work object-wise, since the OP is interested in growing object size, not segment size:
    select owner, segment_name, segment_type, sum(bytes)/1024/1024 "SIZE(MB)", sum(blocks) "BLOCKS"
    from dba_segments
    where segment_type = 'TABLE' and owner = 'LOGS'
    group by owner, segment_name, segment_type;
    Regards.

  • SharePoint_Config database

    We recently switched the SharePoint_Config database to the Simple recovery model to control the transaction logs. However, the log file (.ldf) still gets larger than the actual database file (.mdf).
    I have researched some of the autogrowth/pre-sizing settings, and I just don't understand them for the log portion. Right now, they are set to the default SQL Server settings.
    Does anyone have any suggestions as to what I can do to keep the transaction log smaller than the database files, or at least relatively small?
    I have checked, and I do not have any active transactions at the moment.
    Thanks!

    It is still growing. Yesterday it was at 51 MB and now it is at 110 MB (which I know is not much compared to others, but we really rely on the server SharePoint is on), so I am guessing it grows after the database backup (at 10:15 PM), and then the growth doesn't change much throughout the day, since I am the only one using it for development.
    Yes, it may be completely normal, but I just don't want it to keep growing 50 MB+ every night if that is not normal.
    I no longer truncate the logs every night since the database is in Simple recovery. Should I still be doing that?
    I tried to find out through internet research whether I still needed to, but everyone was saying that Simple recovery takes care of the truncation once the file reaches a certain size. Is there a way to check at what size the automatic truncation will take place?
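    A hedged sketch of how to inspect the log and, if needed, shrink it once (the logical log file name is an assumption; look it up first with sys.database_files):

    -- How full is each database's transaction log?
    DBCC SQLPERF(LOGSPACE);

    -- Find the logical log file name, then shrink it once; under SIMPLE recovery the
    -- log truncates itself at checkpoints, so regular shrinking should not be needed.
    USE SharePoint_Config;
    SELECT name, type_desc FROM sys.database_files;
    DBCC SHRINKFILE (N'SharePoint_Config_log', 128);  -- target size in MB; name is an assumption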

  • How to check fastest growing tables in db2 via db2 command

    Hi Experts,
    You might find this a very silly question for this forum, but I still need it because of some requirements. I'm new to DB2 and Basis, so please bear with my inexperience.
    Our DB has been growing fast, at 400-500 MB per day for the last 15 days, when it should be no more than 100 MB per day. We want to find the fastest-growing tables, so I checked the history in the DB02 transaction and selected the entry field 'Growth'. But for the given date it shows nothing. We had the same issue about 3 months back, and at that time it displayed results with the same selection criteria.
    So I want a DB2 command I can execute to check the fastest-growing tables at the database level. Please help, guys. An early reply would be appreciated.
    PFA screenshot. The DB version is DB2 9.7 and the OS is Linux.
    Thanks & Regards,
    Prasad Deshpande

    Hi Gaurav/Sriram,
    Thanks for the reply..
    DBACOCKPIT is the best way to go, I agree, but although our data collector framework and everything else are configured properly, it is still not working. Nothing has changed in the last year, and for the same scenario 3 months ago it displayed the growth tables.
    Anyhow, I have raised this issue with SAP, so let SAP come back with a solution to this product error.
    In the meanwhile, experts, please reply if you know the DB-level command for getting the fastest-growing tables.
    I'll post SAP's reply here as soon as I get it, so that the community also gets the solution.
    Thanks & Regards,
    Prasad Deshpande
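    Until SAP replies, a hedged sketch at the DB2 level (assumes DB2 9.7's SYSIBMADM.ADMINTABINFO administrative view; snapshot the output periodically and diff the results to see which tables grow fastest):

    -- Largest tables by current physical size (sizes reported in KB)
    SELECT TABSCHEMA, TABNAME,
           DATA_OBJECT_P_SIZE + INDEX_OBJECT_P_SIZE + LONG_OBJECT_P_SIZE
             + LOB_OBJECT_P_SIZE + XML_OBJECT_P_SIZE AS TOTAL_P_SIZE_KB
    FROM SYSIBMADM.ADMINTABINFO
    ORDER BY TOTAL_P_SIZE_KB DESC
    FETCH FIRST 20 ROWS ONLY;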

  • User does not have permission to alter database

    I have an OperationsManager database on SQL 2012 whose original owner is domain\administrator.
    I can no longer log in as that user, but I need to enable autogrow on this database. When I log in as another user and run SQL Server Management Studio as administrator, it gives me an error when trying to enable autogrow:
    User

    Hi JonDoe,
    According to the error message and your description, we need to verify that the user account you are using to enable autogrow on the database has permission to modify database properties. You can give it ALTER rights on the database via the following T-SQL statement, or grant the sysadmin role to the new user in the login properties.
    GRANT ALTER ON DATABASE::OperationsManager TO [username];  -- substitute your new user account
    -- Note that the grantor must have either the permission itself with GRANT OPTION, or a higher permission that implies the permission being granted.
    For more information, you can review the following articles.
    Add Any User to SysAdmin Role:
    http://blog.sqlauthority.com/2008/12/27/sql-server-add-any-user-to-sysadmin-role-add-users-to-system-roles/
    SQL Server Database Growth and Auto growth Settings:
    https://www.simple-talk.com/sql/database-administration/sql-server-database-growth-and-autogrowth-settings/
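    Once the permission is in place, autogrow can also be enabled from T-SQL. A hedged sketch; the logical file name below is an assumption, so look it up first:

    -- Find the logical file names, then enable autogrow on the data file
    SELECT name, type_desc FROM OperationsManager.sys.database_files;
    ALTER DATABASE OperationsManager
    MODIFY FILE (NAME = N'MOM_DATA', FILEGROWTH = 512MB);  -- logical name is an assumption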
    Thanks,
    Sofiya Li
    TechNet Community Support

  • Can't find all Tfs_Warehouse fields in the Tfs_Analysis cube.

    I'm converting an SSRS report that currently hits the Tfs_Warehouse relational DB, and we want it to use the Tfs_Analysis OLAP cube instead.
    The report uses the dbo.dimIteration.StartDate field from the Tfs_Warehouse DB, but I can't find it in the cube...
    Does anyone know what it is called, or how to find it in the TFS cube?
    TFS 2012...

    Hi Carlos,
    According to your description, you want to use the Tfs_Analysis database but cannot find the field you need, right?
    SQL Server can handle both relational databases (using the database engine) and OLAP cubes (using SQL Server Analysis Services, SSAS). SSAS is optional in the SQL Server installation, and TFS uses it if you install the reporting capabilities. You can select the reporting tab in the TFS 2012 Admin Console to see whether it is installed and which server it runs on. From BIDS you simply select Analysis Services as the type of data source and then the server name hosting SSAS (probably SQL1P if you have it installed).
    But if you are not too familiar with SSAS and its query language MDX, you can always use the relational database Tfs_Warehouse and SQL as the data source for your custom reports.
    Reference:
    How to Rebuild TFS Analysis Database
    Since this issue is related to TFS, if it persists you can post it on the Team Foundation Server - Reporting & Warehouse forum at the link below.
    http://social.msdn.microsoft.com/Forums/vstudio/en-US/home?forum=tfsreporting
    Regards,
    Charlie Liao
    TechNet Community Support

  • How to deal with the using up space of /opt/oracle

    Hello, all
    We have Oracle 10g preinstalled on a Linux server (RH 9). Now the system reports that /opt/oracle is filling up. The alert says:
         Filesystem /opt/oracle has only 19% available space
    Because I didn't install the Oracle database myself, I didn't have a chance to choose a location for the filesystem. /opt/oracle is on /dev/sda6, which is only 7.1 GB. I want to know what kind of information is stored there. Should it keep growing? The database running on this Oracle home is a small one, and it shouldn't occupy so much space. By the way, I scheduled a backup to disk (another hard disk) every day. Is there any copy of the backup in /opt/oracle? How can I deal with it?
    Any advice is highly appreciated!
    Qian

    Typically, growth in your binary directory comes from $ORACLE_HOME/network/log/listener.log, any trace logs, sqlnet.log, or files that get accidentally dumped into $ORACLE_HOME/dbs.
    I would check there first.
    Also, try scanning for the largest files written or updated recently.
    # 30 biggest files modified in the last 14 days:
    find . -xdev -type f -mtime -14 -exec ls -l {} \; | sort -nk 5,5 | tail -30

  • Sequence out of sync

    Database version : Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    There are thousands of schemas in our database. The schemas are identical, and each of them has over 400 tables and 300 sequences. Most sequences have caching enabled, and the cache size is usually 20.
    As the number of schemas grows, we frequently run into primary key constraint violations. The values of those primary key columns come from different sequences. If we check the database when the problem occurs, sometimes we find that the nextval of those sequences is already smaller than the max value of the corresponding column, but sometimes the sequence numbers are still greater than the max values of the columns.
    The problem usually occurs when multiple processes are running schema change or data migration scripts concurrently. If we run one process at a time, the problem doesn't seem to happen.
    Has anyone experienced a similar issue, or is there an Oracle bug related to it?
    Thanks,

    What if we have multiple sessions running and each session is inserting records into a different schema? Example:
    Suppose I am running your application as the scott user.
    As you said, each schema has its own sequences; suppose scott.seq1.nextval is 123.
    I (scott) am going to insert into hr.tab1.col1.
    At the same time the hr user is also running the application, and suppose hr.seq1.nextval is also 123.
    He (HR) is also going to insert into hr.tab1.col1.
    Now it's a PK violation. Why? Because in scott's session I am using the current user's sequence object, scott.seq1, while the HR user is using hr.seq1, and both sequences' nextval is 123. So check in the application code whose sequence object is used for the insert into the PK column; in the above example you should use hr.seq1, because you are inserting into hr.tab1 (see the sketch below).
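    A minimal sketch of the difference, using the names from the example above:

    -- Wrong: an unqualified sequence resolves in the current schema, so when SCOTT
    -- runs this it uses scott.seq1 even though the target table belongs to HR
    INSERT INTO hr.tab1 (col1) VALUES (seq1.NEXTVAL);

    -- Right: qualify the sequence with the schema that owns the target table
    INSERT INTO hr.tab1 (col1) VALUES (hr.seq1.NEXTVAL);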
    Regards
    Girish Sharma

  • Capturing data every 2 mins - Streams

    I am using Oracle 11gR2 on RHEL
    We are planning to configure streams from 11gR2 OLTP environment to the 11gR2 DW. This would be schema level replication.
    Once the streams destination schema is populated with the data, we need to schedule the ETL processes to run every 2 minutes as we are dealing with real time reporting. The amount of data flow would be around 5000 records every minute.
    So my question is if I need to look for the most recent data, should I be using a TIMESTAMP column in the source tables or does Streams have any inbuilt columns which I can query to get the most recent data(recent 2 minutes of data)?
    I went through the Oracle Documentation which specifies that these attributes can be added using DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE
    row_id (row LCRs only)
    serial#
    session#
    thread#
    tx_name
    username
    But none of these is related to a timestamp, so does that mean the source tables should have a timestamp column if I need to query the most recent 2 minutes of data from the destination schema?
    I know CDC can help, as I can create subscribers and extend the window to retrieve the latest data, but I am not sure how this can be accomplished with Streams. Any thoughts?
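    For what it's worth, a hedged sketch of adding one of the listed attributes to the captured LCRs (the capture process name is a placeholder; note that none of these attributes is a row timestamp, which is why a TIMESTAMP column maintained on the source tables is the usual answer for "last N minutes" queries):

    -- Include an extra attribute (e.g. row_id) in every captured LCR
    BEGIN
      DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
        capture_name   => 'my_capture',   -- placeholder: your capture process name
        attribute_name => 'row_id',
        include        => TRUE);
    END;
    /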

    Sorry, I think I was not clear. I am talking about the destination tables here.
    I don't have control over the source tables in the OLTP database, as they already exist and will hold data for a few years.
    The DW is what I have control over, and the tables I was talking about are the Streams destination tables in the DW database.
    So what I meant is: the presentation layer holds all the reporting data, and if the Streams destination tables continue to grow, my database size would double due to duplicate data in the reporting schema and the Streams schema.
    The reason I can't purge the data in the Streams tables (in the DW database) is that the rows are not frozen; even after a year some rows might get updated, so if I purge them, I suppose Streams will complain and throw errors.
    In the case of CDC, since the DML activity is applied as inserts, updates, and deletes at the destination tables, even if the change table at the destination is truncated, I would still receive updates. For example:
    In CDC, if there are 100 rows in the source table which have been replicated to the destination, I would have 100 rows in the destination table with operation I (I stands for Insert).
    If I truncate the destination table rows, that obviously results in 0 rows in the destination table.
    Now if all 100 rows in the source table are updated, I would receive 200 rows in the destination table, 100 as UO (update old value) and 100 as UN (update new value). What I mean is that CDC won't complain if I truncate the destination tables as part of a purge operation for maintenance, so I can control the size of the table simply by purging the change (destination) table every day.
    In the case of Streams, will I be able to do similar purging operations (on the destination side), even on rows that might be updated later?
    Let me know if I am clear.

  • What is the impact if Stats Gather runs in business hours.

    Hi
    I have a production database in which a stats gather job runs every other day, as per the business requirement.
    The stats gather job starts at 7 PM. As the data in this database grows, the time taken by the job also increases.
    Nowadays the job takes more than 14 hours to analyze the production schema! It completes at 9 AM the next day, by the time business hours start.
    Below is the script we are using:
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => '${schema_name}', estimate_percent => '', method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE, options => 'GATHER');
    1. Is it recommended to run the stats gather during business hours?
    2. How can we tune this schema gather so that it takes less time?
    Appreciate the help.
    Thanks in advance.

    Hello,
    Gathering statistics is a heavy operation which should be done outside business hours.
    If you are on 10g or 11g, the statistics are normally gathered every night (after 10 PM) by the automatic maintenance job.
    Otherwise, you can also use the following option:
    options => 'GATHER STALE'
    That way, you speed up the analyze by gathering statistics only for tables with more than 10% data change. But the MONITORING option must be enabled first on each table.
    You may find some notes about the GATHER STALE option here:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2453570168793
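    Putting it together, a hedged variant of the original call (GATHER STALE skips tables with less than 10% change, and AUTO_SAMPLE_SIZE lets Oracle choose the sample size instead of a full compute):

    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => '${schema_name}',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
        cascade          => TRUE,
        options          => 'GATHER STALE');
    END;
    /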
    Hope this helps.
    Best regards,
    Jean-Valentin
