Server Logon History table in master database has grown to 30GB. Is it a best practice to truncate this table?

I have noticed that the master database in our staging environment has grown to 30 GB, with 299,803,769 records in this table. I am not sure why, but this has happened over the last year. Just from yesterday to today (1/1/2014 to 1/2/2014) it gained 629,921 records. Is it good practice to truncate or delete data from this table?
Thanks for your inputs.

AFAIK that's not a system table.
David http://blogs.msdn.com/b/dbrowne/
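A quick way to check that yourself is below; a minimal sketch, assuming the table really is named Server_Logon_History as in the thread title (adjust to the actual name).

USE master;
GO
-- Is the table a Microsoft-shipped system object (is_ms_shipped = 1),
-- or something a monitoring/auditing tool created in master?
SELECT name, type_desc, is_ms_shipped, create_date
FROM sys.objects
WHERE name = 'Server_Logon_History';  -- hypothetical name from the title

-- Size and row count for the same table
EXEC sp_spaceused 'Server_Logon_History';

If is_ms_shipped is 0, SQL Server itself does not write to that table; whether it is safe to purge is then a question for whatever tool populates it, not for SQL Server.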

Similar Messages

  • RSECLOG - how do I delete/truncate this table???

    Hello BW people,
    We turned on auth analysis 6 or 7 months ago and now we have a 260 GB table. I am trying to delete the logs manually through RSECPROT, but it is taking forever to delete one log (approx. 30 minutes).
    Is there either a program in SAP or a manual way of deleting or truncating this table?
    Any help would be very much appreciated.

    Thanks Jorge,
    this has worked perfectly.
    I thought there might be a sap program to do this, but this worked.
    thx,
    Erik

  • SQL Server Compact 3.5 Check if database has changed

    Hi,
       Is there an easy way to determine if any table in an SQL Server CE 3.5 database has changed? (Any insert, update or delete in any table).
    Thanks
    Paul.
    Paul Wainwright

     I am currently using the change tracking mechanism, as the application synchronizes databases. I was considering using a FileSystemWatcher to check the LastWrite time of the database file, but wondered if something global in an SDF database could be used instead.
    Hi Paul,
    Based on my research, besides the change tracking mechanism and a FileSystemWatcher, the ADO.NET SqlDependency mechanism can also help detect SQL Server Compact database changes if you're using client-side ADO.NET with C# or VB.NET.
    A SqlDependency object can be associated with a SqlCommand in order to detect when query results differ from those originally retrieved. You can also assign a delegate to the OnChange event, which will fire when the results change for an associated command.
    For more details about SqlDependency, please review the following link.
    Detecting Changes with SqlDependency:
    http://msdn.microsoft.com/en-us/library/62xk7953.aspx
    Thanks,
    Lydia Zhang

  • Could not create table on 'master' database

    Hello,
         I am new to SQL Azure. When I tried to create a table in the master DB, I got the following error:
    Msg 262, Level 14, State 1, Line 1
    CREATE TABLE permission denied in database 'master'.
    I tried to change my role to 'db_owner' but received the following error:
    Msg 15151, Level 16, State 1, Line 1
    Cannot alter the role 'db_owner', because it does not exist or you do not have permission.
    I logged into the master DB using the credentials I created while creating the DB on Azure. This is supposed to be 'sa'.
    My question is: does SQL Azure allow creating objects (tables, stored procedures, triggers) in the master DB? If yes, what am I doing wrong? If no, why?
    My guess is that SQL Azure is no less capable than the on-premises DB, so it should allow object creation in the master DB.
    I have found a similar question on forum but I did not get the answer. Here is the link --> https://social.msdn.microsoft.com/Forums/azure/en-US/cbff1bd2-358f-4106-b238-ec369941099d/master-database-cannot-alter-the-role-dbowner.
    FYI, I have gone through the following resources but haven't found a solution to my problem.
    1) https://iainhunter.wordpress.com/2011/06/20/copying-databases-and-creating-users-and-logins-for-sql-azure/
    2) http://www.c-sharpcorner.com/forums/thread/126589/create-database-permission-denied-in-database-master.aspx
    3) http://stackoverflow.com/questions/20656943/how-do-you-change-the-owner-of-an-azure-database
    4) https://msdn.microsoft.com/en-us/library/azure/ee336235.aspx
    Thanks in advance.

    My question is: does SQL Azure allow creating objects (tables, stored procedures, triggers) in the master DB? If yes, what am I doing wrong? If no, why?
    My guess is that SQL Azure is no less capable than the on-premises DB, so it should allow object creation in the master DB.
    For the answer to the question, you can read Joseph's reply. Azure SQL is, so far, less capable than the on-premises DB, apart from its cloud-based advantage; there are many limitations, such as fewer supported T-SQL features.
    Azure SQL Database Transact-SQL Reference
    More links for your reference:
    Preview features
    Managing Databases and Logins in Azure SQL Database
    Eric Zhang
    TechNet Community Support
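    As a concrete illustration of the limitation: in Azure SQL Database, master accepts server-level security objects but not user tables. A minimal sketch (the names are made up):

    -- Connected to the master database of the Azure SQL server:
    CREATE LOGIN demo_login WITH PASSWORD = 'Str0ng!Passw0rd1';  -- allowed
    CREATE USER demo_user FOR LOGIN demo_login;                  -- allowed

    CREATE TABLE dbo.Demo (Id INT);  -- fails with Msg 262, as in the question

    User tables, stored procedures and triggers belong in a user database, which is also where you would grant roles such as db_owner.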

  • Best practice for replicating Partitioned table

    Hi SQL Gurus,
    Requesting your help on the design consideration for replicating a partitioned table.
    1. Four partitioned tables (one master table with foreign key constraints to three child tables), partitioned monthly by YYYYMM
    2. One table has an XML column in it
    3. A monthly partition switch removes old data; since there are foreign key constraints, they are disabled until the switch is complete
    4. One month of partitioned data is 60 GB
    Having said the above, I want to create a copy of the same tables on a different server.
    I can think of:
    1. Transactional replication, but I am worried about the XML column, the snapshot size, and whether the ALTER ... SWITCH will be replayed as a switch on the subscriber or as row-by-row deletes.
    2. Log shipping with standby every 15 minutes, but that covers the entire database, and I have another monthly partitioned table worth 250 GB.
    3. Replicating the partitioned table as non-partitioned; in that case, how would the ALTER ... SWITCH work? Is it possible to ignore deletes when setting up the replication?
    4. SSIS or a stored-procedure method of moving data on a daily basis.
    5. Backup and restore on a daily basis, but this will not work once the source partition is removed.
    Ganesh

    Please refer to:
    http://msdn.microsoft.com/en-us/library/cc280940.aspx
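    For context, the monthly purge described in the question is normally a metadata-only operation on the publisher; a minimal sketch with hypothetical names is below. How it behaves under transactional replication depends on publication settings (on SQL Server 2008+ the @allow_partition_switch publication property governs whether the switch is permitted at all), so treat this as a sketch rather than a replication recipe.

    -- Switch the oldest monthly partition into an empty staging table
    -- with the same structure, on the same filegroup
    ALTER TABLE dbo.MasterFact SWITCH PARTITION 1 TO dbo.MasterFact_Staging;

    -- Remove the staged rows without row-by-row logging
    TRUNCATE TABLE dbo.MasterFact_Staging;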

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data, they are insistent that I save and provide scripts for every single commit, in the proper order, necessary to both build the tables and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data, they are insistent that I save and provide scripts for every single commit, in the proper order, necessary to both build the tables and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, unsecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse system generated by an ETL process might be exempt; but the process that creates that data is not exempt - that process and ultimately the data - must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business users, IT and even legal. The deployment documents always include recovery steps so that if something goes wrong or the deployment can't proceed, there is a documented procedure for restoring the system to a valid working state.
    The deployments I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party assists in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say, it simply cannot be that easy, for one simple reason: adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention what changes are being made to actually USE what you are adding.
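    To make the request concrete: the artifact the DBAs are asking for is a rerunnable, version-controlled script along the lines of the sketch below (the table and its data are hypothetical), accompanied by a matching rollback script.

    IF OBJECT_ID('dbo.LookupStatus') IS NULL
    BEGIN
        CREATE TABLE dbo.LookupStatus (
            StatusCode  CHAR(1)     NOT NULL PRIMARY KEY,
            Description VARCHAR(50) NOT NULL
        );
    END;

    -- Seed data, written so the script can be run more than once safely
    MERGE dbo.LookupStatus AS t
    USING (VALUES ('A', 'Active'), ('I', 'Inactive')) AS s (StatusCode, Description)
       ON t.StatusCode = s.StatusCode
    WHEN NOT MATCHED THEN
        INSERT (StatusCode, Description) VALUES (s.StatusCode, s.Description);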

  • Best Practice : how 2 fetch tables, views, ... names and schema

    hi,
    I am looking for the best practice for getting the catalog of a database.
    I have seen that I can run SELECTs against system tables (or views) such as DBA_TABLES, DBA_VIEWS, or DBA_CATALOG, but is that the best way to grab this information?
    (I ask this question because it seems strange to me to get table names using a simple SELECT, but column info using a specialized function, OCIDescribeAny(); that does not look like a coherent API...)
    thanks for your advice
    cd

    In the same vein, why use OCIDescribeAny() instead of an appropriate SELECT against DBA_TAB_COLUMNS?
    cd
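    For what it's worth, the dictionary views cover both needs; a minimal sketch (the schema and table names are made up, and the ALL_/USER_ variants work without DBA privileges):

    -- Table names in a schema
    SELECT owner, table_name
    FROM dba_tables
    WHERE owner = 'SCOTT';

    -- Column metadata for one table, no OCIDescribeAny() required
    SELECT column_name, data_type, data_length, nullable
    FROM dba_tab_columns
    WHERE owner = 'SCOTT'
    AND table_name = 'EMP'
    ORDER BY column_id;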

  • Best practice to load a table that is a relationship between 2 parent tabs

    I need to know the best way to load a table that is a relationship between two master tables, i.e. the table that relates them. It is an append procedure, not a replace procedure.
    Should it be SQL*Loader (sqlldr)?
    What tool should be used?
    Thanks a lot.
    @}--->----->>----------
    Paola

    Yes, it is inside a file, a kind of .txt; it is ASCII.
    I want to load it without having any problems with constraints.
    But the point is that it should be fast enough to avoid performance problems.
    Thank you.
    Paola
    @{--->----->>----------
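    SQL*Loader handles ASCII files well, but note that a direct-path load (DIRECT=TRUE) disables foreign key constraints during the load. If you want the constraints validated as rows are appended, one alternative is an external table read through plain SQL; a sketch with hypothetical names:

    -- One-time setup: expose the ASCII file as an external table
    CREATE TABLE rel_ext (
      parent1_id NUMBER,
      parent2_id NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir  -- hypothetical directory object
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('relationship.txt')
    );

    -- Append into the real relationship table; FK constraints are
    -- validated row by row on the way in
    INSERT INTO relationship_table (parent1_id, parent2_id)
    SELECT parent1_id, parent2_id FROM rel_ext;
    COMMIT;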

  • JDeveloper 11.1.1.3: Best Practice for Checkboxes in Table

    Hi there,
    I'm having problems with checkboxes inside the table component.
    Can someone please fill me in as to what the best practice is for using checkboxes inside a table?
    In the database, we are storing values Y and N.
    Thanks,
    Mark

    Hi Mark,
    I suppose you are talking about ADF Faces applications. If so, then I have two preferred approaches, tested and used in real practice:
    *1) First approach: Create a simple converter, define it in the faces-config.xml and set it in the <af:selectBooleanCheckbox> tags' "converter" attribute, for example:*
    package mypackage;
    import javax.faces.application.FacesMessage;
    import javax.faces.component.UIComponent;
    import javax.faces.context.FacesContext;
    import javax.faces.convert.Converter;
    import javax.faces.convert.ConverterException;
    public class BooleanYNConverter implements Converter {
      public BooleanYNConverter() {
      }

      public Object getAsObject(FacesContext facesContext, UIComponent uiComponent, String string) {
        if (string == null) return null;
        String s = string.trim();
        if (s.length() == 0) return null;
        if (s.equalsIgnoreCase("true")) return "Y";
        if (s.equalsIgnoreCase("false")) return "N";
        FacesMessage errorMessage = new FacesMessage(FacesMessage.SEVERITY_ERROR,
            "Cannot convert " + string + " to Y/N. It must be either true or false",
            "Cannot convert " + string + " to Y/N. It must be either true or false");
        throw new ConverterException(errorMessage);
      }

      public String getAsString(FacesContext facesContext, UIComponent uiComponent, Object object) {
        if (object == null) return "";
        if (object.equals("Y")) return "true";
        if (object.equals("N")) return "false";
        FacesMessage errorMessage = new FacesMessage(FacesMessage.SEVERITY_ERROR,
            "Cannot convert " + object + " to true/false. It must be either Y or N",
            "Cannot convert " + object + " to true/false. It must be either Y or N");
        throw new ConverterException(errorMessage);
      }
    }

    In faces-config.xml:
      <converter>
        <converter-id>BooleanYNConverter</converter-id>
        <converter-class>mypackage.BooleanYNConverter</converter-class>
      </converter>

    In JSF page:
    <af:selectBooleanCheckbox ... converter="BooleanYNConverter"/>
    N.B. If you use this approach, the ViewObject attribute's Control Type should be set to "Default" instead of "Checkbox" (see the attribute's Control Hints section in the dialog box)!
    *2) Second approach: In the PageDef, define a button binding for the Y/N values and map the corresponding item in the table binding to this button binding. In this way you remap the VO attribute's Y/N value to true/false, as the checkbox component expects. Neither converters nor additional configuration are necessary, but you have to repeat this for each checkbox field:*
    <?xml version="1.0" encoding="UTF-8" ?>
    <pageDefinition xmlns="http://xmlns.oracle.com/adfm/uimodel" version="11.1.1.56.60" id="TestPagePageDef" Package="view.pageDefs">
      <parameters/>
      <executables>
        <variableIterator id="variables"/>
        <iterator Binds="DeptViewRO" RangeSize="25" DataControl="AppModuleDataControl" id="DeptViewROIterator"/>
      </executables>
      <bindings>
        <tree IterBinding="DeptViewROIterator" id="DeptViewRO">
          <nodeDefinition DefName="model.DeptViewRO" Name="DeptViewRO0">
            <AttrNames>
              <Item Value="DeptID"/>
              <Item Value="DeptName"/>
              <Item Value="Flag" Binds="MyFlag"/>
            </AttrNames>
          </nodeDefinition>
        </tree>
        <button IterBinding="DeptViewROIterator" StaticList="true" id="MyFlag">
          <AttrNames>
            <Item Value="Flag"/>
          </AttrNames>
          <ValueList>
            <Item Value="Y"/>
            <Item Value="N"/>
          </ValueList>
        </button>
      </bindings>
    </pageDefinition>

    In the sample above the target VO attribute is called Flag. Have a look at the line <Item Value="Flag" Binds="MyFlag"/>. This line does the magic.
    Hope I've been a bit helpful.
    Dimitar
    Edited by: Dimitar Dimitrov on Nov 13, 2010 1:53 PM
    There was a little mistake: Instead of BooleanYNConverter I had written BooleanYNCheckbox in the <af:selectBooleanCheckbox> tag.

  • Best practice for archiving FGA_LOG$ table?

    Hi all,
    I am running Oracle 11g Release 1, and have been asked to look at how best to archive the FGA_LOG$ table. We are required to store the logs for 3 years. I ran the following query:
    select owner, segment_name, segment_type, bytes/1024/1024 "MB" from dba_segments
    where tablespace_name = 'AUDIT_TBS' AND rownum <=100
    AND bytes/1024/1024 > 1 order by bytes desc;
    and it shows that the FGA_LOG$ table is currently at 84861 MB!
    Is my best course of action to:
    1. Install the patch necessary to access the DBMS_AUDIT_MGMT package
    2. Move the table to a different tablespace
    3. Use the Data Pump export utility
    4. Cleanup the table with the DBMS_AUDIT_MGMT package?
    Thanks,
    David

    To remove the rows from the table itself:
    delete from fga_log$ where timestamp# < sysdate - (365*3);
    If you want to keep the data outside the database, expdp to a file called this_years_fga_data.dmp, zip it up and keep it wherever you like, then:
    truncate table fga_log$;
    Repeat annually.
    * Edit, just to clarify: the timestamp# column may depend on the version.
    Edited by: deebee_eh on Apr 25, 2012 5:05 PM
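    If you do end up on the DBMS_AUDIT_MGMT route from the question, the purge looks roughly like the sketch below. This is only a sketch: it assumes the package is installed and DBMS_AUDIT_MGMT.INIT_CLEANUP has already been run once for the FGA trail, and the parameter names should be verified against your exact 11g patch level.

    BEGIN
      -- Mark everything older than 3 years as already archived...
      DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
        audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,
        last_archive_time => SYSTIMESTAMP - INTERVAL '3' YEAR);

      -- ...then purge only rows up to that timestamp
      DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
        audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,
        use_last_arch_timestamp => TRUE);
    END;
    /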

  • Best Practice to load multiple Tables

    Hi,
    I am trying to load around 9-10 tables in SQL Server; they all belong to a single project.
    The table loads are just SELECT * INTO the destination table, using some joins.
    Is it better to have one stored procedure to load all these tables, or individual stored procedures?
    All these tables are independent of each other.
    Appreciate your help!

    If you are using SQL Server 2008 or onwards, take a look at these links on minimal logging:
    http://blogs.msdn.com/sqlserverstorageengine/archive/2008/03/23/minimal-logging-changes-in-sql-server-2008-part-1.aspx
    http://blogs.msdn.com/sqlserverstorageengine/archive/2008/03/23/minimal-logging-changes-in-sql-server-2008-part-2.aspx
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/
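    Whichever way you split the procedures, the load pattern itself can benefit from the minimal logging described in those links. A sketch with made-up names, assuming SQL Server 2008+, SIMPLE or BULK_LOGGED recovery, and an empty heap destination:

    -- TABLOCK is what makes an INSERT ... SELECT eligible for minimal logging
    INSERT INTO dbo.DestTable WITH (TABLOCK)
    SELECT s.Col1, s.Col2, j.Col3
    FROM dbo.Source1 AS s
    JOIN dbo.Source2 AS j ON j.Id = s.Id;

    SELECT * INTO, as used in the question, is already minimally logged under those recovery models; the TABLOCK hint matters when the destination table exists beforehand.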

  • Best practice for existing target table

    We want to develop mappings for existing target tables.
    1. The tables are imported from the data dictionary
    2. The tables are used in the mappings
    Is it a good idea to set deploy to "no" for these tables, to prevent them from being used by the deployment manager (default action: create)?
    Thanks your for a good practice
    Stp
    Message was edited by:
    user444776

    Hi,
    Yes, you are right. For tables which already exist in the database schema, the deployment action should be "None", unless you have planned and made changes to the table structures.
    Cheers
    Mahesh

  • Best Practice SAP HANA Unify tables

    I have two tables from different systems with the same structure, and we need to unify them. Is it possible to do this with a calculation view, and then build the analytics and other calculation views on top of it?
    The tables have the same name but different schemas.
    Thanks

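    For what it's worth, unifying two identically structured tables is typically done with a union. A sketch in plain SQL (the schema and table names are made up); a graphical calculation view with a Union node over the two tables achieves the same result, and the analytic and other calculation views can then be built on top of it:

    CREATE VIEW "MYSCHEMA"."V_UNIFIED" AS
      SELECT 'SYSTEM_A' AS source_system, t.*
      FROM "SCHEMA_A"."MYTABLE" t
      UNION ALL
      SELECT 'SYSTEM_B' AS source_system, t.*
      FROM "SCHEMA_B"."MYTABLE" t;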

  • Best Practice for Running Number Table

    Dear All
    Thank you for your attention.
    I would like to generate a number for each order, e.g.
    AAAA150001
    AAAA is the prefix,
    15 is the year and 0001 is the sequence number.
    I proposed the table as below
    Prefix    | Year     | Number
    AAAA    | 15        | 1
    Using the SQL query below to get the latest number:
    SELECT CurrentNumber = Prefix + Year + RIGHT ('0000'+ CAST (Number+1 AS VARCHAR(4)), 4)
    FROM RunningNumber WHERE Prefix = 'AAAA'
    After the whole save process, I then update the running number table:
    UPDATE RunningNumber SET Number = (Number +1) WHERE Prefix = 'AAAA' AND Year = '15'
    Is that a normal approach, and is it good for handling concurrent saves?
    Thanks.
    Best Regards
    mintssoul

    Dear Visakh16
    Each year the number will reset; the table will be as below:
    Prefix    | Year     | Number
    AAAA    | 15        | 8749
    AAAA    | 16        | 1
    I could only use option 1 from your reference.
    To use this approach, I must make sure that:
    a) the number will not be duplicated or skipped when multiple users are using the system concurrently;
    b) the number will not increment when there is an error after getting the new number.
    Could using the following methods achieve a) and b)?
    1) .NET SqlTransaction.Rollback
    2) SQL ROLLBACK TRANSACTION
    To avoid repeating information, the details of 1) and 2) are not listed here; please refer to my previous reply to Uri.
    Thanks.
    Best Regards, mintssoul
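    On the concurrency question itself: the select-then-update pattern from the question can hand two sessions the same number. One common alternative is to increment and read the counter in a single atomic statement, inside the same transaction as the order insert, so that a rollback also undoes the increment; a minimal sketch against the RunningNumber table from the question:

    DECLARE @New TABLE (Number INT);

    BEGIN TRAN;

    -- Bump the counter and capture the new value in one statement;
    -- the row lock serializes concurrent callers
    UPDATE RunningNumber
    SET Number = Number + 1
    OUTPUT inserted.Number INTO @New
    WHERE Prefix = 'AAAA' AND [Year] = '15';

    SELECT 'AAAA' + '15' + RIGHT('0000' + CAST(Number AS VARCHAR(4)), 4) AS CurrentNumber
    FROM @New;

    -- ... insert the order here, then COMMIT; on any error, ROLLBACK
    -- undoes the increment too, which covers both a) and b)
    COMMIT;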

  • Problem adding more than 1 table from a database

    I am running Crystal Reports 2008 on Windows Server 2008 and connecting to an ODBC database using the IBM Client Access driver.  When I create a new report and add only one table from the database, everything works fine.  If I try to add a second table to the report, I get an error - Crystal Reports has stopped working - with no other useful information.  I have also tried creating a new report and adding 2 tables at the same time; it also blows up before I even get to the linking screen.
    I have also tried opening a report that was created in Crystal 11 that has 2 or more tables in it.  The report will open and run successfully in Crystal 2008.

    Hi Gwyn,
    Moved this post to the Database Connectivity Forum.
    Have you installed any CR patches? If not, please do so. You can also find our DataDirect ODBC drivers from this link:
    http://service.sap.com/sap/bc/bsp/spn/bobj_download/main.htm
    Select Crystal Reports 2008 and you'll find Service Pack 2 and the DataDirect ODBC drivers. Try both and then reply if you still have a problem or if it now works.
    Thank you
    Don
