Best practice for replicating a partitioned table

Hi SQL Gurus,
Requesting your help with the design considerations for replicating a partitioned table.
1. Four partitioned tables (one master table with foreign key constraints to three child tables), partitioned monthly on a YYYYMM key.
2. One table has an XML column.
3. A monthly partition switch removes old data; because of the foreign key constraints, they are disabled until the switch is complete.
4. One month of partitioned data is 60 GB.
Given the above, I want to maintain a copy of the same tables on a different server.
The options I can think of:
1. Transactional replication, but I am worried about the XML column, the snapshot size, and whether the ALTER TABLE ... SWITCH will do the same thing on the subscriber or turn into row-by-row deletes.
2. Log shipping in standby mode every 15 minutes, but that covers the entire database, and I have another monthly partitioned table that is about 250 GB.
3. Replicating the partitioned table as a non-partitioned table; in that case, how would the partition switch behave? Is it possible to ignore deletes when setting up the replication? (See the sketch after this list.)
4. An SSIS package or stored procedure that moves the data on a daily basis.
5. Backup and restore on a daily basis, but this will not work once the source partition is removed.
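If transactional replication is chosen, one way to keep the monthly purge from being replayed as row-by-row deletes on the subscriber is to create the article so that DELETE commands are simply not propagated. A minimal sketch, assuming a hypothetical publication MyPub and master table dbo.Orders (adjust names and test carefully, since the subscriber then keeps history the publisher has switched out; on SQL Server 2008 and later the publication also needs to allow partition switching via @allow_partition_switch):

-- Hypothetical publication/table names.
-- @del_cmd = N'NONE' tells replication not to propagate DELETE statements,
-- so the monthly partition switch/purge on the publisher leaves subscriber rows intact.
EXEC sp_addarticle
    @publication      = N'MyPub',
    @article          = N'Orders',
    @source_owner     = N'dbo',
    @source_object    = N'Orders',
    @del_cmd          = N'NONE',
    @pre_creation_cmd = N'none';   -- do not drop existing subscriber data when the snapshot is applied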
Ganesh

Please refer to
http://msdn.microsoft.com/en-us/library/cc280940.aspx

Similar Messages

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that sells tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2,500 tables in the database, all with the same columns, differing only in table name. Each device
    sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice for designing a database in this situation?
    When accessing the database from a C# application, which is better to use: direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    Tables columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I have run into during my 9 months at the company (I work as a software engineer, but I am planning to take over database maintenance since no one is maintaining
    it right now and there is nothing more I can do in the code to make it faster).
    At the end of every month our clients generate reports for the previous month for all their cars; some clients have 100+ cars, and some have only a few. This is when the real issues start: they pull their data from our server over the internet while 2,000
    units are sending data to that same server, and they keep getting read timeouts because the inserts hold locks that block the SELECT commands. I worked around it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a long time when selecting 100+ units. That's what I want to solve; the problem is that whoever wrote the C# app used hard-coded SQL statements,
    AND
    the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now, about the reports: there are summary reports, stop reports, zone reports, etc. Most of them depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to enable snapshots so that SELECT statements don't get kicked out in favor of INSERT commands, but does SQL Server automatically read from the snapshots, or do I have to tell it to do so? (See the sketch below.)
    Other than proper indexing, what else do I need? Tom
    Phillips suggested table partitioning, but I don't think it is needed in my case since our database size is only 78 GB.
    When I run code analysis on the app, Visual Studio tells me I had better use stored procedures or views instead of hard-coded SELECT statements; what difference will this make in terms of performance?
    Thanks in advance. 
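    Regarding whether SQL Server reads from the snapshot automatically: it does not until the database is configured for row versioning. A minimal sketch, assuming SQL Server 2005 or later and a hypothetical database name GpsTracking:

    -- Hypothetical database name. With READ_COMMITTED_SNAPSHOT on, ordinary SELECTs
    -- under the default READ COMMITTED level read the last committed row version
    -- instead of blocking behind in-flight INSERT/UPDATE locks.
    ALTER DATABASE GpsTracking SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

    -- Alternatively, allow SNAPSHOT isolation and opt in per reporting connection:
    ALTER DATABASE GpsTracking SET ALLOW_SNAPSHOT_ISOLATION ON;
    -- SET TRANSACTION ISOLATION LEVEL SNAPSHOT;  -- run in the reporting session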

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse system generated by an ETL process might be exempt; but the process that creates that data is not exempt - that process and ultimately the data - must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign off by each of the affected groups. Those groups can include security, dba, business user, IT and even legal. The deployment documents always include recovery steps so that if something goes wrong or the deployment can't proceed, there is a documented procedure for how to restore the system to a valid working state.
    The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong the responsible party is responsible for assisting in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say, it simply cannot be that easy, for one simple reason: adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention what changes are being made to actually USE what you are adding.
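    To make the request concrete, the kind of script the DBAs are asking for is typically written so it can be reviewed and rerun safely. A minimal sketch for one table, with hypothetical object and column names (dbo.StatusCode), shown here in SQL Server syntax purely as an example:

    -- Hypothetical object and seed data; create the table only if it is missing.
    IF OBJECT_ID(N'dbo.StatusCode', N'U') IS NULL
    BEGIN
        CREATE TABLE dbo.StatusCode
        (
            StatusCodeId int          NOT NULL CONSTRAINT PK_StatusCode PRIMARY KEY,
            Description  varchar(100) NOT NULL
        );
    END;

    -- Insert seed rows only if they are not already present, so reruns are harmless.
    INSERT INTO dbo.StatusCode (StatusCodeId, Description)
    SELECT v.StatusCodeId, v.Description
    FROM (VALUES (1, 'Open'), (2, 'Closed')) AS v (StatusCodeId, Description)
    WHERE NOT EXISTS (SELECT 1 FROM dbo.StatusCode s WHERE s.StatusCodeId = v.StatusCodeId);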

  • JDeveloper 11.1.1.3: Best Practice for Checkboxes in Table

    Hi there,
    I'm having problems with checkboxes inside the table component.
    Can someone please fill me in as to what the best practice is to use checkboxes inside a table ?
    In the database, we are storing values Y and N.
    Thanks,
    Mark

    Hi Mark,
    I suppose you are talking about ADF Faces applications. If so, then I have two preferred approaches, tested and used in real practice:
    *1) First approach: Create a simple converter, define it in the faces-config.xml and set it in the <af:selectBooleanCheckbox> tags' "converter" attribute, for example:*
    package mypackage;
    import javax.faces.application.FacesMessage;
    import javax.faces.component.UIComponent;
    import javax.faces.context.FacesContext;
    import javax.faces.convert.Converter;
    import javax.faces.convert.ConverterException;
    public class BooleanYNConverter implements Converter {
      public BooleanYNConverter() {
      }

      public Object getAsObject(FacesContext facesContext, UIComponent uiComponent, String string) {
        if (string == null) return null;
        String s = string.trim();
        if (s.length() == 0) return null;
        if (s.equalsIgnoreCase("true")) return "Y";
        if (s.equalsIgnoreCase("false")) return "N";
        FacesMessage errorMessage = new FacesMessage(FacesMessage.SEVERITY_ERROR,
            "Cannot convert " + string + " to Y/N. It must be either true or false",
            "Cannot convert " + string + " to Y/N. It must be either true or false");
        throw new ConverterException(errorMessage);
      }

      public String getAsString(FacesContext facesContext, UIComponent uiComponent, Object object) {
        if (object == null) return "";
        if (object.equals("Y")) return "true";
        if (object.equals("N")) return "false";
        FacesMessage errorMessage = new FacesMessage(FacesMessage.SEVERITY_ERROR,
            "Cannot convert " + object + " to true/false. It must be either Y or N",
            "Cannot convert " + object + " to true/false. It must be either Y or N");
        throw new ConverterException(errorMessage);
      }
    }
    In faces-config.xml:
      <converter>
        <converter-id>BooleanYNConverter</converter-id>
        <converter-class>mypackage.BooleanYNConverter</converter-class>
      </converter>
    In the JSF page:
    <af:selectBooleanCheckbox ... converter="BooleanYNConverter"/>
    N.B. If you use this approach, the ViewObject attribute's Control Type should be set to "Default" instead of "Checkbox" (see the attribute's Control Hints section in the dialog box)!
    *2) Second approach: In the PageDef, define a button binding for the Y/N values and map the corresponding item in the table binding to this button binding. In this way the VO attribute's Y/N value is remapped to true/false as the checkbox component expects. Neither converters nor additional configuration are necessary, but you have to repeat this for each checkbox field:*
    <?xml version="1.0" encoding="UTF-8" ?>
    <pageDefinition xmlns="http://xmlns.oracle.com/adfm/uimodel" version="11.1.1.56.60" id="TestPagePageDef" Package="view.pageDefs">
      <parameters/>
      <executables>
        <variableIterator id="variables"/>
        <iterator Binds="DeptViewRO" RangeSize="25" DataControl="AppModuleDataControl" id="DeptViewROIterator"/>
      </executables>
      <bindings>
        <tree IterBinding="DeptViewROIterator" id="DeptViewRO">
          <nodeDefinition DefName="model.DeptViewRO" Name="DeptViewRO0">
            <AttrNames>
              <Item Value="DeptID"/>
              <Item Value="DeptName"/>
              <Item Value="Flag" Binds="MyFlag"/>
            </AttrNames>
          </nodeDefinition>
        </tree>
        <button IterBinding="DeptViewROIterator" StaticList="true" id="MyFlag">
          <AttrNames>
            <Item Value="Flag"/>
          </AttrNames>
          <ValueList>
            <Item Value="Y"/>
            <Item Value="N"/>
          </ValueList>
        </button>
      </bindings>
    </pageDefinition>
    In the sample above the target VO attribute is called Flag. Have a look at the line <tt><Item Value="Flag" Binds="MyFlag"/></tt>. This line does the magic.
    Hope I've been a bit helpful.
    Dimitar
    Edited by: Dimitar Dimitrov on Nov 13, 2010 1:53 PM
    There was a little mistake: Instead of BooleanYNConverter I had written BooleanYNCheckbox in the <af:selectBooleanCheckbox> tag.

  • Best practice for existing target table

    We want to develop mappings for existing target tables.
    1. The tables are imported from the datadictionary
    2. The tables are used in the mappings
    Is it a good idea to set deploy to "No" for these tables, to prevent them from being used by the deployment manager (default action: create)?
    Thank you for a good practice.
    Stp
    Message was edited by:
    user444776

    Hi,
    Yes, you are right. For tables that already exist in the database schema, the deployment action should be "None", unless you have planned and made changes to the table structures.
    Cheers
    Mahesh

  • Best Practice for Replicating Schema Changes

    Hi,
    We manage several merge replication topologies (each topology has a single publisher/distributor with several pull subscriptions, all servers/subscribers are SQL Server 2008 R2).  When we have a need to perform schema changes in support of pending software
    upgrades we do the following:
    a) Have all subscribers synchronize to ensure there are no unsynchronized changes present in the topology at the time of schema update,
    b) Make full copy-only backup of distribution and publication databases,
    c) Execute snapshot agent,
    d) Execute schema change script(s) on the publisher (* when c and d are reversed, this has caused issues with changes to view definitions, which has resulted in us having to reinitialize subscriptions),
    e) Have subscribers synchronize again to receive schema updates.
    Each topology has its own quirks in terms of subscriber availability and consequently the best time to perform such updates.
    The above process would seem necessary when making schema changes to remove tables, columns and/or views from the database, but when schema changes are focused on adding and/or updating objects, and/or adding/updating data, is the entire process above necessary? 
    In this instance, if it's possible to remove the step of coordinating the entire topology to synchronize prior to performing these changes I would like to do that.
    The process as we currently perform it works without issue, but I'd like to streamline it if and where possible, while maintaining integrity and avoiding potential for non-convergence.
    Any assistance or insight you can provide is greatly appreciated.
    Best Regards
    Brad

    If you need to make schema changes then you will need to use ALTER syntax at the publisher. By default the schema change will be propagated to subscribers automatically, provided the publication property
    @replicate_ddl is set to true. This is covered in
    Make Schema Changes on Publication Databases.
    This can be done at any time, without the need to synchronize unsynchronized changes, make a backup, or execute the snapshot agent.
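    As a quick illustration (the publication name MyMergePub is hypothetical), you can verify the setting and, if needed, enable it like this:

    -- Inspect current publication properties, including replicate_ddl.
    EXEC sp_helpmergepublication @publication = N'MyMergePub';

    -- Turn on DDL replication if it is currently off.
    EXEC sp_changemergepublication
        @publication = N'MyMergePub',
        @property    = N'replicate_ddl',
        @value       = N'true';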
    Adding a new article involves adding the article to the publication, creating a new snapshot, and synchronizing the subscription to apply the schema and data for the newly added article. Reinitialization is not required, but a new snapshot is.
    Dropping an article from a publication involves dropping the article, creating a new snapshot, and synchronizing subscriptions. Special considerations must be made for merge publications with parameterized filters and a compatibility level lower than 90RTM.
    This is covered in
    Add Articles to and Drop Articles from Existing Publications.
    Brandon Williams (blog |
    linkedin)

  • Best practice for archiving FGA_LOG$ table?

    Hi all,
    I am running Oracle 11g Release 1, and have been asked to look at how best to archive the FGA_LOG$ table. We are required to store the logs for 3 years. I ran the following query:
    select owner, segment_name, segment_type, bytes/1024/1024 "MB" from dba_segments
    where tablespace_name = 'AUDIT_TBS' AND rownum <=100
    AND bytes/1024/1024 > 1 order by bytes desc;
    and it shows that the FGA_LOG$ table is currently at 84861 MB!
    Is my best course of action to:
    1. Install the patch necessary to access the DBMS_AUDIT_MGMT package
    2. Move the table to a different tablespace
    3. Use the DataPump export utlity
    4. Cleanup the table with the DBMS_AUDIT_MGMT package?
    Thanks,
    David

    To remove the old rows from the table itself:
    delete from fga_log$ where timestamp# < sysdate-(365*3);
    If you want to keep the data outside the database:
    expdp to a file called this_years_fga_data.dmp, zip it up and keep it wherever you like, then
    truncate fga_log$
    and repeat annually.
    * Edit, just to clarify: the timestamp# column may depend on version.
    Edited by: deebee_eh on Apr 25, 2012 5:05 PM

  • Best Practice for Running Number Table

    Dear All
    Thank you for your attention.
    I would like to generate a number for each order, for example
    AAAA150001
    where AAAA is the prefix, 15 is the year and 0001 is the sequence number.
    I proposed the table below:
    Prefix    | Year     | Number
    AAAA      | 15       | 1
    I use a SQL query like the one below to get the latest number:
    SELECT CurrentNumber = Prefix + Year + RIGHT ('0000'+ CAST (Number+1 AS VARCHAR(4)), 4)
    FROM RunningNumber WHERE Prefix = 'AAAA'
    and after the save process completes, I update the running number table:
    UPDATE RunningNumber SET Number = (Number +1) WHERE Prefix = 'AAAA' AND Year = '15'
    Is that a normal approach, and does it handle concurrent saving well?
    Thanks.
    Best Regards
    mintssoul

    Dear Visakh16
    Each year the number will reset; the table will be as below:
    Prefix    | Year     | Number
    AAAA      | 15       | 8749
    AAAA      | 16       | 1
    I could only use option 1 from your reference.
    To use this approach, I must make sure that
    a) the number will not be duplicated or skipped when multiple users are using the system concurrently, and
    b) the number will not increment when there is an error after getting the new number.
    Would using the following methods achieve a) and b)? (See the sketch below.)
    1) .NET SqlTransaction.Rollback
    2) SQL ROLLBACK TRANSACTION
    Thanks. To avoid repeating information, the details of 1) and 2) are not listed here; please refer to my previous reply to Uri.
    Thanks.
    Best Regards
    mintssoul
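    One way to address both a) and b) is to read and increment the counter in a single atomic statement inside the same transaction as the order save; if the save fails and the transaction rolls back, the increment rolls back with it. A minimal sketch against the RunningNumber table described above (column types assumed to be character, as in the original query):

    BEGIN TRANSACTION;

    DECLARE @NewNumber TABLE (OrderNo varchar(10));

    -- Atomic read-and-increment: two concurrent sessions cannot receive the same
    -- value, and the increment is undone if this transaction rolls back.
    UPDATE RunningNumber
    SET Number = Number + 1
    OUTPUT inserted.Prefix + inserted.[Year]
           + RIGHT('0000' + CAST(inserted.Number AS varchar(4)), 4)
    INTO @NewNumber (OrderNo)
    WHERE Prefix = 'AAAA' AND [Year] = '15';

    -- ... insert the order row here, using the value from @NewNumber ...

    COMMIT TRANSACTION;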

  • What is the best practice for inserting (unique) rows into a table containing key columns constraint where source may contain duplicate (already existing) rows?

    My final data table contains a two key columns unique key constraint.  I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table but are not constrained
    (not unique) in the daily capture table).  I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns).  Currently, what I do is to select * into a #temp table from the join
    of daily capture and final data tables on these two key columns.  Then I delete the rows in the daily capture table which match the #temp table.  Then I insert the remaining rows from daily capture into the final data table. 
    Would it be possible to simplify this process by using an Instead Of trigger in the final table and just insert directly from the daily capture table?  How would this look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need
    to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make
    up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the fist day of your RDBMS class? A table has to have a key.  You need to fix this error. What ETL tool do you use? 
    >> I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
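    For reference, a minimal sketch of the MERGE approach the reply points to, with hypothetical table and column names (dbo.FinalData keyed on KeyCol1/KeyCol2, loaded from dbo.DailyCapture):

    -- Hypothetical names. The capture rows are de-duplicated first, because MERGE
    -- requires the source to supply at most one row per target key.
    ;WITH src AS
    (
        SELECT KeyCol1, KeyCol2, SomeValue,
               ROW_NUMBER() OVER (PARTITION BY KeyCol1, KeyCol2 ORDER BY KeyCol1) AS rn
        FROM dbo.DailyCapture
    )
    MERGE dbo.FinalData AS tgt
    USING (SELECT KeyCol1, KeyCol2, SomeValue FROM src WHERE rn = 1) AS s
        ON tgt.KeyCol1 = s.KeyCol1 AND tgt.KeyCol2 = s.KeyCol2
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (KeyCol1, KeyCol2, SomeValue)
        VALUES (s.KeyCol1, s.KeyCol2, s.SomeValue);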

  • Best practice for partitioning 300 GB disk

    Hi,
    I would like to seek advice on how I should partition a 300 GB disk on Solaris 8.x, and what the optimal size would be for each of the partitions.
    The system will be used internally for running web/application servers and database servers.
    Thanks in advance for your help.

    There is no "best practice" regardles of what others might say. I depends entirely on how you plan on using and maintaining the system. I have run into too many situations where fine-grained file system sizing bit the admins in the backside. For example, I've run into some that assumed that /var is only going to be for logging and printing, so they made it nice and small. What they didn't realize is that patch and package information is also stored in /var. So, when they attempted to install the R&S cluster, they couldn't because they couldn't put the patch info into /var.
    I've also run into other problems where a temp/export system that was mounted on a root-level directory. They made the assumption that "Oh, well, it's root. It can be tiny since /usr and /opt have their own partitions." The file system didn't mount properly, so any scratch files in that directory that were created went to the root file system and filled it up.
    You can never have a file system that's too big, but you can always have a file system that's too small.
    I will recommend the following, however:
    * /var is the most volatile directory and should be on its own several GB partition to account for patches, packages, and logs.
    * You should have another partition as big as your system RAM and assign that partition as a system/core dump for system crashes.
    * /usr or whatever file system it's on must be big enough to assume that it will be loaded with FOSS/Sunfreeware tools, even if at this point you have no plans on installing them. I try to make mine 5-6 GB or more.
    * If this is a single-disk system, do not use any kind of parallel access structure, like what Oracle prefers, as it will most likely degrade system performance. Single disks can only make single I/O calls, obviously.
    Again, there is no "best practice" for this. It's all based on what you plan on doing with it, what applications you plan on using, and how you plan on using it. There is nothing that anyone here can tell you that will be 100% applicable to your situation.

  • What is the best practice for creating primary key on fact table?

    What is the best practice for the primary key on a fact table?
    1. Using composite key
    2. Create a surrogate key
    3. No primary key
    In document, i can only find "From a modeling standpoint, the primary key of the fact table is usually a composite key that is made up of all of its foreign keys."
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/logical.htm#i1006423
    I also found a relevant thread states that primary key on fact table is necessary.
    Primary Key on Fact Table.
    But if the business does not require uniqueness of the records and there is no materialized view, do we still need a primary key? Is there any other bad effect if there is no primary key on the fact table? And are there any benefits from not creating a primary key?

    Well, natural combination of dimensions connected to the fact would be a natural primary key and it would be composite.
    Having an artificial PK might simplify things a bit.
    Having no PK leads to a major mess. A fact should represent a business transaction, or some general event. If you're loading data you want to be able to identify the records that have been processed. Also, without a PK, if you forget to create a unique key, access to this fact table will be slow. Plus, having no PK means that if you want to use different tools, like the Data Modeller in Jbuilder or OWB insert/update functionality, they won't function, since there's no PK. Defining a PK for every table is good practice. Not defining a PK is asking for a load of problems, from performance to functionality and data quality.
    Edited by: Cortanamo on 16.12.2010 07:12
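    For illustration, the two keying styles discussed above might look like the following; the fact and dimension names (sales_fact, date_dim, product_dim) and their columns are hypothetical:

    -- Option 1: composite natural key made of the dimension foreign keys.
    CREATE TABLE sales_fact (
        date_key     NUMBER NOT NULL REFERENCES date_dim (date_key),
        product_key  NUMBER NOT NULL REFERENCES product_dim (product_key),
        sales_amount NUMBER,
        CONSTRAINT sales_fact_pk PRIMARY KEY (date_key, product_key)
    );

    -- Option 2: surrogate key (populated from a sequence by the ETL),
    -- with a unique constraint still protecting the natural grain.
    CREATE TABLE sales_fact2 (
        sales_fact_id NUMBER NOT NULL,
        date_key      NUMBER NOT NULL REFERENCES date_dim (date_key),
        product_key   NUMBER NOT NULL REFERENCES product_dim (product_key),
        sales_amount  NUMBER,
        CONSTRAINT sales_fact2_pk PRIMARY KEY (sales_fact_id),
        CONSTRAINT sales_fact2_uk UNIQUE (date_key, product_key)
    );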

  • Best Practice for Extracting a Single Value from Oracle Table

    I'm using Oracle Database 11g Release 11.2.0.3.0.
    I'd like to know the best practice for doing something like this in a PL/SQL block:
    DECLARE
        v_student_id    student.student_id%TYPE;
    BEGIN
        SELECT  student_id
        INTO    v_student_id
        FROM    student
        WHERE   last_name = 'Smith'
        AND     ROWNUM = 1;
    END;
    Of course, the problem here is that when there is no hit, the NO_DATA_FOUND exception is raised, which halts execution.  So what if I want to continue in spite of the exception?
    Yes, I could create a nested block with EXCEPTION section, etc., but that seems clunky for what seems to be a very simple task.
    I've also seen this handled like this:
    DECLARE
        v_student_id    student.student_id%TYPE;
        CURSOR c_student_id IS
            SELECT  student_id
            FROM    student
            WHERE   last_name = 'Smith'
            AND     ROWNUM = 1;
    BEGIN
        OPEN c_student_id;
        FETCH c_student_id INTO v_student_id;
        IF c_student_id%NOTFOUND THEN
            DBMS_OUTPUT.PUT_LINE('not found');
        ELSE
        NULL;  -- (do stuff)
        END IF;
        CLOSE c_student_id;   
    END;
    But this still seems like killing an ant with a sledge hammer.
    What's the best way?
    Thanks for any help you can give.
    Wayne

    Do not design in order to avoid exceptions. Do not code in order to avoid exceptions.
    Exceptions are good. Damn good. As it allows you to catch an unexpected process branch, where execution did not go as planned and coded.
    Trying to avoid exceptions is just plain bloody stupid.
    As for you specific problem. When the SQL fails to find a row and a value to return, what then? This is unexpected - if you did not want a value, you would not have coded the SQL to find a value. So the SQL not finding a value is an exception to what you intend with your code. And you need to decide what to do with that exception.
    How to implement it. The #1 rule in software engineering - modularisation.
    E.g.
    create or replace function FindSomething( name varchar2 ) return foo.col1%type is
      id foo.col1%type;
    begin
      select col1 into id from foo where col2 = upper(name);
      return( id );
    exception when NO_DATA_FOUND then
      return( null );
    end;
    And that is your problem. Modularisation. You are not considering it.
    And not the only problem mind you. Seems like your keyboard has a stuck capslock key. Writing code in all uppercase is just as bloody silly as trying to avoid exceptions.

  • (Request for:) Best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC

    Could you please share your best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC, that will be running on a new WinSrv 2012 r2 host server.   (This
    will be for a brand new network setup, new forest, domain, etc.)
    Specifically, your best practices regarding:
    the sizing of non virtual and virtual volumes/partitions/drives,  
    the use of sysvol, logs, & data volumes/drives on hosts & guests,
    RAID levels for the host and the guest(s),  
    IDE vs SCSI and drivers both non virtual and virtual and the booting there of,  
    disk caching settings on both host and guests.  
    Thanks so much for any information you can share.

    A bit of non essential additional info:
    We are a small-to-midrange school district that, after close to 20 years on Novell networks, has decided to design and create a new Microsoft network and migrate all of our data and services
    over to the new infrastructure. We are planning on rolling out 2012 R2 servers with as much Hyper-V virtualization as possible.
    During the last few weeks we have been able to find most of the information we need to undertake this project, and most of it was pretty solid with little ambiguity, except for
    the information regarding virtualizing the DCs, which has been a bit inconsistent.
    Yes, we have read all the documents that most of these posts tend to point to, but found that some, if not most, still refer to performing this under Server 2008 R2, and we haven't really
    seen all that much on Server 2012 R2.
    We have read these and others:
    Introduction to Active Directory Domain Services (AD DS) Virtualization (Level 100), 
    Virtualized Domain Controller Technical Reference (Level 300),
    Virtualized Domain Controller Cloning Test Guidance for Application Vendors,
    Support for using Hyper-V Replica for virtualized domain controllers.
    Again, thanks for any information, best practices, cookie cutter or otherwise that you can share.
    Chas.

  • Where to find best practices for tuning data warehouse ETL queries?

    Hi Everybody,
    Where can I find some good educational material on tuning ETL procedures for a data warehouse environment?  Everything I've found on the web regarding query tuning seems to be geared only toward OLTP systems.  (For example, most of our ETL
    queries don't use a WHERE clause, so the vast majority of the operations are table scans and index scans, whereas most index tuning sites are striving for index seeks.)
    I have read Microsoft's "Best Practices for Data Warehousing with SQL Server 2008R2," but I was only able to glean a few helpful hints that don't also apply to OLTP systems:
    often better to recompile stored procedure query plans in order to eliminate variances introduced by parameter sniffing (i.e., better to use the right plan than to save a few seconds and use a cached plan SOMETIMES);
    partition tables that are larger than 50 GB;
    use minimal logging to load data precisely where you want it as fast as possible;
    often better to disable non-clustered indexes before inserting a large number of rows and then rebuild them immediately afterward (sometimes even for clustered indexes, but test first; see the sketch after this list);
    rebuild statistics after every load of a table.
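    For illustration, the disable/rebuild pattern mentioned above might look like the following; the table and index names (dbo.FactSales, IX_FactSales_Date) are hypothetical:

    -- Disable a nonclustered index before the bulk load so it is not maintained
    -- row by row (do NOT disable the clustered index, or the table becomes inaccessible).
    ALTER INDEX IX_FactSales_Date ON dbo.FactSales DISABLE;

    -- ... bulk load into dbo.FactSales here ...

    -- Rebuild afterwards; rebuilding also refreshes the index statistics.
    ALTER INDEX ALL ON dbo.FactSales REBUILD;

    -- Refresh remaining table statistics after the load.
    UPDATE STATISTICS dbo.FactSales;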
    But I still feel like I'm missing some very crucial concepts for performant ETL development.
    BTW, our office uses SSIS, but only as a glorified stored procedure execution manager, so I'm not looking for SSIS ETL best practices.  Except for a few packages that pull from source systems, the majority of our SSIS packages consist of numerous "Execute
    SQL" tasks.
    Thanks, and any best practices you could include here would be greatly appreciated.
    -Eric

    Online ETL solutions are among the most challenging to implement efficiently. You can read my blog posts on online DWH solutions to see how to configure an online DWH solution for ETL using the MERGE command of SQL Server
    2008, and also to learn some important concepts relevant to any DWH solution, such as indexing, de-normalization, etc.:
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927061-data-warehousing-workshop-1-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927103-data-warehousing-workshop-2-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927173-data-warehousing-workshop-3-4-
    Kindly let me know if any further help is needed
    Shehap (DB Consultant/DB Architect) Think More deeply of DB Stress Stabilities

  • Best Practices for Defining NDS Java Projects...

    We are doing a Proof of Concept on using NDS to develop non-SAP Java applications.  We are attempting to determine if we can replace our current Java development tools with NDS/WAS.
    We are struggling with SAP's terminology and "plumbing" for setting up and defining Java projects. For example, what are Tracks, Software Components, Development Components, etc., and when do you define them? All of these terms are totally foreign to us and do not relate to our current Java environment (at least not that we can see). We are also struggling with how the DTR and activities tie in to those components.
    If any one has defined best practices for setting up Java projects or has struggled with and overcome these same issues, please provide us with some guidance.  This is a very frustrating and time-consuming issue for us.
    Thank you!!

    Hi Peggy,
    In the Component Model we divide software projects into small components. Components can use other components in a well-defined manner.
    A development object is a part of a component that can be changed or developed in some way; it provides the component with a certain part of its functionality. A development object may be a Java class, a Web Dynpro view, a table definition, a JSP page, and so on. Development objects are always stored as “sources” in a repository.
    A development component can be defined as a frame shared by a number of objects, which are part of the software.
    Software components combine development components (DCs) into larger units for delivery and deployment.
    A track comprises the configurations and runtime systems required for developing software component versions. It ensures stable states of deliverables used by subsequent tracks.
    The Design Time Repository is for versioned source code management: distributed development of software in teams, and the transport and replication of sources.
    You can also find lot of support in SDN for the above concepts with tutorials.
    Refer to this link for an overview of the Java Development Infrastructure (JDI):
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/webas/java/java development infrastructure jdi overview.pdf
    To understand further
    Working with the NetWeaver Development Infrastructure:
    http://help.sap.com/saphelp_nw04/helpdata/en/03/f6bc3d42f46c33e10000000a11405a/content.htm
    In the above link you can find all the concepts clearly explained. You can also find the required tutorials for development.
    Regards,
    Vijith
