Best Practice to load multiple Tables

Hi,
I'm trying to load around 9-10 tables in SQL Server; all of them belong to a single project.
The table loads are just SELECT * INTO a destination table, using some joins.
Is it better to have one stored proc to load all these tables, or individual stored procs?
All these tables are independent of each other.
Appreciate your help!

If you are using SQL Server 2008 or later, take a look at these links on minimal logging:
http://blogs.msdn.com/sqlserverstorageengine/archive/2008/03/23/minimal-logging-changes-in-sql-server-2008-part-1.aspx
http://blogs.msdn.com/sqlserverstorageengine/archive/2008/03/23/minimal-logging-changes-in-sql-server-2008-part-2.aspx
Best Regards, Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
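Since the tables are independent, a common pattern is one stored proc per table plus a thin wrapper proc: each load stays independently testable and restartable, while the wrapper still gives a single entry point. A minimal sketch with hypothetical names (not the poster's actual tables), assuming simple or bulk-logged recovery so that SELECT ... INTO qualifies for minimal logging:

-- One proc per table: each load can be run and re-run on its own.
CREATE PROCEDURE dbo.Load_Customers
AS
BEGIN
    SET NOCOUNT ON;
    IF OBJECT_ID('dbo.Customers_Dest') IS NOT NULL
        DROP TABLE dbo.Customers_Dest;
    SELECT c.CustomerID, c.CustomerName, r.RegionName
    INTO   dbo.Customers_Dest   -- SELECT ... INTO can be minimally logged (see links above)
    FROM   src.Customers AS c
    JOIN   src.Regions   AS r ON r.RegionID = c.RegionID;
END;
GO

-- A thin wrapper preserves the convenience of the one-proc option.
CREATE PROCEDURE dbo.Load_AllProjectTables
AS
BEGIN
    EXEC dbo.Load_Customers;
    EXEC dbo.Load_Orders;   -- hypothetical sibling procs, one per table
END;
GO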

Similar Messages

  • Best practice to load a table that is a relationship between 2 parent tables

    I need to know the best way to load a table that is a relationship between two master tables (a junction table). It is an append procedure, not a replace procedure.
    Should it be SQL*Loader? What tool should be used?
    Thanks a lot.
    @}--->----->>----------
    Paola

    Yes, it is inside a file, a kind of .txt; it is ASCII.
    I want to load it without having any problems with constraints at all, but it should be fast enough to avoid performance problems.
    Thank you.
    Paola
    @{--->----->>----------
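    If loading speed is the main concern, one option (a hedged sketch with hypothetical names, not Paola's actual schema) is to expose the ASCII file as an external table and append into the junction table with a direct-path insert; the foreign keys to the two master tables are still enforced:

    -- External table over the ASCII file (LOAD_DIR is a DIRECTORY object pointing at the file's location)
    CREATE TABLE rel_stage_ext (
      a_id NUMBER,
      b_id NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY load_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('rel_data.txt')
    );

    -- Direct-path append into the junction table between the two masters
    INSERT /*+ APPEND */ INTO master_a_b (a_id, b_id)
    SELECT a_id, b_id FROM rel_stage_ext;
    COMMIT;

    SQL*Loader with APPEND would work just as well; the external-table route simply keeps everything in SQL.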

  • BPC:NW - Best practices to load Transaction data from ECC to BW

    I have a very basic question about loading GL transaction data into BPC for a variety of purposes. It would be great if you could point me towards best practices/standard ways of building such interfaces.
    1. For Planning
    When we are doing planning for cost center expenses and need to build variance reports against the budgets, what would be the source InfoCube/DSO for loading the data from ECC via BW, if the application is:
    YTD entry mode:
    Periodic entry mode:
    What difference does it make to use the 0FI_GL_12 data source versus the 0FIGL_C10 cube or the 0FIGL_O14 or 0FIGL_D40 DSOs?
    Based on the data entry mode of the planning application, what is the best way to make use of the 0BALANCE or debit/credit key figures on the BI side?
    2. For consolidation:
    Since we need to have trading partner detail, what are the best practices for loading the actual data from ECC?
    What are the typical mappings to maintain between movement types and flow dimensions?
    I have seen multiple threads with different responses, but I am looking for the best practices and the scenarios you are using to load such transactions from the OLTP system. I would really appreciate some functional insight into such scenarios.
    Thanks in advance.
    -SM

    For planning, please take a look at the SAP Extended Financial Planning rapid-deployment solution: the G/L Financial Planning module. This RDS captures best-practice integration from BPC 10 NW to SAP G/L. The RDS (including content and documentation) is free to licensed customers of SAP BPC, and it leverages the 0FIGL_C10 cube mentioned above.
      https://service.sap.com/public/rds-epm-planning
    For consolidation, please take a look at the SAP Financial Close & Disclosure Management rapid-deployment solution. This RDS captures best-practice integration from BPC 10 NW to SAP G/L, and it (including content and documentation) is also free to licensed customers of SAP BPC.
    https://service.sap.com/public/rds-epm-fcdm
    Note: you will require an SAP Service Marketplace ID (S-ID) to download the underlying SAP RDS content and documentation.
    The RDS documentation discusses the how and why of best-practice integration. You can also contact me directly at [email protected] for consultation.
    We are also in the process of rolling out the updated 2015 free training on these two RDS. Please register at this link and you will be sent an invite.
    https://www.surveymonkey.com/s/878J92K
    If the link is inactive at some point after this post, please contact [email protected]

  • SQL Loader : Loading multiple tables using same ctl file

    Hi ,
    We tried loading multiple tables using the same ctl file, but the data was not loaded and no errors were thrown.
    The ctl file content is summarised below :
    LOAD DATA
    APPEND INTO TABLE TABLE_ONE
    WHEN record_type = 'EVENT'
    TRAILING NULLCOLS
    (
    record_type CHAR TERMINATED BY ',',
    EVENT_SOURCE_FIELD CHAR TERMINATED BY ',' ENCLOSED BY '"',
    EVENT_DATE DATE "YYYY-MM-DD HH24:MI:SS" TERMINATED BY ',' ENCLOSED BY '"',
    EVENT_COST INTEGER EXTERNAL TERMINATED BY ',' ENCLOSED BY '"',
    EVENT_ATTRIB_1 CHAR TERMINATED BY ',' ENCLOSED BY '"',
    VAT_STATUS INTEGER EXTERNAL TERMINATED BY ',' ENCLOSED BY '"',
    ACCOUNT_REFERENCE CONSTANT 'XXX',
    bill_date "to_date('02-'||to_char(sysdate,'mm-yyyy'),'dd-mm-yyyy')",
    data_date "trunc(sysdate)",
    load_date_time "sysdate"
    )
    INTO TABLE TABLE_TWO
    WHEN record_type = 'BILLSUMMARYRECORD'
    TRAILING NULLCOLS
    (
    RECORD_TYPE CHAR TERMINATED BY ',',
    NET_TOTAL INTEGER EXTERNAL TERMINATED BY ',' ENCLOSED BY '"',
    LOAD_DATE_TIME "sysdate"
    )
    INTO TABLE BILL_BKP_ADJUSTMENTS
    WHEN record_type = 'ADJUSTMENTS'
    TRAILING NULLCOLS
    (
    RECORD_TYPE CHAR TERMINATED BY ',',
    ADJUSTMENT_NAME CHAR TERMINATED BY ',' ENCLOSED BY '"',
    LOAD_DATE_TIME "sysdate"
    )
    INTO TABLE BILL_BKP_CUSTOMERRECORD
    WHEN record_type = 'CUSTOMERRECORD'
    TRAILING NULLCOLS
    (
    RECORD_TYPE CHAR TERMINATED BY ',',
    GENEVA_CUSTOMER_REF CHAR TERMINATED BY ',' ENCLOSED BY '"',
    LOAD_DATE_TIME "sysdate"
    )
    INTO TABLE TABLE_THREE
    WHEN record_type = 'PRODUCTCHARGE'
    TRAILING NULLCOLS
    (
    RECORD_TYPE CHAR TERMINATED BY ',',
    PROD_ATTRIB_1_CHRG_DESC CHAR TERMINATED BY ',' ENCLOSED BY '"',
    LOAD_DATE_TIME "sysdate"
    )
    Has anyone faced similar issues, or are we going wrong somewhere?
    Regards,
    Sandipan

    This is the info on the discard in the log file :
    Record 1: Discarded - failed all WHEN clauses.
    Record 638864: Discarded - failed all WHEN clauses.
    Some of the records were loaded for one table, though.
    Regards,
    Sandipan
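    A likely cause, going by the documented SQL*Loader behavior for multi-table loads of delimited data: the field scan does not rewind to the start of the record for the second and subsequent INTO TABLE clauses, so their WHEN clauses test the wrong part of each record and the records are discarded. Adding POSITION(1) to the first field of each subsequent INTO TABLE clause resets the scan; a sketch for one of the clauses above:

    INTO TABLE TABLE_TWO
    WHEN record_type = 'BILLSUMMARYRECORD'
    TRAILING NULLCOLS
    (
    RECORD_TYPE POSITION(1) CHAR TERMINATED BY ',',  -- POSITION(1) rewinds to the start of the record
    NET_TOTAL INTEGER EXTERNAL TERMINATED BY ',' ENCLOSED BY '"',
    LOAD_DATE_TIME "sysdate"
    )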

  • Loading multiple tables with SQL Loader

    Hi,
    I want to load multiple tables from a single data file using SQL Loader.
    Here's the basic idea of what I want. Let's say I have two tables, T1
    and T2:
    SQL> desc T1;
    COL1 VARCHAR2(20)
    COL2 VARCHAR2(20)
    SQL> desc T2;
    COL1 VARCHAR2(20)
    COL2 VARCHAR2(20)
    COL3 VARCHAR2(20)
    My data file, test.dat, looks like this:
    AAA|KBA
    BBR|BBCC|CCC
    NNN|BBBN|NNA
    I want to load the first record into T1, and the second and third records into T2. How do I set up my control file to do that?
    Thanks!

    Tough Job
    LOAD DATA
    truncate
    INTO table t1
    when col3 = 'dummy'
    FIELDS TERMINATED BY '|'
    TRAILING NULLCOLS
    (col1,col2,col3 filler char nullif col3='dummy')
    INTO table t2
    when col3 != 'dummy'
    FIELDS TERMINATED BY '|'
    (col1,col2,col3 nullif col3='dummy')
    This will load t2 but not t1. The FILLER field col3 in t1 is not accepting NULLIF, and it is difficult to test whether a column is null in a WHEN condition. If I find something, I will let you know.
    Alternatively, can you separate the records into two files? A UNIX command could split the 2-column and 3-column record types for you, and then you could run two control files against them.
    Thanks,
    http://www.askyogesh.com
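    An untested alternative worth trying: SQL*Loader field conditions accept the BLANKS keyword, so the absence of a third field can drive the routing directly, without the 'dummy' trick (TRAILING NULLCOLS lets the 2-column records satisfy t1's field list, and POSITION(1) rewinds the scan for the second INTO TABLE clause):

    LOAD DATA
    INFILE 'test.dat'
    TRUNCATE
    INTO TABLE t1
    WHEN col3 = BLANKS
    FIELDS TERMINATED BY '|'
    TRAILING NULLCOLS
    (col1, col2, col3 FILLER CHAR)
    INTO TABLE t2
    WHEN col3 != BLANKS
    FIELDS TERMINATED BY '|'
    (col1 POSITION(1), col2, col3)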

  • Best Practice for using multiple models

    Hi buddies,
    Can you tell me the best practice for using multiple models in a single WD application?
    I am using 3 RFCs in a single application for my function. Each time, I import that RFC model under WD -> Models, and I bind each model separately to the component controller. Is this the right way to implement multiple models in a single application?

    It very much depends on your design, but one RFC per model is definitely a no-no.
    Refer to this document to understand how you should use models in the most efficient way:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/705f2b2e-e77d-2b10-de8a-95f37f4c7022?quicklink=events&overridelayout=true
    Thanks
    Prashant

  • Best practice for running multiple sites on 1 CF install?

    Hi-
    I'm setting up a new hosting environment (Windows Server 2008 Standard 64 bit VPS  configuration, MySQL, IIS 7, CF 9)
    Has anyone seen any docs, or can anyone suggest best practices for configuring multiple sites in this environment? At this point I'm thinking simple is best: one new site in IIS for each client (domain), pointed at CF.
    Given this environment, is anyone aware of any gotchas within the setup of CF 9 on IIS 7?
    Thank you in advance,
    Rich

    There's nothing wrong with that approach. You can run as many IIS sites as you like against a single CF install.
    As for installing CF on IIS 7, I recommend the following: install CF 9 without connecting it to IIS, then install the 9.0.1 upgrade and any hotfixes, then connect CF to IIS using the web server configuration utility. This will keep you from having to install the IIS 6 compatibility layer that's needed with CF 9 but not with CF 9.0.1.
    Dave Watts, CTO, Fig Leaf Software
    http://www.figleaf.com/
    http://training.figleaf.com/

  • SQL Loader to Load Multiple Tables from Multiple Files

    Hi
    I wish to create a control file to load multiple tables from multiple files,
    viz. Emp.dat into the emp table, Dept.dat into the dept table, and so on.
    How can I do it?
    Can I create a control file like this:
    OPTIONS(DIRECT=TRUE,
    SKIP_UNUSABLE_INDEXES=TRUE,
    SKIP_INDEX_MAINTENANCE=TRUE)
    UNRECOVERABLE
    LOAD DATA
    INFILE 'EMP.dat'
    INFILE 'DEPT.dat'
    INTO TABLE emp TRUNCATE
    FIELDS TERMINATED BY "|" OPTIONALLY ENCLOSED BY '"'
    (empno,
    ename,
    deptno)
    INTO TABLE dept TRUNCATE
    FIELDS TERMINATED BY "|" OPTIONALLY ENCLOSED BY '"'
    (deptno,
    dname,
    dloc)
    Appreciate a Quick Reply
    mailto:[email protected]

    Which operating system? ("Command Prompt" sounds like Windows)
    UNIX/Linux: a shell script with multiple calls to sqlldr run in the background with "&" (and possibly nohup)
    Windows: A batch file using "start" to launch multiple copies of sqlldr.
    http://www.pctools.com/forum/showthread.php?42285-background-a-process-in-batch-%28W2K%29
    http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/start.mspx?mfr=true
    Edited by: Brian Bontrager on May 31, 2013 4:04 PM
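    To make that concrete: a single control file with two INFILE clauses would feed every record of both files through every INTO TABLE clause, which is not the file-to-table mapping being asked for. A hedged sketch of the separate-control-file approach instead:

    -- emp.ctl
    LOAD DATA
    INFILE 'EMP.dat'
    INTO TABLE emp TRUNCATE
    FIELDS TERMINATED BY "|" OPTIONALLY ENCLOSED BY '"'
    (empno, ename, deptno)

    -- dept.ctl
    LOAD DATA
    INFILE 'DEPT.dat'
    INTO TABLE dept TRUNCATE
    FIELDS TERMINATED BY "|" OPTIONALLY ENCLOSED BY '"'
    (deptno, dname, dloc)

    On UNIX/Linux, run them concurrently with sqlldr control=emp.ctl ... & and sqlldr control=dept.ctl ... &; on Windows, launch each with start sqlldr ...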

  • Load Multiple Table using SQLLDR

    Hi,
    We are trying to load multiple tables using SQLLDR in the same loading session.
    Here's a snippet of our script:
    LOAD DATA INFILE '/u07/dd/yyyy_mm_dd.dat'
    append INTO TABLE table1
    WHEN STORE_NO != ' '
    (
    STORE_NO position(1:5),
    TR_DATE position(6:27) DATE "mm/dd/yy"
    )
    truncate INTO TABLE table2
    WHEN STORE_NO != ' '
    (
    STORE_NO position(1:5),
    TR_DATE position(6:27) DATE "mm/dd/yy"
    )
    Now, upon execution, this gives an error saying TRUNCATE shouldn't be present before the 2nd INTO clause. The script runs fine if we do the same operation, say append/replace/truncate, on both tables. My question here is: can we have SQLLDR perform two different operations (append & truncate in our case) in one session?
    Thanks
    Chinmay
    PS: I have already referred to http://www.orafaq.com/wiki/SQL*Loader_FAQ#Can_one_load_data_from_multiple_files.2F_into_multiple_tables_at_once.3F

    Not the way you have written it. Did you read the syntax diagram and the paragraph below it?
    "When you are loading a table, you can use the INTO TABLE clause to specify a table-specific loading method (INSERT, APPEND, REPLACE, or TRUNCATE) that applies only to that table. That method overrides the global table-loading method. The global table-loading method is INSERT, by default, unless a different method was specified before any INTO TABLE clauses."

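    In other words, per the syntax diagram, the table-specific method follows the table name; it cannot precede INTO. A hedged sketch of the corrected clauses from the question above:

    LOAD DATA INFILE '/u07/dd/yyyy_mm_dd.dat'
    APPEND                          -- global method; applies to table1
    INTO TABLE table1
    WHEN STORE_NO != ' '
    (
    STORE_NO position(1:5),
    TR_DATE position(6:27) DATE "mm/dd/yy"
    )
    INTO TABLE table2 TRUNCATE      -- table-specific method, after the table name
    WHEN STORE_NO != ' '
    (
    STORE_NO position(1:5),
    TR_DATE position(6:27) DATE "mm/dd/yy"
    )
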
  • Best practices for loading apo planning book data to cube for reporting

    Hi,
    I would like to know whether there are any best practices for loading APO planning book data to a cube for reporting.
    I have seen two types of design:
    1) The planning book extractor data is loaded first into a cube within the APO BW system, and then transferred to the actual BW system. Reports are run from the actual BW system's cube.
    2) The planning book extractor data is loaded directly into a cube within the actual BW system.
    We do these data loads during evening hours, once a day.
    Rgds
    Gk

    Hi GK,
    What I have normally seen is:
    1) Data is extracted from the APO planning area to an APO cube (for backup purposes), weekly or monthly, depending on how much data change you expect and how critical it is for the business. Backups are mostly monthly for DP.
    2) Data is extracted from the APO planning area directly to a DSO in the staging layer of BW, and then to BW cubes, for reporting.
    For DP this is monthly; for SNP, daily.
    You can also use option 1 that you mentioned. In this case, the APO cube is the backup cube, while the BW cube is the one you use for reporting, and the BW cube gets its data from the APO cube.
    The benefit in this case is that we have to extract data from the planning area only once, so the planning area is available to jobs and users for more time. However, backup and reporting extraction get mixed together, so issues in the flow could impact both the backup and the reporting. We have used this scenario recently and have yet to see the full impact.
    Thanks - Pawan

  • Best practice for deleting multiple rows from a table , using creator

    Hi
    Thank you for reading my post.
    What is the best practice for deleting multiple rows from a table using a RowSet?
    For example, how can I execute something like:
    delete from table1 where field1 = ? and field2 = ?
    Thank you

    Hi,
    Please go through the AppModel application which is available at: http://developers.sun.com/prodtech/javatools/jscreator/reference/codesamples/sampleapps.html
    The OnePage Table Based example shows exactly how to delete multiple rows from a data table...
    Hope this helps.
    Thanks,
    RK.

  • 2012 NLB Best Practice (Single vs Multiple NICs)?

    Our environment has used an NLB configuration with two NICs for years: one NIC for the host itself and one for the NLB. We have also been running the NLB in multicast mode. Starting with 2008, we began adding the cluster's MAC address as an ARP entry on our layer-three switch. Each server participating in the NLB is on VMware.
    Can someone advise what the best procedure is for handling NLB these days? Although initial tests with one NIC seem to be working, I do notice that we get a popup warning on the participant servers when launching NLB Manager ("Running NLB Manager on a system with all networks bound to NLB might not work as expected") if they are set to run in unicast mode.
    With that said, should we not be running multicast?  Will that present problems down the road?

    Hi enoobmot11,
    You can refer the following KB and the VMware requirement KB:
    Network Load Balancing Best practices
    https://technet.microsoft.com/en-us/library/cc740265%28v=ws.10%29.aspx?f=255&MSPPError=-2147217396
    Multiple network adapters
    https://technet.microsoft.com/en-us/library/cc784848(v=ws.10).aspx
    The VMware KB:
    Microsoft Network Load Balancing Multicast and Unicast operation modes (1006580)
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006580
    Sample Configuration - Network Load Balancing (NLB) Multicast Mode Configuration (1006558)
    http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1006558
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data, they insist that I save and provide scripts for every single commit, in the proper order, necessary to both build the tables and insert the data from ground zero.
    I am very unaccustomed to this kind of environment, and it seems much riskier to me to try to rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document, and they use that document for the deployment.
    I believe their rationale is that they don't want to rely on backups but instead on a document that specifies each step needed to recreate everything.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of the DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further, ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in, ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as data in a warehouse generated by an ETL process, might be exempt; but the process that creates that data is not exempt - that process, and ultimately the data, must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business users, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed, there is a documented procedure for restoring the system to a valid working state.
    The deployments I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party assists in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree that for a simple scenario of 5 new tables and a small amount of data it may seem like overkill.
    But despite what you say, it simply cannot be that easy, for one simple reason: adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention what changes are being made to actually USE what you are adding.
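    For what it's worth, a minimal sketch of one such scripted, re-runnable deployment step (hypothetical table and file names, Oracle-style types assumed; the paired rollback script is what makes the recovery section of a deployment document concrete):

    -- deploy_001_status_lookup.sql
    CREATE TABLE status_lookup (
      status_id   NUMBER       PRIMARY KEY,
      status_name VARCHAR2(30) NOT NULL
    );
    INSERT INTO status_lookup (status_id, status_name) VALUES (1, 'OPEN');
    INSERT INTO status_lookup (status_id, status_name) VALUES (2, 'CLOSED');
    COMMIT;

    -- rollback_001_status_lookup.sql (the documented recovery step)
    -- DROP TABLE status_lookup;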

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company selling tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2500 tables in our database, and they all have the same columns, differing only in table name. Each device sends about 4K records per day, and currently each table holds from 10K to 300K records.
    What is the best practice for designing a database in this situation?
    When accessing the database from a C# application, which is better to use: direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    The table columns are:
    [MessageID], [MessageUnit], [MessageLong], [MessageLat], [MessageSpeed],
    [MessageTime], [MessageDate], [MessageHeading], [MessageSatNumber],
    [MessageInput], [MessageCreationDate], [MessageInput2], [MessageInput3], [MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I have come across in my 9 months at the company (I work as a software engineer, but I am planning to take over database maintenance, since no one is maintaining it right now and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate reports for the previous month for all their cars; some clients have 100+ cars, some have a few. This is when the real issue starts: they are pulling their data from our server over the internet while 2000 units are sending data to it, and they keep getting read timeouts, since SQL Server prioritizes the inserts and holds all SELECT commands. I solved it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve. The problem is that whoever wrote the C# app used hard-coded SQL statements, AND the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now, about reports: there are summary reports, stop reports, zone reports, etc. Most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that SELECT statements don't get kicked out in favor of INSERT commands. But does SQL Server automatically select from the snapshots, or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case, since our database size is 78GB.
    When I run code analysis on the app, Visual Studio tells me I'd be better off using stored procedures and views rather than hard-coded SELECT statements; what difference will this make in terms of performance?
    Thanks in advance.
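    For reference, a minimal sketch of the consolidated design this thread is circling around: one table for all devices keyed by a DeviceID column (hypothetical names, and assuming SQL Server 2005 or later, since row versioning does not exist before that). READ_COMMITTED_SNAPSHOT also answers the snapshot question above: once it is enabled at the database level, plain SELECTs read row versions automatically, with no code changes.

    -- One table for all devices instead of 2500 identical tables (hypothetical names)
    CREATE TABLE dbo.DeviceMessage (
        MessageID    bigint IDENTITY(1,1) PRIMARY KEY,
        DeviceID     int          NOT NULL,  -- replaces the per-device table name
        MessageLong  decimal(9,6) NULL,
        MessageLat   decimal(9,6) NULL,
        MessageSpeed smallint     NULL,
        MessageTime  datetime     NOT NULL
        -- ... remaining Message* columns as listed above
    );

    -- Supports the per-device, per-month report queries
    CREATE INDEX IX_DeviceMessage_Device_Time
        ON dbo.DeviceMessage (DeviceID, MessageTime);

    -- Readers stop blocking behind inserts once this is on
    -- (switching it requires a moment with no other active connections):
    ALTER DATABASE TrackingDb SET READ_COMMITTED_SNAPSHOT ON;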

  • Best Practice : how 2 fetch tables, views, ... names and schema

    hi,
    I am looking for the best practice for retrieving the catalog of a database.
    I have seen that I can run SELECTs against system tables (or views) such as DBA_TABLES, DBA_VIEWS, or DBA_CATALOG, but is that the best way to retrieve this information?
    (I ask this question because it seems strange to get the table names using a simple SELECT but to get column info using a specialized function, OCIDescribeAny(); this does not look like a coherent API...)
    thanks for your advice
    cd

    Along the same lines, why use OCIDescribeAny() instead of doing an appropriate SELECT against DBA_TAB_COLUMNS?
    cd
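    For the dictionary-view route, a minimal sketch (the ALL_* views show what the current user can see, while the DBA_* variants need elevated privileges; the SCOTT/EMP names are placeholders):

    -- Tables and views visible to the current user
    SELECT owner, table_name FROM all_tables WHERE owner = 'SCOTT';
    SELECT owner, view_name  FROM all_views  WHERE owner = 'SCOTT';

    -- Column metadata: the SELECT-based alternative to OCIDescribeAny()
    SELECT column_name, data_type, data_length, nullable
    FROM   all_tab_columns
    WHERE  owner = 'SCOTT' AND table_name = 'EMP'
    ORDER  BY column_id;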
