Best way to insert millions of records into SQL Azure on a daily basis?

I am maintaining millions of records in SQL Server 2008 R2 and now I intend to migrate them to SQL Azure.
In the existing system on SQL Server 2008 R2, a few SSIS packages and stored procedures first truncate the existing records and then insert into the table, which holds approx. 26 million records, in about 30 minutes on a daily basis (as the system demands).
Since migrating to SQL Azure, I am unable to perform these operations as fast as I did on SQL Server 2008, and sometimes I get a request timeout error.
While searching for a faster way, many people suggested batch processing or BCP. But batch processing is not suitable in my case because it takes too long to insert the records. I need a faster, more efficient approach on SQL Azure.
Hoping for some good suggestions.
Thanks in advance :)
Ashish Narnoli

+1 to Frank's advice.
Also, please upgrade your Azure SQL Database server to V12, as you will get higher performance on the premium tiers. As you scale up your database for the bulk insert, remember that SQL Database charges by the hour. To minimize costs, scale back down when the inserts have completed.
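
If you do use BCP for the load itself, its -b flag commits in batches of a chosen size, which keeps individual transactions small. For the scale-up/scale-down part of the advice, here is a minimal T-SQL sketch, assuming a hypothetical database named MyDb; treat the exact tier names as placeholders for whatever your workload needs (run against the master database):

    -- Scale up before the nightly bulk insert...
    ALTER DATABASE [MyDb] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P2');

    -- ...run the load on the higher tier, then scale back down.
    ALTER DATABASE [MyDb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S1');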

Similar Messages

  • Best way to insert millions of records into the table

    Hi,
From a performance point of view, I am looking for suggestions on the best way to insert millions of records into a table.
Also, please guide me on how to implement it in an easy way with good performance.
    Thanks,
    Orahar.

    Orahar wrote:
It's distributed data: N number of clients fetching transaction data from the database based on different conditions and inserting it into another transaction table, like a batch process.

That sounds contradictory.
If the source data is already in the database, it is centralised.
In that case you ideally do not want the overhead of shipping that data to a client, the client processing it, and the client shipping the results back to the database to be stored (inserted).
It is much faster and more scalable for the client to instruct the database (via a stored proc or package) what to do, and for that code (running on the database) to process the data.
For a stored proc, the same principle applies. It is faster for it to instruct the SQL engine what to do (via an INSERT..SELECT statement) than to pull the data from the SQL engine using a cursor fetch loop and then push that data back to the SQL engine using an insert statement.
    An INSERT..SELECT can also be done as a direct path insert. This introduces some limitations, but is faster than a normal insert.
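A minimal sketch of a direct-path INSERT..SELECT, using hypothetical txn_source/txn_target names:

    INSERT /*+ APPEND */ INTO txn_target   -- APPEND requests a direct-path insert
    SELECT *
      FROM txn_source
     WHERE batch_dte = TRUNC(SYSDATE);
    COMMIT;  -- the session cannot read txn_target again until it commits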
    If the data processing is too complex for an INSERT..SELECT, then pulling the data into PL/SQL, processing it there, and pushing it back into the database is the next best option. This should be done using bulk processing though in order to optimise the data transfer process between the PL/SQL and SQL engines.
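A minimal sketch of that bulk pattern, again with hypothetical names; BULK COLLECT ... LIMIT pulls rows into PL/SQL in chunks, and FORALL pushes each processed chunk back to the SQL engine in a single bulk bind:

    DECLARE
       TYPE id_tab IS TABLE OF txn_source.id%TYPE;
       l_ids id_tab;
       CURSOR c IS SELECT id FROM txn_source;
    BEGIN
       OPEN c;
       LOOP
          FETCH c BULK COLLECT INTO l_ids LIMIT 1000;  -- one round trip per 1000 rows
          EXIT WHEN l_ids.COUNT = 0;
          -- ... complex per-row processing would go here ...
          FORALL i IN 1 .. l_ids.COUNT
             INSERT INTO txn_target (id) VALUES (l_ids(i));
       END LOOP;
       CLOSE c;
       COMMIT;
    END;
    /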
Other performance considerations are the constraints on the insert table, the triggers, the indexes and so on. Make sure that data integrity is guaranteed (e.g. via PKs and FKs) and optimal (e.g. FK columns should be indexed). Using triggers may not be the best approach (for example, using a trigger to assign a sequence value when it can be done faster in the insert SQL itself). Personally, I avoid triggers - I would rather have that code residing in a PL/SQL API for manipulating data in that table.
The type of table also plays a role. Make sure that the decision about the table structure - hashed, indexed, partitioned, etc. - is the optimal one for the data that is to reside in that table.

  • What is the best approach to insert millions of records?

    Hi,
What is the best approach to insert millions of records into a table?
If an error occurs while inserting, how do we know which record failed?
    Thanks & Regards,
    Sunita

    Hello 942793
There isn't a best approach if you do not give us the requirements and the environment...
It depends on what "best" means for you.
Questions:
1.) Can you disable the constraints / unique indexes on the table?
2.) Is there a possibility to run parallel queries?
3.) Do you need to know which rows could not be inserted when the constraints are enabled, or is that not necessary?
4.) Do you need it fast, or do you have time to do it?
What does "best approach" mean for you?
    Regards,
    David
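
Since the thread also asks how to know which record failed, Oracle's DML error logging is one common answer; a minimal sketch with hypothetical source_t/target_t tables (DBMS_ERRLOG creates the ERR$_TARGET_T log table by default):

    BEGIN
       DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'TARGET_T');
    END;
    /
    INSERT INTO target_t
    SELECT * FROM source_t
    LOG ERRORS INTO err$_target_t ('nightly_load') REJECT LIMIT UNLIMITED;
    -- rejected rows land in err$_target_t with their ORA- error codes
    -- instead of aborting the whole statement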

  • Best way to Fetch the record

    Hi,
Please suggest the best way to fetch records from the table designed below. It is Oracle 10gR2 on Linux.
Whenever a client visits the office, a record is created for him. The company policy is to maintain 10 years of data in the transaction table, and the table accumulates about 3 million records per year.
The table has the following key columns for the SELECT (sample table):
Client_Visit
ID          NUMBER(12,0)  -- sequence-generated number
EFF_DTE     DATE          -- effective date of the customer (sometimes the client becomes invalid and later becomes valid again)
CREATE_TS   TIMESTAMP(6)
CLIENT_ID   NUMBER(9,0)
CASCADE_FLG VARCHAR2(1)
On most of the reports, the records are fetched by MAX(eff_dte) and MAX(create_ts) with cascade flag = 'Y'.
I have the following two queries, but both are costly and take 8 minutes to display the records.
    Code 1:
SELECT au_subtyp1.au_id_k,
       au_subtyp1.pgm_struct_id_k
  FROM au_subtyp au_subtyp1
 WHERE au_subtyp1.create_ts =
          (SELECT MAX (au_subtyp2.create_ts)
             FROM au_subtyp au_subtyp2
            WHERE au_subtyp2.au_id_k = au_subtyp1.au_id_k
              AND au_subtyp2.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
              AND au_subtyp2.eff_dte =
                     (SELECT MAX (au_subtyp3.eff_dte)
                        FROM au_subtyp au_subtyp3
                       WHERE au_subtyp3.au_id_k = au_subtyp2.au_id_k
                         AND au_subtyp3.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                         AND au_subtyp3.eff_dte <= TO_DATE ('2012-12-31', 'YYYY-MM-DD')))
   AND au_subtyp1.exists_flg = 'Y'
    Explain Plan
    Plan hash value: 2534321861
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  1 |  FILTER                  |           |       |       |       |            |          |
    |   2 |   HASH GROUP BY          |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  3 |    HASH JOIN             |           |  1404K|   121M|    19M| 33178   (1)| 00:06:39 |
    |*  4 |     HASH JOIN            |           |   307K|    16M|  8712K| 23708   (1)| 00:04:45 |
    |   5 |      VIEW                | VW_SQ_1   |   307K|  5104K|       | 13493   (1)| 00:02:42 |
    |   6 |       HASH GROUP BY      |           |   307K|    13M|   191M| 13493   (1)| 00:02:42 |
    |*  7 |        INDEX FULL SCAN   | AUSU_PK   |  2809K|   125M|       | 13493   (1)| 00:02:42 |
    |*  8 |      INDEX FAST FULL SCAN| AUSU_PK   |  2809K|   104M|       |  2977   (2)| 00:00:36 |
    |*  9 |     TABLE ACCESS FULL    | AU_SUBTYP |  1404K|    46M|       |  5336   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("AU_SUBTYP1"."CREATE_TS"=MAX("AU_SUBTYP2"."CREATE_TS"))
       3 - access("AU_SUBTYP2"."AU_ID_K"="AU_SUBTYP1"."AU_ID_K")
       4 - access("AU_SUBTYP2"."EFF_DTE"="VW_COL_1" AND "AU_ID_K"="AU_SUBTYP2"."AU_ID_K")
       7 - access("AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss') AND "AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
           filter("AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND
                  "AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       8 - filter("AU_SUBTYP2"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
   9 - filter("AU_SUBTYP1"."EXISTS_FLG"='Y')
Code 2:
I raised a thread a week back and Dom suggested the following query; its cost estimate is lower, but the performance is the same and it uses the same amount of temp tablespace:
SELECT au_id_k, pgm_struct_id_k
  FROM (SELECT au_id_k,
               pgm_struct_id_k,
               ROW_NUMBER() OVER (PARTITION BY au_id_k
                                  ORDER BY eff_dte DESC, create_ts DESC) rn,
               create_ts, eff_dte, exists_flg
          FROM au_subtyp
         WHERE create_ts < TO_DATE('2013-01-01', 'YYYY-MM-DD')
           AND eff_dte  <= TO_DATE('2012-12-31', 'YYYY-MM-DD')) d
 WHERE rn = 1
   AND exists_flg = 'Y'
    --Explain Plan
    Plan hash value: 4039566059
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  1 |  VIEW                    |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  2 |   WINDOW SORT PUSHED RANK|           |  2809K|   133M|   365M| 40034   (1)| 00:08:01 |
    |*  3 |    TABLE ACCESS FULL     | AU_SUBTYP |  2809K|   133M|       |  5345   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("RN"=1 AND "EXISTS_FLG"='Y')
       2 - filter(ROW_NUMBER() OVER ( PARTITION BY "AU_ID_K" ORDER BY
                  INTERNAL_FUNCTION("EFF_DTE") DESC ,INTERNAL_FUNCTION("CREATE_TS") DESC )<=1)
   3 - filter("CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND "EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
Thanks,
    Vijay

    Hi Justin,
Thanks for your reply. I am running this on our test environment, as I don't want to run it on the production environment right now. The test environment holds 2,809,605 records (about 2.8 million).
The query returns 281,699 rows (about 280 thousand), a selectivity of roughly 0.1. There are 2,808,905 distinct combinations of create_ts, eff_dte, and exists_flg. I am sure the index scan is not going to help much, as you said.
The core problem is that both queries use a lot of temp tablespace. When we use this query to join to other tables (the other table has the same design as below), the temp tablespace grows even bigger.
Both the production and test environments are 3-node RAC.
    First Query...
    CPU used by this session     4740
    CPU used when call started     4740
    Cached Commit SCN referenced     21393
    DB time     4745
    OS Involuntary context switches     467
    OS Page reclaims     64253
    OS System time used     26
    OS User time used     4562
    OS Voluntary context switches     16
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     2487
    bytes sent via SQL*Net to client     15830
    calls to get snapshot scn: kcmgss     37
    consistent gets     52162
    consistent gets - examination     2
    consistent gets from cache     52162
    enqueue releases     19
    enqueue requests     19
    enqueue waits     1
    execute count     2
    ges messages sent     1
    global enqueue gets sync     19
    global enqueue releases     19
    index fast full scans (full)     1
    index scans kdiixs1     1
    no work - consistent read gets     52125
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time cpu     1
    parse time elapsed     1
    physical write IO requests     69
    physical write bytes     17522688
    physical write total IO requests     69
    physical write total bytes     17522688
    physical write total multi block requests     69
    physical writes     2139
    physical writes direct     2139
    physical writes direct temporary tablespace     2139
    physical writes non checkpoint     2139
    recursive calls     19
    recursive cpu usage     1
    session cursor cache hits     1
    session logical reads     52162
    sorts (memory)     2
    sorts (rows)     760
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     1
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     9
    Second Query
    CPU used by this session     1197
    CPU used when call started     1197
    Cached Commit SCN referenced     21393
    DB time     1201
    OS Involuntary context switches     8684
    OS Page reclaims     21769
    OS System time used     14
    OS User time used     1183
    OS Voluntary context switches     50
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     767
    bytes sent via SQL*Net to client     15745
    calls to get snapshot scn: kcmgss     17
    consistent gets     23871
    consistent gets from cache     23871
    db block gets     16
    db block gets from cache     16
    enqueue releases     25
    enqueue requests     25
    enqueue waits     1
    execute count     2
    free buffer requested     1
    ges messages sent     1
    global enqueue get time     1
    global enqueue gets sync     25
    global enqueue releases     25
    no work - consistent read gets     23856
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time elapsed     1
    physical read IO requests     27
    physical read bytes     6635520
    physical read total IO requests     27
    physical read total bytes     6635520
    physical read total multi block requests     27
    physical reads     810
    physical reads direct     810
    physical reads direct temporary tablespace     810
    physical write IO requests     117
    physical write bytes     24584192
    physical write total IO requests     117
    physical write total bytes     24584192
    physical write total multi block requests     117
    physical writes     3001
    physical writes direct     3001
    physical writes direct temporary tablespace     3001
    physical writes non checkpoint     3001
    recursive calls     25
    session cursor cache hits     1
    session logical reads     23887
    sorts (disk)     1
    sorts (memory)     2
    sorts (rows)     2810365
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     2
    user calls     11
    workarea executions - onepass     1
workarea executions - optimal     5
Thanks,
    Vijay

  • In Acrobat Professional 8, what is the best way to insert/combine multiple PDFs together in a large volume?

In Acrobat Professional 8, what is the best way to insert/combine multiple PDFs together in a large volume?
We have 300 PDF reports and need to insert a 2-page cover page in front of each report. Not sure if batch processing is best?
Thanks for any tips.

Probably each cover page is different too. I would probably just bite the bullet and do each individually. I would create the 2 cover pages in Word or another word processor and print to cover.pdf. Then open a PDF, use Pages > Insert Pages to add cover.pdf to the front of the open PDF, and save over the current PDF. Then repeat 299 times. Each time, you would make the appropriate change to the DOC file and print a new cover.pdf file (you might want to turn off "open in Acrobat" in the printer properties to save time). It is probably a good idea to keep a list of the files to check off what has been done (you can generate a list in DOS by Start > cmd, changing the directory to your location [cd path], and doing "dir >> list.txt"; that gives you a list to use). There may be an easier way, but by the time you get it figured out you might be done this way.

  • What is the best way to do voice recording on a MacBook Pro?

What is the best way to do voice recording on a MacBook Pro? I want to record my voice and send it as an MP3 file.
Thanks

    Deleting the application from your /Applications folder is sufficient. There are sample projects in /Library/Application/Aperture you may want to get rid of as well, as they take up a fair bit of space.

  • What is the best way of inserting using structured or binary type in Oracle

    What is the best way to insert data using structured or binary storage in an Oracle XML DB 11g database?

    SQL*Loader.

  • Best way to insert 300 dpi photos into an Illustrator file for print

I am creating a 5.5 x 8 postcard-style card for print. I need to place 15 photographs that are approx. 1.5 x 2.5 each. What is the best way to insert these files so that my file doesn't become too large? I will be making a PDF of the final artwork to send to the printer. Your guidance is much appreciated!!

Link, don't embed, the images. Use a layered format such as .psd, .tif, or .pdf. PSD is not lossy; .tif can be lossy if you modify the default settings; with .pdf you choose how much compression you want, and a small amount of compression can make a much smaller file size.
I would recommend using .psd in CMYK color space. When you make your PDF for printing is when you will decide how much compression to apply and which PDF preset to use (press-ready or PDF/X should be a good choice). That will be your important choice.

  • Best way to insert a handmade signature

I would like to know the best way to insert a handmade signature into a PDF document in Acrobat.
Best regards
P.S. A tutorial, perhaps?

Yep, your link is now fixed. What most likely happened is this: when you click on the Link tool in the forum and the window opens, the http:// is already in the window. Most likely you clicked at the beginning of the http and pasted your link.
When this window opens, I always triple-click on the http:// so everything is selected, then hit Delete (I use a Mac; on a PC it would be Backspace) so the window is completely empty, and then I paste the link.
But occasionally I let it happen myself too.

  • How to insert records using a SQL query

    hi,
I insert records using a SQL query from a .NET program successfully and the doc num increases, but when I try to add a record using SAP B1, the old doc no shows. It seems SAP B1 does not take into account the docnums that were inserted by the SQL query.
How can I solve this problem?
    Regards,
    M.Thippa Reddy

You are not supposed to use INSERT / UPDATE / DELETE on SAP databases directly.
SAP will not support any database which is inconsistent due to SQL queries that modify datasets or the data structure of the SAP Business One database. This includes any update, delete or drop statements executed via SQL Server tools or via the query interface of SAP Business One.
    Check SAP Note: 896891 Support Scope for SAP Business One - DB Integrity
    [https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/smb_searchnotes/display.htm?note_langu=E&note_numm=896891]

  • What is the best way to insert massive data into an existing Excel file?

    dear gurus,
I am wondering what the best way is to insert massive amounts of data into an existing Excel file, mainly from a performance perspective.
The file is read from BDS, and we want to insert data into it.
The ways I can think of are:
1. OLE automation
   I think performance will be a big problem.
2. Office integration
   I am not sure whether it faces the same performance issue?
3. XXL_SIMPLE_API/FULL_API
   I am not sure whether they can insert data into an existing Excel file.
Could you please give me some advice?
    br.
    jun

    Hi,
If you want to APPEND data (add data to an existing Excel file) from SAP, then use the GUI_DOWNLOAD FM with the APPEND = 'X' parameter.
    Best regards,
    Prashant

  • Best way to merge customer records

    Hi community,
What is the best way to merge customer records for the same person who may have used different email IDs to correspond with a BC website?

Not in BC, no. You would need to export a custom report, sort it in Excel, and then turn it into a customer import and bring it back in, or create some API that goes through and cleans it up.

  • Best way to Insert Records into a DB

    hi
I have around 400000 (4 lakh) records to insert into a DB. What's the optimal way to do it?
I tried doing it using threads and gained only 2 seconds for 4 lakh records. Please suggest a better way.

Very hard thread; the little information you give us does not help me understand the problem.
Where are the 400K input records?
Must those records only be added to the output table?
How many rows and how many indexes does the output table have?
The cost of each insert also depends on how many indexes the DBMS has to maintain on the output table.
In general, the time it takes to add a large number of rows to a table depends on many variables (hardware performance, DBMS performance, and so on).
If you only have to insert, and your input records are already in another table, I think a high-performance way to do it is an insert-select, i.e.:
insert into output select * from input
If your input records are in a text file, the best way is to use the native DBMS importer.
Let me know something more...
Regards

  • The best way to insert values in a Nested Table

    Hi!
I want to insert the values from a SQL query into a nested table. What's the best way to do it?
So far, the only way I've found is to run the query and insert its results into the nested table row by row. For instance:
FOR cur_row IN (SELECT id, nstreet FROM example WHERE id = 3) LOOP
   -- here I'm inserting the values of the query into the nested table.
   -- VarNestedTable is a nested table of Row_Type.
   -- Row_Type is an object with two fields: Id, Nstreet
   VarNestedTable.EXTEND;
   VarNestedTable(VarNestedTable.LAST) := Row_Type(cur_row.id, cur_row.nstreet);
END LOOP;
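
A single BULK COLLECT is usually simpler and faster than the row-by-row loop, since the whole result set is fetched into the collection in one operation; a minimal sketch, assuming the Row_Type object and example table above (Row_Tab is a hypothetical collection type):

    DECLARE
       TYPE Row_Tab IS TABLE OF Row_Type;
       VarNestedTable Row_Tab;
    BEGIN
       SELECT Row_Type(id, nstreet)
         BULK COLLECT INTO VarNestedTable
         FROM example
        WHERE id = 3;
    END;
    /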

    How to Use Tables: Creating a Table Model.
    very bad example:
import java.util.List;
import javax.swing.table.AbstractTableModel;

class DataObject {
    String name, age, numberOfMonkeys;
}

class DOTableModel extends AbstractTableModel {
    static final String[] COLUMN_HEADER = { "Name", "Age", "Monkeys" };
    List<DataObject> dataObjects;

    public String getColumnName(int column) {
        return COLUMN_HEADER[column];
    }
    public int getRowCount() {
        return dataObjects.size();
    }
    public int getColumnCount() {
        return COLUMN_HEADER.length;
    }
    public Object getValueAt(int row, int column) {
        DataObject obj = dataObjects.get(row);  // 'do' is a reserved word in Java
        switch (column) {
            case 0: return obj.name;
            case 1: return obj.age;
            case 2: return obj.numberOfMonkeys;
            default: return null;
        }
    }
}

  • Best way to push changed data from SQL Server to a Windows/web application

I apologize; I do not know whether I should ask this question in this forum or not.
I have a Windows app which initially loads all data from the DB and displays it in a grid. From then on, when any data in the DB is changed or newly inserted, only the changed or newly inserted data needs to be pushed from the DB side to my Windows app. The only thing that comes to mind is the SqlDependency class, but the problem with SqlDependency is that it notifies the client without saying which data was updated or inserted.
So I am looking for guidance on the best and easiest way to push data from SQL Server to a Windows or web client.
There are two issues:
1) How to detect a data change or insert; I guess that can be handled by a trigger.
2) The tougher part: how to easily push that data from the SQL Server end to the Windows app end.
So I am looking for expert guidance. Thanks.

Hello,
Yes, you can create a DML trigger on INSERT and UPDATE to get the changed data into a temp table, and then query the temp table from the application.
If you use SQL Server 2008 or a later version, you can also try Change Data Capture, which can track insert, update, and delete activity applied to a SQL Server table and store the changed values in the change table.
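
A minimal sketch of enabling CDC, assuming a hypothetical dbo.Orders table; the changed rows (with operation codes) then appear in cdc.dbo_Orders_CT:

    EXEC sys.sp_cdc_enable_db;            -- once per database

    EXEC sys.sp_cdc_enable_table
         @source_schema = N'dbo',
         @source_name   = N'Orders',      -- hypothetical tracked table
         @role_name     = NULL;           -- NULL: no gating role required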
    Regards,
Fanny Liu
TechNet Community Support
