Best way to bulk insert records into SQL Server?

  Hi, I have run into an issue inserting a large number of records (10 lakhs, i.e. about one million) into a SQL Server table using C#. The approaches I know of are SqlBulkCopy and parallel programming (PLINQ).
I would like to know which one is better performance-wise, and whether there is another approach such as SSIS.
Please suggest.
a.kesavamallikarjuna

Either use SqlBulkCopy or SSIS.
A parallel approach with 10 lakh rows will create a huge load on your SQL Server, which may lead to issues on the server side. This is not a blanket "don't do it", because with the appropriate table, index, and disk structure it may work.
SSIS is often the best solution, because it is built for exactly this kind of operation.
But concrete advice depends on your concrete problem (data format, recurrence, scheduling, etc.).
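If you go the SqlBulkCopy route, here is a minimal sketch; the connection string, table name, and column mappings are illustrative only, and the batch size should be tuned for your load:

    using System.Data;
    using System.Data.SqlClient;

    static void BulkInsert(DataTable rows)
    {
        using (var conn = new SqlConnection(@"Server=.;Database=MyDb;Integrated Security=true"))
        {
            conn.Open();
            using (var bulk = new SqlBulkCopy(conn))
            {
                bulk.DestinationTableName = "dbo.TargetTable"; // hypothetical target table
                bulk.BatchSize = 10000;       // commit in chunks rather than one huge transaction
                bulk.BulkCopyTimeout = 0;     // disable the timeout for very large loads
                // Map source columns to destination columns explicitly
                bulk.ColumnMappings.Add("Id", "Id");
                bulk.ColumnMappings.Add("Name", "Name");
                bulk.WriteToServer(rows);     // streams all rows to SQL Server
            }
        }
    }

If the data already sits in a file, SSIS (or BULK INSERT/bcp) avoids pulling it through the client at all.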

Similar Messages

  • Is there a way to bulk delete records

    It seems that I have a lot of duplicated records in my "Central" area, so I wanted to either filter by Area and then delete the duplicates, if there is a way to do that, or bulk delete every record that has "Central" in the Area column.
    Is that possible?

    Are you able to select more than 100 through the Content and Structure manager?
    OR
    I found a TechNet article that uses PowerShell to perform a bulk delete; it might be your best bet to start here:
    http://social.technet.microsoft.com/wiki/contents/articles/19036.sharepoint-using-powershell-to-perform-a-bulk-delete-operation.aspx
    Edit: is this you?
    http://sharepoint.stackexchange.com/questions/136778/is-there-a-way-to-bulk-delete-records ;)
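    For reference, a minimal PowerShell sketch of the kind of bulk delete that article performs (the site URL, list name, and column value are hypothetical; test against a non-production site first):

        Add-PSSnapin Microsoft.SharePoint.PowerShell
        $web  = Get-SPWeb "http://server/sites/mysite"
        $list = $web.Lists["MyList"]
        # Materialize the matches first so we don't modify the collection while enumerating it
        $toDelete = @($list.Items | Where-Object { $_["Area"] -eq "Central" })
        foreach ($item in $toDelete) { $item.Delete() }
        $web.Dispose()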

  • What is the best way to do voice recording in a Macbook Pro?

    What is the best way to do voice recording on a MacBook Pro? I want to record my voice and send it as an MP3 file.
    Thanks

    Deleting the application from your /Applications folder is sufficient. There are sample projects in /Library/Application/Aperture you may want to get rid of as well, as they take up a fair bit of space.

  • Best way to Fetch the record

    Hi,
    Please suggest the best way to fetch records from the table described below. It is Oracle 10gR2 on Linux.
    Whenever a client visits the office, a record is created for him. Company policy is to keep 10 years of data in the transaction table, and the table accumulates about 3 million records per year.
    The table has the following key columns for the SELECT (sample table):
    Client_Visit
    ID NUMBER(12,0) -- sequence-generated number
    EFF_DTE DATE -- effective date for the client (sometimes a client becomes invalid and later becomes valid again)
    CREATE_TS TIMESTAMP(6)
    CLIENT_ID NUMBER(9,0)
    CASCADE_FLG VARCHAR2(1)
    On most of the reports the records are fetched by MAX(eff_dte), MAX(create_ts), and cascade flag = 'Y'.
    I have the following two queries, but neither of them is cost-effective, and both take 8 minutes to display the records.
    Code 1:
    SELECT au_subtyp1.au_id_k,
           au_subtyp1.pgm_struct_id_k
      FROM au_subtyp au_subtyp1
     WHERE au_subtyp1.create_ts =
              (SELECT MAX (au_subtyp2.create_ts)
                 FROM au_subtyp au_subtyp2
                WHERE au_subtyp2.au_id_k = au_subtyp1.au_id_k
                  AND au_subtyp2.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                  AND au_subtyp2.eff_dte =
                         (SELECT MAX (au_subtyp3.eff_dte)
                            FROM au_subtyp au_subtyp3
                           WHERE au_subtyp3.au_id_k = au_subtyp2.au_id_k
                             AND au_subtyp3.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                             AND au_subtyp3.eff_dte <= TO_DATE ('2012-12-31', 'YYYY-MM-DD')))
       AND au_subtyp1.exists_flg = 'Y'
    Explain Plan
    Plan hash value: 2534321861
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  1 |  FILTER                  |           |       |       |       |            |          |
    |   2 |   HASH GROUP BY          |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  3 |    HASH JOIN             |           |  1404K|   121M|    19M| 33178   (1)| 00:06:39 |
    |*  4 |     HASH JOIN            |           |   307K|    16M|  8712K| 23708   (1)| 00:04:45 |
    |   5 |      VIEW                | VW_SQ_1   |   307K|  5104K|       | 13493   (1)| 00:02:42 |
    |   6 |       HASH GROUP BY      |           |   307K|    13M|   191M| 13493   (1)| 00:02:42 |
    |*  7 |        INDEX FULL SCAN   | AUSU_PK   |  2809K|   125M|       | 13493   (1)| 00:02:42 |
    |*  8 |      INDEX FAST FULL SCAN| AUSU_PK   |  2809K|   104M|       |  2977   (2)| 00:00:36 |
    |*  9 |     TABLE ACCESS FULL    | AU_SUBTYP |  1404K|    46M|       |  5336   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("AU_SUBTYP1"."CREATE_TS"=MAX("AU_SUBTYP2"."CREATE_TS"))
       3 - access("AU_SUBTYP2"."AU_ID_K"="AU_SUBTYP1"."AU_ID_K")
       4 - access("AU_SUBTYP2"."EFF_DTE"="VW_COL_1" AND "AU_ID_K"="AU_SUBTYP2"."AU_ID_K")
       7 - access("AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss') AND "AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
           filter("AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND
                  "AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       8 - filter("AU_SUBTYP2"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
       9 - filter("AU_SUBTYP1"."EXISTS_FLG"='Y')
    Code 2:
    I raised a thread a week back and Dom suggested the following query. Its cost is lower, but the elapsed time is the same and it uses the same amount of temp tablespace.
    SELECT au_id_k, pgm_struct_id_k
      FROM (SELECT au_id_k,
                   pgm_struct_id_k,
                   ROW_NUMBER() OVER (PARTITION BY au_id_k
                                      ORDER BY eff_dte DESC, create_ts DESC) rn,
                   create_ts, eff_dte, exists_flg
              FROM au_subtyp
             WHERE create_ts < TO_DATE('2013-01-01','YYYY-MM-DD')
               AND eff_dte  <= TO_DATE('2012-12-31','YYYY-MM-DD')) d
     WHERE rn = 1
       AND exists_flg = 'Y'
    --Explain Plan
    Plan hash value: 4039566059
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  1 |  VIEW                    |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  2 |   WINDOW SORT PUSHED RANK|           |  2809K|   133M|   365M| 40034   (1)| 00:08:01 |
    |*  3 |    TABLE ACCESS FULL     | AU_SUBTYP |  2809K|   133M|       |  5345   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("RN"=1 AND "EXISTS_FLG"='Y')
       2 - filter(ROW_NUMBER() OVER ( PARTITION BY "AU_ID_K" ORDER BY
                  INTERNAL_FUNCTION("EFF_DTE") DESC ,INTERNAL_FUNCTION("CREATE_TS") DESC )<=1)
       3 - filter("CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND "EFF_DTE"<=TO_DATE('
                   2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    Thanks,
    Vijay

    Hi Justin,
    Thanks for your reply. I am running this on our test environment, as I don't want to run it on production right now. The test environment holds 2,809,605 records (about 2.8 million).
    The query returns 281,699 records (about 282 thousand), so the selectivity is 0.099. There are 2,808,905 distinct combinations of create_ts, eff_dte, and exists_flg. I am sure the index scan is not going to help much, as you said.
    The core problem is that both queries use a lot of temp tablespace. When we use this query to join to other tables (which have the same design as below), the temp tablespace grows even bigger.
    Both the production and test environments are 3-node RAC.
    First Query...
    CPU used by this session     4740
    CPU used when call started     4740
    Cached Commit SCN referenced     21393
    DB time     4745
    OS Involuntary context switches     467
    OS Page reclaims     64253
    OS System time used     26
    OS User time used     4562
    OS Voluntary context switches     16
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     2487
    bytes sent via SQL*Net to client     15830
    calls to get snapshot scn: kcmgss     37
    consistent gets     52162
    consistent gets - examination     2
    consistent gets from cache     52162
    enqueue releases     19
    enqueue requests     19
    enqueue waits     1
    execute count     2
    ges messages sent     1
    global enqueue gets sync     19
    global enqueue releases     19
    index fast full scans (full)     1
    index scans kdiixs1     1
    no work - consistent read gets     52125
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time cpu     1
    parse time elapsed     1
    physical write IO requests     69
    physical write bytes     17522688
    physical write total IO requests     69
    physical write total bytes     17522688
    physical write total multi block requests     69
    physical writes     2139
    physical writes direct     2139
    physical writes direct temporary tablespace     2139
    physical writes non checkpoint     2139
    recursive calls     19
    recursive cpu usage     1
    session cursor cache hits     1
    session logical reads     52162
    sorts (memory)     2
    sorts (rows)     760
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     1
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     9
    Second Query
    CPU used by this session     1197
    CPU used when call started     1197
    Cached Commit SCN referenced     21393
    DB time     1201
    OS Involuntary context switches     8684
    OS Page reclaims     21769
    OS System time used     14
    OS User time used     1183
    OS Voluntary context switches     50
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     767
    bytes sent via SQL*Net to client     15745
    calls to get snapshot scn: kcmgss     17
    consistent gets     23871
    consistent gets from cache     23871
    db block gets     16
    db block gets from cache     16
    enqueue releases     25
    enqueue requests     25
    enqueue waits     1
    execute count     2
    free buffer requested     1
    ges messages sent     1
    global enqueue get time     1
    global enqueue gets sync     25
    global enqueue releases     25
    no work - consistent read gets     23856
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time elapsed     1
    physical read IO requests     27
    physical read bytes     6635520
    physical read total IO requests     27
    physical read total bytes     6635520
    physical read total multi block requests     27
    physical reads     810
    physical reads direct     810
    physical reads direct temporary tablespace     810
    physical write IO requests     117
    physical write bytes     24584192
    physical write total IO requests     117
    physical write total bytes     24584192
    physical write total multi block requests     117
    physical writes     3001
    physical writes direct     3001
    physical writes direct temporary tablespace     3001
    physical writes non checkpoint     3001
    recursive calls     25
    session cursor cache hits     1
    session logical reads     23887
    sorts (disk)     1
    sorts (memory)     2
    sorts (rows)     2810365
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     2
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     5
    Thanks,
    Vijay

  • Best Way To merge Customer Records

    Hi community,
    What is the best way to merge customer records for the same person who may have used different email addresses to correspond with the BC website?

    Not in BC, no. You would need to export a custom report, sort it in Excel, reshape it into the customer import format and import it back in, or create some API that goes through and cleans that up.

  • Best way to determine insertion order of items in cache for FIFO?

    I want to implement a FIFO queue. I plan on one producer placing unprocessed orders into a cache. Multiple consumers will then each invoke an EntryProcessor which gets the oldest unprocessed order, sets processed=true on it, and returns it. What's the best way to determine the oldest object based on insertion order? Should I timestamp the objects with a trigger when they're added to the cache and then index by that value? Or is there a better way, maybe something Coherence automatically saves when objects are inserted? Also, it's not critical that the processing order be precisely FIFO; close is good enough.
    Also, since the consumer won't know the key value for the object it will receive, how could the consumer call something like this so it doesn't violate Constraints on Re-entrant Calls? http://wiki.tangosol.com/display/COH34UG/Constraints+on+Re-entrant+Calls
    Thanks,
    Andrew

    Ok, I think I can see where you are coming from now...
    By using a queue for each FIX session you will be experiencing some latency as data is pushed around inside the cluster between the 'owning node' for the order and the location of the queue; but if this is acceptable then great. The number of hops within the cluster, and hence the latency, will depend on where and how you detect changes to your orders. The advantage of assigning specific orders to each queue is that this will not change should the cluster rebalance; however, you should consider what happens if the node controlling a specific FIX session is lost - do you recover from the FIX log? If so, where is that log kept? Remember to consider what happens if your cluster splits, such that the node with the FIX session is still alive but separated from the rest of the cluster. In examining these failure cases you may decide that it is easier to use Coherence's in-built partitioning to assign orders to sessions rather than an attribute of the order object.
    snidely_whiplash wrote:
    Only changes to orders which result in a new order or replace needing to be sent cause an action by the FIX session.
    There are several different mechanisms you could use to detect changes to your orders and hence decide if they need to be enqueued:
    1. Use a post trigger that is fired on order insert/update and performs the filtering of changes and if necessary adds the item to the FIX queue
    2. Use a cache store that does the same as (1)
    3. Use an entry processor to perform updates to the order object (as I believe you previously mentioned) and performs logic in (1)
    4. Use a CQC on the order cache
    5. A map listener on the order cache
    The big difference between 1-3 and 4, 5 is that the CQC is i) a SPOF ii) not likely located in the same place as your order object or the queue (assuming that queue is in fact an object in another cache), iii) asynchronously fired hence introducing latency. Also note that the CQC will store your order objects locally whereas a map listener will not.
    (1) and (3) will give you access to both old and new values should that be necessary for your filtering logic.
    Note you must be careful not to make any re-entrant calls with any of 1-3. That means if you are adding something to a FIX queue object in another cache (say using an entry processor) then it should be on a different cache service.
    snidely_whiplash wrote:
    If I move to a CacheStore based setup instead of the CQC based one, then any change to an order, including changes made when executions or rejects return on the FIX session, will result in the store() method being called, which means it will be called unnecessarily a lot. It would be nice if I could specify that the CacheStore only store() certain types of changes, i.e. those that would result in sending a FIX message. Anything like that possible?
    There is negligible overhead in Coherence calling your store() method; assuming that your code can decide if anything FIX-related needs to be done based only on the new value of the order object, this should be very fast indeed.
    snidely_whiplash wrote:
    What's a partitioned "token cache"?
    This is a technique I have used in the past for running services. You create a new partitioned cache into which you place 'tokens' representing a user-defined service that needs to be run. The insertion/deletion of a token in the backing map fires a backing map listener to start/stop a service (note there are 2 causes of insert/delete in a backing map: i) a user, ii) cluster repartitioning). In this case that service might be a FIX session. If you need to designate a specific member on which a service needs to run, you could add the member id to the token object; however, unless you write your own partitioning strategy, the token will likely not live on the cache member the token indicates, in which case you would want a full map listener or CQC to listen for tokens rather than a backing map listener.
    I hope that's useful rather than confusing!
    Paul
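    For what it's worth, a rough Java sketch of the claim step the original question describes: an entry processor that marks an order processed and returns it (the Order class and its accessors are hypothetical):

        import com.tangosol.util.InvocableMap;
        import com.tangosol.util.processor.AbstractProcessor;

        // Atomically claims an order: marks it processed and returns it,
        // or returns null if another consumer already claimed it.
        public class ClaimOrderProcessor extends AbstractProcessor {
            public Object process(InvocableMap.Entry entry) {
                Order order = (Order) entry.getValue();
                if (order == null || order.isProcessed()) {
                    return null;
                }
                order.setProcessed(true);
                entry.setValue(order);   // write the updated order back to the cache
                return order;
            }
        }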

  • Best way to generate one record per day from a table with eff/exp dates

    Hi,
    Have a table which has various attributes and eff and exp dates, e.g. attributea, 01/05/2012, 16/05/2012.
    We wish to create another table from this table with one record per day, e.g. 16 records.
    What is the best way to achieve this in OWB?
    Thanks

    Hi,
    For example, say we have a table with the following columns:
    conversion_rate NUMBER(6,4)
    effective_date DATE
    expiration_date DATE
    Example record: 1.43, 01/05/2012, 16/05/2012.
    We want another table which instead has 16 records, one for each day, e.g.
    1.43, 01/05/2012
    1.43, 02/05/2012
    ...
    1.43, 16/05/2012
    Thoughts on the best way to do this?
    Thanks
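    One way to do this in plain Oracle SQL (which an OWB view or mapping could be based on) is a row-generator join; a minimal sketch, assuming a hypothetical table RATES(conversion_rate, effective_date, expiration_date):

        SELECT r.conversion_rate,
               r.effective_date + n.rn - 1 AS rate_date
          FROM rates r
          JOIN (SELECT LEVEL AS rn
                  FROM dual
                CONNECT BY LEVEL <= 1000) n   -- upper bound on the longest date range
            ON n.rn <= r.expiration_date - r.effective_date + 1
         ORDER BY r.conversion_rate, rate_date;

    For the example record this produces 16 rows, one per day from 01/05/2012 to 16/05/2012.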

  • Whats the best way to fix audio recorded on a smartphone?

    So I have to re-edit a previously finished video, and all I have to do is change the audio introduction. Unfortunately the VO person is out of town with no access to professional audio equipment; all he can do is record the intro on his phone (literally one line) and text it to me so I can insert it. The rest of the audio remains the same.
    We just did that, but as expected the audio doesn't sound as "clear" as the previously recorded material. I basically just have to insert the new audio line and keep the rest of the audio as last month's. I do have music and sound effects in the background, but since I'm not an expert on audio effects, my question to you fellow editors is: what audio effects or procedures do you recommend to clean up this audio? There is no background noise (it's clean); it just sounds a little "over-modulated", I guess - rough, as if it was recorded on a phone (well, it was!).
    If anybody has experience with a similar situation I'd sincerely appreciate the advice. Please keep in mind I'm not an expert on Audition, so if you can help me out, please do so with a very precise explanation.
    Thank you!

    It may sound a little like ryclark is joshing with you, but I'm afraid he isn't.
    You can't ever process a recording to make the quality better than the original. You may be able to reduce the background noise a little, and alter the frequency response, but the distortion inherent in it remains whatever you do - there's no such thing as an un-distortion plugin (and over-modulation is exactly that kind of distortion). And I have to say that anybody who eventually figures out how to carry out this seemingly impossible task will clean up; it could easily be the most expensive plugin ever! So as he says: either re-record the lot, or add some trash to what you've already got. Or do what I'd do, and get somebody completely different to re-do the introduction. That, I'm afraid, is the most precise explanation I can give you about what you should - or indeed have - to do.

  • Best way to create detail records based on data in the master record..

    Hi,
    I am using 11gR1P1 and the ADF stack.
    I have a master-detail relation between two entities, with EOs and an entity association. I want to create a number of detail records based on the data in the master record.
    For example:
    I have a Stay record which has begin and end dates. I need to create a DailyStay record for each day in the range between the begin and end dates of the Stay record.
    Where should I do this? In the EO implementation or in the VO implementation?
    Thanks

    Where should I do this? In EO implementation or in VO implementation?
    If your child record should in no case be created without those default values, then in the EO.
    Otherwise, to increase reusability of your EO, then in the VO over the view link (i.e. other VO's would still be able to use your EO without having the child VO created with these defaults).
    This way your ViewObjects can also be used in a context where you don't want to copy from the master.
    Frank, if there's another UI flow that is so drastically different that it does not want things defaulted, then it probably needs a different VO altogether (on the same underlying EO, though).
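    For illustration, a rough Java sketch of the VO-level variant (the view object, attribute names, and method placement are all hypothetical; in a real application this might live in an application module or view row implementation):

        import java.util.Calendar;
        import oracle.jbo.Row;
        import oracle.jbo.ViewObject;

        // Creates one DailyStay row per day between beginDate and endDate (inclusive).
        public void createDailyStays(ViewObject dailyStayVo,
                                     java.sql.Date beginDate,
                                     java.sql.Date endDate,
                                     Object stayId) {
            Calendar day = Calendar.getInstance();
            day.setTime(beginDate);
            while (!day.getTime().after(endDate)) {
                Row row = dailyStayVo.createRow();
                row.setAttribute("StayId", stayId);   // FK back to the master Stay row
                row.setAttribute("StayDate",
                    new oracle.jbo.domain.Date(new java.sql.Date(day.getTimeInMillis())));
                dailyStayVo.insertRow(row);
                day.add(Calendar.DATE, 1);            // advance to the next day
            }
        }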

  • Best way to do inserts

    What (or how) do the pros do inserts?
    What I have been doing is to talk on camera, then place the insert (image) over the correct part of my voice narration.
    Or is it better to shoot the segment separately and then insert it into the sequence later?
    Thanks!
    Doug

    Hi(Bonjour)!
    That depends on your needs.
    If you want a long slideshow with precisely paced narration: record your voice-over first, then adjust still durations to sync with the narration.
    If you need a long slideshow with exactly paced stills (fixed durations): design your still sequence first, then add the voice-over. Record the narration with the voice-over tool to get the correct pace.
    If you need only some stills over an existing video track: add the stills at the right place on V2 and record the voice-over.
    Michel Boissonneault

  • Best way to select a record to display in the report

    Hi,
    I am helping a customer get a report (v11) working in the Visual Studio 2008 viewer.
    The report, in which they have linked all the tables from the database, has a parameter called 'JobNo'.
    The record selection formula then uses the parameter to select the required job: ({uv_JobDetails.JobNumText} = {?JobNo})
    My question is how, from my ASP.NET (VB.NET) code, do I set this parameter and then refresh the report to show the correct data matching the parameter JobNo?
    Thanks

    Hi,
    Try with this:
    ReportDocument boreportdocument = new ReportDocument();
    boreportdocument.Load(Server.MapPath("Decreteparam.rpt"));
    string recordselection = "{Customer.Country} = " + "{?My Parameter}";
    boreportdocument.RecordSelectionFormula = recordselection;
    CrystalReportViewer1.ReportSource = boreportdocument;
    // or set the parameter value directly like this:
    boreportdocument.SetParameterValue("My Parameter", "USA");
    For a range parameter:
    boreportdocument.Load(Server.MapPath("RangeParam.rpt"));
    CrystalReportViewer1.ReportSource = boreportdocument;
    ParameterFields boparamfields = CrystalReportViewer1.ParameterFieldInfo;
    ParameterField boparamField = boparamfields["SalesRange"];
    ParameterValues boparamValues = boparamField.CurrentValues;
    ParameterRangeValue boparamrangevalue = new ParameterRangeValue();
    // setting the start and end values
    boparamrangevalue.StartValue = "27";
    boparamrangevalue.EndValue = "300";
    boparamrangevalue.LowerBoundType = RangeBoundType.BoundInclusive;
    boparamrangevalue.UpperBoundType = RangeBoundType.BoundInclusive;
    // adding the range value to the parameter field
    boparamValues.Add(boparamrangevalue);
    You can get sample code from https://boc.sdn.sap.com/codesamples
    Have a look at our Dev Library: https://www.sdn.sap.com/irj/boc/sdklibrary
    Regards,
    Shweta
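    Since the original question was about VB.NET and the {?JobNo} parameter, the discrete-parameter call would look roughly like this (a sketch only; the report file name and query-string key are hypothetical):

        Dim boreportdocument As New ReportDocument()
        boreportdocument.Load(Server.MapPath("JobReport.rpt"))  ' hypothetical report file
        boreportdocument.SetParameterValue("JobNo", Request.QueryString("jobNo"))
        CrystalReportViewer1.ReportSource = boreportdocument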

  • [solved] Best way to test for record in sqlite3 db?

    In a bash script, I need to know if a sqlite3 'SELECT ... WHERE' operation finds any matches.  I don't need the output; I just need to know if a match is found.  My thought was to do
    if [[ "$(sqlite3 ... "SELECT ..." )" ]];then ...
    Is there a better way to do it?
    Bonus question: if searching a primary key or unique column, is a ' SELECT ... WHERE foo="bar" ' operation smart enough to stop when it finds a match, or will it continue through the rest of the table?
    Thanks.

    @alphaniner: Primary key columns are usually indexed, hence a query for a primary key column will not traverse the table, but perform an index lookup. Since primary keys are unique this will end up at a single row.
    To test for existence of a row, you'd usually use "select exists(select 1 from <table> where <condition>)".  This will output 1, if the row exists, and 0 otherwise. I don't know whether the exit state of the "sqlite" tool is reliable, but testing the output will be with this query.
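    For example, a minimal sketch of that test in bash (the database file, table, and column names are hypothetical):

        # Prints "found" if at least one matching row exists.
        if [ "$(sqlite3 mydb.sqlite "SELECT EXISTS(SELECT 1 FROM foo WHERE bar='baz');")" = "1" ]; then
            echo "found"
        fi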
    And again: if you're doing this kind of stuff, you're better off with a real language with proper data types and the like. Python provides SQLite support in the standard library.

  • Best way to record Roland V-drums

    Hello to everyone!
    I am a new owner of a mac mini, as of today. It is a 1.83 GHZ Intel Core 2 duo w/ 2 GB of ram.
    The question I have pertains to recording. I own a Roland TD-10 Vdrum kit, and would love to be able to record into Garageband with it. My question is - What is going to be the best way for me to record?
    I believe I have 2 options - One would be to go with a Firewire interface Audio Interface, like this one:
    http://www.guitarcenter.com/M-Audio-FireWire-Solo-Mobile-Audio-Interface-102951837-i1154073.gc
    The other option would be to get a USB MIDI interface, like this.
    http://www.zzounds.com/item--MDOUNO
    I am very ignorant when it comes to this, and therefore, before purchasing anything, I am asking for advice. The TD-10 drum kit of mine has MIDI ports on it, as well as a line out.
    Any suggestions would be much appreciated.
    Zach

    If you want to record the sounds that your drum kit makes, as audio, you can get the audio interface (however, all you really need is an audio cable).
    If you want to play GB's (or any third-party) drums, get the MIDI interface. MIDI gives you some advantages, like being able to edit each note if you wish; plus there are many drum kits you can install and then play with your hardware.

  • What is the best approach to track deleted records

    Dear all,
    We have built a CMS platform based on a SQL Server 2012 table structure hosted in Azure.
    On top of this we have built some REST API methods in order to access the data from any type of client application.
    The issue we need to solve now is the best way to track deleted records, so that client applications get informed through the web service about data deleted from our CMS.
    We were thinking of 2 paths, actually:
    - having a kind of ghost table for each of our real tables, into which deleted records are inserted (physical delete). This would mean adding as many ghost tables as we have production tables.
    - adding an IsDeleted flag to each of our tables, which is set to true when a record is deleted from our CMS (logical delete). This would mean adding an IsDeleted field to each of our tables and creating/updating all our stored procedures and web services to take the new filter criterion into account when fetching records. Quite a huge job.
    Would there be any other approach?
    We are looking for the best solution with minimum impact on our current solution.
    Regards
    Your knowledge is enhanced by that of others.

    Hello,
    @Tom, based on your question:
    "The question would be what do you need to do with the deleted records and how long do you need to keep them?"
    When records are deleted, I simply want to delete them and inform any client application about the deleted items in order to keep the data in sync. I will not do any reporting on deleted data!
    The only reason for tracking deleted items is to inform client applications, through the web service sync, about the data to be ignored. Client applications keep a local cache of database records for performance reasons, and they must not use data from that local storage which the SQL Server on Azure has reported as deleted.
    Does this make sense?
    Regards
    Your knowledge is enhanced by that of others.
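    For reference, a minimal T-SQL sketch of the logical-delete option discussed above, plus the kind of sync query a client could call; all table and column names are hypothetical:

        -- One-time schema change per tracked table
        ALTER TABLE dbo.Articles
            ADD IsDeleted bit NOT NULL DEFAULT 0,
                DeletedAtUtc datetime2 NULL;

        -- "Delete" becomes an update
        UPDATE dbo.Articles
           SET IsDeleted = 1, DeletedAtUtc = SYSUTCDATETIME()
         WHERE Id = @Id;

        -- Sync endpoint: ids deleted since the client's last successful sync
        SELECT Id
          FROM dbo.Articles
         WHERE IsDeleted = 1
           AND DeletedAtUtc > @LastSyncUtc;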

  • Bulk insert into oracle using ssis

    Hi ,
    Can someone please suggest a way to bulk insert data into an Oracle database? I'm using OLE DB, which doesn't support bulk insert into Oracle.
    Please note I can't use the Oracle Attunity connectors, as they require Enterprise Edition and I only have Standard Edition, so that option is ruled out.
    Is there any other way I can accomplish a BULK insert?
    Please help me out.
    Thanks,
    Prabhu

    Hi Prabhu,
    I am very late to help you solve this, but the following is the solution to 'Bulk Insert into Oracle' that worked for me.
    To use the code below in an SSIS 2008 R2 Script Task component you need the following API references.
    Prerequisites:
    1. C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies\Microsoft.SQLServer.DTSRuntimeWrap.dll
    2. Install "Oracle Data Provider for .NET 11.2.0.1.0" and add a reference to Oracle.DataAccess.dll.
    /*
     * Microsoft SQL Server Integration Services Script Task
     * Write scripts using Microsoft Visual C# 2008.
     * The ScriptMain is the entry point class of the script.
     *
     * Description  : SQL to Oracle Bulk Copy/Insert
     * Created By   : Mitulkumar Brahmbhatt
     * Created Date : 08/14/2014
     * Modified Date   Modified By     Description
     */
    using System;
    using System.Data;
    using Microsoft.SqlServer.Dts.Runtime;
    using System.Windows.Forms;
    using Oracle.DataAccess.Client;
    using Microsoft.SqlServer.Dts.Runtime.Wrapper;
    using System.Data.OleDb;

    namespace ST_6e18a76102dd4312868504c4ef95279d.csproj
    {
        [System.AddIn.AddIn("ScriptMain", Version = "1.0", Publisher = "", Description = "")]
        public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
        {
            #region VSTA generated code
            enum ScriptResults
            {
                Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
                Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
            }
            #endregion

            public void Main()
            {
                ConnectionManager cm;
                IDTSConnectionManagerDatabaseParameters100 cmParams;
                OleDbConnection oledbConn;
                DataSet ds = new DataSet();
                string sql;
                try
                {
                    /********** Pull SQL source data into a DataSet *************/
                    cm = Dts.Connections["SRC_CONN"];
                    cmParams = cm.InnerObject as IDTSConnectionManagerDatabaseParameters100;
                    oledbConn = cmParams.GetConnectionForSchema() as OleDbConnection;
                    sql = @"SELECT * FROM [sourcetblname]";
                    OleDbCommand sqlComm = new OleDbCommand(sql, oledbConn);
                    OleDbDataAdapter da = new OleDbDataAdapter(sqlComm);
                    da.Fill(ds);
                    cm.ReleaseConnection(oledbConn);

                    /***************** Bulk insert into Oracle *********************/
                    cm = Dts.Connections["DEST_CONN"];
                    cmParams = cm.InnerObject as IDTSConnectionManagerDatabaseParameters100;
                    OleDbConnection destConn = cmParams.GetConnectionForSchema() as OleDbConnection;
                    string connStr = destConn.ConnectionString;
                    cm.ReleaseConnection(destConn);
                    sql = "destinationtblname";
                    // OracleBulkCopy needs an ODP.NET connection string, so strip the OLE DB provider token
                    using (OracleBulkCopy bulkCopy = new OracleBulkCopy(connStr.Replace("Provider=OraOLEDB.Oracle.1", "")))
                    {
                        bulkCopy.DestinationTableName = sql;
                        bulkCopy.BatchSize = 50000;
                        bulkCopy.BulkCopyTimeout = 20000;
                        bulkCopy.WriteToServer(ds.Tables[0]);
                    }

                    /***************** Return result - success *********************/
                    Dts.TaskResult = (int)ScriptResults.Success;
                }
                catch (Exception x)
                {
                    Dts.Events.FireError(0, "BulkCopyToOracle", x.Message, String.Empty, 0);
                    Dts.TaskResult = (int)ScriptResults.Failure;
                }
                finally
                {
                    ds.Dispose();
                }
            }
        }
    }
    Mitulkumar Brahmbhatt | Please mark the post(s) that answered your question.
