40 million records in a repository. Possible?

Hi,
Our client wants to load approximately 40 million records into a material repository. SAP has only tested the material repository with 1 million records. Is it even possible to load this many records? Has anyone done anything like this before and would like to share their experience?
Regards,

Hello mdm3north,
Are you sure that the 40 million records are really clean and contain no duplicates?
I find it hard to imagine how anyone would work with a repository that size.
However, from my past experience: the material master is usually split into logical groups, and each person works with just one group of materials.
So one solution may be to split the materials into logical groups and create a separate repository for each group.
Regards
Kanstantsin Chernichenka

Similar Messages

  • Internal Table with 22 Million Records

    Hello,
    I am faced with the problem of working with an internal table which has 22 million records and it keeps growing. The following code has been written in an APD. I have tried every possible way to optimize the coding using Sorted/Hashed Tables but it ends in a dump as a result of insufficient memory.
    Any tips on how I can optimize my coding? I have attached the Short-Dump.
    Thanks,
    SD
      DATA: ls_source TYPE y_source_fields,
            ls_target TYPE y_target_fields.
      DATA: it_source_tmp TYPE yt_source_fields,
            et_target_tmp TYPE yt_target_fields.
      TYPES: BEGIN OF IT_TAB1,
              BPARTNER TYPE /BI0/OIBPARTNER,
              DATEBIRTH TYPE /BI0/OIDATEBIRTH,
              ALTER TYPE /GKV/BW01_ALTER,
              ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
              END OF IT_TAB1.
      DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
            WITH NON-UNIQUE KEY BPARTNER,
            WA_XX_TAB1 TYPE IT_TAB1.
      it_source_tmp[] = it_source[].
      SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
      DELETE ADJACENT DUPLICATES FROM it_source_tmp
                            COMPARING /B99/S_BWPKKD.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
      LOOP AT it_source INTO ls_source.
        READ TABLE IT_XX_TAB1
          INTO WA_XX_TAB1
          WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
        IF sy-subrc = 0.
          ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
        ENDIF.
        MOVE-CORRESPONDING ls_source TO ls_target.
        APPEND ls_target TO et_target.
        CLEAR ls_target.
      ENDLOOP.

    Hi SD,
    Please put the SELECT query inside the condition below:
    IF it_source_tmp[] IS NOT INITIAL.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
    ENDIF.
    This should solve your performance issue: when the internal table it_source_tmp has no records, the FOR ALL ENTRIES SELECT fetches all records from the database. With this condition in place, no records are selected when the driver table is empty.
    Regards,
    Pravin

  • Need to generate a report for 30 Million records

    Hi Gurus,
    We have a requirement wherein we need to generate a report with 30-32 million records on a monthly basis and store the report in an external system via FTP. We estimated the size of the file to be around 2.5 GB. Is it possible to save the file as a PDF? Or is it possible to store such a file in any other file type?
    Kindly let me know..
    Cheers...
    Nip

    Hi,
    If you are using 7.0 then you can save the file as a PDF. I would also suggest precalculating when running the report.
    Cheers,
    Kedar

  • Fast Updates for 8 million records..!!

    Hi All,
    I was wondering, is there any fast method for updating 8 million records in a 10 million row table?
    For example:
    I have a customer table of 10m records, with columns cust_id, cust_num and cust_name.
    I need to update 8m of the 10m records as follows:
    update customer set cust_id=46 where cust_id=75;
    The above statement will update 8m records. And cust_id is indexed.
    But if I fire the above update statement, we'll run into rollback segment problems.
    Even if I use ROWNUM and commit after every 100K records, it's still going to take a huge amount of time. I also know CTAS would be a lot faster, but for this scenario I guess it's not possible. Right?
    Any help is much appreciated...
    Thanks.

    You didn't specify what version you're on, but have you looked at dbms_redefinition?
    create table cust (cust_num number, constraint cust_pk primary key (cust_num), cust_id number, name varchar2(10));
    create index cust_id_idx on cust(cust_id);
    insert into cust values( 1, 1, 'a');
    insert into cust values( 2, 2, 'b');
    insert into cust values( 3, 1, 'c');
    insert into cust values( 4, 4, 'd');
    insert into cust values( 5, 1, 'e');
    select * From cust;
    create table cust_int (cust_num number, cust_id number, name varchar2(10));
    exec dbms_redefinition.start_redef_table(user,'cust', 'cust_int', 'cust_num cust_num, decode(cust_id, 1, 99, cust_id) cust_id, name name');
    declare
    i pls_integer;
    begin
    dbms_redefinition.copy_table_dependents( user, 'cust', 'cust_int',
      copy_indexes=>dbms_redefinition.cons_orig_params,
      copy_triggers=>true, copy_constraints=>true, copy_privileges=>true,
      ignore_errors=>false, num_errors=>i);
    dbms_output.put_line('Errors: ' || i);
    end;
    exec dbms_redefinition.finish_redef_table(user, 'cust', 'cust_int');
    select * From cust;
    select table_name, index_name from user_indexes;
    You would probably want to run a sync_interim_table in there before the finish.
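    Something like this, if I have the call right (same table names as above):
    exec dbms_redefinition.sync_interim_table(user, 'cust', 'cust_int');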
    Good luck.

  • Increase performance of a query on more than 10 million records significantly

    The story is :
    Every day there are more than 10 million records, with the data in text file format (.csv (comma separated value) extension, or others).
    Example textfiles name is transaction.csv
    Phone_Number
    6281381789999
    658889999888
    618887897
    etc .. more than 10 million rows
    From transaction.csv the data is then split into 3 RAM (memory) tables:
    1st. table nation (nation_id, nation_desc)
    2nd. table operator(operator_id, operator_desc)
    3rd. table area(area_id, area_desc)
    Then these 3 RAM tables are queried to produce the physical table EXT_TRANSACTION (on hard disk).
    The resulting physical external Oracle table EXT_TRANSACTION has these columns:
    Phone_Number Nation_Desc Operator_Desc Area_Desc
    ======================================
    6281381789999 INA SMP SBY
    So : Textfiles (transaction.csv) --> RAM tables --> Oracle tables (EXT_TRANSACTION)
    The first 2 digits are the nation_id, the next 4 digits the operator_id, and the next 2 digits the area_id.
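    In SQL terms, the split I mean is something like this (offsets assuming exactly 2+4+2 leading digits):
    select substr(phone_number, 1, 2) as nation_id,
           substr(phone_number, 3, 4) as operator_id,
           substr(phone_number, 7, 2) as area_id
      from ext_transaction;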
    I have heard that, to increase performance significantly, there is a technique to create tables in memory (RAM) and not on hard disk.
    Any advice would be very much appreciated.
    Thanks.

    Oracle uses sophisticated algorithms for various memory caches, including buffering data in memory. It is described in Oracle® Database Concepts.
    You can tell Oracle via the CACHE table clause to keep blocks for that table in the buffer cache (refer to the URL for the technical details of how this is done).
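    For example, a minimal sketch (the table name is a placeholder; the clause applies to ordinary heap tables such as small lookup tables):
    alter table my_lookup_table cache;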
    However, this means less of the buffer cache is available to cache other often-used data. So this approach could make accessing one table a bit faster at the expense of making access to other tables slower.
    This is a balancing act - how much can one "interfere" with the cache before affecting and downgrading performance? Oracle also recommends that this type of "forced" caching is used for small lookup tables. It is not a good idea to use this on large tables.
    As for your problem - why do you assume that keeping data in memory will make processing faster? That is a very limited approach. Memory is a resource that is in high demand. It is a very finite resource. It needs to be carefully spent to get the best and optimal performance.
    The buffer cache is designed to cache "hot" (often accessed) data blocks. So in all likelihood, telling Oracle to cache a table you use a lot is not going to make it faster. Oracle is already caching the hot data blocks as best possible.
    You also need to consider what the actual performance problem is. If your process needs to crunch tons of data, it is going to be slow. Throwing more memory will be treating the symptom - not the actual problem that tons of data are being processed.
    So you need to define the actual problem. Perhaps it is not slow I/O - there could be a user-defined PL/SQL function used as part of the ELT process that causes the problem. Parallel processing could be used to do more I/O at the same time (assuming the I/O subsystem has the capacity). The process can perhaps be designed better - instead of multiple passes through a data set, crunching the same data (but different columns) again and again, do it in a single pass.
    10 million rows are nothing in terms of what Oracle can process on even a small server today. I have dual-CPU AMD servers doing over 2,000 inserts per second in a single process, and a Perl program making up to 1,000 PL/SQL procedure calls per second. Oracle is extremely capable - as is today's hardware and software. But that needs a sound software engineering approach. And that approach says that we first need to fully understand the problem before we can solve it, treating the cause and not the symptom.

  • Adding a new Big INT column to existing table in production, which holds 700 million records will impact anything in production?

    Hi Guys,
    I have to add a new BIGINT column to an existing table in production which holds 700 million records, and I would like to know the impact.
    I have been told by one of my colleagues that the last time they tried adding a column to the same table during working hours, it locked the table and impacted the users.
    Please suggest/share if anyone has had a similar experience.
    Thanks Shiven:) If Answer is Helpful, Please Vote

    If you add a new column to a table using an ALTER TABLE ADD command, specify that the new column allows NULLs, and do not define a default value, then it will take a table lock. However, once it gets the table lock, it will essentially run instantly and then free the table lock. That will add the new column as the last column in the table, for example:
    ALTER TABLE MyTable ADD MyNewColumn bigint NULL;
    But if your change adds a new column with a default value, or you do something like using the table designer to add the new column in the middle of the current list of columns, then SQL will have to rewrite the table. So it will get a table lock, rewrite the whole table and then free the table lock. That will take a considerable amount of time, and the table lock will be held for that whole period.
    But no matter how you make the change, if at all possible, I would not alter a table schema on a production database during working hours. Do it when nothing else is going on.
    Tom

  • Xsd validation in Database for 1 million record

    Hello All,
    I would like to know the pros and cons of doing XSD validation of a million records in an 11g database and, if possible, the processing time taken to do XSD validation for a million records.
    What would be a good datatype to load this XML file of a million records: should it be BLOB/CLOB or VARCHAR2(200000000)?
    Thanks.

    varchar2(200000000)? SQL VARCHAR2 is limited to 4000 bytes,
    and PL/SQL VARCHAR2 is limited to 32767, so that declaration is not possible; for a file that large you would need a CLOB.
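    For the validation itself, a minimal sketch (assuming the XSD is already registered via DBMS_XMLSCHEMA and the documents sit in a CLOB column; all names are placeholders):
    select count(*)
      from xml_docs d
     where xmltype(d.doc_clob).isschemavalid('http://example.com/million.xsd') = 1;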

  • How to process million records

    Hi ,
    How would you process 50 million records in BDC without running out of the allowed background processing time? Please help: is there any other process for doing this?
    Moderator message: too vague, help not possible, please describe problems in all technical detail when posting again.
    [Asking Good Questions in the Forums to get Good Answers|/people/rob.burbank/blog/2010/05/12/asking-good-questions-in-the-forums-to-get-good-answers]
    Edited by: Thomas Zloch on Dec 9, 2010 9:40 AM

    Hi,
    I am not sure, but please check the link below; it might be useful for you.
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/c0d1e9c4-bfb4-2c10-76b4-d5e2912b83be
    Thanks,
    Jaten Sangal

  • Loading 3 million records in database via external table

    I am loading 3+ million records into a database by using external tables. It is a very slow process. How can I make this process fast?

    Hi,
    1. Break the file down into several files, let's say 10 files (300,000 records each)
    2. Disable all indexes on the target table if possible
    3. Disable foreign keys if possible; besides, you can check these later using an exceptions table
    4. Make sure FREELISTS and INITRANS are 10 for the target table if the table resides in a manual segment space management tablespace
    5. Create 10 processes, each reading from its own file, run these 10 processes concurrently, and use the log errors facility with an unlimited reject limit so the inserts run to completion (see the sketch below)
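    A rough sketch of step 5 for one session (table and file names are placeholders; ERR$_TARGET_TABLE is the default name DBMS_ERRLOG generates):
    -- once, up front
    exec dbms_errlog.create_error_log('TARGET_TABLE');
    -- then in each of the 10 concurrent sessions, against its own external table
    insert into target_table
    select * from ext_file_01
    log errors into err$_target_table ('file 01') reject limit unlimited;
    commit;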
    Hope this helps.

  • How can I update a particular column in a 7 million record table when there are many conditions to satisfy?

    I am designing a table for which I am loading the data from different tables using joins. I have a Status column, for which I have about 16 different statuses from different tables; for each case I have a condition, and if it is satisfied then the particular status should show in the Status column, so I need to write the query as 16 different cases.
    Now, my question is: what is the best way to write these cases so that all the conditions are satisfied and the data gets to the table quickly? The data we are getting is mostly from big tables of about 7 million records, and if we write the logic as a CASE it will scan the table for each case, about 16 times. How can I do this faster? Can anyone help me out?

    Here is the code I have written to get the data from temp tables, which take records from the 7 million row table filtered to records of year 2013. This is taking more than an hour to run. I am posting the part of the code which is running slow, mainly the part for the Status column.
    SELECT
    z.SYSTEMNAME
    --,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
    --else NULL
    --End AS SubSystemName
    , CASE
    WHEN z.TAX_ID IN
    (SELECT DISTINCT zxc.TIN
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE zxc.[SubSystem Name] <> 'NULL')
    THEN
    (SELECT DISTINCT [Subsystem Name]
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE z.TAX_ID = zxc.TIN)
    End As SubSYSTEMNAME
    ,z.PROVIDERNAME
    ,z.STATECODE
    ,z.TAX_ID
    ,z.SRC_PAR_CD
    ,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
    , CASE
    WHEN z.SRC_PAR_CD IN ('E','O','S','W')
    THEN 'Nonpar Waiver'
    -- --Is Puerto Rico of Lifesynch
    WHEN z.TAX_ID IN
    (SELECT DISTINCT a.TAX_ID
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.Bucket <> 'Nonpar')
    THEN
    (SELECT DISTINCT a.Bucket
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.TAX_ID = z.TAX_ID)
    --**Amendment Mailed**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT b.PROV_TIN
    FROM .dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
    where not exists (select * from dbo.sqs_objector_TINs t where b.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN
    (SELECT DISTINCT b.Mailing
    FROM .dbo.SQS_Mailed_TINs_010614 b
    WHERE z.TAX_ID = b.PROV_TIN)
    -- --**Amendment Mailed Wave 3-5**
    WHEN z.TAX_ID In
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (3rd Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (3rd Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (4th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (4th Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (5th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (5th Wave)'
    -- --**Top Objecting Systems**
    WHEN z.SYSTEMNAME IN
    ('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION','BANNER HEALTH SYSTEM')
    THEN 'Top Objecting Systems'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Top Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H'
    THEN 'Top Objecting Systems'
    -- --**Other Objecting Hospitals**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Other Objecting Hospitals'
    -- --**Objecting Physicians**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE obj.[Objector?] in ('Objector','Top Objector')
    and z.TAX_ID = obj.TIN
    and z.Hosp_Ind = 'P'))
    THEN 'Objecting Physicians'
    --****Rejecting Hospitals****
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Rejecting Hospitals'
    --****Rejecting Physicians****
    WHEN
    (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE z.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector')
    and z.Hosp_Ind = 'P')
    THEN 'Rejecting Physicians'
    ----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
    -- --**Non-Objecting Hospitals**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    WHERE
    (z.TAX_ID = h.TAX_ID)
    OR h.SMG_ID IS NOT NULL)
    and z.Hosp_Ind = 'H'
    THEN 'Non-Objecting Hospitals'
    -- **Outstanding Contracts for Review**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Non-Objecting Bilateral Physicians'
    AND z.TAX_ID = qz.PROV_TIN)
    Then 'Non-Objecting Bilateral Physicians'
    When z.TAX_ID in
    (select distinct
    p.TAX_ID
    from dbo.SQS_CoC_Potential_Mail_List p
    where p.amendmentrights <> 'Unilateral'
    AND z.TAX_ID = p.TAX_ID)
    THEN 'Non-Objecting Bilateral Physicians'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'More Research Needed'
    AND qz.PROV_TIN = z.TAX_ID)
    THEN 'More Research Needed'
    WHEN z.TAX_ID IN (SELECT DISTINCT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.TAX_ID)
    THEN 'ERROR'
    else 'Market Review/Preparing to Mail'
    END AS [STATUS Column]
    Please suggest on this
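    One direction worth considering (a hedged sketch with placeholder names, not the original query): precompute each TAX_ID's status once into a work table built from the 16 lookup sources, then join it back in a single pass over the big table:
    SELECT z.TAX_ID,
           COALESCE(b.Bucket, 'Market Review/Preparing to Mail') AS [STATUS Column]
    FROM BigTable z
    LEFT JOIN #Buckets b  -- one precomputed row per TAX_ID with its winning status
           ON b.TAX_ID = z.TAX_ID;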

  • Deleting records from a table with 12 million records

    We need to delete some records on this table.
    SQL> desc CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak;
    Name Null? Type
    CLM_PMT_CHCK_NUM NOT NULL NUMBER(9)
    CLM_PMT_CHCK_ACCT NOT NULL VARCHAR2(5)
    CLM_PMT_PAYEE_POSTAL_EXT_CD VARCHAR2(4)
    CLM_PMT_CHCK_AMT NUMBER(9,2)
    CLM_PMT_CHCK_DT DATE
    CLM_PMT_PAYEE_NAME VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_1 VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_2 VARCHAR2(30)
    CLM_PMT_PAYEE_CITY VARCHAR2(19)
    CLM_PMT_PAYEE_STATE_CD CHAR(2)
    CLM_PMT_PAYEE_POSTAL_CD VARCHAR2(5)
    CLM_PMT_SUM_CHCK_IND CHAR(1)
    CLM_PMT_PAYEE_TYPE_CD CHAR(1)
    CLM_PMT_CHCK_STTS_CD CHAR(2)
    SYSTEM_INSERT_DT DATE
    SYSTEM_UPDATE_DT
    I only need to delete the records based on this condition
    select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
    This table has 12 million records.
    Please advise
    Regards,
    Narayan

    DELETE FROM CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;

  • Which is the Best way to upload BP for 3+ million records??

    Hello Gurus,
    We have 3+ million records of data to be uploaded into CRM, coming from Informatica. Which is the best way to upload the data into CRM that takes the least time and is easy? Please help me.
    Thanks,
    Naresh.

    You can do this with BAPI BAPI_BUPA_FS_CREATE_FROM_DATA2.

  • Best way to insert millions of records in SQL Azure on a daily basis?

    I am maintaining millions of records in SQL Server 2008 R2 and now I intend to migrate these to SQL Azure.
    In the existing system with SQL Server 2008 R2, a few SSIS packages and stored procedures first truncate the existing records and then perform the insert operation on the table, which holds approx 26 million records, in 30 mins on a daily basis (as the system demands).
    When I migrate these to SQL Azure, I am unable to perform these operations as fast as I did in SQL 2008. Sometimes I get a request timeout error.
    While searching for a faster way, many suggest batch processing or BCP. But batch processing is NOT suitable in my case because it takes too much time to insert those records. I require some faster and more efficient way on SQL Azure.
    Hoping for some good suggestions.
    Thanks in advance :)
    Ashish Narnoli

    +1 to Frank's advice.
    Also, please upgrade your Azure SQL Database server to V12, as you will receive higher performance on the premium tiers. As you scale up your database for your bulk insert, remember that SQL Database charges by the hour. To minimize costs, scale back down when the inserts have completed.
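    A rough sketch of that scale-up/scale-down in T-SQL (database name and tiers are placeholders):
    -- before the bulk load
    ALTER DATABASE MyDb MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P2');
    -- ... run the inserts ...
    -- after the load completes, scale back down to limit cost
    ALTER DATABASE MyDb MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');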

  • ABAP Proxy for 10 million records

    Hi,
    I am running an extract program for my inventory, which has about 10 million records.
    I am sending the data through ABAP proxy, and the job is cancelled due to a memory problem.
    I am breaking up the records while sending through ABAP proxy, sending about 2000 proxy calls by splitting up the number of records.
    Do you think ABAP proxy would be able to handle 10 million records?
    Any advice would be highly appreciated.
    Thanks and Best Regards,
    M-

    Hi,
    I am facing the same problem. My temporary solution is to break up the selected data into portions of 30,000 records and send those portions by ABAP proxy to PI.
    I think the problem lies in the ABAP-to-XML conversion (CALL TRANSFORMATION) within the proxy.
    Although breaking up the data seems to work for me for now, it gives me another issue: I have to combine the data back again in PI.
    So now I am thinking of saving all the records as a dataset file on the application server and using the file adapter instead.
    Regards,
    Arjan Aalbers

  • Problem with Fetching Million Records from Table COEP into an Internal Table

    Hi Everyone ! Hope things are going well.
           Table : COEP has 6 million records.
    I am trying to get records based on certain criteria; that is, there are at least 5 conditions in the WHERE clause.
    I've noticed it takes about 15 minutes to populate the internal table. How can i improve the performance to less than a minute for a fetch of 500 records from a database set of 6 million?
    Regards,
    Owais...

    The first obvious suggestion would be to use the proper indexes. I had a similar issue with COVP, which is a join of COEP and COBK. I got a substantial performance improvement by adding "WHERE LEDNR EQ '00'" to the WHERE clause.
    Here is my select:
              SELECT kokrs
                     belnr
                     buzei
                     ebeln
                     ebelp
                     wkgbtr
                     refbn
                     bukrs
                     gjahr
                FROM covp CLIENT SPECIFIED
                INTO TABLE i_coep
                 FOR ALL ENTRIES IN i_objnr
               WHERE mandt EQ sy-mandt
                 AND lednr EQ '00'
                 AND objnr = i_objnr-objnr
                 AND kokrs = c_conarea.
