Issue of handling 10 million records

Dears,
The program is executed online and needs to handle 10 million records.
If I select the data from the database directly, a TSV_TNEW_PAGE_ALLOC_FAILED short dump occurs.
ex: SELECT xxx
      FROM xxx
      INTO TABLE xxx.
If I select 1 million records from the database at a time and then process each block, a TIME_OUT short dump occurs.
ex: SELECT xxx
      FROM xxx
      INTO TABLE xxx
      PACKAGE SIZE 1000000.
      PERFORM xxxx.     " handle 1 million records
    ENDSELECT.
Could you please suggest a solution?
Thanks in advance.
Best regards
Lily

Please start with basic ABAP training before you want to SELECT 1 Mio records.
If it is good training, you will not SELECT 1 Mio records afterwards.
I would recommend starting with the chapter on WHERE conditions.
And please forget the parallel cursor; it comes at the end of the advanced training, under the title 'special and obsolete commands'.
Siegfried
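
A minimal sketch of the block-wise processing described above, combining PACKAGE SIZE with the WHERE restriction Siegfried recommends; the table, field, and selection names are placeholders and the package size is illustrative. Note that extractions of this size are normally scheduled as a background job, because dialog work processes are subject to the runtime limit (rdisp/max_wprun_time) behind the TIME_OUT dump.

DATA lt_block TYPE TABLE OF ztab.            " ztab, zdate, s_date, handle_block are placeholders

SELECT *                                     " in a real program, list only the fields you need
  FROM ztab
  INTO TABLE lt_block
  PACKAGE SIZE 100000                        " each pass fills lt_block with the next block only
  WHERE zdate IN s_date.                     " restrict the result set as far as possible

  PERFORM handle_block TABLES lt_block.      " process the block, then let the next pass overwrite it

ENDSELECT.

Because each pass of the SELECT ... ENDSELECT loop overwrites lt_block with the next package, memory stays bounded regardless of the total number of records.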

Similar Messages

  • Strange issue with a huge SSRS 2008 report (supposed to return 3 million records)

    Hi,
    NOTE: The strange part (as mentioned in title) will come in the end.
    I am running an SSRS 2008 report which fetches 3 million records from a remote server. After around 1 hour the report processing stops and I see an error icon at the bottom left of the browser window. When I click on it to see the error details, it shows a PageRequestManagerSQLErrorException with an unknown error message with code 12029 (sometimes 12002).
    When I see the reportserver logs there is an error message logged in it which says "Microsoft.ReportingServices.Library.ReportServerDatabaseUnavailableException: The report server cannot open a connection to the report server database. A connection to the
    database is required for all requests and processing. ---> Microsoft.ReportingServices.Library.ReportServerDatabaseUnavailableException: The report server cannot open a connection to the report server database. A connection to the database is required for
    all requests and processing. ---> System.InvalidOperationException: Timeout expired.  The timeout period elapsed prior to obtaining a connection from the pool.  This may have occurred because all pooled connections were in use and max pool size
    was reached."
    The <DatabaseQueryTimeOut> value in the report server configuration file is already set to 7200 seconds (2 hours).
    NOW, the strange part: when I open the ExecutionLog2 table in the ReportServer database, there is an entry for the same report with the status "success"!
    My head is spinning over this issue; somebody please come to the rescue.
    Regards.

    Not sure if this will help, but you might give it a try:
    1. Open the Registry Editor.
    2. Navigate to the registry path below.
    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\
    3. Create a DWORD value MaxUserPort.
    Value name
    MaxUserPort
    Value data (in Decimal)
    10000
    4. Restart the server.
    5. Check and confirm that the registry key ‘MaxUserPort’ is added.
    Do the same process as above for TcpTimedWaitDelay and reduce the value to 60. These steps increase the number of available user ports (MaxUserPort) and reduce the TIME_WAIT delay in order to free up connection resources on the server.

  • Performance issues in million records table

    I have a scenario with some 20 tables, each with a million or more historical records.
    On average I add 1,500 - 2,500 records a day, i.e. roughly a million records a year.
    I am looking for archival solutions for these master tables.
    Operations on the archival tables would be limited to reads.
    Expected usage:
    The user base would be around 2,500 users overall, with 300 - 500 parallel users at most.
    Very limited usage of historical data compared to operations on current data.
    Performance of operations on current data matters more than on historical data.
    Environment: Oracle 9i, migrating to Oracle 10g soon.
    Some solutions I could think of:
    [ 1 ] Put every archived record into an archival table and fetch it from there,
    i.e. clearly distinguish searches as current or archival prior to searching.
    The impact, I feel, is that the archival tables keep growing by approximately a million records a year.
    [ 2 ] Put records into separate archival tables, one per year.
    For instance, every year I replicate the set of tables and that year's data goes into that table.
    But how do I do a fetch?
    Note: I do have a unique way of identifying each record in my master tables - the primary key is based on a YYYYMMXXXXXXXXXX format, e.g. 2008070000562330. Will the year part help me in any way to pick the correct table?
    The major concern is that I currently get very good response times thanks to indexing and other common measures, but I do not want this to degrade in a year or more; I expect to improve on the current response times and to sustain them over time.
    Also, I don't want to change every query in my app unless there is no way out.

    Hi,
    Read the following documentation link about Partitioning in Oracle.
    Best Regards,
    Alex
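
    As a rough illustration of the suggestion above, this is what range partitioning one of these master tables by year could look like; the table, column, and partition names are hypothetical and would need to be adapted to the real model (the syntax below is plain range partitioning, available in Oracle 9i/10g):

    -- Hypothetical archival master table, range-partitioned by year.
    CREATE TABLE master_archive (
      record_id   VARCHAR2(16)  NOT NULL,    -- the YYYYMMXXXXXXXXXX key described above
      created_dt  DATE          NOT NULL,
      payload     VARCHAR2(4000),
      CONSTRAINT pk_master_archive PRIMARY KEY (record_id)
    )
    PARTITION BY RANGE (created_dt) (
      PARTITION p2007 VALUES LESS THAN (TO_DATE('2008-01-01', 'YYYY-MM-DD')),
      PARTITION p2008 VALUES LESS THAN (TO_DATE('2009-01-01', 'YYYY-MM-DD')),
      PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );

    Queries that filter on created_dt are pruned to the matching partition automatically, so the application SQL does not have to change, which addresses the concern above about not touching every query.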

  • Deleting 5 million records(slowness issue)

    Hi guys,
    We are trying to delete 5 million records with the following query; it is taking more than 2 hours:
    delete from <table_name> where date < condition_DT;
    FYI:
    * The table is partitioned
    * There is a primary key
    Please assist us with this.

    >
    We are trying to delete 5 million records with the following query; it is taking more than 2 hours:
    delete from <table_name> where date < condition_DT;
    FYI: the table is partitioned and has a primary key.
    >
    Nothing much you can do.
    About the only alternatives are:
    1) Create a new table that copies the records you want to keep, then drop the old table and rename the new one to the old name. If you are deleting most of the records this is a good approach (a sketch follows this list).
    2) Create a new table that copies the records you want to keep, then truncate the partitions of the old table and use partition exchange to put the data back.
    3) Delete the data in smaller batches of 100K records or so each. You could do this by using a different date value in the WHERE clause: delete data < 2003, then delete data < 2004, and so on.
    4) If you want to delete all data in a partition you can just truncate the partition. That is the approach to use if you partition by date and are trying to remove older data.
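
    A minimal sketch of option 1, with placeholder table and column names; indexes, constraints, and grants would still have to be recreated on the new table before the swap:

    -- Keep only the rows that should survive, then swap the tables.
    CREATE TABLE my_table_keep NOLOGGING AS
      SELECT * FROM my_table WHERE date_col >= DATE '2004-01-01';
    -- Recreate indexes, constraints, and grants on my_table_keep here, then:
    DROP TABLE my_table;
    ALTER TABLE my_table_keep RENAME TO my_table;

    With NOLOGGING the copy generates minimal redo, which is usually what makes this faster than deleting millions of rows in place.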

  • Internal Table with 22 Million Records

    Hello,
    I am faced with the problem of working with an internal table which has 22 million records and it keeps growing. The following code has been written in an APD. I have tried every possible way to optimize the coding using Sorted/Hashed Tables but it ends in a dump as a result of insufficient memory.
    Any tips on how I can optimize my coding? I have attached the Short-Dump.
    Thanks,
    SD
      DATA: ls_source TYPE y_source_fields,
            ls_target TYPE y_target_fields.
      DATA: it_source_tmp TYPE yt_source_fields,
            et_target_tmp TYPE yt_target_fields.
      TYPES: BEGIN OF IT_TAB1,
              BPARTNER TYPE /BI0/OIBPARTNER,
              DATEBIRTH TYPE /BI0/OIDATEBIRTH,
              ALTER TYPE /GKV/BW01_ALTER,
              ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
              END OF IT_TAB1.
      DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
            WITH NON-UNIQUE KEY BPARTNER,
            WA_XX_TAB1 TYPE IT_TAB1.
      it_source_tmp[] = it_source[].
      SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
      DELETE ADJACENT DUPLICATES FROM it_source_tmp
                            COMPARING /B99/S_BWPKKD.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
      LOOP AT it_source INTO ls_source.
        READ TABLE IT_XX_TAB1
          INTO WA_XX_TAB1
          WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
        IF sy-subrc = 0.
          ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
        ENDIF.
        MOVE-CORRESPONDING ls_source TO ls_target.
        APPEND ls_target TO et_target.
        CLEAR ls_target.
      ENDLOOP.

    Hi SD,
    Please put the SELECT query inside the condition below:
    IF it_source_tmp[] IS NOT INITIAL.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
    ENDIF.
    This will solve your performance issue. When the internal table it_source_tmp has no records, FOR ALL ENTRIES was fetching all the records from the database. With this condition, no records are selected if the driver table is empty.
    Regards,
    Pravin

  • ABAP Proxy for 10 million records

    Hi,
    I am running an extract program for my inventory which has about 10 million records.
    I am sending the data through an ABAP proxy, and the job is cancelled due to a memory problem.
    I am breaking up the records while sending through the ABAP proxy:
    I call the proxy about 2,000 times, splitting the records across the calls.
    Do you think an ABAP proxy would be able to handle 10 million records?
    Any advice would be highly appreciated.
    Thanks and Best Regards,
    M-

    Hi,
    I am facing the same problem. My temporary solution is to break up the selected data into portions of 30,000 records and send those portions by ABAP proxy to PI.
    I think the problem lies in the ABAP-to-XML conversion (CALL TRANSFORMATION) within the proxy.
    Although breaking up the data seems to work for me now, it gives me another issue: I have to combine the data back again in PI.
    So now I am thinking of saving all the records as a dataset file on the application server and using the file adapter instead.
    Regards,
    Arjan Aalbers
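
    A rough sketch of the splitting approach described above. The proxy class, message type, and method name are hypothetical placeholders for whatever the generated proxy actually provides, and lt_all stands for the already-filled extract table; only the chunking pattern itself is the point here.

    CONSTANTS c_pack_size TYPE i VALUE 30000.

    DATA: lo_proxy  TYPE REF TO zco_inventory_out,  " placeholder for the generated proxy class
          ls_output TYPE zinventory_msg,            " placeholder for the proxy message type
          lt_chunk  LIKE lt_all.                    " lt_all: the full extract, filled elsewhere

    CREATE OBJECT lo_proxy.

    WHILE lt_all IS NOT INITIAL.
      " Move the next portion into its own table and send it separately.
      APPEND LINES OF lt_all FROM 1 TO c_pack_size TO lt_chunk.
      DELETE lt_all FROM 1 TO c_pack_size.

      ls_output-records = lt_chunk.
      lo_proxy->execute_asynchronous( output = ls_output ).  " method name depends on the generated proxy
      COMMIT WORK.                                           " asynchronous proxy calls are only handed over on commit

      CLEAR: lt_chunk, ls_output.
    ENDWHILE.

    Keeping each portion small bounds the memory used by the ABAP-to-XML conversion, which is where the dumps described above tend to occur.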

  • Problem with Fetching Million Records from Table COEP into an Internal Tabl

    Hi everyone! Hope things are going well.
    Table COEP has 6 million records.
    I am trying to get records based on certain criteria; there are at least 5 conditions in the WHERE clause.
    I've noticed it takes about 15 minutes to populate the internal table. How can I improve the performance to less than a minute for a fetch of 500 records from a database set of 6 million?
    Regards,
    Owais...

    The first obvious suggestion would be to use the proper indexes. I had a similar issue with COVP, which is a join of COEP and COBK. I got a substantial performance improvement by adding "WHERE lednr EQ '00'" to the WHERE clause.
    Here is my select:
              SELECT kokrs
                     belnr
                     buzei
                     ebeln
                     ebelp
                     wkgbtr
                     refbn
                     bukrs
                     gjahr
                FROM covp CLIENT SPECIFIED
                INTO TABLE i_coep
                 FOR ALL ENTRIES IN i_objnr
               WHERE mandt EQ sy-mandt
                 AND lednr EQ '00'
                 AND objnr = i_objnr-objnr
                 AND kokrs = c_conarea.

  • Best way to update 8 out of 10 million records

    Hi friends,
    I want to update 8 million records of a table which has 10 million records. What could be the best strategy if the table has a BLOB column with 600GB worth of data? The BLOB itself is 550GB. I am not updating the BLOB column.
    Usually with non-BLOB data I have tried the "CREATE TABLE new_table AS SELECT <do the update 'here'> FROM old_table;" method.
    How should I approach this one?

    @Mark D Powell
    To give you some background: my client faced this problem a week ago. This is part of a daily cleanup activity.
    Right now I don't have access to the system due to a security issue. I could only take a few AWR reports and stats when the access window was open, so basically the next time I get access I want to close the issue once and for all.
    Coming to your questions:
    So what is wrong with just issuing an update to update all 8 million rows?
    In a previous run of a single update, with a full table scan in the plan and no parallel degree, it started reading from UNDO (current_obj# = -1 on the "db file sequential read" wait event) and errored out after 24 hours with a tablespace-full error on the tablespace which contains the BLOB data (a separate tablespace).
    To add to the problem, the redo log files were sized far too small, only about 50MB.
    The wait events (from DBA_HIST_ACTIVE_SESS_HISTORY) for the problematic SQL id show:
    - log file switch (checkpoint incomplete) and log file switch completion comprising 62% of the wait events
    - CPU 29%
    - db file sequential read 6%
    - direct path read 2%, and others contributing a little
    30% of the "db file sequential read" samples had current_obj# = -1 and p1 showing an undo file id.
    Is there any concurrent DML against this table? If not, the parallel DML would be an option though it may not really be needed. 
    I think there was in the previous run, and I have asked for it to be avoided in the next run.
    How large are the base table rows?
    AVG_ROW_LEN is 227
    How many indexes are affected by the update, if any?
    The last column of the primary key is the only column to be updated (I mean, used in the SET clause of the update).
    Do you expect the update will cause any row migration?
    Yes, I think so, because the only column which is going to be updated is the same column on which the table is partitioned.
    Now if there is a lot of concurrent DML on the table you probably want to use pl/sql so you can loop through the data issuing a commit every N rows so as to not lock other concurrent sessions out of the table for too long a period of time.  This may well depend on if you can write a driving cursor that can be restarted in the event of interruption and would skip over rows that have already been updated.  If not you might want to use a driving table to control the processing.
    Right now, to avoid the UNDO issue, I have suggested using the PL/SQL approach and have asked for the redo log size to be increased to at least 10 times more.
    My big question after seeing the wait event profile for the session is:
    Which was the main issue here: the redo log size, or the reading from UNDO that hit the update statement? The buffer gets had shot up to 600 million, and there are only 220k blocks in the table.
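
    For reference, a minimal sketch of the batched PL/SQL approach discussed above, with placeholder table, column, and predicate names and an illustrative batch size; the WHERE clause must exclude rows that have already been updated so that the loop terminates and the job can be restarted safely.

    DECLARE
      l_rows PLS_INTEGER;
    BEGIN
      LOOP
        UPDATE big_table t
           SET t.target_col = 'NEW_VALUE'         -- stands in for the real SET clause
         WHERE t.target_col <> 'NEW_VALUE'        -- must exclude rows already updated
           AND ROWNUM <= 100000;                  -- batch size: tune to the UNDO and redo you can afford
        l_rows := SQL%ROWCOUNT;
        COMMIT;                                   -- interim commit keeps each transaction's UNDO and redo small
        EXIT WHEN l_rows = 0;
      END LOOP;
    END;
    /

    As discussed above, this trades one large transaction for restartability concerns: if the job stops midway, some batches are committed and the rest are not, so the driving predicate has to make the process safe to resume.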

  • How to update a table that has 20 million records

    Hi,
    Let's consider the basic EMP table and assume that it has around 20 million records. We need to run an update statement. A normal UPDATE statement may hang the system or take a lot of time.
    The basic or normal update statement goes like this, and I suspect it may not work:
    update emp set hiredate = sysdate where comm is null and hiredate is null;
    Suggestions are needed.
    Regards,
    Vinesh

    sri wrote:
    > I heard Bulk Collect will resolve these types of issues, and I am really poor at Bulk Collect concepts.
    Exactly what type of issue are you concerned with? The business requirements here are pretty important: what problem is the UPDATE causing, specifically, that you are trying to work around?
    > So I am looking for a solution to the problem using Bulk Collect.
    Without knowing the problem, it's very tough to suggest a solution. If you process data in batches using BULK COLLECT, your UPDATE statement will take longer to run and will consume more resources on the database. If the problem you are trying to solve is that your UPDATE is not fast enough, this is a poor approach.
    On the other hand, if you process data in batches, and do interim commits, you can probably hold locks on individual rows for a shorter amount of time. That would only be a concern, though, if you have some other process that is trying to update the same rows that you are updating at the same time that you're updating them, which is pretty rare. And breaking your update into multiple transactions introduces a whole bunch of complexity. You now have to write a bunch of code to ensure that your process is restartable should the update fail mid-way through leaving some number of updates committed and some number rolled back. You have to have a very detailed understanding of the data and data consistency to ensure that breaking up the transaction isn't going to negatively impact any process, report, etc. To do it correctly is a pile of work and then it's something that is constantly at risk of creating problems in the future when requirements change.
    In the vast majority of cases, you're better off issuing a simple SQL statement during a time when the system isn't particularly busy.
    Justin

  • How can I update a particular column in a 7 million record table when there are many conditions involved?

    I am designing a table for which I am loading the data from different tables using joins. I have a Status column with about 16 different statuses coming from different tables; for each case I have a condition, and if it is satisfied, that particular status should show in the Status column. That way I need to write the query with 16 different cases.
    Now, my question is: what is the best way to write these cases so that all the conditions are satisfied and the data still gets into the table quickly? The data we are getting is mostly from big tables of about 7 million records. If we write the logic as a CASE expression, it will scan the table for each case, about 16 times. How can I do this faster? Can anyone help me out?

    Here is the code I have written to get the data from temp tables, which take records from the 7 million record tables, filtering on records for the year 2013. This is taking more than an hour to run. I am posting the part of the code which is running slow, mainly the part for the Status column.
    SELECT
    z.SYSTEMNAME
    --,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
    --else NULL
    --End AS SubSystemName
    , CASE
    WHEN z.TAX_ID IN
    (SELECT DISTINCT zxc.TIN
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE zxc.[SubSystem Name] <> 'NULL')
    THEN
    (SELECT DISTINCT [Subsystem Name]
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE z.TAX_ID = zxc.TIN)
    End As SubSYSTEMNAME
    ,z.PROVIDERNAME
    ,z.STATECODE
    ,z.TAX_ID
    ,z.SRC_PAR_CD
    ,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
    , CASE
    WHEN z.SRC_PAR_CD IN ('E','O','S','W')
    THEN 'Nonpar Waiver'
    -- --Is Puerto Rico of Lifesynch
    WHEN z.TAX_ID IN
    (SELECT DISTINCT a.TAX_ID
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.Bucket <> 'Nonpar')
    THEN
    (SELECT DISTINCT a.Bucket
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.TAX_ID = z.TAX_ID)
    --**Amendment Mailed**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT b.PROV_TIN
    FROM .dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
    where not exists (select * from dbo.sqs_objector_TINs t where b.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN
    (SELECT DISTINCT b.Mailing
    FROM .dbo.SQS_Mailed_TINs_010614 b
    WHERE z.TAX_ID = b.PROV_TIN)
    -- --**Amendment Mailed Wave 3-5**
    WHEN z.TAX_ID In
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (3rd Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (3rd Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (4th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (4th Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (5th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (5th Wave)'
    -- --**Top Objecting Systems**
    WHEN z.SYSTEMNAME IN
    ('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION','BANNER HEALTH SYSTEM')
    THEN 'Top Objecting Systems'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Top Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H'
    THEN 'Top Objecting Systems'
    -- --**Other Objecting Hospitals**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Other Objecting Hospitals'
    -- --**Objecting Physicians**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE obj.[Objector?] in ('Objector','Top Objector')
    and z.TAX_ID = obj.TIN
    and z.Hosp_Ind = 'P'))
    THEN 'Objecting Physicians'
    --****Rejecting Hospitals****
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Rejecting Hospitals'
    --****Rejecting Physciains****
    WHEN
    (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE z.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector')
    and z.Hosp_Ind = 'P')
    THEN 'REjecting Physicians'
    ----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
    -- --**Non-Objecting Hospitals**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    WHERE
    (z.TAX_ID = h.TAX_ID)
    OR h.SMG_ID IS NOT NULL)
    and z.Hosp_Ind = 'H'
    THEN 'Non-Objecting Hospitals'
    -- **Outstanding Contracts for Review**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Non-Objecting Bilateral Physicians'
    AND z.TAX_ID = qz.PROV_TIN)
    Then 'Non-Objecting Bilateral Physicians'
    When z.TAX_ID in
    (select distinct
    p.TAX_ID
    from dbo.SQS_CoC_Potential_Mail_List p
    where p.amendmentrights <> 'Unilateral'
    AND z.TAX_ID = p.TAX_ID)
    THEN 'Non-Objecting Bilateral Physicians'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'More Research Needed'
    AND qz.PROV_TIN = z.TAX_ID)
    THEN 'More Research Needed'
    WHEN z.TAX_ID IN (SELECT DISTINCT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.TAX_ID)
    THEN 'ERROR'
    else 'Market Review/Preparing to Mail'
    END AS [STATUS Column]
    Please advise on this.

  • Deleting records from a table with 12 million records

    We need to delete some records on this table.
    SQL> desc CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak;
    Name Null? Type
    CLM_PMT_CHCK_NUM NOT NULL NUMBER(9)
    CLM_PMT_CHCK_ACCT NOT NULL VARCHAR2(5)
    CLM_PMT_PAYEE_POSTAL_EXT_CD VARCHAR2(4)
    CLM_PMT_CHCK_AMT NUMBER(9,2)
    CLM_PMT_CHCK_DT DATE
    CLM_PMT_PAYEE_NAME VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_1 VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_2 VARCHAR2(30)
    CLM_PMT_PAYEE_CITY VARCHAR2(19)
    CLM_PMT_PAYEE_STATE_CD CHAR(2)
    CLM_PMT_PAYEE_POSTAL_CD VARCHAR2(5)
    CLM_PMT_SUM_CHCK_IND CHAR(1)
    CLM_PMT_PAYEE_TYPE_CD CHAR(1)
    CLM_PMT_CHCK_STTS_CD CHAR(2)
    SYSTEM_INSERT_DT DATE
    SYSTEM_UPDATE_DT
    I only need to delete the records based on this condition
    select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
    This table has 12 million records.
    Please advise
    Regards,
    Narayan

    user7202581 wrote:
    > We need to delete some records from this table (CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak, 12 million records) based on this condition:
    > select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    > where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
    DELETE FROM CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    WHERE CLM_PMT_CHCK_ACCT = '00107'
      AND CLM_PMT_CHCK_NUM >= 002196611
      AND CLM_PMT_CHCK_NUM <= 002197018;

  • Which is the best way to upload BPs for 3+ million records?

    Hello Gurus,
    We have 3+ million records of data to be uploaded into CRM, coming from Informatica. Which is the best way to upload the data into CRM that takes the least time and is easy? Please help me.
    Thanks,
    Naresh.

    You can do this with BAPI BAPI_BUPA_FS_CREATE_FROM_DATA2.

  • Issue when number of records in a DSO exceeds DTP package size

    Hi all,
    I'm having a strange problem with the package size of my DTPs.
    I move data from one DSO to another while performing some transformations in an expert routine. The DTP has a package size of 6 million records.
    When the number of records in the first DSO exceeds the package size, some of the records seem not to be processed properly by the transformation and this forces me to choose a very large number for the package size so that the ABAP code in the transformation is processed for all the records of the source DSO.
    I can't understand why this is happening because package size is only supposed to determine the number of records to be processed in a single step and nothing else.
    Am I right?
    Thanks

    My tip would also be the summary item

  • Issue while Creating the records in OAF by disabling one field.

    Hi Experts,
    I have a scenario like this:
    From the OAF page I want to create a record while omitting one field (it is disabled on the OAF page).
    Consider the following scenario.
    I have the fields Empno, Empname, Salary and Job on the OAF page, where I made the Job field disabled; the style I am using for the Job field is messageTextInput and I have set its initial value to 'Manager'.
    I am facing an issue while creating records from the OAF page without the Job field value, i.e. 'Manager': the records are not inserted into my tables. Whereas if I enable the Job field on the OAF page (meaning I enter 'Manager' on the page), I am able to create the records from the OAF page and they get inserted into the database.
    Could anyone suggest where I am going wrong, as this is a priority issue for me?
    Any suggestion would be a great help for me.
    Thanks,
    Murugesh.

    Or you can default it in the controller by handling the add-row event:
    if (vo.hasNext())
    {
      vorow = vo.next();
      vorow.setAttribute("xxxx", 3838);
    }
    --Prasanna

  • Best way to Insert Millions records in SQL Azure on daily basis?

    I am maintaining millions of records in SQL Server 2008 R2 and now I intend to migrate them to SQL Azure.
    In the existing system with SQL Server 2008 R2, a few SSIS packages and stored procedures first truncate the existing records and then perform the insert operation on the table, which holds approx. 26 million records, in 30 minutes on a daily basis (as the system demands).
    When I migrate to SQL Azure, I am unable to perform these operations as fast as I did on SQL Server 2008. Sometimes I get a request timeout error.
    While searching for a faster way, many suggestions point to batch processing or BCP. But batch processing is NOT suitable in my case because it takes too long to insert those records. I need some faster and more efficient way on SQL Azure.
    Hoping for some good suggestions.
    Thanks in advance :)
    Ashish Narnoli

    +1 to Frank's advice.
    Also, please upgrade your Azure SQL Database server to V12, as you will receive higher performance on the premium tiers. As you scale up your database for your bulk insert, remember that SQL Database charges by the hour. To minimize costs, scale back down when the inserts have completed.
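
    A minimal sketch of the scale-up / scale-down pattern described above, run from the logical server's master database; the database name and service objectives are placeholders for whatever tier the workload actually needs, and the change completes asynchronously, so the load should wait until the new tier is active:

    -- Before the daily bulk insert: scale up to a higher tier.
    ALTER DATABASE [MyAzureDb] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P2');

    -- ... run the truncate and bulk insert here ...

    -- After the load completes: scale back down to control cost.
    ALTER DATABASE [MyAzureDb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');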
