SQL question: how to handle groups of records at a time.

Hi,
I have a SQL statement that looks like the following:
insert into INVALID_DATES_TMP
(id, gid, created, t_rowid, updated)
select id, gid, created, rowid, updated
from TABLE1
where fix_invalid_date_pkg.is_date_invalid('TABLE1', 'CREATED', ROWID) = 'Y';
COMMIT;
What the above SQL does is select all rows from TABLE1 where the CREATED column
has invalid dates and insert them into the INVALID_DATES_TMP table, so we can process/fix
those invalid dates from the temp table. The problem is our DBA said TABLE1 can have
millions of rows, so the above SQL can be very database intensive. So I need to
figure out another way that can handle chunks of rows at a time from TABLE1.
Any ideas are appreciated!
Thank you,
Radhika.

Hello,
in general, INSERT AS SELECT is the fastest method to insert into a table.
Perhaps you can use a direct-path load (the APPEND hint)?
Other options (an INSERT in a loop, or BULK COLLECT + FORALL) are slower.
I think this method is optimal.
Another question is the function itself. It is not clear whether it searches for the invalid dates optimally. I strongly suspect that the function uses dynamic SQL.
Why? It is better to search statically. Or do you use this function for many other columns? Could you also post the function?
Regards
Dmytro
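For reference, a minimal sketch of both approaches, assuming the same table, column, and package names as above: the single direct-path INSERT ... SELECT that Dmytro recommends, and a chunked BULK COLLECT + FORALL loop in case the per-transaction undo must be capped.
-- Variant 1: direct-path insert (usually fastest, generates less undo).
-- Note: after an APPEND insert the session must COMMIT before it can
-- query the target table again.
INSERT /*+ APPEND */ INTO invalid_dates_tmp
  (id, gid, created, t_rowid, updated)
SELECT id, gid, created, rowid, updated
  FROM table1
 WHERE fix_invalid_date_pkg.is_date_invalid('TABLE1', 'CREATED', rowid) = 'Y';
COMMIT;
-- Variant 2: handle chunks of rows at a time (usually slower overall,
-- but each COMMIT caps the undo/redo of one transaction). Assumes the
-- column order of invalid_dates_tmp matches the cursor's select list.
DECLARE
  CURSOR c_bad IS
    SELECT id, gid, created, rowid AS t_rowid, updated
      FROM table1
     WHERE fix_invalid_date_pkg.is_date_invalid('TABLE1', 'CREATED', rowid) = 'Y';
  TYPE t_bad_tab IS TABLE OF c_bad%ROWTYPE;
  l_rows t_bad_tab;
BEGIN
  OPEN c_bad;
  LOOP
    FETCH c_bad BULK COLLECT INTO l_rows LIMIT 10000;  -- chunk size
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO invalid_dates_tmp VALUES l_rows(i);
    COMMIT;  -- commit per chunk
  END LOOP;
  CLOSE c_bad;
END;
/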

Similar Messages

  • How to handle the bad record while using bulk collect with limit.

    Hi,
    How to handle a bad record as part of an insert/update without rolling back the whole transaction?
    Example:
    I am inserting into a table with a LIMIT of 1000 records and I got an error at the 588th record.
    I want to commit the transaction with the successfully inserted records, log the error into an
    error logging table, and then continue the transaction with the remaining records.
    Can anyone suggest an approach for this case?
    Regards,
    yuva

    >
    How to handle the bad record as part of the insert/update to avoid rolling back the whole transaction.
    >
    Use the SAVE EXCEPTIONS clause of FORALL if you are doing bulk inserts.
    See SAVE EXCEPTIONS in the PL/SQL Language doc:
    http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/tuning.htm
    And then see Example 12-9, Bulk Operation that Continues Despite Exceptions:
    >
    Example 12-9 Bulk Operation that Continues Despite Exceptions
    -- Temporary table for this example:
    CREATE TABLE emp_temp AS SELECT * FROM employees;

    DECLARE
      TYPE empid_tab IS TABLE OF employees.employee_id%TYPE;
      emp_sr empid_tab;
      -- Exception handler for ORA-24381:
      errors NUMBER;
      dml_errors EXCEPTION;
      PRAGMA EXCEPTION_INIT(dml_errors, -24381);
    BEGIN
      SELECT employee_id
        BULK COLLECT INTO emp_sr FROM emp_temp
       WHERE hire_date < '30-DEC-94';
      -- Add '_SR' to job_id of most senior employees:
      FORALL i IN emp_sr.FIRST..emp_sr.LAST SAVE EXCEPTIONS
        UPDATE emp_temp SET job_id = job_id || '_SR'
         WHERE emp_sr(i) = emp_temp.employee_id;
      -- If errors occurred during FORALL SAVE EXCEPTIONS,
      -- a single exception is raised when the statement completes.
    EXCEPTION
      -- Figure out what failed and why:
      WHEN dml_errors THEN
        errors := SQL%BULK_EXCEPTIONS.COUNT;
        DBMS_OUTPUT.PUT_LINE
          ('Number of statements that failed: ' || errors);
        FOR i IN 1..errors LOOP
          DBMS_OUTPUT.PUT_LINE('Error #' || i || ' occurred during ' ||
            'iteration #' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
          DBMS_OUTPUT.PUT_LINE('Error message is ' ||
            SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
        END LOOP;
    END;
    /
    DROP TABLE emp_temp;
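    Adapted to the original poster's scenario (a bulk insert in batches of 1000 that keeps the good rows, logs the bad ones, and carries on), a rough sketch; source_table, target_table, and error_log are assumed names, not from the thread:
    DECLARE
      CURSOR c_src IS SELECT * FROM source_table;   -- hypothetical source
      TYPE t_rows IS TABLE OF source_table%ROWTYPE;
      l_rows     t_rows;
      dml_errors EXCEPTION;
      PRAGMA EXCEPTION_INIT(dml_errors, -24381);
    BEGIN
      OPEN c_src;
      LOOP
        FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;
        EXIT WHEN l_rows.COUNT = 0;
        BEGIN
          -- Whole-record insert; assumes target_table has the same
          -- column list as source_table.
          FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
            INSERT INTO target_table VALUES l_rows(i);
        EXCEPTION
          WHEN dml_errors THEN
            -- Successful rows of this batch are kept; log each failure.
            FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
              INSERT INTO error_log (err_index, err_msg)
              VALUES (SQL%BULK_EXCEPTIONS(j).ERROR_INDEX,
                      SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
            END LOOP;
        END;
        COMMIT;  -- commit each batch, as requested in the question
      END LOOP;
      CLOSE c_src;
    END;
    /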

  • How to handle the control records in a file-to-IDoc scenario.

    Hi All,
    can you please clarify how to handle the control records in a File-to-IDoc scenario?

    Hi,
    In a File-to-IDoc scenario, even though you selected 'Apply control record values from payload', you may not be getting the correct values that you provided in the mapping.
    Also check the checkboxes 'Take sender from payload' and 'Take receiver from payload', along with the 'Apply control record values from payload' checkbox.
    Regards
    Seshagiri

  • How to handle the selected record

    How to handle the selected record when using ALV classes?

    Hi,
    use method GET_CURRENT_CELL:
      CALL METHOD grid1->get_current_cell
        IMPORTING es_row_id = ls_row_id
                  es_col_id = ls_col_id
                  e_value   = l_value.

  • Question about how ExifMeta handles multivalued tags

    Hi,
    I have been testing ExifMeta (great work, just what I needed!) to read face recognition data written by Picasa to the RegionName, RegionType, etc. fields in JPG metadata. Everything seems to work fine if there is only one name in the field, but Picasa writes several values separated with commas, like this:
    RegionName     John Doe, Jill Doll
    RegionType     Face, Face
    etc.
    In that case, the corresponding fields remain completely empty. So is this convention of writing several values separated by commas the "standard" way of doing this? Is there any way to fix this?
    Additionally, three Region category tags refuse to load, giving the following error:
    "Not updating due to error getting property, id: XMPmwgrs_RegionAppliedToDimensionsUnit, from: nil, to: pixel, err-msg: Attempt to access property "XMPmwgrs_RegionAppliedToDimensionsUnit" not declared in com.robcole.lightroom.ExifMeta's Info.lua"
    This is a minor problem, but it would be nice to know how to handle this kind of error in the future. Do I need to declare these in the Lua code if I need them?
    Looking forward to replies,
    Mikko

    Mikko,
    In the future, please direct exif-meta-specific questions to the exif-meta forum, or to me personally.
    But a quick answer: exif-meta doesn't (by default) parse ("interpret") text values, so commas shouldn't be handled differently than any other text.
    The best way to find out what's going on is to invoke exiftool from the command line and inspect the output.
    My hunch is there is a zero character in there representing end-of-string, which can cause problems for Lua code (e.g. Lightroom).
    If you send me one of your files which includes the face data, I'll have a look.
    PS - No clue about the 3 region tags (might be worth trying it with the latest version of exiftool) - please send me a file for inspection - thanks.
    Rob

  • Question on "How to Handle Inv management Scenarios in BW" docuemnt.

    Hi all,
    I read the document "How to Handle Inventory Management Scenarios in BW" and I did not understand what a snapshot scenario is. Can anyone tell me the difference between snapshot and non-cumulative scenarios?
    thanks,
    Sabrina.

    In a non-cumulative scenario the current stock of any day (or month) is not stored physically in the cube. Instead, the stock values over time are calculated dynamically at query runtime by the BW OLAP engine, which basically derives the stock by summing up the periodic value changes (cumulative inflow and outflow of the non-cumulative key figure) that are stored in the cube (together with a so-called stock marker used as the basis of the calculation).
    In the snapshot scenario the current stock of any month is calculated in a snapshot ODS and then loaded into a cube. This means that the stock value is physically stored in the cube in an ordinary cumulative key figure.
    Since a non-cumulative cube stores value changes and not the actual stock, performance might be bad if there are many value changes for each characteristic combination in a month (since the stock is calculated at runtime and many records must be processed to derive it). In this case the snapshot scenario is better, since no runtime calculation of the stock needs to occur, and since only one record, containing the actual stock value, is stored in each month for each characteristic combination that has a stock value.
    I think you would be better off with an example (see the toy sketch below), but with this explanation in mind, looking at the scenarios in the How-to again might clarify things...
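    For instance, a toy SQL sketch of the difference, using hypothetical movements and monthly_stock_snapshot tables (not from the How-to document):
    -- Non-cumulative style: the stock for period 200406 is derived at
    -- query time by summing all inflow/outflow movements up to then.
    SELECT material, SUM(inflow - outflow) AS stock
      FROM movements
     WHERE period <= '200406'
     GROUP BY material;
    -- Snapshot style: the monthly stock was precomputed and stored, so
    -- the query is a plain lookup of one record per material and month.
    SELECT material, stock
      FROM monthly_stock_snapshot
     WHERE period = '200406';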
    Regards,
    Christian

  • Help: SQL to select closest points within groups of records grouped by time

    Hi,
    I need help to find an SQL for efficient solution for the following problem:
    - I have 20 buses circling on one bus route and their locations are reported every 10 seconds and recorded into a spatial table “bus_location” with “bus_loc” as SDO_GEOMETRY type.
    - Each bus location record has a msg_datetime column with the time when the location was recorded.
    - Buses circle the bus route 8-12 times, from A to B, then from B back to A, then new loop from A to B, and so on.
    - I have 3 timing points in spatial table “timing_point” with “tp_loc” as SDO_GEOMETRY type.
    - A bus will pass each timing point 8-12 times in one direction and 8-12 times in the other direction, while making loops on the bus route.
    My task is: for each timing point, for each bus, for each trip direction (A-B or B-A), find the closest bus location to that timing point.
    Example:
    Bus  TimingPoint  Time      Distance between bus and TP
    12   A            14:01:00  250 m
    12   A            14:01:10  150 m
    12   A            14:01:20  12 m   <== record closest to the TP for this pass
    12   A            14:01:30  48 m
    12   A            14:01:40  100 m
    12   A            14:01:50  248 m
    12   A            14:29:40  122 m
    12   A            14:29:50  72 m
    12   A            14:30:00  9 m    <== record closest to the TP for this pass
    12   A            14:30:10  10 m
    12   A            14:30:20  12 m
    I tried to use SDO_NN and I can get the closest bus locations, but how do I find the closest bus location for each pass? Or how do I identify groups of bus locations that are together within 2-3 minutes and find the closest record within each of these groups?
    Thank you very much for any help.
    Milan

    Milan,
    Yes, the problem is one of analytics not spatial. Here is another go that should be independent of year/month/date/hour/minute
    as it does all its math based on an absolute number of minutes since 1970. I round to 5 minutes but you could round/trunc to any
    particular interval (5 minutes, 10 minutes, 15 minutes etc)
    Note that I have created extra records in the same table that I built. (Tip: good idea to provide a sample dataset to the forum when
    asking difficult SQL questions like this.)
    drop table buslocns;
    create table buslocns(
      busno    number,
      timingpt varchar2(1),
      pttime   timestamp,
      distance number);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 14:01:00','YYYY/MM/DD HH24:MI:SS'),250);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 14:01:10','YYYY/MM/DD HH24:MI:SS'),150);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 14:01:20','YYYY/MM/DD HH24:MI:SS'),12);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 14:01:30','YYYY/MM/DD HH24:MI:SS'),48);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 14:01:40','YYYY/MM/DD HH24:MI:SS'),100);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 14:01:50','YYYY/MM/DD HH24:MI:SS'),248);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 14:29:40','YYYY/MM/DD HH24:MI:SS'),122);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 14:29:50','YYYY/MM/DD HH24:MI:SS'),72);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 14:30:00','YYYY/MM/DD HH24:MI:SS'),9);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 14:30:10','YYYY/MM/DD HH24:MI:SS'),10);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 14:30:20','YYYY/MM/DD HH24:MI:SS'),12);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 14:59:40','YYYY/MM/DD HH24:MI:SS'),53);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 14:59:50','YYYY/MM/DD HH24:MI:SS'),28);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 15:00:00','YYYY/MM/DD HH24:MI:SS'),12);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 15:00:10','YYYY/MM/DD HH24:MI:SS'),73);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 14:44:40','YYYY/MM/DD HH24:MI:SS'),53);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 14:44:50','YYYY/MM/DD HH24:MI:SS'),28);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 15:45:00','YYYY/MM/DD HH24:MI:SS'),12);
    insert into buslocns values (12,'A',to_timestamp('2012/02/10 15:45:10','YYYY/MM/DD HH24:MI:SS'),73);
    commit;
    with tensOfMinutes as (
      select row_number() over (order by busno, timingpt, pttime) as rid,
             busno,
             timingpt,
             pttime,
             round(((cast(pttime as date) - cast('01-JAN-1970' as date)) * 1440) / 5) * 5 as mins,
             distance
        from buslocns
    )
    select busno, timingpt,
           to_char(to_date('01-JAN-1970','DD-MON-YYYY') + (mins / 1440), 'DD-MON-YYYY HH24:MI:SS') as pttime,
           minDist
      from (select busno, timingpt, mins, min(distance) as minDist
              from tensOfMinutes a
             group by a.busno, a.timingpt, a.mins
             order by 1, 2, 3);
    -- Result
    BUSNO                  TIMINGPT PTTIME               MINDIST
    12                     A        10-FEB-2012 14:00:00 12
    12                     A        10-FEB-2012 14:30:00 9
    12                     A        10-FEB-2012 14:45:00 28
    12                     A        10-FEB-2012 15:00:00 12
    12                     A        10-FEB-2012 15:45:00 12
    Regards
    Simon

  • How to handle table type record in OAF

    Hi,
    Requirement:
    There is a search page that can be accessed from different locations.
    The result set should be different when accessed from each location.
    Approach to be used:
    Call a PL/SQL API which returns a table type out parameter for the different search conditions. Each record returned in the table type needs to be iterated over, and a new row should be created in a transient VO to show the results on the UI.
    Problem:
    How to handle the table type returned from PL/SQL in Java and show the results?
    Thanks in advance.

    There is a requirement to fetch records from a cursor into a collection. Following is the code snippet that I have used.
    /** creating an object type **/
    CREATE OR REPLACE TYPE XXCRM_GBL_DSW_AUDIT_RSTLS_O_TP AS OBJECT (
      CUSTOMER_NAME           VARCHAR2(360),
      PROJECT_NAME            VARCHAR2(40),
      BOARD_NAME              VARCHAR2(40),
      ARROW_UNIQUE_NO         NUMBER,
      FIELD_NAME              VARCHAR2(2000),
      OLD_VALUE               VARCHAR2(2000),
      NEW_VALUE               VARCHAR2(2000),
      USER_ID                 NUMBER,
      USER_NAME               VARCHAR2(100),
      AUDIT_DATE              DATE,
      AUDIT_TYPE              VARCHAR2(1),
      SUPP_SYS_LAST_UPDATE_ON DATE,
      SUPP_TRACKING_NUM       VARCHAR2(50),
      CONSTRUCTOR FUNCTION XXCRM_GBL_DSW_AUDIT_RSTLS_O_TP
        RETURN SELF AS RESULT
    );
    /
    /** initializing the object **/
    CREATE OR REPLACE TYPE BODY XXCRM_GBL_DSW_AUDIT_RSTLS_O_TP IS
      CONSTRUCTOR FUNCTION XXCRM_GBL_DSW_AUDIT_RSTLS_O_TP
        RETURN SELF AS RESULT IS
      BEGIN
        self.customer_name           := fnd_api.g_miss_char;
        self.project_name            := fnd_api.g_miss_char;
        self.board_name              := fnd_api.g_miss_char;
        self.arrow_unique_no         := fnd_api.g_miss_num;
        self.field_name              := fnd_api.g_miss_char;
        self.old_value               := fnd_api.g_miss_char;
        self.new_value               := fnd_api.g_miss_char;
        self.user_id                 := fnd_api.g_miss_num;
        self.user_name               := fnd_api.g_miss_char;
        self.audit_date              := fnd_api.g_miss_date;
        self.audit_type              := fnd_api.g_miss_char;
        self.supp_sys_last_update_on := fnd_api.g_miss_date;
        self.supp_tracking_num       := fnd_api.g_miss_char;
        RETURN;
      END;
    END;
    /
    /** creating a collection of the object **/
    CREATE TYPE XXCRM_GBL_DSW_AUDIT_RSTLS_C_TP AS TABLE OF XXCRM_GBL_DSW_AUDIT_RSTLS_O_TP;
    /
    /**Inside the package body **/
    CURSOR c_proj_cur IS
      SELECT NULL CUSTOMER_NAME,
             (SELECT project_name
                FROM xxcrm_gbl_dsw_projects_all
               WHERE project_id = xgda.project_id) PROJECT_NAME,
             DECODE(xgda.board_id, NULL, 'test',
                    (SELECT board_name
                       FROM xxcrm_gbl_dsw_boards_all
                      WHERE board_id = xgda.board_id)) BOARD_NAME,
             DECODE(xgda.internal_dwr_id, NULL, NULL,
                    (SELECT internal_dwr_id
                       FROM xxcrm_gbl_dsw_regstrations_all
                      WHERE internal_dwr_id = xgda.internal_dwr_id)) ARROW_UNIQUE_NO,
             fielslu.meaning FIELD_NAME,
             xgda.old_value OLD_VALUE,
             xgda.new_value NEW_VALUE,
             xgda.created_by USER_ID,
             'kuldeep' USER_NAME,
             xgda.creation_date AUDIT_DATE,
             xgda.audit_level AUDIT_TYPE,
             SYSDATE SUPP_SYS_LAST_UPDATE_ON,
             't001' SUPP_TRACKING_NUM
        FROM xxcrm_gbl_dsw_audit_all xgda,
             fnd_lookup_values_vl fielslu
       WHERE xgda.project_id = 174
         AND fielslu.lookup_type = 'XXCRM_DSW_REV_FIELD_NAMES'
         AND fielslu.lookup_code = xgda.field_name
         AND enabled_flag = 'Y'
         AND TRUNC(SYSDATE) BETWEEN TRUNC(NVL(fielslu.start_date_active, SYSDATE))
                                AND TRUNC(NVL(fielslu.end_date_active, SYSDATE))
       ORDER BY xgda.creation_date;
    tab XXCRM_GBL_DSW_AUDIT_RSTLS_C_TP;
    BEGIN
      OPEN c_proj_cur;
      LOOP
        FETCH c_proj_cur BULK COLLECT INTO tab; -- ERROR: type mismatch found at 'TAB' between FETCH cursor and INTO variables
      END LOOP;
    END;
    /**Error */
    When the above script is executed it gives the error "type mismatch found at 'TAB' between FETCH cursor and INTO variables" (at the line highlighted above).
    I have validated the data types of the cursor and the object too, but the error is still not resolved.
    Has anyone come across a similar requirement, fetching values from a cursor into a collection? Please help.
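    For what it's worth, this error typically occurs because the cursor returns a row of scalar columns while the collection's element type is an object type. One fix is to construct the object in the SELECT list, so the cursor's row type matches the collection's element type. A minimal self-contained sketch of the pattern, using a hypothetical two-field type against the standard HR schema rather than the poster's actual objects:
    CREATE OR REPLACE TYPE demo_o_tp AS OBJECT (id NUMBER, name VARCHAR2(40));
    /
    CREATE OR REPLACE TYPE demo_c_tp AS TABLE OF demo_o_tp;
    /
    DECLARE
      -- The SELECT list wraps the columns in the object type's
      -- constructor, so each fetched row is a demo_o_tp instance and
      -- BULK COLLECT into a demo_c_tp collection type-checks.
      CURSOR c_demo IS
        SELECT demo_o_tp(employee_id, last_name) FROM employees;
      tab demo_c_tp;
    BEGIN
      OPEN c_demo;
      LOOP
        FETCH c_demo BULK COLLECT INTO tab LIMIT 100;
        EXIT WHEN tab.COUNT = 0;  -- also avoids the endless loop above
        NULL;                     -- process tab here
      END LOOP;
      CLOSE c_demo;
    END;
    /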

  • How to handle the failed records from the table when using DB Adapter

    Hi,
    I am reading some records from a table using the DB Adapter inside my synchronous BPEL process. Say I am reading 100 records from the table, and after 90 records have been read successfully an error occurs on the 91st record (DB down, connection interrupted, etc.). How do I handle this situation: do I have to read all the records from the beginning, or is there an option to continue from where it stopped reading?
    Can please anybody help me out in the regard?
    Thanks in advance
    Regards,
    Aejaz


  • In Premiere Pro CC, how can I group-copy all files in the timeline to another part of the timeline

    How can I group-copy all files from one part of the timeline to another part?

    From the Premiere Pro manual (I believe track targeting matters, too; that is, only the targeted tracks are copied):
    Copy command enhanced to operate on a range in a sequence
    The existing Copy command (Edit > Copy, Ctrl+C (Windows), Command+C (Mac OS)) is enhanced when used in the Timeline panel. If there are no clips or keyframes selected, and an In point or Out point is set, then the range of time between the In point and Out point is copied to the clipboard.

  • How to handle the deleted records from R3

    Hello,
    We have created a generic DataSource on a database table on the R/3 side, and now we have a case where a huge volume of data gets deleted and new records get updated every day.
    By doing a delta load we are able to load the new records and also the changed ones, but we are unable to identify the deleted records in that table and move them to BI in order to delete those records in BI as well.
    Can anyone please suggest a solution for handling the deleted records?
    Thanks,
    Ravindra.

    We had the same requirement some time ago and had two options:
    1. Ask the R/3 development team to add a deletion indicator in the table (and thus not actually delete the record). This deletion indicator could then be used as for any other standard DataSource.
    This option was however refused, due to the huge data volume after a while.
    2. At the end of the load we copied ZTABLE1 to ZTABLE2. Then at the beginning of the next day's load we compare the data of ZTABLE1 to ZTABLE2: entries available in ZTABLE2 but not in ZTABLE1 have been deleted, and we set a 'D' in the deletion indicator. As we only keep the deleted entries for one day, the volume of the new table is acceptable. A sketch of this compare step is shown below.
    M.
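    A rough SQL sketch of option 2's compare step; key_col, ztable_delta, and del_flag are assumed names for illustration only:
    -- Rows present in yesterday's snapshot (ZTABLE2) but missing from
    -- today's (ZTABLE1) were deleted in R/3; flag them with 'D' so the
    -- extraction can forward the deletions to BI.
    INSERT INTO ztable_delta (key_col, del_flag)
    SELECT t2.key_col, 'D'
      FROM ztable2 t2
     WHERE NOT EXISTS (SELECT 1
                         FROM ztable1 t1
                        WHERE t1.key_col = t2.key_col);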

  • How to handle Duplication in records

    Hi,
    I have a table:
    Table name: PM
    with fields wo_no, wo_descr, pm_no, current_date.
    wo_no is the primary key.
    I enter data into this table through a loop (many records at a time) when I press a button. The data in the table is as under:
    wo_no   wo_descr              pm_no   current_date
    1       cleaning the relays   2       12-DEC-04
    2       calibration           3       12-DEC-04
    Now the problem is that when I press the button a second time at the same time, the following data is entered:
    wo_no   wo_descr              pm_no   current_date
    3       cleaning the relays   2       12-DEC-04
    4       calibration           3       12-DEC-04
    This is not required; instead, when I press the button the second time, a message should appear like "duplication of wo_descr, pm_no on the same date".
    I am using Forms 6i; please send me the code urgently.
    Best Regards,
    Shahzad
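    A minimal sketch of one way to detect the duplicate before running the insert loop, assuming the PM table as described; the :blk item names are hypothetical Forms bind variables:
    DECLARE
      l_dup_count PLS_INTEGER;
    BEGIN
      -- Count rows already inserted today with the same description and
      -- pm_no; the date column is qualified because CURRENT_DATE is also
      -- the name of a built-in SQL function.
      SELECT COUNT(*)
        INTO l_dup_count
        FROM pm
       WHERE pm.wo_descr = :blk.wo_descr
         AND pm.pm_no    = :blk.pm_no
         AND TRUNC(pm.current_date) = TRUNC(SYSDATE);
      IF l_dup_count > 0 THEN
        -- In Forms 6i this could instead be MESSAGE(...) followed by
        -- RAISE FORM_TRIGGER_FAILURE.
        RAISE_APPLICATION_ERROR(-20001,
          'Duplication of wo_descr, pm_no on the same date');
      END IF;
    END;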

    Hi Pragati,
    I am trying to run a form from a report, which I am successful in. But when I run another report through this form, the earlier report remains open, and the Reports Background Engine shows the status of the current report as waiting. Only when I close all of the earlier reports does the current report run. I don't want the user to have to close the earlier reports; it should happen automatically.
    How do I do that?
    Thanks in advance.
    Manoj

  • How can one display 1 record at a time

    How can one display a single record at a time?
    The next record should be displayed on the press of a button.
    Thanks in advance

    Vijaya,
    I have 3 regions, each of which contains a check box item,
    the label for which is dynamically generated through a SQL SELECT.
    (&P1_ANSWER1., where P1_ANSWER1 is the hidden item name)
    In the Region Source I have put
    SELECT col1 FROM (SELECT col1, row_number() OVER (ORDER BY col1) row_number FROM prototype) WHERE row_number = :P1_QUESTION_NO;
    SELECT col3 FROM (SELECT col3, row_number() OVER (ORDER BY col3) row_number FROM prototype) WHERE row_number = :P1_QUESTION_NO;
    for each of the regions that contain these items.
    Here the table is prototype and P1_QUESTION_NO is the hidden item.
    As directed, I have included under Processes (after submit and computations), on button "Next Record" clicked:
    :P1_QUESTION_NO := :P1_QUESTION_NO + 1;
    When I do this, the label seems to blank out.
    Thanks

  • How to select only one record at a time in ALV

    Hi all,
    I have to use an ALV report. Each record in the report output should have a button (or radio button) which allows the record to be selected. My requirement is to select only one record at a time.
    Please guide me on how to proceed.
    regards
    manish

    hi
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/snippets/sampleCheckBoxProgram
    Regards
    Pavan

  • How to update all condition records at a time with some percentage or some

    Dear Sir,
    In the PRD system we have more than 600 condition records.
    Now diesel price hikes have happened, and due to that the client wants to update all condition records (600) at a time with the required percentage or required value.
    How can we update all records at a time?
    With regards
    Lakshmikanth

    Hi,
    Through BDC or LSMW, first you have to do the recording for the particular transaction code, and then based on that you have to prepare the flat file to upload.
    For BDC programming you have to take the help of an ABAP programmer, but LSMW you can do yourself.
    For more information about LSMW, please go to the following links:
    [LSMW |http://www.slideshare.net/arun_bala1/sap-sd-lsmw-legacy-system-migration-workbench/]
    [Step-by-Step Guide for using LSMW|www.scmexpertonline.com/downloads/SCM_LSMW_StepsOnWeb.doc ]
    BDC
    [BDC Call Transaction|http://www.sapdevelopment.co.uk/bdc/bdc_ctcode.htm]
    [BDC Recording|http://www.sapdevelopment.co.uk/bdc/bdc_recording.htm]
    But it would be better for you to contact your ABAPer for BDC programming.
    Cheers...
