Insert into database table

Hi guys,
Please help me: I am trying to insert records into a database table. When I debug the program, the work area contains the data records, but they are not inserted into the database table. I have pasted my code below; please suggest where I made a mistake.
FUNCTION ZEXM_PHOTOCOPYDTLS_EP.
""Local Interface:
*"  IMPORTING
*"     VALUE(EXMCODE) TYPE  ZEXMCODE OPTIONAL
*"     VALUE(EXMMONTH) TYPE  ZEXMMONTH OPTIONAL
*"     VALUE(SEATNO) TYPE  ZSEATNO OPTIONAL
*"     VALUE(STDFIRSTNAME) TYPE  ZSTDFIRSTNAME OPTIONAL
*"     VALUE(STDMIDDLENAME) TYPE  ZSTDMIDDLENAME OPTIONAL
*"     VALUE(STDLASTNAME) TYPE  ZSTDLASTNAME OPTIONAL
*"     VALUE(ADDLINE1) TYPE  ZADDLINE1 OPTIONAL
*"     VALUE(CITY) TYPE  ZCITY OPTIONAL
*"     VALUE(COUNTRY) TYPE  ZCOUNTRY OPTIONAL
*"     VALUE(PIN) TYPE  ZPIN OPTIONAL
*"     VALUE(TELEPHONENO) TYPE  ZTELEPHONENO OPTIONAL
*"     VALUE(MOBNO) TYPE  ZMOBNO OPTIONAL
*"     VALUE(EMAIL) TYPE  ZEMAIL OPTIONAL
*"     VALUE(DDNO) TYPE  ZDDNO OPTIONAL
*"     VALUE(DDAMT) TYPE  ZDDAMOUNT OPTIONAL
*"     VALUE(DDDATE) TYPE  ZDDDATE OPTIONAL
*"     VALUE(BANKNAME) TYPE  ZBANKNAME OPTIONAL
*"     VALUE(COLNAME) TYPE  ZCOLNAME OPTIONAL
*"     VALUE(COLADDRESS) TYPE  ZCOLADDRESS OPTIONAL
*"     VALUE(COURSECODE) TYPE  ZCOURSECODE OPTIONAL
*"     VALUE(EXMYEAR) TYPE  ZEXMYEAR OPTIONAL
*"     VALUE(PAPERCODE) TYPE  ZPAPERCODE OPTIONAL
*"     VALUE(PAPERNO) TYPE  ZPAPERNO OPTIONAL
*"     VALUE(MARKSOBT) TYPE  ZMARKSOBT OPTIONAL
*"     VALUE(EXMDATE) TYPE  ZEXMDATE OPTIONAL
*"     VALUE(EXMTIME) TYPE  ZEXMTIME OPTIONAL
*"     VALUE(SUBCODE) TYPE  ZSUBCODE OPTIONAL
*"  EXCEPTIONS
*"      SELECTION_ERROR
  DATA : WA_OUTPUT TYPE  ZEXM_PHOTOCOPYDT,
         WA_COLL TYPE ZEXM_COLLEGEMST,
         WA_COURSE TYPE ZEXM_COURSEMST,
         WA_EXMCODE TYPE ZEXM_EXMCODEMST,
         WA_PAPER   TYPE ZEXM_PHOPAPERMAP,
         WA_SUBJECT TYPE  ZEXM_SUBJECTMST.
* Student details
  MOVE : EXMCODE     TO WA_OUTPUT-ZEXMCODE,
         EXMMONTH    TO WA_OUTPUT-ZEXMMONTH,
         SEATNO    TO WA_OUTPUT-ZSEATNO,
         STDFIRSTNAME TO WA_OUTPUT-ZSTDFIRSTNAME,
         STDMIDDLENAME TO WA_OUTPUT-ZSTDMIDDLENAME,
         STDLASTNAME TO WA_OUTPUT-ZSTDLASTNAME,
         ADDLINE1    TO WA_OUTPUT-ZADDLINE1,
         CITY        TO WA_OUTPUT-ZCITY,
         COUNTRY     TO WA_OUTPUT-ZCOUNTRY,
         PIN         TO WA_OUTPUT-ZPIN,
         TELEPHONENO TO WA_OUTPUT-ZTELEPHONENO,
         MOBNO       TO WA_OUTPUT-ZMOBNO,
         EMAIL       TO WA_OUTPUT-ZEMAIL,
         DDNO        TO WA_OUTPUT-ZDDNO,
         DDAMT       TO WA_OUTPUT-ZDDAMT,
         DDDATE      TO WA_OUTPUT-ZDDDATE,
         BANKNAME    TO WA_OUTPUT-ZBANKNAME.
INSERT INTO zexm_photocopydt VALUES wa_output.
IF sy-subrc = 0.
  COMMIT WORK.
ENDIF.
Thanks and Regards,
Sai.
Edited by: sai shanhu on Jun 4, 2008 11:13 AM

Hi,
MOVE wa_output TO zexm_photocopydt.
INSERT zexm_photocopydt.
Thanks,
Durai.V
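
In case it helps, here is a minimal sketch of the same insert written against a named work area, with an explicit sy-subrc check. This is only a sketch; it assumes WA_OUTPUT matches the line type of ZEXM_PHOTOCOPYDT and that all key fields are filled. Note that sy-subrc = 4 after INSERT usually means a row with the same key already exists, which is the most common reason an insert silently does nothing.
* Minimal sketch (assumption: all key fields of ZEXM_PHOTOCOPYDT are supplied)
DATA wa_output TYPE zexm_photocopydt.

wa_output-zexmcode  = exmcode.
wa_output-zexmmonth = exmmonth.
wa_output-zseatno   = seatno.
"... fill the remaining fields as in the MOVE chain above ...

INSERT zexm_photocopydt FROM wa_output.
IF sy-subrc = 0.
  COMMIT WORK.
ELSE.
  " sy-subrc = 4: a row with the same key already exists, nothing was inserted
  RAISE selection_error.
ENDIF.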

Similar Messages

  • Program to upload csv file to internal table and insert into database table

    Hi, I'm writing a program where I need to upload a CSV file into an internal table using GUI_UPLOAD; I also need this program to split each line and insert the data into my custom database table. Does anybody have any samples to help? It's urgent!

    Hi,
    Check this sample program; maybe it will give you a hint...
    REPORT z_table_upload LINE-SIZE 255.
    * Data
    DATA: it_dd03p TYPE TABLE OF dd03p,
          is_dd03p TYPE dd03p.
    DATA: it_rdata  TYPE TABLE OF text1024,
          is_rdata  TYPE text1024.
    DATA: it_fields TYPE TABLE OF fieldname.
    DATA: it_file  TYPE REF TO data,
          is_file  TYPE REF TO data.
    DATA: w_error  TYPE text132.
    * Macros
    DEFINE write_error.
      concatenate 'Error: table'
                  p_table
                  &1
                  &2
             into w_error
             separated by space.
      condense w_error.
      write: / w_error.
      stop.
    END-OF-DEFINITION.
    * Field symbols
    FIELD-SYMBOLS: <table> TYPE STANDARD TABLE,
                   <data>  TYPE ANY,
                   <fs>    TYPE ANY.
    * Selection screen
    SELECTION-SCREEN: BEGIN OF BLOCK b01 WITH FRAME TITLE text-b01.
    PARAMETERS: p_file  TYPE localfile DEFAULT 'C:\temp\' OBLIGATORY,
                p_separ TYPE c DEFAULT ';' OBLIGATORY.
    SELECTION-SCREEN: END OF BLOCK b01.
    SELECTION-SCREEN: BEGIN OF BLOCK b02 WITH FRAME TITLE text-b02.
    PARAMETERS: p_table TYPE tabname OBLIGATORY
                                     MEMORY ID dtb
                                     MATCHCODE OBJECT dd_dbtb_16.
    SELECTION-SCREEN: END OF BLOCK b02.
    SELECTION-SCREEN: BEGIN OF BLOCK b03 WITH FRAME TITLE text-b03.
    PARAMETERS: p_create TYPE c AS CHECKBOX.
    SELECTION-SCREEN: END OF BLOCK b03,
                      SKIP.
    SELECTION-SCREEN: BEGIN OF BLOCK b04 WITH FRAME TITLE text-b04.
    PARAMETERS: p_nodb RADIOBUTTON GROUP g1 DEFAULT 'X'
                                   USER-COMMAND rg1,
                p_save RADIOBUTTON GROUP g1,
                p_dele RADIOBUTTON GROUP g1.
    SELECTION-SCREEN: SKIP.
    PARAMETERS: p_test TYPE c AS CHECKBOX,
                p_list TYPE c AS CHECKBOX DEFAULT 'X'.
    SELECTION-SCREEN: END OF BLOCK b04.
    * At selection screen
    AT SELECTION-SCREEN.
      IF sy-ucomm = 'RG1'.
        IF p_nodb IS INITIAL.
          p_test = 'X'.
        ENDIF.
      ENDIF.
    * At selection screen
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.
      CALL FUNCTION 'F4_FILENAME'
           EXPORTING
                field_name = 'P_FILE'
           IMPORTING
                file_name  = p_file.
    * Start of selection
    START-OF-SELECTION.
      PERFORM f_table_definition USING p_table.
      PERFORM f_upload_data USING p_file.
      PERFORM f_prepare_table USING p_table.
      PERFORM f_process_data.
      IF p_nodb IS INITIAL.
        PERFORM f_modify_table.
      ENDIF.
      IF p_list = 'X'.
        PERFORM f_list_records.
      ENDIF.
    * End of selection
    END-OF-SELECTION.
    *       FORM f_table_definition                                       *
    *  -->  VALUE(IN_TABLE)                                               *
    FORM f_table_definition USING value(in_table).
      DATA: l_tname TYPE tabname,
            l_state TYPE ddgotstate,
            l_dd02v TYPE dd02v.
      l_tname = in_table.
      CALL FUNCTION 'DDIF_TABL_GET'
           EXPORTING
                name          = l_tname
           IMPORTING
                gotstate      = l_state
                dd02v_wa      = l_dd02v
           TABLES
                dd03p_tab     = it_dd03p
           EXCEPTIONS
                illegal_input = 1
                OTHERS        = 2.
      IF l_state NE 'A'.
        write_error 'does not exist or is not active' space.
      ENDIF.
      IF l_dd02v-tabclass NE 'TRANSP' AND
         l_dd02v-tabclass NE 'CLUSTER'.
        write_error 'is type' l_dd02v-tabclass.
      ENDIF.
    ENDFORM.
    *       FORM f_prepare_table                                          *
    *  -->  VALUE(IN_TABLE)                                               *
    FORM f_prepare_table USING value(in_table).
      DATA: l_tname TYPE tabname,
            lt_ftab TYPE lvc_t_fcat.
      l_tname = in_table.
      CALL FUNCTION 'LVC_FIELDCATALOG_MERGE'
           EXPORTING
                i_structure_name = l_tname
           CHANGING
                ct_fieldcat      = lt_ftab
           EXCEPTIONS
                OTHERS           = 1.
      IF sy-subrc NE 0.
        WRITE: / 'Error while building field catalog'.
        STOP.
      ENDIF.
      CALL METHOD cl_alv_table_create=>create_dynamic_table
        EXPORTING
          it_fieldcatalog = lt_ftab
        IMPORTING
          ep_table        = it_file.
      ASSIGN it_file->* TO <table>.
      CREATE DATA is_file LIKE LINE OF <table>.
      ASSIGN is_file->* TO <data>.
    ENDFORM.
    *       FORM f_upload_data                                            *
    *  -->  VALUE(IN_FILE)                                                *
    FORM f_upload_data USING value(in_file).
      DATA: l_file    TYPE string,
            l_ltext   TYPE string.
      DATA: l_lengt   TYPE i,
            l_field   TYPE fieldname.
      DATA: l_missk   TYPE c.
      l_file = in_file.
      l_lengt = strlen( in_file ).
      FORMAT INTENSIFIED ON.
      WRITE: / 'Reading file', in_file(l_lengt).
      CALL FUNCTION 'GUI_UPLOAD'
           EXPORTING
                filename = l_file
                filetype = 'ASC'
           TABLES
                data_tab = it_rdata
           EXCEPTIONS
                OTHERS   = 1.
      IF sy-subrc <> 0.
        WRITE: /3 'Error uploading', l_file.
        STOP.
      ENDIF.
    * File not empty
      DESCRIBE TABLE it_rdata LINES sy-tmaxl.
      IF sy-tmaxl = 0.
        WRITE: /3 'File', l_file, 'is empty'.
        STOP.
      ELSE.
        WRITE: '-', sy-tmaxl, 'rows read'.
      ENDIF.
    * File header on first row
      READ TABLE it_rdata INTO is_rdata INDEX 1.
      l_ltext = is_rdata.
      WHILE l_ltext CS p_separ.
        SPLIT l_ltext AT p_separ INTO l_field l_ltext.
        APPEND l_field TO it_fields.
      ENDWHILE.
      IF sy-subrc = 0.
        l_field = l_ltext.
        APPEND l_field TO it_fields.
      ENDIF.
    * Check all key fields are present
      SKIP.
      FORMAT RESET.
      FORMAT COLOR COL_HEADING.
      WRITE: /3 'Key fields'.
      FORMAT RESET.
      LOOP AT it_dd03p INTO is_dd03p WHERE NOT keyflag IS initial.
        WRITE: /3 is_dd03p-fieldname.
        READ TABLE it_fields WITH KEY table_line = is_dd03p-fieldname
                             TRANSPORTING NO FIELDS.
        IF sy-subrc = 0.
          FORMAT COLOR COL_POSITIVE.
          WRITE: 'ok'.
          FORMAT RESET.
        ELSEIF is_dd03p-datatype NE 'CLNT'.
          FORMAT COLOR COL_NEGATIVE.
          WRITE: 'error'.
          FORMAT RESET.
          l_missk = 'X'.
        ENDIF.
      ENDLOOP.
    * Log other fields
      SKIP.
      FORMAT COLOR COL_HEADING.
      WRITE: /3 'Other fields'.
      FORMAT RESET.
      LOOP AT it_dd03p INTO is_dd03p WHERE keyflag IS initial.
        WRITE: /3 is_dd03p-fieldname.
        READ TABLE it_fields WITH KEY table_line = is_dd03p-fieldname
                             TRANSPORTING NO FIELDS.
        IF sy-subrc = 0.
          WRITE: 'X'.
        ENDIF.
      ENDLOOP.
    * Missing key field
      IF l_missk = 'X'.
        SKIP.
        WRITE: /3 'Missing key fields - no further processing'.
        STOP.
      ENDIF.
    ENDFORM.
    *       FORM f_process_data                                           *
    FORM f_process_data.
      DATA: l_ltext TYPE string,
            l_stext TYPE text40,
            l_field TYPE fieldname,
            l_datat TYPE c.
      LOOP AT it_rdata INTO is_rdata FROM 2.
        l_ltext = is_rdata.
        LOOP AT it_fields INTO l_field.
          ASSIGN COMPONENT l_field OF STRUCTURE <data> TO <fs>.
          IF sy-subrc = 0.
          " Field value comes from file, determine conversion
            DESCRIBE FIELD <fs> TYPE l_datat.
            CASE l_datat.
              WHEN 'N'.
                SPLIT l_ltext AT p_separ INTO l_stext l_ltext.
                WRITE l_stext TO <fs> RIGHT-JUSTIFIED.
                OVERLAY <fs> WITH '0000000000000000'.           "max 16
              WHEN 'P'.
                SPLIT l_ltext AT p_separ INTO l_stext l_ltext.
                TRANSLATE l_stext USING ',.'.
                <fs> = l_stext.
              WHEN 'F'.
                SPLIT l_ltext AT p_separ INTO l_stext l_ltext.
                TRANSLATE l_stext USING ',.'.
                <fs> = l_stext.
              WHEN 'D'.
                SPLIT l_ltext AT p_separ INTO l_stext l_ltext.
                TRANSLATE l_stext USING '/.-.'.
                CALL FUNCTION 'CONVERT_DATE_TO_INTERNAL'
                     EXPORTING
                          date_external = l_stext
                     IMPORTING
                          date_internal = <fs>
                     EXCEPTIONS
                          OTHERS        = 1.
              WHEN 'T'.
                SPLIT l_ltext AT p_separ INTO l_stext l_ltext.
                CALL FUNCTION 'CONVERT_TIME_INPUT'
                     EXPORTING
                          input  = l_stext
                     IMPORTING
                          output = <fs>
                     EXCEPTIONS
                          OTHERS = 1.
              WHEN OTHERS.
                SPLIT l_ltext AT p_separ INTO <fs> l_ltext.
            ENDCASE.
          ELSE.
            SHIFT l_ltext UP TO p_separ.
            SHIFT l_ltext.
          ENDIF.
        ENDLOOP.
        IF NOT <data> IS INITIAL.
          LOOP AT it_dd03p INTO is_dd03p WHERE datatype = 'CLNT'.
            " This field is the client (MANDT)
            ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
                                                          TO <fs>.
            <fs> = sy-mandt.
          ENDLOOP.
          LOOP AT it_dd03p INTO is_dd03p.
            IF p_create = 'X'.
              IF is_dd03p-rollname = 'ERDAT'.
                ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
                                                              TO <fs>.
                <fs> = sy-datum.
              ENDIF.
              IF is_dd03p-rollname = 'ERZET'.
                ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
                                                              TO <fs>.
                <fs> = sy-uzeit.
              ENDIF.
              IF is_dd03p-rollname = 'ERNAM'.
                ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
                                                              TO <fs>.
                <fs> = sy-uname.
              ENDIF.
            ENDIF.
            IF is_dd03p-rollname = 'AEDAT'.
              ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
                                                            TO <fs>.
              <fs> = sy-datum.
            ENDIF.
            IF is_dd03p-rollname = 'AETIM'.
              ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
                                                            TO <fs>.
              <fs> = sy-uzeit.
            ENDIF.
            IF is_dd03p-rollname = 'AENAM'.
              ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
                                                            TO <fs>.
              <fs> = sy-uname.
            ENDIF.
          ENDLOOP.
          APPEND <data> TO <table>.
        ENDIF.
      ENDLOOP.
    ENDFORM.
    *       FORM f_modify_table                                           *
    FORM f_modify_table.
      SKIP.
      IF p_save = 'X'.
        MODIFY (p_table) FROM TABLE <table>.
      ELSEIF p_dele = 'X'.
        DELETE (p_table) FROM TABLE <table>.
      ELSE.
        EXIT.
      ENDIF.
      IF sy-subrc EQ 0.
        FORMAT COLOR COL_POSITIVE.
        IF p_save = 'X'.
          WRITE: /3 'Modify table OK'.
        ELSE.
          WRITE: /3 'Delete table OK'.
        ENDIF.
        FORMAT RESET.
        IF p_test IS INITIAL.
          COMMIT WORK.
        ELSE.
          ROLLBACK WORK.
          WRITE: '- test only, no update'.
        ENDIF.
      ELSE.
        FORMAT COLOR COL_NEGATIVE.
        WRITE: /3 'Error while modifying table'.
        FORMAT RESET.
      ENDIF.
    ENDFORM.
    *       FORM f_list_records                                           *
    FORM f_list_records.
      DATA: l_tleng TYPE i,
            l_lasti TYPE i,
            l_offst TYPE i.
    * Output width
      l_tleng = 1.
      LOOP AT it_dd03p INTO is_dd03p.
        l_tleng = l_tleng + is_dd03p-outputlen.
        IF l_tleng LT sy-linsz.
          l_lasti = sy-tabix.
          l_tleng = l_tleng + 1.
        ELSE.
          l_tleng = l_tleng - is_dd03p-outputlen.
          EXIT.
        ENDIF.
      ENDLOOP.
    * Output header
      SKIP.
      FORMAT COLOR COL_HEADING.
      WRITE: /3 'Contents'.
      FORMAT RESET.
      ULINE AT /3(l_tleng).
    * Output records
      LOOP AT <table> ASSIGNING <data>.
        LOOP AT it_dd03p INTO is_dd03p FROM 1 TO l_lasti.
          IF is_dd03p-position = 1.
            WRITE: /3 sy-vline.
            l_offst = 3.
          ENDIF.
          ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data> TO <fs>.
          l_offst = l_offst + 1.
          IF is_dd03p-decimals LE 2.
            WRITE: AT l_offst <fs>.
          ELSE.
            WRITE: AT l_offst <fs> DECIMALS 3.
          ENDIF.
          l_offst = l_offst + is_dd03p-outputlen.
          WRITE: AT l_offst sy-vline.
        ENDLOOP.
      ENDLOOP.
    * Output end
      ULINE AT /3(l_tleng).
    ENDFORM.
    Regards,
    Joy.
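    For the narrower requirement in the question (upload a CSV with GUI_UPLOAD, SPLIT each line, and insert into one custom table), a much smaller sketch may be enough. ZMYTABLE and its fields FIELD1-FIELD3 are hypothetical placeholders here, and the file is assumed to have no header row:
    REPORT z_csv_to_ztable.
    TYPES ty_line TYPE c LENGTH 1024.
    DATA: it_raw  TYPE TABLE OF ty_line,
          is_raw  TYPE ty_line,
          wa_ztab TYPE zmytable,            "hypothetical custom table
          it_ztab TYPE TABLE OF zmytable.
    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        filename = 'C:\temp\data.csv'
        filetype = 'ASC'
      TABLES
        data_tab = it_raw
      EXCEPTIONS
        OTHERS   = 1.
    IF sy-subrc <> 0.
      MESSAGE 'Error uploading file' TYPE 'E'.
    ENDIF.
    LOOP AT it_raw INTO is_raw.
      " FIELD1-FIELD3 are placeholders for the columns of the hypothetical table
      SPLIT is_raw AT ';' INTO wa_ztab-field1 wa_ztab-field2 wa_ztab-field3.
      APPEND wa_ztab TO it_ztab.
    ENDLOOP.
    INSERT zmytable FROM TABLE it_ztab ACCEPTING DUPLICATE KEYS.
    IF sy-subrc = 0.
      COMMIT WORK.
    ENDIF.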

  • Import parameters and insert into database table suggestions

    Experts, I need suggestions.
    I created a table ZSTABLE with the following fields:
    Field   Data element   Type   Length
    KUNNR   KUNNR          CHAR   10      key field
    NAME    DNAME          CHAR   30      key field
    AEDAT   AEDAT          DATS   8
    AENAM   AENAM          CHAR   12
    Now I am creating a function module with two input parameters:
    1) KUNNR
    2) NAME
    I want to update these two fields in table ZSTABLE.
    1. Please suggest the best way to do this.
    For the declarations of the importing parameters, which of the following is best, and why?
    name type char30
    kunnr type kunnr
    or
    name like zstable-name
    kunnr like zstable-kunnr
    or
    kunnr type zstable-kunnr
    name type zstable-name.
    2. Inserting into the table
    The data received via the import parameters needs to be inserted into ZSTABLE.
    Please suggest the appropriate statement I need to code for this:
    insert kunnr name into zstable.
    Can I ignore the third field, since it is not a key field?
    3. Before inserting, do I need to check whether the import parameters KUNNR and NAME are initial?
    4. Suppose I declared two input parameters in function module ZFUNCTION, say VBELN and POSNR,
    and the source code is
    insert vbeln and posnr into zstable.
    Now I am calling it from program ZREPORT:
    CALL FUNCTION 'ZFUNCTION'
      EXPORTING
        name  =
        kunnr = .
    Will the above call insert blank records into table ZSTABLE?

    Hi,
    1) I think the best is to refer either directly to the data element or to the table field. This is because once any change is made to the data element, it will automatically be reflected in your parameters. Referring to table fields is a bit less safe but still correct. Golden rule - always try to refer to DDIC components like data elements.
    name type dname
    kunnr type kunnr
    2) You need to provide all key fields. I recommend using MODIFY, as it handles both INSERT (if no such record exists yet) and UPDATE (if one already exists).
    data: wa_zstable type zstable. "declare work area
    wa_zstable-kunnr = kunnr.  "pass parameters here
    wa_zstable-name = name.
    modify zstable from wa_zstable.
    3) Good practice; for this use
    if kunnr is not initial and
       name is not initial.
      "modify ...
    endif.
    "or, if the parameters are declared as OPTIONAL, then use
    if kunnr is supplied and
       name is supplied.
    endif.
    4) No. The statement you want to use
    insert vbeln and posnr into zstable.
    is only valid for FIELD-GROUPS, not for database tables. Please refer to [INSERT|http://help.sap.com/abapdocu/en/ABAPINSERT_DBTAB_SHORTREF.htm] for the correct syntax.
    You can alternatively use something like
    UPDATE zstable SET vbeln = ...
                       posnr = ...
                 WHERE ...
    This will update the fields VBELN and POSNR for all records fulfilling the WHERE condition. These must not be key fields; only non-key fields can be updated.
    Regards
    Marcin
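    Putting the points above together, here is a hedged sketch of how the function module body could look (the parameter and table names are the ones from the question; filling the audit fields AEDAT/AENAM is optional, and error handling is kept minimal):
    FUNCTION zfunction.
    *"  IMPORTING
    *"     VALUE(KUNNR) TYPE KUNNR OPTIONAL
    *"     VALUE(NAME)  TYPE DNAME OPTIONAL
      DATA wa_zstable TYPE zstable.
      IF kunnr IS NOT INITIAL AND name IS NOT INITIAL.
        wa_zstable-kunnr = kunnr.
        wa_zstable-name  = name.
        wa_zstable-aedat = sy-datum.   "change date (optional)
        wa_zstable-aenam = sy-uname.   "changed by (optional)
        MODIFY zstable FROM wa_zstable.
        IF sy-subrc = 0.
          COMMIT WORK.
        ENDIF.
      ENDIF.
    ENDFUNCTION.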

  • It's very urgent: how to insert data into database tables

    Hi All,
    I am very new to OAF.
    I have a requirement to insert data into a database table.
    Here the createPG page holds the data, and that data should be inserted into one custom table,
    but I don't know how to insert data into database tables.
    I wrote the code in the AM and CO as follows.
    In the AM I wrote:
    public void NewoperationManagerLogic()
    {
      ManagerCustomTableVOImpl vo1 = getManagerCustomTableVO1();
      OADBTransaction oadbt = getOADBTransaction();
      if (!vo1.isPreparedForExecution())
        vo1.executeQuery();
      Row row = vo1.createRow();
      vo1.insertRow(row);
    }
    In the createPG controller's processRequest:
    public void processRequest(OAPageContext pageContext, OAWebBean webBean)
    {
      super.processRequest(pageContext, webBean);
      ManagerInformationAMImpl am = (ManagerInformationAMImpl) pageContext.getApplicationModule(webBean);
      am.NewoperationManagerLogic();
    }
    In processFormRequest:
    public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
    {
      super.processFormRequest(pageContext, webBean);
      if (pageContext.getParameter("Submit") != null)
      {
        ManagerInformationAMImpl am = (ManagerInformationAMImpl) pageContext.getApplicationModule(webBean);
        am.getOADBTransaction().commit();
      }
    }
    Please help with an example (sample code).
    It's very urgent.
    Thanks in advance
    Seshu
    Edited by: its urgent on Dec 25, 2011 9:31 PM

    Hi,
    1.) You have to create an EO based on the custom table and then a VO based on this EO in order to save the values in the DB.
    2.) row.setNewRowState(Row.STATUS_INITIALIZED); is used to set the status of the row to initialized; this is required.
    3.) When you create the VO based on the EO, view attributes will be created in the VO and assigned to the fields to take care of the DB handling.
    You should go through the lab exercises shipped with your JDeveloper; there is an example Create Employee page that will resolve a number of your doubts.
    Thanks
    Pratap

  • Inserting Multiple Rows into Database Table using JDBC Adapter - Efficiency

    I need to insert multiple rows into a database table using the JDBC adapter (receiver).
    I understand the traditional way of repeating the statement multiple times, each having its <access> element. However, I am just wondering whether this might be performance-inefficient, as it might insert records one by one.
    Is there a way to ensure that the records are inserted into the table as a block, rather than record-by-record?

    Hi Bhavesh/Kanwaljit,
    If we have multiple ACCESS tags, then the connection to the database is made only once, but the data is inserted row by row.
    Why am I saying this?
    If we set logSQLStatement = true in the JDBC adapter, then in the case of multiple inserts we can see multiple statements:
    Insert into tablename(EMP_NAME,EMP_ID) VALUES('J','1000')
    Insert into tablename(EMP_NAME,EMP_ID) VALUES('J','2000')
    Doesn't this mean that the rows are inserted one by one?
    Correct me if I am wrong.
    This does not mean that the transaction is not guaranteed: either all the rows will be inserted or all rolled back.
    Regards,
    Sumit

  • Inserting new records into database table at runtime

    Hi all ,
    How can I insert new records into a database table at runtime when the user clicks Update?
    Thanks.

    Hi Sasikala,
    Just for your understanding, I am giving a sample code snippet which you can use to read the contents of your Table UI element and save the data to your database. Suppose you have a button on whose press you want to read the data from your screen's table and save it to the database; you can then proceed as shown below:
    1) Obtain the reference of your context node.
    2) Fetch all the data present in your table into an internal table using methods of if_wd_context_node
    3) Use your normal ABAP logic to update the database table with the data from your internal table
    In my example I have a node by the name SFLIGHT_NODE and under this I have the desired attributes from SFLIGHT. I am displaying these in an editable table, and the user presses a push button after making the necessary changes to the table's data. I then need to obtain the table's contents and save them to the database.
    data: node_sflight type ref to if_wd_context_node,
          elem_sflight type ref to if_wd_context_element,
          lt_elements  type wdr_context_element_set,
          stru_sflight type if_main=>element_sflight_node,
          it_flights   type if_main=>elements_sflight_node.
    "   navigate from <CONTEXT> to <SFLIGHT_NODE> via lead selection
        node_sflight = wd_context->get_child_node( name = 'SFLIGHT_NODE' ).
        lt_elements = node_sflight->get_elements( ).
    "   Get all the rows from the table for saving on to the database
        loop at lt_elements into elem_sflight.
          elem_sflight->get_static_attributes( importing static_attributes = stru_sflight ).
          append stru_sflight to it_flights.
        endloop.
    " Finally save the entries on to the database
        modify ZSFLIGHT99 from table it_flights.
        if sy-subrc eq 0.
    endif.
    However, a word of caution here: SAP does not recommend directly modifying database tables through SQL. You would preferably make use of a BAPI for the same. Try to go through Thomas Jung's comments in the thread "Modify the database table which is coming dynamically".
    Regards,
    Uday

  • Insert Records into database table.

    Hello Friends,
                         I am trying to insert a set of records from an internal table into a database table.
    INSERT SPFLI FROM TABLE ITAB ACCEPTING DUPLICATE KEYS.
    This is my statement, but it inserts only one record into the SPFLI table.
    Cheers
    Senthil

    Hi Senthil,
    use this statement:
    MOVE-CORRESPONDING itab TO spfli.
    spfli-mandt = sy-mandt.
    MODIFY spfli.
    Try this....
    Reward if useful.
    regards,
    sunil kairam.
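    If INSERT ... ACCEPTING DUPLICATE KEYS adds only one row, the likely reason is that the other rows collide with keys already present in SPFLI; those rows are silently skipped rather than raising an error. A set-oriented alternative is MODIFY, which updates existing keys and inserts new ones in one statement. A minimal sketch, assuming ITAB has the SPFLI line type:
    DATA itab TYPE STANDARD TABLE OF spfli.
    " ... fill itab ...
    MODIFY spfli FROM TABLE itab.
    IF sy-subrc = 0.
      COMMIT WORK.
    ENDIF.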

  • How can I insert into a table other than the default table in a form

    Hi,
    I want to insert into a table some field values from a form that is based on another table. I have written the insert code on successful submission of that form, but after submit it gives the following error:
    An unexpected error occurred: ORA-06502: PL/SQL: numeric or value error (WWV-16016)
    My code is like this
    declare
    l_trn_id number;
    l_provider_role varchar2(3);
    l_provider_id varchar2(10);
    begin
    l_trn_id := p_session.get_value_as_number(p_block_name=>'DEFAULT',p_attribute_name=>'A_TRANSACTION_ID');
    l_provider_id := p_session.get_value_as_varchar2(p_block_name=>'DEFAULT',p_attribute_name=>'A_PROVIDER1');
    l_PROVIDER_ROLE := p_session.get_value_as_varchar2(p_block_name=>'DEFAULT',p_attribute_name=>'A_PROVIDER_ROLE1');
    if (l_provider_role is not null) and (l_provider_id is not null) then
    insert into service_provider_trans_records(service_provider_id,transaction_id,role_type_id)
    values(l_provider_id, l_trn_id, l_provider_role);
    commit;
    end if;
    end;
    'PROVIDER1' and 'PROVIDER_ROLE1' are not table fields.
    How can I do that, and why does this error occur? Any idea?
    Thanks
    Sumita

    Hi,
    When do you get this error? Is it while running or while creating the form?
    Here is sample code which inserts a non-database column DUMMY into a table called DUMMY. This is done in the on-success procedure.
    declare
    l_dummy varchar2(1000);
    begin
    l_dummy := p_session.get_value_as_varchar2(p_block_name=>'DEFAULT',
    p_attribute_name=>'A_DUMMY');
    insert into sjayaram903_1g.dummy values(l_dummy);commit;
    end;
    Please check in your case if the size of the local variable is enough to hold the values being returned.
    Thanks,
    Sharmila

  • Inserting into a table which is created "on the fly" from a trigger

    Hello all,
    I am trying to insert into a table from a trigger in Oracle Forms. The table name, however, is entered by the user in a form item.
    here is what the insert looks like:
    insert into :table_name
    values (:value1, :value2);
    The problem is that Forms does not recognize :table_name. If I replace :table_name with an actual database table, it works fine. However, I need to insert into a table whose name comes from the Oracle Forms item.
    By the way, the table is built on the fly using a procedure before I try to insert into it.
    Any suggestions on how I can do that? My code in the trigger is:
    declare
    begin
      dm_drop_tbl(:table_name, 'table');      -- a call to an external procedure to drop the table
      dm_create_tbl(:table_name, 'att1', 'att2');
      insert into :table_name
      values (:value1, :value2);
    end;
    This gives me the error:
    encounter "" when the symbol expecting one.....

    Hi,
    You should use the FORMS_DDL built-in procedure. Read the online Forms documentation...
    Simon

  • Problem while inserting into a table which has ManyToOne relation

    Problem while inserting into a table (Files) which has a ManyToOne relation with another table (Folders), involving an attribute that is part of both the primary key and the foreign key, in JPA 1.0.
    Relevant code:
    Entities:
    public class Files implements Serializable {
    @EmbeddedId
    protected FilesPK filesPK;
    private String filename;
    @JoinColumns({
    @JoinColumn(name = "folder_id", referencedColumnName = "folder_id"),
    @JoinColumn(name = "uid", referencedColumnName = "uid", insertable = false, updatable = false)})
    @ManyToOne(optional = false)
    private Folders folders;
    public class FilesPK implements Serializable {
    private int fileId;
    private int uid;
    public class Folders implements Serializable {
    @EmbeddedId
    protected FoldersPK foldersPK;
    private String folderName;
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "folders")
    private Collection<Files> filesCollection;
    @JoinColumn(name = "uid", referencedColumnName = "uid", insertable = false, updatable = false)
    @ManyToOne(optional = false)
    private Users users;
    public class FoldersPK implements Serializable {
    private int folderId;
    private int uid;
    public class Users implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer uid;
    private String username;
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "users")
    private Collection<Folders> foldersCollection;
    I left out @Basic & @Column annotations for sake of less code.
    EJB method
    public void insertFile(String fileName, int folderID, int uid){
    FilesPK pk = new FilesPK();
    pk.setUid(uid);
    Files file = new Files();
    file.setFilename(fileName);
    file.setFilesPK(pk);
    FoldersPK folderPk = new FoldersPK(folderID, uid);
         // My understanding is that it should automatically handle folderId in the files table,
    // but it does not…
    file.setFolders(em.find(Folders.class, folderPk));
    em.persist(file);
    It is giving this error:
    Internal Exception: java.sql.SQLException: Field 'folderid' doesn't have a default value
    Error Code: 1364
    Call: INSERT INTO files (filename, uid, fileid) VALUES (?, ?, ?)
           bind => [hello.txt, 1, 0]
    It is not even considering folderId while inserting into the DB.
    However, it works fine when I add a folderId variable to the Files entity and change insertFile like this:
    public void insertFile(String fileName, int folderID, int uid){
    FilesPK pk = new FilesPK();
    pk.setUid(uid);
    Files file = new Files();
    file.setFilename(fileName);
    file.setFilesPK(pk);
    file.setFolderId(folderID); // added line
    FoldersPK folderPk = new FoldersPK(folderID, uid);
    file.setFolders(em.find(Folders.class, folderPk));
    em.persist(file);
    My question is: is this behavior expected, or is it a bug?
    Is it required to add the column variable separately even when the entity has a ManyToOne reference to the foreign entity?
    I used MySQL 5.1 for the database, then generated the entities using TopLink, JPA 1.0, GlassFish v2.1.
    I've also tested this using EclipseLink and got the same error.
    Please provide some pointers.
    Thanks

    Hello,
    What version of EclipseLink did you try? This looks like bug https://bugs.eclipse.org/bugs/show_bug.cgi?id=280436 that was fixed in EclipseLink 2.0, so please try a later version.
    You can also try working around the problem by making both fields writable through the reference mapping.
    Best Regards,
    Chris

  • Record not inserting into the table through Forms 10g

    Hi all,
    I have created a form in 10g (10.1.2.0.2) based on just one table that has 4 columns (col1, col2, col3, col4).
    Here col1, col2, and col3 are VARCHAR2 and col4 is DATE, and all the columns are NOT NULL (there are no primary or foreign key constraints, which means duplicates are allowed).
    My form contains 2 blocks, where block1 has one text item (col1) and 3 buttons (Delete, Save, Exit).
    Block2 is a database block and has col2, col3, col4 in a tabular layout frame displaying 10 records.
    When the form is opened, the cursor has to be in block1.col1 for querying. Here I enter a value in col1, and when I click on col2 in block2, the EXECUTE_QUERY I put in WHEN-NEW-BLOCK-INSTANCE of block2 runs and displays the records.
    The block2 properties are: not updatable, insertable, and query allowed.
    Everything works well up to this point. But when I want to insert another record into the table from block2, I navigate all the way down to the last empty record and enter the new values for col2, col3, and col4. Ctrl+S then displays the message "FRM-40400: Transaction complete: 1 record applied and saved." But the record is actually not inserted into the table.
    I also disabled col4 by setting its Enabled property to No, since while inserting a new record the date has to be populated into it and it shouldn't be changed by the user. I populate sysdate into the new record by setting the Initial Value property to $$DATE$$.
    Another requirement I could not work around is that col3 should also be populated with the username of the user while inserting.
    please help me...

    Hi Sarah,
    I do not want to update the existing records, so I set Update Allowed to No in the Property Palette for the items in block2.
    Do I have to set this property at the block level also?
    I'm inserting a new record here.
    Edited by: Charan on Sep 19, 2011 8:48 AM

  • Fetch data from one table and insert into two tables in desired format

    I have data similar to the following in a table and it is not normalized. The groupID is being used to group two records of similar nature.
    DECLARE @OldDoc TABLE (oldDocID INT, groupID INT, deptID INT)
    INSERT INTO @OldDoc (oldDocID, groupID, deptID) VALUES (1, NULL, 111),(2,NULL,111),(3,1,111),(4,NULL,333),(5,1,222),(6,NULL,333),(7,2,222),(8,2,333),(9,NULL,111),(10,3,222),(11,NULL,333),(12,3,444)
    I need to process the data from the above table (@OldDoc) and write into two new tables (@NewDoc and @NewDocGroup) as follows.
    oldDocID should be stored as newDocID when inserting to @NewDoc table. Only records with groupID NULL and one record (first one) per group should be considered (For example, oldDocID 5 is not considered as 3 and 5 belong to the same groupID 1) for insertion. 
    DECLARE @NewDoc TABLE (newDocID INT)
    INSERT INTO @NewDoc (newDocID) VALUES (1),(2),(3),(4),(6),(7),(9),(10),(11)
    All records from @OldDoc should be considered for insertion into @NewDocGroup table. OldDocID is inserted as NewDocID and deptID is as-is. Instead of groupID, the ID of the first record in the 
    group should be considered as parentNewDocID (For example, 3 is considered as parentNewDocID for newDocID 5 as 3 and 5 belong to the same groupID in @OldDoc table) for the newDocID.
    DECLARE @NewDocGroup TABLE (newDocID INT, parentNewDocID INT, deptID INT)
    INSERT INTO @NewDocGroup (newDocID, parentNewDocID, deptID) VALUES (1,1,111),(2,2,111),(3,3,111),(4,4,333),(5,3,222),(6,6,333),(7,7,222),(8,7,333),(9,9,111),(10,10,222),(11,11,333),(12,10,444)
    How do I accomplish the above using SQL ? Thanks for the help.

    >> I have similar to the following data in a table and it is not normalized. The group_id is being used to group two records [sic] of similar nature. <<
    Rows are not records. Tables have to have a key by definition. You do not do math with identifiers, so they should not be numeric. Let's ignore that error for now. In short, you are posting garbage. If you had followed Forum Netiquette, would you have posted
    this? 
    CREATE TABLE Old_Documents
    (old_doc_id INTEGER NOT NULL PRIMARY KEY, 
     group_id INTEGER, 
     dept_nbr INTEGER NOT NULL
       REFERENCES Departments (dept_nbr));
    INSERT INTO Old_Documents(old_doc_id, group_id, dept_nbr) 
    VALUES  (1, NULL, 111), 
    (2, NULL, 111), 
    (3, 1, 111), 
    (4, NULL, 333), 
    (5, 1, 222), 
    (6, NULL, 333), 
    (7, 2, 222), 
    (8, 2, 333), 
    (9, NULL, 111), 
    (10, 3, 222), 
    (11, NULL, 333), 
    (12, 3, 444);
    >> I need to process the data from the above table (Old_Documents) and write into two new tables (New_Documents and New_Documents_Groups) as follows. <<
    Just like punch cards and mag tape data processing! Being old and being new are a status, not another kind of entity. But that is how mag tapes work. And you even use the verb "fetch" from tape files. This design flaw is called  attribute splitting.
    Do you have a Male_Personnel and Female_Personnel table? NO! It is just Personnel! 
    >> old_doc_id should be stored as new_doc_id when inserting to New_Documents table. Only records [sic] with group_id NULL and one record [sic] (first [sic; no ordering in a table] one) per group should be considered (For example, old_doc_id 5 is not considered
    as 3 and 5 belong to the same group_id =1) for insertion. <<
    Think about your punch card mindset. Why did you physically materialize that redundant New_Documents table? Let me answer that: this is how you work with punch cards! In SQL we use a VIEW:
    CREATE VIEW New_Documents (new_doc_id)
    AS 
    SELECT old_doc_id 
      FROM Old_Documents;
    >> All records [sic] from Old_Documents should be considered for insertion into New_Documents_Groups table. The old_doc_id is inserted as new_doc_id and dept_nbr is as-is. Instead of group_id, the ID [sic: which identifier??] of the first [sic: tables
    have no ordering like a deck of punch cards] record [sic] in the group should be considered as parent_new_doc_id (For example, 3 is considered as parent_new_doc_id for new_doc_id 5 as 3 and 5 belong to the same group_id in Old_Documents table) for the new_doc_id.
    <<
    Why not use 5 as the parent? My guess is that you are trying to form equivalence classes. See:
    https://www.simple-talk.com/content/print.aspx?article=2020
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL

  • Trigger a process when a record is inserted into database

    Hi, could someone tell me how to do the following (if it's possible):
    when a record is inserted into a table in a database,
    I want it to somehow trigger an action which will put an item into the user's To-Do inbox in Workspace.
    The user can then open the form in the To-Do and use it to view the data in the newly inserted record.
    This user's login ID is part of the inserted record.
    thanks

    Hmm, a trigger? I don't know of a way.
    But I suppose you could have a process with a timer that periodically queries the database for changes and then does the same thing.
    You could also perhaps write a service and then use the Java API to start a process.
    Maybe take a look here
    http://blogs.adobe.com/livecycle/2010/12/how-to-invoke-a-livecycle-process-periodically.html

  • Insert into two tables, how to insert multiple slave records

    Hi, I have a problem with the Insert Into Two Tables wizard.
    The wizard works fine and I can add my records, but I need to enter multiple slave-table records.
    My database:
    table: paper
    `id_paper` INTEGER(11) NOT NULL AUTO_INCREMENT,
    `make` VARCHAR(20) COLLATE utf8_general_ci NOT NULL DEFAULT '',
    `model` VARCHAR(20) COLLATE utf8_general_ci NOT NULL DEFAULT '',
    `gsm` INTEGER(11) NOT NULL,
    PRIMARY KEY (`id_paper`)
    table: paper_data
    `id_paper_data` INTEGER(11) NOT NULL AUTO_INCREMENT,
    `id_paper` INTEGER(11) NOT NULL,
    `value` DOUBLE(15,3) NOT NULL,
    `nanometer` INTEGER(11) NOT NULL,
    PRIMARY KEY (`id_paper_data`)
    I need to add multiple fields "value" and "nanometer"
    Current form looks like this:
    Make:
    Model:
    Gsm:
    Value:
    nanometer:
    I need it to look like this:
    Make:
    Model:
    Gsm:
    Value:
    nanometer:
    Value:
    nanometer:
    Value:
    nanometer:
    Value:
    nanometer:
    and so on.
    The field "id_paper" in table paper_data needs to get same id for entire transaction. Also how do I set default values for each field "nanometer" on my form the must be different (370,380,390 etc)?
    Thanks.

    you can find an answer here: http://209.85.129.132/search?q=cache:PzQj57dsWmQJ:www.experts-exchange.com/Web_Development/Software/Macromedia_Dreamweaver/Q_23713792.html+Insert+Into+Two+Tables+Wizard&cd=3&hl=lt&ct=clnk&gl=lt
    This is a copy of the post:
    Hi experts,
    I'm using ADDT to design a page that needs to insert one record into a master ALBUMS table, along with three records into a GENRES table, all linked by the primary, auto-incremented ALBUMS.ALBUM_ID.
    I've tried many different ways of combining the Insert into Multiple Tables wizard and the Insert Record wizard with Link Transactions, all with no luck. Either only the album info gets inserted, or I get an 'ALBUM_ID cannot be null' error from MySQL. Here is the structure of the tables:
    ALBUMS
    ALBUM_ID, INT(11), Primary, Auto_Increment
    alb_name, varchar
    alb_release, YEAR
    USER_ID, int
    alb_image, varchar
    GENRES
    ALBUM_ID, int, NOT NULL
    GENRE_ID, int, NOT NULL
    ID, int, primary, auto-increment
    Many thanks in advance...
    ==========================================================================================
    //remove this line if you want to edit the code by hand
    function Trigger_LinkTransactions(&$tNG) {
      global $ins_genres;
      $linkObj = new tNG_LinkedTrans($tNG, $ins_genres);
      $linkObj->setLink("ALBUM_ID");
      return $linkObj->Execute();
    }
    function Trigger_LinkTransactions2(&$tNG) {
      global $ins_genres2;
      $linkObj = new tNG_LinkedTrans($tNG, $ins_genres2);
      $linkObj->setLink("ALBUM_ID");
      return $linkObj->Execute();
    }
    function Trigger_LinkTransactions3(&$tNG) {
      global $ins_genres3;
      $linkObj = new tNG_LinkedTrans($tNG, $ins_genres3);
      $linkObj->setLink("ALBUM_ID");
      return $linkObj->Execute();
    }
    //end Trigger_LinkTransactions trigger
    //-----------------------Different Section---------------------//
    // Make an insert transaction instance
    //Add Record Genre 1
    $ins_genres = new tNG_insert($conn_MySQL);
    $tNGs->addTransaction($ins_genres);
    $ins_genres->registerTrigger("STARTER", "Trigger_Default_Starter", 1, "VALUE", null);
    $ins_genres->registerTrigger("BEFORE", "Trigger_Default_FormValidation", 10, $detailValidation);
    $ins_genres->setTable("genres");
    $ins_genres->addColumn("GENRE_ID", "NUMERIC_TYPE", "POST", "GENRE_ID");
    $ins_genres->addColumn("ALBUM_ID", "NUMERIC_TYPE", "VALUE", "");
    $ins_genres->setPrimaryKey("ID", "NUMERIC_TYPE");
    // Add Record Genre 2
    $ins_genres2 = new tNG_insert($conn_MySQL);
    $tNGs->addTransaction($ins_genres2);
    $ins_genres2->registerTrigger("STARTER", "Trigger_Default_Starter", 1, "VALUE", null);
    $ins_genres2->setTable("genres");
    $ins_genres2->addColumn("GENRE_ID", "NUMERIC_TYPE", "POST", "GENRE_ID2");
    $ins_genres2->addColumn("ALBUM_ID", "NUMERIC_TYPE", "VALUE", "");
    $ins_genres2->setPrimaryKey("ID", "NUMERIC_TYPE");
    // Add Record Genre 3
    $ins_genres3 = new tNG_insert($conn_MySQL);
    $tNGs->addTransaction($ins_genres3);
    $ins_genres3->registerTrigger("STARTER", "Trigger_Default_Starter", 1, "VALUE", null);
    $ins_genres3->setTable("genres");
    $ins_genres3->addColumn("GENRE_ID", "NUMERIC_TYPE", "POST", "GENRE_ID3");
    $ins_genres3->addColumn("ALBUM_ID", "NUMERIC_TYPE", "VALUE", "");
    $ins_genres3->setPrimaryKey("ID", "NUMERIC_TYPE");
    =========================================================================================
    Hi Aaron,
    Nice job!!
    $ins_albums->registerTrigger("AFTER", "Trigger_LinkTransactions2", 98);
    $ins_albums->registerTrigger("AFTER", "Trigger_LinkTransactions3", 98);
    These lines, right? :-( Sorry I forgot to mention that
    Thanks a lot for the grading!

  • Constantly inserting into large table with unique index... Guidance?

    Hello all;
    So here is my world. We have central to our data monitoring system an oracle database running Oracle Standard One (please don't laugh... I understand it is comical) licensing.
    This DB is about 1.7 TB of small record data.
    One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
    This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
    The data is collect in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
    This table has two indexes, unique (sourceid, timestamp), and a non unique (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl)
    About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
    Now, what we are observing about the inserts into this table:
    - Inserts are much slower based on a "wider" cardinality of the "sourceid" of the data being inserted. What I mean is that 10,000 inserts for 10,000 sourceid (regardless of timestamp) is MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me, as I understand it that oracle must inspect more branches of the index for uniqueness, and more different physical blocks will be used to store the new index data. There are about 2 million unique sourceId across our system.
    - Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding here is that Oracle is attempting to hold the leaves of these indexes perpetually in its buffers. Our system does have a 99% cache hit rate. However, we are seeing Oracle requiring roughly 10GB of extra RAM per quarter to six months; we're at about 50GB of RAM just for Oracle already.
    - If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
    We have the following assumption: Partitioning this table based on good logical grouping of sourceid, and then timestamp, will help reduce the work required by oracle to verify uniqueness of data, reducing the amount of data that must be cached by oracle, and allow us to handle our "older than 3 month" at a partition level, greatly reducing table and index fragmentation.
    Based on our hardware, its going to be about a million dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, total oracle costs. This is going to be a huge pill for our company to swallow.
    What I am looking for guidance / help on: should we really expect partitioning to make a difference here? I want to get back that 10x performance difference we see between a fresh empty system and our current production system. I also want to limit Oracle's need for 10GB more buffer cache each quarter (the cardinality of sourceid does NOT grow by that much per quarter... maybe 1000s per quarter, out of 2 million).
    Also, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level system has been able to handle everything we've thrown at it so far! :)
    Alright all, thank you very much for listening, and I look forward to hear the opinions of the experts.

    Hello,
    Here is a link to a blog article that will give you the right questions and answers which apply to your case:
    http://jonathanlewis.wordpress.com/?s=delete+90%25
    As you are deleting 80% of your data (old data) based on a timestamp, don't think at all about using the direct-path insert /*+ append */ suggested by one of the contributors to this thread. A direct-path load will not re-use any free space made by the delete. You have two indexes:
    (a) a unique index (sourceid, timestamp)
    (b) an index (create time)
    Your delete logic (based on arrival time) will smash your indexes, since you are always deleting from the left-hand side of the index; this produces what we call a right-hand index - in other words, the scattering of the index keys per leaf block is probably catastrophic (there is an Oracle internal function named sys_op_lbid that will let you verify this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:
        ALTER INDEX indexname COALESCE;
    This coalesce should be considered on a regular basis (maybe after each 80% delete). You also seem to have several sourceid values for one timestamp. If so, you should think about compressing this index:
        CREATE INDEX indexname ON tablename (sourceid, timestamp) COMPRESS;
    or
        ALTER INDEX indexname REBUILD COMPRESS;
    You only need to do this once. Your index will then be smaller and may be more efficient than it is now. The index compression adds extra CPU work during an insert, but it might help improve the overall insert process.
    Best Regards
    Mohamed Houri
