Table Update using NOLOGGING/Hint

Hi all
We can insert records using NOLOGGING (direct-path insert) so that they are not written to the archive logs.
We would like to use the same facility for some large routine table updates, which in no way affect the integrity of the database.
This update-with-NOLOGGING facility was not available in earlier versions.
The alternative of temporarily switching the database to NOARCHIVELOG mode -> running the update statement -> switching the database back to ARCHIVELOG mode is not acceptable, as this is a production database where certain users are always connected with their routine small entry/update jobs.
I want to know whether this has been enhanced in Oracle 10g, or whether there is any workaround by way of giving some hint on the update statement, as large table updates that generate excessive archive logs are really not desired in certain cases.
Suresh Bansal

Mr. Sayed
Thanks for the reply. The insert case was mentioned only as the workaround that is available today, and to link it to the subject of this discussion. The point is: when Oracle has provided a workaround for inserting records without logging, why have they not provided the same for updates?
Please advise some workaround for the large-update problem.
You wrote: "In Oracle 10g, the default is 'database force logging'. Hence, even for a nologging transaction, Oracle will force logging to be created." I would request you to elaborate on this. We installed Oracle 10g at one site and the database runs in NOARCHIVELOG mode, which is the default installation on Windows 2003 Server. I could not understand what you mean by a default of 'database force logging'.
Suresh Bansal
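
For reference, an UPDATE statement always generates redo, even on a table marked NOLOGGING; only direct-path operations (direct-path INSERT with the APPEND hint, CREATE TABLE ... AS SELECT, direct-path SQL*Loader) can honor the NOLOGGING attribute, and a database placed in FORCE LOGGING mode overrides even those. A common workaround for a one-off bulk change is therefore to rebuild the table with a NOLOGGING CTAS instead of updating it in place. A minimal sketch, with hypothetical table and column names:

    -- Hypothetical names: BIG_TAB is the table to be mass-updated.
    -- CTAS is a direct-path operation, so with NOLOGGING it generates
    -- minimal redo; a plain UPDATE would be fully logged.
    CREATE TABLE big_tab_new NOLOGGING AS
      SELECT pk_col,
             CASE WHEN status = 'OLD' THEN 'NEW' ELSE status END AS status
      FROM   big_tab;
    -- Recreate indexes, constraints and grants on BIG_TAB_NEW, then swap:
    --   DROP TABLE big_tab;
    --   RENAME big_tab_new TO big_tab;

The trade-offs: the rebuilt data cannot be recovered from the archive logs until the next backup, and the swap needs a short outage on the table.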

Similar Messages

  • LIKP table update using IDOC_INPUT_DESADV1

    Hi,
    Please help me understand when and where the LIKP table is updated by the FM IDOC_INPUT_DESADV1. I have debugged the program several times, but I could not find the update part.
    Further to the above, where exactly is the IDoc number generated in this FM?
    This is very urgent. Kindly reply at the earliest.

    Ubay
    Refer to this code in the IDoc function module IDOC_INPUT_DESADV1. The delivery order must be getting created through this function module:
      CALL FUNCTION 'GN_DELIVERY_CREATE'
           EXPORTING
                VBSK_I        = S_VBSK
                NO_COMMIT     = TRUE
                IF_SYNCHRON   = ' '             "INS_HP_338221
                IF_NO_DEQUE   = 'X'             "n_632020
           IMPORTING
                VBSK_E        = S_VBSK
           TABLES
                XKOMDLGN      = T_DLGN
                XVBFS         = T_VBFS
                XVBLS         = T_VBLS
                XVERKO        = T_VSEK
                XVERPO        = T_VSEP
                IT_GN_HUSERNR = T_HUSN
                IT_GN_SERNR   = T_SERN
           EXCEPTIONS
                ERROR_MESSAGE = 1
                OTHERS        = 2.
    Thanks
    Amol G. Lohade

  • Table Update using SQL Loader

    Hi All,
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    I am working on loading a file using SQL*Loader.
    I have loaded all the records (20 million) into a table which has 30 columns.
    Issue: Now I have received a new layout for the same data file, which has 5 new columns at the end that we didn't load.
    I have created the CTL file with the new columns. Can I update only those 5 new columns in the table? Is that possible?
    Or should I truncate and do it from scratch? Please suggest.
    Thanks

    SQL*Loader does not update; it only inserts. You could load the data into a staging table and then use SQL to update, but that would be slower than just starting over. So you should simply reload the whole file, including the new columns, using REPLACE instead of APPEND in the SQL*Loader control file. That will overwrite any existing data.
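
    If the staging-table route were preferred instead (for example, to avoid reloading all 20 million rows), the update step could be a single set-based MERGE. A minimal sketch with hypothetical table and column names, assuming the file carries a unique key:

      -- Hypothetical names: TARGET_TAB is the already-loaded table,
      -- STG_TAB holds the key plus the 5 new columns from the new layout.
      MERGE INTO target_tab t
      USING stg_tab s
      ON (t.key_col = s.key_col)
      WHEN MATCHED THEN UPDATE
        SET t.new_col1 = s.new_col1,
            t.new_col2 = s.new_col2,
            t.new_col3 = s.new_col3,
            t.new_col4 = s.new_col4,
            t.new_col5 = s.new_col5;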

  • Table update using reports

    Hello friends,
    I have written a program to convert an internal table into XML, and I have provided an option to download it.
    1. Once the user downloads it, the invoice number and date should be stored in a newly created table.
    2. If they try to download it again, it should show an error like "This has already been downloaded".
    How do I update the table from a report?
    Could someone please help me with this?
    Regards,
    Vijay Vikram

    Hi Naidu,
    Hope you have created a database table (a 'Z' table) to maintain the downloaded invoice number and date.
    You can then store the invoice number and date in that table when the download is successful, i.e. when sy-subrc equals 0. If the user tries to download the same invoice again, a SELECT query can check whether that invoice already exists; if it does, show the error message:
    " Hypothetical names: ZDOWN is the log table, LV_INV_NO the current
    " invoice number, LS_ZDOWN a work area with invoice number and date.
    SELECT SINGLE inv_no FROM zdown INTO lv_inv_no_db
      WHERE inv_no = lv_inv_no.
    IF sy-subrc <> 0.                       " not yet downloaded
      CALL FUNCTION 'GUI_DOWNLOAD'
        EXPORTING
          filename = lv_filename
        TABLES
          data_tab = it_xml
        EXCEPTIONS
          OTHERS   = 1.
      IF sy-subrc <> 0.
        MESSAGE 'Download failed' TYPE 'E'.
      ELSE.
        MODIFY zdown FROM ls_zdown.         " store invoice no and date
      ENDIF.
    ELSE.
      MESSAGE 'This has already been downloaded' TYPE 'E'.
    ENDIF.

  • Table update using a function exit in CC01

    In the CC01 transaction, using a screen exit, I have provided an input field on the screen. The added fields are in table AENR.
    But when I save the transaction, the custom fields are cleared.
    How do I get these fields updated in the table?
    How do I use the function exit (provided with the screen exit) to update the fields in the table?
    Thanks,
    Amit

    In fact you have to create it:
    - Transaction SE11: display table AENR;
    - Go to the end of the structure of AENR, where you should see the include structure CI_AENR; double-click it and create it;
    - After creating it, insert your new fields.
    In the enhancement:
    - Create screen 100 using structure AENR;
    - Implement the 2 user exits:
    1) To import the data from the standard program to the screen exit:
    EXIT_SAPMC29C_001
    Here insert the code:
    MOVE USERDATA TO AENR.
    2) To export the data from the screen exit to the standard program:
    EXIT_SAPMC29C_002
    Here insert the code:
    MOVE AENR TO USERDATA.
    Max

  • Table update using FM

    Hi,
    Which DB table is updated by the FM IDOC_CREATE_ON_DATABASE for ARTMAS?
    Thanks,
    jo
    Moderator message: please do some research before asking.
    Edited by: Thomas Zloch on Mar 15, 2011 11:55 AM

    Hi,
    We do not want to use direct UPDATE or INSERT statements on the table, so we are looking for an alternative FM or BAPI to update it. The records we are trying to update are outbound delivery details.
    Regards,
    Raksha

  • Script required for base table update using the XMLSTORE package

    Hi, can anybody give me some helpful suggestions on how to update a base table using the DBMS_XMLSTORE package?
    I created a simple script for the Employee table and am able to do basic operations like insert and update on the table.
    The query is as follows:
    DECLARE
      insCtx DBMS_XMLSTORE.ctxType;
      rows   NUMBER;
      xmlDoc CLOB :=
        '<ROWSET>
           <ROW num="1">
             <EMPLOYEE_ID>922</EMPLOYEE_ID>
             <SALARY>1801</SALARY>
             <HIRE_DATE>17-DEC-2007</HIRE_DATE>
             <JOB_ID>ST_CLERK</JOB_ID>
             <EMAIL>RAUSSJACK</EMAIL>
             <LAST_NAME>JACK</LAST_NAME>
             <DEPARTMENT_ID>20</DEPARTMENT_ID>
           </ROW>
           <ROW>
             <EMPLOYEE_ID>923</EMPLOYEE_ID>
             <SALARY>2001</SALARY>
             <HIRE_DATE>31-DEC-2005</HIRE_DATE>
             <JOB_ID>ST_CLERK</JOB_ID>
             <EMAIL>PATHAK</EMAIL>
             <LAST_NAME>PRATIK</LAST_NAME>
             <DEPARTMENT_ID>20</DEPARTMENT_ID>
           </ROW>
         </ROWSET>';
    BEGIN
      insCtx := DBMS_XMLSTORE.newContext('EMPLOYEES');  -- get saved context
      DBMS_XMLSTORE.clearUpdateColumnList(insCtx);      -- clear the update settings
      -- Set the columns to be updated as a list of values
      DBMS_XMLSTORE.setUpdateColumn(insCtx, 'EMPLOYEE_ID');
      DBMS_XMLSTORE.setUpdateColumn(insCtx, 'SALARY');
      DBMS_XMLSTORE.setUpdateColumn(insCtx, 'HIRE_DATE');
      DBMS_XMLSTORE.setUpdateColumn(insCtx, 'JOB_ID');
      DBMS_XMLSTORE.setUpdateColumn(insCtx, 'EMAIL');
      DBMS_XMLSTORE.setUpdateColumn(insCtx, 'LAST_NAME');
      DBMS_XMLSTORE.setUpdateColumn(insCtx, 'DEPARTMENT_ID');
      -- Insert the doc
      rows := DBMS_XMLSTORE.insertXML(insCtx, xmlDoc);
      --COMMIT;
      DBMS_OUTPUT.put_line(rows || ' rows inserted.');
      -- Close the context
      DBMS_XMLSTORE.closeContext(insCtx);
    END;
    /
    SELECT employee_id, last_name FROM employees WHERE employee_id = 114;
    DECLARE
      updCtx DBMS_XMLSTORE.ctxType;
      rows   NUMBER;
      xmlDoc CLOB :=
        '<ROWSET>
           <ROW>
             <EMPLOYEE_ID>114</EMPLOYEE_ID>
             <LAST_NAME>PRABHU</LAST_NAME>
           </ROW>
         </ROWSET>';
    BEGIN
      updCtx := DBMS_XMLSTORE.newContext('EMPLOYEES');  -- get the context
      DBMS_XMLSTORE.clearUpdateColumnList(updCtx);      -- clear update settings
      -- Specify that column EMPLOYEE_ID is a "key" to identify the row to update
      DBMS_XMLSTORE.setKeyColumn(updCtx, 'EMPLOYEE_ID');
      rows := DBMS_XMLSTORE.updateXML(updCtx, xmlDoc);  -- update the table
      DBMS_XMLSTORE.closeContext(updCtx);               -- close the context
      COMMIT;
    END;
    /
    Now I want a little modification of the above query: instead of passing static XML, I want it to pick up the dynamic XML from the web and use DBMS_XMLSTORE for the update.
    Also, for complex XML having 2-3 levels, how does this query need to be changed? As I am new to this Oracle utility, any help from an expert will be a great help for me.
    Thanks

    "Now I want a little modification of the above query ... I want it to pick up the dynamic XML from the web" - From a web service?
    You'll need the UTL_HTTP or HttpUriType interface to send the request and receive the XML response.
    Search the forum; there are already a lot of useful examples available.
    "Also, for complex XML having 2-3 levels, how does this query need to be changed?" - DBMS_XMLStore is OK for readily processing a canonical XML format (a /ROWSET/ROW/COLUMN structure or the like).
    However, if you have to deal with a more complex structure, you either have to:
    - use a target object table that matches the XML structure, or
    - preprocess the input document using XSLT to transform it into the canonical format.
    That's why DBMS_XMLStore is not appropriate for multilevel documents, especially if they contain nested repeating groups.
    In that case, XMLTable is a more flexible way of parsing the XML and processing it relationally at the same time.
    For more help, please post a new thread in the {forum:id=34} forum, with the following information :
    - database version (select * from v$version)
    - a sample XML document (the complex one)
    - DDL of your target table
    - mapping between XML elements and columns (ie which tag goes to which column?)
    - an XML schema (if you have one)
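
    As an illustration of the XMLTable alternative mentioned above, here is a minimal sketch that shreds a canonical document relationally (the document literal is abbreviated to two columns):

      SELECT x.employee_id, x.last_name
      FROM   XMLTABLE('/ROWSET/ROW'
               PASSING xmltype('<ROWSET><ROW><EMPLOYEE_ID>114</EMPLOYEE_ID><LAST_NAME>PRABHU</LAST_NAME></ROW></ROWSET>')
               COLUMNS employee_id NUMBER       PATH 'EMPLOYEE_ID',
                       last_name   VARCHAR2(25) PATH 'LAST_NAME') x;

    Nested levels can be handled by chaining XMLTABLE calls, passing an inner XML fragment (an XMLTYPE column) from the outer call to the inner one.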

  • Table update using ALV

    Dear Expert,
    While updating the JEST table against the order number, the data in the table is not getting updated. When I refresh the list, it again shows the order number. Can anyone please provide me a sample ALV program that displays a list of orders,
    where selecting an order gets that order approved?
    A sample program would be very helpful.
    Regards,
    Shakti.

    Hi Sapbond007,
    You have to create an ALV event handler class to write back the data that has been changed in your ALV.
    Please refer to the example programs BCALV_EDIT_* (I think BCALV_EDIT_04 should be the correct one),
    or you can refer to this code:
    * Lock the table
      CALL FUNCTION 'ENQUEUE_E_TABLE'
        EXPORTING
          mode_rstable   = 'E'
          tabname        = p_table
        EXCEPTIONS
          foreign_lock   = 1
          system_failure = 2
          OTHERS         = 3.
      IF sy-subrc = 0.
    *   Modify the database table with these changes
        MODIFY (p_table) FROM TABLE <dyn_tab_temp>.
        REFRESH <dyn_tab_temp>.
    *   Unlock the table
        CALL FUNCTION 'DEQUEUE_E_TABLE'
          EXPORTING
            mode_rstable = 'E'
            tabname      = p_table.
      ENDIF.
    Regards
    Saurabh Goel

  • Updating the ADRT table without using a direct update statement

    Hi,
    Can anyone guide me on how to update the REMARK field in the ADRT table without using a direct UPDATE statement? It would be helpful if anyone could tell me a BAPI or a function module with sample code.

    Hi,
    SZA0 - Business Address Services (w/o Dialog) - provides these function modules:
    ADDR_PERSONAL_UPDATE
    ADDR_PERSON_UPDATE
    ADDR_PERS_COMP_UPDATE
    ADDR_UPDATE
    These are the four function modules which will update the Business Address Services data. Check whether any of them is helpful for you.

  • How Update Custom fields for EABL DB table by using BAPI_MTRREADDOC_UPLOAD

    Hi friends,
    How can I update custom fields of DB table EABL by using BAPI_MTRREADDOC_UPLOAD?
    For the parameter EXTENSIONIN of type BAPIPAREX I am passing the structure BAPI_TE_EABL.
    In that structure, MRIDNUMBER holds the EABL-ABLBELNR field value,
    ZMESSAGE some text of 30 characters, and
    ZSKIPC 2 characters.
    But I am not able to update that data for the MRIDNUMBER (ABLBELNR) in DB table EABL.
    I am getting a RETURN structure message of type E:
    "Upload interim entries: Maintain one table only"
    Can anyone provide me a solution?
    Thanks in Advance
    Ganesh

    Hi,
    Refer to the following SAP notes:
    1. Note 485557 - BAPI_REQUISITION_CREATE: 'EXTENSIONIN' customer enhancements
    2. Note 584902 - BAPI_REQUISITION_CHANGE: ExtensionIn not connected
    3. Note 792132 - EBAN, EBKN: user-defined fields are not filled
    Regards,
    Harish

  • Problem in MSEG table update

    Hello All,
    The problem relates to the MSEG table update after a Stock Transport Order (STO) between a manufacturing plant and a depot. The problem is described below with an example.
    In this STO the supplying plant (vendor) is SP02 and the receiving plant (customer) is RP15. In the STO, the system shows RP15 as the customer under the Shipping tab and SP02 as the vendor under the Delivery Address tab.
    During PGI, when the material document is generated and we check the MSEG table, two line items have been created: the field WERKS contains SP02 and RP15, against which the field XAUTO shows a blank space and an 'X' respectively. But the LIFNR and KUNNR fields both show blank spaces, which indicates that neither the vendor nor the customer field is being updated in the MSEG table.
    Now I want the system to also pick up the data for the KUNNR field (where XAUTO is 'X') in the MSEG table, i.e. RP15 should also be displayed in the MSEG table.
    Is there any configuration to meet this requirement?
    Looking forward to some valuable suggestions.
    Thanks & Regards
    Priyanka Mitra

    The ADRx tables are central tables that are used by various transactions in SAP,
    e.g. by customizing, master data maintenance, and transactional data such as purchase orders and sales orders.
    Use the ADRNR from the ADR6 table and then look up the entry in table ADRC; its field ADDR_GROUP gives a hint to the origin.
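
    Expressed as a query, the suggested lookup is roughly the following sketch (the bind :adrnr stands for the ADRNR value taken from ADR6):

      -- Sketch: read ADRC for the address number found in ADR6;
      -- ADDR_GROUP hints at the origin of the address record.
      SELECT addrnumber, addr_group
      FROM   adrc
      WHERE  addrnumber = :adrnr;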

  • 10g: parallel pipelined table func. using table(cast(SQL collect.))?

    Hi,
    I am trying to distribute SQL data objects - stored in a SQL collection type (TABLE OF <object type>) - to multiple (parallel) instances of a table function,
    by passing a CURSOR(...) to the table function, which selects from the collection via "select * from TABLE(CAST(<storage> as <storage-type>))".
    But Oracle always uses only a single table function instance,
    whatever hints I provide or settings I use for the parallel table function (parallel_enable ...).
    Could it be that this is because my data is not globally available, but exists only in the main session's memory?
    Can someone confirm that it is not possible to start multiple parallel table function instances
    selecting from a SQL collection (TABLE OF <object>) storage?
    Here's an example SQL*Plus program to show the issue:
    -------------------- snip ---------------------------------------------
    set serveroutput on;
    drop table test_table;
    drop type ton_t;
    drop type test_list;
    drop type test_obj;
    create table test_table (
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    create or replace type test_obj as object (
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    /
    create or replace type test_list as table of test_obj;
    /
    create or replace type ton_t as table of number;
    /
    create or replace package test_pkg
    as
         type test_rec is record (
              a number(19,0),
              b timestamp with time zone,
              c varchar2(256)
         );
         type test_tab is table of test_rec;
         type test_cur is ref cursor return test_rec;
         function TF(mycur test_cur)
              return test_list pipelined
              parallel_enable(partition mycur by hash(a));
    end;
    /
    create or replace package body test_pkg
    as
         function TF(mycur test_cur)
              return test_list pipelined
              parallel_enable(partition mycur by hash(a))
         is
              sid     number;
              counter number(19,0) := 0;
              myrec   test_rec;
              mytab   test_tab;
              mytab2  test_list := test_list();
         begin
              select userenv('SID') into sid from dual;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
              loop
                   fetch mycur into myrec;
                   exit when mycur%NOTFOUND;
                   mytab2.extend;
                   mytab2(mytab2.last) := test_obj(myrec.a, myrec.b, myrec.c);
              end loop;
              for i in mytab2.first..mytab2.last loop
                   -- attention: saves own SID in test_obj.a to indicate to
                   -- the caller how many sids have been involved
                   pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c));
                   counter := counter + 1;
              end loop;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
              return;
         end;
    end;
    /
    declare
         myList  test_list := test_list();
         myList2 test_list := test_list();
         sids    ton_t := ton_t();
    begin
         for i in 1..10000 loop
              myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
         end loop;
         -- save into the real table
         insert into test_table select * from table(cast (myList as test_list));
         dbms_output.put_line(chr(10) || 'copy ''mylist'' to ''mylist2'' by streaming via table function...');
         select test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from table(cast (myList as test_list)) tab)));
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line('worker thread''s sid list:');
         for i in sids.first..sids.last loop
              dbms_output.put_line('sid #' || sids(i));
         end loop;
         dbms_output.put_line(chr(10) || 'copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
         select test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from test_table tab)));
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line('worker thread''s sid list:');
         for i in sids.first..sids.last loop
              dbms_output.put_line('sid #' || sids(i));
         end loop;
    end;
    /
    -------------------- snap ---------------------------------------------
    Here's the output:
    -------------------- snip ---------------------------------------------
    copy 'mylist' to 'mylist2' by streaming via table function...
    test_pkg.TF( sid => '98' ): enter
    test_pkg.TF( sid => '98' ): exit, piped #10000 records
    ... saved #10000 records
    worker thread's sid list:
    sid #98 -- ONLY A SINGLE SID HERE!
    copy physical 'test_table' to 'mylist2' by streaming via table function:
    ... saved #10000 records
    worker thread's sid list:
    sid #128 -- A LIST OF SIDS HERE!
    sid #141
    sid #85
    sid #125
    sid #254
    sid #101
    sid #124
    sid #109
    sid #142
    sid #92
    PL/SQL procedure successfully completed.
    -------------------- snap ---------------------------------------------
    I posted it to the newsgroup comp.databases.oracle.server
    (summary: "10g: parallel pipelined table functions with cursor selecting from table(cast(SQL collection)) doesn't work"),
    but I didn't get a response.
    There I also wrote some background information about my application:
    -------------------- snip ---------------------------------------------
    My application does its data selection in two steps/stages.
    A first select fetches minimal context base data - mainly to evaluate which driving data records are due.
    A second select fetches all the "real" data needed to process a context
    (joining many more tables here, which I don't want to do for non-due records).
    So it runs the stage-1 select first, then the stage-2 select - based on the stage-1 results - next.
    The first implementation of the application did the stage-1 select in the main session of the PL/SQL code.
    For the stage-2 select, the "real work" was dispatched to multiple parallel table functions (in multiple worker sessions).
    That worked.
    However, there was a flaw:
    between records from the stage-1 selection and records from the stage-2 selection there is a 1:n relation (via a key / foreign-key relation).
    That means, for 1 resulting record from the stage-1 selection, there are x records from the stage-2 selection.
    That forced me to use "cluster curStage2 by (theKey)",
    because the worker sessions need to evaluate the overall status for a context of 1 record from stage 1 and x records from stage 2
    (so each needs to have the x records of stage 2 together).
    This resulted in a delay in starting up the worker sessions (I didn't find a way to get rid of it).
    So I wanted to shift the invocation of the worker sessions to the stage-1 selection;
    then I wouldn't need the "cluster curStage2 by (theKey)" anymore!
    But I also need to update the primary driving data,
    so the stage-1 select is a 'select ... for update ...'.
    And you can't use such a statement in a CURSOR for table functions (which I can understand).
    So I have to do my stage-1 selection in two steps:
    1. 'select for update' in the main session and collect the result in a SQL collection.
    2. pass the collected data to the parallel table functions.
    And for step 2 I found that it doesn't start multiple parallel table function instances.
    As a workaround
    - if it's just not possible to start multiple parallel pipelined table functions dispatching from 'select * from TABLE(CAST(... as ...))' -
    I would need to select again on the base tables, driven by the SQL collection data.
    But before I do so, I wanted to verify that it's really not possible.
    Maybe I'm just missing a special Oracle hint or whatever else you can get "out of another box" :-)
    -------------------- snap ---------------------------------------------
    - many thanks!
    rgds,
    Frank
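
    An editorial note on the behavior shown above: a PL/SQL collection lives in the private memory (PGA) of the session that built it, so parallel query slaves, which are separate sessions, cannot scan it; a query over TABLE(CAST(collection AS ...)) therefore runs serially, which matches the single-SID output. The workaround the poster describes, selecting again from the base table driven by the collected keys, can parallelize because the slaves scan a real table. A minimal sketch reusing the test objects above, with a hypothetical stage-1 key list:

      declare
           due_keys ton_t := ton_t(1, 2, 3);   -- hypothetical stage-1 result
           result   test_list;
      begin
           -- The base-table scan can run in parallel; the collection is
           -- used only as a small filter list.
           select test_obj(a, b, c) bulk collect into result
           from table(test_pkg.TF(CURSOR(
                select /*+ parallel(t,10) */ t.*
                from   test_table t
                where  t.a in (select column_value from table(due_keys)))));
           dbms_output.put_line('piped #' || result.count || ' records');
      end;
      /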


  • How to use Oracle Hints in OBIEE

    Hello guys,
    I have a report that runs forever, and I have reviewed the explain plan over and over, which makes me think that the join between these 2 tables is slowing everything down --- the fact table has 10 billion records and the dimension has 10 million records. Even though the report has filters on various columns of both tables, it still takes forever. The explain plan shows a nested-loop join of the 2 tables, which takes a lot of time.
    I am thinking of using a hint to force the query to scan the dimension table first (with the filter condition I defined, it will return only 80 rows from the dimension table), and then use that result set to look for matching records in the fact table. That should help speed up the query.
    I'd like to know what kind of hints can help with that kind of operation, and also how to define such hints in OBIEE. I know that in the Physical Layer I can include hints in the physical table properties, but how do I determine on which table (fact or dim) to place the join hints?
    I need advice from anybody who has experience working with hints.
    Much appreciated!

    Just an update:
    I have tried a couple of join-operation hints like USE_NL and USE_HASH; they change the join method but didn't help with performance at all.
    The original SQL that OBIEE generates is the following ---------- it takes about 20 mins to return a dozen rows:
    select sum(T991020.LP_AMOUNT) as c1,
           T991021.DAYOFWEEK as c2,
           T991021.PU_DATES as c3
    from RD_Grant.PU_DATES T991021,     -- dimension ----- huge
         PU_Grant.PU_FACTS_9 T991020    -- fact ----- gigantic
    where (T991020.COMPANY = T991021.COMPANY
       and T991020.DATES = T991021.DATES
       and T991020.DATES between '10/10/2009' and '31/10/2009'
       and T991021.DAYOFWEEK = 'Sunday'
       and T991021.DATES between '09/13/2009' and '11/03/2009')
    group by T991021.PU_DATES, T991021.DAYOFWEEK
    Basically I am still thinking of finding a way to force the query to scan the dimension table first (with the filter condition I defined, it will return only 80 rows from the dimension table), and then use that result set to look for matching records in the fact table. That should help speed up the query.
    Any suggestions?
    Thanks
    Edited by: user7276913 on Nov 5, 2009 7:07 PM
    Edited by: user7276913 on Nov 5, 2009 7:22 PM
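
    For what it's worth, the hint pair that expresses "scan the filtered dimension first, then probe the fact" is LEADING (join order) combined with USE_HASH (join method); a join-method hint alone, as tried above, still leaves the join order to the optimizer. In OBIEE these would go into the hint field of the physical table properties that the poster mentions, and they must reference the aliases in the generated SQL. A hedged sketch of the intended shape, based on the generated SQL above:

      -- Illustration only, using the aliases from the generated SQL.
      -- LEADING fixes the join order so the ~80 filtered dimension rows
      -- are read first and build the hash table; USE_HASH then probes
      -- the fact table once instead of doing billions of nested-loop probes.
      select /*+ LEADING(T991021) USE_HASH(T991020) */
             sum(T991020.LP_AMOUNT) as c1,
             T991021.DAYOFWEEK as c2,
             T991021.PU_DATES as c3
      from   RD_Grant.PU_DATES T991021, PU_Grant.PU_FACTS_9 T991020
      where  T991020.COMPANY = T991021.COMPANY
      and    T991020.DATES = T991021.DATES
      and    T991021.DAYOFWEEK = 'Sunday'
      group by T991021.PU_DATES, T991021.DAYOFWEEK;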

  • Tablespace, table logging and nologging

    Hi, I have a tablespace which is in LOGGING mode.
    I have a table in that tablespace which I want to place in NOLOGGING mode.
    I do that using the following command:
    ALTER TABLE SCOTT.XYZ NOLOGGING;
    I want to know whether the above statement will work even though my tablespace is in LOGGING mode, or should I place my tablespace in NOLOGGING mode as well?
    thanks in advance

    The tablespace and table logging modes (specified during CREATE TABLESPACE and CREATE TABLE) DO NOT affect the logging of inserts, deletes and updates; the tablespace-level setting merely provides the default logging attribute for objects created in it.
    Inserts, deletes and updates will be logged regardless, with the exception of direct-path operations such as "insert /*+ append */ ...".
    The following sentence from the blog should be re-worded because it is ambiguous:
    "Actually the real meaning of NOLOGGING is that whatever operations are performed on the object with the NOLOGGING option will NOT be recorded in the logfiles."
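
    To illustrate the answer: only direct-path DML honors the table's NOLOGGING attribute; conventional inserts, updates and deletes are fully logged either way. A minimal sketch, using SCOTT.XYZ from the question plus a hypothetical source table and column:

      ALTER TABLE scott.xyz NOLOGGING;

      -- Direct-path insert: minimal redo, because the table is NOLOGGING
      -- (assuming the database is not in FORCE LOGGING mode).
      INSERT /*+ APPEND */ INTO scott.xyz
      SELECT * FROM scott.xyz_source;    -- xyz_source is hypothetical
      COMMIT;

      -- Conventional DML: fully logged despite the NOLOGGING attribute.
      UPDATE scott.xyz SET col1 = col1;  -- col1 is a hypothetical column
      COMMIT;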

  • Query regarding HRP table update in EHP4

    Hello Experts,
    Recently we upgraded our system to EHP4 and performed the data migration from E-Recruiting to Career Profile (e.g. data migration from infotype HRP9301 to HRP7402).
    The infotype HRP7402 has a field called TABNR at the end, and an include is provided in which we have added 2 more essential fields.
    But when I update HRP7402 (the internal table has the correct data), this TABNR (length 32) is filled in the infotype automatically, which is fine, but it contains leading spaces, because of which the last 5 characters are shifted into the other 2 fields (the fields added in the provided include) that follow it.
    The same TABNR is populated perfectly in other infotypes where it is the last field and no additional enhancements have been made to the standard include.
    Please let me know how I can resolve this problem of the leading spaces that are automatically being added. I am also not clear on where this TABNR is filled in the FM. The following is the code snippet:
        CALL FUNCTION 'RH_INSERT_INFTY'
          EXPORTING
            fcode      = 'INSE'
            vtask      = 'D'
            order_flg  = 'X'
            commit_flg = 'X'
            authy      = 'X'
            repid      = sy-repid
            form       = 'FILL_DATA'
          TABLES
            innnn      = it_p7402.

    FORM fill_data TABLES ct_tabinfty_tab
      USING ct_infty TYPE any
            ct_tabix TYPE any.
      CLEAR ct_tabinfty_tab. REFRESH ct_tabinfty_tab.
      ct_tabinfty_tab = ' '.
      APPEND ct_tabinfty_tab.
    ENDFORM.
    Thanks
    Raj Kishore

    Sorry to intervene here, Dhina, but that is only halfway correct.
    If the delivery class is NOT 'C', then it is NOT a customizing table, and therefore you won't get any transport request when updating or deleting data from it.
    But if it is a customizing table, then you have to look at the recording routine. You can see this by viewing your table in SE11 and then using the menu:
    Utilities -> Table Maintenance Generator.
    Then look at the info in the bottom frame. If "Standard recording routine" is selected, you get a transport request.
    If the other option is selected, you won't get one, which in turn means you can maintain this table directly in the system where you need those entries.
    Why is that?
    Well, assume there is a customizing table for e.g. materials, depending on the material number. If that table had a standard recording routine, you would need ALL your materials in your development system, which would be quite unusual.
    Edited by: Florian Kemmer on Jun 9, 2011 2:00 PM
    some typos corrected, left some for sure
