Step-by-step procedure to create a table popin on lead select.

Hello,
I want to create a table popin on row lead select.
Can anyone help me with the step-by-step procedure to be done on the view and the coding that has to be done?
Example: when I lead-select a row in the table for particular flight details,
              I should get the planetype in the popup.
Thanks in Advance.
Ajay

Hi,
For this you need to do the following:
1. Add one popin to your table (say "pop1").
2. Add one column to your table (preferably it should be the first one in the table).
3. Add a cell variant to this column.
That's it.
If you have more than one popin in your table (as a row or cell popin), bind a context attribute to the selectedPopin property of your table and pass the popin id to it.
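For the coding part of the question, here is a minimal Web Dynpro ABAP sketch of an onLeadSelect action handler; the node name FLIGHTS, the attribute POPIN_NAME and the popin id POP1 are only assumed names for illustration, not names from this thread.
METHOD onactionlead_select .
* Sketch only. Assumptions: a context node FLIGHTS feeds the table, a string
* attribute POPIN_NAME of that node is bound to the Table's selectedPopin
* property, and the TablePopin has the id POP1.
  DATA lo_nd_flights TYPE REF TO if_wd_context_node.
  DATA lo_el_flights TYPE REF TO if_wd_context_element.

* Navigate to the table's data source node
  lo_nd_flights = wd_context->get_child_node( name = 'FLIGHTS' ).

* Called without an index, GET_ELEMENT returns the lead-selected element
  lo_el_flights = lo_nd_flights->get_element( ).

* Open the popin for this row by passing the popin id; the popin itself can
* display further attributes of the same element, e.g. the plane type
  lo_el_flights->set_attribute( name  = 'POPIN_NAME'
                                value = 'POP1' ).
ENDMETHOD.
The same set_attribute call can be placed in any other action handler if the popin should open on a different event.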
thanks & regards,
Manoj

Similar Messages

  • How to create Table Popins inside a table using Java Webdynpro

    Hi All,
    I am working on a Java Web Dynpro project and one of our requirements is to create table popins inside the Table. I need to pop in data beneath a row based on the value selected from a DropDownBox, which is one of the columns in the Table.
    Also, I need to display a vertical scrollbar instead of the Table footer to scroll through the table rows. Any ideas?
    I would appreciate it if you could let me know how I can do this. I am working on version 7.0 of Java Web Dynpro.
    Thanks for your time and consideration!
    Regards,
    Madhavi

    Hi,
    Please refer to the following link:
    Re: Table popins
    Step-by-step procedure to create a table popin on lead select.
    thanks & regards,
    Manoj
    Edited by: Manoj Kumar on Apr 1, 2008 10:17 AM

  • Create Table... as Select * from.....  Vs Insert into table.....Select * from .....

    Hi All,
    I have a query as below:
    There is a master table named: MSTR_TABLE  (It's having partitions on a month column [YYYYMM] )
    I am having 4 temp Tables As: Temp1, Temp2, temp3, Temp4 (These tables are not having any index, no partitions)
    Scenario 1: I am trying to insert data into MSTR_TABLE from the Temp1, Temp2, Temp3, Temp4 tables (selecting data from the temp tables by outer join), which is taking a lot of time. The number of records in the temp tables: Temp1: 5 Million Records,
                           Temp2: 3 Million Records,
                           Temp3: 0.5 Million records,
                           Temp4: 5000 Records
    Scenario 2: Creating a new Table as TEMP_MASTER using Temp1, Temp2, temp3, Temp4 tables. This is executing much faster and the table is getting created in less than 10 mins.
    Scenario 3: Now I am inserting data from TEMP_MASTER into MSTR_TABLE. This is also getting executed in less than 5 mins.
    I tried following methods:
    1) Creating a table with partitions and tried Inserting Data - No Luck
    2) Disabled Logging on table and tried Inserting data - No Luck
    3) used /*+ append nologging*/ and tried inserting data - No Luck
    4) Altered Next Extent to 8MB and tried - No Luck
    Please let me know why inserting into the main table directly from the query (Scenario 1) is not working well and runs for a long time (more than 2 hrs).
    Thanks in Advance,
    Venu Gopal K

    Hi Sudhakar,
    Thanks for your reply. Please find the detailed description of the situation below:
    MSTR_TABLE: Existing Number of columns : 120
                            Indexes: NO
                            Partitions: On Month Column (Datatype: Number, format (YYYYMM, Ex: 201205))
                            Inserting Data for every month
    I added 18 new columns to MSTR_TABLE and the requirement is to populate data for these 18 columns. I fetch the data for these 18 columns from multiple tables and store it in temp tables as below; I then join PRNT_TABLE, TEMP1, TEMP2, TEMP3, TEMP4, fetch the 120+18 columns and store them in MSTR_TABLE.
    PRNT_TABLE:  Existing Number of columns : 120
                            Indexes: NO
                            Partitions: On Month Column (Datatype: Number, format (YYYYMM, Ex: 201205))
                            Data : more than 100 Million records
    TEMP1: No Index, No Partitions (No Of Rows: 5 million)
    TEMP2: No Index, No Partitions  (No Of Rows: 4 million)
    TEMP3: No Index, No Partitions  (No Of Rows: 30000)
    TEMP4: No Index, No Partitions  (No Of Rows: 5000)
    Queries are as follows:
    Scenario 1:
    Insert into MSTR_TABLE
    (col1,
    col2,
    col3,
    col119,
    col120,
    New_col121,
    New_col122,
    New_col138)
    select col1, col2, col3 ...............col119, col120, New_col121, New_col122.................New_col138
    from PRNT_TABLE, TEMP1,TEMP2,TEMP3,TEMP4
    where PRNT_TABLE.Col1 = TEMP1.Col1(+)
      AND PRNT_TABLE.Col1 = TEMP2.Col1(+)
      AND PRNT_TABLE.Col1 = TEMP3.Col1(+)
      AND PRNT_TABLE.Col1 = TEMP4.Col1(+)
      AND PRNT_TABLE.Month = 201102
    Execution Plan Statistics: INSERT STATEMENT  ALL_ROWS  Cost: 19,451,465  Bytes: 2,472,327,442  Cardinality: 5,593,501
    Scenario 2:
    Create table NEW_TABLE
    as
    select col1, col2, col3 ...............col119, col120, New_col121, New_col122.................New_col138
    from PRNT_TABLE, TEMP1,TEMP2,TEMP3,TEMP4
    where PRNT_TABLE.Col1 = TEMP1.Col1(+)
      AND PRNT_TABLE.Col1 = TEMP2.Col1(+)
      AND PRNT_TABLE.Col1 = TEMP3.Col1(+)
      AND PRNT_TABLE.Col1 = TEMP4.Col1(+)
      AND PRNT_TABLE.Month = 201102
    Execution Plan Statistics: CREATE TABLE STATEMENT  ALL_ROWS  Cost: 19,500,277  Bytes: 2,472,327,442  Cardinality: 5,593,501
    Thanks,
    Venu Gopal K.

  • How to delete the record in the table without using lead selection?

    hi,
    I have added a separate column "delete" to the Table UI element, so for each record or row of the table there is a corresponding "delete" LinkToAction. The code below works only when the particular row is selected through lead selection.
    Help me delete the row without using lead selection.
      DATA:
        NODE_MODULE TYPE REF TO IF_WD_CONTEXT_NODE,
        ELEM_MODULE TYPE REF TO IF_WD_CONTEXT_ELEMENT,
        STRU_MODULE TYPE IF_V_MODULE=>ELEMENT_MODULE.
      DATA itab TYPE TABLE OF zac_modules.
    * navigate from <CONTEXT> to <MODULE> via lead selection
      NODE_MODULE = WD_CONTEXT->GET_CHILD_NODE( NAME = `MODULE` ).
    * get element via lead selection
      ELEM_MODULE = NODE_MODULE->GET_ELEMENT( ).
    * get all declared attributes
      ELEM_MODULE->GET_STATIC_ATTRIBUTES(
        IMPORTING
          STATIC_ATTRIBUTES = STRU_MODULE ).
      NODE_MODULE->GET_STATIC_ATTRIBUTES_TABLE(
        IMPORTING
          TABLE = itab ).
      DELETE itab WHERE zmodule_id = STRU_MODULE-zmodule_id.
      CALL METHOD NODE_MODULE->BIND_TABLE
        EXPORTING
          NEW_ITEMS            = itab
          SET_INITIAL_ELEMENTS = ABAP_TRUE.
    *     INDEX                =
    ENDMETHOD.

    Hi,
    The onClick event provides you with a standard parameter CONTEXT_ELEMENT which holds the element from which the event was triggered,
    so you can declare it in the handler (if it is not already there) and use it as follows.
    CONTEXT_ELEMENT TYPE REF TO IF_WD_CONTEXT_ELEMENT  "importing parameter of the action handler
    DATA:
      NODE_MODULE TYPE REF TO IF_WD_CONTEXT_NODE,
      ELEM_MODULE TYPE REF TO IF_WD_CONTEXT_ELEMENT,
      STRU_MODULE TYPE IF_V_MODULE=>ELEMENT_MODULE.
    DATA itab TYPE TABLE OF zac_modules.
    NODE_MODULE = WD_CONTEXT->GET_CHILD_NODE( NAME = `MODULE` ). "navigate to the node, as in your own code
    CONTEXT_ELEMENT->GET_STATIC_ATTRIBUTES(
      IMPORTING
        STATIC_ATTRIBUTES = STRU_MODULE ). "use the CONTEXT_ELEMENT parameter to get the static attributes of the clicked row
    NODE_MODULE->GET_STATIC_ATTRIBUTES_TABLE(
      IMPORTING
        TABLE = itab ). "get all the data
    DELETE itab WHERE zmodule_id = STRU_MODULE-zmodule_id. "delete the particular row from the table and bind it back
    CALL METHOD NODE_MODULE->BIND_TABLE
      EXPORTING
        NEW_ITEMS            = itab.
    *   SET_INITIAL_ELEMENTS = ABAP_TRUE
    *   INDEX                =
    thanks,
    Aditya.

  • Index of the Table control - No lead selection activated

    Hi guys,
    I have a table control where there is no lead selection. One column of the table contains a push button on every row, and I need to find the row index based on which row's button was clicked. How can I do this?
    regards,
    Prabhu

    Hi Prabhu,
    Write the below piece of code in the button action.
    DATA lo_nd_node_table TYPE REF TO if_wd_context_node.
      DATA lo_el_node_table TYPE REF TO if_wd_context_element.
      DATA ls_node_table TYPE wd_this->Element_node_table.
    *   navigate from <CONTEXT> to <NODE_TABLE> via lead selection
      lo_nd_node_table = wd_context->get_child_node( name = wd_this->wdctx_node_table ).
      CALL METHOD wdevent->get_context_element
        EXPORTING
          name  = 'CONTEXT_ELEMENT'
        RECEIVING
          value = lo_el_node_table.  " getting the clicked line
      CALL METHOD lo_nd_node_table->SET_LEAD_SELECTION
        EXPORTING
          ELEMENT = lo_el_node_table.  " Setting the lead
      data lv_index type i.
      CALL METHOD lo_nd_node_table->GET_LEAD_SELECTION_INDEX
        RECEIVING
          INDEX = lv_index. " getting the index of the lead line
    Hope it will be helpful.
    Regards
    Chinnaiya P
    Edited by: chinnaiya pandiyan on Jun 28, 2010 12:04 PM

  • Getting the selected index of a table (CL_WD_TABLE) without lead-selection

    Hi
    When I click an F4 help in a table UI element (not an ALV table), I can search with an SE11 search help on a field. After I choose something, I should also fill some other fields in the same row.
    In the table I have no lead selection, so I am searching for a method that gets me the selected index. The element class for the table UI is CL_WD_TABLE.
    Once I have the selected index, I could change the fields in the same row inside WDDOMODIFYVIEW.
    With node selection activated it can happen that the user selects one row but starts the F4 help on another row (the lead selection doesn't change).
    I have seen this article, but I hope that for the table UI it is different than with ALV:
    [ON_CELL_ACTION triggered only by ENTER ?]

    Hi Lorenzo,
    I'm trying to do the same thing and came up with this solution. In WDDOMODIFYVIEW, since we cannot know in which table row the user clicked the search help, I simply loop through all the rows of the table. For each row, I delegate to a method in the Component Controller which sets the specified index as the table's lead selection then reads the selected table row from the context. I can then populate fields in the selected row as needed.
    METHOD wddomodifyview .
       DATA lo_componentcontroller TYPE REF TO ig_componentcontroller.
       DATA lo_nd_items TYPE REF TO if_wd_context_node.
       DATA lt_items TYPE wd_this->elements_items.
    * -- Upon return from the search help, populate table row details
       lo_componentcontroller = wd_this->get_componentcontroller_ctr( ).
       lo_nd_items = wd_context->get_child_node( name = wd_this->wdctx_items ).
       lo_nd_items->get_static_attributes_table( IMPORTING table = lt_items ).
    * Since we cannot know in which table row the user requested the search help,
    * loop through all table rows, populating table row details as needed.
       DESCRIBE TABLE lt_items LINES sy-tfill.
       DO sy-tfill TIMES.
    *   Delegate to the Component Controller
         lo_componentcontroller->get_table_row_detail( sy-index ).
       ENDDO.
    ENDMETHOD.
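    To make the delegation concrete, here is a hedged sketch of what such a get_table_row_detail method could look like in the component controller; the parameter name IV_INDEX and the attribute names MATNR and DESCRIPTION are assumptions for illustration only.
    METHOD get_table_row_detail .
    * Sketch only. Assumes the ITEMS node also exists in the component
    * controller context and that the method has one importing parameter
    * IV_INDEX TYPE I (the row number passed in from WDDOMODIFYVIEW).
      DATA lo_nd_items TYPE REF TO if_wd_context_node.
      DATA lo_el_items TYPE REF TO if_wd_context_element.
      DATA lv_code     TYPE string.
      lo_nd_items = wd_context->get_child_node( name = wd_this->wdctx_items ).
    * Set the given index as the table's lead selection ...
      lo_nd_items->set_lead_selection_index( index = iv_index ).
    * ... and read the selected row element back from the context
      lo_el_items = lo_nd_items->get_lead_selection( ).
    * Read the value the user chose via the search help (assumed attribute)
      lo_el_items->get_attribute( EXPORTING name  = 'MATNR'
                                  IMPORTING value = lv_code ).
    * Populate the dependent fields of the same row as needed (assumed attribute)
      IF lv_code IS NOT INITIAL.
        lo_el_items->set_attribute( name = 'DESCRIPTION' value = lv_code ).
      ENDIF.
    ENDMETHOD.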

  • Do I need to install Oracle database 12c on PC to create tables for Weblogic server

    Hi,
    I am new to WebLogic and I am following the book Java EE Development with Eclipse, published by PACKT Publishing, to learn Java EE. I have installed Oracle Enterprise Pack for Eclipse on the PC and I am able to log into the WebLogic Server Administration Console and set up a Data Source. However, the next step is to create tables for the database. The book says that the tables can be created using an SQL script run from the SQL command line.
    I cannot see any way of inputting an SQL script into the WebLogic Server Administration Console. Also, there is no SQL command line in DOS.
    I put a previous question on the Oracle Forum about the above and I was advised to use SQL*Plus to create the tables for the WebLogic server, but no mention was made of installing Oracle Database.
    I have downloaded Instant Client SQL*Plus and Instant Client Basic on a PC and put the files into the same directory C:\SQLPlus, but now I am having a problem in that it asks for a username and password when I run the sqlplus.exe file.
    Now I assume that I need to have an installation of Oracle Database on the PC to be able to use SQL*Plus. Am I correct in this, or can I create tables for the data source in WebLogic without an Oracle database?
    Thanks in advance for your help. Brian

    You should install it separately, and you also need to change OBIEE's user.sh to tell it where your Oracle client is installed.

  • TABLE POPINS

    hi all,
    I am trying to create a table popin in Web Dynpro, but it doesn't show in the output. Could anyone tell me the possible reason?
    I have added a table popin to the table (with PI) and another table column with the cell variant TablePopinToggleCell with a variant key.
    Thanks
    Ritika

    Check this link:
    https://www.sdn.sap.com/irj/scn/wiki?path=%3fpath=/display/wdjava/table%252bcell%252bpopin%252bin%252bce%252b7.1%252bwith%252ban%252bexample.
    Maybe this could help you.

  • Create table more than 1000 colum

    Hello
    I have 1500+ columns in my select query. How can I create the table? My query is
    create table xyz nologging as
    select a1,a2,a3,a4,a5,b1,b2,b3,b4,b5,................a1500
    from abcd a,bcde b
    where a.a1=b.b1
    In Oracle 10g I can create only 1000 columns in a table.
    Please suggest

    799301 wrote:
    All my columns are formulated from different tables and I want to create one intermediate table
    below query is just example
    select count(unique((case when sysdate >45 then video=Y and audio=N then 1 else 0 end))) vd_45,
    count(unique((case when sysdate >105 then video=Y and audio=N then 1 else 0 end))) vd_105,
    count(unique((case when sysdate >195 then video=Y and audio=N then 1 else 0 end))) vd_195,
    count(unique((case when sysdate >380 then video=Y and audio=N then 1 else 0 end))) vd_380,
    count(unique((case when sysdate >45 then video=N and audio=Y then 1 else 0 end))) ad_45,
    count(unique((case when sysdate >105 then video=N and audio=Y then 1 else 0 end))) ad_105,
    count(unique((case when sysdate >195 then video=N and audio=Y then 1 else 0 end))) ad_195,
    count(unique((case when sysdate >380 then video=N and audio=Y then 1 else 0 end))) ad_380
    from abcd a,bcde b
    where a.a1=b.b1;
    please suggest how to create more than 1000 table columns
    By creating a proper relation between the "formulated function" and the stored value.
    Simplistically put:
    Do you model invoices as entities INVOICES_2010, INVOICES_2011 and so on?
    Or do you model it as INVOICES that has a year (date) attribute?
    It does not make sense to put attribute data into the name of an entity. Nor does it make sense to put relationship data into the name of an attribute and create 1000+ attributes as a result.
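    As a hedged sketch of that modelling point (all names below are invented for illustration): store the media type and the period as attributes of a row instead of coding them into column names, and the former vd_45 / ad_105 style figures become ordinary filtered aggregates.
    -- Hypothetical normalized design: one row per (account, media type, period)
    CREATE TABLE usage_counts (
      account_id   NUMBER       NOT NULL,
      media_type   VARCHAR2(10) NOT NULL,  -- e.g. 'VIDEO' or 'AUDIO'
      period_days  NUMBER       NOT NULL,  -- e.g. 45, 105, 195, 380
      cnt          NUMBER       NOT NULL
    );
    -- What used to be the column vd_45 is now a plain filtered aggregate
    SELECT SUM(cnt)
    FROM   usage_counts
    WHERE  media_type  = 'VIDEO'
    AND    period_days = 45;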

  • Create Table Temp As Select * from Table

    What happens in the backend (performance point of view) when we create a table as follows:
    Create Table Temp As Select * from Table1;
    Suppose that the table Table1 has 10 million rows.
    How is this different from inserting 10 million rows using Insert Statement in a script etc. into Temp table.
    i.e. insert into temp values(1, 'description', sysdate);

    >
    Create Table Temp As Select * from Table1; is always faster than inserting different rows using insert scripts.
    >
    Incorrect - not sure where you got that information.
    The two things being talked about that can affect performance are the use of direct-path loads and whether those loads are logged.
    It makes no difference if the table is created first and then loaded using INSERT or if a CTAS is used to do both.
    A traditional INSERT can use the APPEND hint to perform a direct-path load.
    >
    Create table Temp As select * from Table1 does not generate logging to the database. Also, as datatypes are determined automatically, there is less work up front.
    >
    The statement that CTAS 'does not generate logging' is incorrect. Whether logging is generated depends on whether the table is in LOGGING or NOLOGGING mode. A CTAS does not, on its own, automatically bypass logging.
    To bypass logging, an existing table can be set to NOLOGGING mode:
    ALTER TABLE myTable NOLOGGING
    And for CTAS the mode can be specified as part of the statement:
    Create table Temp NOLOGGING As select * from Table1;
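    Putting the reply together in one sketch (table names are placeholders): direct-path loading and logging are controlled independently of whether you use CTAS or a pre-created table.
    -- CTAS: direct-path load by default; NOLOGGING given in the statement itself
    Create table Temp NOLOGGING As select * from Table1;
    -- Pre-created table: put it in NOLOGGING mode, then use a direct-path INSERT
    ALTER TABLE Temp NOLOGGING;
    INSERT /*+ APPEND */ INTO Temp
    SELECT * FROM Table1;
    COMMIT;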

  • SQL Developer 1.5.1 - warning messages generated by CREATE TABLE

    Hi,
    Have an issue with a CREATE TABLE statement - it works correctly, but generates a warning message when used in SQL Developer (1.2 or 1.5.1). Full test case below:
    Setup:
    drop table samplenames;
    drop table customers;
    drop table phones;
    drop table customers_phone;
    drop sequence primkey;
    create table samplenames
    (name VARCHAR2(10));
    insert into samplenames values ('dan');
    insert into samplenames values ('joe');
    insert into samplenames values ('bob');
    insert into samplenames values ('sam');
    insert into samplenames values ('weslington');
    insert into samplenames values ('sue');
    insert into samplenames values ('ann');
    insert into samplenames values ('mary');
    insert into samplenames values ('pam');
    insert into samplenames values ('lucy');
    create sequence primkey
    start with 1000000
    increment by 1;
    create table customers as
    select primkey.nextval as cust_id,
    tmp1.name || tmp2.name as first_name,
    tmp3.name || tmp4.name || tmp5.name as last_name
    from samplenames tmp1,
    samplenames tmp2,
    samplenames tmp3,
    samplenames tmp4,
    samplenames tmp5;
    CREATE TABLE PHONES AS
    SELECT cust_id, 'H' as phn_loc, trunc(dbms_random.value(100,999)) as area_cde,
    trunc(dbms_random.value(1000000,9999999)) as phn_num
    FROM customers;
    INSERT INTO PHONES
    SELECT cust_id, 'B' as phn_loc, trunc(dbms_random.value(100,999)) as area_cde,
    trunc(dbms_random.value(1000000,9999999)) as phn_num
    FROM customers;
    --randomly delete ~10% of records to make sure nulls are handled correctly.
    delete from phones
    where MOD(area_cde + phn_num, 10) = 0;
    Create table statement (there are legacy reasons why this is written the way it is):
    CREATE TABLE customers_phone NOLOGGING AS
    SELECT cst.*,
    piv.HOME_PHONE,
    piv.WORK_PHONE
    FROM (SELECT cust_id,
    MAX(decode(phn_loc, 'H', '(' || area_cde || ') ' ||
    substr(phn_num,1,3) || '-' || substr(phn_num,4,4), NULL)) AS HOME_PHONE,
    MAX(decode(phn_loc, 'B', '(' || area_cde || ') ' ||
    substr(phn_num,1,3) || '-' || substr(phn_num,4,4), NULL)) AS WORK_PHONE
    FROM phones phn
    WHERE phn_loc IN ('H', 'B')
    AND cust_id IS NOT NULL
    AND EXISTS (SELECT NULL
    FROM customers
    WHERE cust_id = phn.cust_id)
    GROUP BY cust_id) piv,
    customers cst
    WHERE cst.cust_id = piv.cust_id (+)
    Warning message output:
    "Error starting at line 1 in command:
    CREATE TABLE customers_phone NOLOGGING AS
    SELECT cst.*,
    piv.HOME_PHONE,
    piv.WORK_PHONE
    FROM (SELECT cust_id,
    MAX(decode(phn_loc, 'H', '(' || area_cde || ') ' || substr(phn_num,1,3) || '-' || substr(phn_num,4,4), NULL)) AS HOME_PHONE,
    MAX(decode(phn_loc, 'B', '(' || area_cde || ') ' || substr(phn_num,1,3) || '-' || substr(phn_num,4,4), NULL)) AS WORK_PHONE
    FROM phones phn
    WHERE phn_loc IN ('H', 'B')
    AND cust_id IS NOT NULL
    AND EXISTS (SELECT NULL
    FROM customers
    WHERE cust_id = phn.cust_id)
    GROUP BY cust_id) piv,
    customers cst
    WHERE cst.cust_id = piv.cust_id (+)
    Error report:
    SQL Command: CREATE TABLE
    Failed: Warning: execution completed with warning"
    I am on 10.2.0.3. The CREATE TABLE always completes successfully, but the warning bugs me, and I have had no success tracking it down since there is no associated number.
    Anyone have any ideas?

    Hi ,
    The Oracle JDBC driver is returning this warning so I will be logging an issue with them, but for the moment SQL Developer will continue to report the warning as is.
    The reason for the warning is not clear or documented as far as I can tell,
    but I have replicated the issue with a simpler testcase which makes it easier to have a guess about the issue :)
    ----START
    DROP TABLE sourcetable ;
    CREATE TABLE sourcetable(col1 char);
    INSERT INTO sourcetable VALUES('M');
    DROP TABLE customers_phone;
    CREATE TABLE customers_phone AS
    SELECT MAX(decode(col1, 'm','OK' , NULL)) COLALIAS
    FROM sourcetable;
    ----END
    The warning occurs in the above script in SQL Developer , but not in SQL*Plus.
    The warning disappears when we change 'm' to 'M'.
    The warning disappears when we change NULL to 'OK2'
    In all cases the table creates successfully and the appropriate values inserted.
    My gut feeling is ...
    During the definition of customers_phone, Oracle has to work out what the COLALIAS datatype is.
    When it sees NULL as the only alternative (as sourcetable.col1 = 'M', not 'm') it throws up a warning. It then has to rely on the 'OK' value to define the COLALIAS datatype, even though the 'OK' value won't be inserted, as sourcetable.col1 = 'M' and not 'm'. So Oracle makes the correct decision to define COLALIAS as VARCHAR2(2), but the warning is just to say it had to use the alternative value to define the column.
    Why SQL*Plus does not report it and JDBC does, I'm not sure. Either way it doesn't look like a real issue.
    Again, this is just a guess and not a fact.
    Just thought an update was in order.
    Regards,
    Dermot.

  • CREATE TABLE AS - PERFORMANCE ISSUE

    Hi All,
    I am creating a table CONTROLDATA from existing tables PF_CONTROLDATA & ICDSV2_AutoCodeDetails as per the below query.
    CREATE TABLE CONTROLDATA AS
    SELECT CONTROLVALUEID, VALUEORDER, CONTEXTID, AUDITORDER, INVALIDATINGREVISIONNUMBER, CONTROLID, STRVALUE
    FROM PF_CONTROLDATA CD1
    JOIN ICDSV2_AutoCodeDetails AC ON (CD1.CONTROLID=AC.MODTERM_CONTROL OR CD1.CONTROLID=AC.FAILED_CTRL OR CD1.CONTROLID=AC.CODE_CTRL)
    AND CD1.AUDITORDER=(SELECT MAX(AUDITORDER) FROM PF_CONTROLDATA CD2 WHERE CD1.CONTEXTID=CD2.CONTEXTID);
    The above statement takes around 10 minutes to create the table CONTROLDATA, which is not acceptable in our environment. Can anyone please suggest a way to improve the performance of the above query so that the table CONTROLDATA is created in under a minute?
    PF_CONTROLDATA has 15,000,000 (15 million) rows and has a composite index (XIF16PF_CONTROLDATA) on the CONTEXTID, AUDITORDER columns and one more index (XIE1PF_CONTROLDATA) on the CONTROLID column.
    ICDSV2_AutoCodeDetails has only 6 rows and no indexes.
    After the create table statement, CONTROLDATA will have around 1,000,000 (1 million) records.
    Can some one give any suggestion to improve the performance of the above query?
    oracle version is : 10.2.0.3
    Tkprof output is:
    create table CONTROLDATA2 as
    SELECT CONTROLVALUEID, VALUEORDER, CONTEXTID, AUDITORDER, INVALIDATINGREVISIONNUMBER, CONTROLID, DATATYPE, NUMVALUE, FLOATVALUE, STRVALUE, PFDATETIME, MONTH, DAY, YEAR, HOUR, MINUTE, SECOND, UNITID, NORMALIZEDVALUE, NORMALIZEDUNITID, PARENTCONTROLVALUEID, PARENTVALUEORDER
    FROM PF_CONTROLDATA CD1
         JOIN ICDSV2_AutoCodeDetails AC ON (CD1.CONTROLID=AC.MODTERM_CONTROL OR CD1.CONTROLID=AC.FAILED_CTRL OR CD1.CONTROLID=AC.CODE_CTRL OR CD1.CONTROLID=AC.SYNONYM_CTRL)
         AND AUDITORDER=(SELECT MAX(AUDITORDER) FROM PF_CONTROLDATA CD2 WHERE CD1.CONTEXTID=CD2.CONTEXTID)
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.03          2          2          0           0
    Execute      1     15.25     593.43     211688    4990786       6617     1095856
    Fetch        0      0.00       0.00          0          0          0           0
    total        2     15.25     593.47     211690    4990788       6617     1095856
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 40 
    ********************************************************************************
    Explain plan output is:
    Plan hash value: 2771048406
    | Id  | Operation                           | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | CREATE TABLE STATEMENT              |                        |     1 |   105 |  3609K  (1)| 14:02:20 |
    |   1 |  LOAD AS SELECT                     | CONTROLDATA2           |       |       |            |          |
    |*  2 |   FILTER                            |                        |       |       |            |          |
    |   3 |    TABLE ACCESS BY INDEX ROWID      | PF_CONTROLDATA         |   178K|  9228K| 55344   (1)| 00:12:55 |
    |   4 |     NESTED LOOPS                    |                        |   891K|    89M| 55344   (1)| 00:12:55 |
    |   5 |      TABLE ACCESS FULL              | ICDSV2_AUTOCODEDETAILS |     5 |   260 |     4   (0)| 00:00:01 |
    |   6 |      BITMAP CONVERSION TO ROWIDS    |                        |       |       |            |          |
    |   7 |       BITMAP OR                     |                        |       |       |            |          |
    |   8 |        BITMAP CONVERSION FROM ROWIDS|                        |       |       |            |          |
    |*  9 |         INDEX RANGE SCAN            | XIE1PF_CONTROLDATA     |       |       |    48   (3)| 00:00:01 |
    |  10 |        BITMAP CONVERSION FROM ROWIDS|                        |       |       |            |          |
    |* 11 |         INDEX RANGE SCAN            | XIE1PF_CONTROLDATA     |       |       |    48   (3)| 00:00:01 |
    |  12 |        BITMAP CONVERSION FROM ROWIDS|                        |       |       |            |          |
    |* 13 |         INDEX RANGE SCAN            | XIE1PF_CONTROLDATA     |       |       |    48   (3)| 00:00:01 |
    |  14 |        BITMAP CONVERSION FROM ROWIDS|                        |       |       |            |          |
    |* 15 |         INDEX RANGE SCAN            | XIE1PF_CONTROLDATA     |       |       |    48   (3)| 00:00:01 |
    |  16 |    SORT AGGREGATE                   |                        |     1 |    16 |            |          |
    |  17 |     FIRST ROW                       |                        |     1 |    16 |     3   (0)| 00:00:01 |
    |* 18 |      INDEX RANGE SCAN (MIN/MAX)     | XIF16PF_CONTROLDATA    |     1 |    16 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("AUDITORDER"= (SELECT MAX("AUDITORDER") FROM "PF_CONTROLDATA" "CD2" WHERE
                  "CD2"."CONTEXTID"=:B1))
       9 - access("CD1"."CONTROLID"="AC"."MODTERM_CONTROL")
      11 - access("CD1"."CONTROLID"="AC"."FAILED_CTRL")
      13 - access("CD1"."CONTROLID"="AC"."CODE_CTRL")
      15 - access("CD1"."CONTROLID"="AC"."SYNONYM_CTRL")
      18 - access("CD2"."CONTEXTID"=:B1)
    Note
       - dynamic sampling used for this statement
    ********************************************************************************
    I tried to change the above logic even by using an insert statement and the APPEND hint, but it is still taking the same time.
    Please suggest.
    Edited by: 867546 on Jun 22, 2011 2:42 PM

    Hi user2361373,
    I tried using NOLOGGING also but it is still taking the same amount of time. Please find the tkprof output below.
    create table CONTROLDATA2 NOLOGGING as
    SELECT CONTROLVALUEID, VALUEORDER, CONTEXTID, AUDITORDER, INVALIDATINGREVISIONNUMBER, CONTROLID, DATATYPE, NUMVALUE, FLOATVALUE, STRVALUE, PFDATETIME, MONTH, DAY, YEAR, HOUR, MINUTE, SECOND, UNITID, NORMALIZEDVALUE, NORMALIZEDUNITID, PARENTCONTROLVALUEID, PARENTVALUEORDER
    FROM PF_CONTROLDATA CD1
         JOIN ICDSV2_AutoCodeDetails AC ON (CD1.CONTROLID=AC.MODTERM_CONTROL OR CD1.CONTROLID=AC.FAILED_CTRL OR CD1.CONTROLID=AC.CODE_CTRL OR CD1.CONTROLID=AC.SYNONYM_CTRL)
    AND AUDITORDER=(SELECT MAX(AUDITORDER) FROM PF_CONTROLDATA CD2 WHERE CD1.CONTEXTID=CD2.CONTEXTID)
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.03       0.03          2          2          0           0
    Execute      1     13.98     598.54     211963    4990776       6271     1095856
    Fetch        0      0.00       0.00          0          0          0           0
    total        2     14.01     598.57     211965    4990778       6271     1095856
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 40 
    ********************************************************************************
    Edited by: 867546 on Jun 22, 2011 3:09 PM
    Edited by: 867546 on Jun 22, 2011 3:10 PM

  • Create table as select (CTAS)statement is taking very long time.

    Hi All,
    One of my procedures runs a create table as select statement every month.
    Usually it finishes in 20 mins for 6,172,063 records and in 1 hour for 13,699,067.
    But this time it is taking forever, even for 38,076 records.
    When I checked, all it is doing is CPU usage; no I/O.
    I did a count(*) using the same query and it returned results fine,
    but the CTAS keeps going.
    I'm using Oracle 10.2.0.4 .
    The main table temp_ip has 38,076 records,
    table nhs_opcs_hier has 26,769 records,
    and table nhs_icd10_hier has 49,551 records.
    Query is as follows:
    create table analytic_hes.temp_ip_hier as
    select b.*, (select nvl(max(hierarchy), 0)
    from ref_hd.nhs_opcs_hier a
    where fiscal_year = b.hd_spell_fiscal_year
    and a.code in
    (primary_PROCEDURE, secondary_procedure_1, secondary_procedure_2,
    secondary_procedure_3, secondary_procedure_4, secondary_procedure_5,
    secondary_procedure_6, secondary_procedure_7, secondary_procedure_8,
    secondary_procedure_9, secondary_procedure_10,
    secondary_procedure_11, secondary_procedure_12)) as hd_procedure_hierarchy,
    (select nvl(max(hierarchy), 0) from ref_hd.nhs_icd10_hier a
    where fiscal_year = b.hd_spell_fiscal_year
    and a.code in
    (primary_diagnosis, secondary_diagnosis_1,
    secondary_diagnosis_2, secondary_diagnosis_3,
    secondary_diagnosis_4, secondary_diagnosis_5,
    secondary_diagnosis_6, secondary_diagnosis_7,
    secondary_diagnosis_8, secondary_diagnosis_9,
    secondary_diagnosis_10, secondary_diagnosis_11,
    secondary_diagnosis_12, secondary_diagnosis_13,
    secondary_diagnosis_14)) as hd_diagnosis_hierarchy
    from analytic_hes.temp_ip b
    Any help would be greatly appreciated

    Hello
    This is a bit of a wild card I think because it's going to require 14 full scans of the temp_ip table to unpivot the diagnosis and procedure codes, so it's likely this will run slower than the original. However, as this is a temporary table, I'm guessing you might have some control over its structure, or at least have the ability to sack it and try something else. If you are able to alter this table structure, you could make the query much simpler and most likely much quicker. I think you need to have a list of procedure codes for the fiscal year and a list of diagnosis codes for the fiscal year. I'm doing that through the big list of UNION ALL statements, but you may have a more efficient way to do it based on the core tables you're populating temp_ip from. Anyway, here it is (as far as I can tell this will do the same job):
    WITH codes AS
    (   SELECT
            bd.primary_key_column_s,
            hd_spell_fiscal_year,
            primary_PROCEDURE       procedure_code,
            primary_diagnosis       diagnosis_code
        FROM
            temp_ip
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            hd_spell_fiscal_year,
            secondary_procedure_1    procedure_code,
            secondary_diagnosis_1    diagnosis_code
        FROM
            temp_ip
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            hd_spell_fiscal_year,
            secondary_procedure_2    procedure_code ,
            secondary_diagnosis_2    diagnosis_code     
        FROM
            temp_ip
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            hd_spell_fiscal_year,
            secondary_procedure_3    procedure_code,
            secondary_diagnosis_3    diagnosis_code
        FROM
            temp_ip
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            hd_spell_fiscal_year,
            secondary_procedure_4    procedure_code,
            secondary_diagnosis_4    diagnosis_code
        FROM
            temp_ip
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            hd_spell_fiscal_year,
            secondary_procedure_5    procedure_code,
            secondary_diagnosis_5    diagnosis_code
        FROM
            temp_ip
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            hd_spell_fiscal_year,
            secondary_procedure_6    procedure_code,
            secondary_diagnosis_6    diagnosis_code
        FROM
            temp_ip
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            hd_spell_fiscal_year,
            secondary_procedure_7    procedure_code,
            secondary_diagnosis_7    diagnosis_code
        FROM
            temp_ip
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            hd_spell_fiscal_year,
            secondary_procedure_8    procedure_code,
            secondary_diagnosis_8    diagnosis_code
        FROM
            temp_ip
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            hd_spell_fiscal_year,
            secondary_procedure_9    procedure_code,
            secondary_diagnosis_9    diagnosis_code
        FROM
            temp_ip
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            hd_spell_fiscal_year,
            secondary_procedure_10  procedure_code,
            secondary_diagnosis_10    diagnosis_code
        FROM
            temp_ip
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            hd_spell_fiscal_year,
            secondary_procedure_11  procedure_code,
            secondary_diagnosis_11    diagnosis_code
        FROM
            temp_ip
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            hd_spell_fiscal_year,
            secondary_procedure_12  procedure_code,
            secondary_diagnosis_12    diagnosis_code
        FROM
            temp_ip
    ), hd_procedure_hierarchy AS
    (   SELECT
            NVL (MAX (a.hierarchy), 0) hd_procedure_hierarchy,
            a.fiscal_year
        FROM
            ref_hd.nhs_opcs_hier a,
            codes pc
        WHERE
            a.fiscal_year = pc.hd_spell_fiscal_year
        AND
            a.code = pc.procedure_code
        GROUP BY
            a.fiscal_year
    ),hd_diagnosis_hierarchy AS
    (   SELECT
            NVL (MAX (a.hierarchy), 0) hd_diagnosis_hierarchy,
            a.fiscal_year
        FROM
            ref_hd.nhs_icd10_hier a,
            codes pc
        WHERE
            a.fiscal_year = pc.hd_spell_fiscal_year
        AND
            a.code = pc.diagnosis_code
        GROUP BY
            a.fiscal_year
    )
    SELECT b.*, a.hd_procedure_hierarchy, c.hd_diagnosis_hierarchy
      FROM analytic_hes.temp_ip b
           LEFT OUTER JOIN hd_procedure_hierarchy a
              ON (a.fiscal_year = b.hd_spell_fiscal_year)
           LEFT OUTER JOIN hd_diagnosis_hierarchy c
               ON (c.fiscal_year = b.hd_spell_fiscal_year)
    HTH
    David

  • Table Popin

    I have a WD component which has a view that contains a table.  I have a popin in the table for each row which also contains a table.  When toggling the popin for each row, the values for the popin populate just fine.  However, part of the requirement was to add an expand/collapse all button to toggle the popins for all rows at the same time.
    I currently have a test case where the main table has three rows of data.  The popin table has four columns providing additional detail for the main rows.  When I toggle the expand/collapse all, all the popins toggle open just fine, but the one for the first row only populates the first column in the popin and not columns 2-4.  However, for main rows 2 and 3 of the table, the popins populate all 4 columns just fine.
    While debugging, the values for the first row's table popin are populating in the node just fine, but at some point it loses them and I can't figure out where.
    I have a method on the component controller called Define_Details which populates the popin, and again when toggling the popins individually per row, the values are just fine.
    I have a method on the view called Onaction_expandall for the button expand/collapse all which controls opening the popins and also loops through all the elements and calls the method from the component controller to populate the values.
    I appreciate any insight and thoughts as to why when toggling the expand/collapse all button, the first row populates only the first column in the popin but not the other three while all other rows populate all four columns.
    [ScreenShot of Component|http://img14.imageshack.us/img14/1929/wdcomponent.jpg]
    Edited by: Roger Beach on Nov 19, 2011 1:05 PM

    On further testing, it is not filling the details popin for whichever main row of the table is the lead selection.  I originally had the following code but without index = lv_count for the context element and that made it only work for the row with lead selection.  Adding the index to the context element seemed to sort things out except now it doesn't work completely with lead selection.
    lv_count = 0.
      LOOP AT lt_elem_set ASSIGNING <ls_elem>.
        lv_count = lv_count + 1.
        IF checked = abap_true.
        context_element = wd_context->get_child_node( name = wd_this->wdctx_z9027 )->get_element( index = lv_count ).
        lo_componentcontroller = wd_this->get_componentcontroller_ctr( ).
        lo_componentcontroller->define_z9027_details( index = lv_count io_context_element = context_element ).
          <ls_elem>->set_attribute(
            EXPORTING
              name =  `POPIN_NAME`
              value = `TABLEPOPIN` ).
        ELSE.
          <ls_elem>->set_attribute(
            EXPORTING
              name =  `POPIN_NAME`
              value = `` ).
        ENDIF.
      ENDLOOP.

  • Parameter into a CREATE TABLE with PRO*C

    Hi,
    I use Pro*C and I have a problem with a CREATE TABLE request.
    My program asks for the name of my mission variable, which I simply read with a scanf.
    I then want to create a table where one of the fields must match the mission name. But the function doesn't do anything, whereas when I hard-code the name of the variable there is no problem.
    Thanks for your answers and sorry for my English.
    Code:
    void creer_table1(char nom_mission[50])
    {
        EXEC SQL BEGIN DECLARE SECTION;
        VARCHAR TABLE[50];
        EXEC SQL END DECLARE SECTION;
        strcpy(TABLE.arr, nom_mission);
        TABLE.len = strlen(TABLE.arr);
        EXEC SQL CREATE TABLE mission.msn_test AS
            SELECT * FROM mission.msn_mission
            WHERE missionname = :TABLE;
        printf("Table MSN_TEST creee dans le schema MISSION\n");
    }
    PS: My IDE is code::blocks
    Laurent Barale
    Edited by: 899981 on Nov 30, 2011 08:16

    Hi,
    I didn't get any errors, but I solved my problem using dynamic SQL.
    Example:
    void function(char name[50])
    {
        EXEC SQL BEGIN DECLARE SECTION;
        char *varsql;
        EXEC SQL END DECLARE SECTION;
        char toto[150] = "CREATE TABLE test AS SELECT * FROM tutu WHERE employe='";
        varsql = strcat(toto, name);
        strcat(varsql, "'");
        EXEC SQL EXECUTE IMMEDIATE :varsql;
    }
    I hope that could help somebody.
    Bye and thank you for your answer.
    Edited by: 899981 on Dec 20, 2011 05:28
