Data pivot issue

Hello everybody,
I have data like this:
product_id     product_param
10          prod_name
10          prod_desc
10          prod_retail_val
20          prod_name
30          prod_name
30          prod_desc
This data is not stored in any table. I get it in a collection while working through my PL/SQL code.
I can use a table function for this collection.
The nature of this data is: product_param can have 20 different values.
A given product_id can have one or more records in this collection. Of course, the product_id and product_param combination will be unique.
In the PL/SQL code, I read a huge volume of data using bulk collect with a limit clause (limit of 500 records), then do some checks and capture the desired rows in this collection. So this collection can have at most 500 records, assuming all 500 records fetched through bulk collect met the criteria and made it into this collection.
I want to get the following output from this collection:
10     prod_name||prod_desc||prod_retail_val
20     prod_name
30     prod_name||prod_desc
I am not able to use PIVOT. How can I achieve this?
Thanks,
RN

It is a collection of objects. I used an analytic function to get the results. I am using 11g.
Thanks a lot for your response. Please let me know if you have a better alternative.
Appreciate your help!
RN
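For anyone landing here with the same question: since the collection is built on SQL object types and can be queried through TABLE(), one option on 11.2 is LISTAGG over the table function. A minimal sketch, assuming hypothetical types prod_rec and prod_tab, and using '||' literally as the separator shown in the desired output:

    create type prod_rec as object (product_id number, product_param varchar2(30));
    /
    create type prod_tab as table of prod_rec;
    /
    declare
      -- a small stand-in for the collection built by the bulk-collect logic
      v_prods prod_tab := prod_tab(prod_rec(10, 'prod_name'),
                                   prod_rec(10, 'prod_desc'),
                                   prod_rec(20, 'prod_name'));
    begin
      for r in (select product_id,
                       listagg(product_param, '||')
                         within group (order by product_param) as params
                  from table(v_prods)
                 group by product_id)
      loop
        dbms_output.put_line(r.product_id || '     ' || r.params);
      end loop;
    end;
    /

On 11.1, where LISTAGG is not available, the same TABLE(v_prods) source can be fed to the MAX/DECODE or XMLAGG techniques discussed in the first thread below. Note the desired output lists prod_name before prod_desc, so a CASE expression in the WITHIN GROUP ordering may be needed rather than ordering by the parameter name itself.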

Similar Messages

  • Data pivoting in query

    Greetings Oracle gurus!
    Problem:
I'm trying to write a query that does some data pivoting. I've done this in the past on smaller data sets, and it worked great. However, now I'm doing it against a table that has well over a million records, and what I'm looking for is the most efficient method. I've seen ways of doing it by utilizing "union alls" in a WITH query. I've seen ways by creating columns in the query with max() and decode() functions. So... what's the best way to pivot the data? I've seen listagg(), but that comes only with Oracle 11+ I believe... so gotta bust out some sql magic here.
    All the good stuff:
    Running Oracle 10.2
    Sample data:
    drop table WO_COMMENTS;
      CREATE TABLE "WO_COMMENTS"
          "ORDER_NO"      varchar2(10),
          "COMMENT_SEQ"   number,
          "COMMENT_TYPE"  VARCHAR2(4) ,
          "COMMENT_TEXT"  VARCHAR2(80)
    SET DEFINE OFF;
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',1,'WOMM','Test1');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',2,'WOMM',null);
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',10,'WOMM','The ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',11,'WOMM','big ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',12,'WOMM','blue ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',13,'WOMM','dog ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',14,'WOMM','died ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',20,'WOMM','Yet ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',21,'WOMM','again');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',22,'WOMM',' an ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',23,'WOMM','issue');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',24,'WOMM',null);
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',30,'WOMM','will ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',31,'WOMM','it ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',32,'WOMM','get ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',33,'WOMM','fixed');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',34,'WOMM','?  ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',35,'WOMM','    ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',36,'WOMM','No ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',37,'WOMM','One ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',38,'WOMM','will ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',39,'WOMM','ever ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W00284',40,'WOMM','know!');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W33005',1,'DOCR','Holy ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W33005',2,'DOCR','cow ');
    insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W33005',3,'DOCR','pie! ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W33005',1,'RTMM','This ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W33005',2,'RTMM','is ');
    insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W33005',3,'RTMM','an ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W33005',4,'RTMM','& ');
    Insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W33005',5,'RTMM','!!!  ');
    insert into WO_COMMENTS (ORDER_NO,COMMENT_SEQ,COMMENT_TYPE,COMMENT_TEXT) values ('00W33005',1,'WOMM','Test9');
    commit;
    SELECT  
          ORDER_NO  as OBJECT_ID       ,
          COMMENT_TYPE as ATTACHMENT_REF     ,
          RTRIM (XMLAGG (xmlelement (E, COMMENT_TEXT || ' ' ) order by comment_seq).extract ('//text()')
          , ',')       as NOTE     
      from WO_COMMENTS a
      where order_no in ('00W00284', '00W33005')
  GROUP BY order_no,
    comment_type;
What I'd like the data to look like:
    OBJECT_ID     ATTACHMENT_REF     NOTE
    00W00284     WOMM     Test1  The  big  blue  dog  died  Yet  again  an  issue  will  it  get  fixed ?        No  One  will  ever  know!
    00W33005     DOCR     Holy  cow  pie! 
    00W33005     RTMM     This  is  an  &  !!!  
    00W33005     WOMM     Test9
With the query used, the '&' in the third record comes across as '&amp;'. How do I deal with special characters in this case?
    I know this data has absolutely nothing to do with XML, but using the xmlagg function is sort of a trick I found to do what I need, along with it being very easy to implement. Unsure of how badly this affects performance though. Also note, this is part of a data conversion effort, so it's intended to have some of these columns coming back completely null for the moment. Any "more efficient" methods?
    I think I covered everything that folks may need...
    Would greatly appreciate any help anyone has to offer :)
    Edit: New problem with special characters. New sample data and output supplied.
    Edited by: dvsoukup on Aug 16, 2012 11:21 AM
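A commonly posted workaround for the '&amp;' artifact is to decode the XML entities after extraction. A sketch against the posted table, assuming DBMS_XMLGEN.CONVERT with its ENTITY_DECODE flag (works for results under 4000 characters; use GETCLOBVAL() instead of GETSTRINGVAL() beyond that):

    SELECT order_no     AS object_id,
           comment_type AS attachment_ref,
           DBMS_XMLGEN.CONVERT(
               XMLAGG(XMLELEMENT(e, comment_text || ' ')
                      ORDER BY comment_seq).EXTRACT('//text()').GETSTRINGVAL(),
               DBMS_XMLGEN.ENTITY_DECODE) AS note
      FROM wo_comments
     WHERE order_no IN ('00W00284', '00W33005')
     GROUP BY order_no, comment_type;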

    Hi,
    dvsoukup wrote:
    Greetings Oracle gurus!
    Problem:
I'm trying to write a query that does some data pivoting.
To be excruciatingly precise, pivoting means taking 1 column on N rows, and displaying the information as N columns on 1 row.
Is that what you want, or do you want string aggregation, where you take 1 column on N rows, and display that as a concatenated list of all N items, in 1 column on 1 row?
I've done this before in the past on smaller data sets, and they've worked great. However, now I'm doing it against a table that has well over a million records. What I'm looking for is the most efficient method in doing this. I've seen ways of doing it by utilizing "union alls" in a WITH query.
UNION ALL isn't very efficient (unless you're comparing it to plain UNION), so I'll bet that won't help you. I'm not sure I know the technique you're talking about, though. Just for my curiosity, can you post a link to an example?
I've seen ways by creating columns in the query with max() and decode() functions.
That's the standard way to pivot data in versions earlier than 11.1.
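To illustrate that standard technique against the posted table, a sketch that pivots a known, bounded set of comment_seq values into columns (with 40 possible positions you would need one MAX(DECODE(...)) per position, which is part of why string aggregation is usually preferred here):

    SELECT order_no, comment_type,
           MAX(DECODE(comment_seq, 1, comment_text)) AS text_1,
           MAX(DECODE(comment_seq, 2, comment_text)) AS text_2,
           MAX(DECODE(comment_seq, 3, comment_text)) AS text_3
      FROM wo_comments
     GROUP BY order_no, comment_type;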
If you really want string aggregation, however, an alternative to XMLAGG is SYS_CONNECT_BY_PATH. If you don't need the items in any one output row in any particular order, then the user-defined aggregate function STRAGG is very handy. STRAGG can be found at the beginning of the following page, and SYS_CONNECT_BY_PATH is found later on the same page:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2196162600402
    My guess is that STRAGG would be the fastest way, but again, STRAGG doesn't guarantee that the output will be
    'X / Y / Z'; you're just as likely to get
    'X / Z / Y' or
    'Z / Y / X' or any of the other permutations.
    I'm not sure whether SYS_CONNECT_BY_PATH is faster than XMLAGG, but it's worth trying.
So... what's the best way to pivot the data? I've seen listagg(), but that comes only with Oracle 11+ I believe... so gotta bust out some sql magic here.
That's right; LISTAGG is a built-in function for string aggregation, but only in Oracle 11.2 and up.
    All the good stuff:
    Running Oracle 10.2
Sample data: ...
Thanks for posting the sample data and results.
I'm finding it very difficult to read and understand all of that, however. Could you remove some of the columns, and use shorter strings?
What exactly is the part that you don't understand? I think you're saying that you need to generate a column like the output column called note. I believe you're saying that XMLAGG does what you want, but you'd like to know about other ways that might be more efficient.
Could you post an example that only involves, say, order_no, comm_entry_date and comment_text, with a maximum length of 5 for comment_text? That would be so much easier for me to understand the problem, and for you to understand the solution. Adapting the solution for all your columns should be very easy.
... I think I covered everything that folks may need...
Yes, that's a very thorough message, but it would really help if you could simplify the input data.
    Would greatly appreciate any help anyone has to offer :)
Edit: Sorry for causing the display to have a scroll bar go WAY to the right... not sure how to make it more user friendly to be able to see the data n' stuff.
I can't think of any way that keeps all the columns and data that you need in your real problem. That's why I'd like you to reduce the problem to something much simpler.
    I know you need to have several boilerplate columns, like object_name, in your results, but do they need to be in the problem you post?
    I know you need to GROUP BY 4 expressions, but if you see a solution that GROUPs BY 2 of them, you should be able to add the others.
    I know your strings can be 80 characters long, but can't you test with strings no longer than 5 characters?
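For comparison, a minimal 10.2 sketch of the SYS_CONNECT_BY_PATH approach against the posted WO_COMMENTS table (the '~' delimiter is arbitrary; pick something that cannot occur in comment_text):

    SELECT order_no, comment_type,
           LTRIM(SYS_CONNECT_BY_PATH(comment_text, '~'), '~') AS note
      FROM (SELECT order_no, comment_type, comment_text,
                   ROW_NUMBER() OVER (PARTITION BY order_no, comment_type
                                      ORDER BY comment_seq) AS rn
              FROM wo_comments)
     WHERE CONNECT_BY_ISLEAF = 1
     START WITH rn = 1
    CONNECT BY order_no = PRIOR order_no
           AND comment_type = PRIOR comment_type
           AND rn = PRIOR rn + 1;

Unlike STRAGG, this preserves the comment_seq order within each group.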

  • Windows 8.1 Data reordering issue with Intel Adaptors

According to Intel, there is a data reordering issue with their adaptors and probably this dumb WiDi software. This is from the Intel site; they say some are fixed: "A future Windows 8 fix will address this issue for other Intel wireless adapters." I have one. Nope, still broke. I get drops all the time. Brand new Toshiba laptop, i7, 16 gigs of RAM, an SSD and a 2 gig vid card. Would be nice to be able to play games, but I get dropped all the time. Now would Microsoft quit hiding about this and fix the darn thing. Also, I've been a system admin for 13 years. I have built over 1000 PCs and servers. I know bad software. Please fix this. PLEASE. It's not going to just go away, and it's not just Toshiba; I have seen other companies with the same problem. If there is a fix, PLEASE POST IT. Or even a workaround; I have tried everything.
    http://www.intel.com/support/wireless/wlan/sb/CS-034535.htm

    Hi,
    Have your first tried the software fix under this link for your network adapter?
    http://www.intel.com/support/wireless/wtech/proset-ws/sb/CS-034041.htm
    Please Note: The third-party product discussed here is manufactured by a company that is independent of Microsoft. We make no warranty, implied or otherwise, regarding this product's performance or reliability.
    Also, you can try to check if there is any driver update under Device manager from manufacture's website.
    Kate Li
    TechNet Community Support
Yep, didn't work. Still get drops all the time; I had to run a Cat 5e cable to my laptop from my modem, because I have an Atheros gigabit LAN adaptor. Works great. The wireless still drops all the time. Has Microsoft let out the patch to fix this, or is it coming in April in the 8.1 patch? Funny thing is it's all for WiDi; I don't even use WiDi, I got software to do that from Samsung that works better on my TV. Intel and Microsoft need to get this fixed, because they're driving off gamers, and those are the people that make sure they buy Microsoft so they can play games. With the wireless link dead, a great laptop is worthless; what's the point? I've been in IT for 13 years building PCs and servers, which is how I knew how to run a 60 ft Cat 5e line through a 2-story house and terminate it. Most people don't. Fix the problem.

  • Data Load Issue "Request is in obsolete version of DataSource"

    Hello,
I am getting a very strange data load issue in production. I am able to load the data up to the PSA, but when I run the DTP to load the data into 0EMPLOYEE (master data object), I get the message below:
    Request REQU_1IGEUD6M8EZH8V65JTENZGQHD not extracted; request is in obsolete version of DataSource
    The request REQU_1IGEUD6M8EZH8V65JTENZGQHD was loaded into the PSA table when the DataSource had a different structure to the current one. Incompatible changes have been made to the DataSource since then and the request cannot be extracted with the DTP anymore.
I have taken the following actions:
1. Replicated the data source
2. Deleted all requests from PSA
3. Activated the data source using (RSDS_DATASOURCE_ACTIVATE_ALL)
4. Re-transported the datasource, transformation, and DTP
    Still getting the same issue
    If you have any idea please reply asap.
    Samit

    Hi
    Generate your datasource in R/3 then replicate and activate the transfer rules.
    Regards,
    Chandu.

  • ORA-01403 No Data Found Issue

    Hi,
I'm very new to Streams and have a doubt regarding an ORA-01403 issue happening during replication. Need your kind help in this regard. Thanks in advance.
Oracle version: 10.0.3.0
1. Suppose there are 10 LCRs in a transaction, and one of the LCRs causes ORA-01403, so none of the LCRs get executed.
We can read the data of this LCR and manually update the record in the destination database.
Even though this is done, when re-executing the transaction I get the same ORA-01403 on the same LCR.
What could be the possible reason?
Since this is a large-scale system with thousands of transactions, it is not possible to handle every "no data found" issue manually.
I have written a PL/SQL block which can generate UPDATE statements with the old data available in the LCR, so that I can re-execute the transaction.
The PL/SQL block is given below. Could you please check whether there are any issues in how it generates the UPDATE statements? Thank you.
    /* Formatted on 2008/10/23 14:46 (Formatter Plus v4.8.7) */
    --Script for generating the Update scripts for the Message which caused the 'NO DATA FOUND' error.
DECLARE
   res         NUMBER;               -- No. of errors to be resolved
   ret         NUMBER;               -- Holds the return value from GETOBJECT
   i           NUMBER;               -- Loop index
   j           NUMBER;               -- Loop index
   k           NUMBER;               -- Loop index
   pk_count    NUMBER;               -- No. of PK columns for a table
   lcr         ANYDATA;              -- Holds the logical change record
   typ         VARCHAR2 (61);        -- Holds the type of a column
   rowlcr      SYS.LCR$_ROW_RECORD;  -- Holds the LCR that caused the error in a txn
   oldlist     SYS.LCR$_ROW_LIST;    -- Holds the old data of the updated/deleted record
   newlist     SYS.LCR$_ROW_LIST;
   upd_qry     VARCHAR2 (5000);
   del_qry     VARCHAR2 (5000);      -- DELETE statement (built but not printed)
   ins_qry     VARCHAR2 (5000);      -- INSERT statement (built but not printed)
   equals      VARCHAR2 (5) := ' = ';
   data1       VARCHAR2 (2000);
   num1        NUMBER;
   date1       TIMESTAMP (0);
   timestamp1  TIMESTAMP (3);
   iscomma     BOOLEAN;

   TYPE tab_lcr IS TABLE OF ANYDATA INDEX BY BINARY_INTEGER;
   TYPE pk_cols IS TABLE OF VARCHAR2 (50) INDEX BY BINARY_INTEGER;

   lcr_table   tab_lcr;
   pk_table    pk_cols;
BEGIN
   i := 1;

   SELECT COUNT (1) INTO res FROM dba_apply_error;

   -- Collect the LCRs of the failed transaction
   FOR txn_id IN (SELECT message_number, local_transaction_id
                    FROM dba_apply_error
                   WHERE local_transaction_id = '2.85.42516'
                   ORDER BY error_creation_time)
   LOOP
      SELECT DBMS_APPLY_ADM.GET_ERROR_MESSAGE (txn_id.message_number,
                                               txn_id.local_transaction_id)
        INTO lcr
        FROM DUAL;

      lcr_table (i) := lcr;
      i := i + 1;
   END LOOP;

   DBMS_OUTPUT.PUT_LINE ('size >' || lcr_table.COUNT);

   FOR k IN 1 .. lcr_table.COUNT
   LOOP
      rowlcr := NULL;
      ret := lcr_table (k).GETOBJECT (rowlcr);
      pk_count := 0;

      -- Find the PK columns of the table
      SELECT COUNT (1)
        INTO pk_count
        FROM all_cons_columns col, all_constraints con
       WHERE col.table_name = con.table_name
         AND col.constraint_name = con.constraint_name
         AND con.constraint_type = 'P'
         AND con.table_name = rowlcr.get_object_name;

      DBMS_OUTPUT.PUT_LINE ('Count of PK Columns >' || pk_count);

      del_qry := 'DELETE FROM ' || rowlcr.get_object_name || ' WHERE ';
      ins_qry := 'INSERT INTO ' || rowlcr.get_object_name || ' ( ';
      upd_qry := 'UPDATE '      || rowlcr.get_object_name || ' SET ';

      oldlist := rowlcr.get_values ('old');

      -- Generate UPDATE query
      newlist := rowlcr.get_values ('old');
      iscomma := FALSE;

      FOR j IN 1 .. newlist.COUNT
      LOOP
         IF newlist (j) IS NOT NULL
         THEN
            IF j < newlist.COUNT
            THEN
               IF iscomma = TRUE
               THEN
                  upd_qry := upd_qry || ',';
               END IF;
            END IF;

            iscomma := FALSE;
            typ := newlist (j).data.GETTYPENAME;

            IF typ = 'SYS.VARCHAR2'
            THEN
               ret := newlist (j).data.GETVARCHAR2 (data1);
               IF data1 IS NOT NULL
               THEN
                  upd_qry := upd_qry || newlist (j).column_name || equals
                          || ' ' || '''' || SUBSTR (data1, 0, 253) || '''';
                  iscomma := TRUE;
               END IF;
            ELSIF typ = 'SYS.NUMBER'
            THEN
               ret := newlist (j).data.GETNUMBER (num1);
               IF num1 IS NOT NULL
               THEN
                  upd_qry := upd_qry || newlist (j).column_name || equals
                          || ' ' || num1;
                  iscomma := TRUE;
               END IF;
            ELSIF typ = 'SYS.DATE'
            THEN
               ret := newlist (j).data.GETDATE (date1);
               IF date1 IS NOT NULL
               THEN
                  upd_qry := upd_qry || newlist (j).column_name || equals
                          || ' ' || 'TO_Date( ' || '''' || date1 || ''''
                          || ', ''' || 'DD/MON/YYYY HH:MI:SS AM'')';
                  iscomma := TRUE;
               END IF;
            ELSIF typ = 'SYS.TIMESTAMP'
            THEN
               ret := newlist (j).data.GETTIMESTAMP (timestamp1);
               IF timestamp1 IS NOT NULL
               THEN
                  upd_qry := upd_qry || ' ' || '''' || timestamp1 || '''';
                  iscomma := TRUE;
               END IF;
            END IF;
         END IF;
      END LOOP;

      -- Set the WHERE condition from the PK columns
      upd_qry := upd_qry || ' WHERE ';

      FOR i IN 1 .. pk_count
      LOOP
         SELECT column_name
           INTO pk_table (i)
           FROM all_cons_columns col, all_constraints con
          WHERE col.table_name = con.table_name
            AND col.constraint_name = con.constraint_name
            AND con.constraint_type = 'P'
            AND position = i
            AND con.table_name = rowlcr.get_object_name;

         FOR j IN 1 .. newlist.COUNT
         LOOP
            IF newlist (j) IS NOT NULL
            THEN
               IF newlist (j).column_name = pk_table (i)
               THEN
                  upd_qry := upd_qry || ' ' || newlist (j).column_name
                          || ' ' || equals;
                  typ := newlist (j).data.GETTYPENAME;

                  IF typ = 'SYS.VARCHAR2'
                  THEN
                     ret := newlist (j).data.GETVARCHAR2 (data1);
                     upd_qry := upd_qry || ' ' || ''''
                             || SUBSTR (data1, 0, 253) || '''';
                  ELSIF typ = 'SYS.NUMBER'
                  THEN
                     ret := newlist (j).data.GETNUMBER (num1);
                     upd_qry := upd_qry || ' ' || num1;
                  END IF;

                  IF i < pk_count
                  THEN
                     upd_qry := upd_qry || ' AND ';
                  END IF;
               END IF;
            END IF;
         END LOOP;
      END LOOP;

      upd_qry := upd_qry || ';';
      DBMS_OUTPUT.PUT_LINE (upd_qry);
      -- Generate UPDATE query - end
   END LOOP;
END;

Thanks for your replies, HTH and Dipali.
I would like to make some points clear from my side based on the issue I have raised.
1. The "no data found" error is happening on a table for which supplemental logging is enabled.
2. As per my understanding, the apply process is comparing the existing data in the destination database with the "old" data in the LCR.
Once there is a mismatch between these two, ORA-01403 is thrown. (Please tell me whether my understanding is correct or not.)
3. This mismatch can be on a date field or even on the timestamp millisecond as well.
Now, the point I'm really wondering about:
Somehow a mismatch got generated in the destination database (not sure about the reason) and ORA-01403 is thrown.
If we could update the destination database with the "old" data from the LCR, this mismatch should be resolved, shouldn't it?
Reply to you, Dipali:
If nothing works out, I'm planning to put a conflict handler on all tables with the "OVERWRITE" option, using the following script:
--Generate script for applying conflict handlers for the tables for which supplemental logging is enabled
declare
  count1 number;
  query  varchar2(500) := null;
begin
  for tables in (
    select table_name
      from user_tables
     where table_name in ("NAMES OF TABLES FOR WHICH SUPPLEMENTAL LOGGING IS ENABLED")
  )
  loop
    count1 := 0;
    dbms_output.put_line('DECLARE');
    dbms_output.put_line('cols DBMS_UTILITY.NAME_ARRAY;');
    dbms_output.put_line('BEGIN');

    select max(position)
      into count1
      from all_cons_columns col, all_constraints con
     where col.table_name = con.table_name
       and col.constraint_name = con.constraint_name
       and con.constraint_type = 'P'
       and con.table_name = tables.table_name;

    for i in 1..count1
    loop
      query := null;
      select 'cols(' || position || ')' || ' := ' || '''' || column_name || ''';'
        into query
        from all_cons_columns col, all_constraints con
       where col.table_name = con.table_name
         and col.constraint_name = con.constraint_name
         and con.constraint_type = 'P'
         and con.table_name = tables.table_name
         and position = i;
      dbms_output.put_line(query);
    end loop;

    dbms_output.put_line('DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(');
    dbms_output.put_line('object_name => ''ICOOWR.' || tables.table_name || ''',');
    dbms_output.put_line('method_name => ''OVERWRITE'',');
    dbms_output.put_line('resolution_column => ''COLM_NAME'',');
    dbms_output.put_line('column_list => cols);');
    dbms_output.put_line('END;');
    dbms_output.put_line('/');
    dbms_output.put_line('');
  end loop;
end;
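For reference, each block the script prints would look something like this (the table and PK column names here are hypothetical):

    DECLARE
    cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
    cols(1) := 'ORDER_ID';
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name => 'ICOOWR.ORDERS',
    method_name => 'OVERWRITE',
    resolution_column => 'COLM_NAME',
    column_list => cols);
    END;
    /

One caveat: DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER expects resolution_column to name a column that appears in column_list, so the hard-coded 'COLM_NAME' would need to be replaced with one of the actual columns before running the generated blocks.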
Reply to you, HTH:
Our destination database is a replica of the source, and no triggers are running on any of these tables.
This is not the first time I'm facing this issue. Earlier, we had to take big outages, clear the replica database and apply the dump from the source...
Now I can't think about going through that situation again.

  • 4G LTE data reception issue in area of work building

    Hi, I'm having a data reception issue in a certain area at work.  The signal indicator at the upper right of the homescreen shows "4GLTE" but this is clearly inaccurate since I am not able to navigate to websites or send/receive multimedia messages.  If I move ~30 feet east in the building, the reception is restored.  Two people with iPhone 5 devices have the same issue.  However, the Verizon iPhone 5 allows you to turn off LTE.  Once this was done and the signal fell back to 3G, reception was restored, albeit with slower speeds, but at least reception wasn't completely blocked.  I understand 4G is not available in all areas, but in this case, the phone is not automatically switching to 3G and there is no workaround because there is no option to turn off LTE on the Z10.  In the "Settings" -> "Network Connections" -> "Mobile Network" -> "Network Technology" dropdown, the only values are:
    UMTS/GSM (when I switch to this, no networks are found)
    Global (the current selection)
    LTE/CDMA
    This is a big problem for me because for 8+ hours in the day I can't receive MMS messages or navigate to websites.

    Hi, Nate650,
    Sorry to hear about your problem with 4G. First, let me ask, have you updated your Z10 to the latest official software version? I had a similar problem with my Z10. After about an hour on the phone with CS, we figured out it was a problem with the tower near me. The problem was fixed by VZW and I have not had connection issues. You are right, though, about the Z10 falling back to 3G. Mine did before the update but not since.
    Doc

  • Logical Standby Data Consistency issues

    Hi all,
    We have been running a logical standby instance for about three weeks now. Both our primary and logical are 11g (11.1.0.7) databases running on Sun Solaris.
    We have off-loaded our Discoverer reporting to the logical standby.
    About three days ago, we started getting the following error message (initially for three tables, but from this morning on a whole lot more)
ORA-26787: The row with key (<column>) = (<value>) does not exist in table <schema>.<table>
This error implies that we have data consistency issues between our primary and logical standby databases, but we find that hard to believe, because the "data guard" status is set to "standby", implying that schemas being replicated by Data Guard are not available for user modification.
Any assistance in this regard would be greatly appreciated.
    thanks
    Mel

It is a bug: Bug 10302680. Apply the corresponding Patch 10302680 to your standby DB.

  • How to get material's last posting date of issuing to production order?

    Hi,
In my scenario, I need to get a material's last posting date of issue to a production order (e.g. movement type 261).
I tried selecting the material documents whose movement type is 261, restricting the posting date month by month each time, until the first material document is selected.
But this method seems quite inefficient.
What kind of algorithm is more efficient for this?
    Thanks
    Wesley

    Hi,
select max( budat )
  from mkpf
  into gv_budat
 where mblnr in ( select mblnr
                    from aufm
                   where aufnr = gv_aufnr   "(Prod. Order)
                     and matnr = gv_matnr   "(Issued Material)
                     and bwart = '261' ).
    Edited by: Azeem Ahmed Matte on Mar 12, 2010 12:33 PM

• How to get the previous state of my data after issuing the commit method

How to get the previous state of some data after issuing the commit method in an entity bean (it should not use any offline storage).

    >
Is there any way to get the state apart from using offline storage?
As I said, the caller keeps a copy in memory.
    Naturally if it is no longer in memory then that is a problem.
    >
and also, what do you mean by audit log?
    You keep track of every change to the database by keeping the old data. There are three ways:
    1. Each table has a version number/delete flag for each record. A record is never updated nor deleted. Instead a new record is created with a new version number and with the new data.
    2. Each table has a duplicate table which has all of the same columns. When the first table is modified the old data is moved to the duplicate table.
3. A single table is used which has columns for 'table', 'field', 'data' and 'activity' (update, delete). When a change is made in any table then this table is updated. This is generally of limited usability due to the difficulty in recovering the data.
    All of the above can have a user id, timestamp, and/or additional information which is relevant to the data being changed.
    Note that ALL of this is persisted storage.
    I am not sure what this really has to do with "offline storage" unless you are using that term to refer to backed up data which is not readily available.
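As a concrete illustration of option 1 (a sketch only; the table and column names are made up):

    CREATE TABLE customer (
      customer_id  NUMBER,
      version_no   NUMBER,
      delete_flag  CHAR(1) DEFAULT 'N',
      cust_name    VARCHAR2(100),
      changed_by   VARCHAR2(30),
      changed_at   TIMESTAMP,
      CONSTRAINT customer_pk PRIMARY KEY (customer_id, version_no)
    );

    -- An "update" never touches the old row; it inserts a new version:
    INSERT INTO customer (customer_id, version_no, cust_name, changed_by, changed_at)
    VALUES (42, 3, 'New Name', USER, SYSTIMESTAMP);

    -- The current state is the highest version per id not flagged as deleted:
    SELECT *
      FROM customer c
     WHERE c.version_no = (SELECT MAX(version_no)
                             FROM customer
                            WHERE customer_id = c.customer_id)
       AND c.delete_flag = 'N';

Every earlier version remains queryable, which is exactly the "previous state after commit" the original question asks for.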

  • Has anyone found a solution for iPhone 5 data leak issues?

Up until about a week ago I was using a 3GS, and the data leak issues seemed to be fixed with the newest iOS 6 update. However, I recently got an iPhone 5 and I've noticed it uses around 1 MB per hour no matter what I'm actually doing on the phone. I went to sleep last night, turning off cellular data AND wifi, and it STILL used about 4 MB of data!! What is up with this?? I am a pretty conservative user of data when not on wifi, but I'm only 2 days into my bill cycle and already on pace to go over my 2 GB limit by the end of the month. Please help! I do not want to switch my plan and pay more! I am on AT&T by the way.

    Have you tried these basic troubleshooting steps?
    Restart / Reset
    http://support.apple.com/en-us/HT201559
    Restore from backup
    Restore as new
    http://support.apple.com/en-us/HT201252
    If no joy, make an appointment with the Apple genius bar for an evaluation.

  • TileList data load issue

    I am having an issue where the data that drives a tilelist
    works correctly when the tile list is not loaded on the first page
    of the application. When it is put on a second page in a viewstack
    then the tilelist displays correctly when you navigate to it. When
    the tilelist is placed in the first page of the application I get
    the correct number of items to display in the tilelist but the
    information the item renderer is supposed to display, ie a picture,
    caption and title, does not. The strange thing is that a Tree
    populates correctly given the same situation. Here is the sequence
    of events:
// get_tree is the data for the tree and get_groups is the
// data for the tilelist
    creationComplete="get_tree.send();get_groups.send();"
    <mx:HTTPService showBusyCursor="true" id="get_groups"
    url="[some xml doc]" resultFormat="e4x"/>
    <mx:XMLListCollection id="myXMlist"
    source="{get_groups.lastResult.groups}"/>
    <mx:HTTPService showBusyCursor="true" id="get_tree"
    url="[some xml doc]" resultFormat="e4x" />
    <mx:XMLListCollection id="myTreeXMlist"
    source="{get_tree.lastResult.groups}"/>
    And then the data provider of the tilelist and tree are set
accordingly. I tried moving the data calls from the
    creation complete to the initialize event thinking that it would
    hit earlier in the process and be done by the time the final
    completion came about but that didn't help either. I guess I'm just
    at a loss as to why the tree works fine no matter where I put it
    but the TileList does not. It's almost like the tree and the
    tilelist will sit and wait for the data but the item renderer in
    the tilelist will not wait. Which would explain why clicking on the
    tile list still produces the correct sequence of events but the
    visual component of the tilelist is just not working right. Anyone
    have any ideas?

OK, so if the ASO value is wrong, then it's a data load issue and there's no point messing around with the BSO app. You are loading two transactions to the exact same intersection. Make sure your data load is set to aggregate values and not overwrite.

• When is the Next update of IOS..? ****** off with Data Loss issue

I have been using an iPhone 4 for the past 2 years in India on the Docomo network; since the update to iOS 5 and 5.0.1 I
am facing frequent data loss on the mobile...!!
For the time being I can overcome it by turning the cellular connection off and on, which resets the data connection...!!!

    Since you haven't bothered to describe what this mysterious "data connection issue" is, we have no way to confirm or deny your statement that the problem is widespread.
    The fact remains, you're using it on an unsupported carrier.
    If you'd like to try and describe WHAT THE PROBLEM IS instead of getting defensive about it, we might be able to help. Otherwise, your initial question has been answered. No one here can tell you when the next update to iOS will be released.

  • [svn] 1543: Bug: BLZ-152-lcds custom Date serialization issue - need to add java.io. Externalizable as the first type tested in AMF writeObject() functions

    Revision: 1543
    Author: [email protected]
    Date: 2008-05-02 15:32:59 -0700 (Fri, 02 May 2008)
    Log Message:
    Bug: BLZ-152-lcds custom Date serialization issue - need to add java.io.Externalizable as the first type tested in AMF writeObject() functions
    QA: Yes - please check that the fix is working with AMF3 and AMFX and you can turn on/off the fix with the config option.
    Doc: No
    Checkintests: Pass
    Details: The problem in this case was that MyDate.as was serialized to MyDate.java on the server but on the way back, MyDate.java was serialized back to Date.as. As the bug suggests, added an Externalizable check in AMF writeObject functions. However, I didn't do this for AMF0Output as AMF0 does not support Externalizable. To be on the safe side, I also added legacy-externalizable option which is false by default but when it's true, it restores the current behavior.
    Ticket Links:
    http://bugs.adobe.com/jira/browse/BLZ-152
    Modified Paths:
blazeds/branches/3.0.x/modules/core/src/java/flex/messaging/endpoints/AbstractEndpoint.java
    blazeds/branches/3.0.x/modules/core/src/java/flex/messaging/io/SerializationContext.java
    blazeds/branches/3.0.x/modules/core/src/java/flex/messaging/io/amf/Amf3Output.java
    blazeds/branches/3.0.x/modules/core/src/java/flex/messaging/io/amfx/AmfxOutput.java
    blazeds/branches/3.0.x/resources/config/services-config.xml

• Why is the delivery date the same date as 'transptn plan date' & 'loading date' & 'goods issue' & 'GR end date'?

    Hi Experts,
Why is the delivery date the same date as 'transptn plan date' & 'loading date' & 'goods issue' & 'GR end date'?
In the shipping tab I can see Planned Deliv. Time 170 days... what could be the reason?
    Many Thanks:
    Raj Kashyap

Hi Jurgen,
Thanks for the quick reply!!
But I did not find anything like that. What could be the customizing? And we are using GATP from the APO side.
\Raj Kashyap

  • Error 10000 Date format issue

    Hi all,
    Has anyone seen the following error please or has a troubleshooting hint: -
    "[NT AUTHORITY\SYSTEM (15/10/2012 18:35:12) - Service request cancelled due to an error.
    Error Code: 10000
    Error Description: Failed to create lease requisition.
    Fault code: soap:Server
    Fault string: Service Form Field: 'WarningDate2' has Date format issue.
    Fault details: REQ_0024Service Form Field: 'WarningDate2' has Date format issue.
    CIAC = 3.01
    Date and Time format on the CCP, CPO, vmware and SQL servers all Italian (dd/mm/yy)
    This only happens when we add a Lease Time on the request.
    Do they all have to be set to the US format for this to work?
    If this is a regional setting thing, do I have to change the format on all of the servers (CIAC components)?
    Cheers
    md

    This test program might help...
import java.util.*;
import java.text.*;

public class ExpandYear
{
    public static void main(String[] args) throws ParseException
    {
        // parse with a 2-digit-year pattern, re-format with a 4-digit one
        SimpleDateFormat sdf_2dyear = new SimpleDateFormat("MM/dd/yy");
        SimpleDateFormat sdf_4dyear = new SimpleDateFormat("MM/dd/yyyy");

        String test1 = "3/21/00";
        System.out.println("test1: " + test1 + " to : "
                + sdf_4dyear.format(sdf_2dyear.parse(test1)));

        String test2 = "4/9/99";
        System.out.println("test2: " + test2 + " to : "
                + sdf_4dyear.format(sdf_2dyear.parse(test2)));
    }
}
