Row to column conversion in large result set

I need to get the different levels of alerts for each user, with the total alert count and a count for each type of alert.
SELECT COUNT(*) TOTAL,
        u.name,
        d.deptname,
        COUNT(CASE WHEN al.severity = 5 THEN 1 END) INDETERMINATE,
        COUNT(CASE WHEN al.severity = 4 THEN 1 END) WARNING,
        COUNT(CASE WHEN al.severity = 3 THEN 1 END) MINOR,
        COUNT(CASE WHEN al.severity = 2 THEN 1 END) MAJOR,
        COUNT(CASE WHEN al.severity = 1 THEN 1 END) CRITICAL
FROM alerts al, user u, dept d
WHERE al.userid = u.userid
  AND u.deptid = d.deptid
  AND al.time >= sysdate - 4/24
  AND al.time <= sysdate
GROUP BY u.name, d.deptname

This might be inefficient, especially when the amount of data is huge and you could be looking at millions of rows in a given interval. So I was thinking of grouping by severity, which would eliminate the CASE expressions, but the problem with GROUP BY is that I then get the counts as rows instead of columns.
Would anyone have a suggestion on how to do this efficiently, especially against millions of rows? Thanks.

I guess eventually we are probably going to move towards that, but using a straightforward query is the immediate solution, and any help on that would be great. Thanks.
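For reference, on Oracle 11g and later the PIVOT clause can express the same conditional counts without a CASE per severity. Below is a minimal sketch reusing the tables and columns above (severity is selected a second time as sev only so it can serve as both the pivot key and the counted value; the TOTAL column would still have to be derived, e.g. by summing the pivoted columns, and since the same rows are scanned the cost is usually similar to the CASE version):
SELECT *
  FROM (SELECT u.name, d.deptname, al.severity, al.severity AS sev
          FROM alerts al, user u, dept d
         WHERE al.userid = u.userid
           AND u.deptid = d.deptid
           AND al.time >= sysdate - 4/24
           AND al.time <= sysdate)
 PIVOT (COUNT(sev) FOR severity IN (5 AS indeterminate, 4 AS warning,
                                    3 AS minor, 2 AS major, 1 AS critical));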

Similar Messages

  • How to handle large result set of a SQL query

    Hi,
    I have a question about how to handle large result set of a SQL query.
    My query returns more than a million records. However, the Query Template has a "row count" parameter. If I don't specify it, it by default returns only 100 lines of records in the query result. If I specify it, then it's limited to a specific number.
    Is there any way to get around this row count issue? I don't want any restriction on the number of records returned by a query.
    Thanks a lot!

    No human can manage that much data...in a grid, a chart, or a direct-connected link to the brain. 
    What you want to implement (much like other customers with similar requirements) is a drill-in and filtering model that helps the user identify and zoom in on data of relevance, not forcing them to scroll through thousands or millions of records.
    You can also use a time-based paging model so that you only deal with one time "slice" per request (e.g. an hour, a day, etc.) and provide a scrolling window. This is commonly how large datasets are dealt with in applications.
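    For instance, against a SQL data source the slice is usually expressed as a bind-parameterised time window, so each request only touches one slice (the table and column names below are placeholders, not from the original post):
    SELECT reading_time, tag_name, reading_value
      FROM plant_readings
     WHERE reading_time >= :slice_start
       AND reading_time <  :slice_end     -- e.g. slice_start + 1 hour
     ORDER BY reading_time;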
    I would suggest describing your application in more detail, and we can offer design recommendations and ideas.
    - Rick

  • How to handle large result sets?

    Hi All,
    I have a large result set to be displayed to the user using JSPs. The problem is that the result set is too big, so I can't display all the records in a single push. I want to show the results page by page, say 25 per page. But then for every page I have to fetch data from the database, which means many database calls, which is not advisable. Alternatively, I could cache the data in a CachedRowSet to reduce database calls, but then all the data has to be held in memory, which is not a good solution for very large data sets. Can anybody suggest a solution to this problem?

    The best thing for you to do is to implement paging logic in conjunction with a scrollable ResultSet (JDBC 2.0+).
    The logic would go like this assuming 30 rows per page:
    - keep track of which page the user is on (e.g. page 3)
    - issue the full sql
    - scroll thru only the rows in the current page (e.g. rows 90-120)
    - copy the page's rows to value objects
    - close the resultset, statement, and connection
    In the above example, you would scroll to row 90 using rs.absolute(90).
    The efficiency comes from the fact that you're using a scrollable ResultSet. By using this, only the rows that you scroll through are extracted from the database. I performed some simple testing with my data, and the scrollable ResultSet was about 10x faster.
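    A minimal sketch of that logic (the SQL, the Object[] value holder and the surrounding DAO class are placeholders; the scrollable-ResultSet usage is the point):
    import java.sql.*;
    import java.util.*;

    // Returns one page of rows as simple value holders, using a scrollable ResultSet.
    List<Object[]> fetchPage(Connection con, int page, int pageSize) throws SQLException {
        List<Object[]> rows = new ArrayList<Object[]>();
        Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                                             ResultSet.CONCUR_READ_ONLY);
        ResultSet rs = stmt.executeQuery("SELECT id, name, amount FROM orders ORDER BY id");
        try {
            // Jump straight to the first row of the requested page (JDBC rows are 1-based).
            if (rs.absolute((page - 1) * pageSize + 1)) {
                int copied = 0;
                do {   // copy at most one page of rows into value objects
                    rows.add(new Object[] { rs.getObject(1), rs.getObject(2), rs.getObject(3) });
                    copied++;
                } while (copied < pageSize && rs.next());
            }
        } finally {
            rs.close();
            stmt.close();
        }
        return rows;
    }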
    Good luck!

  • Web Services with Large Result Sets

    Hi,
    We have an application wherein a call to a web service could potentially yield a large result set. For the sake of argument, let's say that we cannot limit the result set size, i.e., by criteria narrowing or some other means.
    Have any of you handled paging when using Web Services? If so can you please share your experiences considering Web Services are stateless? Any patterns that have worked? I am aware of the Value List pattern but am looking for previous experiences here.
    Thanks

    Joseph Weinstein wrote:
    Aswin Dinakar wrote:
    I ran the test again and I removed the ResultSet.FETCH_FORWARD and it still gave me the same OutOfMemoryError.
    The problem to me is similar to what Slava has described. I am parsing the result set in memory, storing the results in a hash map and then emptying the post-processed results into a table.
    The hash map turns out to be very big and the JVM throws an OutOfMemoryError.
    I am not sure how I can turn this around.
    I can partition my query so that it returns smaller chunks or "blocks" of data each time (say a page of data or two). Then I can store a page of data in the table. The problem with this approach is that it is not exactly transactional; recovery would be very difficult.
    I could do this in a try/catch block page by page, and then the catch could go ahead and delete the rows that got committed. The question then becomes: what if that transaction fails?
    It sounds like you're committing the cardinal performance sin of DBMS processing: shovelling lots of raw data out of the DBMS, processing it in some small way, and sending it (or some of it) back. You should instead do this processing in a stored procedure or procedures, so the data is manipulated where it is. The DBMS was written from the ground up to be a fast, efficient set-based processor. Using clever SQL will pay off greatly. Build your saw-mills where the trees are.
    Joe
    Yes, we did think of stored procedures. Like I mentioned yesterday, some of the post-processing depends on Unicode and specific character sets. Java seemed ideally suited to this since it handles these Unicode characters very well and has all these libraries we can use. Moving this to the DBMS would make it proprietary (not that we won't do it if it became absolutely essential), but that is one of the reasons why the post-processing happens in Java. Now that you mention it, stored procedures seem the best option.

  • I-bot not emailing when report returns large result set..

    Hi,
    I am trying to set up an i-bot to run daily and email the results to the user. Assuming the report in question is Report_A.
    Report_A returns around 60000 rows of data without any filter condition. When I tried to set up the i-bot for Report_A (no filter conditions on the report), the i-bot publishes results to the dashboard but does not deliver via email. When I introduce a filter in Report_A to reduce the data returned, everything works fine and the email is sent out successfully.
    So
    1) Is there a size limit for i-bots to deliver by email?
    2) Is there a way to increase the limits if any so the report can be emailed even when returning large result sets?
    Please let me know.

    Sorry for late reply
    Below is the log file for one of the i-bots. Now I am getting an error message "***kmsgPortalGoRequestHasBeenCancelled: message text not found ***" and the i-bot alert message shows as "Cancelled".
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:04.551
    [nQSError: 77006] Oracle BI Presentation Server Error: A fatal error occurred while processing the request. The server responded with: ***kmsgPortalGoRequestHasBeenCancelled: message text not found ***
    Error Codes: YLKKAV7S
    Error Codes: AGEGTYVF
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:04.553
    iBotID: /shared/_ibots/common/TM/Claims Report
    ...Trying iBot Get Response Content loop again.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:04.554
    ... Sleeping for 8 seconds.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:12.642
    [nQSError: 77006] Oracle BI Presentation Server Error: A fatal error occurred while processing the request. The server responded with: ***kmsgPortalGoRequestHasBeenCancelled: message text not found ***
    Error Codes: YLKKAV7S
    Error Codes: AGEGTYVF
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:12.644
    iBotID: /shared/_ibots/common/TM/Claims Report
    ...Trying iBot Get Response Content loop again.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:12.644
    ... Sleeping for 6 seconds.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:18.730
    [nQSError: 77006] Oracle BI Presentation Server Error: A fatal error occurred while processing the request. The server responded with: ***kmsgPortalGoRequestHasBeenCancelled: message text not found ***
    Error Codes: YLKKAV7S
    Error Codes: AGEGTYVF
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:18.734
    iBotID: /shared/_ibots/common/TM/Claims Report
    Exceeded number of request retries.

  • How to Create a new column from two different result sets

    How can I create a new column from two different result sets, where each result set uses a different date dimension?

    I found a solution for this: apply filters in the column formula itself, based on the requirement.

  • Displaying large result sets in Table View – request for patterns

    When providing a table of results from a large data set from SAP, care needs to be taken in order to not tax the R/3 database or the R/3 and WAS application servers.  Additionally, in terms of performance, results need to be displayed quickly in order to provide sub-second response times to users.
    This post is my thoughts on how to do this based on my findings that the Table UI element cannot send an event to retrieve more data when paging down through data in the table (hopefully a future feature of the Table UI Element).
    Approach:
    For data retrieval, we need to have an RFC with search parameters that retrieves a maximum number of records (say 200) and a flag whether 200 results were returned. 
    In terms of display, we use a table UI Element, and bind the result set to the table.
    For sorting, when they sort by a column, if we have less than the maximum search results, we sort the result set we already have (no need to go to SAP), but otherwise the RFC also needs to have sort information as parameters so that sorting can take place during the database retrieval.  We sort it during the SQL select so that we stop as soon as we hit 200 records.
    For filtering, again, if less than 200 results, we just filter the results internally, otherwise, we need to go to SAP, and the RFC needs to have this parameterized also.
    If the requirement is that the user must look at more than 200 results, we need to have a button on the screen to fetch the next 200 results.  This implies that the RFC will also need to have a start point to return results from.  Similarly, a previous 200 results button would need to be enabled once they move beyond the initial result set.
    Limitations of this are:
    1.     We need to use custom RFC function as BAPI’s don’t generally provide this type of sorting and limiting of data.
    2.     Functions need to directly access tables in order to do sorting at the database level (to reduce memory consumption).
    3.     It’s not a great interface to add buttons to “Get next/previous set of 200”.
    4.     Obviously, based on where you are getting the data from, it may be better to load the data completely into an internal table in SAP, and do sorting and filtering on this, rather than use the database to do it.
    Does anyone have a proven pattern for doing this or any improvements to the above design?  I’m sure SAP-CRM must have to do this, or did they just go with a BSP view when searching for customers?
    Note – I noticed there is a pattern for search results in some documentation, but it does not exist in the sneak preview edition of Developer Studio. Has anyone had any exposure to this?
    Update - I'm currently investigating whether we can create a new value node and use a supply function to fill the data.  It may be that when we bind this to the table UI element, that it will call this incrementally as it requires more data and hence could be a better solution.

    Hi Matt,
    I'm afraid the supplyFunction will not help you get out of this, because it is only called if the node is invalid or gets invalidated again. The number of elements a node contains defines the number of elements the table uses to determine the overall number of table rows. Something quite similar to what you want already exists in the WD runtime for internal usage. As you've surely noticed, only "visibleRowCount" elements are initially transferred to the client. If you scroll down one or multiple lines, the following rows are internally transferred on demand. But this doesn't really help you, since:
    1. You don't get this event at all and
    2. Even if you would get the event, since the number of node elements determines the table's overall rows number, the event would never request to load elements with an index greater than number of node elements - 1.
    You can mimic the desired behaviour by hiding the table footer and creating your own buttons for pagination and scrolling.
    Assume you have 10 displayed rows and 200 overall rows. What you need in order to implement the desired behaviour is:
    1. A context attribute "maxNumberOfExpectedRows" type int, which you would set to 200.
    2. A context attribute "visibleRowCount" type int, which you would set to 10 and bind to table's visibleRowCount property.
    3. A context attribute "firstVisibleRow" type int, which you would set to 0 and bind to table's firstVisibleRow property.
    4. The actions PageUp, PageDown, RowUp, RowDown, FirstRow and LastRow, which are used for scrolling and the corresponding buttons.
    The action handlers do the following:
    PageUp: firstVisibleRow -= visibleRowCount (must be >=0 of course)
    PageDown: firstVisibleRow += visibleRowCount (first + visible must be < maxNumberOfExpectedRows)
    RowDown/Up: firstVisibleRow++/-- with the same restrictions as in page "mode"
    FirstRow/LastRow is easy, isn't it?
    Since you know which sections of elements have already been "loaded" into the dataSource node, you can fill the necessary sections on demand when the corresponding action is triggered.
    For example, if you initially display elements 0..9 and go to the last row, you load from maxNumberOfExpectedRows (200) - visibleRowCount (10), so you would request entries 190 to 199 from the backend.
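    As a plain-Java sketch of that arithmetic (in Web Dynpro these three values would live in the context attributes from points 1-3 above; the method and variable names here are only illustrative):
    int maxNumberOfExpectedRows = 200;   // point 1
    int visibleRowCount = 10;            // point 2, bound to the table's visibleRowCount
    int firstVisibleRow = 0;             // point 3, bound to the table's firstVisibleRow

    void onActionPageDown() {
        firstVisibleRow = Math.min(firstVisibleRow + visibleRowCount,
                                   maxNumberOfExpectedRows - visibleRowCount);
    }
    void onActionPageUp() {
        firstVisibleRow = Math.max(firstVisibleRow - visibleRowCount, 0);
    }
    void onActionLastRow() {
        // e.g. 200 - 10 = 190, so entries 190..199 must be present in the dataSource node
        firstVisibleRow = maxNumberOfExpectedRows - visibleRowCount;
    }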
    A drawback is that the BAPIs/RFCs still have to be capable of processing such "section selecting".
    Best regards,
    Stefan
    PS: And this is meant as a workaround and does not really replace your pattern request.

  • Adding hard coded column values to a result set.

    DB version:10gR2
    Example from sample schema SCOTT
    select e.ename, e.empno, d.loc
    from
    emp e inner join dept d on (e.deptno=d.deptno)
    and e.ename like 'M%';
    ENAME           EMPNO LOC
    MARTIN           7654 CHICAGO
    MILLER           7934 DALLAS
    For every ename matched (here MARTIN and MILLER), the following two rows (hard-coded column values 001-985 and 003-745) should get created.
    PAY_TYPES   PAY_CODES
       001           985
       003           745
    Everything else must remain the same.
    The result set should look like the following. Here two rows got created for each of MARTIN and MILLER. Everything else remains the same.
      ENAME            EMPNO  PAY_TYPES   PAY_CODES   LOC
      MARTIN            7654  001         985         CHICAGO
      MARTIN            7654  003         745         CHICAGO
      MILLER            7934  001         985         DALLAS
      MILLER            7934  003         745         DALLAS
      How is this possible?

    Hello Scott,
    Try this,
    SELECT ENAME, EMPNO, PAY_TYPES, PAY_CODES, LOC
      FROM (SELECT K.*, ROW_NUMBER() OVER(PARTITION BY ENAME ORDER BY ENAME) ID
              FROM (SELECT *
                      FROM (SELECT ENAME, EMPNO, LOC
                              FROM EMP E, DEPT D
                             WHERE E.DEPTNO = D.DEPTNO
                               AND ENAME LIKE 'M%')
                     WHERE CONNECT_BY_ISLEAF = 1
                    CONNECT BY LEVEL <= 2) K)
    MODEL
       DIMENSION BY (ENAME, ID)
       MEASURES ( EMPNO, LOC, '000' AS PAY_TYPES, '000' AS PAY_CODES )
       RULES    ( PAY_TYPES[FOR (ENAME) IN (SELECT ENAME FROM EMP WHERE ENAME LIKE 'M%' ),1] = '001' ,
                  PAY_CODES[FOR (ENAME) IN (SELECT ENAME FROM EMP WHERE ENAME LIKE 'M%' ),1] = '985' ,
                  PAY_TYPES[FOR (ENAME) IN (SELECT ENAME FROM EMP WHERE ENAME LIKE 'M%' ),2] = '003' ,
                   PAY_CODES[FOR (ENAME) IN (SELECT ENAME FROM EMP WHERE ENAME LIKE 'M%' ),2] = '745' )
    Regards,
    Christian Balz
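    If the MODEL clause feels heavy for this, a plainer alternative (not from the original reply, just a sketch against the same SCOTT tables) is to cross-join the base rows with a two-row inline view holding the hard-coded pairs:
    SELECT e.ename, e.empno, x.pay_types, x.pay_codes, d.loc
      FROM emp e
           INNER JOIN dept d ON (e.deptno = d.deptno)
           CROSS JOIN (SELECT '001' AS pay_types, '985' AS pay_codes FROM dual
                       UNION ALL
                       SELECT '003', '745' FROM dual) x
     WHERE e.ename LIKE 'M%'
     ORDER BY e.ename, x.pay_types;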

  • How to save memory when processing large result set

    I need to dump multiple millions of rows of data into Excel files.
    I query the tables and open Excel to write into.
    The problem is that even though I chopped the result into hundreds of files and close Excel completely after 65536 rows, the memory usage keeps going up as the result set is looped, and at one point it hits the heap size.
    How can I release the memory that has been used by the result set?
    Thank you

    mycoffee wrote:
    936517 wrote:
    I think resultSet.close() will do what you want (you shouldn't have to set resultSet = null when you're done with it).
    You can't force the garbage collector to run and reclaim memory. It uses an intelligent algorithm to do so.
    I question why your project is sending millions of records to Excel. Who is going to read a 10,000-page Excel document?
    Instead, I suggest you provide an (intelligent) filter mechanism to allow users to get a subset of data to send to an Excel document rather than all of the data. For example: instead of sending him the entire telephone book, have him search for results based on lastName and/or firstName. That will cut down on the number of records returned. Next, does the user really need all the columns of data in each record? That will cut it down further.
    You can search Google for 'java heap size' to increase the memory for your program. However, your 65536 limit is probably due to Excel's limitation and not your Java program.
    Sorry I could not explain the need. No, that is not the issue here; I already use the maximum heap size I can. But I can handle it now: I open the files and write directly to the file instead of holding the data and dumping it all at once. I save all the overhead and it works fine, even though the result set still consumes almost all the memory.
    Is it possible you are using MySQL? The MySQL JDBC driver has a terrible default setup in that it keeps all results for the result set in memory! I think some of the latest drivers finally allow you to stream results sensibly, but you have to use the correct options.
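    For reference, streaming with MySQL Connector/J is enabled by a forward-only, read-only statement whose fetch size is set to Integer.MIN_VALUE. A minimal sketch (conn, big_table and the row-writing helper are placeholders):
    Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                          ResultSet.CONCUR_READ_ONLY);
    stmt.setFetchSize(Integer.MIN_VALUE);   // tells the MySQL driver to stream rows one at a time
    ResultSet rs = stmt.executeQuery("SELECT * FROM big_table");
    try {
        while (rs.next()) {
            writeRowToFile(rs);             // hypothetical helper that appends one row to the output file
        }
    } finally {
        rs.close();
        stmt.close();
    }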

  • Hide or Remove OCI Check Box column from Standard MDM Result Set iview

    Hi,
    Can we hide or remove the OCI Check box column from the Standard MDM Result set ivew?
    Though I have disabled OCI, I don't want to display the check boxes along with the product list.
    Can you please help?
    Thanks and best regards,
    Arun prabhu S

    Hi Arun,
    If you are talking about the very first check box, then that is not related to OCI. The check box is there to enable multiple item selection, which can be used for adding the records to a workflow, for comparison, etc.
    Regards,
    Jitesh Talreja

  • Row to Column conversion

    Hi,
    I have the following table:
    SELECT * FROM col_to_row;
    The output is:
    A      B      C
    X      Y      Z
    1      2      3
    I want to write a query for the output below; I need to convert rows into columns for n rows. As an example I have taken only 3 rows:
    A X 1
    B Y 2
    C Z 3

    Three things to notice in this solution:
    1) I am ordering by the original column name. You may want to order by something else.
    2) When pivoting, you must indicate the name, number and datatype of the columns, so if there is a fourth line of input you have to add a column to the query.
    3) This works on Oracle 11gR1 and above. Next time, be sure to indicate your Oracle version from the start.
    create table COL_TO_ROW(C1, C2, C3) as select
    'A', 'B', 'C' from DUAL union all select
    'X', 'Y', 'Z' from DUAL union all select
    '1', '2', '3' from dual;
    select * from (select rownum rn, a.* from COL_TO_ROW a)
    UNPIVOT(VAL for COL in(C1, C2, C3))
    PIVOT(max(VAL) for RN in (1,2,3))
    order by col;
    COL 1 2 3
    C1  A X 1
    C2  B Y 2
    C3  C Z 3

  • Row to column conversion during excel download

    Dear Experts,
    I have an internal table with one column.
    It has some entries, say ten.
    I want these ten rows of entries to be displayed in the Excel sheet as a single row, i.e. each row entry of the internal table should go into its own cell.
    Is there any possibility with the GUI_DOWNLOAD FM to do this?
    If not, then how could I do that?
    Please let me know; it's urgent.
    Regards,
    Shashi

    Hi,
    I am not aware of whether it can be done through FM 'GUI_DOWNLOAD', but I have one suggestion.
    You can first transpose your internal table and then download it.
    Sample code for obtaining the transpose is as follows:
    *& Report Z_TRANPOSEALV *
    *& Author : Swarna.S.
    *& AS: This simple ALV report display is in a transposed way
    *& Published at ****************
    REPORT Z_TRANSPOSEALV .
    *Type pools declaration for ALV
    TYPE-POOLS: slis.
    *Declarations for ALV, dynamic table and col no for transpose
    DATA: l_col TYPE sy-tabix,
    l_structure TYPE REF TO data,
    l_dyntable TYPE REF TO data,
    wa_lvc_cat TYPE lvc_s_fcat,
    lt_lvc_cat TYPE lvc_t_fcat,
    lt_fieldcatalogue TYPE slis_t_fieldcat_alv,
    wa_fieldcat TYPE slis_fieldcat_alv,
    lt_fieldcat TYPE slis_t_fieldcat_alv,
    lt_layout TYPE slis_layout_alv.
    *Field symbols declarations
    FIELD-SYMBOLS :
    <header> TYPE ANY,
    <dynheader> TYPE ANY,
    <dyndata> TYPE ANY,
    <ls_table> TYPE ANY,
    <dynamictable> TYPE STANDARD TABLE,
    <it_table> TYPE STANDARD TABLE.
    *Input the name of the table
    PARAMETERS p_table TYPE dd02l-tabname OBLIGATORY.
    *Initialization event
    INITIALIZATION.
    *Start of selection event
    START-OF-SELECTION.
    *Create internal table of dynamic type
    CREATE DATA l_dyntable TYPE STANDARD TABLE OF (p_table)
    WITH NON-UNIQUE DEFAULT KEY.
    ASSIGN l_dyntable->* TO <it_table>.
    *select statement to select data from the table as input into
    *our dynamic internal table.
    *Here i have restricted only till 5 rows.
    *You can set a variable and give no of rows to be fetched
    *The variable can be set in your select statement
    SELECT * INTO CORRESPONDING FIELDS OF TABLE <it_table>
    FROM (p_table) up to 5 rows.
    *Fieldcatalogue definitions
    wa_lvc_cat-fieldname = 'COLUMNTEXT'.
    wa_lvc_cat-ref_table = 'LVC_S_DETA'.
    APPEND wa_lvc_cat TO lt_lvc_cat.
    wa_fieldcat-fieldname = 'COLUMNTEXT'.
    wa_fieldcat-ref_tabname = 'LVC_S_DETA'.
    wa_fieldcat-key = 'X'.
    APPEND wa_fieldcat TO lt_fieldcat.
    DESCRIBE TABLE <it_table>.
    DO sy-tfill TIMES.
    *For each line, a column 'VALUEx' is created in the fieldcatalog
    *Build Fieldcatalog
    WRITE sy-index TO wa_lvc_cat-fieldname LEFT-JUSTIFIED.
    CONCATENATE 'VALUE' wa_lvc_cat-fieldname
    INTO wa_lvc_cat-fieldname.
    wa_lvc_cat-ref_field = 'VALUE'.
    wa_lvc_cat-ref_table = 'LVC_S_DETA'.
    APPEND wa_lvc_cat TO lt_lvc_cat.
    *Build Fieldcatalog
    CLEAR wa_fieldcat.
    wa_fieldcat-fieldname = wa_lvc_cat-fieldname.
    wa_fieldcat-ref_fieldname = 'VALUE'.
    wa_fieldcat-ref_tabname = 'LVC_S_DETA'.
    APPEND wa_fieldcat TO lt_fieldcat.
    ENDDO.
    *Create dynamic internal table
    CALL METHOD cl_alv_table_create=>create_dynamic_table
    EXPORTING
    it_fieldcatalog = lt_lvc_cat
    IMPORTING
    ep_table = l_dyntable. ASSIGN l_dyntable->* TO <dynamictable>.
    *Create structure as structure of the internal table
    CREATE DATA l_structure LIKE LINE OF <dynamictable>.
    ASSIGN l_structure->* TO <header>.
    *Create structure = structure of the internal table
    CREATE DATA l_structure LIKE LINE OF <it_table>.
    ASSIGN l_structure->* TO <ls_table>.
    *Create field catalog from our table structure
    CALL FUNCTION 'REUSE_ALV_FIELDCATALOG_MERGE'
    EXPORTING
    i_structure_name = p_table
    CHANGING
    ct_fieldcat = lt_fieldcatalogue
    EXCEPTIONS
    inconsistent_interface = 1
    program_error = 2
    OTHERS = 3.
    IF sy-subrc <> 0.
    MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
    WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF. DESCRIBE TABLE lt_fieldcatalogue.
    *Fill the internal table to display <dynamictable>
    DO sy-tfill TIMES.
    IF sy-index = 1.
    READ TABLE lt_fieldcatalogue INTO wa_fieldcat INDEX 1.
    ENDIF.
    *For each field of it_table
    ASSIGN COMPONENT 1 OF STRUCTURE <header> TO <dynheader>.
    IF sy-subrc NE 0. EXIT .ENDIF.
    READ TABLE lt_fieldcatalogue INTO wa_fieldcat INDEX sy-index.
    *Fill 1st column
    <dynheader> = wa_fieldcat-seltext_m.
    IF <dynheader> IS INITIAL.
    <dynheader> = wa_fieldcat-fieldname.
    ENDIF.
    *Filling the other columns
    LOOP AT <it_table> INTO <ls_table>.
    l_col = sy-tabix + 1.
    ASSIGN COMPONENT sy-index OF STRUCTURE <ls_table> TO <dyndata>.
    IF sy-subrc NE 0. EXIT .ENDIF.
    ASSIGN COMPONENT l_col OF STRUCTURE <header> TO
    <dynheader>.
    IF sy-subrc NE 0. EXIT .ENDIF.
    WRITE <dyndata> TO <dynheader> LEFT-JUSTIFIED.
    ENDLOOP.
    APPEND <header> TO <dynamictable>.
    ENDDO.
    *Layout for ALV output
    lt_layout-zebra = 'X'.
    lt_layout-no_colhead = 'X'.
    lt_layout-colwidth_optimize ='X'.
    lt_layout-window_titlebar = 'ALV GRID TRANSPOSED'.
    *ALV Grid output for display
    CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
    EXPORTING
    is_layout = lt_layout
    it_fieldcat = lt_fieldcat
    TABLES
    t_outtab = <dynamictable>.
    Regards,
    Karuna.

  • Include Column type in XML result set

    Hi
    I'm trying to get the result of my query as XML, but how can I specify column data types in XML attributes?
    This is my query:
    SELECT * FROM Acc.GL FOR XML AUTO, ELEMENTS, TYPE, BINARY BASE64;
    Unfortunately, in the XML result the column types are not specified:
    <row>
    <Id>23</Id>
    <GLCode>1</GLCode>
    <BranchRef>1</BranchRef>
    <Title>Foo Title</Title>
    <Balance>1</Balance>
    </row>
    <row>
    <Id>24</Id>
    <GLCode>2</GLCode>
    <BranchRef>1</BranchRef>
    <Title>Bar Title</Title>
    <Balance>1</Balance>
    </row>
    What I expect is something like this:
    <row>
    <Id type="int">23</Id>
    <GLCode type="int">1</GLCode>
    <BranchRef type="int">1</BranchRef>
    <Title type="nvarchar">Foo Title</Title>
    <Balance type="int">1</Balance>
    </row>
    <row>
    <Id type="int">24</Id>
    <GLCode type="int">2</GLCode>
    <BranchRef type="int">1</BranchRef>
    <Title type="nvarchar">Bar Title</Title>
    <Balance type="int">1</Balance>
    </row>
    What should I change in my query?
    Jalalx

    how I can specify columns data type in xml attributes?
    Hello,
    The XML format doesn't allow specifying the data types this way; you have to create an XSD ("schema definition") instead, and that is where the data types are defined.
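    If an inline schema is enough, SQL Server can also generate that XSD together with the data via the XMLSCHEMA directive (a sketch against the same query; the types then appear in the inline schema rather than as attributes on each element):
    SELECT * FROM Acc.GL FOR XML AUTO, ELEMENTS, XMLSCHEMA, BINARY BASE64;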
    Olaf Helper
    [ Blog] [ Xing] [ MVP]

  • Should ROWNUM return what you'd naturally expect when used with an ORDER BY on a very large result set?

    For security reasons, I have removed some of the column names and the table name from this query, but there was no WHERE clause, and this includes the complete ORDER BY clause.
    select p.cc,
        p.yy,
        p.mm,
        p.dd,
        p.ref_no,
        p.seconds,
        p.deci_mal,
        cast(rownum as number(14)) as arpds_id
      from p
      order by p.cc,
        p.yy,
        p.mm,
        p.dd,
        p.ref_no,
        p.seconds,
        p.deci_mal
    We were adding a Primary Key (ARPDS_ID) to the table containing legacy payments, which had thousands of rows where every single column was identical, and had over 15 million rows total.
    I expected the ROWNUM to represent the row number *after* the rows were sorted by the ORDER BY clause.  However, we identified at least one account where this was not the case.
    Here are the relevant fields for that account, sorted by my ARPDS_ID.  As you can see, the second row clearly has a higher REF_NO, but a lower ARPDS_ID than the third row.
    CC YY MM DD REF_NO SECONDS DECI_MAL        ARPDS_ID
    20 13 05 25 1258   57      00              15584205
    20 13 06 04 1718   56      00              15669414
    20 13 06 04 1711   48      00              15682078
    Can anyone help me understand why this happened?  I didn't believe it when it was first brought to my attention, and had the database restored that contained the actual tmp (not global temporary) table we created with the query above so I could prove to myself that the query above *actually* created these results.  It was not until I saw that with my own eyes that I believed that this happened.
    Thanks
    Message was edited by: mattknowles
    PS - What happened to the Preview Post functionality?  Am I just not seeing it?

    Hi,
    Whatever you'd naturally expect, ROWNUM probably doesn't do it.
    ROWNUM is assigned as rows are fetched from the table (or subquery), before the ORDER BY clause.
    If you want ROWNUM to reflect any particular order, then do an ORDER BY in a sub-query, and use ROWNUM in the super-query,
    WITH  ordered_data  AS
    (
        SELECT    cc, yy, mm, dd, ref_no, seconds, deci_mal
        FROM      p
        ORDER BY  cc, yy, mm, dd, ref_no, seconds, deci_mal
    )
    SELECT  o.*
    ,       CAST (ROWNUM AS NUMBER (14))   AS arpds_id
    FROM    ordered_data  o
    If the problem is more complicated than what you posted, you may want to use the analytic ROW_NUMBER function instead of ROWNUM.
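    For example, the analytic version might look like this (a sketch using the same columns; note that within a group of completely identical rows the numbering is still arbitrary unless you add a tie-breaker such as ROWID to the OVER clause):
    SELECT  p.cc, p.yy, p.mm, p.dd, p.ref_no, p.seconds, p.deci_mal
    ,       CAST (ROW_NUMBER () OVER (ORDER BY p.cc, p.yy, p.mm, p.dd,
                                      p.ref_no, p.seconds, p.deci_mal)
                  AS NUMBER (14))  AS arpds_id
    FROM    p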

  • Creating XML from large result sets

    We are trying to create XML docs from queries that return somewhere in the neighborhood of 13000+ rows, each with as many as 30 columns. It seems someplace deep within the xmlgen utility or the xsu methods, we throw an OutOfMemoryError. This happens whether we run as a java stored procedure, a pl/sql stored procedure, pl/sql script or java servlet. It looks like the top of the stack shows the problem occurs trying to extendCapacity in a StringBuffer. We're running 8.1.6 with java 1.2.2 and xsu 1.2. Has anyone had to query large amounts of data like this and if so, how did you go about it? Is there a slick way to break things up and then string everything back into 1 big xml doc? Any help would be greatly appreciated. Thanks.

    I guess you can use pagination,
    either using XSU or Top-N queries, and then you could merge all the pages into a single XML or DOM tree prior to persisting it to disk.
    With XSU:
    Go to the following link for a clear explanation:
    <http://technet.oracle.com/docs/products/oracle8i/doc_library/817_doc/appdev.817/a86030/adx04x11.htm#1008351>
    With Top N queries:
    Go to the following thread of this forum to see a similar question I raised and it was answered in detail:
    <http://technet.oracle.com:89/ubb/Forum11/HTML/004711.html>
    - Jose Angel -
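    For reference, the classic Top-N pagination pattern looks like this (table, column and bind-variable names are placeholders; each page would then be run through XSU and the resulting fragments merged into the final document):
    SELECT *
      FROM (SELECT a.*, ROWNUM rnum
              FROM (SELECT * FROM big_table ORDER BY id) a
             WHERE ROWNUM <= :last_row)
     WHERE rnum > :first_row;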
