CLR Procedure and returning large number of rows

I have a CLR stored procedure coded in C# that retrieves data from a web service and returns that data using SendResultsStart/SendResultsRow/SendResultsEnd. This all works fine, except when the web service returns tens of thousands of records.
The code itself takes about 3 minutes on average to do all its work with around 50000-60000 records, but the procedure does not return in SSMS for about another 10-15 minutes, during which time the CPU and memory usage go up significantly.
To rule out my CLR code as the culprit, I created a very simple CLR procedure that just loops to return 100000 records, with an int field holding the current count and an nvarchar(256) field holding "ABC" followed by the count. Here is the code:
using System;
using System.Data;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void ABC()
    {
        System.Diagnostics.Stopwatch ExecuteTimer = System.Diagnostics.Stopwatch.StartNew();

        SqlMetaData[] ResultMetaData = new SqlMetaData[2];
        ResultMetaData[0] = new SqlMetaData("count", SqlDbType.Int);
        ResultMetaData[1] = new SqlMetaData("text", SqlDbType.NVarChar, 256);

        SqlContext.Pipe.SendResultsStart(new SqlDataRecord(ResultMetaData));
        for (int x = 0; x < 100000; x++)
        {
            // A new SqlDataRecord is allocated for every row here.
            SqlDataRecord ResultItem = new SqlDataRecord(ResultMetaData);
            ResultItem.SetValue(0, x);
            ResultItem.SetValue(1, "ABC" + x.ToString());
            SqlContext.Pipe.SendResultsRow(ResultItem);
        }
        SqlContext.Pipe.SendResultsEnd();

        TimeSpan ExecTime = ExecuteTimer.Elapsed;
        SqlContext.Pipe.Send("Elapsed Time: " + ExecTime.Minutes.ToString() + ":" + ExecTime.Seconds.ToString() + "." + ExecTime.Milliseconds.ToString());
    }
}
I then executed procedure ABC in SSMS, and it took 21 minutes to return.  All of the data rows were visible after just a couple of seconds, but the query continued to run as the CPU and memory went up again.
Is this really how long it should take to return 100000 rows, or am I missing something?  Is there a better approach than using SendResultsStart/SendResultsRow/SendResultsEnd?
I've googled this to death and haven't found anything that helped or even explained why this is.
I would greatly appreciate any suggestions or alternate methods to achieve this faster.
Thanks!
Alex

When you create a new object, space for it is allocated on the garbage-collected heap and its address is stored in a reference. Some time later, there will no longer be any references that hold the address of the allocated object. It doesn't matter whether the reference count went to 0 because the reference was set to null or because the reference was on the stack and is no longer in lexical scope; the end result is the same: the garbage collector will, at some point, have to perform the book-keeping necessary to identify that the space allocated for that now-unreferenced object can be re-used. When, on the other hand, you create only a single SqlDataRecord object and hold onto the reference, all of the book-keeping associated with creating 100,000 objects is eliminated. This is why the documentation for the SqlDataRecord class advises that:

When writing common language runtime (CLR) applications, you should re-use existing SqlDataRecord objects instead of creating new ones every time. Creating many new SqlDataRecord objects could severely deplete memory and adversely affect performance.
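
Applied to the ABC procedure above, the reuse pattern looks like this (a minimal sketch that drops into the same class as ABC; the name ABC_Reuse is just illustrative, the metadata and loop are taken from the question):

[Microsoft.SqlServer.Server.SqlProcedure]
public static void ABC_Reuse()
{
    SqlMetaData[] ResultMetaData = new SqlMetaData[2];
    ResultMetaData[0] = new SqlMetaData("count", SqlDbType.Int);
    ResultMetaData[1] = new SqlMetaData("text", SqlDbType.NVarChar, 256);

    // Allocate ONE record up front and reuse it for every row.
    SqlDataRecord ResultItem = new SqlDataRecord(ResultMetaData);
    SqlContext.Pipe.SendResultsStart(ResultItem);
    for (int x = 0; x < 100000; x++)
    {
        // Overwrite the fields in place instead of allocating a new SqlDataRecord.
        ResultItem.SetInt32(0, x);
        ResultItem.SetString(1, "ABC" + x.ToString());
        SqlContext.Pipe.SendResultsRow(ResultItem);
    }
    SqlContext.Pipe.SendResultsEnd();
}

The record passed to SendResultsStart only defines the shape of the result set, and each SendResultsRow call copies the current field values out to the pipe, so overwriting the same record between calls is safe.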

Similar Messages

  • Loop through a csv file and return the number of rows in it?

    What would be the simplest way to loop through a CSV file and return the number of rows in it?
    <cffile action="read" file="#filename#" variable="#csvstr#">
    <LOOP THROUGH AND COUNT ROWS>

    Use ListLen() with chr(13) as your delimiter, e.g. ListLen(csvstr, chr(13)).
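
    Outside ColdFusion, the same idea is only a few lines in most languages. For instance, a minimal C# sketch (the file path is hypothetical, and counting physical lines over-counts if any quoted field contains an embedded line break):

    using System;
    using System.IO;
    using System.Linq;

    class CsvRowCount
    {
        static void Main()
        {
            // Hypothetical path; substitute your own CSV file.
            string fileName = @"C:\Workspace\MyFile.csv";

            // File.ReadLines streams the file, so this also works for large CSVs.
            int rows = File.ReadLines(fileName).Count();
            Console.WriteLine("Rows: " + rows);
        }
    }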

  • Oracle Error 01034 After attempting to delete a large number of rows

    I sent the command to delete a large number of rows from a table in an oracle database (Oracle 10G / Solaris). The database files are located at /dbo partition. Before the command the disk space utilization was at 84% and now it is at 100%.
    SQL Command I ran:
    delete from oss_cell_main where time < '30 jul 2009'
    If I try to connect to the database now I get the following error:
    ORA-01034: ORACLE not available
    df -h returns the following:
    Filesystem size used avail capacity Mounted on
    /dev/md/dsk/d6 4.9G 5.0M 4.9G 1% /db_arch
    /dev/md/dsk/d7 20G 11G 8.1G 59% /db_dump
    /dev/md/dsk/d8 42G 42G 0K 100% /dbo
    I tried to get the space back by deleting all the data in the table oss_cell_main :
    drop table oss_cell_main purge
    But no change in df output.
    I have tried solving it myself but could not find sufficiently directed information. Even pointing me to the right documentation will be highly appreciated. I have already looked at the following:
    du -h :
    8K ./lost+found
    1008M ./system/69333
    1008M ./system
    10G ./rollback/69333
    10G ./rollback
    27G ./data/69333
    27G ./data
    1K ./inx/69333
    2K ./inx
    3.8G ./tmp/69333
    3.8G ./tmp
    150M ./redo/69333
    150M ./redo
    42G .
    I think its the rollback folder that has increased in size immensely.
    SQL> show parameter undo
    NAME TYPE VALUE
    undo_management string AUTO
    undo_retention integer 10800
    undo_tablespace string UNDOTBS1
    select * from dba_tablespaces where tablespace_name = 'UNDOTBS1'
    TABLESPACE_NAME BLOCK_SIZE INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS
    MAX_EXTENTS PCT_INCREASE MIN_EXTLEN STATUS CONTENTS LOGGING FOR EXTENT_MAN
    ALLOCATIO PLU SEGMEN DEF_TAB_ RETENTION BIG
    UNDOTBS1 8192 65536 1
    2147483645 65536 ONLINE UNDO LOGGING NO LOCAL
    SYSTEM NO MANUAL DISABLED NOGUARANTEE NO
    Note: I can reconnect to the database for short periods by restarting it. After a restart it does connect, but only for a few minutes, not long enough to run exp.

    Check the alert log for errors.
    Note that a DELETE generates undo for every row it removes, which is why your undo tablespace grew during the delete; neither DELETE nor DROP shrinks the datafiles back, so the disk stays full until you resize them.
    Select file_name, bytes from dba_data_files order by bytes;
    Try to shrink some datafiles to get the space back.

  • JDev: af:table with a large number of rows

    Hi
    We are developing with JDeveloper 11.1.2.1. We have a VO that returns over 2,000,000 rows, which we display in an af:table with access mode 'scrollable' (the default) and 'in Batches of' 101. The user can select one row and do CRUD operations on the VO with popups. The application works fine, but I have read that scrolling through a very large number of rows is not a good idea because it can cause an OutOfMemory exception if the user uses the scroll bar many times. I have tried access mode 'Range Paging', but the application behaves in strange ways. Sometimes when I select a row to edit, if the selected row is number 430, the popup shows number 512, and when I want to insert a new row it throws this exception:
    oracle.jbo.InvalidOperException: JBO-25053: Cannot navigate with unposted rows in RangePaging RowSet.
         at oracle.jbo.server.QueryCollection.get(QueryCollection.java:2132)
         at oracle.jbo.server.QueryCollection.fetchRangeAt(QueryCollection.java:5430)
         at oracle.jbo.server.ViewRowSetIteratorImpl.scrollRange(ViewRowSetIteratorImpl.java:1329)
         at oracle.jbo.server.ViewRowSetIteratorImpl.setRangeStartWithRefresh(ViewRowSetIteratorImpl.java:2730)
         at oracle.jbo.server.ViewRowSetIteratorImpl.setRangeStart(ViewRowSetIteratorImpl.java:2715)
         at oracle.jbo.server.ViewRowSetImpl.setRangeStart(ViewRowSetImpl.java:3015)
         at oracle.jbo.server.ViewObjectImpl.setRangeStart(ViewObjectImpl.java:10678)
         at oracle.adf.model.binding.DCIteratorBinding.setRangeStart(DCIteratorBinding.java:3552)
         at oracle.adfinternal.view.faces.model.binding.RowDataManager._bringInToRange(RowDataManager.java:101)
         at oracle.adfinternal.view.faces.model.binding.RowDataManager.setRowIndex(RowDataManager.java:55)
         at oracle.adfinternal.view.faces.model.binding.FacesCtrlHierBinding$FacesModel.setRowIndex(FacesCtrlHierBinding.java:800)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    <LoopDiagnostic> <dump> [8261] variableIterator variables passivated >>> TrackQueryPerformed def
    <LifecycleImpl> <_handleException> ADF_FACES-60098: The Faces lifecycle received unhandled exceptions in phase RENDER_RESPONSE 6
    What is the best way to display this amount of data in an af:table and do CRUD operations?
    Thanks

    Hi,
    honestly, the best way is to provide users with an option to filter the result set displayed in the table, to reduce the result set size. No one will scroll through 2,000,000 rows using the table scrollbar.
    So one hint for optimization would be a query form (e.g. af:query).
    To answer your question "scrollable" vs. "range paging", see
    http://docs.oracle.com/cd/E21043_01/web.1111/b31974/bcadvvo.htm#ADFFD1179
    Pay attention to what is written in the context of "The range paging access mode is typically used for paging through read-only row sets, and often is used with read-only view objects."
    Frank

  • Return specific number of rows depending on the data in a column in table

    Hi,
    I have a table named orders which has columns orderid and noofbookstoorder, in addition to other columns.
    I want to query the orders table and, depending on the value of 'noofbookstoorder', return that number of rows.
    Eg
    Orderid noofbookstoorder
    1 1
    2 3
    3 2
    when I query the above data saying
    select * from orders where orderid=2;
    since it has a noofbookstoorder value of 3, the query should return 3 rows, and when I query
    select * from orders where orderid=3;
    it should return 2 rows and
    select * from orders where orderid=1;
    should return 1 row.
    Is it possible to achieve this? If yes, how do I write my query?
    Thanks in advance.

    with t as (
               select 1 Orderid,1 noofbookstoorder from dual union all
               select 2,3 from dual union all
               select 3,2 from dual
              )
    select  t.*
      from  t,
            table(cast(multiset(select 1 from dual connect by level <= noofbookstoorder) as sys.OdciNumberList))
      where Orderid = <order-id>
    /
    For example:
    SQL> with t as (
      2             select 1 Orderid,1 noofbookstoorder from dual union all
      3             select 2,3 from dual union all
      4             select 3,2 from dual
      5            )
      6  select  t.*
      7    from  t,
      8          table(cast(multiset(select 1 from dual connect by level <= noofbookstoorder) as sys.OdciNumberList))
      9    where Orderid = 2
    10  /
       ORDERID NOOFBOOKSTOORDER
             2                3
             2                3
             2                3
    SQL> with t as (
      2             select 1 Orderid,1 noofbookstoorder from dual union all
      3             select 2,3 from dual union all
      4             select 3,2 from dual
      5            )
      6  select  t.*
      7    from  t,
      8          table(cast(multiset(select 1 from dual connect by level <= noofbookstoorder) as sys.OdciNumberList))
      9    where Orderid = 3
    10  /
       ORDERID NOOFBOOKSTOORDER
             3                2
             3                2
    SQL> with t as (
      2             select 1 Orderid,1 noofbookstoorder from dual union all
      3             select 2,3 from dual union all
      4             select 3,2 from dual
      5            )
      6  select  t.*
      7    from  t,
      8          table(cast(multiset(select 1 from dual connect by level <= noofbookstoorder) as sys.OdciNumberList))
      9    where Orderid = 1
    10  /
       ORDERID NOOFBOOKSTOORDER
             1                1
    SQL>  -- And if you want to select multiple orders
    SQL> with t as (
      2             select 1 Orderid,1 noofbookstoorder from dual union all
      3             select 2,3 from dual union all
      4             select 3,2 from dual
      5            )
      6  select  t.*
      7    from  t,
      8          table(cast(multiset(select 1 from dual connect by level <= noofbookstoorder) as sys.OdciNumberList))
      9    where Orderid in (2,3)
    10  /
       ORDERID NOOFBOOKSTOORDER
             2                3
             2                3
             2                3
             3                2
             3                2
    SQL>
    SY.

  • Af:table Scroll bars not displayed in IE11 for large number of rows

    Hi. I'm using JDeveloper 11.1.2.4.0.
    The requirements of our application are to display a table potentially containing a very large number of rows (sometimes in excess of 3 million). While the user does not need to scroll through this many rows, the QBE facility allows drill-down into specific information in the rowset. We moved up to JDeveloper 11.1.2.4.0 primarily so IE11 could be used instead of IE8, to overcome input latency in ADF forms.
    However, it seems that IE11 does not enable the vertical or horizontal scroll bars for the af:table component when the table contains greater than (approx) 650,000 rows. This is not the case when the Chrome browser is used. Nor was this the case on IE8 previously (using JDev 11.1.2.1.0).
    When the table is filtered using the QBE (to a subset < 650,000 rows), the scroll bars are displayed correctly.
    In the code the af:table component is surrounded by an af:panelCollection component which is itself surrounded by an af:panelStretchLayout component.
    Does anyone have any suggestions as to how this behaviour can be corrected? Is it purely a browser problem, or might there be a programmatic workaround in ADF?
    Thanks for your help.

    Thanks for your response. That's no longer an option for us though...
    Some further investigation into the generated HTML has yielded the following information...
    The missing scroll bars appear to be as a consequence of the setting of a style for the horizontal and vertical scroll bars (referenced as vscroller and hscroller in the HTML).  The height of the scrollbar appears to be computed by multiplying the estimated number of rows in the iterator on which the table is based by 16 to give a scrollbar size proportional to the amount of data in the table, although it is not obvious why that should be done for the horizontal scroller.  If this number is greater than or equal to 10737424 pixels then the scroll bars do not display in IE11.
    Would it not be better to set this height to a sensible limiting number of pixels for a large number of rows?
    Alternatively, is it possible to find where this calculation is taking place and override its behaviour?
    Thanks.

  • How do you return the number of Rows in a ResultSet??

    How do you return the number of Rows in a ResultSet? It's easy enough to do in the SQL query using COUNT(*) but surely JDBC provides a method to return the number of rows.
    The ResultSetMetaData interface provides a method for counting the number of columns but nothing for the rows.
    Thanks

    No good way before JDBC 2.0. You can use the JDBC 2.0 CachedRowSet and its size() method to retrieve the number of rows fetched by a ResultSet.
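
    The same limitation exists in other database APIs. For comparison, a minimal C# sketch (the connection string and table name are hypothetical): with ADO.NET's forward-only SqlDataReader you likewise either iterate and count, or issue a separate COUNT(*):

    using System;
    using System.Data.SqlClient;

    class CountRows
    {
        static void Main()
        {
            // Hypothetical connection string and table.
            using (var conn = new SqlConnection("Server=.;Database=Test;Integrated Security=true"))
            {
                conn.Open();
                var cmd = new SqlCommand("SELECT * FROM MyTable", conn);
                int rows = 0;
                using (var reader = cmd.ExecuteReader())
                {
                    // A forward-only reader exposes no row count; you must iterate.
                    while (reader.Read())
                        rows++;
                }
                Console.WriteLine("Rows: " + rows);
            }
        }
    }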

  • How to Capture a Table with a large number of Rows in a Web UI Test?

    HI,
    Is there any possibility to capture a DOM table with a large number of rows (say, more than 100) in a Web UI Test?
    Or is there any bug?

    Hi,
    You can try the following code to capture the table values.
    To store the table values in a CSV file:
    web.table( xpath_of_table ).exportToCSVFile("D:\exporttable.csv", true);
    To store the table values in a string:
    String tblValues = web.table( xpath_of_table ).exportToCSVString();
    info(tblValues);
    Thanks
    -POPS

  • Search for a word and return all the lines (rows) from the text file

    Hi all,
    I need help searching for a string in a text file and returning all the lines (rows) where the searched string is found. I have included my code; it finds the index of the string but does not return the entire line. I would appreciate any help.
    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    public class SearchWord {
         public static void main(String[] args) {
              //Search String
              String searchText = "man";
              //File to search (in same directory as .class file)
              String fileName = "C:\\Workspace\\MyFile.txt";
              //StringBuilder allows to create a string by concatenating
              //multiple strings efficiently.
              StringBuilder sb = new StringBuilder();
              try {
                   //Create the buffered input stream, which reads
                   //from a file input stream
                   BufferedInputStream bIn =
                        new BufferedInputStream(
                             new FileInputStream(fileName));
                   //Holds #of available bytes in our stream
                   //(which is the file)
                   int avl = bIn.available();
                   //Read as long as we have something
                   while (avl != 0) {
                        //Holds the bytes which we read
                        byte[] buffer = new byte[avl];
                        //Read <avl> bytes from the file into the start of the buffer
                        bIn.read(buffer, 0, avl);
                        //Create a new string from the byte[] we read
                        String strTemp = new String(buffer);
                        //Append the string to the string builder
                        sb.append(strTemp);
                        //Get the next available set of bytes
                        avl = bIn.available();
                   }
                   bIn.close();
              } catch (IOException ex) {
                   ex.printStackTrace();
              }
              //Get the concatenated string from string builder
              String fileText = sb.toString();
              int indexVal = fileText.indexOf(searchText);
              //Displays the index location in the file for a given text.
              // -1 if not found
              if (indexVal == -1)
                   System.out.println("No values found");
              else
                   System.out.println("Search for: " + searchText);
         }
    }

    Hi, you can read the file line by line and check each line for the search string; that gives you the whole line. For example, inside a servlet (override HttpServlet to treat the class as a servlet):
    public class ReportAction extends HttpServlet {
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            // write your whole logic.
            BufferedReader br = new BufferedReader(new FileReader("your file name"));
            String line = "";
            while ((line = br.readLine()) != null) {
                if (line.contains("your search string")) {
                    System.out.println("The whole line, row is :" + line);
                }
            }
            br.close();
        }
    }
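
    For comparison, the same line-oriented approach in C# (a minimal sketch reusing the question's sample values; File.ReadLines streams the file instead of loading it all into memory):

    using System;
    using System.IO;

    class SearchWord
    {
        static void Main()
        {
            string searchText = "man";
            string fileName = @"C:\Workspace\MyFile.txt";

            // Print every line that contains the search string.
            foreach (string line in File.ReadLines(fileName))
            {
                if (line.Contains(searchText))
                    Console.WriteLine(line);
            }
        }
    }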

  • New table without statistics returns invalid number of rows

    Hi,
    I've been searching for a while now for an explanation for the following "problem"
    We have an Oracle 11.1.0.7 database on AIX5.3
    In this database we have two tables, called KRT_PRODUCT_INFO and KRT_STRUCTURE_INFO (the table names don't really matter).
    The scenario is as following:
    If we recreate these tables like:
    CREATE TABLE KRT_PRODUCT_INFO_BUP AS SELECT * FROM KRT_PRODUCT_INFO;
    DROP TABLE KRT_PRODUCT_INFO CASCADE CONSTRAINTS;
    CREATE TABLE KRT_PRODUCT_INFO (...) TABLESPACE PIM_DATA NOLOGGING NOCOMPRESS NOCACHE NOPARALLEL MONITORING;
    CREATE INDEX KRT_PRODUCT_INFO_X1 ON KRT_PRODUCT_INFO (PRODUCT_NUMBER) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    CREATE INDEX KRT_PRODUCT_INFO_X2 ON KRT_PRODUCT_INFO (PIM_ARTICLEREVISIONID) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    INSERT INTO KRT_PRODUCT_INFO (SELECT * FROM KRT_PRODUCT_INFO_BUP);
    COMMIT;
    CREATE TABLE KRT_STRUCTURE_INFO_BUP AS SELECT * FROM KRT_STRUCTURE_INFO;
    DROP TABLE KRT_STRUCTURE_INFO CASCADE CONSTRAINTS;
    CREATE TABLE KRT_STRUCTURE_INFO (...) TABLESPACE PIM_DATA NOLOGGING NOCOMPRESS NOCACHE NOPARALLEL MONITORING;
    CREATE INDEX KRT_STRUCTURES_X1 ON KRT_STRUCTURE_INFO (STRUCTURE_GRP_REV_ID) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    CREATE INDEX KRT_STRUCTURES_X2 ON KRT_STRUCTURE_INFO (STRUCTURE_GRP_IDENTIFIER) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    CREATE INDEX KRT_STRUCTURES_X3 ON KRT_STRUCTURE_INFO (STRUCTURE_GRP_ID) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    INSERT INTO KRT_STRUCTURE_INFO (SELECT * FROM KRT_STRUCTURE_INFO_BUP);
    COMMIT;
    and we run a complex query joining these two tables, the query returns only a couple of rows (exactly 24!).
    If we however generate statistics on these tables after creation, the correct number of rows is returned: 1,167,991 rows.
    The statistics are gathered using:
    BEGIN
    SYS.DBMS_STATS.GATHER_TABLE_STATS (
    OwnName => 'PIM_KRG'
    ,TabName => 'KRT_PRODUCT_INFO'
    ,Estimate_Percent => NULL
    ,Method_Opt => 'FOR ALL COLUMNS SIZE REPEAT '
    ,Degree => NULL
    ,Cascade => TRUE
    ,No_Invalidate => FALSE);
    END;
    BEGIN
    SYS.DBMS_STATS.GATHER_TABLE_STATS (
    OwnName => 'PIM_KRG'
    ,TabName => 'KRT_STRUCTURE_INFO'
    ,Estimate_Percent => NULL
    ,Method_Opt => 'FOR ALL COLUMNS SIZE REPEAT '
    ,Degree => NULL
    ,Cascade => TRUE
    ,No_Invalidate => FALSE);
    END;
    /
    I can imagine that the 'plan' for the query used is wrong because of missing statistics.
    But I can't imagine that it would actually return an incorrect number of rows.
    I tested this behaviour in Toad and sqlplus ( first thought it was Toad ), and both behave the same.
    Another fact is that the "problem" is NOT reproducible in our TEST environment, which runs Oracle 11.1.0.7 on Windows 2008.
    Just to be sure, this is the "complex" query used. It was not developed by me, and I think it looks somewhat strange, but that shouldn't matter:
    SELECT sr."Identifier" STRUCTURE_IDENTIFIER
    , ar_i."Identifier" ITEM_NUMBER
    , SUM (REPLACE (NVL (s.HIDE_LE10, 0) + NVL (p.HIDE_LE10, 0), 2, 1))
    hide_le10
    , SUM (REPLACE (NVL (s.HIDE_LE30, 0) + NVL (p.HIDE_LE30, 0), 2, 1))
    hide_le30
    , SUM (REPLACE (NVL (s.HIDE_LE40, 0) + NVL (p.HIDE_LE40, 0), 2, 1))
    hide_le40
    , SUM (REPLACE (NVL (s.HIDE_LE50, 0) + NVL (p.HIDE_LE50, 0), 2, 1))
    hide_le50
    , SUM (REPLACE (NVL (s.HIDE_LE55, 0) + NVL (p.HIDE_LE55, 0), 2, 1))
    hide_le55
    , SUM (REPLACE (NVL (s.HIDE_LE60, 0) + NVL (p.HIDE_LE60, 0), 2, 1))
    hide_le60
    , SUM (REPLACE (NVL (s.HIDE_LE70, 0) + NVL (p.HIDE_LE70, 0), 2, 1))
    hide_le70
    , SUM (REPLACE (NVL (s.HIDE_LE75, 0) + NVL (p.HIDE_LE75, 0), 2, 1))
    hide_le75
    , SUM (REPLACE (NVL (s.HIDE_LE58, 0) + NVL (p.HIDE_LE58, 0), 2, 1))
    hide_le58
    , SUM (REPLACE (NVL (s.HIDE_LE80, 0) + NVL (p.HIDE_LE80, 0), 2, 1))
    hide_le80
    , SUM (REPLACE (NVL (s.HIDE_LE90, 0) + NVL (p.HIDE_LE90, 0), 2, 1))
    hide_le90
    , SUM (REPLACE (NVL (s.HIDE_LE92, 0) + NVL (p.HIDE_LE92, 0), 2, 1))
    hide_le92
    , SUM (REPLACE (NVL (s.HIDE_LE94, 0) + NVL (p.HIDE_LE94, 0), 2, 1))
    hide_le94
    , SUM (REPLACE (NVL (s.HIDE_LE96, 0) + NVL (p.HIDE_LE96, 0), 2, 1))
    hide_le96
    , COUNT (*) cnt
    FROM KRAMP_HPM_MAIN."StructureRevision" sr
    , KRAMP_HPM_MAIN."StructureGroupRevision" sgr
    , KRAMP_HPM_MASTER."ArticleStructureMap" asm
    , KRAMP_HPM_MASTER."ArticleRevision" ar_p
    , KRAMP_HPM_MASTER."ArticleDetail" ad_p
    , KRAMP_HPM_MASTER."ArticleRevision" ar_i
    , KRAMP_HPM_MASTER."ArticleDetail" ad_i
    , KRAMP_HPM_MASTER."ArticleReference" ar
    , KRT_STRUCTURE_INFO s
    , KRT_PRODUCT_INFO p
    WHERE sr."StructureID" = sgr."StructureID"
    AND sgr."StructureGroupID" = asm."StructureGroupID"
    AND ar_p."ID" = asm."ArticleRevisionID"
    AND ar_p."ID" = ad_p."ArticleRevisionID"
    AND ad_p."Res_Text100_02" = 'PRODUCT'
    AND ar_i."ID" = ad_i."ArticleRevisionID"
    AND ad_i."Res_Text100_02" = 'ARTICLE'
    AND ar."ArticleRevisionID" = ar_p."ID"
    AND ar."ReferencedSupplierAID" = ar_i."Identifier"
    AND s.STRUCTURE_GRP_REV_ID = sgr."ID"
    AND p.PIM_ARTICLEREVISIONID = ar_p."ID"
    GROUP BY sr."Identifier", ar_i."Identifier";
    Any ideas are welcome.
    Thanks
    FJFranken

    Hemant K Chitale wrote:
    These two tables are in the PIM_KRG schema while the other tables in the query are distributed across two other schemas "KRAMP_HPM_MAIN" and "KRAMP_HPM_MASTER" ?
    Do you happen to have the same table names occurring in multiple schemas - the query is then referencing the data in the wrong schema ?
    Hemant K Chitale
    Hi,
    This is not the case. The KRAMP_HPM schemas are application-dedicated schemas.
    And it also does not explain why the results are correct after generating statistics.
    Anyway thanks for the tip.
    FJFranken

  • Calling stored procedure and returning multiple resultsets

    Hello,
    Is it possible to create a procedure that returns multiple result sets?
    e.g.
    procedure GetDataFromTables() is
    begin
    select * from table_one;
    select * from table_two;
    end GetDataFromTables;
    And I want to call this procedure and have it return multiple result sets (one for table_one and another for table_two).
    I have referred to the OCCI sample occiproc.cpp, but I am not sure how to handle multiple resultsets.
    Your help is highly appreciated.

    Thank You,
    Got the REF cursor in storedproc.cpp.
    But as documented, getCursor() gets the REF CURSOR value of an OUT parameter as a ResultSet.
    So, is it true that if I have to write a procedure that selects * from 50 tables (50 SQL statements), I have to set up 50 OUT parameters to get the result sets?
    Is there any better way of doing this?
    e.g. in DB2, we can open multiple cursors in the procedure body and then get the result set one after another.
    CREATE PROCEDURE get_staging_data ()
    RESULT SETS 1 <no. of result sets>
    LANGUAGE SQL
    BEGIN ATOMIC
    DECLARE L_SQL varchar(5000);
    DECLARE c CURSOR WITH RETURN TO CLIENT FOR L_STMT;
    SET L_SQL = '';
    SET L_SQL = 'SELECT * FROM ';
    SET L_SQL = L_SQL || IN_Tab_Name ;
    SET L_SQL = L_SQL || ' WHERE VERIFY_FLAG=''S'' FOR READ ONLY OPTIMIZE FOR 2000 ROWS' ;
    PREPARE L_STMT FROM L_SQL;
    OPEN c; <can open multiple cursors in the procedure body>
    END

  • Java executed SQL statement to XML - large number of rows

    Hello,
    I have to write some code to generate XML as the result of a query. AFAIK I am using the latest versions of the relevant Java libraries etc.
    I have found that setting the maximum row count above 2000 results in 'Exception in thread "main" java.lang.OutOfMemoryError'.
    I thought I could overcome this by reading the data in batches using the setSkipRows functionality.
    I have included the code I am using below and would appreciate any help.
    Best Regards,
    Mark Robbins
    import java.sql.*;
    import java.math.*;
    import oracle.xml.sql.query.*;
    import oracle.xml.sql.docgen.*;
    import oracle.jdbc.*;
    import oracle.jdbc.driver.*;

    public class XMLRetr {
        public static void main(String args[]) throws SQLException {
            String tabName = "becc";
            String user = "test/test";
            DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
            //initiate a JDBC connection
            Connection conn =
                DriverManager.getConnection("jdbc:oracle:oci8:" + user + "@test");
            // initialize the OracleXMLQuery
            OracleXMLQuery qry = new OracleXMLQuery(conn, "select * from " + tabName + " where TRUNC(SAMPLED) < '14-NOV-99'");
            // structure the generated XML document
            System.out.println("Before setting parameters");
            qry.setMaxRows(8000);        // set the maximum number of rows to be returned
            qry.setRowsetTag("ROOTDOC"); // set the root document tag
            qry.setRowTag("DBROW");      // sets the row separator tag
            qry.setRaiseException(true);
            qry.keepCursorState(true);
            // create the document generation factory. Note: there are methods in OracleXMLQuery
            // which don't require that an OracleXMLDocGen object be passed in; but rather, they
            // create an OracleXMLDocGen object of their own.
            OracleXMLDocGen doc = (OracleXMLDocGen) new OracleXMLDocGenString();
            for (int rowCnt = 1; rowCnt < 8000; rowCnt = rowCnt + 1000) {
                // get the xml
                System.out.println("Before skip rowCnt:" + rowCnt);
                qry.setSkipRows(rowCnt); // process next batch
                System.out.println("Before getXML");
                // this is where I get the exception on the second iteration of the loop
                qry.getXML(doc);
                System.out.println("Displaying document");
                // System.out.println(doc.getXMLDocumentString());
                System.out.println("Row number:" + rowCnt);
                System.out.flush();
                System.out.println("End of document");
            }
            qry.close();
        }
    }

    I used qry.getXMLString(), but called it for every row, i.e. set
    qry.setMaxRows(1) and qry.setSkipRows(rowcount - 1).
    The downside is that I had to post-process the String to remove the processing instruction and the document-level tags.

  • Cursor & large number of rows

    Hello!
    I have an application which queries my Oracle 8i database using OCI. My problem is that the number of rows returned is very large (1 million in total). How can I have the rows returned in batches, the same way a search tool returns results?

    Use bulk binds (see http://technet.oracle.com/sample_code/tech/pl_sql/htdocs/bulkdemo.txt)

  • Displaying a message and fixing the number of rows in a table

    Hi Experts,
    I have a requirement like this
    there is a table where I need to display a message when there are no records, and no rows should be visible in that table.
    Also, when the records are populated from the context, I need to fix the number of rows in that table and display the records.
    Please let me know how this can be achieved.
    Also, in the table I have a Link to URL; please let me know how to handle this reference. When I set the reference property, it gives me an error stating that the file doesn't exist when the table gets loaded.
    Thanks in Advance
    Regards,
    Palani

    Hi
    Oh!! You should have explained in your first post that you want to display a JPG image or other WDWebResourceType taken from the backend.
    1. So this is not a URL at all.
    2. You have to convert the binary data to a WDWebResource, then display it in an Image UI element or another UI element (like PDF, txt etc.)
    3.
    try {
         // Read the datasource of the FileUpload
         IWDResource res = wdContext.currentContextElement().getResource();
         InputStream in = res.read(false);
         ByteArrayOutputStream bOut = new ByteArrayOutputStream();
         int length;
         byte[] part = new byte[10 * 1024];
         while ((length = in.read(part)) != -1) {
              bOut.write(part, 0, length);
         }
         in.close();
         bOut.close();
         IPrivateUploadCompView.IImageTableElement ele = wdContext.nodeImageTable().createImageTableElement();
         ele.setImage(wdContext.currentContextElement().getResource().getUrl(0));
         ele.setText(res.getResourceName());
         wdContext.nodeImageTable().addElement(ele);
    } catch (Exception e) {
         wdComponentAPI.getMessageManager().reportWarning(e.toString());
    }
    Here I assume that you convert the data to the IWDResource type, or
    4.
    WDWebResource.getWebResource(wdContext.currentContextElement().getResource(), type);
    // getResource is the binary data which you read from the BAPI and set in the local context; type is the MIME type, or hardcode it as "JPG"
    5. Further help
    Help1: To Display an Image in Web Dynpro Java from a Standard Function Module
    Help2: http://wiki.sdn.sap.com/wiki/display/KMC/GettinganimagefromKMDocumentstobeusedinWeb+Dynpro
    Help3: http://wiki.sdn.sap.com/wiki/display/KMC/GettinganimagefromKMDocumenttobeusedinWeb+DynPro
    The code might look strange at first; please do some research on SDN. I did my best at this level.
    Best Regards
    Satish Kumar

  • Deleting a large number of rows from a table

    Hi,
    Consider tables A, B, C, D, E, F, all having 100000+ records. Tables B, C, D are dependent on table A (with foreign key constraints). When I am deleting records from all tables, tables B, C, D take at most 30-40 seconds, while table A takes 30-40 minutes. All tables have indexes.
    Method I have used:
    1. Created a temp table.
    2. Then deleted all records from B, C, D, E, F for all records in the temp table, in batches of 500:
    delete from B where exists (select 1 from temp where b.col1=temp.col1);
    3. Please suggest why it is taking so much time to delete the records in table A.
    Is it that, during deletion from such a master table, Oracle refers to all dependent tables even if no dependent data is present? If yes, could you please suggest options to avoid this? I hope it won't go through the CHECK constraints while deleting data.
    Thanks,
    Avinash

    user12952025 wrote:
    Consider tables A, B, C, D, E, F, all having 100000+ records. Tables B, C, D are dependent on table A (with foreign key constraints). When I am deleting records from all tables, tables B, C, D take at most 30-40 seconds, while table A takes 30-40 minutes. All tables have indexes.
    What attribute of the foreign key is specified? Is it On Delete Cascade? If yes, then in a way, deleting data from the child tables is unnecessary. Only a delete from the parent shall suffice.
    user12952025 wrote:
    Method I have used:
    1. Created a temp table.
    2. Then deleted all records from B, C, D, E, F for all records in the temp table, in batches of 500:
    delete from B where exists (select 1 from temp where b.col1=temp.col1);
    3. Please suggest why it is taking so much time to delete the records in table A.
    Is it that, during deletion from such a master table, Oracle refers to all dependent tables even if no dependent data is present? If yes, could you please suggest options to avoid this? I hope it won't go through the CHECK constraints while deleting data.
    One other way is to "switch off" the relationship while deleting the data:
    ALTER TABLE table_name
    disable CONSTRAINT constraint_name;
    And then delete the data from each of the tables.
    You did specify the number of rows in each table; it would have been better to also mention the number of rows to be deleted.
    It is not a hard-and-fast way, but it would generally perform better to copy the data (to be retained) from the parent table into a temporary table, drop the parent table, and rename the temporary table to the parent table. The same can be performed on the child tables.
    You may then re-enable the foreign key constraints.
