Performance problem with sproc and out parameter ref cursor

Hi
I have a stored procedure with a REF CURSOR as an OUT parameter.
Looping over the ResultSet is extremely slow (it appears to fetch record by record),
so I added setPrefetchRowCount(100) and setPrefetchMemorySize(6000).
Pseudo code below:
std::string sqlStmt = "BEGIN get_tick_data( :v1 , :v2); END;";
Statement* s = connection->createStatement(sqlStmt);
s->setString(1, i1);
// Cursor columns: ( f1, f2, f3, f4, i1 ) -- f = float, i = integer.
// 5 columns per row: 4 float values and 1 int value,
// assuming roughly 40 bytes for one record.
s->setPrefetchRowCount(100);
s->setPrefetchMemorySize(6000);
s->registerOutParam(2, OCCICURSOR);
s->execute();
ResultSet* rs = s->getCursor(2);
while (rs->next()) {
    // process the row -- and do it very slowly!
}

Hi,
I have the same problem. It seems that setPrefetchRowCount is not taken into account by OCCI when retrieving a cursor. A plain SQL statement like "SELECT STR1, STR2, STR3 FROM TABLE1" works fine, but if your SQL statement is a call to a stored procedure returning a cursor, each row fetch needs a round trip.
To avoid this problem you need to use the method "setDataBuffer" of the "ResultSet" object for each column of your cursor. It is easy to use with the INT and STRING types, a little bit more complex with the DATE type. But until now, I have not been able to do the same thing with the REF type.
Below is a sample with STRING type (it assumes the cursor returns only one column, of STRING type):
try {
  oracle::occi::Statement *l_Statement =
      m_Connection->createStatement("BEGIN :1 := PACKAGE1.GetCursor1(:2); END;");
  oracle::occi::Number l_CodeErreur;
  l_Statement->registerOutParam(1, oracle::occi::OCCINUMBER, sizeof(l_CodeErreur));
  l_Statement->registerOutParam(2, oracle::occi::OCCICURSOR);
  l_Statement->executeQuery();
  l_CodeErreur = l_Statement->getNumber(1);
  if ((int) l_CodeErreur == 0)
  {
    char l_ArrayName[5][256];   // room for 5 rows of one string column
    ub2  l_ArrayNameSize[5];    // actual length of each fetched value
    oracle::occi::ResultSet *l_ResultSet = l_Statement->getCursor(2);
    // Bind the array buffer once; next(5) then fetches up to 5 rows per round trip.
    l_ResultSet->setDataBuffer(1, l_ArrayName, OCCI_SQLT_STR, sizeof(l_ArrayName[0]), l_ArrayNameSize, NULL, NULL);
    while (l_ResultSet->next(5))
    {
      for (unsigned int i = 0; i < l_ResultSet->getNumArrayRows(); i++)
        l_Name = CString(l_ArrayName[i]);   // l_Name is assumed declared elsewhere (e.g. CString l_Name;)
    }
    l_Statement->closeResultSet(l_ResultSet);
  }
  m_Connection->terminateStatement(l_Statement);
}
catch (oracle::occi::SQLException &p_SQLException)
{
  // handle / log the Oracle error here
}
I hope this sample helps you.
Regards

Similar Messages

  • [URGENT] Performance problem with BC4J and partioned data

    Hi all,
    I have a big performance problem with BC4J and partitioned data. As a partitioned table shouldn't have a primary key like a sequence (or something else), my partitioned table doesn't have any primary key.
    When I debug my BC4J application I can see a message showing me "ignoring row with no primary key" from EntityCache. It takes a long time to retrieve my data even if I use the partition keys. A quick & dirty Forms application was multiple times faster!
    Is this a bug in BC4J, or is BC4J not suitable for partitioned data? Can anyone give me a hint what to do to make the BC4J application fast even with partitioned data? In a non-partitioned environment the application works quite well, so it seems that the error must be somewhere in this part.
    Thanks,
    Axel

    Here's a SQL statement that creates the table.
    CREATE TABLE SEARCH
    (SEAR_PARTKEY_DAY              NUMBER(4)        NOT NULL
    ,SEAR_PARTKEY_EMP            VARCHAR2(2)      NOT NULL
    ,SEAR_ID                     NUMBER(20)       NOT NULL
    ,SEAR_ENTRY_DATE             TIMESTAMP        NOT NULL
    ,SEAR_LAST_MODIFIED            TIMESTAMP             NOT NULL
    ,SEAR_STATUS                 VARCHAR2(100)    DEFAULT '0'
    ,SEAR_ITC_DATE               TIMESTAMP        NOT NULL
    ,SEAR_MESSAGE_CLASS          VARCHAR2(15)     NOT NULL
    ,SEAR_CHIPHERING_TYPE        VARCHAR2(256)   
    ,SEAR_GMAT                   VARCHAR2(1)      DEFAULT 'U'
    ,SEAR_NATIONALITY            VARCHAR2(3)      DEFAULT 'XXX'
    ,SEAR_MESSAGE_ID             VARCHAR2(32)     NOT NULL
    ,SEAR_COMMENT                VARCHAR2(256)    NOT NULL
    ,SEAR_NUMBER_OF              NUMBER(3)        NOT NULL
    ,SEAR_INTERCEPTION_SYSTEM    VARCHAR2(40)    
    ,SEAR_COMM_PRIOD_H           NUMBER(5)        DEFAULT -1
    ,SEAR_PRIOD_R                  NUMBER(5)        DEFAULT -1
    ,SEAR_INMARSAT_CES           VARCHAR2(40)    
    ,SEAR_BEAM                   VARCHAR2(10)    
    ,SEAR_DIALED_NUMBER          VARCHAR2(70)    
    ,SEAR_TRANSMIT_NUMBER        VARCHAR2(70)    
    ,SEAR_CALLED_NUMBER          VARCHAR2(40)    
    ,SEAR_CALLER_NUMBER          VARCHAR2(40)    
    ,SEAR_MATERIAL_TYPE          VARCHAR2(3)      NOT NULL
    ,SEAR_SOURCE                 VARCHAR2(10)    
    ,SEAR_MAPPING                VARCHAR2(100)    DEFAULT '__REST'
    ,SEAR_DETAIL_MAPPING         VARCHAR2(100)
    ,SEAR_PRIORITY               NUMBER(3)        DEFAULT 255
    ,SEAR_LANGUAGE               VARCHAR2(5)      DEFAULT 'XXX'
    ,SEAR_TRANSMISSION_TYPE      VARCHAR2(40)    
    ,SEAR_INMARSAT_STD           VARCHAR2(1)     
    ,SEAR_FILE_NAME              VARCHAR2(100)    NOT NULL
    )
    PARTITION BY RANGE (SEAR_PARTKEY_DAY, SEAR_PARTKEY_EMP)
    (
      PARTITION SEARCH_MAX VALUES LESS THAN (MAXVALUE, MAXVALUE) TABLESPACE MIRA4_SEARCH_EVEN
    );
    Of course SEAR_ID is filled by a sequence, but the field is not the primary key as it would decrease the performance of partitioned data.
    We moved our application to native JDBC and the performance is better than we ever expected!

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records when passing a batch of records (terrible), or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN varchar2(20),
      XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
      CODE varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
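    For what it's worth, here is a minimal (hedged) sketch of what moving the code lookup into the SQL join might look like, using only the test tables defined above (RECORDS and CODES). It flattens each record into relational rows for one subelement instead of rebuilding the XML, so it is not a drop-in replacement for the XMLQUERY above, but it gives the optimizer an ordinary join against CODES that it can hash or drive by index:
    -- Hedged sketch: resolve codes with a relational outer join instead of
    -- ora:view() inside the XQuery. Table and column names come from the
    -- test schema above; only Subelement1 is handled here.
    SELECT r.ssn,
           x.code,
           c.description
    FROM   records r,
           XMLTABLE('/Root/Element/Subelement1'
                    PASSING r.xmlrec
                    COLUMNS code VARCHAR2(4) PATH 'Code') x,
           codes c
    WHERE  r.ssn = '10000'
    AND    c.code (+) = x.code;
    Whether this helps depends on whether a flattened result is acceptable downstream; reassembling the XML afterwards (e.g. with XMLELEMENT/XMLAGG) would add some of the cost back.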

  • Memory Problem with SEt and GET parameter

    Hi,
    I am working with exits. I have one exit for importing and another one for changing a parameter.
    SET PARAMETER exit code is ....
    data: v_nba like eban-bsart,
           v_nbc like eban-bsart,
           v_nbo like eban-bsart.
           v_nbc = 'CAPX'.
           v_nbo = 'OPEX'.
           v_nba = 'OVH'.
    if im_data_new-werks is initial.
      if im_data_new-knttp is initial.
        if im_data_new-bsart = 'NBC' or im_data_new-bsart = 'SERC' or im_data_new-bsart = 'SERI'
           or im_data_new-bsart = 'SER' or im_data_new-bsart = 'SERM' or im_data_new-bsart = 'NBI'.
          set parameter id 'ZC1' field v_nbc.
        elseif im_data_new-bsart = 'NBO' or im_data_new-bsart = 'NBM' or im_data_new-bsart = 'SERO'.
          set parameter id 'ZC2' field v_nbo.
        elseif im_data_new-bsart = 'NBA' or im_data_new-bsart = 'SERA'.
          set parameter id 'ZC3' field  v_nba.
        endif.
      endif.
    endif.
    and the GET PARAMETER code is:
      get parameter id 'ZC1' field c_fmderive-fund.
      get parameter id 'ZC2' field c_fmderive-fund.
      get parameter id 'ZC3' field c_fmderive-fund.
    FREE MEMORY ID 'ZC1'.
      FREE MEMORY ID 'ZC2'.
       FREE MEMORY ID 'ZC3'.
    In this code I am facing a memory problem.
    The memory is not refreshed every time.
    So please give me a proper solution.
    It's urgent.
    Thanks
    Ranveer

    Hi,
       I suppose you are trying to store some particular value in memory in one program and then retrieve it in another.
    If so try using EXPORT data TO MEMORY ID 'ZC1'. and IMPORT data FROM MEMORY ID 'ZC1'.
    To use SET PARAMETER/GET PARAMETER the specified parameter name should be in table TPARA. Which I don't think is there in your case.
    Sample Code :
    * Data declarations for the function codes to be transferred
    DATA : v_first  TYPE syucomm,
           v_second TYPE syucomm.
    CONSTANTS : c_memid TYPE char10 VALUE 'ZCCBPR1'.
    * Move the function codes to the program variables
      v_first  = gv_bdt_fcode.
      v_second = sy-ucomm.
    * Export the function codes to Memory ID
    EXPORT v_first
           v_second TO MEMORY ID c_memid.        "ZCCBPR1  --- Here you are sending the values to memory
    Then retrieve it.
    * Retrieve the function codes from the Memory ID
      IMPORT v_first  TO v_fcode_1
             v_second TO v_fcode_2
      FROM MEMORY ID c_memid.                                   "ZCCBPR1
      FREE MEMORY ID c_memid.                                   "ZCCBPR1
    After reading the values from the memory ID, free it and your problem should be solved.
    Thanks
    Barada
    Edited by: Baradakanta Swain on May 27, 2008 10:20 AM

  • Stored procedure with IN and OUT parameter

    Hi all,
    here is a code example:
    declare
             in_dt date  := '1-feb-2010' ;
    col1 ...;
    col2 ...;
    col3 ...;       
    begin
      select e.*
      into col1,
            col2,
            col3
      from table_xyz e
      where e.start_dt  = in_dt;
    end;
    How do I convert the above code into a stored procedure that accepts "in_dt" as an IN parameter and returns the result set (output) through an OUT parameter, say "cur_out" (a ref cursor)?
    Thank you so much!!! I really appreciate it!!

    I ran my procedure, which is very similar to the one syndra posted:
    create or replace procedure foo(p_dt in date, cv out sys_refcursor) as
    begin
    open cv for
    select e.*
    from table_xyz e
    where start_dt = p_dt;
    end;
    /
    Here is how it is executed:
    DECLARE
      P_DT DATE;
      CV SYS_REFCURSOR;
    BEGIN
      P_DT := '10-oct-2005';
      -- CV := NULL;  Modify the code to initialize this parameter
      scott.foo ( P_DT, CV );
      COMMIT;
    END;           
    -- I get "PL/SQL procedure successfully completed", but I don't see the result set or output.
    - How do I see the output when I am using a ref cursor? I tried using PRINT, but nothing worked.
    - Any idea?
    Thank you!!
    Edited by: user642297 on Jun 24, 2010 1:35 PM
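    One way to see the rows, for reference: fetch from the returned cursor in the calling block and print with DBMS_OUTPUT. This is only a minimal sketch, assuming the procedure foo above and the TABLE_XYZ it selects from; run it with SET SERVEROUTPUT ON.
    DECLARE
      p_dt DATE := DATE '2005-10-10';
      cv   SYS_REFCURSOR;
      rec  table_xyz%ROWTYPE;   -- matches the cursor because foo selects e.* from table_xyz e
      n    PLS_INTEGER := 0;
    BEGIN
      scott.foo(p_dt, cv);
      LOOP
        FETCH cv INTO rec;
        EXIT WHEN cv%NOTFOUND;
        n := n + 1;
        -- print whichever columns you need, e.g. DBMS_OUTPUT.PUT_LINE(rec.start_dt);
      END LOOP;
      CLOSE cv;
      DBMS_OUTPUT.PUT_LINE('Rows fetched: ' || n);
    END;
    /
    In SQL*Plus the PRINT approach also works, but only with a bind variable: VARIABLE rc REFCURSOR, then EXECUTE scott.foo(DATE '2005-10-10', :rc), then PRINT rc.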

  • Problem with Latches and Parallelising PL/SQL Cursors

    In 9i I have a process that does the following:
    PROCEDURE LoadClients IS
      CURSOR cChanges IS
        SELECT ...
        WHERE MOD(PK, pnTotalJobs) = pnThisJob;
    To call this I do the following:
      nTOTALJOBS CONSTANT := 2;
    BEGIN
      FOR i IN 1..nTOTALJOBS LOOP
        DBMS_JOB.SUBMIT(LoadClient(pnTotalJobs=>nTOTALJOBS, pnThisJob=>i-1));
      END LOOP;
      COMMIT;
    So by changing the value of the constant nTOTALJOBS I can control the number of instances of LoadClient that run. This means I only need one procedure which can be called multiple times, making maintenance simpler than having multiple procedures.
    This process has been running successfully on several procedures for over a year. However, recently I have been having trouble with latches in memory. I think these are related to using the variables in the cursor, with Oracle treating the different queries as identical. This results in locks in memory and none of the jobs running. In most cases I'm setting nTOTALJOBS to 2. I'm looking into this further with my DBAs.
    My questions are:
    1. Has anyone come across this sort of problem before?
    2. If so is there something I can do in my code or DB parameters to alleviate or test this?
    Notes.
    1. The process uses row-by-row processing in PL/SQL. This is done to get row-by-row error handling. I'm not going to change this in the short term.
    2. The SQL does a call over a database link, I don't know if this has any effect.
    3. This method does have a proven performance benefit
    4. The code snippets above are just illustrative, I'm not asking for syntax corrections.
    Thank you for taking the time to read through this.

    I would first investigate the "problem with latches" to be sure that it has anything to do with these jobs that you are submitting.
    Your concern seems to be "because I am submitting multiple copies of the same PLSQL procedure to run concurrently, am I running into latching issues ?".
    Take a step back and look at
    a. MultiUser OLTP environments : At any time you can have more than a few users running the same SQL queries submitted through Forms / Client Frontends.
    b. ERP Report Extractions (eg in Oracle EBusiness Suite or Peoplesoft) : At any time (say during the daily batch runs ?) you can have multiple copies of the same report (with different bind variables for different business_units / countries etc) running concurrently and extracting reports.
    The question would be : How many concurrent executions of the same code is too much ?
    I would say that latching issues come up on :
    a. Shared Pool : Multiple sessions attempting to load SQL statements and Parse SQL statements
    You could reduce some of this with reusable SQLs and bind variables instead of literals in quick running jobs / user front end screens etc
    b. Buffer Cache : Hot Blocks being frequently pinned -- eg index root blocks or "busy" table blocks.
    If you are running, say, 4-8 concurrent copies of the same code on a 4 CPU or 8 CPU machine, you may not be running into latching issues on the code -- "may not be" -- we can't be too sure, until we identify which latches are under "issue".
    However, if the queries are executing nested loops and frequently accessing the same index blocks, you may more likely be having issues with latches on cache buffer chains.
    So, this is what I would suggest :
    a. Identify the size and scale of the "issues" -- how much CPU time / wait time is accounted for by the latches in relation to the total response time of each of the jobs
    b. Identify which latches are the ones under contention
    c. On the basis of which latches are holding up performance, decide your next course of action.
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
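    As a concrete starting point for steps (a) and (b), one quick (and admittedly rough) way to see which latches are suffering the most contention is to query V$LATCH, for example:
    -- Hedged sketch: top 10 latches by sleeps since instance startup.
    -- V$LATCH is cumulative, so compare snapshots taken while the jobs run.
    SELECT *
    FROM  (SELECT name, gets, misses, sleeps
           FROM   v$latch
           ORDER  BY sleeps DESC)
    WHERE rownum <= 10;
    Cross-checking against "latch free" waits in V$SESSION_WAIT (or a Statspack report) while the jobs are running will show whether those latches actually account for a meaningful share of the jobs' response time.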

  • Getting resultset from out parameter ref cursor

    Hi,
    I have written a procedure which returns a ref cursor as an OUT parameter.
    When I call that procedure, how can I retrieve the result set in my current program?
    Example:
    DECLARE
    type A is ref cursor;
    B A;
    x NUMBER :=0;
    BEGIN
    APPSEARCH(NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,B);
    FOR i in b
    loop
    x:= x+1;
    end loop;
    DBMS_OUTPUT.PUT_LINE('ANS '||x);
    END;
    but it is giving an error.
    Please advise.
    Thanks,
    Siva

    Thanks alessandro,
    SQL> DECLARE
    2 type A is ref cursor;
    B A;
    3 4 C b%rowtype;
    5 d NUMBER;
    6 BEGIN
    7 APPSEARCH(NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,B);
    8 loop
    9 fetch b into c;
    10 exit when b%notfound;
    11 DBMS_OUTPUT.PUT_LINE('ANS '||c%rowcount);
    12 end loop;
    13 close b;
    14 END;
    15 /
    C b%rowtype;
    ERROR at line 4:
    ORA-06550: line 4, column 4:
    PLS-00320: the declaration of the type of this expression is incomplete or
    malformed
    ORA-06550: line 4, column 4:
    PL/SQL: Item ignored
    ORA-06550: line 9, column 17:
    PLS-00320: the declaration of the type of this expression is incomplete or
    malformed
    ORA-06550: line 9, column 4:
    PL/SQL: SQL Statement ignored
    ORA-06550: line 11, column 35:
    PLS-00320: the declaration of the type of this expression is incomplete or
    malformed
    ORA-06550: line 11, column 6:
    PL/SQL: Statement ignored
    I don't know how to transfer the result set from the procedure's ref cursor OUT parameter to my local cursor variable. Please help.
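    For reference, the PLS-00320 errors come from declaring C as b%ROWTYPE: b is a weak ref cursor, so the compiler cannot see any row type for it. A minimal sketch of one way around it is to declare variables that match the SELECT list of the query APPSEARCH opens and fetch into those; the two columns below are purely hypothetical placeholders and must be replaced with the procedure's actual columns:
    DECLARE
      b     SYS_REFCURSOR;
      v_id  NUMBER;          -- hypothetical first column of APPSEARCH's query
      v_txt VARCHAR2(4000);  -- hypothetical second column
      x     NUMBER := 0;
    BEGIN
      APPSEARCH(NULL, NULL, NULL, NULL, NULL, NULL, NULL,
                NULL, NULL, NULL, NULL, NULL, NULL, NULL, b);
      LOOP
        FETCH b INTO v_id, v_txt;
        EXIT WHEN b%NOTFOUND;
        x := x + 1;
      END LOOP;
      CLOSE b;
      DBMS_OUTPUT.PUT_LINE('ANS ' || x);
    END;
    /
    If the query behind APPSEARCH selects from a single table, declaring the record as that table's %ROWTYPE is the simplest option and avoids listing the columns by hand.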

  • Performance problems with JDBC and mysql

    Hi,
    I have here a "one table" database which has 1,000,000 data rows. The problem is that deleting a specific entry (i.e. delete from mytable where da_value="foo1") takes between 10 and 60 seconds.
    Does anyone can help me out of this performance trap.
    LoCal

    Is the table indexed and are keys used?
    I've always preferred Postgres! :)
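    In case it helps: if da_value is not indexed, every such DELETE has to scan all 1,000,000 rows, which easily explains 10-60 seconds. A minimal sketch (hypothetical index name) of the usual fix:
    -- Index the column used in the DELETE's WHERE clause so the rows can be
    -- located without a full table scan.
    CREATE INDEX idx_mytable_da_value ON mytable (da_value);
    Keep in mind that each additional index slows down inserts and updates, so only index columns you actually filter on.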

  • Performance problems with combobox and datasource

    We have a performance problem when we connect a DataTable object (or something like it) to the DataSource property of a combobox. Below you find the source code. The SQL statement reads about 40000 rows and the result (all 40000) should be listed in the combobox. It takes about 30 seconds before this process finishes. Any suggestions?
    Dim ds As New DataSet
    strSQL = "Select * from am.city"
    conn = New Oracle.DataAccess.Client.OracleConnection(Configuration.ConfigurationSettings.AppSettings("conORA"))
    comm = New Oracle.DataAccess.Client.OracleCommand(strSQL)
    da = New Oracle.DataAccess.Client.OracleDataAdapter(strSQL, conn)
    conn.Open()
    da.Fill(ds)
    conn.Close()
    Dim dt As New DataTable
    dt = ds.Tables(0)
    ComboBox1.DataSource = dt
    ComboBox1.ValueMember = dt.Columns("id").ColumnName
    ComboBox1.DisplayMember = dt.Columns("city").ColumnName

    But how long does it take to fill the DataTable?
    I can fill a 40000 row datatable in under 4 seconds.
    DataBinding a combo box to that many rows is pretty expensive, and not normally recommended.
    David
    Dim strConnection As String = "Data Source=oracle;User ID=scott;Password=tiger;"
    Dim conn As OracleConnection = New OracleConnection(strConnection)
    conn.Open()
    Dim cmd As New OracleCommand("select * from (select * from all_objects union all select * from all_objects) where rownum <= 40000", conn)
    Dim ds As New DataSet()
    Dim da As New OracleDataAdapter(cmd)
    Dim begin As Date = Now
    da.Fill(ds)
    Console.WriteLine(ds.Tables(0).Rows.Count & " rows loaded in " & Now.Subtract(begin).TotalSeconds & " seconds")
    outputs
    40000 rows loaded in 3.734375 seconds

  • Performance problems with EP 6 and MS IE

    Hi everybody,
    for a couple of days we have been facing a severe performance problem with our SAP EP 6.0. When I access the system with MS Internet Explorer 6.0, it takes 5-10 minutes after the login. With the Firefox browser, the performance is OK. Therefore I assume that it must be a problem with the IE settings. Does anybody know a solution?
    Best regards,
       Michael

    There are a few things that this could be.  I've seen the setting "Empty Temporary Internet Files folder when browser is closed" cause a lot of performance problems (This is in the advanced settings in your IE).
    This will cause your cache to be cleared out each time the browser is closed and cause a lot more data to be downloaded each time you login to the system.
    For more analysis I'd recommend putting a tool like HTTPWatch into your IE browser and seeing which requests are using the most time.

  • Frustratingly, since I upgraded to Yosemite 10.10, I too am having the worst problems with WiFi dropping out. I've never had this problem before on my iMac 27-inch mid 2011 model. Turning WiFi off and then back on again sometimes works. Help please.

    Frustratingly, since I upgraded to Yosemite 10.10, I too am having the worst problems with WiFi dropping out. I've never had this problem before on my iMac 27-inch mid 2011 model. Turning WiFi off and then back on again sometimes works. Help please. I've already tried a lot of your suggested fixes, but without success. Why hasn't Apple Fixed this?

    Please test after taking each of the following steps that you haven't already tried. Stop when the problem is resolved. Back up all data before making any changes.
    Step 1
    Take the applicable steps in this support article. The Wireless Diagnostics program generates a large file of information about your system, which would be used by Apple Engineering in case of a support incident. Don't post the contents here.
    Step 2
    Disconnect all USB 3 devices. If you don't know which are USB 3, disconnect all USB devices except keyboard and mouse.
    Step 3
    If you're not using a wireless keyboard or trackpad, disable Bluetooth by selecting Turn Bluetooth Off from the menu with the Bluetooth icon. If you don't have that menu, open the Bluetooth preference pane in System Preferences and check the box marked Show Bluetooth in menu bar. Test. If you find that Wi-Fi works better with Bluetooth disabled, you should use the 5 GHz Wi-Fi band. Your router may not support it; in that case, you need a new router.
    Step 4
    Open the Energy Saver pane in System Preferences and unlock the settings, if necessary. Select the Power Adapter  tab, if there is one. Uncheck the box marked
              Wake for Wi-Fi network access
    if it's checked.
    Step 5
    Open the Network pane in System Preferences and make a note of your settings in the Wi-Fi service. It may be helpful to take screenshots of the various tabs in the preference pane. If the preference pane is locked, unlock it by clicking the padlock icon and entering your administrator password. Delete Wi-Fi from the service list on the left by selecting it and clicking the minus-sign button at the bottom. Then recreate the service by clicking the plus-sign button and following the prompts.
    Step 6
    In the Wi-Fi settings, select
              Advanced... ▹ TCP/IP ▹ Configure IPv6: Link-local
    Click OK and then Apply.
    Step 7
    Reset the System Management Controller.
    Step 8
    Reset the PRAM.
    Step 9
    Launch the Keychain Access application. Search for and delete all AirPort network password items that refer to the network. Make a note of the password first.
    Step 10
    Make a "Genius" appointment at an Apple Store, or go to another authorized service center.

  • Problem with XMLTABLE and LEFT OUTER JOIN

    Hi all.
    I have a problem with XMLTABLE and LEFT OUTER JOIN: in 11g it returns the correct result, but in 10g it doesn't; it is treated as an INNER JOIN.
    SELECT * FROM v$version;
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    PL/SQL Release 11.2.0.1.0 - Production
    "CORE     11.2.0.1.0     Production"
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    --test for 11g
    CREATE TABLE XML_TEST(
         ID NUMBER(2,0),
         XML XMLTYPE
    );
    INSERT INTO XML_TEST
    VALUES
    (
         1,
         XMLTYPE('
              <msg>
                   <data>
                        <fields>
                             <id>g1</id>
                             <dat>data1</dat>
                        </fields>
                   </data>
              </msg>')
    );
    INSERT INTO XML_TEST
    VALUES
    (
         2,
         XMLTYPE('
              <msg>
                   <data>
                        <fields>
                             <id>g2</id>
                             <dat>data2</dat>
                        </fields>
                   </data>
              </msg>')
    );
    INSERT INTO XML_TEST
    VALUES
    (
         3,
         XMLTYPE('
              <msg>
                   <data>
                        <fields>
                             <id>g3</id>
                             <dat>data3</dat>
                        </fields>
                        <fields>
                             <id>g4</id>
                             <dat>data4</dat>
                        </fields>
                        <fields>
                             <dat>data5</dat>
                        </fields>
                   </data>
              </msg>')
    );
    SELECT
         t.id,
         x.dat,
         y.seqno,
         y.id_real
    FROM
         xml_test t,
          XMLTABLE (
              '/msg/data/fields'
              passing t.xml
              columns
                   dat VARCHAR2(10) path 'dat',
                   id XMLTYPE path 'id'
         )x LEFT OUTER JOIN
          XMLTABLE (
              'id'
              passing x.id
              columns
                   seqno FOR ORDINALITY,
                   id_real VARCHAR2(30) PATH '.'
          )y ON 1=1;
    ID     DAT     SEQNO     ID_REAL
    1     data1     1     g1
    2     data2     1     g2
    3     data3     1     g3
    3     data4     1     g4
    3     data5
    Here everything is fine; now the problem:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
    PL/SQL Release 10.2.0.1.0 - Production
    "CORE     10.2.0.1.0     Production"
    TNS for HPUX: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    --exactly the same environment as 11g (tables and rows)
    SELECT
         t.id,
         x.dat,
         y.seqno,
         y.id_real
    FROM
         xml_test t,
          XMLTABLE (
              '/msg/data/fields'
              passing t.xml
              columns
                   dat VARCHAR2(10) path 'dat',
                   id XMLTYPE path 'id'
         )x LEFT OUTER JOIN
          XMLTABLE (
              'id'
              passing x.id
              columns
                   seqno FOR ORDINALITY,
                   id_real VARCHAR2(30) PATH '.'
          )y ON 1=1;
    ID     DAT     SEQNO     ID_REAL
    1     data1     1     g1
    2     data2     1     g2
    3     data3     1     g3
    3     data4     1     g4
    As you can see, in 10g I don't have the last row; it seems that Oracle 10g doesn't recognize the LEFT OUTER JOIN.
    Is this a bug? Metalink says that sometimes we can get an ORA-00600, but in this case there is no error returned, just incorrect results.
    Please help.
    Regards.

    Hi A_Non.
    Thanks a lot, I tried with this:
    SELECT
         t.id,
         x.dat,
         y.seqno,
         y.id_real
    FROM
         xml_test t,
          XMLTABLE (
              '/msg/data/fields'
              passing t.xml
              columns
                   dat VARCHAR2(10) path 'dat',
                   id XMLTYPE path 'id'
         )x,
          XMLTABLE (
              'id'
              passing x.id
              columns
                   seqno FOR ORDINALITY,
                   id_real VARCHAR2(30) PATH '.'
          )(+) y;
    And it is giving me the complete output.
    Thanks again.
    Regards.

  • Performance problems with SAP GUI 7.10 and BEx 3.5 Patch 400?

    Hi everybody,
    we installed SAP GUI 7.10 and BEx 3.5 Patch 400 and detected huge performance problems with this version in comparison to SAP GUI 6.40 with BEx 3.5 or BEx 7.0 Patch 800.
    Has anybody seen the same problems?
    Best regards,
    Ulli

    The most important question when you are talking about performance issues:
    which OS are you working on, and which Excel version?
    ciao
    Joke

  • Performance Problems with "For all Entries" and a big internal table

    We have big performance problems with the following statement:
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
      FOR ALL ENTRIES IN gt_zmon_help
        WHERE
        status = 'IAI200' AND
        logdat IN gs_dat AND
        ztrack = gt_zmon_help-ztrack.
    The internal table gt_zmon_help contains over 1,000,000 entries.
    Does anyone have an idea how to improve the performance?
    Thank you!

    >
    Matthias Weisensel wrote:
    > We have big performance problems with the following statement:
    >
    >  
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
    >   FOR ALL ENTRIES IN gt_zmon_help
    >     WHERE
    >     status = 'IAI200' AND
    >     logdat IN gs_dat AND
    >     ztrack = gt_zmon_help-ztrack.
    >
    > The internal table gt_zmon_help contains over 1,000,000 entries.
    > Does anyone have an idea how to improve the performance?
    >
    > Thank you!
    You can't expect miracles.  With over a million entries in your itab any select is going to take a bit of time. Do you really need all these records in the itab?  How many records is the select bringing back?  I'm assuming that you have got and are using indexes on your ZEEDMT_ZMON table. 
    In this situation, I'd first of all try to think of another way of running the query and restricting the amount of data, but if this were not possible I'd just run it in the background and accept that it is going to take a long time.

  • Canon pixma pro 9500 having problems with my print out. colors are glazed and missing colors

    Canon Pixma Pro 9500: having problems with my print out; colors are glazed and missing colors.
    I have profiled the Mac and printer with ColorMunki and uploaded the profile successfully.
    I set the printer to the correct ICC profile.
    The print out comes out the same, with glazed colors and missing tones/colors.
    However, when I use the same printer on Windows with Lightroom, following the same ICC profile (the ColorMunki calibrated profile), the print outs are excellent.
    I have also tried letting the printer manage the profile using the Mac and Aperture, and I get the same poor prints.
    Can you please assist? Thanks.

    I've a Pro9000 and also use Colormunki successfully.
    When I use Colormunki, I calibrate both of my monitors and the printer/paper combination. Colormunki saves the generated profiles to the correct places -- I don't load or change any profiles after that as that would mess-up what the Colormunki had just done for me, and this would likely mess-up the prints.
    Though I double-check the settings in the print dialogue, the choice for the printer handling things is already grayed-out. Colorsync handles it all just fine.
