ResultSet returning too many rows when using GROUP BY

Hello,
My problem is this:
I run the following query, which uses a GROUP BY:
sQury = "SELECT Distinct Searches.categoryId ,count(Searches.categoryId) AS NUMBER_OF_APPEARANCE" +
" FROM Searches " +
" GROUP BY Searches.categoryId ";
When I run this query in Access, it returns 15 rows (since I have 15 products).
When I run the same query through a ResultSet I get 60 rows (which is the number of records in the Searches table).
NOTE: the first 15 rows in the ResultSet are the 15 rows I got in Access, and after that I have something like null rows with no data inside (which causes real problems in my program).
I use the following code to run the query:
public ResultSet m_resultSet = null;
private Statement m_stat;
private Connection m_conn;
m_stat = m_conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);
m_stat.execute(sQury);
m_resultSet = m_stat.getResultSet();
If anyone sees any problem with this, please help.

Try removing DISTINCT as well.
(That suggestion also makes sense because an aggregate such as COUNT cannot be updated anyway, even if it turns out to have no impact on the problem.)
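Since Access itself returns the expected 15 rows, the extra rows almost certainly come from how the ResultSet is created and read, not from the SQL. Below is a minimal sketch of a safer way to read the grouped counts; the connection URL is only a placeholder, and the table/column names are taken from the post above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SearchCounts {
    public static void main(String[] args) throws Exception {
        // Placeholder URL - use whatever driver/URL the application already uses to reach the database.
        try (Connection conn = DriverManager.getConnection("jdbc:odbc:SearchesDb");
             // Forward-only, read-only is enough for an aggregate query and avoids
             // driver quirks with scrollable/updatable result sets.
             Statement stat = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                   ResultSet.CONCUR_READ_ONLY);
             ResultSet rs = stat.executeQuery(
                     "SELECT Searches.categoryId, COUNT(Searches.categoryId) AS NUMBER_OF_APPEARANCE "
                     + "FROM Searches GROUP BY Searches.categoryId")) {
            int rows = 0;
            while (rs.next()) {                     // rely only on next(), never on a fixed row count
                System.out.println(rs.getInt("categoryId") + " -> "
                        + rs.getInt("NUMBER_OF_APPEARANCE"));
                rows++;
            }
            System.out.println(rows + " rows");     // should print 15, one per categoryId
        }
    }
}

If the program really needs a scrollable ResultSet, keeping it CONCUR_READ_ONLY is still worth trying: an updatable result set over a GROUP BY/COUNT query is not meaningful anyway, and asking for one is a likely source of the odd extra rows.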

Similar Messages

  • WUT-113 Too many rows when using Webutil to upload a doc or pdf

    Hi All,
    I am using Webutil to upload and view files and am receiving the error msg 'WUT-113 Too many rows matched the supplied where clause'
    The process works fine for uploading photos but not when I try and upload documents and pdf files.
    I am on v 10.1.2.0.2 and using XP.
    The code example (for documents) is outlined below, but I think the issue must be due to some sort of incorrect Webutil configuration
    on my machine.
    declare
         vfilename varchar2(3000);
         vboolean boolean;
    begin
         vfilename := client_get_file_name('c:\',file_filter => 'Document files (*.doc)|*.doc|');
         vboolean := webutil_file_transfer.Client_To_DB_With_Progress
    ( vfilename,
    'lobs_table',
    'word_blob',
    'blob_id = '||:blob_id,
    'Progress',
    'Uploading File '||vfilename,
    true,
    'CHECK_LOB_PROGRESS');
    end;
    Any assistance in this matter would be appreciated.
    kind regards,
    Tom

    Hi Sarah,
    Many thanks for the reply.
    All I'm trying to do is to click on the Browse Document button in a form to upload
    a document (in this example) from my machine and save it to the db table called lobs_table
    using the webutil_file_transfer.Client_To_DB_With_Progress program.
    When I first access the form the field :blob_id is populated (by a When-create-Record trigger)
    with a value made up of sysdate in NUMBER format as DDMMHHMISS e.g. 0106101025
    When I press 'Browse Document' (a button in the form), the dialog box opens and
    I select a document and click OK, and then I see the error message 'WUT-113 Too many rows matched the supplied where clause'. Yet the where clause element of the call to the webutil_file_transfer.Client_To_DB_With_Progress program
    should be the :blob_id value ('blob_id = '||:blob_id), i.e. a single value populated in the field when I first access the form - so why am I seeing the too many rows error?
    I may be missing something obvious as I've only just started using Webutil.
    Kind regards,
    Tom

  • BBP_OM_DETERMINE_RESP_PGRP returns too many rows when using CATEGORY_ID

    Hi,
    I need a function that gives me the purchase group of a certain category id.
    I filled IS_RESP_ITEM_DATA-CATEGORY_ID with the correct value, but the function returns all purchase groups as the result. In the OM the responsibility for purchase group and category id is 1:1, so it should return only 1 row.
    Suggestions?
    Thx!

  • SQL subquery returning too many rows with Max function

    Hello, I hope someone can help me; I have been working on this all day. I need to get the max value, and the date and id that max value is associated with, between specific date ranges. Here is my code; I have tried many different versions, but it still returns
    more than one ID and date.
    Thanks in advance
    SELECT DISTINCT bw_s.id,
           avs.carProd, cd_s.RecordDate,
           cd_s.milkProduction AS MilkProd,
           cd_s.WaterProduction AS WaterProd
    FROM tblTest bw_s
    INNER JOIN tblTestCp cd_s WITH (NOLOCK)
        ON bw_s.id = cd_s.id
        AND cd_s.recorddate BETWEEN '08/06/2014' AND '10/05/2014'
    INNER JOIN
        (SELECT id, MAX(CarVol) AS carProd
         FROM tblTestCp
         WHERE recorddate BETWEEN '08/06/2014' AND '10/05/2014'
         GROUP BY id) avs
        ON avs.id = bw_s.id
    id RecordDate carProd       MilkProd WaterProd
    47790 2014-10-05   132155   0 225
    47790 2014-10-01   13444    0 0
    47790 2014-08-06   132111    10 100
    47790 2014-09-05   10000    500 145
    47790 2014-09-20   10000    800 500
    47791 2014-09-20   10000    300 500
    47791 2014-09-21   10001    400 500
    47791 2014-08-21   20001    600 500
    And the result should be (max carProd):
    id RecordDate carProd       MilkProd WaterProd
    47790 2014-10-05   132155  0 225
    47791 2014-08-21   20001    600 500

    Help your readers help you.  Remember that we cannot see your screen, do not know your data, do not understand your schema, and cannot test a query without a complete script.  So - remove the derived table (to which you gave the alias "avs")
    and the associated columns from your query.  Does that generate the correct results?  I have my doubts since you say "too many" and the derived table will generate a single row per ID.  That suggests that your join between the first
    2 tables is the source of the problem.  In addition, the use of DISTINCT is generally a sign that the query logic is incorrect, that there is a schema issue, or that there is a misunderstanding of the schema. 
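    If it helps as a starting point, the usual rewrite is to join the derived table back on both the id and the MAX value, so that only the detail row holding the maximum survives. The sketch below (shown as it would be issued from JDBC, in keeping with the Java elsewhere on this page) reuses the table and column names from the post and is untested against the real schema; note that ties on CarVol within the range would still give more than one row per id:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class MaxCarProd {
        // Hypothetical helper: prints one row per id, the row that holds MAX(CarVol) in the date range.
        static void printMaxCarProd(Connection conn) throws Exception {
            String sql =
                "SELECT bw_s.id, cd_s.RecordDate, cd_s.CarVol AS carProd, "
              + "       cd_s.milkProduction AS MilkProd, cd_s.WaterProduction AS WaterProd "
              + "FROM tblTest bw_s "
              + "INNER JOIN tblTestCp cd_s ON bw_s.id = cd_s.id "
              + "  AND cd_s.recorddate BETWEEN '08/06/2014' AND '10/05/2014' "
              + "INNER JOIN (SELECT id, MAX(CarVol) AS carProd "
              + "            FROM tblTestCp "
              + "            WHERE recorddate BETWEEN '08/06/2014' AND '10/05/2014' "
              + "            GROUP BY id) avs "
              + "  ON avs.id = cd_s.id AND avs.carProd = cd_s.CarVol";  // the extra condition that removes the duplicates
            try (Statement st = conn.createStatement(); ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getDate("RecordDate")
                            + " " + rs.getInt("carProd"));
                }
            }
        }
    }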

  • Tag Query History mode returning too many rows of data

    I am running a Tag Query from HQ to a plant site and want to limit the amount of data that returns to the minimum required to display trends on a chart.  The minimum required is subjective, but will be somewhere between 22 and 169 data points for a week's data.  Testing and viewing the result is needed to determine what an acceptable minimum is.
    I build a Tag Query with a single tag and set it to History mode.  I set a seven-day period going midnight to midnight, and I set the row count to 22.  When I execute the query it returns 22 data points, but when I go to visualization I get 565 data points.  That is obviously not what I want, as I want a very slim dataset coming back from the IP21 server (to minimize the load on the pipe).
    Any suggestions?

    Hi Michael,
    it looks to me like you have enabled the "Use Screen Resolution" option in your display template or in the applet HTML. Setting this option makes the display template fetch as many rows as there are pixels in the chart area. Like setting a rowcount in the applet HTML as a param, this will override any rowcount limitations you have set at the Query Template level...
    Hope this helps,
    Sascha

  • Java exception "Too many rows" when launching

    Hi,
    I'm trying to run SQLDeveloper on AIX 5.2
    The Java software is the JAVA 1.5 :
    $ ./java -version
    java version "1.5.0"
    Java(TM) 2 Runtime Environment, Standard Edition (build pap64devifx-20070725 (SR5a))
    IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 AIX ppc64-64 j9vmap6423-20070426 (JIT enabled)
    J9VM - 20070420_12448_BHdSMr
    JIT - 20070419_1806_r8
    GC - 200704_19)
    JCL - 20070725
    When launching sqldeveloper, I get more than 3000 error lines...
    Here is the beginning:
    java.io.FileNotFoundException: <INSTALLATION-PATH>/jdev/extensions/oracle.jdeveloper.db.sqlplus.jar (Too many open files)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
    at oracle.ide.boot.JarDirs.getDirsImpl(JarDirs.java:75)
    at oracle.ide.boot.JarDirs.<init>(JarDirs.java:63)
    at oracle.ide.boot.SharedJarByPackage.<init>(SharedJarByPackage.java:19)
    at oracle.ide.boot.IdeSharedCodeSourceFactory.createCodeSource(IdeSharedCodeSourceFactory.java:30)
    at oracle.classloader.SharedCodeSourceFactory.create(SharedCodeSourceFactory.java:43)
    at oracle.classloader.SharedCodeSourceSet.subscribe(SharedCodeSourceSet.java:315)
    at oracle.classloader.SharedCodeSourceSet.subscribe(SharedCodeSourceSet.java:174)
    at oracle.classloader.SharedCodeSourceSet.subscribe(SharedCodeSourceSet.java:148)
    at oracle.classloader.SharedCodeSourceSet.subscribe(SharedCodeSourceSet.java:215)
    at oracle.classloader.PolicyClassLoader.addCodeSource(PolicyClassLoader.java:874)
    at oracle.ideimpl.extension.ExtensionManagerImpl.addToPolicyClassLoader(ExtensionManagerImpl.java:1464)
    at oracle.ideimpl.extension.ExtensionManagerImpl.addURLToClassPath(ExtensionManagerImpl.java:1401)
    at oracle.ideimpl.extension.ExtensionManagerImpl.mav$addURLToClassPath(ExtensionManagerImpl.java:116)
    at oracle.ideimpl.extension.ExtensionManagerImpl$4.addToClasspath(ExtensionManagerImpl.java:690)
    at javax.ide.extension.spi.BaseExtensionVisitor.addExtensionSourceToClasspath(BaseExtensionVisitor.java:278)
    If it could help you: the splash screen is displayed for a short time and the progress bar starts, but only for a short time, after which the process dies.
    Thanks,
    Christian

    java.io.FileNotFoundException: <INSTALLATION-PATH>/jdev/extensions/oracle.jdeveloper.db.sqlplus.jar (Too many open files)
    You have exceeded your max open files limit as determined by your operating system. You need to talk to your system administrators.

  • Left outer join using date range returns too many rows

    I am trying to pull data for a website.
    Names table:
    company_name varchar2(30)
    julian_day varchar2(3)
    logins number(3)
    login_errors number(3)
    Given a julian date range (e.g. 250-252), I am displaying the information from the Names table.
    The problem is that I also need to display changes (increases/decreases) in the data. I do that by coloring the text based on a 10% increase/decrease. Data for the 3 days 250-252 would be compared to data for the previous 3 days 247-249.
    Not all companies will report data on all days, so some gaps in the data may exist. Therefore, I cannot do just a simple join.
    I am trying to write a query that will give me this information if the user chooses days 250-252.
    Company_name
    sum(logins) for days 250-252
    sum(login_errors) for days 250-252
    sum(logins) for days 247-249
    sum(login_errors) for days 247-249
    The query I'm using is:
    select cur.company_name, sum(cur.logins),
    sum(cur.login_errors), sum(old.logins), sum(old.login_errors)
    FROM names cur LEFT OUTER JOIN names old
    ON cur.company_name = old.company_name
    WHERE cur.adate>='250' and cur.adate<='252'
    and old.adate>='247' and old.adate<='249'
    GROUP by cur.company_name
    Given this data:
    Company_name adate logins login_errors
    ABC 247 10 10
    ABC 248 20 20
    ABC 249 30 30
    ABC 250 15 15
    ABC 251 25 25
    ABC 252 35 35
    My problem is that it returns:
    adate cur.logins cur.login_err old.logins old.login_err
    250 15 15 60 60
    251 25 25 60 60
    252 35 35 60 60
    How can I get it to only give me the one "old" day's data? I went with the LEFT OUTER JOIN because it's possible that there is no data for an "old" day.
    Thanks in advance.....

    Your problem stems from the join itself, and would be the same even without the OUTER JOIN. With your data, there are 3 records in cur and 3 records in old. The join matches each record in cur to each record in old resulting in 9 records in total. Without the SUM, this is clear:
    SQL> SELECT cur.company_name, cur.logins, cur.login_errors,
      2         old.logins, old.login_errors, cur.adate cad, old.adate oad
      3  FROM names cur LEFT OUTER JOIN names old
      4                 ON cur.company_name = old.company_name
      5  WHERE cur.adate>=250 and cur.adate<=252 and
      6        old.adate>=247 and old.adate<=249;
    COMPANY_NA     LOGINS LOGIN_ERRORS     LOGINS LOGIN_ERRORS        CAD        OAD
    ABC                35           35         10           10        252        247
    ABC                25           25         10           10        251        247
    ABC                15           15         10           10        250        247
    ABC                35           35         20           20        252        248
    ABC                25           25         20           20        251        248
    ABC                15           15         20           20        250        248
    ABC                35           35         30           30        252        249
    ABC                25           25         30           30        251        249
    ABC                15           15         30           30        250        249
    9 rows selected.
    You can do this with only one reference to the table.
    SELECT company_name,
           SUM(CASE WHEN adate BETWEEN 250 and 252 THEN logins ELSE 0 END) curlog,
           SUM(CASE WHEN adate BETWEEN 250 and 252 THEN login_errors ELSE 0 END) curerr,
           SUM(CASE WHEN adate BETWEEN 247 and 249 THEN logins ELSE 0 END) oldlog,
           SUM(CASE WHEN adate BETWEEN 247 and 249 THEN login_errors ELSE 0 END) olderr
    FROM names
    WHERE adate BETWEEN 247 and 252
    GROUP BY company_name
    HTH
    John

  • Left join and where clause with not equal (<>) returns too many rows

    Say I have something like this
    Table A
    =========
    Id
    OrderNum
    Date
    StoreName
    AddressKey
    Table B
    ========
    Id
    StreetNumber
    City
    State
    select a.* from [Table A] a
    left join [Table B] b on a.AddressKey = b.Id
    where a.StoreName <> 'Burger place'
    The trouble is that the above query still returns rows that have StoreName = 'Burger place'.
    One way I've handled this is to use a common table expression, select everything into that, then select from the CTE and apply the filter there.  How could you handle it in the same query, however?

    Hi Joe,
    Thanks for your notes.
    INT SURROGATE PRIMARY KEY provides a small footprint JOIN ON column. Hence performance gain and simple JOIN programming.  In the Addresses table, address_id is 4 bytes as opposed to 15 bytes san. Similarly for the Orders table.
    INT SURROGATE PRIMARY KEY is a meaningless number which can be duplicated at will in other tables as a FOREIGN KEY.  Having a meaningful PRIMARY KEY violates the RDBMS basics about avoiding data duplication.  If I make CelebrityName (Frank Sinatra)
    a PRIMARY KEY, I have to duplicate "Frank Sinatra" as an FK wherever it is needed, as opposed to duplicating the SURROGATE PRIMARY KEY CelebrityID (79), a meaningless number.
    This is how we design in the SQL Server world (a short JPA sketch of the same idea follows at the end of this reply).
    QUOTE from Wiki: "
    Advantages
    Immutability
    Surrogate keys do not change while the row exists. This has the following advantages:
    Applications cannot lose their reference to a row in the database (since the identifier never changes).
    The primary or natural key data can always be modified, even with databases that do not support cascading updates across related
    foreign keys.
    Requirement changes
    Attributes that uniquely identify an entity might change, which might invalidate the suitability of natural keys. Consider the following example:
    An employee's network user name is chosen as a natural key. Upon merging with another company, new employees must be inserted. Some of the new network user names create conflicts because their user names were generated independently (when the companies
    were separate).
    In these cases, generally a new attribute must be added to the natural key (for example, an
    original_company column). With a surrogate key, only the table that defines the surrogate key must be changed. With natural keys, all tables (and possibly other, related software) that use the natural key will have to change.
    Some problem domains do not clearly identify a suitable natural key. Surrogate key avoids choosing a natural key that might be incorrect.
    Performance
    Surrogate keys tend to be a compact data type, such as a four-byte integer. This allows the database to query the single key column faster than it could multiple columns. Furthermore a non-redundant distribution of keys causes the resulting
    b-tree index to be completely balanced. Surrogate keys are also less expensive to join (fewer columns to compare) than
    compound keys.
    Compatibility
    While using several database application development systems, drivers, and
    object-relational mapping systems, such as
    Ruby on Rails or
    Hibernate, it is much easier to use an integer or GUID surrogate keys for every table instead of natural keys in order to support database-system-agnostic operations and object-to-row mapping.
    Uniformity
    When every table has a uniform surrogate key, some tasks can be easily automated by writing the code in a table-independent way.
    Validation
    It is possible to design key-values that follow a well-known pattern or structure which can be automatically verified. For instance, the keys that are intended to be used in some column of some table might be designed to "look differently from"
    those that are intended to be used in another column or table, thereby simplifying the detection of application errors in which the keys have been misplaced. However, this characteristic of the surrogate keys should never be used to drive any of the logic
    of the applications themselves, as this would violate the principles of
    Database normalization"
    LINK: http://en.wikipedia.org/wiki/Surrogate_key
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Database Design
    New Book / Kindle: Beginner Database Design & SQL Programming Using Microsoft SQL Server 2014
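    To make the "Compatibility" point in the quote above concrete, here is a minimal JPA/Hibernate-style sketch of the CelebrityID example; the entity and column names are hypothetical, taken only from the example in this reply:

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;

    @Entity
    public class Celebrity {

        // Surrogate primary key: a meaningless, immutable integer generated by the database,
        // cheap to join on and duplicated as a FOREIGN KEY wherever it is needed.
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Integer celebrityId;

        // The natural, meaningful attribute stays an ordinary (optionally unique) column,
        // so it can change without breaking foreign keys elsewhere.
        @Column(nullable = false)
        private String celebrityName;   // e.g. "Frank Sinatra"

        public Integer getCelebrityId() { return celebrityId; }
        public String getCelebrityName() { return celebrityName; }
        public void setCelebrityName(String celebrityName) { this.celebrityName = celebrityName; }
    }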

  • I tried to send a mail message to too many addressees. When the rejection came back "cannot send message using the server..." the window is too long to be able to see the choices at the bottom of it. How can I see the choices at the bottom of that window?

    I tried to send it through Gmail and the acct is a POP acct.
    I'm not concerned about sending to the long address list. I just can't get the email and the window that says "cannot send email using the server..." to go away. The default must be "retry", because although I cannot see the choices at the bottom of the window, if I hit return it tries again... and then of course comes back with the very long pop-up window that I cannot see the bottom of, so I can't tell it to quit trying...

  • List of Value: Best practice when there are too many rows.

    Hi,
    I am working in JDev 12c. Imagine the following scenario. We have an employee table with organization_id as one of its attributes. I want to set up a LOV for this attribute. From what I understand, if the Organization table contains too many rows (like 3000), this will create extreme overhead, and it would also be impossible to scroll through a simple LOV. So I have decided on the obvious option: to use the LOV as a Combo Box with List of Values. Great so far.
    That LOV will be used for every user, but it doesn't really depend on the user, and the list of organizations will rarely change. I have a sharedApplicationModule that I am using to retrieve lookup values from the DB. Do you think it would be OK to put my ORGANIZATION VO in there and create the View Accessor for my LOV in the Employees View?
    What considerations should I take in terms of TUNING the Organization VO?
    Regards

    Hi Raghava,
    as I said, "Preparation Failed" may be (if I recall correctly) as early as the HTTP request to even get the document for indexing. If this is not possible for TREX, then of course the indexing fails.
    What I suggested was a manual reproduction. So log on to the TREX host (preferably with the user that TREX uses to access the documents) and then simply try to open one of the docs with the "failed" status by pasting its address into the browser. If this does not work, you have a pretty good idea of what's happening.
    Unfortunately, if that were the case, this would then be some issue in network communications or ticketing and authorizations, which I cannot tell you from here how to solve.
    In any case, I would advise opening a support message to SAP - probably under the portal component rather than under TREX, as I do not assume that this stage of a queue error has anything to do with the actual engine.
    Best,
    Karsten

  • How can I show additional tab rows when using many open tabs?

    How can I show additional tab rows when using many open tabs?

    What method (code) did you use to get the Tab bar displaying in the space used for the Navigation Toolbar (location bar)?
    The Tab bar should be displayed above the Navigation Toolbar.
    Start Firefox in Safe Mode to check if one of the extensions (Firefox/Tools > Add-ons > Extensions) or if hardware acceleration is causing the problem (switch to the DEFAULT theme: Firefox/Tools > Add-ons > Appearance).
    *Do NOT click the Reset button on the Safe Mode start window.
    *https://support.mozilla.org/kb/Safe+Mode
    *https://support.mozilla.org/kb/Troubleshooting+extensions+and+themes

  • Too many rows found

    I have two data blocks: one data block joins two tables and the second data block is based on one table.
    The first data block has all fields in a 1:1 relationship with packing_id, and the second (details) data block has multiple rows
    for every packing_id. I wrote two procedures for the two data blocks; they are called in the respective Post-Query triggers.
    My problem is that when I run the form it gives the error Message('too many rows found_orders_begin');
    Here are my codes.
    PROCEDURE post_query IS
    CURSOR mast_cur IS
    SELECT pa.ship_to_last_name,
    pa.ship_to_first_name,
    pa.ship_to_address1,
    pa.ship_to_address2,
    pa.ship_to_city,
    p.packing_id
    FROM packing_attributes pa, packing p
    WHERE p.packing_id = pa.packing_id
    AND p.packing_id = :PACKING_JOINED.PACKING_ID;
    BEGIN
    Message('too many rows found_orders_begin');
    OPEN mast_cur;
    loop
    FETCH mast_cur INTO :PACKING_JOINED.SHIP_TO_LAST_NAME,
    :PACKING_JOINED.SHIP_TO_FIRST_NAME,
    :PACKING_JOINED.SHIP_TO_ADDRESS1,
    :PACKING_JOINED.SHIP_TO_ADDRESS2,
    :PACKING_JOINED.SHIP_TO_CITY,
    :PACKING_JOINED.PACKING_ID;
    end loop;
    CLOSE mast_cur;
    EXCEPTION
    WHEN too_many_rows THEN
    Message('too many rows found');
    WHEN no_data_found THEN
    Message('no data was found there');
    WHEN OTHERS THEN
    Message('do something else');
    END post_query;
    Detail proc
    PROCEDURE post_query IS
    CURSOR det_cur IS
    SELECT pd.quantity,
    pd.stock_number
    FROM packing_details pd, packing p
    WHERE p.packing_id = pd.packing_id
    AND pd.packing_id = :PACKING_JOINED.PACKING_ID;
    BEGIN
    Message('too many rows found_pack_begin');
    OPEN det_cur;
    FETCH det_cur INTO
    :DETAILS.QUANTITY,
    :DETAILS.STOCK_NUMBER;
    CLOSE det_cur;
    EXCEPTION
    WHEN too_many_rows THEN
    Message('too many rows found');
    WHEN no_data_found THEN
    Message('no data was found there');
    WHEN OTHERS THEN
    Message('do something else');
    END post_query;
    Thanks in advance for your help.
    Sandy

    Thanks for reply.
    Maybe it gives this message because you have programmed to show this message ?
    I intentionally gave this message to see how far my code is working; if I don't give this message and execute the query, I get FRM-41050: You cannot update this record.
    Even though I am not updating the record (I am querying it) and the data block's Update Allowed property is set to No.
    Some additional comments on your code:
    What is the loop supposed to do? You just fill the same form fields repeatedly with the values of your cursor, so after the loop only the last record from your query will be shown. In general, in POST-QUERY you read lookups, not details.
    Sorry, but I have no idea how to show detail records; that's why I tried the loop. In the first proc I will have only 1 row returned, so I guess I don't need the loop in that proc?
    In the second there will be multiple rows for one packing_id (packing_id is the common column for both blocks); please let me know how to do that.
    Your exception handlers for NO_DATA_FOUND and TOO_MANY_ROWS are useless, for these errors cannot be raised using a cursor-for-loop.
    I will remove these. Thanks
    Sandy
    Edited by: sandy162 on Apr 2, 2009 1:28 PM

  • Exception too many rows...

    Hi
    I am getting two different outputs with the following code, depending on whether I declare the variable the first way or the second way.
    When I declare the variable v_empno as number(10), the too_many_rows exception is raised, and when I then dbms_output this variable it is null.
    But when I declare the same variable as table.column%type and the same scenario happens and I dbms_output the value of the variable, it is not null; rather it is the first value from the output of the query.
    declare
    --v_empno number(10);
    v_empno emp.empno%type;
    begin
    dbms_output.put_line('before '||v_empno );
    select empno into v_empno from emp;
    dbms_output.put_line('first '||v_empno);
    exception when too_many_rows then
    dbms_output.put_line('second '||v_empno);
    dbms_output.put_line('exception'||sqlerrm);
    end;
    Is there any specific reason for this?
    Your comments please.
    Thanks
    Sidhu

    In 9i:
    SQL> declare
      2  --v_empno number(10);
      3  v_empno emp.empno%type;
      4  begin
      5  dbms_output.put_line('before '||v_empno );
      6  select empno into v_empno from emp;
      7  dbms_output.put_line('first '||v_empno);
      8  exception when too_many_rows then
      9  dbms_output.put_line('second '||v_empno);
    10  dbms_output.put_line('exception'||sqlerrm);
    11  end;
    12  /
    before
    second 7369
    exceptionORA-01422: exact fetch returns more than requested number of rows
    PL/SQL procedure successfully completed.
    SQL> declare
      2  v_empno number;
      3  --v_empno emp.empno%type;
      4  begin
      5  dbms_output.put_line('before '||v_empno );
      6  select empno into v_empno from emp;
      7  dbms_output.put_line('first '||v_empno);
      8  exception when too_many_rows then
      9  dbms_output.put_line('second '||v_empno);
    10  dbms_output.put_line('exception'||sqlerrm);
    11  end;
    12  /
    before
    second
    exceptionORA-01422: exact fetch returns more than requested number of rows
    PL/SQL procedure successfully completed.
    SQL> edit
    Wrote file afiedt.buf
      1  declare
      2  v_empno number(10);
      3  --v_empno emp.empno%type;
      4  begin
      5  dbms_output.put_line('before '||v_empno );
      6  select empno into v_empno from emp;
      7  dbms_output.put_line('first '||v_empno);
      8  exception when too_many_rows then
      9  dbms_output.put_line('second '||v_empno);
    10  dbms_output.put_line('exception'||sqlerrm);
    11* end;
    SQL> /
    before
    second 7369
    exceptionORA-01422: exact fetch returns more than requested number of rows
    PL/SQL procedure successfully completed.
    In 10G:
    SQL> declare
      2  v_empno number(10);
      3  --v_empno emp.empno%type;
      4  begin
      5  dbms_output.put_line('before '||v_empno );
      6  select empno into v_empno from emp;
      7  dbms_output.put_line('first '||v_empno);
      8  exception when too_many_rows then
      9  dbms_output.put_line('second '||v_empno);
    10  dbms_output.put_line('exception'||sqlerrm);
    11  end;
    12  /
    before
    second 7369
    exceptionORA-01422: exact fetch returns more than requested number of rows
    PL/SQL procedure successfully completed.
    SQL> edit
    Wrote file afiedt.buf
      1  declare
      2  v_empno number;
      3  --v_empno emp.empno%type;
      4  begin
      5  dbms_output.put_line('before '||v_empno );
      6  select empno into v_empno from emp;
      7  dbms_output.put_line('first '||v_empno);
      8  exception when too_many_rows then
      9  dbms_output.put_line('second '||v_empno);
    10  dbms_output.put_line('exception'||sqlerrm);
    11* end;
    SQL> /
    before
    second 7369
    exceptionORA-01422: exact fetch returns more than requested number of rows
    PL/SQL procedure successfully completed.
    SQL> edit
    Wrote file afiedt.buf
      1  declare
      2  --v_empno number;
      3  v_empno emp.empno%type;
      4  begin
      5  dbms_output.put_line('before '||v_empno );
      6  select empno into v_empno from emp;
      7  dbms_output.put_line('first '||v_empno);
      8  exception when too_many_rows then
      9  dbms_output.put_line('second '||v_empno);
    10  dbms_output.put_line('exception'||sqlerrm);
    11* end;
    SQL> /
    before
    second 7369
    exceptionORA-01422: exact fetch returns more than requested number of rows
    PL/SQL procedure successfully completed.
    Anyhow, you should not rely on the fact that Oracle fetches the first value into the variable
    and keeps it when the exception is raised.
    Tom Kyte discusses the SELECT INTO issue here:
    http://asktom.oracle.com/pls/ask/f?p=4950:8:7849913143702726938::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:1205168148688
    Rgds.

  • ExecuteQueryForObject returned too many results jdbc error

    Hi All,
    I am getting the following error when running Tomcat 6.0; I have Oracle 11g installed.
    08 Dec 2010 16:23:41 ERROR [QUARTZ_Worker-5] org.quartz.core.JobRunShell - Job DCTM.DCTMServerPipe threw an unhandled Exception:
    org.springframework.jdbc.UncategorizedSQLException: SqlMapClient operation; uncategorized SQLException for SQL []; SQL state [null]; error code [0]; Error: executeQueryForObject returned too many results.; nested exception is java.sql.SQLException: Error: executeQueryForObject returned too many results.
         at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.translate(SQLStateSQLExceptionTranslator.java:121)
         at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.translate(SQLErrorCodeSQLExceptionTranslator.java:322)
         at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:212)
         at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForObject(SqlMapClientTemplate.java:271)
         at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForObject(SqlMapClientTemplate.java:265)
         at com.emc.documentum.bpm.daos.impl.AbstractIbatisBaseDaoImpl.queryForObject(AbstractIbatisBaseDaoImpl.java:119)
         at com.emc.documentum.bpm.bamengine.daos.impl.ServerConfigDaoImpl.getDBCurrentTime(ServerConfigDaoImpl.java:28)
         at com.emc.documentum.bpm.bamengine.services.server.factory.impl.ServersFactoryImpl.updatePipeServerTimezoneOffset(ServersFactoryImpl.java:56)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:301)
         at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:198)
         at $Proxy50.updatePipeServerTimezoneOffset(Unknown Source)
         at com.emc.documentum.bpm.bamengine.services.sharedservices.impl.TaskManagerServiceImpl.executePipe(TaskManagerServiceImpl.java:341)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:301)
         at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:198)
         at $Proxy52.executePipe(Unknown Source)
         at com.emc.documentum.bpm.bamengine.scheduler.impl.PipeJobImpl.execute(PipeJobImpl.java:17)
         at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
         at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:525)
    Caused by: java.sql.SQLException: Error: executeQueryForObject returned too many results.
         at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryForObject(MappedStatement.java:124)
         at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:518)
         at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:493)
         at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForObject(SqlMapSessionImpl.java:106)
         at org.springframework.orm.ibatis.SqlMapClientTemplate$1.doInSqlMapClient(SqlMapClientTemplate.java:273)
         at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:209)
         ... 23 more
    How can this be resolved? Please help !!
    Thanks, T
    Edited by: 805903 on Dec 9, 2010 3:00 AM

    Error executeQueryForObject returned too many results
    Typical Error msg:
    “SqlMapClient operation; uncategorized SQLException for SQL []; SQL state [null]; error code [0]; Error: executeQueryForObject returned too many results.; nested exception is java.sql.SQLException: Error: executeQueryForObject returned too many results”
    The error is caused by using queryForObject (which expects a single result) instead of queryForList (when multiple results are expected).
    Example of correct solution
    In DAO:
    public List<UpdatedContractRateDO> getUpdatedCtrctRate(Map paramMap) throws DataAccessException {
    return (List<UpdatedContractRateDO>) getSqlMapClientTemplate().queryForList("charge.getUpdatedCtrctRate", paramMap);
    }

  • ExecuteQueryForObject returned too many results.

    Hi,
    My servlet accesses an Oracle DB; it works fine most of the time, except when I search for one particular entry.
    When I do this, the following error message is displayed:
    HTTP Status 500 -
    type Exception report
    message
    description The server encountered an internal error () that prevented it from fulfilling this request.
    exception
    javax.servlet.ServletException: Error: executeQueryForObject returned too many results.
         org.apache.struts.action.RequestProcessor.processException(RequestProcessor.java:523)
         org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:421)
         org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:224)
         org.apache.struts.action.ActionServlet.process(ActionServlet.java:1194)
         org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:432)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         org.netbeans.modules.web.monitor.server.MonitorFilter.doFilter(MonitorFilter.java:368)
    root cause
    java.sql.SQLException: Error: executeQueryForObject returned too many results.
         com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryForObject(GeneralStatement.java:108)
         com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:561)
         com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:536)
         com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForObject(SqlMapSessionImpl.java:93)
         com.ibatis.sqlmap.engine.impl.SqlMapClientImpl.queryForObject(SqlMapClientImpl.java:70)
         com.bmw.urt3fms.cardata.ctrl.SearchCarAction.execute(Unknown Source)
         org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:419)
         org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:224)
         org.apache.struts.action.ActionServlet.process(ActionServlet.java:1194)
         org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:432)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         org.netbeans.modules.web.monitor.server.MonitorFilter.doFilter(MonitorFilter.java:368)
    This is really strange, as all the other searches I have done worked except this one.
    Any ideas as to why it would display this?
    Nick

    Error executeQueryForObject returned too many results
    Typical Error msg:
    “SqlMapClient operation; uncategorized SQLException for SQL []; SQL state [null]; error code [0]; Error: executeQueryForObject returned too many results.; nested exception is java.sql.SQLException: Error: executeQueryForObject returned too many results”
    The error is caused by using queryForObject (which expects a single result) instead of queryForList (when multiple results are expected).
    Example of correct solution
    In DAO:
    public List<UpdatedContractRateDO> getUpdatedCtrctRate(Map paramMap) throws DataAccessException {
    return (List<UpdatedContractRateDO>) getSqlMapClientTemplate().queryForList("charge.getUpdatedCtrctRate", paramMap);
    }
