Connection with updatable result set

I am using Java and MySQL.
I need an updatable result set, but when I check the connection's DatabaseMetaData I get the following: "Updatable result sets are not supported",
and I get a read-only result set.
Can you help me get an updatable result set?
Thank you in advance.
Database properties:
db.driver=org.gjt.mm.mysql.Driver
db.connectionString=jdbc\:mysql\://host/base
See code:
try {
    conn = getConnection();
    DatabaseMetaData meta = conn.getMetaData();
    if (meta.supportsResultSetConcurrency(
            ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_UPDATABLE)) {
        setTitle("Updatable result sets are supported");
    } else {
        setTitle("Updatable result sets are not supported");
    }
} catch (SQLException e) {
    e.printStackTrace();
}

Well, if the JDBC driver you are using does not support updatable result sets, then you should find a different JDBC driver or a different solution to your problem. Note that org.gjt.mm.mysql.Driver is the old MM.MySQL driver class name; the current MySQL Connector/J driver does support updatable result sets (with some restrictions, such as selecting from a single table that has a primary key). Even so, most likely you can solve your problem with another technique than an updatable result set.
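If the driver really cannot produce an updatable result set, the standard workaround is to read with an ordinary statement and write each change back with a separate parameterized UPDATE. A minimal sketch; the connection URL, table, and column names below are placeholders, not taken from the original post:

```java
import java.sql.*;

public class ReadThenUpdate {
    // Table and column names are made up for illustration.
    static final String SELECT_SQL = "SELECT id, name FROM region_info";
    static final String UPDATE_SQL = "UPDATE region_info SET name = ? WHERE id = ?";

    public static void main(String[] args) throws SQLException {
        // Connection details are placeholders; substitute your own.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://host/base", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(SELECT_SQL);
             PreparedStatement ps = conn.prepareStatement(UPDATE_SQL)) {
            while (rs.next()) {
                // Instead of rs.updateString()/rs.updateRow(), write the
                // change back with a normal parameterized UPDATE.
                ps.setString(1, rs.getString("name").trim());
                ps.setInt(2, rs.getInt("id"));
                ps.executeUpdate();
            }
        }
    }
}
```

This works with any JDBC driver, whether or not it reports CONCUR_UPDATABLE support.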

Similar Messages

  • OBIEE: Schedule Segments with Saved Result Sets?

    Hi!
    I'd like to know how it is possible to solve this problem:
    Imagine that we have segments that were created with Segment Designer and configured with Saved Result Sets (SRS). Some of them are quicker to execute at night, so the goal is to schedule the execution of those segments at night, along with the saving of their result sets. The next morning, the marketer using Siebel Marketing, when designing and building the campaign, would only have to select the segment and then choose to use the contacts of that segment's SRS, instead of running the segment on the fly (online)...
    Was my explanation clear? How is this possible?
    Thank you.
    Regards,
    Filipe Ganhão

    Update: it looks like the wizard is really overwhelmed with big cubes (1.5k accounts, 20 dims @ approx. 10 generations each) and just breaks.
    I haven't figured out the exact limit, but I'm able to migrate packs of about 8 dims at a time, as long as I leave the accounts dim out and do that one on its own.
    Cheers,
    C.

  • Working With A Result Set

    Hi Everybody,
    I'm working on an application which will retrieve data from a database for the past seven days and then find the average for each of the days. The database is basically (id, date, hour, houraverage). I can connect to the database okay, but the problem I have is working with the result set:
    ResultSet rs = d.PerformQuery("0");
    while (rs.next()) {
        if (firsttime) {
            recorddate = rs.getString(2);
            firsttime = false;
            System.out.println("First Time");
            numberofdays = 1;
        }
        if (!firsttime) {
            if (recorddate == rs.getString(2)) {
                System.out.println(" Not First Time");
                thisdaystotal = thisdaystotal + Float.parseFloat(rs.getString(4));
                numberofrecords = numberofrecords + 1;
                System.out.println(thisdaysaverage);
                System.out.println(rs.getString(2));
            }
            if (recorddate != rs.getString(2)) {
                thisdaystotal = thisdaystotal + Float.parseFloat(rs.getString(4));
                numberofdays = numberofdays + 1;
                thisdaysaverage = thisdaysaverage / numberofrecords;
                recorddate = rs.getString(2);
            }
        }
    }
    for (int dayssofar = 1; dayssofar <= numberofdays; dayssofar++) {
        System.out.println("Daily Average: " + dailyaverages[dayssofar]);
    }
    Basically the flaw in the program is that it can't tell if a record has the same date as the record before it. It can tell when they are different, but not when they are the same. Can anybody see any flaws in the logic?
    Andrew

    Why don't you try using SQL to get what you need? For example, you can use:
    SELECT DATE, AVG(HOURAVERAGE)
    FROM TableName
    WHERE DATE > 'DATE1'
    GROUP BY DATE
    (Replace 'DATE1' with your specific date. If you are using Oracle, replace it with trunc(sysdate) - 7 to get the most recent 7 days.)
    Then you will get your average values. There is no need to compute them in your Java program.
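If the averaging ever does have to stay in Java (for example, because of post-processing the database cannot do), grouping the rows in a map sidesteps the first-time/same-date bookkeeping entirely. A sketch with in-memory rows standing in for the result set (the dates and values are made up):

```java
import java.util.*;

public class DailyAverages {
    // Group (date, hourAverage) rows by date and average them -- the same
    // result the SQL GROUP BY above produces.
    static Map<String, Double> averageByDate(List<String[]> rows) {
        Map<String, double[]> acc = new LinkedHashMap<>(); // date -> {sum, count}
        for (String[] row : rows) {
            double[] a = acc.computeIfAbsent(row[0], k -> new double[2]);
            a[0] += Double.parseDouble(row[1]);
            a[1] += 1;
        }
        Map<String, Double> averages = new LinkedHashMap<>();
        for (Map.Entry<String, double[]> e : acc.entrySet()) {
            averages.put(e.getKey(), e.getValue()[0] / e.getValue()[1]);
        }
        return averages;
    }

    public static void main(String[] args) {
        List<String[]> rows = Arrays.asList(
                new String[]{"2024-01-01", "10.0"},
                new String[]{"2024-01-01", "20.0"},
                new String[]{"2024-01-02", "30.0"});
        System.out.println(averageByDate(rows)); // {2024-01-01=15.0, 2024-01-02=30.0}
    }
}
```

Note this also fixes the flaw in the original loop: map keys are compared with equals(), whereas `recorddate == rs.getString(2)` compares String references, which is why the code never detected matching dates.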

  • Problem with scrollable result set

    Hi,
    can you please tell me how I can get the number of rows from a result set?
    I am using thin drivers and connecting to a remote database. Everything works fine if I use the forward-only type, but if I use a scrollable type I get an error.
    code:
    stmt=con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,ResultSet.CONCUR_READ_ONLY);
    I am getting the error at this line.
    Error:
    java.lang.AbstractMethodError: oracle/jdbc/driver/OracleConnection.createStatement
    Please help me.

    Does your JDK include JDBC 2.0? Is your driver updated to support JDBC 2.0 features? Have you recently updated either (and forgotten to change the classpath)?
    I am running JDK 1.3 and the Oracle thin driver (classes12.zip) and have not had problems creating/using scrollable result sets.
    Jamie

  • Update Result Set

    Hi,
    I want to update a row in a table through its result set, using the following code:
    stmt=con.createStatement(ResultSet.TYPE_FORWARD_ONLY,ResultSet.CONCUR_UPDATABLE);
    rs=stmt.executeQuery(" select * from region_info");
    rs.next();
    rs.updateString("DETAILS","malaysia");
    rs.updateRow();
    using JDBC 2.0...
    But it gives the following error:
    java.sql.SQLException: Invalid operation for read only resultset: updateString
        at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:168)
        at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:210)
        at oracle.jdbc.driver.BaseResultSet.updateString(BaseResultSet.java:235)
        at oracle.jdbc.driver.OracleResultSet.updateString(OracleResultSet.java:2647)
        at test.main(test.java:61)
    What is the problem?
    Thanks in advance.

    create a statement with resultSetConcurrency = ResultSet.CONCUR_UPDATABLE :
    Statement stmt = con.createStatement( ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);
    ResultSet rs = stmt.executeQuery("SELECT a, b FROM TABLE2");
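For completeness, a hedged sketch of the full flow (the connection URL and the table/column names are placeholders). One common cause of the "read only resultset" error above, even when CONCUR_UPDATABLE was requested, is that the Oracle driver silently downgrades the result set when the query uses SELECT * or spans more than one table, so the columns are named explicitly here:

```java
import java.sql.*;

public class UpdatableRsExample {
    // Name the columns explicitly: with the Oracle driver, SELECT * (or a
    // join) can silently downgrade the result set to read-only.
    static final String QUERY = "SELECT id, details FROM region_info";

    public static void main(String[] args) throws SQLException {
        // URL and credentials are placeholders.
        try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@host:1521:sid", "user", "pass");
             Statement stmt = con.createStatement(
                     ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);
             ResultSet rs = stmt.executeQuery(QUERY)) {
            if (rs.next()) {
                rs.updateString("details", "malaysia");
                rs.updateRow(); // pushes the change to the database
            }
        }
    }
}
```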

  • Oracle 8.1.6 Thin Driver with Multiple Result Sets

    We're using Oracle 8.1.6 on NT using the latest driver release.
    Java 1.2.2
    We're experiencing problems with resultSet.next() when we have multiple result sets open. What appears to happen is this: when you've read the last result set entry, a .next() call that should return false instead throws java.sql.SQLException: ORA-01002: fetch out of sequence.
    This suggests to us that the driver is trying to go beyond the end of the result set.
    We've checked JDBC standards (and examples on this site) and the code we've got is compliant. We've also found that the code produces the correct results under Oracle 7.3.4 and 8.0.4.
    I can also say that there is no other activity on the db, so there are no issues such as roll back segments coming into play.
    Any solutions, help, advice etc would be gratefully appreciated!

    Phil,
    By "multiple result sets open", do you mean you are using REF Cursors, or do you have multiple statements opened, each with its own ResultSet? If you could post an example showing what the problem is, that would be very helpful.
    You don't happen to have 'for update' clause in your SQL statement, do you?
    Thanks

  • Help with streaming result sets and prepared statements

    hi all
    I create a callable statement that is capable of streaming.
    statement = myConn2.prepareCall("{call graphProc(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}",java.sql.ResultSet.TYPE_FORWARD_ONLY,
    java.sql.ResultSet.CONCUR_READ_ONLY);
    statementOne.setFetchSize(Integer.MIN_VALUE);
    The class that contains the query is instantiated 6 times. The first instance streams the results beautifully, but then when the second
    rs = DatabaseConnect.statementOne.executeQuery();
    is executed I get the following error
    java.sql.SQLException: Can not use streaming results with multiple result statements
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:910)
    at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1370)
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1688)
    at com.mysql.jdbc.Connection.execSQL(Connection.java:3031)
    at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:943)
    at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1049)
    at com.mysql.jdbc.CallableStatement.executeQuery(CallableStatement.java:589)
    The 6 instances are not threaded, and the result set is closed before the next query executes. Is there a solution to this problem? It would be greatly appreciated.
    Thanks a lot,
    Brian

    Database resources should have the narrowest scope possible. I don't think it's a good idea to use a ResultSet in a UI to generate a graph. Load the data into an object or data structure inside the method that's doing the query, and close the ResultSet in a finally block. Use the data structure to generate the graph.
    It's an example of MVC and layering.
    OK, that is my bad for not elaborating on the finer points, sorry. The results are not streamed into the graphs directly from the result set; they are processed in another object and then plotted from there.
    With regard to your opening statement, I would like to ask whether you think it is at least a viable option to create six connections, and if so, how many users you would estimate the six connections could serve under full usage.
    Just a few thoughts that I want to bounce off you, if you don't mind. Wouldn't closing the statement defeat the object of having a callable statement?
    How so? I don't agree with that.
    Again, I apologise; I assumed that since callable statements inherit from prepared statements, they would have the precompiled SQL statement functionality of prepared statements. The statement that I create uses a connection and is created statically at the start of the program. Every time I make a call, the same statement, and thus the same connection, is used; creating a new connection each time takes up time and resources, and as you know every second counts.
    Thanks for your thoughts,
    Brian.
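The narrow-scope advice from this thread can be sketched as follows; the query shape and helper names are made up for illustration, not taken from the original code:

```java
import java.sql.*;
import java.util.*;

public class GraphData {
    // Copy the rows into a plain list so the ResultSet (and its server-side
    // cursor) is closed before any plotting code ever sees the data.
    static List<double[]> loadPoints(Connection conn, String sql) throws SQLException {
        List<double[]> points = new ArrayList<>();
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                points.add(new double[]{rs.getDouble(1), rs.getDouble(2)});
            }
        } // try-with-resources closes rs and stmt here, even on an exception
        return points;
    }

    // The graphing layer then works purely on the data structure, e.g.:
    static double maxY(List<double[]> points) {
        double max = Double.NEGATIVE_INFINITY;
        for (double[] p : points) {
            max = Math.max(max, p[1]);
        }
        return max;
    }
}
```

This is the MVC/layering point from the reply: the JDBC resources live only inside loadPoints, and everything downstream (including a streaming statement that must be finished before the next query) is released before the next instance runs.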

  • Calling a Stored Procedure with a result set and returned parms

    I am calling a stored procedure from a Brio report. The stored procedure returns a result set, but also passes back some parameters. I am getting a message stating that the DBMS cannot find an SP with the same footprint. Besides the result set, the SP returns 3 OUT parameters: (Integer, char(60), char(40)). Is there something special I need to do in Brio to format the OUT parameters passed back from the SP?
    Thanks,
    Roland

    Did you try just declaring the vars?
    (untested)
    declare
      myCur SYS_REFCURSOR;
      myRaw RAW(4);
    begin
      test(0, 0, myRaw, sysdate, myCur);
    end;

  • Web Services with Large Result Sets

    Hi,
    We have an application in which a call to a web service could potentially yield a large result set. For the sake of argument, let's say that we cannot limit the result set size, e.g., by criteria narrowing or some other means.
    Have any of you handled paging when using Web Services? If so can you please share your experiences considering Web Services are stateless? Any patterns that have worked? I am aware of the Value List pattern but am looking for previous experiences here.
    Thanks

    Joseph Weinstein wrote:
    Aswin Dinakar wrote:
    I ran the test again and I removed the ResultSet.FETCH_FORWARD and it still gave me the same OutOfMemory error. The problem to me is similar to what Slava has described. I am parsing the result set in memory, storing the results in a hash map, and then emptying the post-processed results into a table. The hash map turns out to be very big and the JVM throws an OutOfMemory exception. I am not sure how I can turn this around. I could partition my query so that it returns smaller chunks or "blocks" of data each time (say a page of data or two pages of data), and then store a page of data in the table. The problem with this approach is that it is not exactly transactional; recovery would be very difficult. I could do this in a try/catch block page by page, and the catch could then go ahead and delete the rows that got committed. The question then becomes: what if that transaction fails?
    It sounds like you're committing the 'cardinal performance sin of DBMS processing': shovelling lots of raw data out of the DBMS, processing it in some small way, and sending it (or some of it) back. You should instead do this processing in a stored procedure or procedures, so the data is manipulated where it is. The DBMS was written from the ground up to be a fast, efficient set-based processor. Using clever SQL will pay off greatly. Build your saw-mills where the trees are.
    Joe
    Yes, we did think of stored procedures. Like I mentioned yesterday, some of the post-processing depends on Unicode and specific character sets. Java seemed ideally suited to this since it handles Unicode characters very well and has all these libraries we can use. Moving this to the DBMS would make it proprietary (not that we won't do it if it becomes absolutely essential), but that is one of the reasons why the post-processing happens in Java. Now that you mention it, though, stored procedures seem the best option.
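Where moving the whole computation into a stored procedure is not an option, the paging idea from this thread can be made concrete with keyset pagination: process a bounded chunk, commit it, and carry only the last key between chunks, so the heap never holds more than one chunk. A sketch under assumed table and column names (the LIMIT syntax is MySQL-style; Oracle would use ROWNUM or FETCH FIRST):

```java
import java.sql.*;

public class ChunkedProcessing {
    static final int CHUNK = 10_000; // assumed page size

    // A page is the last one when the database returned fewer rows than requested.
    static boolean isLastPage(int rowsSeen) {
        return rowsSeen < CHUNK;
    }

    static void processAll(Connection conn) throws SQLException {
        long lastId = 0; // keyset cursor: the highest id processed so far
        String sql = "SELECT id, payload FROM source_table"
                + " WHERE id > ? ORDER BY id LIMIT " + CHUNK;
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            while (true) {
                ps.setLong(1, lastId);
                int seen = 0;
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        lastId = rs.getLong("id");
                        // post-process rs.getString("payload") and persist it here
                        seen++;
                    }
                }
                conn.commit(); // one commit per chunk: a failure only redoes one chunk
                if (isLastPage(seen)) break;
            }
        }
    }
}
```

Committing per chunk addresses the recovery worry above: restart simply resumes from the highest committed key instead of deleting and redoing everything.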

  • Stored Procedure With Multiple Result Sets As Report Source : Crosspost

    Hello Everyone,
    I have an issue where I have created a stored procedure that returns multiple result sets:
    /* Input param = @SalesOrderID */
    SELECT * FROM Orders TB1
      INNER JOIN OrderDetails TB2 ON  TB1.ID = TB2.ID
    WHERE TB1.OrderID = @SalesOrderID
    SELECT * FROM Addresses
      WHERE Addresses.OrderID = @SalesOrderID AND Addresses.AddressType = 'Shipping'
    SELECT * FROM Addresses
      WHERE Addresses.OrderID = @SalesOrderID AND Addresses.AddressType = 'Billing'
    This is just a quick sample, the actual procedure is a lot more complex but this illustrates the theory.
    When I set the report source in Crystal X to the stored procedure it is only allowing me to add rows from the first result set.
    Is there any way to get around this issue?
    The reason that I would prefer to use a stored procedure to get all the data is simply performance. Without using one big stored procedure I would have to run at least 6 sub reports which is not acceptable because the number of sub reports could grow exponentially depending on the number of items for a particular sales order.
    Any ideas or input would be greatly appreciated.
    TIA
        - Adam
    P.S.
    Sorry for the cross post; I originally posted this question in another forum [original link is broken], but was informed that it might be the wrong forum.
    Edited by: Adam Harris on Jul 30, 2008 9:44 PM

    Adam, apologies for the redirect, but it is better to have .NET posts in one place. That way anyone can search the forum for answers. (and I do not have the rights to move posts).
    Anyhow, as long as the report is created, you should be able to pass the datasets as:
    crReportDocument.Database.Tables(0).SetDataSource(dataSet.Tables("NAME_OF_TABLE"))
    Of course, alternatively (not sure if this is possible in your environment), you could create a multi-table ADO.NET dataset and pass that to the report.
    Ludek

  • E4200 not connecting with Cradlepoint CBR450 set to bridge mode

    I am trying to replace my Cradlepoint MBR1200 with a CBR450 in bridge mode, to let the E4200 take over all the WAP and DHCP handling, but I am having a hard time.
    I set the CBR450 to bridge mode, and when I connect it to my laptop it properly passes the IP info to my Mac and allows me to connect. I then connect it to the E4200 WAN port, with the E4200 set to get its IP address automatically, but nothing ever happens and I am never allowed traffic in/out over the WAN.
    I *can* get internet access by leaving the CBR450 in normal mode (handling DHCP, etc.) and then plugging it into the E4200 LAN port with the E4200 set to bridge.
    Any ideas why the E4200 WAN port is not getting the bridge info and allowing a connection? FWIW, this works fine when connecting to a DSL modem with the DSL modem set to bridge mode at my other house.
    CBR450 and E4200 both on latest firmwares.

    When you configure/edit the Primary Lan on the CBR450 to IP pass through mode, change the lease time under DHCP Server tab to something very low, like 2 minutes. When in pass-through mode, the CBR450 still assigns the IP address from the aircard directly to the ethernet interface, but uses DHCP to do this. Sometimes when connecting to Cisco devices, it helps when the lease time is set this low. 

  • Loss of network connection with update 7.6.4

    Since updating my AirPort to 7.6.4, I keep losing my network (Comcast) connection. Restarting the AirPort gets me back online. Anyone else experiencing this problem since the update? (OS X 10.6, MacBook Pro)

    Same issues here. Ping loss on every 4th ping.
    I found this thread > https://discussions.apple.com/thread/3482505?start=15&tstart=0
    The suggestion there is to downgrade, which I did... the result is no ping loss anymore.
    I downgraded from 7.6.4 to 7.6.3.
    I hope this also works for you!

  • Can not connect with Wireless Security Set

    I have a G3 iBook 12" with an AirPort card. I recently got a D-Link wireless router. I can connect without security. When I turn on any WEP, it says the connection failed. When I use WPA2 Personal, it will connect, but it drops the connection every few seconds and then reconnects.

    Try a $ sign in front of the password it requests when you try to connect.
    I tried entering the password with a $ in front of it and was not able to connect.

  • EXECUTE IMMEDIATE with a result set based DDL command?

    How can I get the exec immed to work in this manner?
    execute immediate 'select ' DROP INDEX '' || INDEX_NAME || '' from dba_indexes where index_name = ''%STRING%'';
    I want to retrieve the list of INDEXES from the dba_indexes where the name is like a certain STRING and drop them.
    Is there a better way to do this all in a PL/SQL session/command?
    Thanks for any help I can get,
    Miller

    I resolved the issue using a spool file and executing it like so:
    /* This will remove all objects created by the migration objects */
    set heading off
    set feedback off
    spool c:\temp\cleanup.txt
    select 'DROP INDEX ' || INDEX_NAME || ' ; ' from dba_indexes where index_name like '%99%' and index_type = 'NORMAL' and index_name not like '%SYS%';
    select 'DROP TABLE ' || TABLE_NAME || ' ;'  from dba_tables where table_name like 'DYNEGY_%' and table_name <> 'DYNEGY_SCRIPT_STATISTICS';
    spool off
    @c:\temp\cleanup.txt
    /

  • Error executing a query with large result set

    Dear all,
    after executing a query which uses key figures with exception aggregation, the BIA server log (TrexIndexServerAlert_....trc) displays the following messages:
    2009-09-29 10:59:00.433 e QMediator    QueryMediator.cpp(00324) : 6952; Error executing physical plan: AttributeEngine: not enough memory.;in executor::Executor in cube: bwp_v_tl_c02
    2009-09-29 10:59:00.434 e SERVER_TRACE TRexApiSearch.cpp(05162) : IndexID: bwp_v_tl_c02, QueryMediator failed executing query, reason: Error executing physical plan: AttributeEngine: not enough memory.;in executor::Executor in cube: bwp_v_tl_c02
    --> Does anyone know what this message means exactly? I fear that the BIA installation is running out of physical memory, but I would appreciate any other opinion.
    - Package Wise Read (SAP Note 1157582) does not solve the problem, as the error message is not "AggregateCalculator returning out of memory with hash table size..."
    - To get an impression of the data volume, I looked at table RSDDSTAT_OLAP for a query with a smaller amount of data:
       Selected rows      : 50.000.000 (Event 9011)
       Transferred rows :   4.800.000 (Event 9010)
    The number of cells retrieved from one index can be calculated by multiplying the number of records from table RSDDSTAT_OLAP by the number of key figures in the query. In my example there is only one key figure, so 4.800.000 cells are passed to the analytical engine.
    --> Is there a way to find this figure in some kind of statistics table? That would be helpful for complex queries.
    I am looking forward to your replies,
    Best regards
    Bjoern

    Hi Björn,
    I recommend that you upgrade to rev 52 or 53. Rev 49 is really stable, but there are some bugs in it, and if you use BW SP >= 17 you can use some additional features.
    Please refer to this thread: you shouldn't use more than 50% of your memory (the other 50% is for reporting). Therefore I have stolen this quote from Vitaliy:
    The idea is that the data for all of your BIA indexes together does not consume more than 50% of the total RAM of the active blades (page memory cannot be counted!).
    The simplest test: during a period when no one is using the accelerator, remove all indexes (including temporary ones) from the main memory of the BWA, and then load all indexes for all InfoCubes into main memory. Then check your RAM utilization.
    Regards,
    -Vitaliy
    Regards,
    Jens
