JDBC conversion bug?

Hi,
Please bear with the length of this post, but something really weird is happening and I think there may be a bug in the JDBC adaptor. I'm using the oracle9i-classes12.jar file, but have also tried the ojdbc14.jar file.
I have a number of schemas across a couple of databases (10g), and I need to administer a table that exists in each of the schemas through one application. We recently moved several of the schemas onto a new database machine because the old one used a different character set from the other (both are now using AL32UTF8).
I'm using database links and synonyms in an administrative schema to access the common table in all the other schemas. The common table looks like:
CREATE TABLE patch_log
(patch_log_id NUMBER(12,0) NOT NULL,
patch_number NUMBER(12,0) NOT NULL,
date_applied DATE,
comments VARCHAR2(512),
applied_by VARCHAR2(40));
When my application does a fetch across all the tables, it generates a sql statement for each table synonym and bundles the results together into one array. The sql statement generated looks like:
SELECT applied_by,
comments,
date_applied,
patch_log_id,
patch_number
FROM table_synonym_name;
For the tables on one database this statement works fine; for the others I get a SQLException wrapped in a JDBCAdaptorException. To make sure it wasn't the application (built with WebObjects), I ran the SQL statements in DbVisualizer, which also uses JDBC, and got the same error there. The error/stack trace is:
java.sql.SQLException: Invalid character encountered in: failAL32UTF8Conv
     at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
     at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
     at oracle.jdbc.dbaccess.DBError.check_error(DBError.java:1130)
     at oracle.jdbc.dbaccess.DBConversion.failAL32UTF8Conv(DBConversion.java:2762)
     at oracle.jdbc.dbaccess.DBConversion.al32utf8BytesToJavaChars(DBConversion.java:2469)
     at oracle.jdbc.dbaccess.DBConversion.al32utf8BytesToJavaChars(DBConversion.java:2372)
     at oracle.jdbc.dbaccess.DBConversion.charBytesToJavaChars(DBConversion.java:884)
     at oracle.jdbc.dbaccess.DBConversion.CHARBytesToJavaChars(DBConversion.java:807)
     at oracle.jdbc.ttc7.TTCItem.getChars(TTCItem.java:298)
     at oracle.jdbc.dbaccess.DBDataSetImpl.getCharsItem(DBDataSetImpl.java:1493)
     at oracle.jdbc.driver.OracleStatement.getCharsInternal(OracleStatement.java:3355)
     at oracle.jdbc.driver.OracleStatement.getStringValue(OracleStatement.java:3556)
     at oracle.jdbc.driver.OracleResultSetImpl.getString(OracleResultSetImpl.java:434)
     at com.onseven.dbvis.sql.Selector.getValue(Unknown Source)
     at com.onseven.dbvis.sql.Selector.fetchData(Unknown Source)
     at com.onseven.dbvis.sql.Selector.execute(Unknown Source)
     at com.onseven.dbvis.sql.Selector.execute(Unknown Source)
     at com.onseven.dbvis.executor.ExecutorHandler.execute(Unknown Source)
     at com.onseven.dbvis.executor.ExecutorHandler.access$1000(Unknown Source)
     at com.onseven.dbvis.executor.ExecutorHandler$ExecutorThread.construct(Unknown Source)
     at se.pureit.swing.util.SwingWorker$2.run(Unknown Source)
     at java.lang.Thread.run(Thread.java:534)
Just to make things a little weirder: if I qualify my search criteria with a patch_number, the statement works for all the tables:
SELECT applied_by,
comments,
date_applied,
patch_log_id,
patch_number
FROM table_synonym_name
WHERE patch_number = 100;
I can select from the patch tables without exception if I'm logged in as that schema owner regardless of how my sql statement is set up. It only happens when I try to access the tables via the link/synonym.
I did some playing around and discovered that if I order the columns differently in the sql statement (using the link/synonym), I can also avoid the error without a where clause:
SELECT patch_log_id,
patch_number,
date_applied,
applied_by,
comments
FROM table_synonym_name;
I have no idea why ordering the columns should or shouldn't make a difference. Not all the links/synonyms cause the exception, only those that were migrated to the new database (the old one had a different character set).
Could there be something to do with character sets that was exported/imported with those schemas that needs to be fixed? Why would the statements work with a where clause but not without?
Any help on this matter is greatly appreciated.
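For anyone hitting a similar failAL32UTF8Conv error: a useful first diagnostic is to compare the character sets on both ends of the database link, and to dump the raw bytes of a suspect column. A sketch (table/synonym names as in the post above; run the first query on each database):

```sql
-- Run on both databases to confirm they really agree on AL32UTF8
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

-- Dump the stored bytes (format 1016 = hex plus character set name)
-- for a column that fails conversion when fetched over the link
SELECT patch_log_id, DUMP(comments, 1016) AS raw_comments
  FROM table_synonym_name;
```

If the DUMP output shows byte sequences that are not valid AL32UTF8, the data was corrupted during the export/import rather than by the driver.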

What was the old character set? How were the objects migrated to the new database?
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC

Similar Messages

  • Jdbc mysql bug?!?!

    Hi all,
    I'm using jsc and mysql in my webapp.
    I need to show full names in a listbox:
    I have a table with several columns, in particular I have name and surname.
    Somewhere I found a tutorial to modify sql query:
    it's quite simple, just to use CONCAT....
If I run the query it works, but it doesn't name the result with the alias I defined in the query... so I have no way to tell my webapp to show this column.
    Has anyone experienced the same??
    I solved creating a view in my db and getting data from this view...

Someone else here reported a similar "column alias ignored" type of issue not too long ago.
They reverted to an earlier version of the MySQL JDBC driver -
Connector/J version 5.X failed, version 3.1.XX worked.
You can verify it's a JDBC driver bug by opening a view data window, entering "smd", a space, then your select statement, then running it. This will show the metadata the driver provides. Does it show your column alias? If not, file a bug with MySQL.
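A minimal way to inspect the driver-reported metadata programmatically is to read the column labels from ResultSetMetaData; if the alias from your CONCAT(...) AS ... expression doesn't appear there, the driver (not your app) is dropping it. A sketch - the class name and the full_name/surname labels are made-up examples, not from the original post:

```java
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;

public class AliasCheck {
    // Return the column labels the driver reports for a result set.
    // An alias such as "CONCAT(name, ' ', surname) AS full_name" should
    // show up here as "full_name" if the driver handles aliases correctly.
    public static String[] labels(ResultSet rs) throws SQLException {
        ResultSetMetaData md = rs.getMetaData();
        String[] out = new String[md.getColumnCount()];
        for (int i = 0; i < out.length; i++) {
            out[i] = md.getColumnLabel(i + 1); // JDBC columns are 1-based
        }
        return out;
    }
}
```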

  • Oracle thin JDBC driver BUG-2285052, how to solve it ?

Hi, using ResultSet.getString to get an NCHAR-type String from the database does not return the proper value.
E.g., for a 254-size column, it returns more characters than needed!!
Please check the following from the JDBC readme file. Can anyone give me a temporary solution?
    BUG-2285052 (since 9.2.0.1)
    Extra space and null characters are returned by ResultSet.getString() and Scrollable ResultSet getString() for NCHAR column when the database national character set is UTF8. This behavior occurs only with JDBC THIN driver.
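Until the driver is patched, one workaround sketch is to strip the padding on the client side; stripNCharPadding is an illustrative helper name, not an Oracle API:

```java
public class NCharFix {
    // Strip the trailing NUL characters and spaces that the affected
    // thin-driver versions append to NCHAR values read via getString().
    // Interior spaces are preserved; only trailing padding is removed.
    public static String stripNCharPadding(String s) {
        if (s == null) return null;
        int end = s.length();
        while (end > 0) {
            char c = s.charAt(end - 1);
            if (c == '\u0000' || c == ' ') end--;
            else break;
        }
        return s.substring(0, end);
    }
}
```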

    Dear Michael,
Thanks, I found a lot of useful properties via the links you provided, but not the ones in question.
    michael_obrien wrote:
    For timeouts, you may need to use a hint directly on the entity though
    http://wiki.eclipse.org/Using_EclipseLink_JPA_Extensions_(ELUG)#How_to_Use_EclipseLink_JPA_Query_Hints
    http://wiki.eclipse.org/Using_EclipseLink_JPA_Extensions_(ELUG)#Timeout
    I think this refers to "Statement.setQueryTimeout".
    The properties I'm looking for focus on the connection and the network beneath.
    Are there other undocumented ;) possibilities?
    Kind Regards,
    daniela

  • Task Details - Active Core Time and ms to hh:mm:ss conversion bug

    Hey all,
In an Azure Batch App job, each task has an "Active Core Time" value set when it completes. There is a bug in the conversion of milliseconds to hours/minutes/seconds: it appears to overflow at a day (core time > 24 hours).
    These are from an A4 (8 Core) machine:
    3 Hour Task: 54m 21s (89661136 ms)
    3 Hour task: 23h 9m 18s (83358872 ms)
    4 Hour Task: 7h 45m 3s (114303288 ms)
On the first one, 89661136 ms is 24 hours and 54 minutes. The last one is 31.7 hours.
    ----- Ed
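For reference, the arithmetic without the 24-hour wrap (Ed's first number, 89661136 ms, really is 24h 54m 21s) can be sketched like this; the class and method names are made up:

```java
public class CoreTime {
    // Convert milliseconds to "Hh Mm Ss" without wrapping at 24 hours.
    public static String format(long ms) {
        long totalSeconds = ms / 1000;
        long h = totalSeconds / 3600;        // do NOT take h % 24 here --
        long m = (totalSeconds % 3600) / 60; // that is the overflow bug
        long s = totalSeconds % 60;
        return h + "h " + m + "m " + s + "s";
    }
}
```

With the buggy modulo-24 behavior, 89661136 ms would display as "0h 54m 21s" worth of hours dropped, matching the report above.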

    Hi Ed, 
Thanks for bringing this to our attention. Just to clarify, are you seeing this behavior in the Mission Control portal, or the task API? I can see an Active Core Time in the task details panel in the portal, so I am pretty sure that is what you are referring to.
    I will sort out a fix for this in the near future. 
    Regards,  
    Andrew

  • RAW conversion bug with Noise Reduction

    Hello,
    I have found a serious bug in the RAW conversion when noise reduction is applied. When converting from two types of Canon RAW files (a CRW from a Powershot G6 and a CR2 from a 20d) I found that if you apply Noise Reduction to a RAW file on very low settings (the default setting in the NR function will produce this reliably) single-pixel lines appear at regular intervals throughout the image. Here is an example:
    You can see several lines in this image:
    http://farm1.static.flickr.com/140/3821480263171e76604b.jpg
    A 100% detail of which is here:
    http://farm1.static.flickr.com/179/382148021af6586d27eo.jpg
    Has anyone else had this problem? Can someone from the Aperture dev team fix this?
    -Steve G

Well, I find this filter is quite good at 'masking' the block artifacts that codecs like Xvid, or other low-compression codecs, produce. I only apply it if I find the block artifacts too strong, and I find this filter less offensive to my eyes than the block artifacts.
The manual says that if you have noisy video and want to lower the size, you can use this filter. It also blurs the video a bit. But I suspect it is more than blur, as I tried Gaussian blur in the timeline and the result is not as good. You can compare the result by toggling between the source and target tabs.
BTW, anyone with a single-, dual-, or quad-core machine: can you try encoding with it? Just cancel it after a few minutes; I want to see what your processor utilization is with this filter on. You can also see how long it takes to process the video from the 'estimated time left'.

  • BC4J /  JDBC Failover BUG ?

    I have developed an ADF / BC4J application which works.
    In our environment we have a failover configuration
    I coded this approach for my ADF applications.
    My Custom connect string in the connection wizard for the BC4J is:
    jdbc:oracle:thin:@
    (DESCRIPTION =
         (ADDRESS = (PROTOCOL = TCP) (HOST =aaa.bbb.yyy.zzz)(PORT = 1521))
         (ADDRESS = (PROTOCOL = TCP) (HOST =bbb.bbb.yyy.zzz)(PORT = 1521))
         (LOAD_BALANCE = yes)
         (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = db.yyy.zzz)
          (FAILOVER_MODE = (TYPE = SESSION) (METHOD = BASIC) (RETRIES = 180) (DELAY = 5))
     )
)
    When I press test connection it works and says success. (The first host in the list is actually down and we are using the failover host.)
    However when I deploy the app to the OC4J Standalone Middle Tier (10.1.3.0) and then attempt to run the app I get the following error in the browser:
    JBO-30003: The application pool (zzz.yyy.bbb.CustSubDivsAppLocal) failed to checkout an application module due to the following exception:
    oracle.jbo.JboException: JBO-29000: Unexpected exception caught: oracle.jbo.JboException, msg=JBO-29000: Unexpected exception caught: oracle.jbo.JboException, msg=JBO-29000: Unexpected exception caught: oracle.jbo.DMLException, msg=JBO-26061: Error while opening JDBC connection.
If I remove the unavailable host (the first one) from the ADDRESS list, the application works properly. If I change the order of the hosts so the first one is the one that's up, it also seems to work OK.
    It seems that the OC4J middle tier cannot handle the fact that the first host in the list is down and throws an exception instead of trying the second one. In JDeveloper BC4J Connection Wizard it seems to handle the down host properly.
    Is this correct or have I setup the URL incorrectly ?
    This has caused a huge headache for me as we are currently running on our failover hosts due to SAN maintenance and all my J2EE apps stopped working !!
    Thanx
    Andrew

    Hi Hans,
A JDev developer, John S., recently created a bug report that, if I remember correctly, said that createRoot... was not thread safe.
My scenario that caused this discovery (functionally identical to your looping-threads case) was multiple concurrent requests created by rapidly clicking a submit button. This creates multiple threads in the web tier dispatching against a BC4J component calling createRoot... in our JSP/Servlets. Just like your code, Hans.
The end result was some ugly exceptions and failed use cases for the 2nd-or-so to Nth threads.
In short, I believe that your design won't work.
It could work if you gave each thread a separate identity, via a separate BC4J cookie, thus simulating separate BC4J sessions.
I'm not at all familiar with this area of the BC4J API; I just know enough general buzzwords to be dangerous and confusing to you. ;-(
    Hopefully John S. will jump in here and help you.
    Good luck, Curt.
    PS please post your solutions?

  • SQL Server JDBC Driver Bug?

    I ran into an unusual situation with the MS SQL Server JDBC driver that I think is a bug and could bite people in some unusual cases.
    Here is the situation that creates the problem.
    - get a connection
    - make it transactional (autocommit false)
    - discover that there is no work to do in the transaction so no commit or rollback is done.
    - make the connection non-transactional (autocommit true)
    - update other tables using the connection (should be non-transactional)
    This leaves the tables updated in the last step locked as part of a transaction!
    It doesn't behave this way with our DB2 driver.
    While I agree that the JDBC spec is vague in many areas, this seems to be well out of bounds of any reasonable interpretation.
    The workaround is to also do a rollback (in a try/catch block) before setting autocommit true.
    -- Frank
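Frank's workaround can be wrapped in a small helper; a sketch, with a made-up method name (the rollback-before-setAutoCommit ordering is the point):

```java
import java.sql.Connection;
import java.sql.SQLException;

public class TxnUtil {
    // Roll back any (possibly empty) open transaction before re-enabling
    // autocommit, so drivers that silently keep the transaction open don't
    // leave later "non-transactional" updates locked inside it.
    public static void endTransaction(Connection conn) throws SQLException {
        try {
            conn.rollback(); // harmless if no work was actually done
        } catch (SQLException ignored) {
            // some drivers object when nothing is pending; safe to continue
        }
        conn.setAutoCommit(true);
    }
}
```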

    Since database operations are rather expensive why would you do any before checking to see if something needs to be done?

  • 2012 to 2010 conversion bug

    I recently found this bug when converting a project from 2012 to 2010.
A for loop using a conditional tunnel will be converted into a for loop with a case structure with a build array inside, fed into a shift register.
The only problem is that this shift register doesn't get initialized, so it just keeps stacking values.
    Like this 2012 vi: 
    and converted to 2010: 
    You can see there is also an unused constant hovering inside the for loop.
I have tested this in 2013 and the problem does not occur there.

I remember that bug being reported, and it was claimed to be fixed in 2013 (or was it 2012 SP1?). Either way, it has been fixed.

  • Conversion bug of to_date(x,yyyy/mm/dd)

    Dear Madam or Sir,
    my name is Christian Butzke and I am working for Perpetuum Co. Ltd. in Japan.
I am using Oracle Database 9. I may have encountered a bug in the to_date SQL function.
I am not sure where to post bug reports. I searched the Oracle homepage for an hour for an email address, but there was none, so I tried this forum. If this forum does not handle bugs, I am sorry.
    The following script should show after each select command one row with 2005-01-01 00:00:00.0.
    CREATE TABLE test ("df" DATE);
    INSERT INTO test VALUES ('2005-01-01');
    SELECT TO_DATE("df",'yyyy/mm/dd'), "df" FROM test;
    SELECT * FROM test WHERE "df">='2005-01-01';
    SELECT * FROM test WHERE TO_DATE("df",'yyyy/mm/dd')>='2005-01-01';
    DROP TABLE test;
    Instead the first select returns:
    TO_DATE(DF,YYYY/MM/DD)     DF
    0005-01-01 00:00:00.0     2005-01-01 00:00:00.0
    The second select returns correctly the row
    DF
    2005-01-01 00:00:00.0
    However, the last select returns nothing.
    This should be a bug, shouldn't it be?
    With regards,
    Christian-Manuel Butzke
    Perpetuum Co. Ltd.

I am using Oracle Database 9. I encountered maybe a bug of the to_date sql function.
The bug is in your code, not in Oracle!
CREATE TABLE test ("df" DATE);
INSERT INTO test VALUES ('2005-01-01');
This is poor coding; you should use:
insert into test values (to_date('2005-01-01', 'YYYY-MM-DD'));
SELECT TO_DATE("df",'yyyy/mm/dd'), "df" FROM test;
This is also buggy code; you should use:
select "df" from test;
to_date(date) is a source of errors.
SELECT * FROM test WHERE "df">='2005-01-01';
Again, this is poor coding: you compare a string with a date. Prefer:
where "df" >= to_date('2005-01-01', 'YYYY-MM-DD')
SELECT * FROM test WHERE TO_DATE("df",'yyyy/mm/dd')>='2005-01-01';
Same as above.

  • File to JDBC conversion getting error

A JDBC driver is needed for PI 7.0; it is showing a connection error. The database side is working fine,
and the PI 7.0 configuration side is fine, but it is still showing a connection error.
Am I missing something?
Please help me out.

    Hi Rashmi,
    Can you elaborate the error in detail.
Check in the communication channel what parameters have been provided and whether they are as per the standards.
    Refer below link for details.
    http://help.sap.com/saphelp_nw04/helpdata/en/64/ce4e886334ec4ea7c2712e11cc567c/content.htm
    Also check in communication channel monitoring for the exact error.

  • Date conversion bug?

    In Oracle 10.2:
    When I run this SQL I get, ORA-01878: specified field not found in datetime or interval error message. In 2007, DST happened the last week of October, but in 2008 and 2009, DST for Sydney happened the first week of October. Am I missing something?
    select
    to_char(from_tz( to_timestamp('2008102602', 'yyyymmddhh24') , 'Australia/Sydney') at time zone 'US/Central')
    from dual
    In 2007
    Sunday, October 5, 2008 at 1:59:59 AM     
    Sunday, October 5, 2008 at 3:00:00 AM
    Sunday, October 4, 2009 at 1:59:59 AM
    Sunday, October 4, 2009 at 3:00:00 AM

    Actually, DST changed.
    Australia Daylight Saving Time
Changed from 2007-2008. Summer Time (Daylight Saving Time) runs in New South Wales, the Australian Capital Territory, Victoria, South Australia and Tasmania from the first Sunday in October through to the first Sunday in April.
    Also, if you look in: http://www.timeanddate.com/worldclock/timezone.html?n=240&syear=2000
    it says that summer time change happens the last week of October up to 2007, but starting 2008, it happens during the first week of October.
    Sunday, October 28, 2007 at 1:59:59 AM changes to:
    Sunday, October 28, 2007 at 3:00:00 AM     
    Sunday, October 5, 2008 at 1:59:59 AM changes to:
    Sunday, October 5, 2008 at 3:00:00 AM
    Sunday, October 4, 2009 at 1:59:59 AM changes to:
    Sunday, October 4, 2009 at 3:00:00 AM
So it seems like Oracle isn't aware of this change?
The query below is for the DST change that should cause a time conversion error, but it doesn't:
    select
    to_char(from_tz( to_timestamp('2008100502', 'yyyymmddhh24') , 'Australia/Sydney') at time zone 'US/Central')
    from dual

  • Widening conversion bugs ?

I don't understand... (from A Programmer's Guide to Java Certification, 3rd Edition - Mughal, pg 172)
a) This works:
Integer iRef3 = (short)10; // constant in range: casting by narrowing to short,
// widening to int, then boxing
b) This works:
final short x = 3;
Integer y = x;
but not this:
short x = 3;
Integer y = x;
The 'final' means a constant expression, which is used for implicit narrowing conversion. The 2nd one is not a narrowing conversion, so why didn't it work?
c) Exactly as a), but with Long instead of Integer. // This doesn't work
Long iRef3 = (short)10;
d) This also doesn't work unless I put final on the short:
short s = 10; // narrowing (a)
Integer iRef1 = s; // short not assignable to Integer -- error here (b)
Since implicit conversion succeeded for (a), and you don't need a constant expression for a widening conversion (b)... why doesn't it work for (b)?
I thought the variable s would be widened to int and then boxed into an Integer.
Any help is appreciated
    Edited by: yapkm01 on Jul 16, 2009 9:32 PM

A type conversion from long to float is commonly called a widening conversion and is thus legal. I understand that the range of float variables is wider than that of long variables.
However, a long variable can hold 2^64 distinct integer values. How could these possibly all be represented by a float variable that uses only 32 bits - and thus can only represent 2^32 distinct values? The mantissa of a float variable actually uses even fewer bits - only 23.
So, while any value of a long variable will fall within the range of a float variable, occasionally approximation should occur and information get lost. Why is it then usually said that widening conversion doesn't lose information?
Mathematically, a float's bigger. You do lose precision, though.
What's your point?
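The precision loss is easy to demonstrate: 2^24 survives a long -> float round trip, but 2^24 + 1 does not, because float's 24-bit significand can no longer distinguish adjacent integers at that size. A small sketch (names are illustrative):

```java
public class WideningDemo {
    // "Widening" long -> float never throws and needs no cast, but it can
    // round: float has only a 24-bit significand, so not every long above
    // 2^24 is exactly representable.
    public static boolean survivesRoundTrip(long v) {
        float f = v;          // implicit widening conversion, no cast needed
        return (long) f == v; // true only if no precision was lost
    }
}
```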

  • PDF conversion bug

    I noticed when I try converting a pages document with a 2D pie chart to a pdf document, the uppermost section of the circle is chopped off. I tried with a 3D pie chart and it worked ok. I guessed this is a bug. Is there a way to work around it? And how do I inform Apple about this bug?

    Hello
    Welcome to the club
    a - I'm not sure that you met a bug. The only way I was able to reproduce the described behaviour was to move the chart so that a portion of it was outside the printable area which is easy to see.
    b - assuming that you think that it's really one,
    *go to "Provide Pages Feedback" in the "Pages" menu*, describe what you got.
    Then, cross your fingers, and wait at least for iWork'09
    Yvan KOENIG (from FRANCE mercredi 23 janvier 2008 8:43:05)

  • JDBC Statement Bug with OracleLite 4.0.1.2?

    I have found what appears to be a bug. Has anyone else encountered this bug. I have the following code:
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("select columnA,columnB from tableA");
while (rs.next()) {
    System.out.println("ColumnA=" + rs.getString("ColumnA"));
    System.out.println("ColumnB=" + rs.getString("ColumnB"));
}
rs.close();
rs = stmt.executeQuery("select column1, column2,column3 from Table1");
while (rs.next()) {
    System.out.println("Column1=" + rs.getString("Column1"));
    System.out.println("Column2=" + rs.getString("Column2"));
    System.out.println("Column3=" + rs.getString("Column3"));
}
Column 3 is the problem. I get the following exception:
java.sql.SQLException: >>> [ODBC S1002] invalid column number
After researching it I found that you cannot reuse a Statement object over and over again. The reason is that the first execution of a statement initializes the number of columns to be returned, and each subsequent statement executed does not reinitialize the column count. So in the case above the column count is 2, and I issue a query with 3 columns using the same Statement object; hence the bug.
I know the workaround; I just wanted to know if this is a known bug. Any thoughts?
    Mike H.

    I've never heard of anything like this either. But I'm not sure I'm properly tracking what you say. You drop a PR file into the source well and it disappears? If you have a Finder window open and you drag it to the Batch window, you actually see the file deleted in the Finder window?
    Or do you mean that the PR file doesn't get copied to the well?
    Russ

  • Multi-threaded file conversions bug

    Why with 5 PDF Generator User Accounts I get this?
    WARN  [com.adobe.service.ImpersonatedConnectionManager] BMC028: Service PDFMakerSvc: Reducing maximum pool size from 20 to 4 to match number of impersonation credentials.
    Why with 6 PDF Generator User Accounts  I get  this?
    WARN  [com.adobe.service.ImpersonatedConnectionManager] BMC028: Service PDFMakerSvc: Reducing maximum pool size from 20 to 5 to match number of impersonation credentials.
    Why with more than 4 user I randomly get this (in multithread-conversion of one identical document)?
    INFO  [com.adobe.pdfg.GeneratePDFImpl] ALC-PDG-001-000-Conversion failed : ALC-PDG-010-012-PDFMaker reported an error while printing the document.
    INFO  [com.adobe.pdfg.GeneratePDFImpl] ALC-PDG-001-000-Trying to find a fallback route if available
    INFO  [com.adobe.pdfg.GeneratePDFImpl] ALC-PDG-001-000-Couldn't obtain fallback filetype setting. Cannot try fallback route

Thank you for your reply, Hodmi. I didn't know about that feature of the invokeDDX() function. It helped me a lot.
    Hodmi wrote:
       What I understood from your reply is that I don't need to do anything with those users except add them to the LiveCycle application, and I must ensure that those users have admin rights - is that right?
    That's pretty much correct.  I don't believe they need admin rights, just the rights to launch the native apps.
Maybe you are right, but I read in "Installing and Deploying LiveCycle® ES2 Using JBoss® Turnkey" (Adobe), page 60, section 6.14.7, that "Click Add and enter the user name and password of a user who has administrative privileges on the LiveCycle ES2 server".
Anyway, no problem with that......
