Character Sets - SocketChannel read error

I have made an FTP server which, I have just noticed, has trouble when the file name being requested or sent contains non-ASCII characters such as '�'.
Since I am using NIO, I thought it would be a simple matter of changing the character set from "US-ASCII" to "UTF-8" or "UTF-16".
But when I changed it to UTF-8 there was no change; I still receive this error:
java.nio.charset.MalformedInputException: Input length = 1
java.nio.charset.CoderResult.throwException(CoderResult.java:260)
java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:763)
Worker.readCommand(Worker.java:282)
Worker.handleClient(Worker.java:562)
Worker.run(Worker.java:144)
java.lang.Thread.run(Thread.java:536)
And when I tried UTF-16, no input was ever detected by my server...
This 'Input length = 1' business... does it mean that there must be more than one byte in the received byte array, or am I going off on an incorrect tangent?
Any suggestions or theories are welcome!
Cheers
(BTW, I will not be able to reply to this post for another 11 hours.)

I can, but it will not be of much use, as it is more likely a logical error or a misunderstanding on my part of how to use the character set.
Well, here is the relevant code anyway:
// readCommand: waits until a command is received from the client and then returns the command
   String readCommand(InputStream is, PrintStream ps) throws java.io.IOException {
      String str = "";
      Charset charset = Charset.forName("UTF-8");
      CharsetDecoder decoder = charset.newDecoder();
      // direct byte buffer for reading
      ByteBuffer dbuf = ByteBuffer.allocateDirect(1024);
      do {
         dbuf.clear();
         boolean quit = false;
         String message = "";
         int bytesRead = 0;
         // while there is no input in the buffer
         while (dbuf.position() == 0) {
            // read in any bytes that are waiting into the buffer
            sc.configureBlocking(false);
            bytesRead = sc.read(dbuf);
            sc.configureBlocking(true);
            // wait for two milliseconds before trying to read again
            try {
               wait(2);
            } catch (InterruptedException e) {
            }
            if (account == -1) {
               // if the client has not yet finished the connection negotiation, see if it has been
               // connected for more than 30 seconds; if so, disconnect the client
               if (System.currentTimeMillis() > clientStartTime + 30000) {
                  quit = true;
               }
            } else {
               // if the thread has been told to quit the connection, set the quit flag
               if (exitThread) {
                  message = "421 Quitting...";
                  quit = true;
               }
               // if there is a session limit, see if the connection has exceeded it; if so, set the quit flag
               if (staticSessionLimit[account] > 0) {
                  if (System.currentTimeMillis() > clientStartTime + staticSessionLimit[account]) {
                     message = "421 Session Limit Exceeded... please re-connect in 30 seconds";
                     quit = true;
                  }
               }
               // if there is an inactivity timeout, see if the client has been idle longer than it; if so, set the quit flag
               if (staticInactivityTimeout[account] > 0) {
                  if (System.currentTimeMillis() > lastActiveTime + staticInactivityTimeout[account]) {
                     message = "421 Inactivity Timeout Exceeded... closing connection";
                     quit = true;
                  }
               }
            }
            // if the quit flag is set, return "QUIT", which tells the thread to close the connection and return to the pool
            if (quit) {
               ps.print(message);
               ps.write(EOL);
               ps.flush();
               return "QUIT";
            }
         }
         dbuf.flip();
         // if there are bytes in the buffer, decode them and append to the received command string (str)
         if (bytesRead >= 0) {
            CharBuffer cb = decoder.decode(dbuf);
            str = str + cb.toString();
         }
         // repeat while EOL ("\r\n") has not been detected
      } while (str.length() < 2 || !str.substring(str.length() - 2).equals("\r\n"));
      str = str.substring(0, str.length() - 2);
      FTPServer.log("STRING: " + str);
      lastActiveTime = System.currentTimeMillis();
      // update the last action requested by the client in the governor
      governor.addAction(str, this);
      return str;
   }
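
A note on the error itself: "Input length = 1" from MalformedInputException means the decoder hit a byte sequence it could not map to a character. With UTF-8 that usually means one of two things: the client is not actually sending UTF-8 (many FTP clients send file names in their local single-byte code page), or a multi-byte character was split across two reads, so the buffer handed to decode() ends in the middle of a character. The sketch below is only a rough illustration of handling the second case, not the original Worker code: it keeps one CharsetDecoder alive across reads and calls decode(..., false) so a trailing partial character is carried over to the next read instead of throwing. It assumes a blocking SocketChannel named sc and leaves out the timeout/quit handling shown above.

   import java.io.IOException;
   import java.nio.ByteBuffer;
   import java.nio.CharBuffer;
   import java.nio.channels.SocketChannel;
   import java.nio.charset.Charset;
   import java.nio.charset.CharsetDecoder;
   import java.nio.charset.CodingErrorAction;

   String readLineUtf8(SocketChannel sc) throws IOException {
      CharsetDecoder decoder = Charset.forName("UTF-8").newDecoder()
            .onMalformedInput(CodingErrorAction.REPLACE)       // or REPORT if you want the exception
            .onUnmappableCharacter(CodingErrorAction.REPLACE);
      ByteBuffer in = ByteBuffer.allocate(1024);
      CharBuffer out = CharBuffer.allocate(1024);
      StringBuilder line = new StringBuilder();
      while (true) {
         if (sc.read(in) == -1) break;        // connection closed by the client
         in.flip();
         decoder.decode(in, out, false);      // false = more input may follow, so a trailing
                                              // partial multi-byte character stays in 'in'
         in.compact();                        // keep the undecoded tail for the next read
         out.flip();
         line.append(out);
         out.clear();
         int eol = line.indexOf("\r\n");
         if (eol >= 0) return line.substring(0, eol);   // complete command received
      }
      return line.toString();
   }

With REPLACE, a client that really is sending a single-byte encoding shows up as '\uFFFD' characters in the logged command instead of killing the read. RFC 2640 recommends UTF-8 for FTP path names, but not every client follows it, so it may be worth logging the raw bytes whenever the replacement character appears.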

Similar Messages

  • Character set marker unknown error while importing data in 10g from 8i

    Hi All,
    I am trying to import the whole database, schema by schema, through the Oracle 10g Database Control page (which is browser based).
    But when I try to import the objects of a schema I get the error
    "IMP-00037: Character set marker unknown"
    Import terminated unsuccessfully
    Would anybody please tell me about this error?
    Kindly provide the solution.
    Mentioned below are the character sets available in both versions.
    Character sets in oracle 8.1.7 :-
    Database Character Set :: WE8ISO8859P1
    National Character Set :: WE8ISO8859P1
    Character sets in oracle 10.2.0.1.0 :-
    Database Character Set :: WE8ISO8859P1
    National Character Set :: AL16UTF16
    Regards
    Milin...

    Hi,
    As you have asked, I have mentioned below the export command that produces the full database export file.
    I have copied this export file (dump) to another machine (server) where I have to import it in order to upgrade the database to 10g Release 2.
    SET CC=%DATE:~4,2%-%DATE:~7,2%-%DATE:~10,4%
    exp username/password@db_name
    file=D:\PAY_BKP\EXP_FULL_PAY_%CC%.DMP log=D:\PAY_BKP\EXP_FULL_PAY.log
    indexes=yes
    full=yes
    Kindly provide guidance.
    Regards
    Milin

  • Conversion error, from character set 4102 to character set 4103

    Hi,
    We've developed a JCO server (in Java), with an ABAP report calling the function provided by the JCO server.
    MetaData:
         static {
              repository = new Repository("SMSRepository");
              fmeta = new JCO.MetaData("ZSMSSEND");
              fmeta.addInfo("TO", JCO.TYPE_CHAR, 255, 0, 0, JCO.IMPORT_PARAMETER, null);
              fmeta.addInfo("CONTENT", JCO.TYPE_CHAR, 255, 0, 0, JCO.IMPORT_PARAMETER, null);
              fmeta.addInfo("RETN", JCO.TYPE_CHAR, 255, 0, 0, JCO.EXPORT_PARAMETER, null);
              repository.addFunctionInterfaceToCache(fmeta);
         }
    Server parameters:
           Properties prop = new Properties();
           prop.put("jco.server.gwhost","shaw2k07");
           prop.put("jco.server.gwserv","sapgw01");
           prop.put("jco.server.progid","JCOSERVER01");
           prop.put("jco.server.unicode","1");
           srv = new SMSServer(prop,repository);
    If we run the JCO server both on my client machine (from Developer Studio) and on the WAS machine (as a standalone Java program), everything is OK. On the ABAP side, the SM59 Unicode test reports that the destination is a Unicode system, and the ABAP report that calls the function runs smoothly.
    But when we package this JCO server into a web application and deploy it to WAS, a problem occurs. The SM59 Unicode test still says the destination is a Unicode system, but the ABAP report fails with an ABAP dump:
    Conversion error between two character set
    RFC_CONVERSION_FIELD
    Conversion error "RETN" from character set 4102 to character set 4103
    A conversion error occurred during the execution of a Remote Function
    Call. This happened either when the data was received or when it was
    sent. The latter case can only occur if the data is sent from a Unicode
    system to a non-Unicode system.
    I read the jrfc.trc log; it shows that data is received in Unicode 4103 (that's OK) but sent in Unicode 4102 (that's the problem). 4102 is UTF-16 big-endian and 4103 is UTF-16 little-endian. Our system is Windows on Intel 32-bit architecture, so based on Note 552464 it should be 4103.
    Why does it send data (the Java JCO server sending the output parameter to ABAP) in 4102?
    What's the problem? Thank you very much!
    Best Regards,
    Xiaoming Yang
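
    Not a fix for the JCo issue, but to make the code page numbers concrete: 4102 is UTF-16 big-endian and 4103 is UTF-16 little-endian, so the same character is transported with its two bytes swapped. A tiny standalone Java check (nothing here is JCo-specific):

    import java.nio.charset.Charset;
    import java.util.Arrays;

    public class Utf16Endianness {
        public static void main(String[] args) {
            String s = "A";   // U+0041
            byte[] be = s.getBytes(Charset.forName("UTF-16BE"));   // [0, 65]  -> SAP code page 4102
            byte[] le = s.getBytes(Charset.forName("UTF-16LE"));   // [65, 0]  -> SAP code page 4103
            System.out.println("UTF-16BE: " + Arrays.toString(be));
            System.out.println("UTF-16LE: " + Arrays.toString(le));
        }
    }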

    Hello Experts,
    Any replies on this?
    I am also getting a similar kind of error.
    Do you have any idea on this?
    Thanks and Best Regards,
    Suresh

  • Getting ORA-01429 error while changing character set

    When I am changing the character set from WE8DEC to AL32UTF8, I am getting an ORA-01429 error:
    SQL> ALTER DATABASE CHARACTER SET INTERNAL_USE AL32UTF8 ;
    ALTER DATABASE CHARACTER SET INTERNAL_USE AL32UTF8
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01429: Index-Organized Table: no data segment to store overflow row-pieces

    Chockalingam wrote:
    I am using above steps as per oracle doc only.
    http://docs.oracle.com/cd/B10500_01/server.920/a96529/ch10.htm
    No, you are not.
    - You are not using the correct version doc vs. Oracle server version. Try to find the same suggestion in the relevant doc.
    - The doc you reference specifically says "... it can be used only under special circumstances. The ALTER DATABASE CHARACTER SET statement does not perform any data conversion, so it can be used +if and only if the new character set is a strict superset of the current character set+." (emphasis is mine)
    You do not have a strict superset.
    - Also the special clauses you have used are not documented - for a reason.
    Please edit your posts above to remove the ill-advised steps (the INTERNAL_USE clauses), which do not belong on a forum.

  • Reading in Latin Extended-A character set from a text file

    Hello all,
    I am writing a small program that reads in a text file containing special characters (beyond the ASCII char set) and converts them into "regular" characters. For example, I would read in a u with an accent and replace it with a plain u.
    Now, I realize that Unicode support is built into Java from the ground up, but it only goes so far; you actually have to have the relevant character set to read it. My code is as follows:
    InputStreamReader inStreamReader = new InputStreamReader(new FileInputStream("input.txt"), "ISO-8859-1");
    BufferedReader bufferedReader = new BufferedReader(inStreamReader);
    String line = null;
    StringBuffer buff = new StringBuffer();
    while ((line = bufferedReader.readLine()) != null) {
        char[] charArray = line.toCharArray();
        for (int i = 0; i < charArray.length; i++) {
            int x = (int) charArray[i];
            switch (x) {
                case 224: // this is agrave .. we need to replace it with a
                    buff.append('a');
                    break;
                case 230: // this is aelig .. we need to replace it with ae
                    buff.append("ae");
                    break;
                ///////// and so on
    Since I am reading in as ISO-8859-1, this works up to Unicode 255. For the rest of the characters, apparently I need the Latin Extended-A and Latin Extended-B character sets. How can I get those installed on my Windows machine? I am using JDK 1.4.1 on Windows XP. Any help is appreciated.
    Thanks,
    -vk4t

    vkat wrote:
    Since I am reading in as ISO-8859-1, this works up to Unicode 255. For the rest of the characters, apparently I need the Latin Extended-A and Latin Extended-B character sets. How can I get those installed on my Windows machine? I am using JDK 1.4.1 on Windows XP. Any help is appreciated.
    If your file has characters outside of 8859-1's range (0 - 255), then it isn't ISO-8859-1 encoded. You need to know what encoding was used to store the file. It sounds like it may actually be Unicode text, in which case you need to know which encoding (UTF-8, UTF-16, etc.) was used.
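
    If the goal is just to fold accented letters down to plain ASCII once the file has been read with its real encoding, a rough sketch along the lines below may be simpler than a large switch. It assumes the file is actually UTF-8 (check first, as the reply above says) and a JDK that has java.text.Normalizer (Java 6+, so newer than the 1.4.1 mentioned above); ligatures such as 'æ' are not decomposed and still need an explicit mapping.

    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.text.Normalizer;

    public class StripAccents {
        public static void main(String[] args) throws IOException {
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(new FileInputStream("input.txt"), "UTF-8"));
            StringBuilder buff = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                // NFD splits 'à' into 'a' plus a combining grave accent; the regex then drops the accents
                String decomposed = Normalizer.normalize(line, Normalizer.Form.NFD);
                buff.append(decomposed.replaceAll("\\p{InCombiningDiacriticalMarks}+", ""));
                buff.append('\n');
            }
            reader.close();
            System.out.print(buff);
        }
    }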

  • Changing Character set in SAP BODS Data Transport

    Hi Experts,
    I am facing an issue extracting data from SAP.
    Job details: I am using an ABAP data flow which fetches the data from SAP and loads it into an Oracle table using Data Transport.
    Its giving me below error while executing my job:
    (12.2) 05-06-11 11:54:30 (W) (3884:2944) FIL-080102: |Data flow DF_SAP_EXTRACT_QMMA|Transform R3_QMMA_EXTRACT__AL_ReadFileMT_Process
                                                         End of file was found without reading a complete row for file <D:/DataService/SAP/Local/Z_R3_QMMA>. The expected number of
                                                         columns was <30> while the number of columns actually read was <10>. Please check the input file for errors or verify the
                                                         schema specification for the file format. The number of rows processed was <8870>.
    Reason: when I analyzed it, I found the cause is the presence of special characters in the data. While generating the data file in the SAP working directory (which is available on the SAP application server), the SAP code page is 1100, so both the file delimiter and the special characters are represented with #. Once the ABAP is executed and the data is read from the file, the # is treated as a delimiter, which throws the above error.
    I tried to replace the special characters in the ABAP data flow, but the ABAP data flow does not support the replace_substr function. I also tried changing the Code Page value to UTF-8 in the SAP datastore properties, but this didn't work either.
    Please let me know what needs to be done to resolve this issue. Is there any way to change the character set while reading the generated data file in BODS, i.e. to convert code page 1100 to UTF-8?
    Thanks in advance.
    Regards,
    Sudheer.

    Unfortunately, I am no longer working on this particular project/problem. What I did discover, though, is that /127 actually refers to the <control>+<backspace> character (http://en.wikipedia.org/wiki/Delete_character).
    In SAP, this and any other unknown characters get converted to #.
    The conclusion I came to at the time was that these characters had made their way into the actual data and were causing the issue. In fact, I think it is still causing the issue, since no one takes responsibility for changing the records, even after being told exactly which records need to be updated ;-)
    I think I did try to make the changes on the above mentioned file, but without success.

  • Imp fails with character set marker unknown

    Hi All,
    I'm trying to import a dump that was created in WE8ISO8859P1 with Oracle 9i 9.2.0.5.0 into the Western European XE version on Windows XP Professional SP2.
    I get an "IMP-00037: Character set marker unknown" error.
    I've tried setting NLS_LANG to American_UNITED KINGDOM.WE8ISO8859P1, but with the same result. Obviously I'm just a novice, but you have to start somewhere, right? Any help appreciated.

    Setting NLS_LANG to either character set should be fine. WE8MSWIN1252 is a superset of WE8ISO8859P1 ( http://aswan.gatech.edu/docs/oracle/10g/server.101/b10749/applocal.htm#636814 ) .
    From the docs:
    There are two character set conversions when you Import a dump file. The first one is performed by Import executable from the character set of the dump file (which is equal to NLS_LANG of Export session) to the character set of NLS_LANG of the Import session. The second conversion from the Import NLS_LANG to the database character set is performed by SQL*Net.
    Here is more information in the docs:
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14215/exp_imp.htm#i1021901
    ~Dietmar.

  • Character set error during startup

    Hi all
    This is a follow up of the following problem: Paralel Install of Ora8i and Ora9i
    Please read it before continue.
    This seems to be a database problem, so that I post it here:
    I'm making progress with my problem. It seems to be a problem with versions in the startup routine. When I try to start the instances with the 9i dbstart tool, all goes well.
    Now I have changed the oracle script in /etc/init.d in such a way that it explicitly starts the 9i dbstart tool. The instances now come up normally, except for the following error during mount of the instance:
    ORA-12709: error while loading create database character set.
    To me it looks like a mistake made during the creation of the database, or perhaps it is again a version problem.
    Any clue around for this?
    Greetings
    Salvatore

    Ok Here the answers:
    1) Error during startup
    2) Error during startup
    3) Error during startup
    The Oracle environment seems to be OK. I can use all variables in the oracle script during OS boot.
    I suspect that something went wrong during database creation. But in the Oracle error message documentation I read "contact Support Services", so I'm unsure what to do. AFAIK we don't have a support contract, nor can I use any paid service.

  • Character set migration error to UTF8 urgent

    Hi
    When we migrated from the AR8ISO8859P6 character set to UTF8, we ran into one error: when I try to compile one package through Forms, I get the error "program unit pu not found".
    When I run the source code of that procedure directly from the database using SQL*Plus, it runs without any problem. How can I migrate these forms from AR8ISO8859P6 to UTF8? We migrated from an Oracle 8.1.7 database with AR8ISO8859P6 to an Oracle 9.2 database with the UTF8 character set (Windows 2000); the export and import completed without any error.
    I am using Oracle 11i, which calls Forms 6i and Reports 6i.
    with regards
    ramya
    1) This is a server-side program. When connecting through Forms I get the error; when I run the program directly in SQL it works, but when I compile it I get this error.
    3) Yes, I am using 11i (11.5.10), which calls Forms 6i and Reports. Why is this a problem in Forms? Is there any NLS_LANG setting that needs to change in Forms?
    with regards

    Hi Ramya
    What I understand from your question is that you are trying to compile a procedure from a Forms interface on the client side?
    If yes, you should check the code in the form that calls the compilation package.
    Does it contain strings that might be affected by the character set change?
    Tony G.

  • Character set error oracle 10g

    I have a 10g TARGET database with a single-byte Western European character set and a 9i SOURCE database with the multibyte UTF8 character set. Since the character sets are different, to load data from 9i to 10g I am using national character set NCHAR columns on the target database to store the multibyte data.
    This is the table I am working on loading:
    CREATE TABLE RAN_TEST1_MDL
    ( MODEL_ID NUMBER(15) NOT NULL,
    PRODUCT_ID NUMBER(15) NULL,
    MODEL_CODE NVARCHAR2(540) NULL,
    ODM_CODE NVARCHAR2(900) NULL,
    MODEL_DESC NVARCHAR2(1200) NULL )
    tablespace csn_d_01 LOGGING NOCOMPRESS NOCACHE NOPARALLEL MONITORING
    The table is test table on oracle 10g database .
    This is the query I am running
    INSERT /*+append*/ INTO WORK_HIER_MDL(
    MODEL_ID,
    PRODUCT_ID,
    MODEL_CODE,
    ODM_CODE,
    MODEL_DESC
    )
    SELECT
    MODEL_ID,
    PRODUCT_ID,
    MODEL_CODE,
    ODM_CODE,
    MODEL_DESC
    FROM SHLD_HIER_MDL
    SHLD_HIER_MDL is the source table from the Oracle 9i multibyte UTF8 database.
    WORK_HIER_MDL is the target table on the Oracle 10g single-byte Western European database.
    Error: ORA-29275: partial multibyte character
    When I describe the source table SHLD_HIER_MDL (on 9i Oracle, accessed through a DB link) I get the following error:
    ORA-01460: unimplemented or unreasonable conversion requested
    I think ORA-29275 and ORA-01460 are correlated. Can anyone suggest what could be causing this? Thanks

    Error: ORA-29275 (ORA-29275)
    Text: partial multibyte character
    Cause: The requested read operation could not complete because a partial
         multibyte character was found at the end of the input.
    Action: Ensure that the complete multibyte character is sent from the
         remote server and retry the operation. Or read the partial
         multibyte character as RAW.
    You can export the table and import it on 10g. Rename the table, create your test table and use IAS.

  • Error " conversion error between two character sets" in PI MONI

    Hi Experts
    I am doing a file-to-IDoc scenario. I am getting the following error in PI MONI: "conversion error between two character sets".
    Please suggest how I can solve the issue.
    Thanks in advance.

    Hi Mickael
    Below is the complete error message found in PI MONI.
    <SAP:Error SOAP:mustUnderstand="1" xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/">
      <SAP:Category>XIServer</SAP:Category>
      <SAP:Code area="INTERNAL">SYSTEM_DUMP</SAP:Code>
      <SAP:P1 />
      <SAP:P2 />
      <SAP:P3 />
      <SAP:P4 />
      <SAP:AdditionalText />
      <SAP:Stack>PI Server : XBTO80__0000 : Conversion error between two character sets.</SAP:Stack>
      <SAP:Retry>M</SAP:Retry>
      </SAP:Error>

  • Character set Conversion Buffer Overflow Error

    Hi,
    I have an issue while loading data from a flat file into a staging table: "Character set Conversion Buffer Overflow". Suppose there are 10,000 records in the flat file; after running the control file, only 100+ records are loaded into the staging table. The rest error out. I think there is no issue with the control file, because when I load data from a different flat file containing the same number of records as the previous one, all the records load. What could be the reason for, and solution to, this issue?
    Can anyone please suggest how to resolve it?

    DBMS_OUTPUT is a poor choice for debugging. It has very limited use. And as you've discovered, merely debugging code can now result in new exceptions in the code.
    The proper approach would be to create your own debug procedure (or package). Have your code call this instead of DBMS_OUTPUT.
    In your debug procedure, you can decide what you want to do with that debug data for that specific program in the current environment and circumstances.
    The program that runs could be a DBMS_JOB in which case DBMS_OUTPUT is useless. The program can be called several layers deep from other PL/SQL code.. and you want to know just who is calling your code. Etc.
    Having your own debug procedure allows you to:
    - create an autonomous transaction and log the debug data to a log table
    - write it to a DBMS_PIPE for interactive debugging
    - write it to DBMS_OUTPUT
    - record the PL/SQL call stack to determine who is calling who
    - record the current session's environment (e.g. session_context)
    - record the current session's statistics, opens cursors, current SQL, etc. (courtesy of the V$ views)
    etc. etc.
    In other words, your debug procedure gives you the flexibility to decide on HOW to handle the debugging.
    And when your code goes into production, your debug procedure ships with it, containing a simple NULL command. Which means that at any time the DBA can (when the need arises) add his/her debug methods into it in order to trace a production problem.
    Using DBMS_OUTPUT is a very poor, and often just wrong, choice.
    It is fine for writing a quick test. But when you are developing production code and using DBMS_OUTPUT, you must ask yourself whether you have made the right choice.
    And this is not just about wrapping DBMS_OUTPUT. But also wrapping other system calls like RAISE_APPLICATION_ERROR and so on.

  • Error Connecting to MySql (character set index '49')

    I downloaded the Oracle SQL Developer extensions and went to Tools / Preferences / Third Party and set the location of the jar file.
    When I try to test the connection to the MySQL database I get this error:
    An error was encountered performing the requested operation:
    Unknown initial character set index '49' received from the server.
    Initial client character set can be forced via the 'characterEncoding' property.
    Vendor code 0
    My DBAs used SQL Developer to connect to the same MySQL database fine. If I use MySQL Workbench, I can connect. (I prefer SQL Developer, so I was hoping to get this connection to work.)
    I even tried going to MySQL to get their Connector/J JDBC jar file, but got the same results.
    Any suggestions?
    Thanks,
    Mike

    Are your colleagues using the same sqldev/JDBC versions? Which?
    Did you follow the guide on setting it up, and the driver from http://dev.mysql.com/downloads/connector/j/ ?
    K.
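
    For what it's worth, the 'characterEncoding' property the error message mentions is a standard Connector/J URL parameter, so it can also be tested outside SQL Developer with a few lines of plain JDBC. This is only a sketch; the host, database name and credentials below are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class MySqlCharsetTest {
        public static void main(String[] args) throws Exception {
            // Connector/J 5.x may also need: Class.forName("com.mysql.jdbc.Driver");
            String url = "jdbc:mysql://localhost:3306/mydb"
                       + "?useUnicode=true&characterEncoding=UTF-8";
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                System.out.println("Connected: " + conn.getMetaData().getDatabaseProductVersion());
            }
        }
    }

    If this connects, the same characterEncoding value is worth trying on the SQL Developer side.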

  • Character set error comming in  stored procedure

    I am writing the SQL below; when I execute it, it gives me an error.
    SQL:
    SELECT DocDate,
    NVL(SMTransType.Description, '<Not Provided>') DESCRIPTION,
    CASE
    WHEN DocType = 'GRN' THEN ( SELECT GrnNo
    FROM IVGrnMst
    WHERE ID = DocID )
    END "ReceiptNo",
    CASE
    WHEN DocType = 'PUR' THEN ( SELECT PrNo ---- its data type VARCHAR2(50)
    FROM IVPurReturnMst
    WHERE ID = DocID )
    WHEN DocType = 'SCP' THEN ( SELECT ConsumptionCode --its data type NVARCHAR2(50)
    FROM IvStoreConsumptionMst
    WHERE ConsumptionId = DocID )
    END "IssueNo"
    FROM IVStockRegister Stock INNER JOIN
    SMTransType
    ON Stock.DocType = SMTransType.Code;
    SQL SQL Error: ORA-12704: character set mismatch
    12704. 00000 - "character set mismatch"

    Are you sure your SELECTs in the CASE (after WHEN) return a single value?
    Try this, please:
    SELECT DocDate,
      NVL(SMTransType.Description, '<Not Provided>') DESCRIPTION,
      CASE
        WHEN DocType = 'GRN'
        THEN
          ( SELECT MIN(GrnNo) FROM IVGrnMst WHERE ID = DocID )
      END "ReceiptNo",
      CASE
        WHEN DocType = 'PUR'
        THEN
          (SELECT MIN(PrNo) ---- its data type VARCHAR2(50)
          FROM IVPurReturnMst
          WHERE ID = DocID )
        WHEN DocType = 'SCP'
        THEN
          (SELECT MIN(ConsumptionCode) --its data type NVARCHAR2(50)
          FROM IvStoreConsumptionMst
          WHERE ConsumptionId = DocID )
      END "IssueNo"
    FROM IVStockRegister Stock
    INNER JOIN SMTransType
    ON Stock.DocType = SMTransType.Code;

  • Character set not supported!! DBConversion error. Need Help!

    I installed JDeveloper 9.2 on Windows 2000. When I try to make a database server connection using the OCI8 driver, it gives me a "Character set not supported DBConversion error". After that I tried to connect to the database using the Thin driver, and everything is OK. I really need your help.
    I used the database connection wizard to make the connection. For the connection type I selected Oracle (JDBC), and for the driver I selected OCI8. When I tried to test the connection, it gave me the above error. I noticed that JDeveloper did not talk to the database server at all: I tried to connect to a database server which does not exist, and it still gave me the same error.

    Just ran into this myself yesterday.
    There is a Metalink note on it:
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showFrameDocument?p_database_id=NOT&p_id=190281.1
    Problem Description
    You are using OC4J and trying to connect to a database using JDBC OCI and
    are getting:
    "java.sql.SQLException: Character Set Not Supported !!: DBConversion"
    Solution Description
    Replace the <OC4J_HOME>\jdbc\lib\classes12dms.jar with
    <ORACLE_HOME>\jdbc\lib\classes12.jar and rename it with classes12dms.jar.
    Explanation
    It seems there is a mismatch of classes12.zip supplied with OC4J 9.0.2
    and the Oracle9i client libraries ocijdbc8.dll or ocijdbc8.so.
    OC4J 9.0.2 does not use jdbc\lib\classes12.jar instead it uses
    jdbc\lib\classes12dms.jar. So, in order to use the 9.0.1 client with OC4J, you
    will need to make a copy of classes12.jar and rename it to classes12dms.jar
    References
    [NOTE:108876.1] Creating Connection gives "No ocijdbc8 in java.library.path"
    [NOTE:174808.1] JDev9i and OCI Connections
    I copied and renamed the jar (classes12.jar) as they stated.
    Note: it should be in the directory you set in JDev.conf; mine is
    AddNativeCodePath D:\OraNT\9iDS\bin
    I didn't try the other reply's suggestion of setting an environment variable.
