Cfsqltype mismatch in CF8 with Oracle?

I'm running into a strange problem, and I'm wondering if
anyone else has noticed the same. I have a query that is returning
zero results in ColdFusion, but the same query returns 22 results
via the database.
The problem appears to be the cfsqltype. The database column
is data type CHAR(5). When I specify the cfsqltype as cf_sql_char,
the query returns zero results. When I specify the cfsqltype as
cf_sql_integer, the query returns 22 results.
Has anyone else run into this?

cherdt wrote:
> I'm running into a strange problem, and I'm wondering if anyone else has
> noticed the same. I have a query that is returning zero results in ColdFusion,
> but the same query returns 22 results via the database.
>
> The problem appears to be the cfsqltype. The database column is data type
> CHAR(5). When I specify the cfsqltype as cf_sql_char, the query returns zero
> results. When I specify the cfsqltype as cf_sql_integer, the query returns 22
> results.
Your code is exactly the opposite of your explanation. However, you
should pad your value with trailing spaces to make sure it is exactly the
correct number of characters (y = 5 here, for a CHAR(5) column), i.e.:
<cfqueryparam
    value="#Left(Trim(value) & RepeatString(' ', y), y)#"
    cfsqltype="cf_sql_char"
    maxlength="#y#">
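The underlying cause is most likely Oracle's comparison semantics: a CHAR(5)
column is blank-padded, and a bind variable sent as VARCHAR2 uses nonpadded
comparison, so '123' never matches '123  '. A minimal SQL*Plus sketch,
assuming a hypothetical table T with a CHAR(5) column CODE holding '123  ':
select count(*) from t where code = '123';   -- character literal: blank-padded semantics, matches
variable v varchar2(5)
exec :v := '123'
select count(*) from t where code = :v;      -- VARCHAR2 bind: nonpadded semantics, 0 rows
select count(*) from t where code = 123;     -- numeric: implicit TO_NUMBER(code), matches
The cf_sql_integer version only "works" through that implicit TO_NUMBER, and
it will fail with ORA-01722 as soon as a non-numeric value lands in the column.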
Jochem
Jochem van Dieten
Adobe Community Expert for ColdFusion

Similar Messages

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load the Oracle Text index into memory: http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application (Oracle 12c) we are indexing a big XML field (stored as XMLType with SecureFile storage) with the PATH_SECTION_GROUP. If I don't load the I table (DR$..$I) into memory using the technique explained in the white paper, then I cannot get decent performance, and especially not predictable performance: it looks like if the blocks from the TOKEN_INFO column are not in memory, performance can fall sharply.
    But after migrating to Oracle 12c I got a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size), and by applying the technique from the white paper I can pin the DR$ I table into memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD'), the size becomes much bigger and I can't pin the index in memory anymore. I am not sure whether it is a bug or not.
    What I found as a work-around is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the token_info column will be stored in a SecureFile. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure to read the LOB so that it will be loaded into the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure to read the LOB and load it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (S table) and an IOT on top of it. This is not documented in the white paper (the white paper was written for Oracle 10g). In my case this DR$ S table is heavily used, and the IOT as well, but putting them in the keep cache is not as important as the TOKEN_INFO column of the DR$ I table. A final note: setting SEPARATE_OFFSETS = 'YES' was very bad in my case; the combined size of the two columns is much bigger than having only the TOKEN_INFO column, and both columns are read.
    Here is an example of how to reproduce the problem of the size increasing when doing ctx_ddl.optimize_index:
    1. create the table
    drop table test;
    CREATE TABLE test
      (ID NUMBER(9,0) NOT NULL ENABLE,
       XML_DATA XMLTYPE)
      XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
      exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP', 
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says:
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size, it says
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576 KB to 8.44 MB. With a big index the relative difference is not as large, but it still went from 14 GB to 19 GB in our case.
    6. Workaround: use the BIG_IO option, so that the token_info column of the DR$ I table will be stored in a SecureFile and the size will stay relatively small. Then you can load this column into the cache using a procedure similar to
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem now we must read the LOB so that it will be loaded into the keep buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2 c_type;
      s varchar2(2000);
      b blob;
      buff raw(100);   -- dbms_lob.read on a BLOB fills a RAW buffer
      siz number;
      off number;
      cntr number;
    begin
        s := 'select token_info from DR$i_test$I';
        open c2 for s;
        loop
           fetch c2 into b;
           exit when c2%notfound;
           siz := 10;
           off := 1;
           cntr := 0;
           if dbms_lob.getlength(b) > 0 then
             begin
               loop
                 -- read a few bytes out of each 4K chunk so that every
                 -- block of the LOB is touched and pulled into the keep pool
                 dbms_lob.read(b, siz, off, buff);
                 cntr := cntr + 1;
                 off := off + 4096;
               end loop;
             exception when no_data_found then
               -- dbms_lob.read raises NO_DATA_FOUND once we read past the end
               if cntr > 0 then
                 dbms_output.put_line('4K chunks fetched: '||cntr);
               end if;
             end;
           end if;
        end loop;
        close c2;
    end;
    /
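    To run it (a sketch, assuming the procedure above is compiled in the index owner's schema):
    set serveroutput on
    exec loadTokenInfo
    Afterwards V$BUFFER_POOL_STATISTICS gives an idea of how full the keep pool is.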
    Rgds, Pierre

    I have been working a lot on that issue recently; I can give some more info.
    First, I totally agree with you: I don't like to use the keep pool and I would love to avoid it. On the other hand, we have a specific use case: 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter. All those processes are probably filling the buffer cache. We have a customer-facing application that uses the text index to search the database: performance is critical for them.
    What kind of performance do you have with your application?
    In my case, I have learned the hard way that having the index in memory (the DR$I table, in fact) is the key: if it is not, performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors this is what they do. MongoDB explicitly says that the index must be in memory. Elasticsearch uses JVMs that are also in memory. And effectively, if you look at an AWR report, you will see that Oracle is continuously accessing the DR$I table; there is a SQL statement similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    which runs continuously.
    I think that the algorithm used by Oracle to keep blocks in cache is too complex. I just realized that in 12.1.0.2 (released last week) there is finally a "killer" feature, the in-memory parameters, with which you can pin tables or columns in memory, with compression, etc. This looks ideal for the text index; I hope that R. Ford will finally update his white paper :-)
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size: it seems crazy, but this was closed as "not a bug" and I can't do anything about it. It is a bug in my opinion, because the create index command and the "alter index rebuild" command both result in a much smaller index, so why would the people who developed the optimize function (is it another team, using another algorithm?) make the index two times bigger?
    And for that, the track I have been following is to put the index in a 16K tablespace: in this case the space used by the index remains more or less flat (it increases, but much more reasonably). The difficulty here is to pin the index in memory, because the trick of R. Ford no longer works.
    What worked:
    First set the keep pool to zero and set db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) goes into the tablespace with the non-standard block size of 16K.
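    A minimal sketch of that setup; the tablespace name TS_16K, the 4G cache size and the use of OMF are assumptions, not values from the original post:
    alter system set db_keep_cache_size = 0 scope=both;
    alter system set db_16k_cache_size = 4G scope=both;
    create tablespace TS_16K blocksize 16K datafile size 10G;  -- assumes db_create_file_dest is set
    exec ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace TS_16K storage (initial 64K)');
    The index then has to be recreated (or rebuilt) to pick up the changed storage preference.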
    Then comes the tricky part: pre-loading the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means that it bypasses the cache and reads directly from file into the PGA! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which one, sorry for the missing credit).
    I ended up doing the following; event 10949 avoids the direct path read issue.
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT), SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT), SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT), SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
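    A quick sanity check that the blocks really are cached, via V$BH (the LIKE pattern is illustrative):
    select count(*)
      from v$bh
     where objd in (select data_object_id
                      from dba_objects
                     where object_name like 'DR#IDXNAME%$I');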
    It worked. With great relief I expected to take some time off, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is described almost exactly in Metalink note 1645634.1, but for a non-partitioned index. The work-around given seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (this enables intra-partition parallelism in the index creation process). This is a very annoying and stupid bug; maybe there is a work-around, but I did not find it on Metalink.
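    For reference, the kind of call that triggers the problem would look like this; the names and job counts are placeholders, only the parameter names come from the DBMS_PCLXUTIL documentation:
    exec dbms_pclxutil.build_part_index(jobs_per_batch => 2, procs_per_job => 2, tab_name => 'MYTAB', idx_name => 'IDXNAME', force_opt => FALSE);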
    Other points of attention with the text index creation (stuff that surprised me at first!):
    - if you use the dbms_pclxutil package, then the ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_job.
    - this, in combination with the fact that on a RAC you may not see any activity on the box, can be very frightening: Oracle can choose to start the workers on the other node.
    I understand much better now how the text indexing works; I think it is a great technology which can scale via partitioning. But as always, the design of the application is crucial: most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the devs to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, even though we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also, the whole problem of mixed structured/unstructured searches is completely addressed if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...)
    Regards, Pierre

  • Help with Oracle CLOB

    I have CF8 and Oracle 10g. I am trying to write a cfc for use
    by a Flex3 application over the AMF channel. What should the
    ReturnType be for the retrieve function? If I use Query, and return
    the CLOB as a QueryObject, I can easily put the CLOB into a
    datagrid. But I really want to put the CLOB into a RichTextEdit
    control, allow the user to edit, and call an update function to put
    it back in the CLOB. I'm just testing right now; the real
    application will use a cfc to pull back a lot of data to populate a
    form that will have seven or eight CLOBs for different remark
    areas.
    I have posted this question here because there isn't a
    separate forum for questions from the intersection of CF and Flex.
    Possibly Acrobat forms would be a better solution than Flex
    for the UI, but I have no experience using Acrobat to do the data
    retrieve and update.
    Thank you for any advice.
    Scott

    What confused me was that CF .cfcs are the middleman here, and, just like you said, you either use the CLOB type or you don't, depending on which way you are facing. Here are two simple examples from my .cfc that work: a Retrieve function and an Update function. The description is the CLOB, but the .cfc only uses the CLOB type when the data is going back into Oracle.
    <cffunction name="RetrieveDescription" access="remote"
    returntype="string" >
    <cfargument name="PK" required="true" type="string">
    <cfquery name="RetrieveDescription"
    datasource="dSource">
    select DESCRIPTION
    from DRDESCRIPTION
    where CONTROLNUMBER=<cfqueryparam
    cfsqltype="cf_sql_varchar" value="#arguments.PK#" />
    </cfquery>
    <cfreturn #RetrieveDescription.DESCRIPTION#/>
    </cffunction>
    <cffunction name="UpdateDescription" access="remote"
    returntype="string" >
    <cfargument name="newDescription" required="true"
    type="string">
    <cfargument name="descriptionUpPK" required="true"
    type="string">
    <cfquery name="UpdateDescription"
    datasource="dSource">
    update DRDESCRIPTION
    set DESCRIPTION = <cfqueryparam cfsqltype="cf_sql_CLOB"
    value="#arguments.newDescription#" />
    where CONTROLNUMBER=<cfqueryparam
    cfsqltype="cf_sql_varchar" value="#arguments.descriptionUpPK#"
    />
    </cfquery>
    <cfset msg = 'The DR was updated.'>
    <cfreturn msg/>
    </cffunction>

  • Error while installing Oracle Apps server 10.1.3 with Oracle DB 11.2.0.2

    Error while installing Oracle Apps server 10.1.3 with Oracle DB 11.2.0.2 residing on the same server and being used by the Apps server for its metadata.
    bash-3.00$ export ORACLE_HOME=/data/ora11g/app/ora11g/product/11.2.0
    bash-3.00$ cd /data/OAS/install/soa_schemas/irca/
    bash-3.00$ ./irca.sh
    Integration Repository Creation Assistant (IRCA) 10.1.3.1.0
    (c) Copyright 2006 Oracle Corporation. All rights reserved.
    ERROR: Cannot find library - /data/ora11g/app/ora11g/product/11.2.0/jdbc/lib/ojdbc14.jar
    Please verify that the ORACLE_HOME is set correctly.
    bash-3.00$

    Hi Craig,
    Database 11gR2 can be used for installing Application Server 10.1.3.x, but with some limitations.
    So please review Note 887365.1 - Oracle Database 11g Release 2 (11.2) Certification for Oracle Application Server 10g (10.1.2, 10.1.3, 10.1.4),
    Section: Oracle Application Server 10g Release 3 (10.1.3)
    Regards,
    Praaksh.

  • Install Solution Manager in RedHat Enterprise 6 with Oracle 11

    Hello ,
    In the marketplace-PAM,  RedHat Enterprise 6 is not listed as approved for EHP1 Solution Manager 7.0 64 bit (unicode) with Oracle.
    Does anyone have any information on this subject?
    Has anyone installed the Solution Manager 7.0/Oracle with RedHat Enterprise 6?
    tks

    Yes, you are correct. Apart from the PAM, the following note confirms this: RHEL 6 is not yet supported for Oracle; support is in the planning stage.
    Note 1565179 - SAP software and Oracle Linux
    Thanks

  • Unable to boot bankapp servers with Oracle 8.1.7 in windows2000

    Hello, I tried to run the bankapp examples with Oracle 8.1.7 on Windows 2000. But when
    I booted the servers using tmboot, there were some errors.
    (1) I used the following RM entry:
    Oracle_XA;xaosw;D:\Oracle\Ora81\precomp\lib\msvc\oraSQL8.lib D:\Oracle\Ora81\precomp\lib\msvc\oraSQX8.lib D:\Oracle\Ora81\RDBMS\xa\ORAXA8.lib
    (2) The OPENINFO string in the ubbshm file is:
    DEFAULT:TMSNAME=TMS_ORA TMSCOUNT=2 LMID=SITE1
    BANKB1     GRPNO=1 OPENINFO="Oracle_XA:Oracle_XA+Acc=P/scott/tiger+SesTm=100+LogDir=.+MaxCur=5"
    (3) The following is the error message from when I booted the servers:
    .. 174841.GLOBALDB!BALC.852.2572.0: 07-26-2001: Tuxedo Version 8.0 32-bit Windows.
    174841.GLOBALDB!BALC.852.2572.0: LIBTUX_CAT:262: INFO: Standard main starting
    174841.GLOBALDB!BALC.852.2572.0: LIBTUX_CAT:466: ERROR: tpopen TPERMERR xa_open
    returned XAER_RMERR
    174841.GLOBALDB!BALC.852.2572.0: tpsvrinit: failed to open database due to
    174841.GLOBALDB!BALC.852.2572.0: tpopen failed, tperrno: 16
    174841.GLOBALDB!BALC.852.2572.0: LIBTUX_CAT:250: ERROR: tpsvrinit() failed
    174841.GLOBALDB!tmboot.480.548.-2: CMDTUX_CAT:825: ERROR: Process BALC at SITE1
    failed with /T tperrno (TPERMERR - resource manager error) ...
    Can anyone help? Thanks a lot!
    Best Regards
    Lily

    I found the answer myself in an earlier post.
    Originally posted by The Oracle Reports Team:
    > Reports 6i will connect to Oracle8i, however you need to connect over Net8 - i.e. on the database side, you need to set up the TNS listener to accept connections, and on the Reports side (which, incidentally, needs to be installed in a separate Oracle_home) you need to configure the tnsnames.ora file to connect to the 'remote' database (since it is in a different oracle_home, to all intents and purposes it may as well be on another machine - it's all considered 'remote').
    > Regards
    > The Oracle Reports Team http://technet.oracle.com
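    As an aside on the original question: xa_open returning XAER_RMERR against Oracle is classically caused by the XA views not being installed for the connecting user. A sketch of the usual checks, assuming the stock script location (run as SYS):
    @D:\Oracle\Ora81\rdbms\admin\xaview.sql
    grant select on v$xatrans$ to scott;
    Also verify that the Acc=P/scott/tiger credentials in the OPENINFO string can actually connect.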

  • Registering a Partner application with Oracle SSO 10gR2

    Hi Everybody
    I'd like to ask a question around registering a partner application with Oracle SSO.
    I have entered my home_url, logout_url and cancel_url e.g. home_url is https://vevopuitest1.co.uk/vevo_test1 and so on for the other fields.
    When I save the details some information is automatically created e.g. Site Id, Site Token etc.
    The bit that I am particularly interested in are the fields Single Sign-On URL and Single Sign-Off URL.
    For my purposes these fields are respectively: https://cwassotest1.co.uk/pls/orasso/orasso.wwsso_app_admin.ls_login and https://cwassotest1.co.uk/pls/orasso/orasso.wwsso_app_admin.ls_logout
    My questions are:
    1. Where do these values come from?
    2. Can I view them anywhere, say, in Oracle Directory Manager or using ldif queries?
    I would like to be able to verify these values.
    Many Thanks
    Andy

    I'm afraid this won't answer your question completely, but AFAIK in principle it does not matter on which machine SSO is running, as long as it passes the user id and credentials properly through the HTTP Header. Even more: in practice it is very common to have SSO running on a different machine than where your app runs.
    So what I would do is find out how to use ADF Faces with SSO. Perhaps someone else can provide pointers on that.
    Jan Kettenis

  • Forms 6.0 how to query clob column with oracle 9.2 DB

    Hi everybody,
    I installed an Oracle 9.2 database and everything went OK, but when I execute a query in my Forms 6.0 form that has a CLOB column, the form closes automatically without any message.
    And just so you know: when I run the same form against an Oracle 8.1.7 database, the form queries normally without any problem.
    I would appreciate your help.

    I know there was a problem in 6i where you would get a crash if your query returned more than {Max Length} characters of the field representing the CLOB column.
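    If that limit is the cause here, one hypothetical workaround is to base the block on a view that truncates the CLOB to something below the item's Max Length; the table and column names below are illustrative only:
    create or replace view my_table_v as
    select id,
           dbms_lob.substr(my_clob, 2000, 1) as my_clob_text
      from my_table;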

  • Using Java Access bridge (Accessibility) with oracle forms 6.0 client

    I'm trying to use JAB (Java Access Bridge) to capture events in an Oracle Forms 6.0 client.
    I have Jinitiator 1.1.8 and JRE version 1.4x.
    I've configured JAB as per the installation guide. However, the events don't surface in Java Monkey.
    Has anybody encountered a similar issue? What is the solution?
    Also, on one of the forums I read that Jinitiator 1.3x and above is automatically recognised by Java Access Bridge.
    For Jinitiator versions below 1.3 manual configuration is required; however, I haven't been able to find any such configuration notes in the Oracle Forms KB.
    Also here is the excerpt from the link http://www.oracle.com/us/corporate/accessibility/faqs/index.html
    Q: Are there special steps for using Java-based applications with assistive technology?
    A: If the Oracle application is written in Java, such as JDeveloper or Oracle Forms (runtime), customers must first install the latest version of Sun's Java Access Bridge. The Java Access Bridge provides the integration with screen readers such as JAWS or SuperNova that support Java. You just download the Access Bridge and install it. Sun's AccessBridge 2.0x recognizes Oracle's JInitiator 1.3x and above so no manual configuration steps are necessary. The Access Bridge is available from: http://java.sun.com/products/accessbridge. At the time this document was written, Access Bridge 2.0.1 is the most current publicly available production release; Oracle recommends upgrading to this version. Sun's AccessBridge is bundled with Oracle Universal Installer (OUI) and can be found in the Java Runtime Engine (JRE). More information for configuring such products is in the respective product documentation, or on http://www.oracle.com/us/corporate/accessibility/products/index.html.

    ODP.NET requires Oracle Client 9.2 or higher.
    You can find additional information about ODP.NET from the FAQ:
    http://www.oracle.com/technology/tech/windows/odpnet/faq.html
    and the ODP.NET homepage:
    http://www.oracle.com/technology/tech/windows/odpnet/index.html
    Hope that helps,
    Mark

  • Using long vs. clob datatype with Oracle 8.1.7 and interMedia

    I am trying to determine the best datatype to use for a column I
    wish to search using the interMedia contains() function. I am
    developing a 100% Java application using Oracle 8.1.7 on Linux.
    I'd prefer to use the standard JDBC APIs for PreparedStatement,
    and not have to use Oracle extensions if possible. I've
    discovered that there are limitations in the support for LOBs
    in Oracle's 100% Java driver. The PreparedStatement methods
    like setAsciiStream() and setCharacterStream() are documented to
    have flaws that may result in the corruption of data. I have
    also noticed that socket exceptions sometimes occur when a large
    amount of data is transferred. If I use the long datatype for
    my table column, the setCharacterStream() method seems to
    transfer the data correctly. When I try to search this column
    using the interMedia contains() function, I get strange
    results. If I run my search on Oracle 8.1.6 for Windows, the
    results seem to be correct. If I run the same search on Oracle
    8.1.7 for Linux, the results are usually incorrect. The same
    searches seem to work correctly on both boxes when I change the
    column type from long to clob. Using the clob type may not be
    an option for me since the standard JDBC APIs to transfer data
    into internal CLOB fields are broken, and I may need to stick
    with standard JDBC APIs. My customer wishes to purchase a
    version of Oracle for Linux that will allow us to implement the
    search capability he requires. Any guidance would be greatly
    appreciated.

    I've finally solved it!
    I downloaded the following jre from blackdown:
    jre118_v3-glibc-2.1.3-DYNMOTIF.tar.bz2
    It's the only one that seems to work (and god, have I tried them all!)
    I've no idea what the DYNMOTIF means (apart from being something to do with Motif - but you don't have to be a linux guru to work that out ;)) - but, hell, it works.
    And after sitting in front of this machine for 3 days trying to deal with Oracle's frankly PATHETIC install, which is so full of holes and bugs, that's all I care about..
    The one bundled with Oracle 8.1.7 doesn't work with Red Hat Linux 6.2EE.
    Doesn't Oracle test its software?
    Anyway I'm happy now, and I'm leaving this in case anybody else has the same problem.
    Thanks for everyone's help.

  • Installation Problem in SAPNW7.0 with Oracle on RHEL5.4

    Hi Everyone,
    I am installing SAP NetWeaver 7.0 SR3 with Oracle 10.2.0.4 on the RHEL 5.4 operating system. I have installed Java
    and Oracle 10.2.0.4 successfully. I set the environment variables in .bash_profile and the kernel parameters with their values in /etc/sysctl.conf, then ran sysctl -p. Later I ran the ./sapinst script from /NW7.0/
    BS_Installation Master_2005/ IM_LINUX_x86_64. It went fine up to the 24th phase. At the 24th phase, "start instance", I got the error.
    The error message is
    " ERROR 2010-10-10 12:03:03.542
    CJS-20022  Could not start instance 'DVEBMGS00' of SAP system QAS"
    I have tried my level best to resolve it. Can anyone tell me what the issue could be and how to resolve it?
    Thanks

    startdb.log
    Sun Oct 10 16:45:11 IST 2010
    LOGFILE FOR STARTING ORACLE
    Trying to start QAS database ...
    Sun Oct 10 16:45:11 IST 2010
    checking required environment variables
    ORACLE_HOME  is >/oracle/QAS/102_64<
    ORACLE_SID   is >QAS<
    Sun Oct 10 16:45:11 IST 2010
    check initora
    Sun Oct 10 16:45:11 IST 2010
    check initora
    Sun Oct 10 16:45:11 IST 2010
    checking V2 connect
    TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 10-OCT-2010 16:45:11
    Copyright (c) 1997, 2005, Oracle.  All rights reserved.
    Used parameter files:
    /usr/sap/QAS/SYS/profile/oracle/sqlnet.ora
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (COMMUNITY = SAP.WORLD) (PROTOCOL = TCP) (HOST = quality) (PORT = 1527))) (CONNECT_DATA = (SID = QAS) (GLOBAL_NAME = QAS.WORLD)))
    OK (10 msec)
    tnsping: V2 connect to QAS
    Sun Oct 10 16:45:11 IST 2010
    Connect to the database to check the database state:
    R3trans: connect check finished with return code: 12
    Database not available
    Sun Oct 10 16:45:11 IST 2010
    Shutdown database
    First trying to shutdown the database - May be,
    the database is in the nomount or mount state
    Sun Oct 10 16:45:12 IST 2010
    starting database
    SQL*Plus: Release 10.2.0.1.0 - Production on Sun Oct 10 16:45:12 2010
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    ERROR:
    ORA-09925: Unable to create audit trail file
    Linux-x86_64 Error: 13: Permission denied
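    ORA-09925 with "Permission denied" usually means the directory pointed to by audit_file_dest (by default $ORACLE_HOME/rdbms/audit) is not writable by the OS user starting the instance. A quick check once connected as sysdba:
    show parameter audit_file_dest
    Then verify the OS permissions on that directory.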

  • On Sun fire v490 - Solaris 10 with Oracle 8.1.7.4 & Sybase 12.0

    Hi,
    We are going to upgrade our server with this configuration -
    Sun Fire V490     2 x 1.05 GHz UltraSPARC IV CPU
    8096MB RAM     2 x73GB local disk
    2x FC 2GB Sun/QLogic HBAs
    DAT72
    On one machine we will have Sun Solaris 10 with
    Oracle DB v8.1.7.4, and the second one will be Sun Solaris 10 with Sybase DB v12.0.0.6.
    Now our question is: the Sun Fire has hyper-threaded CPUs; will the O/S and the databases (Oracle and Sybase) view the proposed system as a true 4-CPU platform? Will parameters used to tune the database, such as Sybase max online engines, still operate in the same manner as before?
    Our old machine configuration was: Sun E450, 4x400MHz CPU, 1024MB RAM, 2x18GB and 8x36GB disks.

    Questions on Oracle and Sybase should be directed to a database forum; this forum is for Sun hardware support.
    Here is a link to a DB forum I look at from time to time:
    http://www.dbforums.com/index.php
    The topic of tuning Oracle or Solaris is way beyond the scope of this forum; I have attempted to go into it before but didn't get any feedback, and I would only like to spend lots of time on it if I were being paid!!! On the memory side, keep in mind that Oracle 9i 64-bit can address a maximum of 2^64 bytes (16777216 TB) of memory; prior to that, the DBA had to define memory parameters in init.ora. To be honest, the last time I worked with an Oracle 8 database I permanently shut down an HP K-class server that had been migrated to Oracle 9i on Solaris by an Oracle consultant, and I can't remember all the tuning tricks etc.

  • Installation problem under AIX 5.3 with Oracle 10g

    Hello,
    I started an installation on an AIX 5.3 machine with Oracle 10. I installed Oracle 10.0 and patches up to 10.2.0.2.0. I also installed all the interim patches through the MOPatch utility, and the latest R3trans and R3load for a Unicode installation. But I have 4 errors in phase IMPORT ABAP. In this phase 33 packages completed successfully and 4 packages failed.
    In ImportMonitor.Console.Log i have the following errors:
    Import Monitor jobs: running 1, waiting 4, completed 33, failed 0, total 38.
    Import Monitor jobs: running 2, waiting 3, completed 33, failed 0, total 38.
    Import Monitor jobs: running 3, waiting 2, completed 33, failed 0, total 38.
    Loading of 'SAPSSEXC_4' import package: ERROR
    Import Monitor jobs: running 2, waiting 2, completed 33, failed 1, total 38.
    Loading of 'SAPPOOL' import package: ERROR
    Import Monitor jobs: running 1, waiting 2, completed 33, failed 2, total 38.
    Loading of 'DOKCLU' import package: ERROR
    Import Monitor jobs: running 0, waiting 2, completed 33, failed 3, total 38.
    Import Monitor jobs: running 1, waiting 1, completed 33, failed 3, total 38.
    Loading of 'SAPCLUST' import package: ERROR
    Import Monitor jobs: running 0, waiting 1, completed 33, failed 4, total 38.
    In SAPSSEXC_4.log I have the following errors:
    /usr/sap/BEQ/SYS/exe/run/R3load: START OF LOG: 20071130102907
    /usr/sap/BEQ/SYS/exe/run/R3load: sccsid @(#) $Id: //bas/700_REL/src/R3ld/R3load/R3ldmain.c#14 $ SAP
    /usr/sap/BEQ/SYS/exe/run/R3load: version R7.00/V1.4
    Compiled Oct 20 2007 02:05:46
    /usr/sap/BEQ/SYS/exe/run/R3load -i SAPSSEXC_4.cmd -dbcodepage 4102 -l SAPSSEXC_4.log -stop_on_error
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF8
    (DB) INFO: T512CLU deleted/truncated #20071130102908
    myCluster (63.20.Imp): 655: error when retrieving table description for physical table T512CLU.
    myCluster (63.20.Imp): 656: return code received from nametab is 2
    myCluster (63.20.Imp): 299: error when retrieving physical nametab for table T512CLU.
    (CNV) ERROR: data conversion failed.  rc = 2
    (DB) INFO: disconnected from DB
    /usr/sap/BEQ/SYS/exe/run/R3load: job finished with 1 error(s)
    /usr/sap/BEQ/SYS/exe/run/R3load: END OF LOG: 20071130102908
    Under SAPPOOL I have:
    /usr/sap/BEQ/SYS/exe/run/R3load: START OF LOG: 20071130102907
    /usr/sap/BEQ/SYS/exe/run/R3load: sccsid @(#) $Id: //bas/700_REL/src/R3ld/R3load/R3ldmain.c#14 $ SAP
    /usr/sap/BEQ/SYS/exe/run/R3load: version R7.00/V1.4
    Compiled Oct 20 2007 02:05:46
    /usr/sap/BEQ/SYS/exe/run/R3load -i SAPPOOL.cmd -dbcodepage 4102 -l SAPPOOL.log -stop_on_error
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF8
    (DB) INFO: ATAB deleted/truncated #20071130102908
    failed to read short nametab of table AT01                           (rc=2)
    (CNVPOOL) conversion failed for row 0 of table  VARKEY = ã ±ã ±ã °â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  
    (CNV) ERROR: data conversion failed.  rc = 2
    (DB) INFO: disconnected from DB
    /usr/sap/BEQ/SYS/exe/run/R3load: job finished with 1 error(s)
    /usr/sap/BEQ/SYS/exe/run/R3load: END OF LOG: 20071130102908
    I read Notes 421554 and 898181 and executed the directions from the notes to change R3load and R3trans.
    Do you have any idea how I can proceed with these errors?
    Thank you in advance
    Thanasis Porpodas

    Hi,
    look at SAP Note 921593 and search for myCluster;
    read that section. The following is a part of that note:
    Symptom:
    During the import into a UNICODE system the following error occurs
    (for example in the SAPCLUST.log):
    myCluster (63.2.Imp): 2085: (Warning:) inconsistent field names(source): physical field K1N05 appears as logic K1N5.
    myCluster (63.2.Imp): 2086: (Warning:) further investigation recommended
    myCluster (63.2.Imp): 1924: error when checking key field consistency for logic table TACOPC    .
    myCluster (63.2.Imp): 1927: logic table is canonical.
    myCluster (63.2.Imp): 1930: received return code 2 from c3_uc_check_key_field_descr_consistency.
    myCluster (63.2.Imp): 1224: unable to retrieve nametab info for logic table TACOPC    .
    myCluster (63.2.Imp): 8032: unable to acquire nametab info for logic table TACOPC    .
    myCluster (63.2.Imp): 2807: failed to convert cluster data of cluster item.
    myCluster: CLU4       *00001*
    myCluster (63.2.Imp): 319: error during conversion of cluster item.
    myCluster (63.2.Imp): 322: affected physical table is CLU4.
    (CNV) ERROR: code page conversion failed              rc = 2
    |
    |                              RSCP - Error
    | Error from:            Codepage handling (RSCP)
    | code:  128  RSCPENOOBJ   No such object
    | Dummy module without real rscpmc[34]
    | module: rscpmm  no:    2 line:    75          T100: TS008
    | TSL01: CPV  p3: Dummy-IPC   p4: rscpmc4_init
    `----
    Cause:
    This problem is caused by incorrect data which should have been removed from the source system before the export.
    Solution:
    There are two possible workarounds:
          1. Modify DDL<dbs>.TPL (<dbs> = ADA, DB2, DB4, DB6, IND, MSS, ORA) BEFORE the R3load TSK files are generated;
                  search for the keyword "negdat:" and add "CLU4" and "VER_CLUSTR" to this line.
          2. Modify the TSK file (most probably SAPCLUST.TSK) BEFORE R3load import is (re-)started;
                  search for the lines starting with "D CLU4 I" and "D VER_CLUSTR I" and change the status (i.e. "err" or "xeq") to "ign" or remove the lines.
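    Illustratively (based only on the note's wording; the exact file contents vary per system), the edited lines would look something like:
    in DDL<dbs>.TPL:  negdat: CLU4 VER_CLUSTR ...
    in SAPCLUST.TSK:  D CLU4 I ign
                      D VER_CLUSTR I ign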

  • Which version of Weblogic on Solaris is compatible with Oracle 8.1.7 - Unicode?

    Hi folks,
    We want to upgrade WLS 4.5.1 to one of the latest versions of WLS, but we are also
    planning to upgrade Oracle to 8.1.7 and migrate the character set of the
    database to UTF8 (Unicode),
    so we need to know which versions of WLS are compatible with Oracle 8.1.7
    and Unicode as the character set.
    Thanks in advance.
    Moises Moreno.

    Hi Moises Moreno,
    The latest version of WebLogic Server is 6.1 with Service Pack 1. This version
    supports Oracle 8.1.7 on the major Unix platforms, viz. Solaris (2.6, 2.7, 2.8),
    HP-UX (11.0, 11.0i), Linux 7.1 and AIX 4.3.3, and on the Windows platforms, viz.
    NT with SP5 and 2000.
    BEA jDrivers have multibyte character set support (UTF8).
    Note: WebLogic Server 5.1 with SP10 also supports Oracle 8.1.7.
    FMI : http://www.weblogic.com/platforms/index.html#jdbc
    Thanks & Regards
    BEA Customer Support
    Moises Moreno wrote:
    > Hi folks,
    > We want to upgrade WLS 4.5.1 to one of the latest versions of WLS, but we are also
    > planning to upgrade Oracle to 8.1.7 and migrate the character set of the
    > database to UTF8 (Unicode),
    > so we need to know which versions of WLS are compatible with Oracle 8.1.7
    > and Unicode as the character set.
    > Thanks in advance.
    > Moises Moreno.

  • Need some help about i18n on RedHat 7.1 with Oracle 8.1.7

    Initially I used RedHat 7.0 with Oracle 8.1.7, and everything had worked fine until I upgraded my system to RedHat 7.1.
    I have no idea about the RedHat locale settings. Well, I use NLS_LANG=TRADITIONAL CHINESE_TAIWAN.ZHT16BIG5 to initialize the database. Some characters like '3\' had worked fine. Now '3\' becomes '3\\' after inserting into tables.
    It's an urgent problem! Please help me. I will appreciate your kindness ;>

    Originally posted by Dragoon Chang ([email protected]):
    > Initially I used RedHat 7.0 with Oracle 8.1.7, and everything had worked fine until I upgraded my system to RedHat 7.1.
    > I have no idea about the RedHat locale settings. Well, I use NLS_LANG=TRADITIONAL CHINESE_TAIWAN.ZHT16BIG5 to initialize the database. Some characters like '3\' had worked fine. Now '3\' becomes '3\\' after inserting into tables.
    > It's an urgent problem! Please help me. I will appreciate your kindness ;>
    sorry ... I made a big mistake ... :)
    After connecting directly with SQL*Plus, I found out it's OK when using plain SQL commands.
    Well, the problem comes with PHP ...
    I will try out the PHP side.
