Oracle XE 10.2.0.1.0 – Performance with BIG full-text indexes

I would like to use Oracle XE 10.2.0.1.0 only for full-text searching of files residing outside the database on an FTP server.
Recently I found out that the total size of the files to be indexed is 5 GB.
As I have read somewhere on this forum before, the size of the index should be 30-40% of the indexed text files (with formatted documents like PDF or DOC even less).
Let's say that the CONTEXT index over these files will be 1.5-2 GB.
The number of concurrent users will be 5 at most.
Does anybody have any experience with Oracle XE performance with the CONTEXT index this BIG?
(Oracle XE license limitations: 1 GB RAM and 1 CPU)
Regards.

I have used exactly the same configuration as above, but now with Oracle Database 11g R1 11.1.0.7.0 – Production instead of Oracle 10g XE.
The result is that AUTO_FILTER in Oracle 11g is able to parse Czech characters from the sample PDF file without any problems.
My guess is that the problem with Oracle Text 10g R2 lies either:
1. in embedded fonts, as mentioned in the [documentation | http://download-west.oracle.com/docs/cd/B12037_01/text.101/b10730/afilsupt.htm] (I tried embedding all fonts and the whole character set, but it did not help), or
2. in the character encoding of the text within the PDF documents.
I would like to add that other third-party PDF-to-text converters have similar issues with Czech characters in PDF documents – after text extraction the Czech national characters were displayed incorrectly.
If you have any other remarks, ideas or conclusions please reply :-)
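For anyone who wants to check what the filter actually extracted from a problem PDF, here is a minimal sketch of the kind of test that can be run; it assumes an existing CONTEXT index called DOC_IDX over a table keyed by the value '42' (both names are made up for illustration):
set serveroutput on
declare
  l_plain clob;
begin
  dbms_lob.createtemporary(l_plain, true);
  -- run the stored document through the index's filter chain and return
  -- plain text, so the extracted Czech characters can be inspected
  ctx_doc.filter('DOC_IDX', '42', l_plain, plaintext => true);
  dbms_output.put_line(dbms_lob.substr(l_plain, 2000, 1));
  dbms_lob.freetemporary(l_plain);
end;
/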

Similar Messages

  • Oracle 10g  – Performance with BIG CONTEXT indexes

    I would like to use Oracle XE 10.2.0.1.0 only for full-text searching of files residing outside the database on an FTP server.
    Recently I found out that the total size of the files to be indexed is 5 GB.
    As I have read somewhere on this forum before, the size of the index should be 30-40% of the indexed text files (with formatted documents like PDF or DOC even less).
    Let's say that the CONTEXT index over these files will be 1.5-2 GB.
    The number of concurrent users will be 5 at most.
    I cannot easily test it myself yet.
    Does anybody have any experience with Oracle XE or another Oracle Database edition with a CONTEXT index this BIG?
    Will the Oracle XE hardware resource limitations be sufficient to handle one CONTEXT index this BIG?
    (Oracle XE license limitations: 1 GB RAM and 1 CPU)
    Regards.

    That depends on at least three things:
    (1) what is the range of words that will appear in the document set (a wide range of words = smaller result sets = better performance)
    (2) how precise are the users' queries likely to be (more precise = smaller result sets = better performance)
    (3) how many milliseconds are your users willing to wait for results
    So, unfortunately, you'll probably have to experiment a bit before you'll know...
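    (One practical aid while experimenting: once a trial CONTEXT index exists, its on-disk footprint can be read straight from the dictionary. A rough sketch, assuming the index is named MY_CTX_IDX so its internal tables all start with DR$MY_CTX_IDX:)
    -- total size of the Oracle Text internal tables ($I, $K, $N, $R, ...)
    select round(sum(bytes)/1024/1024) as index_mb
    from   user_segments
    where  segment_name like 'DR$MY_CTX_IDX%';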

  • Unable to full text index the contents in Oracle 11g UCM

    Hi,
    I am new to Oracle UCM 11g.
    I am unable to full-text index the content files that are checked in to Oracle UCM.
    I have added the below entries in config.cfg file:
    SearchIndexerEngineName=OracleTextSearch
    IndexerDatabaseProviderName= SystemDatabase
    AdditionalEscapeChars=-;#
    When I perform the indexing operation using Repository Manager, only the metadata of the content files is indexed; the full text is not indexed.
    What is missing in my Oracle UCM setup that prevents full-text indexing of the content? What configuration do I need so that I can perform a full-text search on the content in Oracle UCM?
    Thanks in Advance
    Dipesh

    Hi Srinath,
    Collection rebuild cycle runs perfectly fine. After enabling tracing for the Indexer and systemdatabase, I got the below info in the log:
    "Finished rebuilding the search index with a total of 123 files successfully indexed. A total of 0 files had a full text index."
    Below are the details of activeindex.hda:
    <?hda version="11gR1-11.1.1.3.0-idcprod1-100505T121221" jcharset=UTF8 encoding=utf-8?>
    @Properties LocalData
    UseImplicitZonedSecurityField=true
    blFieldTypes=
    ActiveIndex=index1
    blDateFormat=M/d{yy}{ h:mm[:ss]{ a}}!mAM,PM!tGMT+05:30
    @end
    @ResultSet SearchCollections
    7
    sCollectionID
    sDescription
    sVerityLocale
    sProfile
    sLocation
    sFlag
    sUrlScript
    TestHost
    !csSearchDefaultSearchCollection
    English-US
    local
    index1
    enabled
    <$URL$>
    @end
    Is it possible that the OracleTextSearch component is missing in Oracle UCM?
    Thanks
    Dipesh
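    (One low-level sanity check, assuming you can connect to the database as the UCM schema owner: see whether any Oracle Text indexes were actually created for the collection and what state they are in. This is just a generic Oracle Text dictionary query, not a UCM-specific tool.)
    select idx_name, idx_table, idx_status
    from   ctx_user_indexes;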

  • Problems using and configuring Oracle 10gR2 database full-text search

    I am having problems trying to set up full-text indexing and search with Universal Content Management (UCM). I followed the Oracle Content Server Installation Guide for Windows at [http://download-west.oracle.com/docs/cd/E10316_01/cs/cs_doc_10/documentation/integrator/install_cserver_win_10en.pdf].
    What I did was:
    1. Modify E:\oracle\ucm\server\config\config.cfg by adding SearchIndexerEngineName=DATABASE.FULLTEXT to the end of the file.
    2. Restart the content server.
    3. Rebuild the search indexing using Repository Manager.
    However, I keep seeing the following error when I query by entering words in the "Full-Text Search" box.
    Unable to retrieve search results. Unable to retrieve search results. Unable to create result set for query 'SELECT IdcColl1.dID, dDocName, dDocTitle, dDocType, dRevisionID, dSecurityGroup, dDocAuthor, dDocAccount, dRevLabel, dFormat, dOriginalName, dExtension, dWebExtension, dInDate, dOutDate, dCreateDate, dPublishType, dRendition1, dRendition2, VaultFileSize, WebFileSize, URL, dFullTextFormat, dFullTextCharset, DocMeta.*
    FROM IdcColl1, DocMeta
    WHERE IdcColl1.dID=DocMeta.dID AND (((CONTAINS(dDocFullText,'test') > 0 ))) ORDER BY dInDate Desc'. ORA-20000: Oracle Text error:
    DRG-10599: column is not indexed
    Some web searches suggested the following (all of which I have tried without resolving the problem):
    1. Publish the schema using Configuration Manager (applet) and then rebuild the index.
    2. Set dDocFullText as a "zone field". This is not possible, because dDocFullText does not show up in the list of fields under "Database" or "DatabaseFullText" for the Search Engine drop-down (when using Zone Fields Configuration).
    3. Reboot the server (did not work either).
    I logged onto the Oracle database and checked the IdcColl1 table. There is indeed no index on the dDocFullText field; there is only one index, on the dID field. The dDocFullText field is a BLOB. The question is: if I am supposed to create an index manually for this field, how would I do it? A web search has not been fruitful in answering this question.
    Here are my server settings.
    For UCM:
    Operating System: Windows 2003 Enterprise
    UCM : 10gR3
    Memory: 1 GB
    Web Server: Apache 2.2.11
    For Oracle:
    Operating System: Windows 2003 Enterprise
    Oracle: 10gR2
    Memory: 1 GB
    Thanks.

    I found out what the problem was: I had to create the role stellent_role, as described in the installation manual. After I created this role and assigned the database user to it, a restart of the Content Server services and a collection rebuild of the index fixed the problem.
    However, I did notice one thing. I checked in 3 PDF files, and when I used Repository Manager to do a collection rebuild, I noticed that in the Indexer Counters the count for Full Text was 0 and the count for Meta Only was 3.
    Does anyone have any ideas? Is there something else that I missed? From reading the installation manual, it was not clear how database full-text indexing/searching would handle PDF files.
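    (For reference, and only as a hedged sketch: with DATABASE.FULLTEXT the content server is supposed to build the Oracle Text CONTEXT index over the dDocFullText BLOB itself during the collection rebuild, so creating it by hand should not normally be needed. If you wanted to see what such an index looks like, it would be along these lines, reusing the format/charset columns visible in the query above; treat the exact parameters as an assumption, not the UCM-generated definition.)
    -- hypothetical index name; the content server's own definition will differ
    create index idccoll1_fulltext on IdcColl1 (dDocFullText)
      indextype is ctxsys.context
      parameters ('filter ctxsys.auto_filter
                   format column dFullTextFormat
                   charset column dFullTextCharset');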

  • Oracle text performance with context search indexes

    Search performance using context index.
    We are intending to move our search engine to a new one based on Oracle Text, but we are running into some bad performance issues when searching.
    Our application allows the user to search stored documents by name, object identifier and annotations (previously set on the objects).
    For example, suppose I want to find a document named ImportSax2.c: depending on the parameters set by the user, our search engine issues the following queries:
    1) If the user explicitly asks for a search by document name, the query is the following one =>
         select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c WITHIN objname' , 1 ) > 0;
    2) If the user doesn't specify any extra parameters, the query is the following one =>
         select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c' , 1 ) > 0;
    Oracle Text needs only around 7 seconds to answer the second query, whereas it needs around 50 seconds to answer the first one.
    Here is a part of the sql script used for creating the Oracle Text index on the column OBJFIELDURL
    (this column stores a path to an xml file containing properties that have to be indexed for each object) :
    begin
    Ctx_Ddl.Create_Preference('wildcard_pref', 'BASIC_WORDLIST');
    ctx_ddl.set_attribute('wildcard_pref', 'wildcard_maxterms', 200) ;
    ctx_ddl.set_attribute('wildcard_pref','prefix_min_length',3);
    ctx_ddl.set_attribute('wildcard_pref','prefix_max_length',6);
    ctx_ddl.set_attribute('wildcard_pref','STEMMER','AUTO');
    ctx_ddl.set_attribute('wildcard_pref','fuzzy_match','AUTO');
    ctx_ddl.set_attribute('wildcard_pref','prefix_index','TRUE');
    ctx_ddl.set_attribute('wildcard_pref','substring_index','TRUE');
    end;
    begin
    ctx_ddl.create_preference('doc_lexer_perigee', 'BASIC_LEXER');
    ctx_ddl.set_attribute('doc_lexer_perigee', 'printjoins', '_-');
    ctx_ddl.set_attribute('doc_lexer_perigee', 'BASE_LETTER', 'YES');
    ctx_ddl.set_attribute('doc_lexer_perigee','index_themes','yes');
    ctx_ddl.create_preference('english_lexer','basic_lexer');
    ctx_ddl.set_attribute('english_lexer','index_themes','yes');
    ctx_ddl.set_attribute('english_lexer','theme_language','english');
    ctx_ddl.set_attribute('english_lexer', 'printjoins', '_-');
    ctx_ddl.set_attribute('english_lexer', 'BASE_LETTER', 'YES');
    ctx_ddl.create_preference('german_lexer','basic_lexer');
    ctx_ddl.set_attribute('german_lexer','composite','german');
    ctx_ddl.set_attribute('german_lexer','alternate_spelling','GERMAN');
    ctx_ddl.set_attribute('german_lexer','printjoins', '_-');
    ctx_ddl.set_attribute('german_lexer', 'BASE_LETTER', 'YES');
    ctx_ddl.set_attribute('german_lexer','NEW_GERMAN_SPELLING','YES');
    ctx_ddl.set_attribute('german_lexer','OVERRIDE_BASE_LETTER','TRUE');
    ctx_ddl.create_preference('japanese_lexer','JAPANESE_LEXER');
    ctx_ddl.create_preference('global_lexer', 'multi_lexer');
    ctx_ddl.add_sub_lexer('global_lexer','default','doc_lexer_perigee');
    ctx_ddl.add_sub_lexer('global_lexer','german','german_lexer','ger');
    ctx_ddl.add_sub_lexer('global_lexer','japanese','japanese_lexer','jpn');
    ctx_ddl.add_sub_lexer('global_lexer','english','english_lexer','en');
    end;
    begin
         ctx_ddl.create_section_group('axmlgroup', 'AUTO_SECTION_GROUP');
    end;
    drop index ADSOBJ_XOBJFIELDURL force;
    create index ADSOBJ_XOBJFIELDURL on ADSOBJ(OBJFIELDURL) indextype is ctxsys.context
    parameters
    ('datastore ctxsys.file_datastore
    filter ctxsys.inso_filter
    sync (on commit)
    lexer global_lexer
    language column OBJFIELDURLLANG
    charset column OBJFIELDURLCHARSET
    format column OBJFIELDURLFORMAT
    section group axmlgroup
    Wordlist wildcard_pref');
    Oracle created a table named DR$ADSOBJ_XOBJFIELDURL$I which now contains around 25 million records.
    ADSOBJ is the table containing information about our documents; OBJFIELDURL is the field that contains the path to the XML file containing the
    data to index. That file looks like this:
    <?xml version="1.0" encoding="UTF-8" ?>
    <fields>
    <OBJNAME><![CDATA[NomLnk_177527o.jpgp]]></OBJNAME>
    <OBJREM><![CDATA[Z_CARACT_141]]></OBJREM>
    <OBJID>295926o.jpgp</OBJID>
    </fields>
    Can someone tell me how I can make this kind of query
    "select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c WITHIN objname' , 1 ) > 0;"
    run faster?

    Below are the execution plans for the two queries:
    select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c WITHIN objname' , 1 ) > 0
    PLAN_TABLE_OUTPUT
    | Id | Operation                    | Name                | Rows | Bytes | Cost (%CPU) |
    |  0 | SELECT STATEMENT             |                     | 1272 |  119K |       4 (0) |
    |  1 |  TABLE ACCESS BY INDEX ROWID | ADSOBJ              | 1272 |  119K |       4 (0) |
    |  2 |   DOMAIN INDEX               | ADSOBJ_XOBJFIELDURL |      |       |       4 (0) |
    Note
    - 'PLAN_TABLE' is old version
    Executed in 2 seconds
    select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c' , 1 ) > 0
    PLAN_TABLE_OUTPUT
    | Id | Operation                    | Name                | Rows | Bytes | Cost (%CPU) |
    |  0 | SELECT STATEMENT             |                     | 1272 |  119K |       4 (0) |
    |  1 |  TABLE ACCESS BY INDEX ROWID | ADSOBJ              | 1272 |  119K |       4 (0) |
    |  2 |   DOMAIN INDEX               | ADSOBJ_XOBJFIELDURL |      |       |       4 (0) |
    Sorry for the result formatting, I can't get it "easily" readable :(
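    (Purely as a hedged idea to experiment with, not a confirmed fix: since the XML files always use the same tags (OBJNAME, OBJREM, OBJID), a field section group could be tried in place of the AUTO_SECTION_GROUP; field sections are generally cheaper for single-tag WITHIN queries than zone sections. A minimal sketch, reusing the names from the script above:)
    begin
      ctx_ddl.create_section_group('objfieldgroup', 'XML_SECTION_GROUP');
      -- each known tag becomes a field section; visible => true keeps the text
      -- searchable outside of WITHIN queries as well
      ctx_ddl.add_field_section('objfieldgroup', 'objname', 'OBJNAME', visible => true);
      ctx_ddl.add_field_section('objfieldgroup', 'objrem',  'OBJREM',  visible => true);
      ctx_ddl.add_field_section('objfieldgroup', 'objid',   'OBJID',   visible => true);
    end;
    /
    -- then recreate the index with:  section group objfieldgroup  (instead of axmlgroup)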

  • Oracle Text Indexing performance in Unicode database

    Forum folks,
    I'm looking for overall performance thoughts on text indexing within a Unicode database. Part of our internal testing suite involves searching on values using CONTAINS filters over indexed binary and text documents. We've architected these tests so that they can be run in a suite or on their own; thus the data is loaded at the beginning of each test, and then the text indexes are created and populated prior to running any of the actual testing.
    We have the same tests running fine on non-Unicode instances of Oracle 11gR2, but when we run them against a Unicode instance, we almost always see timing issues where the indexes haven't finished populating, so our tests report only n hits when we are expecting n + 50 or in some cases n + 150 records to be returned.
    We are just looking for some general information regarding text indexing performance in a Unicode database. Will we need to add sleep time to the tests to allow the indexes to populate? How much time? We would rather not have to create different tests for Unicode vs non-Unicode, but perhaps that is necessary.
    Any insight you could provide would be most appreciated.
    Thanks in advance,
    Dan

    Roger,
    Thanks much for your quick reply...
    When you talk about Unicode, do you mean AL32UTF8?
    --> Yes, this is the Unicode charset we are using.
    Is the data the same in both cases, or are you indexing simple 7-bit ascii data in the one database, and foreign text (maybe Chinese?) in the UTF8 database?
    With the same data, there should be virtually no difference in performance due to the AL32UTF8 database character set.
    --> We have a data generation tool we utilize. For non-unicode data, we generate using all 256 characters in the ISO-8859-1 set. With our Unicode data for clobs, we generate using only the first 1,000 characters of UTF8 by setting up an array of code points...0 - 1000. For Blobs, we have sets of sample word documents and pdfs that are inserted, then indexed.
    I'm not sure I understand your testing methodology. Do you run ( load-data, index-data, run-queries ) sequentially?
    --> That is correct. We utilize the ctx_ddl package to populate the pending table and then to sync the index....The following is an example of the ddl we generate to create and populate the index:
    create index "DBMEARSPARK_ORA80"."RESRESUMEDOC" on "DBMEARSPARK_ORA80"."RESUME" ("RESUMEDOC") indextype is CTXSYS.CONTEXT parameters(' nopopulate sync (every "SYSTIMESTAMP + INTERVAL ''30'' MINUTE" PARALLEL 2) filter ctxsys.auto_filter ') PARALLEL 2;
    execute ctx_ddl.populate_pending('"DBMEARSPARK_ORA80"."RESRESUMEDOC"',null);
    execute ctx_ddl.sync_index('"DBMEARSPARK_ORA80"."RESRESUMEDOC"',null,null,2);
    If so, there should be no way that the indexes can be half-created. If not, don't you have some check to see if the index creation has finished before running the query test?
    --> Excellent question....is there such a check? I have not found a way to do that yet...
    Were you just lucky with the "non-unicode" tests that the indexing just happened to have always finished by the time you ran the queries?
    --> This is quite possible as well. If there is a check to see if the index is ready, then we could add that into our infrastructure.
    --> Thanks, again, for responding so quickly.
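    (Regarding the question of whether there is such a check: one approach that should work, sketched here as an assumption to verify rather than a guaranteed recipe, is to poll the CTX_USER_PENDING view; rows for an index disappear from it once SYNC_INDEX has processed them.)
    select count(*)
    from   ctx_user_pending
    where  pnd_index_name = 'RESRESUMEDOC';
    -- 0 pending rows (and nothing new in ctx_user_index_errors) suggests the
    -- sync has caught up and the query tests can safely be run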

  • Performance issue with Oracle Text index

    Hi Experts,
    We are on Oracle 11.2.0.3 on Solaris 10. I have implemented Oracle Text in our environment and I am facing a strange performance issue.
    One SQL statement having a CONTAINS clause is taking forever - more than 20 minutes and it still does not complete. This SQL has a CONTAINS clause, an EXISTS clause and a NOT EXISTS clause.
    Now if I remove the EXISTS and NOT EXISTS clauses, it completes fast, but with those two clauses it just takes forever. It is late night so I am not able to post the table and SQL query details and will do so tomorrow, but based on this general description, are there any pointers for me to review?
    --SQL query that runs fine:
    SELECT
        U.CLNT_OID, U.USR_OID, S.MAILADDR
    FROM
        access_usr U
        INNER JOIN access_sia S
            ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
        WHERE U.CLNT_OID = 'ABCX32S'
        AND CONTAINS(LAST_NAME , 'TO%' ) >0
    --SQL query that hangs forever:
    SELECT
        U.CLNT_OID, U.USR_OID, S.MAILADDR
    FROM
        access_usr U
        INNER JOIN access_sia S
            ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
        WHERE U.CLNT_OID = 'ABCX32S'
        AND CONTAINS(LAST_NAME , 'TO%' ) >0
    and exists (--one clause here with a few table joins)
    and not exists (--one clause here with a few table joins);
    --Now another strange thing I found: if instead of 'TO%' in this SQL I use 'ZZ%' or 'L1%', it works fast, but for 'TO%' it goes slow with those two EXISTS/NOT EXISTS clauses!
    I will be most thankful for the inputs.
    OrauserN

    Hi Barbara,
    First of all, thanks a lot for reviewing the issue.
    Unfortunately, making the change to an empty stoplist did not work out. Today I am copying here the entire SQL that has this issue, and I will be most thankful for more insights/pointers on what can be done.
    Here is the entire sql:
    SELECT U.CLNT_OID,
           U.USR_OID,
           S.EMAILADDRESS,
           U.FIRST_NAME,
           U.LAST_NAME,
           S.JOBCODE,
           S.LOCATION,
           S.DEPARTMENT,
           S.ASSOCIATEID,
           S.ENTERPRISECOMPANYCODE,
           S.EMPLOYEEID,
           S.PAYGROUP,
           S.PRODUCTLOCALE
      FROM    ACCESS_USR U
           INNER JOIN
              ACCESS_SIA S
           ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
    WHERE     U.CLNT_OID = 'G39NY3D25942TXDA'
           AND EXISTS
                  (SELECT 1
                     FROM ACCESS_USR_GROUP_XREF UGX
                          INNER JOIN ACCESS_GROUP RELG
                             ON     RELG.CLNT_OID = UGX.CLNT_OID
                                AND RELG.GROUP_OID = UGX.GROUP_OID
                          INNER JOIN ACCESS_GROUP G
                             ON     G.CLNT_OID = RELG.CLNT_OID
                                AND G.GROUP_TYPE_OID = RELG.GROUP_TYPE_OID
                    WHERE     UGX.CLNT_OID = U.CLNT_OID
                          AND UGX.USR_OID = U.USR_OID
                          AND G.GROUP_OID = 920512943
                          AND UGX.INCLUDED = 1)
           AND NOT EXISTS
                      (SELECT 1
                         FROM    ACCESS_USR_GROUP_XREF UGX
                              INNER JOIN
                                 ACCESS_GROUP G
                              ON     G.CLNT_OID = UGX.CLNT_OID
                                 AND G.GROUP_OID = UGX.GROUP_OID
                        WHERE     UGX.CLNT_OID = U.CLNT_OID
                              AND UGX.USR_OID = U.USR_OID
                              AND G.GROUP_OID = 920512943
                              AND UGX.INCLUDED = 1)
           AND CONTAINS (U.LAST_NAME, 'Bon%') > 0;
    Like I said before, if the EXISTS and NOT EXISTS clauses are removed it runs in under a second. But with those EXISTS and NOT EXISTS clauses it takes anywhere from 25 minutes to more than one hour.
    Note also that it was not TO% but Bon% in the CONTAINS clause that is giving the issue - sorry, that was wrong on my part.
    Also please see below the Oracle Text indexes defined on the table ACCESS_USR:
    --definition of preferences used in the index:
    SET SERVEROUTPUT ON size unlimited
    WHENEVER SQLERROR EXIT SQL.SQLCODE
    DECLARE
       v_err       VARCHAR2 (1000);
       v_sqlcode   NUMBER;
       v_count     NUMBER;
    BEGIN
       ctxsys.ctx_ddl.create_preference ('cust_lexer', 'BASIC_LEXER');
       ctxsys.ctx_ddl.set_attribute ('cust_lexer', 'base_letter', 'YES'); -- removes diacritics
    EXCEPTION
       WHEN OTHERS
       THEN
          v_err := SQLERRM;
          v_sqlcode := SQLCODE;
          v_count := INSTR (v_err, 'DRG-10701');
          IF v_count > 0
          THEN
             DBMS_OUTPUT.put_line (
                'The required preference named CUST_LEXER with BASIC LEXER is already set up');
          ELSE
             RAISE;
          END IF;
    END;
    DECLARE
       v_err       VARCHAR2 (1000);
       v_sqlcode   NUMBER;
       v_count     NUMBER;
    BEGIN
       ctxsys.ctx_ddl.create_preference ('cust_wl', 'BASIC_WORDLIST');
       ctxsys.ctx_ddl.set_attribute ('cust_wl', 'SUBSTRING_INDEX', 'true'); -- to improve performance
    EXCEPTION
       WHEN OTHERS
       THEN
          v_err := SQLERRM;
          v_sqlcode := SQLCODE;
          v_count := INSTR (v_err, 'DRG-10701');
          IF v_count > 0
          THEN
             DBMS_OUTPUT.put_line (
                'The required preference named CUST_WL with BASIC WORDLIST is already set up');
          ELSE
             RAISE;
          END IF;
    END;
    --now below is the code of the index:
    CREATE INDEX ACCESS_USR_IDX3 ON ACCESS_USR
    (FIRST_NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
    CREATE INDEX ACCESS_USR_IDX4 ON ACCESS_USR
    (LAST_NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
    The strange thing is that, like I said, if I remove the EXISTS clause the query returns very fast. Also, if I modify the query to use only the NOT EXISTS clause and remove the EXISTS clause, it returns in less than one second; and if I remove the EXISTS clause and use only the NOT EXISTS clause, it returns in less than 4 seconds. But with both clauses it runs forever!
    When I tried to get dbms_xplan.display_cursor to show the query plan (for the case with both the EXISTS and NOT EXISTS clauses in the query), it said that the previous statement's SQL ID was 0 or something like that, so I was not able to see the plan. I will keep trying to get this plan (it takes 25 minutes to one hour each time, but I will get this info soon). Again, any pointers are most helpful.
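    (A hedged suggestion for that last point, assuming the slow statement is still executing in another session: instead of relying on display_cursor picking up the 'previous statement', its SQL_ID can be looked up in V$SESSION while it runs and passed in explicitly.)
    -- find the sql_id and child number of the running statement
    select sid, sql_id, sql_child_number
    from   v$session
    where  status = 'ACTIVE'
    and    username = 'YOUR_APP_USER';   -- hypothetical schema name
    -- then, from any session:
    select * from table(dbms_xplan.display_cursor('&sql_id', &child_no));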
    Regards
    OrauserN

  • Non jdriver poor performance with oracle cluster

    Hi,
    we decided to implement batch input and went from Weblogic Jdriver to Oracle Thin 9.2.0.6.
    Our system is a Weblogic 6.1 cluster and an Oracle 8.1.7 cluster.
    Problem is .. with the new Oracle drivers our actions on the webapp take twice as long as with the Jdriver. We also tried OCI .. same problem. We switched to a single Oracle 8.1.7 database .. and it worked again with all thick or thin drivers.
    So .. the new Oracle drivers with an Oracle cluster result in bad performance, but with the Jdriver it works perfectly. Does somebody see a connection?
    I mean .. it works with the Jdriver .. so it can't be the database, huh? But we really tried every JDBC possibility! In fact .. we need batch input. Advice is very appreciated =].
    Thanx for help!!

    Thanks for the quick replies. I forgot to mention .. we also tried 10g v10.1.0.3 from Instant Client yesterday.
    I have to agree with Joe. It was really fast on the single-machine database .. but we had the same poor performance with the clustered DB. It is frustrating, especially if you consider that the Jdriver (which works perfectly in every combination) is 4 years old!
    Ok .. we got this scenario, with our appPage CustomerOverview (intensiv db-loading) (sorry.. no real profiling, time is taken with pc watch) (Oracle is 8.1.7 OPS patch level1) ...
    WL6.1_Cluster + Jdriver6.1 + DB_cluster => 4sec
    WL6.1_Cluster + Jdriver6.1 + DB_single => 4sec
    WL6.1_Cluster + Ora8.1.7 OCI + DB_single => 4sec
    WL6.1_Cluster + Ora8.1.7 OCI + DB_cluster => 8-10sec
    WL6.1_Cluster + Ora9.2.0.5/6 thin + DB_single => 4sec
    WL6.1_Cluster + Ora9.2.0.5/6 thin + DB_cluster => 8sec
    WL6.1_Cluster + Ora10.1.0.3 thin + DB_single => 2-4sec (awesome fast!!)
    WL6.1_Cluster + Ora10.1.0.3 thin + DB_cluster => 6-8sec
    Customers are giving us a hard time because they cannot mass-order via batch input. Any suggestions on how to solve this issue are very much appreciated.
    TIA
    Markus Schaeffer wrote:
    > Hi,
    > we decided to implement batch input and went from Weblogic Jdriver to Oracle Thin 9.2.0.6.
    > Our system is a Weblogic 6.1 cluster and an Oracle 8.1.7 cluster.
    > Problem is .. with the new Oracle drivers our actions on the webapp take twice as long
    > as with Jdriver. We also tried OCI .. same problem. We switched to a single Oracle 8.1.7
    > database .. and it worked again with all thick or thin drivers.
    > So .. new Oracle drivers with an Oracle cluster result in bad performance, but with
    > Jdriver it works perfectly. Does somebody see a connection?
    > I mean .. it works with Jdriver .. so it can't be the database, huh? But we really
    > tried with every JDBC possibility!
    > Thanx for help!!
    Odd. The jDriver is OCI-based, so it's something else. I would try the latest
    10g driver if it will work with your DBMS version. It's much faster than any 9.X
    thin driver.
    Joe

  • How to improve Oracle Veridata compare pair performance with tables that have big (30-40 MB) CLOB/BLOB fields?

    How to improve Oracle Veridata compare pair performance with tables that have big (30-40 MB) CLOB/BLOB fields?

    Can you use insert .. returning .. so you do not have to select the empty_clob back out?
    [I have a similar problem, but I do not know the primary key to select on; I am really looking for an atomic insert-and-fill-CLOB mechanism. Someone said you can create a CLOB, fill it and use that in the insert, but I have not seen an example yet.]
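    (A small sketch of the insert .. returning .. pattern mentioned above; the table and column names are invented purely for illustration.)
    declare
      l_doc clob;
    begin
      -- insert the row with an empty LOB locator and get the locator back in one
      -- step, then fill it, all inside the same transaction
      insert into doc_store (doc_id, doc_body)        -- hypothetical table
      values (1001, empty_clob())
      returning doc_body into l_doc;
      dbms_lob.writeappend(l_doc, 11, 'hello world');
      commit;
    end;
    /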

  • Oracle Database Performance With Semantic

    Hello,
    Is there a Developer's Guide for Semantic that specifically talks about database performance with the Semantic network/tables/indexes? We are having issues with performance the larger the semantic network becomes.
    Any help or pointers would be appreciated.
    Thanks
    -MichaelB

    Matt,
    Thanks for your response. Here are the answers to the questions about our setup/environment.
    1) Are you querying multiple models and/or a model + entailment? If so, are you using a virtual model and using the ALLOW_DUP=T query option?
    A single model, no entailments. We attempted to use multiple models and a virtual model (with ALLOW_DUP=T); however, the UNION ALL in the explain plan made the query duration unacceptable.
    2) Are you using named graphs?
    No named graphs.
    3) How many triples are you querying?
    Approximately 85 million.
    4) What semantic network and/or datatype indexes have been created?
    We have PCSGM, PSCGM, PSCM, PCSM, CPSM, and SCM.
    5) What is your hardware setup (number and type of disks, RAM, processor, etc.)?
    We are running the 11.2.0.3 database on a Sun Solaris T2000, we have ASM managing our disks from RAID5, I believe currently we have two Disk Groups with the indexes in one and the data tables in the other. We have 32 GB of memory, and 32 CPUs. However, it is not the only thing running on the machine.
    6) How much memory have you allocated to the database (pga, sga, memory_target, etc.)?
    We have the memory_target set to 9GB, the db_cache_size set to 2GB, and the db_keep_cache_size set to 4.5GB. `pga_aggregate_target` is set to 0 (auto), as is `sga_target`.
    (Since my initial request, we pinned the RDF_VALUE$ (~2.5 GB) and C_PK_VID (~1.7 GB) objects in the KEEP buffer cache, which drastically improved performance - see the sketch at the end of this message.)
    7) Are you using parallel query execution?
    Yes, some of the more complex queries we run with the parallel hint set to 8.
    8) Have you tried dynamic sampling?
    Yes. We have ODS set to 3 for our more complex queries, we have not altered this much to see if there is a performance gained by changing this value.
    Thanks again,
    -Michael
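    (For readers wondering how the KEEP-cache pinning mentioned in answer 6 is typically done, a hedged sketch follows; the object names are the ones mentioned above, but check ownership, object types and sizes in your own instance before copying it.)
    -- size the KEEP pool large enough for both objects first
    alter system set db_keep_cache_size = 5G scope=both;
    -- then direct the segments to the KEEP buffer pool
    -- (RDF_VALUE$ is assumed to be a table and C_PK_VID its primary-key index, both owned by MDSYS)
    alter table MDSYS.RDF_VALUE$ storage (buffer_pool keep);
    alter index MDSYS.C_PK_VID storage (buffer_pool keep);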

  • Bad INSERT performance when using GUIDs for indexes

    Hi,
    we use Ora 9.2.0.6 db on Win XP Pro. The application (DOT.NET v1.1) is using ODP.NET. All PKs of the tables are GUIDs represented in Oracle as RAW(16) columns.
    When testing with mass data we see more and more a problem with bad INSERT performance on some tables that contain many rows (~10M). Those tables have a RAW(16) PK and an additional non-unique index which is also on a RAW(16) column (both are standard B*tree indexes). A PerfStat report tells us that there is much activity on the index tablespace.
    When I analyze the related table and its indexes I see a very very high clustering factor.
    Is there a way how to improve the insert performance in that case? Use another type of index? Generally avoid indexed RAW columns?
    Please help.
    Daniel
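    (A quick way to quantify the clustering-factor observation, sketched here with a hypothetical table name: compare the index's clustering factor with the table's block and row counts; a value close to NUM_ROWS rather than BLOCKS means the index order and the physical row order are essentially uncorrelated, which is what random GUID keys tend to produce.)
    select i.index_name, i.clustering_factor, t.blocks, t.num_rows
    from   user_indexes i
      join user_tables t on t.table_name = i.table_name
    where  i.table_name = 'YOUR_TABLE';   -- hypothetical name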

    Hi
    After my last tests I conclude at the followings:
    The query returns 1-30 records
    Test 1: Using Form Builder
    -     Execution time 7-8 seconds
    Test 2: Using Jdeveloper/Toplink/EJB 3.0/ADF and Oracle AS 10.1.3.0
    -     Execution time 25-27 seconds
    Test 3: Using JDBC/ADF and Oracle AS 10.1.3.0
    - Execution time 17-18 seconds
    When I use:
    session.setLogLevel(SessionLog.FINE) and
    session.setProfiler(new PerformanceProfiler())
    I don’t see any improvement in the execution time of the query.
    Thank you
    Thanos

  • Improve performance with union all

    Hello there,
    Oracle Database 11g Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    SQL> show parameter optimizer
    ORA-00942: table or view does not exist
    I have the following query using the following input variables:
    - id
    - startdate
    - enddate
    The query has the following format
    - assume that the number of columns are the same
    - t1 != t3 and t2 != t4
    select ct.*
    from  (
      select t1.*
      from   tabel1 t1
        join tabel2 t2
          on t2.key = t1.key
      union all
      select t3.*
      from   tabel3 t3
        join tabel4 t4
          on t4.key = t3.key
    ) ct
    where ct.id = :id
      and ct.date >= :startdate
      and ct.date < :enddate
    order by ct.date
    It is performing really slowly; after the first read it performs fast.
    I tried the following thing, which was actually even slower!
    with t1c as (
      select t1.*
      from   tabel1 t1
        join tabel2 t2
          on t2.key = t1.key
      where t1.id = :id
        and t1.date >= :startdate
        and t1.date < :enddate
    ),
    t2c as (
      select t3.*
      from   tabel3 t3
        join tabel4 t4
          on t4.key = t3.key
      where t3.id = :id
        and t3.date >= :startdate
        and t3.date < :enddate
    )
    select ct.*
    from  (
      select *
      from   t1c
      union all
      select *
      from   t2c
    ) ct
    order by ct.date
    So in words, I have a 'union all' construction reading from different tables with matching columns 'id' and 'date'.
    How can I improve this? Can it be improved? If you do not know the answer but have a suggestion, I will be happy as well!!!
    Thanks in advance!
    Kind regards,
    Metroickha

    >
    So in words, I have a 'union all' construction reading from different tables with matching columns 'id' and 'date'.
    How can I improve this? Can it be improved? If you do not know the answer but have a suggestion, I will be happy as well!!!
    >
    If you want to improve on what Oracle is doing you first need to know 'what Oracle is doing'.
    Post the execution plans for the query that show what Oracle is doing.
    Also post the DDL for the tables and indexes and the record counts for the tables and ID/DATE predicates.
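    (For reference, a minimal way to capture the plan the reply asks for; substitute the real union-all query for the placeholder.)
    explain plan for
    select * from dual;   -- put your slow union-all query here
    select * from table(dbms_xplan.display);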

  • JDeveloper performance with XSL Mapper

    Hi All,
    Not sure if this is the right place to ask this question since it has to do with JDeveloper performance with SOA and not ADF.
    In the XSL transformation mapper tool, whenever I do an auto-map between two large schemas, my JDeveloper memory usage climbs to about 1.3 GB and then both the JRE and JDeveloper hang, although the transformation finishes. The problem is that sometimes it hangs before you can save. It almost seems like a memory leak.
    NOTE: I am not getting any out-of-memory error since I already increased the memory.
    Any help to optimize and solve this problem will be very helpful.
    Thanks!

    Check this:
    http://docs.oracle.com/cd/E11036_01/doc.1013/e10295/xslt_mpr.htm
    You need the SOA composite application extension for this.
    If you don't want a visual transformation, you can use the JDeveloper project properties -> Run/Debug -> edit run configuration -> XSLT.

  • Performance with unspecific where clause

    Hi gurus,
    at the moment I have a SQL statement on a view with an unspecific where clause like
    select * from <view> where <textfield> like '%whatIsearch%'
    The text field is not among the key fields which are used to create the view. An index on <textfield> does not help, because the where clause starts with a %.
    Other databases like Oracle finish the statement within seconds; MaxDB needs minutes.
    Is there a possibility to speed up the statement in MaxDB? (Besides telling the users to use better-qualified statements.)
    Why is Oracle that fast?
    Thanks for you help.
    Best regards
    Christian G

    > Other databases like Oracle finish the statement within seconds; MaxDB needs minutes.
    > Is there a possibility to speed up the statement in MaxDB? (Besides telling the users to use better-qualified statements.)
    > Why is Oracle that fast?
    Hi Christian,
    In that case Oracle can take advantage of being able to brute-force read many blocks at once (aka multiblock read).
    If we assume that there is an index available in the Oracle database, then Oracle will likely decide to read all blocks of that index into the cache and look for matches there. This is called a Fast Full Scan.
    It's not a very efficient method to address specific rows, but for this requirement it works well.
    Anyhow, you should be aware that this way of evaluating rows does not scale very well - in fact it gets more expensive with every block the index grows by.
    MaxDB cannot easily read all blocks in a row, because the pages are spread over all data volumes. This way of storing data eliminates the need for reorganisations and evens out I/O traffic, but it comes at the price of being less performant when people use such inefficient predicates.
    Because of this, and because of the way indexes work in MaxDB (primary keys instead of rowids), MaxDB only considers an index access for LIKE conditions that start with a '%' when the query can be answered by accessing the index alone (index-only access).
    What you may try to improve the situation is to activate the experimental read-ahead or prefetch feature, which is currently available in MaxDB 7.6 only (not in 7.5 or in 7.7).
    By setting READAHEAD_TABLE_THRESHOLD to a value > 0, say 128, MaxDB can choose to
    perform table scans (no index scans!) in parallel with multiple server tasks, for all table scans that are expected to exceed the threshold (the unit here is pages, as visible in the execution plan of your statement).
    That way the user task running the query can work on checking the data in the pages, while the server tasks load the pages into the cache.
    Another approach would be to have the DB Cache big enough so that most of the table would be found in the cache.
    regards,
    Lars

  • Performance problem on function-based index

    Hi guys,
    I am having performance problems with the addition of new function-based indexes.
    alter session set nls_comp='ANSI';
    alter session set nls_sort='BINARY_CI';
    * have to run these because of the case-insensitivity requirements
    I have a view. for ex:
    create or replace view view1
    as
    select * from emp1,user
    where emp1.empno=user.empno
    union
    select * from emp2,user
    where emp2.empno=user.empno
    union
    select * from emp3,user
    where emp3.empno=user.empno and so on
    When I run this, it works with a full table scan. Then I created a function-based index:
    create index user_ix on
    user(nlssort(empno,'NLS_SORT=BINARY_CI'));
    analyze index user_ix compute statistics;
    analyze table user compute statistics;
    the view hangs, but when I run the individual SELECT statements it works.
    Do you guys have any idea what's going on? Any advice is greatly appreciated.
    Thanks.

    LC is absolutely right. Brain cramp on my part.
    On the other hand, I can't seem to coerce Oracle to apply a to_binary_double conversion as part of an implicit conversion.
    var bin_dbl binary_double;
    select to_binary_double(14) into :bin_dbl from dual;
    SCOTT @ nx102 JCAVE9420> select * from emp where empno = :bin_dbl;
    no rows selected
    Elapsed: 00:00:00.14
    Execution Plan
    Plan hash value: 2949544139
    | Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |        |     1 |    39 |     1   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| EMP    |     1 |    39 |     1   (0)| 00:00:01 |
    |*  2 |   INDEX UNIQUE SCAN         | PK_EMP |     1 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
        2 - access("EMPNO"=TO_NUMBER(:BIN_DBL))
    I'd expect that Oracle would try to convert the binary double to a number, not the other way around.
    Justin
