Suggestion: Oracle Text CONTEXT index on one or more columns?

Hi,
I'm implementing Oracle Text with a CONTEXT index and would like to ask for a performance suggestion.
I have a table of articles with the columns TITLE, SUBTITLE and BODY.
Is it better, from a performance point of view, to copy all three columns into one dummy column (named something like FULLTEXT), put an index on that single column,
and then use CONTAINS(FULLTEXT,'...')>0
Or is it much the same for Oracle if I put an index on each of the three columns and then call:
CONTAINS(TITLE,'...')>0 OR CONTAINS(SUBTITLE,'...')>0 OR CONTAINS(BODY,'...')>0
I don't actually care whether the match is in TITLE, SUBTITLE or BODY.
So if I copy the data into a FULLTEXT column I end up with duplicate data in each article row, but if I create an index per column, then Oracle has twice as much to index, optimize and search. Am I right?
The table has 1.8 million records.
Thank you.
Kris

mackrispi wrote:
Is it better, from a performance point of view, to copy all three columns into one dummy column (named something like FULLTEXT), put an index on that single column,
and then use CONTAINS(FULLTEXT,'...')>0
What version of Oracle are you on? If 11, then you could use a virtual column to do this; otherwise you'd have to write code to maintain the column, which can get messy.
mackrispi wrote:
Or is it much the same for Oracle if I put an index on each of the three columns and then call:
CONTAINS(TITLE,'...')>0 OR CONTAINS(SUBTITLE,'...')>0 OR CONTAINS(BODY,'...')>0
Benchmark it and find out :)
Another option would be something like this.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9455353124561
Were I you, I would try out those three approaches, see which meets your performance requirements, and weigh that against ease of implementation and administration.
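For reference, one way to get a single index (and a single CONTAINS call) across all three columns without maintaining a physical copy of the data is a MULTI_COLUMN_DATASTORE. This is only a minimal sketch; the preference, index and table names are assumptions, and it is not tuned for a 1.8 million row table:

begin
  ctx_ddl.create_preference('article_ds', 'MULTI_COLUMN_DATASTORE');
  ctx_ddl.set_attribute('article_ds', 'COLUMNS', 'title, subtitle, body');
end;
/
-- index one column; the datastore feeds it the concatenation of all three
create index articles_txt_idx on articles (title)
  indextype is ctxsys.context
  parameters ('datastore article_ds');
-- one CONTAINS now searches TITLE, SUBTITLE and BODY together
select id, title from articles where contains (title, 'oracle') > 0;

One caveat worth checking in your own tests: with this approach, an update that touches only the columns other than the one the index is created on does not by itself mark the row for sync, so the maintenance strategy still needs thought.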

Similar Messages

  • Oracle Text CONTEXT index so big, why?

    Hi,
    A year ago I created a CONTEXT index for a table that had 2 million rows, and the index was around 2.8 million rows.
    Today we have 2.9 million rows, but the CONTEXT index has 95 million rows??
    Last weekend I ran a FULL index optimize; it took 28 hours and was still not finished, so this weekend I dropped the index and re-created
    it. That took 6 hours.
    Now I have started CTX_REPORT.INDEX_STATS to see what it looks like, because I expected the 95 million to come down to something like 3.5 million at most.
    Can anyone please tell me what I should check to see why this happened, i.e. why the last 900,000 records created 91 million rows in the index?
    If something were wrong, why would the last 900,000 rows create something that the previous 2 million rows did not?
    Here is the index creation:
    CREATE INDEX ART_IDX ON MS_ARTICLE(ORATEXT) INDEXTYPE IS CTXSYS.CONTEXT  FILTER BY ID,ART_DATE,MEDIA_CODE ORDER BY
    ART_DATE DESC PARAMETERS (' LEXER ART_LEX STOPLIST CTXSYS.EMPTY_STOPLIST sync(ON COMMIT) DATASTORE DS_ART');
    If anyone needs the definition of the lexer or the datastore, please tell me what I have to do to get those definitions.
    Thank you.
    Kris

    Hi,
    OK, there must be something I did wrong, because after running INDEX_STATS I got these results:
    ===========================================
                        STATISTICS FOR "PRESCLIP"."ART_IDX"
    ===========================================
    indexed documents:                                              2,876,228
    allocated docids:                                               2,876,228
    $I rows:                                                       96,110,561
                                 TOKEN STATISTICS
    unique tokens:                                                  5,584,188
    average $I rows per token:                                          17.21
    tokens with most $I rows:
      JE (0:TEXT)                                                      11,650
      V (0:TEXT)                                                       11,183
      IN (0:TEXT)                                                      10,085
    I expected the average rows per token to be about 1 after dropping and re-creating the index, so what else must I do to get the 96 million down to around 5.5 million?
    Thank you.
    Kris
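    For anyone reproducing these numbers, CTX_REPORT.INDEX_STATS writes its report into a CLOB; a minimal sketch of capturing it (essentially the pattern from the Oracle Text documentation; the output table name here is made up):

    create table ctx_output (result clob);
    declare
      v_report clob := null;
    begin
      -- generate the statistics report for the index named in the post above
      ctx_report.index_stats ('ART_IDX', v_report);
      insert into ctx_output values (v_report);
      commit;
      dbms_lob.freetemporary (v_report);
    end;
    /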

  • Oracle Text Context index keeps growing. Optimize seems not to be working

    Hi,
    In my application I needed to search through many VARCHAR columns from different tables.
    So I created a materialized view in which I concatenate those columns; since together they exceed 4000 characters, I merged them by concatenating the columns as CLOBs: TO_CLOB(column1) || TO_CLOB(column2) || ... || TO_CLOB(columnN).
    The query is complex, so the view uses a complete refresh on demand. We refresh it every 2 minutes.
    The CONTEXT index is created with the SYNC (ON COMMIT) parameter.
    The index is therefore synchronized every two minutes.
    But when we run the index optimization it does not defragment the index, so it keeps growing.
    Any idea?
    Thanks, and sorry for my poor English.
    Edited by: detryo on 14-mar-2011 11:06

    What are you using to determine that the index is fragmented? Can you post a reproducible test case? Please see my test of what you described below, showing that the optimization does defragment the index.
    SCOTT@orcl_11gR2> -- table:
    SCOTT@orcl_11gR2> create table test_tab
      2    (col1  varchar2 (10),
      3       col2  varchar2 (10))
      4  /
    Table created.
    SCOTT@orcl_11gR2> -- materialized view:
    SCOTT@orcl_11gR2> create materialized view test_mv3
      2  as
      3  select to_clob (col1) || to_clob (col2) clob_col
      4  from   test_tab
      5  /
    Materialized view created.
    SCOTT@orcl_11gR2> -- index with sync(on commit):
    SCOTT@orcl_11gR2> create index test_idx
      2  on test_mv3 (clob_col)
      3  indextype is ctxsys.context
      4  parameters ('sync (on commit)')
      5  /
    Index created.
    SCOTT@orcl_11gR2> -- inserts, commits, refreshes:
    SCOTT@orcl_11gR2> insert into test_tab values ('a', 'b')
      2  /
    1 row created.
    SCOTT@orcl_11gR2> commit
      2  /
    Commit complete.
    SCOTT@orcl_11gR2> exec dbms_mview.refresh ('TEST_MV3')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> insert into test_tab values ('c a', 'b d')
      2  /
    1 row created.
    SCOTT@orcl_11gR2> commit
      2  /
    Commit complete.
    SCOTT@orcl_11gR2> exec dbms_mview.refresh ('TEST_MV3')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- query works:
    SCOTT@orcl_11gR2> select * from test_mv3
      2  where  contains (clob_col, 'ab') > 0
      3  /
    CLOB_COL
    ab
    c ab d
    2 rows selected.
    SCOTT@orcl_11gR2> -- fragmented index:
    SCOTT@orcl_11gR2> column token_text format a15
    SCOTT@orcl_11gR2> select token_text, token_first, token_last, token_count
      2  from   dr$test_idx$i
      3  /
    TOKEN_TEXT      TOKEN_FIRST TOKEN_LAST TOKEN_COUNT
    AB                        1          1           1
    AB                        2          3           2
    C                         3          3           1
    3 rows selected.
    SCOTT@orcl_11gR2> -- optimization:
    SCOTT@orcl_11gR2> exec ctx_ddl.optimize_index ('TEST_IDX', 'REBUILD')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- defragmented index after optimization:
    SCOTT@orcl_11gR2> select token_text, token_first, token_last, token_count
      2  from   dr$test_idx$i
      3  /
    TOKEN_TEXT      TOKEN_FIRST TOKEN_LAST TOKEN_COUNT
    AB                        2          3           2
    C                         3          3           1
    2 rows selected.
    SCOTT@orcl_11gR2>
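    As a rough illustration of keeping such an index in shape between manual optimizations, the optimize can be scheduled; a minimal sketch only (the job name and schedule below are assumptions, not part of the reply above):

    begin
      dbms_scheduler.create_job (
        job_name        => 'optimize_test_idx',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'begin ctx_ddl.optimize_index (''TEST_IDX'', ''FULL''); end;',
        start_date      => systimestamp,
        repeat_interval => 'freq=daily; byhour=2',  -- off-peak; adjust as needed
        enabled         => true);
    end;
    /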

  • Rikaichan (add-on) doesn't work at my office. It gives this error "Javascript application: the index for one or more dictionaries needs to be created. This may take a while on slower computers". And then the PC freezes. What should I do?

    I'm using Firefox 3.6.17, and have installed Rikaichan 2.02. Once I've installed the Japanese-English dictionary, I then click on Rikaichan to get it started, and it always gives a "Javascript Application" error:
    '''"The index for one or more dictionaries needs to be created. This may take a while on slower computers"'''
    Rikaichan always works fine on my laptop, but I just can't seem to install it properly at work. Is there some sort of office security screwing it up?

    venicespent,
    Boot into your Mavericks Recovery partition by holding down the Command and R keys whilst booting. You'll be taken to the Recovery utilities screen.
    Click on the Disk Utility item to open Disk Utility. You should see your boot drive (usually named "Macintosh HD" unless you've changed it) in the column on the left of the screen. Select your boot partition and click on the "Verify" button. Let your Mac do its thing: if you get green text telling you that everything seems to be OK, then your disk should not be damaged. If, however, you get red text telling you that the disk needs to be repaired, click on the "Repair" button. If the disk can be repaired, you should be good to go. If the disk cannot be repaired then the drive is damaged.
    But before you do anything, make certain that you have backups!!!
    Clinton
    MacBook Pro (15-inch Late 2011), OS Mavericks 10.9.4, 16GB RAM, 960GB SSD, 27” Apple Thunderbolt Display

  • SSIS Error Text was truncated or one or more characters had no match in the target code page

    I have the same issue, or something close.
    Except I have one field (27) that gets a truncation error.
    Error:
    Data conversion failed. The data conversion for column "Column 27" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
    The "output column "Column 27" (91)" failed because truncation occurred, and the truncation row disposition on "output column "Column 27" (91)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
    The data looks like the following (the field throwing the error was highlighted in red in the original post):
    00000412,
    0000000011411001,
    0273508793,
    01,
    "RUTH           ",
    "EDWARDS             ",
    19500415,20080401,
    "N",
    04488013,
    "1",
    "F",
    365094,
    20080401,
    000472162716,
    "1447203880    ",
    43995202341210,
    00120.000,
    0010,
    00008.26,
    00004.96,
    000.00,
    00002.70,
    00007.66,
    0,
    "PROMETH/COD  SYP 6.25-10 ",
    "Y",
    "Promethazine w/ Codeine Syrup 6.25-10 MG/5ML               ",
    0000,
    "001C",
    610020,"WELLP1537",
    "O",
    "N",
    00,
    "D",
    "S",
    "G",
    "ID01V012008782",
    "TOM AHL CHRYSLER              ",
    "M",
    "M",
    "PBD $20/10+40%/20%            ",
    00008.26,
    "1184641367"

    I have found four things that I always check when I run into this problem.  I have yet to find a time when one of these didn't work (this specifically helps when reading data from flat files, but I suppose most of the four would apply to any source).  Check out my blog post, content repeated below:
    1.  Make sure to properly configure the "Flat File Source".  When setting the connection properties to the flat file, take time to click on the Advanced tab and ensure that the "Name", "DataType", and "OutputColumnWidth" properties are set properly.  I have found that if this is set up correctly when the initial connection is created, some if not all of the data type issues and errors can be alleviated.  The "Flat File Connection Manager Editor" can be accessed while initially creating the connection or by double clicking on a flat file connection within the "Connection Managers" for connections that have previously been created.
    2.  Depending on the order and steps that were used to create the connection to the flat file, sometimes the data types need to be updated in an additional area.  This can be found by right clicking on the "Flat File Source" and selecting "Show Advanced Editor...".  Once in the advanced editor, click on the "Input and Output Properties" tab.  Expand the "External Columns" folder.  For each field being loaded from the flat file there are some configurable properties.  Make sure that the "DataType" field is properly set for each field.
    3.  Something else that can be done if you are sure that the data type is set correctly in both of the two previously mentioned locations is to set the "Flat File Source" to essentially ignore those annoying truncation errors.  On the same "Input and Output Properties" tab, expand the "Output Columns" folder.  For those fields listed, there is a "TruncationRowDisposition" property.  By default this is set to "RD_FailComponent".  This can be switched to "RD_IgnoreFailure" in order to allow the data to successfully pass through the "Flat File Source" even if SSIS believes that truncation is going to occur.  Along with making this change, you can also check the "DataType" in the "Output Columns" as well.
    Caution: If you do set the "Flat File Source" to "RD_IgnoreFailure" as mentioned above, always take time to review the data loaded in the target table to ensure that the integrity of the data was not jeopardized.
    Note:  I have found that when the "DataType" for both the "External Columns" and "Output Columns" is manually updated that it does not remain the same when the advanced editor is reopened.  For this reason, try Steps 1 and 2 before setting the "Output Columns" manually.
    4.  The last thing to try, and this applies specifically to loading data from Excel files as opposed to text or CSV is to set the package to run in 32-bit mode.  Click on "Project" on the top menu and select "Data Imports Properties...".  Click on "Debugging" under the "Configuration Properties" and set the "Run64BitRuntime" to "False".
    Working with data from flat files can sometimes be difficult in SSIS.  By using one or many of the approaches I have listed above you should be able to create a repeatable process that is frequently needed within most SSIS packages.  Be very careful when setting data types within SSIS and make sure to do it up front when necessary, because it can be harder to debug later in the development process.  If the proper changes are made, it should come as no surprise to feel that big SSIS-developer sense of relief when the screen shows all green.
    Let me know if this works!
    Check out my blog!

  • SSIS - "Text was truncated or one or more characters had no match in the target code page"

    Hello everyone,
    SQL server 2012, SSIS package, we are getting the following error for some of the mapped columns,
    "Text was truncated or one or more characters had no match in the target code page."
    We're fetching the data from CSV file and dumping that to staging table i.e. SQL server 2012.
    Can anybody please advise how to resolve this error/problem? It's urgent.
    Any help would be much appreciated.
    Thanks, Ankit Shah | Inkey Solutions, India | Microsoft Certified Business Management Solutions Professional | http://ankit.inkeysolutions.com

    You can enable the data viewer (right-click on the data flow connector --> Enable Data Viewer) before loading records to find out what's going on. Also, configure the error output to redirect rows, so you can analyse the data type and length.
    Also, try this:
    Ultimately, in the Advanced Editor of the source data file, on the Input and Output Properties tab, under External Columns, there is a Length property that defaults to 50. Changing that to match the target database file did the trick. [Source]
    Check this link: Add a Data Viewer to a Data Flow
    web: www.ronnierahman.com

  • The workflow could not update the item, possibly because one or more columns for the item require a different type of information. Outcome: Unknown Error

    Received this error (The workflow could not update the item, possibly because one or more columns for the item require a different type of information.) recently on a workflow that was working fine, and no changes were made to the workflow.
    I have tried a few suggestions, i.e. adding a pause before any ‘Update’ action (which didn't help, because the workflow passed this action without incident); checked the data types being written to the fields (the correct data types are being written); and we even checked the list schema to ensure the list names and the internal names are aligned (they are), but we still cannot figure out why the workflow is throwing this error.
    We located the area within the workflow step where it is failing and inserted a logging action to determine whether the workflow would execute the logging action, but it did not; it wrote the same error message instead.
    The workflow is a Reusable Approval workflow designed in SharePoint Designer 2010 and attached to a content type. 
    The form associated with the list was modified in InfoPath 2010. 
    Approvers would provide their approval in the InfoPath form which is then read by the workflow.
    Side note: some items created after the workflow throws this Unknown Error seem to be working fine.
    We have deleted the item in question and re-added it with no effect. 
    Based on what we were able to determine, there doesn't seem to be any consistency in how this issue behaves.
    Any suggestions on how to further investigate this issue in order to find the root cause would be greatly appreciated.
    Cheers

    Hi,
    I understand that the reusable workflow doesn’t work properly now. Have you tried to remove the Update list item action to see whether the workflow can run without issue?
    If the workflow runs perfectly when the Update list item action is removed, then you need to check whether there are errors in the update action. Check whether the values have been changed.
    Thanks,
    Entan Ming
    TechNet Community Support

  • The workflow could not update the item, possibly because one or more columns for the item require a different type of information using Update Item action

    I got the error "The workflow could not update the item, possibly because one or more columns for the item require a different type of information". I found out the cause is the Update Item action.
    I need to update an item in another list called Customer Report, setting the field called "Issues" (data type "Choice") to Yes,
    and then the error arises. Please help.

    Thanks for the quick response Nikhil.
    Our SPF 2010 server is relatively small compared to many setups, I am sure. The list with the issue only has 4456 items, and there are a few associated lists, e.g. lookups, Tasks, etc.; see below for counts.
    Site Lists
    Engagements = 4456 (Errors on this list, primary list for activity)
    Tasks = 7711  (All workflow tasks from all site lists)
    Clients = 4396  (Lookup from Engagements, Tslips, etc)
    Workflow History = 584930 (I periodically run a cleanup on this and try to keep it under 400k)
    Tslips = 3522 (Engagements list can create items here, but overall not much interaction between lists)
    A few other lists are used by workflows to look up associations; these are fairly static and under 50 items, e.g. "Parters Admin", used to look up a partner's executive admin to assign a task.
    Stunpals - Disclaimer: This posting is provided "AS IS" with no warranties.

  • Oracle Text, create index (indextype is ctxsys.context)

    Dear sirs,
    I am a new user of Oracle Text (Oracle 11g Release 11.2) and I am unable to create an index of type ctxsys.context. Any suggestions?
    code:
    drop table mytable;
    drop index myindex force;
    create table mytable(id number primary key, docs clob);
    insert into mytable values(111555,'this text will be indexed');
    insert into mytable values(111556,'this is a default datastore example');
    commit;
    create index myindex on mytable(docs)
    indextype is ctxsys.context
    parameters ('DATASTORE CTXSYS.DEFAULT_DATASTORE');
    +++++++
    error messages:
    create index myindex on mytable(docs)
    ERROR at line 1:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-20000: Oracle Text error:
    ORA-06508: PL/SQL: could not find program unit being called
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 366

    Please check for invalid objects. Log on as sys or system and run:
    select owner, object_name, object_type from all_objects where status='INVALID';
    Post the results here and we'll advise the next step.
    If any objects owned by CTXSYS are invalid you may need to recompile the CTXSYS schema.
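    If invalid CTXSYS objects do show up, recompiling them is usually the next step; a minimal sketch (run as SYS or SYSTEM, assuming recompilation is all that is needed):

    -- recompile only the invalid objects in the CTXSYS schema
    exec dbms_utility.compile_schema (schema => 'CTXSYS', compile_all => FALSE);
    -- then re-check
    select owner, object_name, object_type
    from   all_objects
    where  owner = 'CTXSYS' and status = 'INVALID';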

  • How do I get Oracle Text to index files on a file server?

    I am new to Oracle (I'm an MS-SQL DBA looking for a full-text search solution that is better than linking to a MS index server.)
    So - Here's the objective:
    I have Oracle Server(Express) installed on a Windows server.
    I would like for Oracle to build a Full-Text Catalog of the files on a separate file server based on file paths in a table in the database.
    (No desire to store terabytes of images and documents inside the database)
    I can get Oracle Text up and running, using the URL_DATASTORE:
    CREATE TABLE files (id NUMBER PRIMARY KEY, issue_id NUMBER, path VARCHAR(255) UNIQUE, ot_format VARCHAR(6), ot_version VARCHAR(10));
    The Compaq server is a remote windows server on my local workgroup, so the fully qualified path is just "compaq" and the URL is valid:
    INSERT INTO files VALUES (9,9,'file://Compaq/FTQ/00000003.pdf',NULL,NULL);
    INSERT INTO files VALUES (13,13,'file://Compaq/FTQ/01.txt',NULL,NULL);
    CREATE INDEX file_index ON files(path) INDEXTYPE IS ctxsys.context
    PARAMETERS ('datastore ctxsys.URL_DATASTORE format column ot_format');
    but when I enter:
    Select * from CTX_User_Index_errors, I see the following errors:
    DRG-11609: URL store: unable to open local file specified by file://Compaq/FTQ/00000003.pdf
    DRG-11609: URL store: unable to open local file specified by file://Compaq/FTQ/01.txt
    Did I miss something?
    Do I need to install anything on the file server?
    I would like to convince my company that Oracle can be much quicker than Microsoft's Indexing Service because it can avoid joining two large result sets (one result set from full text (Indexing Service) and one for specific data contained in fields in the MS-SQL database). Full-text searches commonly take 40-60 seconds where there are 1.5 million multi-page PDF files for a particular set that I sample-search on. Without this massive join, I believe I can get the search to run in under 10 seconds.

    Thank you!
    File_Datastore worked fine.
    I was staying away from File_Datastore because the information I gathered from googling suggested that file_datastore would only work locally.
    Now I just have to get Oracle to pull data out of tables in a MS-SQL database on the local network (don't have a clue yet), and then have it index compiled file paths.
    Then MS-SQL can query Oracle with index and full-text criteria and Oracle can send back a result set
    It may sound like a bad way of performing Full-Text Queries, but anything will be better than the way things are currently running. We are currently performing Full Text Searches on a table that is rebuilt nightly, so the table containing millions of file paths is not live..
    It would be so much better if we just migrated to Oracle, but we currently do not have the resources.
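    For anyone landing here with the same problem, a FILE_DATASTORE setup looks roughly like this (a sketch only; the preference name and path are placeholders, and the directory must be readable by the operating-system user that runs the database):

    begin
      ctx_ddl.create_preference ('doc_file_ds', 'FILE_DATASTORE');
      -- with PATH set, the indexed column holds file names relative to this directory;
      -- without it, the column must hold full paths
      ctx_ddl.set_attribute ('doc_file_ds', 'PATH', '\\Compaq\FTQ');
    end;
    /
    create index file_idx on files (path)
      indextype is ctxsys.context
      parameters ('datastore doc_file_ds format column ot_format');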

  • Error while running the Oracle Text optimize index procedure (even as a dba user too)

    Hi Experts,
    I am on Oracle 11.2.0.2 on Linux. I have implemented Oracle Text. My Oracle Text indexes are fragmented, but I am getting an error while running the optimize_index procedure. Following is the error:
    begin
      ctx_ddl.optimize_index(idx_name=>'ACCESS_T1',optlevel=>'FULL');
    end;
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 941
    ORA-06512: at line 1
    I then tried to run this as a DBA user too, and it failed the same way!
    begin
      ctx_ddl.optimize_index(idx_name=>'BVSCH1.ACCESS_T1',optlevel=>'FULL');
    end;
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 941
    ORA-06512: at line 1
    The CTXAPP role is granted to my schema and I am still getting this error. I would be thankful for any suggestions.
    Also, one other important observation: we have this issue ONLY in one database; in the other two databases I don't see any problem at all.
    I am unable to figure out what the issue is with this one database!
    Thanks,
    OrauserN

    How about checking the following?
    Bug 10626728 - CTX_DDL.optimize_index "full" fails with an empty ORA-20000 since 11.2.0.2 upgrade (DOCID 10626728.8)

  • Oracle Text ALTER INDEX Performance

    Greetings,
    We have encountered some enhancement issues with Oracle Text and really need assistance.
    We are using Oracle 9i (Release 9.0.1) Standard Edition.
    We are using a very simple Oracle Text environment, with the CTXSYS.CONTEXT indextype on domain indexes.
    We have indexed two text columns in one table, one of these columns is CLOB.
    Currently if one of these columns is modified, we are using a trigger to automatically ALTER the index.
    This is very slow, it is just like dropping the index and creating it again.
    Is this right? Should it be this slow?
    We are also trying to use the ONLINE parameter for ALTER INDEX and CREATE INDEX, but it gives an error saying this feature is not enabled.
    How can we enable it?
    Is there any way in improving the performance of this automatic update of the indexes?
    Would using a trigger be the best way to do this?
    How can we optimize it to a more satisfactory performance level?
    Also, are we able to use the language lexers for indexes with Standard Edition? If so, how do you enable CTX_DDL?
    Many thanks for any assistance.
    Chi-Shyan Wang

    If you are going to sync your index on every update, you need to make sure that you are optimizing it on a regular basis to remove index fragmentation and remove deleted rows.
    You can set up a DBMS_JOB to run ctx_ddl.optimize_index, and run a FULL optimize periodically.
    Also, depending on the number of rows you have, and also the size of the data, you might want to look at using a CTXCAT index, which is transactional, stays in sync automatically and does not need to be optimized. CTXCAT indexes do not work well on large text objects (they are good for a couple lines of text at most) so they may not suit your dataset.
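    As a rough sketch of that sync-plus-optimize pattern (the index name is a placeholder; DBMS_JOB is used here because 9i predates DBMS_SCHEDULER):

    -- incremental sync, much cheaper than rebuilding the index from a trigger
    exec ctx_ddl.sync_index (idx_name => 'MY_TEXT_IDX');

    -- nightly FULL optimize at 02:00 via DBMS_JOB
    declare
      v_job binary_integer;
    begin
      dbms_job.submit (
        job       => v_job,
        what      => 'ctx_ddl.optimize_index (''MY_TEXT_IDX'', ''FULL'');',
        next_date => sysdate,
        interval  => 'trunc(sysdate) + 1 + 2/24');
      commit;
    end;
    /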

  • APEX app using Oracle Text  to index pages that require authorzation

    Hi Gurus and APEX Dev team
    My team needs to develop an APEX app that will index all our documents spread across various servers. Some of the documents require single sign-on access (e.g. KIX.oraclecorp.com) and some require other authorization methods (e.g. Metalink). The question is: is it possible to index the pages that require authorization using Oracle Text? If yes, how? I have implemented the demo app, which can index pages that do not require authorization.
    Thanks a million
    regards
    Bala

    Hello,
    Unless I misunderstand you, the fact that the pages require authentication doesn't really matter; it is the underlying data you want to index, correct? If so, then you would index it in exactly the same way that you would index any table data using Oracle Text/interMedia.
    John.
    Blog: http://jes.blogs.shellprompt.net
    Work: http://www.apex-evangelists.com
    Author of Pro Application Express: http://tinyurl.com/3gu7cd
    REWARDS: Please remember to mark helpful or correct posts on the forum, not just for my answers but for everyone!

  • Ctxsys.context index on registered schema xmltype column

    9iR2
    Is it possible to create a text index (indextype is ctxsys.context) on a schema-registered XMLType column?
    I tried it, and the index creation works fine.
    After the first insert statement the Oracle process seems to hang (CPU at 100%, increasing memory consumption).

    Yes, I previously entered some data.
    Test case:
    1. register schema
    begin
    DBMS_XMLSCHEMA.REGISTERSCHEMA('http://localhost/JobPositionSeeker-1_1.xsd',
    getDocument('JobPositionSeeker-1_1.xsd'), TRUE, TRUE, FALSE, FALSE);
    end;
    2. create table
    CREATE TABLE application_xml of XMLType
    XMLSCHEMA "http://localhost/JobPositionSeeker-1_1.xsd" ELEMENT "JobPositionSeeker"
    3. entered some data
    4. created the index
    create index applicant on application_xml ( SYS_NC_ROWINFO$)
    indextype is ctxsys.context
    5. select using index
    select value(a).getClobVal() from application_xml a
    where contains( value(a),'%ai% INPATH(//PersonalData/PersonName/FamilyName)')>0
    6. insert another record
    -> crash

  • Change display of result set from 'showing data as rows, to showing data as one or more columns'

    Hi Everyone,
    I am interested in changing the way that data is displayed in my result set.
    Essentially I want to display a selection of rows (1 to n) as columns, the following diagram explains my intentions -
    Perhaps one of the greatest challenges here is the fact that I do not have a concrete number of rows (or BIN numbers).
    Each stock item could be stored in one or more BINS, which I will not know until running my query.
    Any suggestions here will be greatly appreciated.
    Kind Regards,
    David

    Can you explain on what basis you select those BinLabels? There are lots of other labels available in your sample data, so what is the rule that determines which BinLabels should be selected?
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
    Agree with Visakh16's opinion. In addition, it might be helpful if you can post your DDL here.
    Regards,
    Elvis Long
    TechNet Community Support
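    As a general illustration while the DDL question above is being answered, one common way to turn a bounded number of rows into columns is conditional aggregation over a row number. All table and column names below are made up, and an unknown number of bins would need a dynamic pivot instead:

    select stock_item,
           max(case when bin_rank = 1 then bin_label end) as bin_1,
           max(case when bin_rank = 2 then bin_label end) as bin_2,
           max(case when bin_rank = 3 then bin_label end) as bin_3
    from  (select stock_item,
                  bin_label,
                  row_number() over (partition by stock_item order by bin_label) as bin_rank
           from   stock_bins) t
    group by stock_item;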
