Slow performance for context index
Hi, I'm just a newbie here in the forum and I would like to ask for your expertise on Oracle CONTEXT indexes. I have a SQL query that uses wildcard characters ('%%') for searching.
I used the SQL below with a CONTEXT index (ctxsys.context) in order to avoid a full table scan for the wildcard search.
SELECT BODY_ID
TITLE, trim(upper(title)) as title_sort,
SUM(JAN) as JAN,
SUM(FEB) as FEB,
SUM(MAR) as MAR,
SUM(APR) as APR,
SUM(MAY) as MAY,
SUM(JUN) as JUN,
SUM(JUL) as JUL,
SUM(AUG) as AUG,
SUM(SEP) as SEP,
SUM(OCT) as OCT,
SUM(NOV) as NOV,
SUM(DEC) AS DEC
FROM APP_REPCBO.CBO_TURNAWAY_REPORT
WHERE contains (BODY_ID,'%240103%') >0 and
PERIOD BETWEEN '1201' AND '1212'
GROUP BY BODY_ID, trim(upper(title))
But I was surprised that performance was very slow; when I ran it through explain plan, the estimated time was almost 2 hours.
Explain plan succeeded.
PLAN_TABLE_OUTPUT
Plan hash value: 814472363
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1052K| 97M| | 805K (1)| 02:41:12 |
| 1 | HASH GROUP BY | | 1052K| 97M| 137M| 805K (1)| 02:41:12 |
|* 2 | TABLE ACCESS BY INDEX ROWID| CBO_TURNAWAY_REPORT | 1052K| 97M| | 782K (1)| 02:36:32 |
|* 3 | DOMAIN INDEX | CBO_REPORT_BID_IDX | | | | 663K (0)| 02:12:41 |
Predicate Information (identified by operation id):
2 - filter("PERIOD">='1201' AND "PERIOD"<='1212')
3 - access("CTXSYS"."CONTAINS"("BODY_ID",'%240103%')>0)
16 rows selected
oracle version: Oracle Database 11g Release 11.1.0.7.0 - 64bit Production
Thanks,
Zack
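A double-truncated term like '%240103%' forces Oracle Text to expand the wildcard against every token in the $I token table before it can resolve the query, which is why the cost is so high and varies between values. One commonly suggested remedy is a wordlist preference with SUBSTRING_INDEX enabled; a minimal sketch, assuming the index can be rebuilt (the preference name here is illustrative):

```sql
BEGIN
  -- SUBSTRING_INDEX builds extra structures so '%term%' queries
  -- avoid scanning every token in the $I table
  ctx_ddl.create_preference('bid_wordlist', 'BASIC_WORDLIST');
  ctx_ddl.set_attribute('bid_wordlist', 'SUBSTRING_INDEX', 'TRUE');
END;
/

CREATE INDEX cbo_report_bid_idx ON app_repcbo.cbo_turnaway_report (body_id)
  INDEXTYPE IS ctxsys.context
  PARAMETERS ('wordlist bid_wordlist');
```

The trade-off is a considerably larger index and slower sync, so it is worth testing on a copy of the table first.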
Hi Rod,
Thanks for the reply. Yes, I have already gathered stats on that table and rebuilt the index.
But it's strange that when I use another BODY_ID, the performance varies.
SQL> EXPLAIN PLAN FOR
2 SELECT BODY_ID
3 TITLE, trim(upper(title)) as title_sort,
4 SUM(JAN) as JAN,
5 SUM(FEB) as FEB,
6 SUM(MAR) as MAR,
7 SUM(APR) as APR,
8 SUM(MAY) as MAY,
9 SUM(JUN) as JUN,
10 SUM(JUL) as JUL,
11 SUM(AUG) as AUG,
12 SUM(SEP) as SEP,
13 SUM(OCT) as OCT,
14 SUM(NOV) as NOV,
15 SUM(DEC) as DEC
16 FROM WEB_REPCBO.CBO_TURNAWAY_REPORT
17 WHERE contains (BODY_ID,'%119915311%')> 0 and
18 PERIOD BETWEEN '1201' AND '1212'
19 GROUP BY BODY_ID, trim(upper(title));
Explained.
SQL> SELECT * FROM TABLE (dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 814472363
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 990 | 96030 | 1477 (1)| 00:00:18 |
| 1 | HASH GROUP BY | | 990 | 96030 | 1477 (1)| 00:00:18 |
|* 2 | TABLE ACCESS BY INDEX ROWID| CBO_TURNAWAY_REPORT | 990 | 96030 | 1475 (0)| 00:00:18 |
|* 3 | DOMAIN INDEX | CBO_REPORT_BID_IDX | | | 647 (0)| 00:00:08 |
Predicate Information (identified by operation id):
2 - filter("PERIOD">='1201' AND "PERIOD"<='1212')
3 - access("CTXSYS"."CONTAINS"("BODY_ID",'%119915311%')>0)
16 rows selected.
Similar Messages
-
Performance of context index with sorting
Dear All,
I've got a problem and don't know how to solve this.
There is a table with an XMLTYPE field storing unstructured XML, and a CONTEXT index is created on it.
When I try to select a record from it using contains(res, '[searchingfield]') > 0, the response time is quick, but when I order by another field in the same table, the response time drops significantly (e.g. select id, path, res, update_date from testingtbl where contains(res, 'shopper') > 0 order by update_date desc).
There is a CONTEXT index built on the 'res' field and another index built on 'update_date'. Without the ORDER BY, the CONTEXT index is used, but the update_date index is not used even when the ordering criterion is present.
Can any expert tell me how to solve this? How can I keep the performance even while sorting?
Thanks and Regards
Raymond

Thanks for your quick reply.
I will provide the mentioned information after I'm back in the office. Actually, I just want to know whether there is any method that can use the CONTEXT index (with the CONTAINS keyword) and sort without slowing down the performance.
Thanks and Regards
Raymond -
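For the sorting question above, one commonly suggested approach is to tell the optimizer that only the first rows are needed, so it can drive from the CONTEXT index and avoid sorting the full result set up front; a hedged sketch using the poster's table (the row count in the hint is illustrative, not a guaranteed fix):

```sql
SELECT /*+ FIRST_ROWS(20) */ id, path, res, update_date
FROM   testingtbl
WHERE  contains(res, 'shopper') > 0
ORDER  BY update_date DESC;
```

Whether this helps depends on how selective the CONTAINS predicate is; it is worth comparing plans with and without the hint.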
Slow performance for large queries
Hi -
I'm experiencing slow performance when I use a filter with a very large OR clause.
I have a list of users whose uids are known, and I want to retrieve attributes for all of them. If I do this one at a time, I pay the network overhead for each request, and this becomes a bottleneck. However, if I try to get information about all users at once, the query runs ridiculously slow: about 10 minutes for 5000 users.
The syntax of my filter is: (|(uid=user1)(uid=user2)(uid=user3)(uid=user4).....(uid=user5000))
I'm trying this technique because it's similar to good design for oracle - minimizing round trips to the database.
I'm running LDAP 4.1.1 on Tru64 OS v5.1.

This is a performance/tuning forum for iPlanet Application Server. You'd have better luck with this question on the Directory forum.
The directory folks don't have a separate forum dedicated to tuning, but they answer performance questions in the main forum all of the time.
David -
Time for context Index Creation
Hi,
I am creating a CONTEXT index on a table with 10 million rows, but it has been running for 10 hours now. What is the expected time to complete the creation of this index?
Immediate replies would be much appreciated.
Thanks,
Sri

Hi,
the answer is "it depends".
- what are you indexing? 10M rows with PDF documents or just simple plain text?
- what is your hardware?
- how is your index made up? Is it just a plain index or do you use all kinds of features (substring etc.)?
- Which version of Oracle are you using?
In the oracle documentation you can find information about this issue: http://docs.oracle.com/cd/E11882_01/text.112/e24435/aoptim.htm#i1006756
Herald ten Dam
http://htendam.wordpress.com -
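As a practical aside on the "how long will it take" question above, Oracle Text can log its progress during index creation, which at least shows whether the build is advancing or stuck, and a larger memory clause usually shortens build time. A sketch with illustrative object names:

```sql
-- Start logging before issuing CREATE INDEX; the log file is written
-- to the directory configured by the LOG_DIRECTORY system parameter.
BEGIN
  ctx_output.start_log('ctx_build.log');
END;
/

CREATE INDEX my_ctx_idx ON my_table (my_clob_col)
  INDEXTYPE IS ctxsys.context
  PARAMETERS ('memory 200M');  -- capped by the MAX_INDEX_MEMORY setting
```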
Super slow when making context index
90000 rows, average 30k/clob
charset zhs16gbk
CONNECT CTXSYS/CTXSYS;
begin
ctx_ddl.create_preference('APPLEXER', 'CHINESE_VGRAM_LEXER');
ctx_ddl.create_preference('APPSTORAGE', 'BASIC_STORAGE');
ctx_ddl.set_attribute('APPSTORAGE', 'I_TABLE_CLAUSE', 'tablespace APPINDEX storage (initial 128M)');
ctx_ddl.set_attribute('APPSTORAGE', 'K_TABLE_CLAUSE', 'tablespace APPINDEX storage (initial 16M)');
ctx_ddl.set_attribute('APPSTORAGE', 'R_TABLE_CLAUSE', 'tablespace APPINDEX storage (initial 16M)');
ctx_ddl.set_attribute('APPSTORAGE', 'N_TABLE_CLAUSE', 'tablespace APPINDEX storage (initial 16M)');
ctx_ddl.set_attribute('APPSTORAGE', 'I_INDEX_CLAUSE', 'tablespace APPINDEX storage (initial 16M)');
ctx_ddl.set_attribute('APPSTORAGE', 'P_TABLE_CLAUSE', 'tablespace APPINDEX storage (initial 16M)');
end;
CREATE INDEX CNINFO.IDX_ZHENGWEN ON CNINFO.TEXT_INFORMATION(WENBENZW) INDEXTYPE IS CTXSYS.CONTEXT
PARAMETERS ('STORAGE APPSTORAGE LEXER APPLEXER');
After almost 24 hours, the index's status is still 'progrs'.
Oracle EE 8.1.7
Solaris 7
1 GB RAM

You might want to look at/consider the following:
1) Use of the parallel option for index creation (would require partitioning on the base table).
2) As per the prior suggestion, look at memory; you may want to go much higher if you can afford it (look at both the default and max parameters for memory).
3) Be careful of some defaults for indexing; we've found that theme indexing was on by default. If data does not need to be filtered you might want to specify NULL_FILTER for the preference, specify not to index themes, set the stemmer to NULL (if this meets your needs), and/or review your stopword lists. -
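Suggestion 3 above can be sketched for the posted index as follows; the NULL_FILTER preference name is illustrative, and the index would have to be rebuilt:

```sql
BEGIN
  -- plain-text CLOBs need no binary filtering, so skip the filter step
  ctx_ddl.create_preference('APPFILTER', 'NULL_FILTER');
END;
/

CREATE INDEX cninfo.idx_zhengwen ON cninfo.text_information (wenbenzw)
  INDEXTYPE IS ctxsys.context
  PARAMETERS ('STORAGE APPSTORAGE LEXER APPLEXER FILTER APPFILTER MEMORY 100M');
```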
Oracle text performance with context search indexes
Search performance using context index.
We are intending to move our search engine to a new one based on Oracle Text, but we are running into some bad performance issues when searching.
Our application allows the user to search stored documents by name, object identifier, and annotations (previously set on objects).
For example, suppose I want to find a document named ImportSax2.c. Depending on the user-set parameters, our search engine formats the following search queries:
1) If the user explicitly asks for a search by document name, the query is the following =>
select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c WITHIN objname' , 1 ) > 0;
2) If the user doesn't specify any extra parameters, the query is the following =>
select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c' , 1 ) > 0;
Oracle Text needs only around 7 seconds to answer the second query, whereas it needs around 50 seconds to answer the first one.
Here is a part of the sql script used for creating the Oracle Text index on the column OBJFIELDURL
(this column stores a path to an xml file containing properties that have to be indexed for each object) :
begin
Ctx_Ddl.Create_Preference('wildcard_pref', 'BASIC_WORDLIST');
ctx_ddl.set_attribute('wildcard_pref', 'wildcard_maxterms', 200) ;
ctx_ddl.set_attribute('wildcard_pref','prefix_min_length',3);
ctx_ddl.set_attribute('wildcard_pref','prefix_max_length',6);
ctx_ddl.set_attribute('wildcard_pref','STEMMER','AUTO');
ctx_ddl.set_attribute('wildcard_pref','fuzzy_match','AUTO');
ctx_ddl.set_attribute('wildcard_pref','prefix_index','TRUE');
ctx_ddl.set_attribute('wildcard_pref','substring_index','TRUE');
end;
begin
ctx_ddl.create_preference('doc_lexer_perigee', 'BASIC_LEXER');
ctx_ddl.set_attribute('doc_lexer_perigee', 'printjoins', '_-');
ctx_ddl.set_attribute('doc_lexer_perigee', 'BASE_LETTER', 'YES');
ctx_ddl.set_attribute('doc_lexer_perigee','index_themes','yes');
ctx_ddl.create_preference('english_lexer','basic_lexer');
ctx_ddl.set_attribute('english_lexer','index_themes','yes');
ctx_ddl.set_attribute('english_lexer','theme_language','english');
ctx_ddl.set_attribute('english_lexer', 'printjoins', '_-');
ctx_ddl.set_attribute('english_lexer', 'BASE_LETTER', 'YES');
ctx_ddl.create_preference('german_lexer','basic_lexer');
ctx_ddl.set_attribute('german_lexer','composite','german');
ctx_ddl.set_attribute('german_lexer','alternate_spelling','GERMAN');
ctx_ddl.set_attribute('german_lexer','printjoins', '_-');
ctx_ddl.set_attribute('german_lexer', 'BASE_LETTER', 'YES');
ctx_ddl.set_attribute('german_lexer','NEW_GERMAN_SPELLING','YES');
ctx_ddl.set_attribute('german_lexer','OVERRIDE_BASE_LETTER','TRUE');
ctx_ddl.create_preference('japanese_lexer','JAPANESE_LEXER');
ctx_ddl.create_preference('global_lexer', 'multi_lexer');
ctx_ddl.add_sub_lexer('global_lexer','default','doc_lexer_perigee');
ctx_ddl.add_sub_lexer('global_lexer','german','german_lexer','ger');
ctx_ddl.add_sub_lexer('global_lexer','japanese','japanese_lexer','jpn');
ctx_ddl.add_sub_lexer('global_lexer','english','english_lexer','en');
end;
begin
ctx_ddl.create_section_group('axmlgroup', 'AUTO_SECTION_GROUP');
end;
drop index ADSOBJ_XOBJFIELDURL force;
create index ADSOBJ_XOBJFIELDURL on ADSOBJ(OBJFIELDURL) indextype is ctxsys.context
parameters
('datastore ctxsys.file_datastore
filter ctxsys.inso_filter
sync (on commit)
lexer global_lexer
language column OBJFIELDURLLANG
charset column OBJFIELDURLCHARSET
format column OBJFIELDURLFORMAT
section group axmlgroup
Wordlist wildcard_pref');
Oracle created a table named DR$ADSOBJ_XOBJFIELDURL$I which now contains around 25 million records.
ADSOBJ is the table containing information for our documents; OBJFIELDURL is the field that contains the path to the XML file containing the data to index. That file looks like this:
<?xml version="1.0" encoding="UTF-8" ?>
<fields>
<OBJNAME><![CDATA[NomLnk_177527o.jpgp]]></OBJNAME>
<OBJREM><![CDATA[Z_CARACT_141]]></OBJREM>
<OBJID>295926o.jpgp</OBJID>
</fields>
Can someone tell me how I can make this kind of query:
"select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c WITHIN objname' , 1 ) > 0;"
run faster?

Below are the execution plans for the two queries:
select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c WITHIN objname' , 1 ) > 0
PLAN_TABLE_OUTPUT
| Id | Operation |Name |Rows |Bytes |Cost (%CPU)|
| 0 | SELECT STATEMENT | |1272 |119K | 4 (0) |
| 1 | TABLE ACCESS BY INDEX ROWID |ADSOBJ |1272 |119K | 4 (0) |
| 2 | DOMAIN INDEX |ADSOBJ_XOBJFIELDURL | | | 4 (0) |
Note
- 'PLAN_TABLE' is old version
Executed in 2 seconds
select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c' , 1 ) > 0
PLAN_TABLE_OUTPUT
| Id |Operation |Name |Rows |Bytes |Cost (%CPU)|
| 0 | SELECT STATEMENT | |1272 |119K | 4 (0) |
| 1 | TABLE ACCESS BY INDEX ROWID |ADSOBJ |1272 |119K | 4 (0) |
| 2 | DOMAIN INDEX |ADSOBJ_XOBJFIELDURL | | | 4 (0) |
Sorry for the result formatting, I can't get it "easily" readable :( -
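Since the indexed XML here holds one value per tag, one option often recommended for speeding up WITHIN queries is to replace the AUTO_SECTION_GROUP with a section group that declares OBJNAME as a field section, which is indexed much more compactly than a zone section. A hedged sketch (group name illustrative, index rebuild required, and any other tags you query with WITHIN would need their own field sections):

```sql
BEGIN
  ctx_ddl.create_section_group('fieldgroup', 'BASIC_SECTION_GROUP');
  -- TRUE makes the section contents visible to non-WITHIN queries too
  ctx_ddl.add_field_section('fieldgroup', 'objname', 'OBJNAME', TRUE);
END;
/
```

The index would then be created with 'section group fieldgroup' in place of 'section group axmlgroup'.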
We are also experiencing extremely slow performance for RoboHelp projects under version control. We are using RoboHelp 11, PushOk, and a Tortoise SVN repository on a Linux server. We are using a Linux server on our IT guys' advice, because we found SVN version control under Windows was unstable.
When placing a Robohelp project under version control, and yes the project is on my local machine, it can take up to two hours to complete. We are using the RoboHelp sample projects to test.
We have tried to put the project under version control from Robohelp, and also tried first putting the project under version control from Tortoise SVN, and then trying to open the project from version control in Robohelp. In both cases, the project takes a ridiculous amount of time to open. The Robohelp status bar displays Querying Version Control Status for about an hour before it starts to download from the repository, which then takes more than an hour to complete. In many cases Robohelp becomes unresponsive and we have to start the whole process again.
If adding the project to source control completes successfully, and the project is opened from version control, performing any function also takes a very long time, such as creating a topic. When I generated a printed documentation layout it took an astonishing 218 minutes and 17 seconds to complete. Interestingly, when I generated the printed documentation layout again, it took 1 minute and 34 seconds. However, when I closed the project, opened it from version control, and tried to generate a printed documentation layout, it again took several hours to complete. The IT guys are at a loss and say it is not a network issue, and I am starting to agree that this is a RoboHelp issue.
I see there are a few other discussions here related to this kind of poor performance, none of which seem to have been answered satisfactorily. For example:
Why does it take so long when adding a new topic in RH10 with PushOK SVN
Does anybody have any ideas on what we can do or what we can investigate? I know that there are other options for version control, but I am reluctant to pursue them until I am satisfied that our current issues cannot be resolved.
Thanks, Mark

Do other applications work fine with the source control repository? The reason I'm asking is that you must first rule out external factors causing this behaviour. It seems that your IT guys have already looked at it, but it's better to be safe than sorry.
I have used both VSS and TFS and I haven't encountered such a performance issue. I would suggest filing it as a bug once you rule out external influences: https://www.adobe.com/cfusion/mmform/index.cfm?name=wishform&loc=en
Kind regards,
Willam -
CS5 or Pc Slow performance?
Hi there, I have a problem with my PC at work, and because I am not quite sure where exactly the problem lies, I decided to ask you. The PC is an Intel Pentium 4 with 3 GB RAM, XP Professional Service Pack 3, and Photoshop CS5.
The PC is connected to a server (database). When I open a bunch of photos (20-30), approximately 8-12 MB each, I sometimes experience slow performance (for example, I try to retouch/clone something and it takes a long time for the PC to redraw/respond). The interesting part is that it doesn't always happen (sometimes it can be even 3 photos and it will take ages to do anything). I also noticed that after approximately 20 minutes it suddenly starts to work as it should (it's as if there was an update, and once it completes, everything is fine). When that started to happen today, I restarted CS5, loaded the same photos, and it worked fine, which is really bizarre to me. I allocated 89% of RAM usage to CS5 (I pretty much use only Photoshop and Bridge), set the history steps to 7, and the cache to 6 for big, flat files. I checked the Efficiency indicator and it is 100% all the time, so it seems it's not Photoshop; but like I said, today I just restarted PS, loaded the same photos, and it was fixed. I tried copying the photos to my hard drive (so performance should not be affected by the server), but it is still the same as when I take them straight from the server. Because the hard drive is only 80 GB (I don't store anything there), I did a clean-up and disk defragment; still the same situation. Do you know what the problem could be, and whether it is Photoshop or the PC? I'd appreciate it.
Thank you

It could be that your PC is sometimes checking things on the server, and maybe the server or network is busy/bogged down and your PC gets caught in that cycle mess. So,
I suggest that the next time you use Ps, you set up the Windows performance monitor (perfmon.exe or perfmon.msc) to view graphs of disk use, network activity, and CPU usage; then you may see/track a problem area.
On Win7, Task Manager does most of the same stuff. -
Slow performance of JDBC - ODBC MS ACCESS
I am experiencing very slow performance with JDBC-ODBC using MS Access as the database. The program works fine on other computers (in terms of performance). However, the hard drive thrashes badly on this computer (which is the fastest among the computers I tested, and also has many gigabytes free). The database is very small. The other computers use exactly the same Java version and MS Access driver version. If anyone has encountered the same problem, or has any suggestions, please help. Thank you.
I am having the same problem with one machine as well. Running MS Access 2000 (unfortunately), and all machines run well with one exception. DB reads take about 10 seconds each. If a solution has been found, please report.
--Dave -
Oracle 10g – Performance with BIG CONTEXT indexes
I would like to use Oracle XE 10.2.0.1.0 only for the full-text searching of the files residing outside the database on the FTP server.
Recently I have found out that size of the files to be indexed is 5GB.
As I have read somewhere on this forum, the size of the index should be 30-40% of the indexed text files (with formatted documents like PDF or DOC, even less).
Let's say that the CONTEXT index size over these files will be 1.5-2 GB.
The number of concurrent users will be at most 5.
I cannot easily test it myself yet.
Does anybody have any experience with Oracle XE or another Oracle Database edition performing with a CONTEXT index this big?
Will the Oracle XE hardware resource license limitations be sufficient to handle one CONTEXT index this big?
(Oracle XE license limitations: 1 GB RAM and 1 CPU)
Regards.

That depends on at least three things:
(1) what is the range of words that will appear in the document set (wide range of documents = smaller resultsets = better performance)
(2) how precise are the user's queries likely to be (more precise = smaller resultsets = better performance)
(3) how many milliseconds are your users willing to wait for results
So, unfortunately, you'll probably have to experiment a bit before you'll know... -
Slow response for HR tcodes PA30/PA20 - impact of Context Solution?
We just implemented the Context Solution for Structural Authorization in our production environment. Throughout our testing phase we were aware of somewhat slower performance with the context solution than without it.
However, in production we came across a user who needs 30-45 minutes just to get to the PA30 initial screen. As far as I have seen, CSSA only impacts data retrieval times, not simply opening the transaction, so we are surprised at this behaviour. She can, however, run HR reports without any such issues.
It's not a problem with the user's GUI version, as she experiences the same problem from a different PC as well. The only thing we could find about this user is that she has access to a huge amount of HR data/objects. Does the context solution have a hard limit on the number of HR objects that a person can have access to?
I would appreciate any ideas! Regards.

Hi everyone,
Sorry for not updating this thread for so long. I was aware of and using the RHBAUS00 and RHBAUS02 reports while posting. Somehow the performance was still very slow while using those reports.
The issue turned out to be related to custom coding for HRBAS00_GET_PROFL where we were not getting rid of duplicate profile values. Now, after a coding change the situation has improved a lot.
I am closing the thread. Thanks to everyone who took the time to comment! -
Hi,
I want to create a context index on one column which contains large text. And the table contains millions of records and daily inserts happen into the same table. My question is
1.Do we need to run any procedures after inserting the records daily?
2.Is there any problem from performace point of view creating context index on the table
Thanks,
Sri

sri333 wrote:
Hi,
I want to create a context index on one column which contains large text. And the table contains millions of records and daily inserts happen into the same table. My question is
1. Do we need to run any procedures after inserting the records daily?

Not for what you describe. But you didn't describe much. I guess you will do something with this table data later; it depends on that. But since you only mentioned that you insert, then no, there is nothing to do after that.
2. Is there any problem, from a performance point of view, in creating a CONTEXT index on the table?

Sure. Creating the index takes time. And if the index is there, new inserts will take more time.
Edited by: Sven W. on Oct 10, 2012 12:02 PM -
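On question 1 above, one caveat: unless the index is created with SYNC (ON COMMIT) or SYNC (EVERY ...), rows inserted during the day are not searchable until the index is synced, so a periodic job along these lines is the usual pattern (index name and memory size are illustrative):

```sql
BEGIN
  -- make newly inserted rows searchable
  ctx_ddl.sync_index(idx_name => 'MY_CTX_IDX', memory => '50M');
  -- defragment the $I table that repeated syncs leave behind
  ctx_ddl.optimize_index(idx_name => 'MY_CTX_IDX', optlevel => 'FULL');
END;
/
```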
Slow TCP performance for traffic routed by ACE module
Hi,
the customer uses two ACE20 modules in active-standby mode. The ACE load-balances the servers correctly, but there is a problem with communication between servers in different ACE contexts. When the customer uses FTP from a server in one context to a server in another context, the throughput through the ACE is about 23 Mbps. It is routed traffic in the ACE :-( See:
server1: / #ftp server2
Connected to server2.cent.priv.
220 server2.cent.priv FTP server (Version 4.2 Wed Apr 2 15:38:27 CDT 2008) ready.
Name (server2:root):
331 Password required for root.
Password:
230 User root logged in.
ftp> bin
200 Type set to I.
ftp> put "|dd if=/dev/zero bs=32k count=5000 " /dev/null
200 PORT command successful.
150 Opening data connection for /dev/null.
5000+0 records in.
5000+0 records out.
226 Transfer complete.
163840000 bytes sent in 6.612 seconds (2.42e+04 Kbytes/s)
local: |dd if=/dev/zero bs=32k count=5000 remote: /dev/null
ftp>
The output from show resource usage doesn't show any drops:
conc-connections 0 0 800000 1600000 0
mgmt-connections 10 54 10000 20000 0
proxy-connections 0 0 104858 209716 0
xlates 0 0 104858 209716 0
bandwidth 0 46228 50000000 225000000 0
throughput 0 1155 50000000 100000000 0
mgmt-traffic rate 0 45073 0 125000000 0
connections rate 0 9 100000 200000 0
ssl-connections rate 0 0 500 1000 0
mac-miss rate 0 0 200 400 0
inspect-conn rate 0 0 600 1200 0
acl-memory 7064 7064 7082352 14168883 0
sticky 6 6 419430 0 0
regexp 47 47 104858 209715 0
syslog buffer 794624 794624 418816 431104 0
syslog rate 0 31 10000 20000 0
There is parameter map configured with rebalance persistant for cookie insertion in the context.
Do you know how I can increase performance for TCP traffic which is not load-balanced, but routed by the ACE? Thank you very much.
Roman

The default inactivity timeouts used by ACE are:
icmp 2sec
tcp 3600sec
udp 120sec
With your config you will change the inactivity timeout for every protocol to 7500 sec. If you want to change the TCP timeout to 7500 sec and keep the other inactivity timeouts as they are now, use the following:
parameter-map type connection GLOBAL-TCP
set timeout inactivity 600
parameter-map type connection GLOBAL-UDP
set timeout inactivity 120
parameter-map type connection GLOBAL-ICMP
set timeout inactivity 2
class-map match-all ALL-TCP
match port tcp any
class-map match-all ALL-UDP
match port udp any
class-map match-all ALL-ICMP
match port tcp any
policy-map multi-match TIMEOUTS
class ALL-TCP
connection advanced GLOBAL-TCP
class ALL-UDP
connection advanced GLOBAL-UDP
class ALL-ICMP
connection advanced GLOBAL-ICMP
and apply service-policy TIMEOUTS globally
Syed Iftekhar Ahmed -
Privileges require for a user to create CONTEXT indexes
Hi all,
RDBMS: 11.2.0.3
SO.......: OEL 6.3
What are the necessary privileges that have to be granted to a user to be able to create CONTEXT indexes? I have granted CTXAPP to my user, but when I tried to create the CONTEXT index with the command below, I got an "insufficient privilege" error message.
CREATE INDEX USR_DOCS.IDX_CTX_TAB_DOCUMENTOS_01 ON USR_DOCS.TAB_DOCUMENTOS(DOCUMENTO) INDEXTYPE IS CTXSYS.CONTEXT PARAMETERS ('SYNC (ON COMMIT)');

It depends on whether the user is trying to create the index on his own table in his own schema or on somebody else's table in somebody else's schema. The following demonstrates minimal privileges (the quota could be smaller) for user usr_docs to create the index on his own table in his own schema, and for my_user to create the index on the usr_docs table in the usr_docs schema.
SCOTT@orcl> -- version:
SCOTT@orcl> SELECT banner FROM v$version
2 /
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
5 rows selected.
SCOTT@orcl> -- usr_docs privileges:
SCOTT@orcl> CREATE USER usr_docs IDENTIFIED BY usr_docs
2 /
User created.
SCOTT@orcl> ALTER USER usr_docs QUOTA UNLIMITED ON users
2 /
User altered.
SCOTT@orcl> GRANT CREATE SESSION, CREATE TABLE TO usr_docs
2 /
Grant succeeded.
SCOTT@orcl> -- my_user privileges:
SCOTT@orcl> CREATE USER my_user IDENTIFIED BY my_user
2 /
User created.
SCOTT@orcl> GRANT CREATE SESSION, CREATE ANY INDEX TO my_user
2 /
Grant succeeded.
SCOTT@orcl> -- user_docs:
SCOTT@orcl> CONNECT usr_docs/usr_docs
Connected.
USR_DOCS@orcl> CREATE TABLE tab_documentos (documento CLOB)
2 /
Table created.
USR_DOCS@orcl> INSERT ALL
2 INTO tab_documentos VALUES ('test data')
3 INTO tab_documentos VALUES ('other stuff')
4 SELECT * FROM DUAL
5 /
2 rows created.
USR_DOCS@orcl> CREATE INDEX USR_DOCS.IDX_CTX_TAB_DOCUMENTOS_01
2 ON USR_DOCS.TAB_DOCUMENTOS(DOCUMENTO)
3 INDEXTYPE IS CTXSYS.CONTEXT
4 PARAMETERS ('SYNC (ON COMMIT)')
5 /
Index created.
USR_DOCS@orcl> DROP INDEX usr_docs.idx_ctx_tab_documentos_01
2 /
Index dropped.
USR_DOCS@orcl> -- my_user:
USR_DOCS@orcl> CONNECT my_user/my_user
Connected.
MY_USER@orcl> CREATE INDEX USR_DOCS.IDX_CTX_TAB_DOCUMENTOS_01
2 ON USR_DOCS.TAB_DOCUMENTOS(DOCUMENTO)
3 INDEXTYPE IS CTXSYS.CONTEXT
4 PARAMETERS ('SYNC (ON COMMIT)')
5 /
Index created. -
Portal Context Index Creation Performance issue
Recreating the Portal context indexes takes around 36 hours at our site (after a Portal upgrade from 3.0.9.8.2 to 3.0.9.8.5, as per the release notes). I was following Note 158368.1 to rebuild the indexes. Is there anything I can do to tune this?
thanks
subu

Unfortunately, indexing is generally a fairly intensive operation and can be time-consuming.
There are some things that you can do to optimize the performance of your database as a whole which may in turn help the performance of your indexing operation. Look at the Performance Guide and Reference book in the database documentation.
Much of the time spent indexing is taken up by filtering binary documents and fetching content identified by URL attributes. In the case of the latter, it might be worth checking the ctx_user_index_errors view to ensure that you don't have a lot of URL requests that are timing out. The timeout is set to 30 seconds, and if there are a lot of URLs where the host cannot be resolved or the fetch times out, it might be costing a lot of time during the indexing operation. This is often the case if a proxy is required to reach the URLs but the proxy has not been configured correctly.
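Checking for those URL errors is a quick query against the view mentioned above, e.g.:

```sql
-- err_text typically shows timeouts or unresolvable hosts
SELECT err_index_name, COUNT(*) AS errors
FROM   ctx_user_index_errors
GROUP  BY err_index_name;
```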