Question about domain index build

My db version is 9.0.1.4
I have a production issue that I need to resolve: I have to rebuild a domain index on a table. The table contains over 10,000 rows of CLOB data in the column I am trying to index. The domain index build takes over 20 hours to run, and while it is running no DML inserts can take place against the table....
My question is: if I run the domain index build statement with the NOPOPULATE clause and then do a sync against the domain index, will users be allowed to do DML inserts against the table?
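For reference, a minimal sketch of the sequence being asked about (table, column, and index names are placeholders, not from the original post):
CREATE INDEX my_doc_idx ON my_docs (doc_clob)
  INDEXTYPE IS ctxsys.context
  PARAMETERS ('NOPOPULATE');  -- create the index structure empty, without scanning the table
-- later, populate the index contents with a sync
EXEC ctx_ddl.sync_index('my_doc_idx');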

interMedia Text is now Oracle Text. Oracle Text has its own forum, and the Text people do not monitor this forum. Please ask your question in the Oracle Text forum; you will get a quicker, more expert answer there.
Thanks,
Larry

Similar Messages

  • Domain index query takes 12 hours to execute

    Hi Friends,
    My query, which uses a domain index, takes 12 hours to execute. Can you please help me tune this query?
    select /*+ NO_UNNEST ORDERED index_ffs(Term idx_recanon_term_ysm1) parallel_index(Term, idx_recanon_term_ysm1, 8) */ term.rowid
    from cmpgn.recanon_search_terms,cmpgn.recanon_term_ysm Term
    where cmpgn.recanon_search_terms.search_type=3 and
    contains(Term.RAW_TERM_TEXT,cmpgn.recanon_search_terms.search_text) > 0 and
    Term.pod_id=11
    Thanks in advance.
    Regards
    Bala

    First, your driving table is recanon_search_terms, to get the required search terms; these are then used to query Oracle Text. What you are trying to do is get a subset of table Term first and then run every individual row of table Term against the search terms. This approach will take a very long time.
    I am not sure what you want to do, but it looks like something I have worked on previously. As far as I can see, there is a table with controlled terms which are matched against raw terms in a text document using Oracle Text. The issue is that you do a join in the contains clause without knowing the number of query expressions formed. It can be that 7 thousand individual queries run at once, then use the indexes of the other predicates, do some sorting, etc., which would explain the long time needed. It will probably run out of memory, causing all sorts of issues.
    As a quick fix, first try the following statement without hints.
    Tell us how many rows you get, the distinct counts for pod_id and search_type, and how long it takes (see the count query sketched below).
    CREATE TABLE test_table AS
    select term.pod_id pod_id, cmpgn.recanon_search_terms.search_type search_type, term_primary_key, ....
    from cmpgn.recanon_search_terms,cmpgn.recanon_term_ysm Term
    where contains(Term.RAW_TERM_TEXT,cmpgn.recanon_search_terms.search_text) > 0
    Create index test_idx on test_table(pod_id, search_type) and use test_table instead to get the results, just by providing pod_id and search_type without the join in the contains clause.
    SELECT ..
    FROM test_table
    WHERE pod_id = X
    AND search_type = Y
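    A minimal sketch of the counts asked for above (column names as in the post):
    SELECT COUNT(*)                    total_rows,
           COUNT(DISTINCT pod_id)      distinct_pod_ids,
           COUNT(DISTINCT search_type) distinct_search_types
    FROM   test_table;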
    Maybe this approach is sufficient for your purpose; it will certainly give you instant results. In that case a materialized view instead of the table could work better for maintenance reasons, although I had some issues with materialized views in the above scenario.
    However, check the results very carefully. I would have some doubts that all rows in search_text form a valid query expression for Oracle Text. If search_text has just single tokens or phrases, wrapping curly brackets around them will probably resolve the issue.
    Consider forming one query expression through a function call instead of a table join inside the contains clause. Sometimes running a set of individual queries is faster than one big query.
    select term.rowid --, form_query_expression(3) query_expression
    from cmpgn.recanon_term_ysm Term
    where contains(Term.RAW_TERM_TEXT, form_query_expression(3)) > 0
    The above function forms one valid Oracle Text query expression by using the table recanon_search_terms inside the function; a sketch of such a function is below. This approach normally helps, at least in debugging and fine-tuning. Avoid bind variables at first, in order to identify a highly skewed distribution of search_type.
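    A hedged sketch of such a function (table and column names follow the post; the OR concatenation and curly-brace wrapping are assumptions):
    CREATE OR REPLACE FUNCTION form_query_expression (p_search_type IN NUMBER)
      RETURN VARCHAR2
    IS
      l_expr VARCHAR2(32767);
    BEGIN
      FOR r IN (SELECT search_text
                FROM   cmpgn.recanon_search_terms
                WHERE  search_type = p_search_type)
      LOOP
        -- wrap each term in braces so Oracle Text treats it literally
        l_expr := l_expr || CASE WHEN l_expr IS NOT NULL THEN ' OR ' END
                         || '{' || r.search_text || '}';
      END LOOP;
      RETURN l_expr;
    END form_query_expression;
    /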
    The other performance issue is the additional predicate pod_id = X; here the suggestion from radord works very well. If you want to get your hands dirty, have a look at user_datastore in the Oracle Text documentation, which will give you all the freedom you want.

  • Error: Index Build Failed - Insufficient Disk Space

    I am unable to create a full-text index for a set of PDF files (Full Text Index with Catalog function).  The initial build worked a few months ago, but all attempts at a rebuild are met with the following:
    Error: Index Build Failed - Insufficient Disk Space
    I have over 5GB of disk space. I have cleared my general temp folder.
    I am using Acrobat XI Pro Version 11.0.10.32.
    Any help would be greatly appreciated.

    "5GB of disk space"
    Disk space is the space on the Hard Disk Drive.
    A 5GB HDD, in today's computer environment, is effectively so small as to be next to nothing.
    Perhaps you have 5GB of RAM?
    That'd indicate a 64-bit computer (if in Windows). But, Acrobat (in Windows) is a 32-bit application and as such cannot make use of the extra RAM a 64-bit computer can carry.
    The alert/error speaks to "disk space", which is the space available on the HDD.
    If your computer's HDD has about 12-15% or less space available, you'll need to make more available (defrag or whatever) or you'll need to move up to a larger HDD.
    Be well...

  • Parallel processing on NONCLUSTERED COLUMNSTORE index build MIA

    Environment:
    SQL2014 Enterprise 12.0.2370
    WindowsServer2008R2 Enterprise
    HP Blade x5570 (12 core)
    Issue:
    I am attempting to build a NONCLUSTERED COLUMNSTORE INDEX on a very large partitioned table.
    Every partition for the table and its indices has its own file, so I can track file IO activity as a proxy for activity on any given partition.
    The default MAXDOP for the instance is 0, and I did not specify MAXDOP in the create statement.
    Observation:
    The build started by accessing all the data partitions and updating all the target index partitions. It was clear from the time this process spent reading each data partition that this task was not related to the actual data. It was a painfully slow affair.
    After this first pass through the data partitions, it again started accessing the data partitions sequentially and updating the corresponding index partitions.
    What is of some concern is the following:
    1: The access is sequential through the partitions in both passes. There is no evidence whatsoever of parallel build processing of the index partitions.
    2: CPU usage is trivial
    3: IO read volume from the data partitions is 100 times less than the SAN capability for the array on which the data partitions are stored. The process is apparently not in any hurry to read the data.
    4: IO write activity for the index partitions is about 1/3 of the read volume. This seems reasonable.
    5: tempdb is on an SSD array and shows no significant queuing activity.
    There is no obvious evidence of any parallel index build occurring.
    I had expected 12 partitions at a time being processed (evidenced by simultaneous IO to 12 partitions) and a significant CPU load across the cores.
    I am seeing neither of these things.
    This is a painfully slow process, so any insights on this would be helpful.

    Thank you for your internet standard questions.
    1: There is no execution plan for an index other than "create index"
    2: 2 numa nodes, and yes it can see them.
    3: select scheduler_id, current_tasks_count, runnable_tasks_count, work_queue_count, pending_disk_io_count
    FROM sys.dm_os_schedulers
    WHERE scheduler_id < 255
    0    4    0    0    0
    1    4    0    0    0
    2    5    0    0    0
    3    4    1    0    0
    4    2    0    0    0
    5    3    0    0    0
    6    3    0    0    0
    7    4    0    0    0
    8    3    0    0    0
    9    3    0    0    0
    10    2    0    0    0
    11    4    0    0    0
    Not a lot going on there.
    4: By definition, a columnstore index needs to read all the data, so statistics are irrelevant: for any execution plan, it is by definition "get all the data as quickly as possible". The cardinality values for the table partitions are maintained in the metadata and are correct.
    5: EXEC sp_configure 'max degree of parallelism'
    name    minimum    maximum    config_value    run_value
    max degree of parallelism    0    32767    0    0
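    For comparison, a hedged sketch of requesting an explicit degree of parallelism at build time instead of relying on the instance default (table, column, and index names are illustrative):
    CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_fact_sales
        ON dbo.fact_sales (sale_date, store_id, amount)
        WITH (MAXDOP = 12);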

  • Aborting index build. Too many errors with finding and copying files ...

    Hi dears,
    I'm using UCM 11g.
    I'm trying to start the automatic update cycle. When I start it, it gives me an error message like:
    Indexing aborted at <time>. Aborting index build. Too many errors with finding and copying files to the appropriate place.
    Can anybody help me about this problem?
    Helps will be appreciated.
    Erdo
    Edited by: erdo on 28.Mar.2013 11:11

    Read this: error in Collection Rebuild Cycle
    and maybe also UCM Indexer - Collection Rebuild Cycle errorring out

  • Performance issues with the Vouchers index build in SES

    Hi All,
    We are currently performing an upgrade for: PS FSCM 9.1 to PS FSCM 9.2.
    As a part of the upgrade, Client wants Oracle SES to be deployed for some modules including, Purchasing, Payables (Vouchers)
    We are facing severe performance issues with the Vouchers index build (volume of data: approx. 8.5 million rows).
    The index creation process runs for over 5 days.
    Can you please share any information or issues that you may have faced on your project and how they were addressed?

    Check the following logs for errors:
    1.  The message log from the process scheduler
    2.  search_server1-diagnostic.log  in /search_server1/logs directory
    If the build is getting stuck while crawling, then we typically have to increase the Java heap size for the WebLogic instance for SES.

  • How to create a domain index on NCLOB Column

    hi all,
    My database version is 10.2.0.1.
    Does anybody know how to create a domain index on an NCLOB column?
    SQL> alter table test add (nclob1   nclob);
    Table altered.
    SQL> CREATE INDEX test_nclob ON test (nclob1) indextype is ctxsys.context
      2  /
    CREATE INDEX test_nclob ON test (nclob1) indextype is ctxsys.context
    ERROR at line 1:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-20000: Oracle Text error:
    DRG-10509: invalid text column: NCLOB1
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 364
    Regards
    Singh

    Does anybody know how to create a domain index on an NCLOB column?
    Not possible per design/documentation: "The column that you specify must be one of the following types: CHAR, VARCHAR, VARCHAR2, BLOB, CLOB, BFILE, XMLType, or URIType."
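    A hedged workaround sketch, assuming the text can live in a CLOB column instead (table and column names follow the post):
    ALTER TABLE test ADD (clob1 CLOB);
    -- CLOB is a supported type for ctxsys.context, so this index creates cleanly
    CREATE INDEX test_clob ON test (clob1) INDEXTYPE IS ctxsys.context;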

  • Party Merge ORA-29861: domain index is marked LOADING/FAILED/UNUSABLE

    Hi gurus,
    I am using 11.5.10.2 on Linux.
    While running the party merge concurrent program, we are getting the below error. Please help.
    Errors in Merge :
    Application: Receivables
    Error: Merge failed in Receivables (HZ_STAGED_PARTY_SITES) with the following error message:
    This unexpected SQL error occurred during the merge process :
    ORA-29861: domain index is marked LOADING/FAILED/UNUSABLE
    The following record was being handled when the error occurred :
    Thanks
    RB

    Thanks Hussain, this is similar to the error I got. I will run the steps and get back to you. Once again, thanks for your prompt response.
    Thx
    RB
    Edited by: R12DBA on Jul 23, 2010 4:36 PM
    Edited by: R12DBA on Jul 23, 2010 4:50 PM
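    For reference, a hedged sketch of locating and then rebuilding the failed domain index (the index name in the ALTER is hypothetical; use the name returned by the query):
    SELECT owner, index_name, domidx_status, domidx_opstatus
    FROM   dba_indexes
    WHERE  ityp_owner = 'CTXSYS'
    AND    (domidx_status <> 'VALID' OR domidx_opstatus <> 'VALID');
    ALTER INDEX ar.hz_staged_party_sites_t1 REBUILD;  -- hypothetical index name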

  • Domain index on list-partitioned table?

    Hi,
    I have a list-partitioned table, and wanted to create a partitioned Oracle Text index on it. I keep getting an error, and am now wondering if it's possible to do. Here is my syntax:
    CREATE INDEX SCHEMA1.IDX_ALL_TEXT_LOCAL ON SCHEMA1.TABLE1(ALL_TEXT)
    INDEXTYPE IS CTXSYS.CONTEXT
    LOCAL
    (PARTITION PTN1 PARAMETERS('sync (on commit) storage ptn1'),
    PARTITION PTN2 PARAMETERS('sync (on commit) storage ptn2'),
    PARTITION PTN3 PARAMETERS('sync (on commit) storage ptn3'),
    PARTITION PTN4 PARAMETERS('sync (on commit) storage ptn4'),
    PARTITION PTN5 PARAMETERS('sync (on commit) storage ptn5'),
    PARTITION PTN6 PARAMETERS('sync (on commit) storage ptn6'),
    PARTITION PTN7 PARAMETERS('sync (on commit) storage ptn7'),
    PARTITION PTN8 PARAMETERS('sync (on commit) storage ptn8')
    PARAMETERS('section group my_group lexer new_lexer');
    ERROR at line 1:
    ORA-29850: invalid option for creation of domain indexes
    Any advice would be much appreciated.
    Thanks,
    Nora

    ... will it spread the index across the tablespaces that are associated with each partition?
    No, as demonstrated below.
    SCOTT@orcl_11gR2> CREATE TABLE table1
      2       ( id         NUMBER(6),
      3         all_text      VARCHAR2 (20)
      4       )
      5  PARTITION BY LIST (id)
      6   (PARTITION ptn1 VALUES (2,4) TABLESPACE example,
      7    PARTITION ptn2 VALUES (3,9) TABLESPACE example
      8   )
      9  /
    Table created.
    SCOTT@orcl_11gR2> INSERT ALL
      2  INTO table1 VALUES (2, 'test2')
      3  INTO table1 VALUES (3, 'test3')
      4  INTO table1 VALUES (4, 'test4')
      5  INTO table1 VALUES (9, 'test9')
      6  SELECT * FROM DUAL
      7  /
    4 rows created.
    SCOTT@orcl_11gR2> CREATE INDEX IDX_ALL_TEXT_LOCAL
      2  ON TABLE1 (ALL_TEXT)
      3  INDEXTYPE IS CTXSYS.CONTEXT
      4  /
    Index created.
    SCOTT@orcl_11gR2> SELECT table_name, tablespace_name
      2  FROM   user_tab_partitions
      3  WHERE  table_name = 'TABLE1'
      4  /
    TABLE_NAME                     TABLESPACE_NAME
    TABLE1                         EXAMPLE
    TABLE1                         EXAMPLE
    2 rows selected.
    SCOTT@orcl_11gR2> SELECT table_name, tablespace_name
      2  FROM   user_tables
      3  WHERE  table_name LIKE '%IDX_ALL_TEXT_LOCAL%'
      4  /
    TABLE_NAME                     TABLESPACE_NAME
    DR$IDX_ALL_TEXT_LOCAL$I        USERS
    DR$IDX_ALL_TEXT_LOCAL$R        USERS
    DR$IDX_ALL_TEXT_LOCAL$N
    DR$IDX_ALL_TEXT_LOCAL$K
    4 rows selected.
    SCOTT@orcl_11gR2>

  • Creating DOMAIN INDEX on INTERVAL PARTITIONING

    Hi !
    I have a problem, and I hope someone can help me!
    Two questions are asked below:
    1. Main question: HOW CAN I SOLVE THIS PROBLEM? ARE THERE OTHER WAYS OF DOING THE SAME JOB (MAYBE FASTER)?
    2. Additionally: Is there a way to accelerate the deletion process?
    Step 1: Creating the table. For information, this is how I create the table:
    CREATE TABLE LOC_EXAMPLE
    ( COLUMN1    NUMBER,
      COLUMN2    NUMBER,
      COLUMN3    NUMBER,
      COLUMN4    NUMBER,
      START_TIME TIMESTAMP,
      GEOLOC     MDSYS.SDO_GEOMETRY
    )
    TABLESPACE DB_DATA
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
      INITIAL 64K
      MINEXTENTS 1
      MAXEXTENTS 2147483645
      PCTINCREASE 0
      BUFFER_POOL DEFAULT
    )
    NOLOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING
    PARTITION BY RANGE (START_TIME)
    INTERVAL (NUMTODSINTERVAL(1,'DAY'))
    ( PARTITION PART_LOC_EXAMPLE VALUES LESS THAN (TO_DATE('01-01-2008','dd-MM-yyyy')) );
    ALTER TABLE LOC_EXAMPLE
      ADD CONSTRAINT PK_LOC_EXAMPLE PRIMARY KEY (COLUMN2, COLUMN4);
    DELETE FROM USER_SDO_GEOM_METADATA WHERE TABLE_NAME = 'LOC_EXAMPLE';
    INSERT INTO USER_SDO_GEOM_METADATA VALUES ('LOC_EXAMPLE','GEOLOC', MDSYS.SDO_DIM_ARRAY( MDSYS.SDO_DIM_ELEMENT('X',-180,180,0.001111949), MDSYS.SDO_DIM_ELEMENT('Y',-90,90,0.001111949) ), 8307);
    STEP 2: I TRY TO CREATE A SPATIAL INDEX (IT'S A DOMAIN INDEX IF I'M NOT WRONG) ON THE PARTITIONED TABLE
    (interval partitioning is an extension of range partitioning)
    CREATE INDEX LOC_EXAMPLE_IDX ON LOC_EXAMPLE (GEOLOC)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX LOCAL;
    THE SECOND STEP IS NOT POSSIBLE AS THE ORACLE DOCUMENTATION SAYS:
    When using interval partitioning, consider the following restrictions:
    -You can only specify one partitioning key column, and it must be of NUMBER or DATE type.
    -Interval partitioning is not supported for index-organized tables.
    -You cannot create a domain index on an interval-partitioned table.
    1) I THINK IT IS IMPOSSIBLE FOR ME TO PASS ON INTERVAL PARTITIONING (THE AMOUNT OF DATA IS REALLY BIG).
    This partitioning is also used to delete data from the database once a month (scheduled on the basis of the partitions).
    Is there a way to accelerate the deletion process?
    2) I NEED A SPATIAL INDEX! NO WAY TO PASS ON IT!
    HOW CAN I SOLVE THIS PROBLEM? ARE THERE OTHER WAYS OF DOING THE SAME JOB (MAYBE FASTER)?
    Why is it not possible to create a domain index on interval partitioning, any reason?
    Will this be possible anytime?
    I would be grateful to read any advice!
    Thanking you in anticipation,
    Ali

    There is a forum here at OTN for spatial. Please delete the contents of this post and ask your question there. Thanks.

  • Creating DOMAIN INDEX (SPATIAL) on INTERVAL PARTITIONING

    Hi !
    I have a problem, and I hope someone can help me!
    Two questions are asked below:
    1. Main question: HOW CAN I SOLVE THIS PROBLEM? ARE THERE OTHER WAYS OF DOING THE SAME JOB (MAYBE FASTER)?
    2. Additionally: Is there a way to accelerate the deletion process?
    Step 1: Creating the table. For information, this is how I create the table:
    CREATE TABLE LOC_EXAMPLE
    ( COLUMN1    NUMBER,
      COLUMN2    NUMBER,
      COLUMN3    NUMBER,
      COLUMN4    NUMBER,
      START_TIME TIMESTAMP,
      GEOLOC     MDSYS.SDO_GEOMETRY
    )
    TABLESPACE DB_DATA
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
      INITIAL 64K
      MINEXTENTS 1
      MAXEXTENTS 2147483645
      PCTINCREASE 0
      BUFFER_POOL DEFAULT
    )
    NOLOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING
    PARTITION BY RANGE (START_TIME)
    INTERVAL (NUMTODSINTERVAL(1,'DAY'))
    ( PARTITION PART_LOC_EXAMPLE VALUES LESS THAN (TO_DATE('01-01-2008','dd-MM-yyyy')) );
    ALTER TABLE LOC_EXAMPLE
      ADD CONSTRAINT PK_LOC_EXAMPLE PRIMARY KEY (COLUMN2, COLUMN4);
    DELETE FROM USER_SDO_GEOM_METADATA WHERE TABLE_NAME = 'LOC_EXAMPLE';
    INSERT INTO USER_SDO_GEOM_METADATA VALUES ('LOC_EXAMPLE','GEOLOC', MDSYS.SDO_DIM_ARRAY( MDSYS.SDO_DIM_ELEMENT('X',-180,180,0.001111949), MDSYS.SDO_DIM_ELEMENT('Y',-90,90,0.001111949) ), 8307);
    STEP 2: I TRY TO CREATE A SPATIAL INDEX (IT'S A DOMAIN INDEX IF I'M NOT WRONG) ON THE PARTITIONED TABLE
    (interval partitioning is an extension of range partitioning)
    CREATE INDEX LOC_EXAMPLE_IDX ON LOC_EXAMPLE (GEOLOC)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX LOCAL;
    THE SECOND STEP IS NOT POSSIBLE AS THE ORACLE DOCUMENTATION SAYS:
    When using interval partitioning, consider the following restrictions:
    -You can only specify one partitioning key column, and it must be of NUMBER or DATE type.
    -Interval partitioning is not supported for index-organized tables.
    -You cannot create a domain index on an interval-partitioned table.
    1) I THINK IT IS IMPOSSIBLE FOR ME TO PASS ON INTERVAL PARTITIONING (THE AMOUNT OF DATA IS REALLY BIG).
    This partitioning is also used to delete data from the database once a month (scheduled on the basis of the partitions).
    Is there a way to accelerate the deletion process?
    2) I NEED A SPATIAL INDEX! NO WAY TO PASS ON IT!
    HOW CAN I SOLVE THIS PROBLEM? ARE THERE OTHER WAYS OF DOING THE SAME JOB (MAYBE FASTER)?
    Why is it not possible to create a domain index on interval partitioning, any reason?
    Will this be possible anytime?
    I would be grateful to read any advice!
    Thanking you in anticipation,
    Ali

    Is it possible to just use a normal range-partitioned table?
    CREATE TABLE LOC_EXAMPLE
    ( START_TIME TIMESTAMP,
      GEOLOC     MDSYS.SDO_GEOMETRY
    )
    PARTITION BY RANGE (START_TIME)
    ( PARTITION P1 VALUES LESS THAN (TO_DATE('01-01-2008','dd-MM-yyyy')) );
    ALTER TABLE loc_example ADD PARTITION p2 VALUES LESS THAN (TO_DATE('02-01-2008','dd-MM-yyyy'));
    ALTER TABLE loc_example DROP PARTITION p1;
    I understand it is not as perfect as interval partitioning, since you have to drop an old partition and add a new one, either manually or by some script (a sketch of such a script is below). But you should be able to create a spatial domain index on it.
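    A hedged sketch of that maintenance step, assuming daily partitions named by date and a fixed retention window (all names and dates are illustrative):
    ALTER TABLE loc_example
      ADD PARTITION p_20080102
      VALUES LESS THAN (TO_DATE('03-01-2008','dd-MM-yyyy'));
    ALTER TABLE loc_example
      DROP PARTITION p_20071203
      UPDATE GLOBAL INDEXES;  -- keeps the global primary key usable; the local
                              -- spatial index partition is dropped with the data
    Dropping a whole partition this way is also the fast path for the monthly deletion, compared with row-by-row DELETE statements.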

  • Still about domain alias and domain migration

    Our company is under a domain name transition. Currently our domain is lab.D.com, and we are moving to aaa.com. During the transition, we wish both domains could keep working for us for a long time.
    I added a domain alias aaa.com for our domain lab.D.com, the ldif shows:
    dn: dc=aaa, dc=com, o=internet
    objectclass: alias
    objectclass: inetDomainAlias
    aliasedObjectName: dc=lab, dc=D, dc=com, o=internet
    dc: aaa
    After restarting the msg server, I can send email to [email protected], which is actually [email protected].
    However, this is only half-way to my goal. I wish our emails at receivers' mailboxes were [email protected], not [email protected], if we send emails through logging in to webmail and typing [email protected] in the user ID box, or by creating [email protected] accounts in MS Outlook.
    ===> Is there any way to do it?
    ===> can [email protected] account be created in MS Outlook?
    In another post about domain migration, you suggested:
    If you want to stop using the old domain, and make the new domain your "default domain", that's a little harder. It involves several steps:
    1. changing all the mail addresses.
    2. changing the "default domain" settings everywhere.
    ===> I wonder, how do I do it? Is there any command? ldapmodify?
    As an alternative approach, if we decide to change our email addresses to [email protected] first,
    ===> will emails sent to either [email protected] or [email protected] arrive at users who are still in lab.D.com, if I only change all the mail addresses' domain part to aaa.com, since I have added the domain alias aaa.com?
    And I do not think sending emails from [email protected] to the internet would be a problem, right?
    The iMS we use is iPlanet Messaging Server 5.2 (built Feb 21 2002) and Directory Server is 4.16, which are very old versions :(
    Thanks.

    I make no claim to be a programmer, nor am I an expert with ldap commands.
    I know of no easy way to change all, other than export to ldif, use a global change with a text editor, and then re-import.
    So, db2ldif -> change in text editor -> ldif2db, right?
    Another question here is about Direct LDAP.
    I enabled Direct LDAP.
    aaa.com is the domain alias for lab.oldD.com (our old domain).
    I have also changed user alas' email address from alas@lab.oldD.com to [email protected] and added mailAlternateAddress for alas as [email protected], as you instructed in previous posts.
    However, whenever I click "send" either to aaa.com or lab.oldD.com, it shows errors, for example: "Returning unknown or illegal alias: [email protected]", "Returning unknown or illegal alias: [email protected]"
    The log shows:
    19:17:27.62: mmc_address_to_tree: Parsing address.
    19:17:27.62: Address: "user@[127.0.0.1]" 0x00000000
    19:17:27.62: Right default: honey.lab.oldD.com
    19:17:27.62: Parsing address with null fixup.
    19:17:27.62: mmc_address_to_tree: Returning.
    19:17:27.62: Rewriting: Mbox = "user", host = "[127.0.0.1]", domain = "$*", literal = "", tag = ""
    19:17:27.62: Rewrite: "$*", position 0, hash table -
    19:17:27.62: Found: "$E$F$U%[email protected]"
    // honey is our email server
    19:17:27.62: Rewrite failed, not forward.
    19:17:27.62: Rewrite: "$*", position 1, hash table -
    19:17:27.62: Failed.
    19:17:27.62: Rewrite: "$*", position 0, rewrite database -
    19:17:27.62: Failed
    19:17:27.62: Rewriting: Mbox = "user", host = "[127.0.0.1]", domain = "[127.0.0.1]", literal = "", tag = ""
    19:17:27.62: Rewrite: "[127.0.0.1]", position 0, hash table -
    19:17:27.62: Failed
    19:17:27.62: Rewrite: "[127.0.0.1]", position 0, hash table -
    19:17:27.62: Failed.
    19:17:27.62: Rewrite: "[127.0.0.1]", position 0, rewrite database -
    19:17:27.62: Failed
    19:17:27.62: Rewriting: Mbox = "user", host = "[127.0.0.1]", domain = "[127.0.0.]", literal = "1", tag = ""
    19:17:27.62: Rewrite: "[127.0.0.*]", position 0, hash table -
    19:17:27.62: Failed
    19:17:27.62: Rewrite: "[127.0.0.]", position 0, hash table -
    19:17:27.62: Failed.
    19:17:27.62: Rewrite: "[127.0.0.]", position 0, rewrite database -
    19:17:27.62: Failed
    19:17:27.62: Rewriting: Mbox = "user", host = "[127.0.0.1]", domain = "[127.0.]", literal = "0.1", tag = ""
    19:17:27.62: Rewrite: "[127.0.*.*]", position 0, hash table -
    19:17:27.62: Failed
    19:17:27.62: Rewrite: "[127.0.]", position 0, hash table -
    19:17:27.62: Failed.
    19:17:27.62: Rewrite: "[127.0.]", position 0, rewrite database -
    19:17:27.62: Failed
    19:17:27.62: Rewriting: Mbox = "user", host = "[127.0.0.1]", domain = "[127.]", literal = "0.0.1", tag = ""
    19:17:27.62: Rewrite: "[127.*.*.*]", position 0, hash table -
    19:17:27.62: Failed
    19:17:27.62: Rewrite: "[127.]", position 0, hash table -
    19:17:27.62: Failed.
    19:17:27.62: Rewrite: "[127.]", position 0, rewrite database -
    19:17:27.62: Failed
    19:17:27.62: Rewriting: Mbox = "user", host = "[127.0.0.1]", domain = "[]", literal = "127.0.0.1", tag = ""
    19:17:27.62: Rewrite: "[]", position 0, hash table -
    19:17:27.62: Found: "$E$R${INTERNAL_IP,$L}$U%[$L]@tcp_intranet-daemon"
    19:17:27.62: Mapping: name = "INTERNAL_IP", input = "127.0.0.1".
    19:17:27.62: Mapping 2 applied to 127.0.0.1
    19:17:27.62: Entry #2 matched, pattern "127.0.0.1", template "$Y", match #0.
    19:17:27.62: New target ""
    19:17:27.62: Exiting...
    19:17:27.62: Final result ""
    19:17:27.62: Mapping result:
    19:17:27.62: New mailbox: "user".
    19:17:27.62: New host: "[127.0.0.1]".
    19:17:27.62: New route: "tcp_intranet-daemon".
    19:17:27.62: New channel system: "tcp_intranet-daemon".
    19:17:27.62: Looking up host "tcp_intranet-daemon".
    19:17:27.62: - found on channel tcp_intranet
    19:17:27.62: mmc_winit('tcp_intranet','[email protected]','') called.
    19:17:27.62: mmc_determine_url beginning with pattern , xadr , mbox , subaddress
    19:17:27.62: Queue area size 18871794, temp area size 18871794
    19:17:27.62: 4717948 blocks of effective free queue space available; setting disk limit accordingly.
    19:17:27.62: mmc_address_to_tree: Parsing address.
    19:17:27.62: Address: "[email protected]" 0x00000000
    19:17:27.62: Right default: lab.oldD.com
    19:17:27.62: Parsing address with local fixup.
    19:17:27.62: mmc_address_to_tree: Returning.
    19:17:27.62: Rewriting: Mbox = "alas", host = "newD.com", domain = "$*", literal = "", tag = ""
    19:17:27.62: Rewrite: "$*", position 0, hash table -
    19:17:27.62: Found: "$E$F$U%[email protected]"
    19:17:27.62: Rewrite failed, not forward.
    19:17:27.62: Rewrite: "$*", position 1, hash table -
    19:17:27.62: Failed.
    19:17:27.62: Rewrite: "$*", position 0, rewrite database -
    19:17:27.62: Failed
    19:17:27.62: Rewriting: Mbox = "alas", host = "newD", domain = "newD.com", literal = "", tag = ""
    19:17:27.62: Rewrite: "newD.com", position 0, hash table -
    19:17:27.62: Failed.
    19:17:27.62: Rewrite: "newD.com", position 0, rewrite database -
    19:17:27.62: Failed
    19:17:27.62: Rewriting: Mbox = "alas", host = "newD", domain = ".com", literal = "", tag = ""
    19:17:27.62: Rewrite: "*.com", position 0, hash table -
    19:17:27.62: Failed
    19:17:27.62: Rewrite: ".com", position 0, hash table -
    19:17:27.62: Found: "$U%$H$D@TCP-DAEMON"
    19:17:27.62: New mailbox: "alas".
    19:17:27.62: New host: "newD.com".
    19:17:27.62: New route: "TCP-DAEMON".
    19:17:27.62: New channel system: "TCP-DAEMON".
    19:17:27.62: Looking up host "TCP-DAEMON".
    19:17:27.62: - found on channel tcp_local
    19:17:27.62: mmc_address_to_tree: Parsing address.
    19:17:27.62: Address: "[email protected]" 0x00000000
    19:17:27.62: Right default: lab.oldD.com
    19:17:27.62: Parsing address with null fixup.
    19:17:27.62: mmc_address_to_tree: Returning.
    19:17:27.62: Rewriting: Mbox = "alas", host = "newD.com", domain = "$*", literal = "", tag = ""
    19:17:27.62: Rewrite: "$*", position 0, hash table -
    19:17:27.62: Found: "$E$F$U%[email protected]"
    19:17:27.62: Rewrite failed, not forward.
    19:17:27.62: Rewrite: "$*", position 1, hash table -
    19:17:27.62: Failed.
    19:17:27.62: Rewrite: "$*", position 0, rewrite database -
    19:17:27.62: Failed
    19:17:27.62: Rewriting: Mbox = "alas", host = "newD", domain = "newD.com", literal = "", tag = ""
    19:17:27.62: Rewrite: "newD.com", position 0, hash table -
    19:17:27.62: Failed.
    19:17:27.62: Rewrite: "newD.com", position 0, rewrite database -
    19:17:27.62: Failed
    19:17:27.62: Rewriting: Mbox = "alas", host = "newD", domain = ".com", literal = "", tag = ""
    19:17:27.62: Rewrite: "*.com", position 0, hash table -
    19:17:27.62: Failed
    19:17:27.62: Rewrite: ".com", position 0, hash table -
    19:17:27.62: Found: "$U%$H$D@TCP-DAEMON"
    19:17:27.62: New mailbox: "alas".
    19:17:27.62: New host: "newD.com".
    19:17:27.62: New route: "TCP-DAEMON".
    19:17:27.62: New channel system: "TCP-DAEMON".
    19:17:27.62: Looking up host "TCP-DAEMON".
    19:17:27.62: - found on channel tcp_local
    19:17:27.62: Mapped return address: [email protected]
    19:17:27.62: mmc_rrply: Return detailed status information.
    19:17:27.62: mmc_rrply: Returning return address and channel OK
    19:17:27.62: mmc_wadr(0x001abd40,'','[email protected]') called.
    19:17:27.62: Copy estimate before address addition is 1
    19:17:27.62: Parsing address [email protected]
    19:17:27.62: mmc_address_to_tree: Parsing address.
    19:17:27.62: Address: "[email protected]" 0x00000000
    19:17:27.62: Right default: lab.oldD.com
    19:17:27.62: Parsing address with local fixup.
    19:17:27.62: mmc_address_to_tree: Returning.
    19:17:27.62: Rewriting: Mbox = "alas", host = "lab.oldD.com", domain = "$*", literal = "", tag = ""
    19:17:27.62: Rewrite: "$*", position 0, hash table -
    19:17:27.62: Found: "$E$F$U%[email protected]"
    19:17:27.62: Match, pattern = "lab.oldD.com", current = "(*domaincheck*)"
    19:17:27.62: old state = not checked.
    19:17:27.62: Performing domainMap check on lab.oldD.com.
    19:17:27.62: Added domainMap result 1 to cache for lab.oldD.com.
    19:17:27.62: new state = succeeded.
    19:17:27.62: New mailbox: "alas".
    19:17:27.62: New host: "lab.oldD.com".
    19:17:27.62: New route: "honey.lab.oldD.com".
    19:17:27.62: New channel system: "honey.lab.oldD.com".
    19:17:27.62: Looking up host "honey.lab.oldD.com".
    19:17:27.62: - found on channel l
    19:17:27.62: Routelocal flag set; scanning for % and !
    19:17:27.62: Address [email protected] requires local processing.
    19:17:27.62: Variant #1 = [email protected]
    19:17:27.62: Variant #2 = *@lab.oldD.com
    19:17:27.62: Checking for [email protected] in the system alias file
    19:17:27.62: - not found
    19:17:27.62: Checking for *@lab.oldD.com in the system alias file
    19:17:27.62: - not found
    19:17:27.62: - adding address [email protected] to headers.
    19:17:27.62: Copy estimate after address addition is 1
    19:17:27.63: mmc_rrply: Return detailed status information.
    19:17:27.63: mmc_rrply: Returning unknown or illegal alias: [email protected]
    I wonder why?
    Thanks.

  • Issue creating domain index

    What LOB error? This is a domain index on a CLOB. I don't have any special settings.
    ERROR at line 1:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drvxtab.create_index_tables
    ORA-22853: invalid LOB storage option specification
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 364
    CREATE INDEX myind ON
    my_mv(my_text)
    INDEXTYPE IS ctxsys.context
    PARAMETERS('MEMORY 500m FILTER INSOFilterpref STORAGE bfile_storage')
    PARALLEL 4;

    I remember this error from dealing with interMedia, specifically with Maximo.
    I believe you need to set and define your global lexer.
    Another key was to grant 'all privileges' to ctxsys within Oracle.
    There is definitely a lot more to this than what I mentioned; there are entire books on setting this up and tuning it.
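    A minimal sketch of defining the lexer and storage preferences explicitly before creating the index (the preference names and the tablespace are illustrative assumptions; INSOFilterpref is the filter preference from the post):
    BEGIN
      ctx_ddl.create_preference('my_lexer', 'BASIC_LEXER');
      ctx_ddl.create_preference('my_storage', 'BASIC_STORAGE');
      ctx_ddl.set_attribute('my_storage', 'I_TABLE_CLAUSE', 'tablespace users');
    END;
    /
    CREATE INDEX myind ON my_mv (my_text)
      INDEXTYPE IS ctxsys.context
      PARAMETERS ('LEXER my_lexer STORAGE my_storage FILTER INSOFilterpref MEMORY 500m')
      PARALLEL 4;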

  • Explain Plan for domain index

    I am running explain plan for queries using a domain index, i.e. an Oracle Text 'contains' clause. The usage of the domain index appears in the plan OK, but I am interested in seeing the underlying access to the DR$...$I etc. tables and associated indexes belonging to ctxsys. Is it possible to see this access?
    Basically I have all the objects created by the domain index in their own tablespace using a storage preference, but I am wondering if the sub-indexes should be moved out from the sub-tables for better performance (a sketch of such a preference split is below)? Any guidance is appreciated.
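    For the second point, a hedged sketch of a storage preference that places the $I table and its index in different tablespaces (preference and tablespace names are illustrative):
    BEGIN
      ctx_ddl.create_preference('my_text_storage', 'BASIC_STORAGE');
      ctx_ddl.set_attribute('my_text_storage', 'I_TABLE_CLAUSE', 'tablespace text_data');
      ctx_ddl.set_attribute('my_text_storage', 'I_INDEX_CLAUSE', 'tablespace text_idx compress 2');
    END;
    /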

  • ORA-29857: domain indexes and/or secondary objects exist in the tablespace

    I tried to drop tablespace APPS_TS_TX_DATA using: drop tablespace APPS_TS_TX_DATA including contents and datafiles cascade constraints;
    I got error ORA-29857: domain indexes and/or secondary objects exist in the tablespace.
    After I dropped all domain indexes and tried to drop the tablespace again, I still got the same error. I have searched Metalink regarding this error; there is no hit.
    What objects in the tablespace, exactly, are preventing the tablespace from being dropped?

    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    ILMU88>
    ILMU88> SELECT COUNT(*)
      2       FROM dba_segments
      3      WHERE tablespace_name = 'APPS_TS_TX_DATA';
      COUNT(*)
         14190
    drop tablespace APPS_TS_TX_DATA including contents and datafiles cascade constraints;
    drop tablespace APPS_TS_TX_DATA including contents and datafiles cascade constraints
    ERROR at line 1:
    ORA-29857: domain indexes and/or secondary objects exist in the tablespace
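    A hedged sketch of listing the domain indexes whose secondary objects still live in that tablespace (assuming the DBA_SECONDARY_OBJECTS dictionary view; the tablespace name is from the post):
    SELECT so.index_owner, so.index_name,
           so.secondary_object_owner, so.secondary_object_name
    FROM   dba_secondary_objects so
    JOIN   dba_segments seg
           ON  seg.owner        = so.secondary_object_owner
           AND seg.segment_name = so.secondary_object_name
    WHERE  seg.tablespace_name = 'APPS_TS_TX_DATA';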
