Indexes failed.

In my process chain, at the initial fill of BIA indexes step, I am getting the following errors:
creation of index terminated
an index already exists with the same name; index = p
creation of BIA index xxxx for InfoCube xxxx terminated during activation
Could you please give me a solution for this error?
Thanks,
V.Singh

Hi VRV,
Was new data loaded into the InfoCube before the BIA index fill? Check that first.
I think you may have to delete the old indexes before you build the new ones.
Hope it helps.
Regards,
Suman

Similar Messages

  • Index failed for Docs and PDF files with return code 8002

    Hi All,
    I am trying to index Word documents in a WebDAV folder in KM. When I add a text document, the indexing completes correctly and I can find the document via the search. When I add a Word document or PDF to the folder and reindex, I get the message "Index failed" with return code 8002.
    How can I solve this problem?
    Kind Regards,
    Richard Middelburg

    Hi,
    I have already changed the host from localhost to the hostname of the Linux server. I also tried this with the FQDN of the server. I have entered the following entry:
    Hostname linux = sap12
    In the URL generator service = http://sap12
    Do I have to restart the portal?
    Richard

  • DAG Big Problem with Content Indexes / Failed on both Servers

    Hello,
    I have 20 DBs in a DAG, and for 5 days the content indexes have been failed on both servers.
    I tried to recreate them on the passive server by stopping the services and deleting the folders, but it does not work. I will try the ContentSubmitters group this evening (because this is very CPU-consuming).
    If that doesn't work, what else can I do?
    Best regards
    Stephan

    Hi ,
    Please use the blog below to get your active databases back to a healthy content index state.
    http://blogs.technet.com/b/exchangesearch/archive/2013/10/28/rebuild-an-index-on-exchange-2013-for-specific-databases.aspx
    Then do the following for the passive copies.
    Update-MailboxDatabaseCopy -identity "name of the DB" -CatalogOnly
    Note: Before updating, please suspend the passive database copy, and then resume the copy once the update completes.
    Regards
    S.Nithyanandham
    Thanks S.Nithyanandham

  • ORA-13249: Error in Rtree .. Spatial Index fails 10.2.0.3.0

    We have a table with spatial data (polygons), containing 2.4 million rows, SRID=8307. Creating a spatial index fails on this table. The Spatial version is 10.2.0.3.0 and it is VALID in dba_registry. Please help. Thanks in advance. The error message is:
    Error starting at line 3 in command:
    CREATE INDEX SST_BASEDATA.SIDX_ISPSPATIALTEMP ON SST_BASEDATA.ISP_SPATIALTEMP (GEOMETRY)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX
    PARAMETERS ('tablespace=tdx_basedata_isp layer_gtype=POLYGON') PARALLEL
    Error report:
    SQL Error: ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-13249: Error in Rtree: [mdrcrrdmddata, select /*+ PARALLEL(a, 32767) */ rid, cen  from M8_1CC99$$ a  where cen >= :1 and cen <= :2], 83243 83244
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 10

    We have a similar problem with Oracle 10.2.0.3, SRID 31297, 2-dimensional data: points, lines, and polygons.
    We installed a new computer (Windows 2003 Server, Oracle 10.2.0.3) and transferred an application from an existing Oracle 10.2.0.2 instance.
    We transferred the data with the exp/imp utility.
    The creation of the index failed (during import, or when creating the index with a SQL command)
    for all tables with more than 2 million rows, with the following message:
    SQL Error: ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 10
    We did not get the ORA-13249 error.
    For tables with fewer than 1 million rows the index was created.
    We decided to go back to Oracle 10.2.0.2 and everything works fine.
    We will upgrade the database to 10.2.0.3 when a patch becomes available.
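    A minimal diagnostic sketch, reusing the object names from the failing statement above (this is an assumption about a useful next step, not a confirmed fix): drop the failed index and retry the build serially, which helps isolate whether the PARALLEL clause is involved in the ORA-13249.
    -- Hedged sketch; object names are taken from the CREATE INDEX statement above.
    -- A failed spatial index build usually has to be dropped with FORCE first.
    DROP INDEX SST_BASEDATA.SIDX_ISPSPATIALTEMP FORCE;
    -- Retry the build serially (no PARALLEL clause) to rule out the parallel slaves.
    CREATE INDEX SST_BASEDATA.SIDX_ISPSPATIALTEMP
      ON SST_BASEDATA.ISP_SPATIALTEMP (GEOMETRY)
      INDEXTYPE IS MDSYS.SPATIAL_INDEX
      PARAMETERS ('tablespace=tdx_basedata_isp layer_gtype=POLYGON');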

  • [CS3] Method Generate for the object Index fails

    Hello,
    I want to create an index in my VB script with InDesign CS3:
    Set objIndex = docindex.Generate(indIndexDoc.Pages.Item(1),p)
    The index is longer than one page.
    I think this is the reason why I get this error:
    Runtime error -2147417851 (80010105)
    Method "Generate" for the object "Index" fails
    Thank you in advance!
    Harald

    I got this hint from Martin Fischer:
    The problem is a special character in the topics.
    Harald

  • INDEX failed to create while  Import using datapump

    Hi DBA's
    Database: 11.2.0.1
    OS: Solaris 10
    I am trying to import the data using the following command:
    impdp system/******* SCHEMAS=EIMS DIRECTORY=expdp_dir DUMPFILE=eimsexp.dmp
    It fails to import the indexes and errors out, but the tables and data are imported.
    I checked Metalink and followed the note (During Data Pump Import Index Creation Fails With: ORA-39083 and ORA-14102 errors OR ORA-39083, ORA-02158 and ORA-39112 errors [ID 1066635.1]), but no luck.
    Kindly advise me.
    ERROR
    ORA-39083: Object type INDEX failed to create with error:
    ORA-14102: only one LOGGING or NOLOGGING clause may be specified
    Failing sql is:
    CREATE UNIQUE INDEX "EIMS"."PK_MDS_PSTN_LEVEL" ON "EIMS"."MDS_PSTN_LEVEL" ("REGION_CODE", "AREA_CODE", "LEVEL_NO", "EXCHANGE_CODE") PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING NOCOMPRESS LOGGING STORAGE( INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "EIMS_DATA" PARALLEL 1
    ORA-39083: Object type INDEX failed to create with error:
    ORA-14102: only one LOGGING or NOLOGGING clause may be specified
    Best Regards,
    SG
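    A minimal sketch of one workaround for the ORA-14102, based on the DDL in the log above (an assumption, not a verified fix for this system): pre-create the index with only one logging clause and re-run the import with EXCLUDE=INDEX, or alternatively let impdp drop the storage and logging attributes with TRANSFORM=SEGMENT_ATTRIBUTES:N.
    -- Hedged sketch: the failing statement from the import log, keeping only one
    -- of the two conflicting logging clauses (NOLOGGING here; LOGGING would also work).
    CREATE UNIQUE INDEX "EIMS"."PK_MDS_PSTN_LEVEL"
      ON "EIMS"."MDS_PSTN_LEVEL" ("REGION_CODE", "AREA_CODE", "LEVEL_NO", "EXCHANGE_CODE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255
      NOLOGGING NOCOMPRESS
      STORAGE (INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1)
      TABLESPACE "EIMS_DATA"
      PARALLEL 1;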

    Dear Hussein,
    Yes, I applied the patch, but I am still getting the same error, so I followed one more note (Impdp Failed With Error Ora-39083, Ora-01403 On Index_Statistics (Doc ID 755253.1)).
    Following that note I used the option "exclude=index", and then I got a different error.
    We are in the process of testing Tablespace Encryption.
    We have two instances, a dev and a UAT instance.
    We created an encrypted tablespace on the UAT instance and are trying to import the data of one particular user from the dev instance.
    We completed the following steps:
    1. Created the encrypted tablespace.
    2. Created the user and assigned the tablespace to the user EIMS1.
    3. Took an export from the dev instance using the following command:
    expdp system/******** SCHEMAS=EIMS DIRECTORY=expdp_dir DUMPFILE=eimsexp.dmp
    4. Imported the data using:
    impdp system/******** REMAP_SCHEMA=EIMS:EIMS1 DIRECTORY=expdp_dir exclude=index DUMPFILE=eimsexp.dmp
    Error
    ====
    ORA-39083: Object type CONSTRAINT failed to create with error:
    ORA-14102: only one LOGGING or NOLOGGING clause may be specified
    Failing sql is:
    ALTER TABLE "EIMS1"."DOCS" ADD PRIMARY KEY ("ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING NOCOMPRESS LOGGING STORAGE( INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "EIMS_DATA" ENABLE
    ORA-39083: Object type CONSTRAINT failed to create with error:
    ORA-14102: only one LOGGING or NOLOGGING clause may be specified
    Before that, I used to get the errors below:
    ==========================
    ALTER TABLE "EIMS1"."TST1" ADD CONSTRAINT "ADMACHINEDEPLOYMENTSTATUS_PK" PRIMARY KEY ("MID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING NOCOMPRESS LOGGING STORAGE( INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "EIMS_DATA" ENABLE
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    ORA-39083: Object type INDEX_STATISTICS failed to create with error:
    ORA-01403: no data found
    ORA-01403: no data found
    Failing sql is:
    DECLARE I_N VARCHAR2(60); I_O VARCHAR2(60); c DBMS_METADATA.T_VAR_COLL; df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS'; BEGIN DELETE FROM "SYS"."IMPDP_STATS"; c(1) := 'ID'; DBMS_METADATA.GET_STAT_INDNAME('EIMS1','DOCS',c,1,i_o,i_n); INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,d1,cl1) VALUES ('I',5,0,I_N,NUL
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"EIMS1"."PK_MDS_PSTN_LEVEL" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"EIMS1"."PK_MD_AMT_REQUEST_PRIORITY" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"EIMS1"."PK_MD_AMT_REQUEST_QUEUE" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"EIMS1"."PK_MD_BH02_REQUEST_PRIORITY" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"EIMS1"."PK_MD_BH03_REQUEST_PRIORITY" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"EIMS1"."PK_MD_BH03_REQUEST_QUEUE" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"EIMS1"."PK_MD_BH04_REQUEST_PRIORITY" creation failed
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1049 error(s) at 20:43:02
    Thanks
    SG
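    Since the ORA-39112 messages only mean that the exported optimizer statistics for the skipped indexes were not applied, a minimal follow-up sketch (assuming the import is re-run with EXCLUDE=STATISTICS and the target schema EIMS1 from the log) is to regather the statistics afterwards:
    -- Hedged sketch: regather schema statistics on the target after the import
    -- instead of importing INDEX_STATISTICS / TABLE_STATISTICS from the dump.
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname => 'EIMS1',
        cascade => TRUE);  -- also gathers statistics for the schema's indexes
    END;
    /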

  • Building secondary Index fails for large number(25,000,000) of records

    I am inserting 25,000,000 records of the type:
    Key --> Data
    [long,String,long] --> [{long,long}, {String}]
    using setSecondaryBulkLoad(true) and then building two secondary indexes on the {long,long} and {String} parts of the data portion.
         private void buildSecondaryIndex(DataAccessLayer dataAccessLayer) {
              try {
                   SecondaryIndex<TDetailSecondaryKey, TDetailStringKey, TDetailStringRecord> secondaryIndex = store.getSecondaryIndex(dataAccessLayer.getPrimaryIndex(), TDetailSecondaryKey.class, SECONDARY_KEY_NAME);
              } catch (DatabaseException e) {
                   throw new RuntimeException(e);
              }
         }
    It fails when I build the secondary index, probably due to a Java heap space error. See the failure trace below.
    I do not face this problem when I deal with 250,000 records.
    Is there a workaround that does not require changing the JVM memory settings?
    Failure Trace:
    java.lang.RuntimeException: Environment invalid because of previous exception: com.sleepycat.je.RunRecoveryException
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.buildSecondaryIndex(TDetailStringDAOInsertTest.java:444)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.insertCellSetInOneTxn(TDetailStringDAOInsertTest.java:280)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.mainTest(TDetailStringDAOInsertTest.java:93)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at org.junit.internal.runners.TestMethodRunner.executeMethodBody(TestMethodRunner.java:99)
         at org.junit.internal.runners.TestMethodRunner.runUnprotected(TestMethodRunner.java:81)
         at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
         at org.junit.internal.runners.TestMethodRunner.runMethod(TestMethodRunner.java:75)
         at org.junit.internal.runners.TestMethodRunner.run(TestMethodRunner.java:45)
         at org.junit.internal.runners.TestClassMethodsRunner.invokeTestMethod(TestClassMethodsRunner.java:66)
         at org.junit.internal.runners.TestClassMethodsRunner.run(TestClassMethodsRunner.java:35)
         at org.junit.internal.runners.TestClassRunner$1.runUnprotected(TestClassRunner.java:42)
         at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
         at org.junit.internal.runners.TestClassRunner.run(TestClassRunner.java:52)
         at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:38)
         at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
    Caused by: Environment invalid because of previous exception: com.sleepycat.je.RunRecoveryException
         at com.sleepycat.je.dbi.EnvironmentImpl.checkIfInvalid(EnvironmentImpl.java:976)
         at com.sleepycat.je.log.LogManager.getLogEntry(LogManager.java:584)
         at com.sleepycat.je.txn.Txn.undo(Txn.java:713)
         at com.sleepycat.je.txn.Txn.abortInternal(Txn.java:631)
         at com.sleepycat.je.txn.Txn.abort(Txn.java:599)
         at com.sleepycat.je.txn.AutoTxn.operationEnd(AutoTxn.java:36)
         at com.sleepycat.je.Environment.openDb(Environment.java:505)
         at com.sleepycat.je.Environment.openSecondaryDatabase(Environment.java:382)
         at com.sleepycat.persist.impl.Store.openSecondaryIndex(Store.java:684)
         at com.sleepycat.persist.impl.Store.getSecondaryIndex(Store.java:579)
         at com.sleepycat.persist.EntityStore.getSecondaryIndex(EntityStore.java:286)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.buildSecondaryIndex(TDetailStringDAOInsertTest.java:441)
         ... 22 more
    Caused by: java.lang.OutOfMemoryError: Java heap space
         at java.util.HashMap.resize(HashMap.java:462)
         at java.util.HashMap.addEntry(HashMap.java:755)
         at java.util.HashMap.put(HashMap.java:385)
         at java.util.HashSet.add(HashSet.java:200)
         at com.sleepycat.je.txn.Txn.addReadLock(Txn.java:964)
         at com.sleepycat.je.txn.Txn.addLock(Txn.java:952)
         at com.sleepycat.je.txn.LockManager.attemptLockInternal(LockManager.java:347)
         at com.sleepycat.je.txn.SyncedLockManager.attemptLock(SyncedLockManager.java:43)
         at com.sleepycat.je.txn.LockManager.lock(LockManager.java:178)
         at com.sleepycat.je.txn.Txn.lockInternal(Txn.java:295)
         at com.sleepycat.je.txn.Locker.nonBlockingLock(Locker.java:288)
         at com.sleepycat.je.dbi.CursorImpl.lockLNDeletedAllowed(CursorImpl.java:2357)
         at com.sleepycat.je.dbi.CursorImpl.lockLN(CursorImpl.java:2297)
         at com.sleepycat.je.dbi.CursorImpl.fetchCurrent(CursorImpl.java:2227)
         at com.sleepycat.je.dbi.CursorImpl.getCurrentAlreadyLatched(CursorImpl.java:1296)
         at com.sleepycat.je.dbi.CursorImpl.getNextWithKeyChangeStatus(CursorImpl.java:1442)
         at com.sleepycat.je.dbi.CursorImpl.getNext(CursorImpl.java:1368)
         at com.sleepycat.je.Cursor.retrieveNextAllowPhantoms(Cursor.java:1587)
         at com.sleepycat.je.Cursor.retrieveNext(Cursor.java:1397)
         at com.sleepycat.je.SecondaryDatabase.init(SecondaryDatabase.java:182)
         at com.sleepycat.je.SecondaryDatabase.initNew(SecondaryDatabase.java:118)
         at com.sleepycat.je.Environment.openDb(Environment.java:484)
         at com.sleepycat.je.Environment.openSecondaryDatabase(Environment.java:382)
         at com.sleepycat.persist.impl.Store.openSecondaryIndex(Store.java:684)
         at com.sleepycat.persist.impl.Store.getSecondaryIndex(Store.java:579)
         at com.sleepycat.persist.EntityStore.getSecondaryIndex(EntityStore.java:286)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.buildSecondaryIndex(TDetailStringDAOInsertTest.java:441)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.insertCellSetInOneTxn(TDetailStringDAOInsertTest.java:280)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.mainTest(TDetailStringDAOInsertTest.java:93)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

    > 1. Does the speed of building the secondary index depend on the type of the data in the key? Would having integers in the secondary key, as opposed to strings, be better?
    The byte size of the key and data is significant of course, but the data type is not.
    > 2. How much are we bound by the memory? Let's assume my memory setting is fixed.
    > a. I know that with the current memory settings, if I set txn on, I get a Java heap error. So will I be limited in the size of the secondary index, or will it just get really slow, swapping tree information from disk as it builds it?
    No. The out-of-memory error was caused by a very large transaction that holds locks. When using small transactions or non-transactional access, you won't have this problem. In general, like most databases, JE writes and reads information to/from disk as needed.
    > b. Is there any other way of speeding up the build of the secondary database?
    No, other than general performance tuning, nothing I know of.
    > c. Will it be more beneficial not to bulk load when the data size gets large, so that the secondary database is built incrementally?
    It's up to you whether you want to pay the price during the initial load or incrementally.
    > d. Do you think it will help to partition the original database into smaller databases using some criteria, and thus build smaller trees? The only weak point in this is that if we have to bulk load into one partition at some point, increasing its size, we may face the same problem again.
    Why? You can use deferred write or non-transactional access to load any size database. Face what problem?
    --mark

  • Rebuilding spatial indexing fail after patch 9.2.0.8.0

    Hi!
    I applied the patchset, and after that I tried:
    set NLS_LANG=American_America.CL8MSWIN1251
    sqlplusw ......
    SQL> alter index XXXX.YYYYYYYY_g_idx rebuild; -- <- Russian name
    alter index XXXX.YYYYYYYY_g_idx rebuild
    ERROR at line 1:
    ORA-29858: error occurred in the execution of ODCIINDEXALTER routine
    ORA-29400: data cartridge error
    ¡a¤
    ORA-13249: internal error in Spatial index: [mdidxrbd]
    ORA-13205: internal error while parsing spatial parameters
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_9I", line 259
    ORA-06512: at line 1
    As you can see, the rebuild failed.
    Before the upgrade everything worked fine.
    If I create a table WITH_ENGLISH_NAME as a SELECT from my table, everything works fine ;(
    I DID apply all *.sql scripts from the patchset.
    Any ideas, gurus?
    P.S.
    I used the verify_spatial_installation.sql script from Metalink:
    SQL>
    SQL> prompt Verify version and status
    Verify version and status
    SQL> select COMP_NAME, SCHEMA, VERSION, STATUS
    2 from dba_registry where comp_id='SDO';
    COMP_NAME SCHEMA VERSION STATUS
    Spatial MDSYS 9.2.0.8.0 VALID
    SQL>
    SQL> prompt Number of objects
    Number of objects
    SQL> select count(*)
    2 from dba_objects where owner='MDSYS';
    COUNT(*)
    250
    SQL>
    SQL> prompt Summary count of objects
    Summary count of objects
    SQL> select object_type, count(*)
    2 from dba_objects where owner='MDSYS'
    3 group by object_type;
    OBJECT_TYPE COUNT(*)
    FUNCTION 47
    INDEX 17
    INDEXTYPE 2
    LIBRARY 12
    LOB 12
    OPERATOR 14
    PACKAGE 25
    PACKAGE BODY 25
    SEQUENCE 2
    TABLE 18
    TRIGGER 7
    TYPE 34
    TYPE BODY 10
    VIEW 25
    14 rows selected.
    SQL>
    SQL> prompt Any invalid objects ?
    Any invalid objects ?
    SQL> select object_name, object_type, status
    2 from dba_objects
    3 where owner='MDSYS'
    4 and status <> 'VALID'
    5 order by object_name;
    OBJECT_NAME OBJECT_TYPE STATUS
    SDO_MIGRATE PACKAGE BODY INVALID
    SQL>
    SQL> prompt List of all spatial objects ordered by object_name
    List of all spatial objects ordered by object_name
    SQL> select object_name, object_type, status
    2 from dba_objects where owner='MDSYS'
    3 order by object_name;
    OBJECT_NAME OBJECT_TYPE STATUS
    AGGRCENTROID TYPE VALID
    AGGRCENTROID TYPE BODY VALID
    AGGRCONVEXHULL TYPE VALID
    AGGRCONVEXHULL TYPE BODY VALID
    AGGRLRSCONCAT TYPE VALID
    AGGRLRSCONCAT TYPE BODY VALID
    AGGRLRSCONCAT3D TYPE VALID
    AGGRLRSCONCAT3D TYPE BODY VALID
    AGGRMBR TYPE VALID
    AGGRMBR TYPE BODY VALID
    AGGRUNION TYPE VALID
    AGGRUNION TYPE BODY VALID
    ALL_GEOMETRY_COLUMNS VIEW VALID
    ALL_SDO_GEOM_METADATA VIEW VALID
    ALL_SDO_INDEX_INFO VIEW VALID
    ALL_SDO_INDEX_METADATA VIEW VALID
    ALL_SDO_LRS_METADATA VIEW VALID
    ALL_SDO_MAPS VIEW VALID
    ALL_SDO_STYLES VIEW VALID
    ALL_SDO_THEMES VIEW VALID
    CS_SRS TABLE VALID
    DBA_GEOMETRY_COLUMNS VIEW VALID
    DBA_SDO_INDEX_INFO VIEW VALID
    DBA_SDO_INDEX_METADATA VIEW VALID
    DBA_SDO_LRS_METADATA VIEW VALID
    DBA_SDO_MAPS VIEW VALID
    DBA_SDO_STYLES VIEW VALID
    DBA_SDO_THEMES VIEW VALID
    F81_INDEX_OBJECT TYPE VALID
    F81_INDEX_OBJ_ARRAY TYPE VALID
    F81_NT_IND_TYPE TYPE VALID
    GEOCODER_HTTP PACKAGE VALID
    GEOCODER_HTTP PACKAGE BODY VALID
    GEOCODE_RESULT TYPE VALID
    GEODETIC_SRIDS VIEW VALID
    H81_INDEX_OBJECT TYPE VALID
    H81_INDEX_OBJ_ARRAY TYPE VALID
    H81_NT_IND_TYPE TYPE VALID
    HHAND FUNCTION VALID
    HHBYTELEN FUNCTION VALID
    HHCBIT FUNCTION VALID
    HHCELLBNDRY FUNCTION VALID
    HHCELLSIZE FUNCTION VALID
    HHCLDATE FUNCTION VALID
    HHCOLLAPSE FUNCTION VALID
    HHCOMMONCODE FUNCTION VALID
    HHCOMPARE FUNCTION VALID
    HHCOMPOSE FUNCTION VALID
    HHDECODE FUNCTION VALID
    HHDISTANCE FUNCTION VALID
    HHENCODE FUNCTION VALID
    HHENCODE_BYLEVEL FUNCTION VALID
    HHGBIT FUNCTION VALID
    HHGETCID FUNCTION VALID
    HHGROUP FUNCTION VALID
    HHGTBIT FUNCTION VALID
    HHGTYPE FUNCTION VALID
    HHIDLPART FUNCTION VALID
    HHIDPART FUNCTION VALID
    HHINCRLEV FUNCTION VALID
    HHJLDATE FUNCTION VALID
    HHLENGTH FUNCTION VALID
    HHLEVELS FUNCTION VALID
    HHMATCH FUNCTION VALID
    HHMAXCODE FUNCTION VALID
    HHNCOMPARE FUNCTION VALID
    HHNDIM FUNCTION VALID
    HHOR FUNCTION VALID
    HHORDER FUNCTION VALID
    HHPRECISION FUNCTION VALID
    HHSBIT FUNCTION VALID
    HHSETCID FUNCTION VALID
    HHSTBIT FUNCTION VALID
    HHSTYPE FUNCTION VALID
    HHSUBDIVIDE FUNCTION VALID
    HHSUBSTR FUNCTION VALID
    HHXOR FUNCTION VALID
    LOCATOR_WITHIN_DISTANCE OPERATOR VALID
    MD PACKAGE VALID
    MD PACKAGE BODY VALID
    MD$RELATE TABLE VALID
    MD1 PACKAGE VALID
    MD1 PACKAGE BODY VALID
    MD2 PACKAGE VALID
    MD2 PACKAGE BODY VALID
    MDERR PACKAGE VALID
    MDERR PACKAGE BODY VALID
    MDPRVT_IDX PACKAGE VALID
    MDPRVT_IDX PACKAGE BODY VALID
    MD_LRS PACKAGE VALID
    MD_LRS PACKAGE BODY VALID
    OGIS_GEOMETRY_COLUMNS TABLE VALID
    OGIS_SPATIAL_REFERENCE_SYSTEMS TABLE VALID
    ORDMD_AG_LIBS LIBRARY VALID
    ORDMD_CS_LIBS LIBRARY VALID
    ORDMD_IDX_LIBS LIBRARY VALID
    ORDMD_LRS_LIBS LIBRARY VALID
    ORDMD_MBR_LIBS LIBRARY VALID
    ORDMD_MIG_LIBS LIBRARY VALID
    ORDMD_PRIDX_LIBS LIBRARY VALID
    ORDMD_REL_LIBS LIBRARY VALID
    ORDMD_RTREE_LIBS LIBRARY VALID
    ORDMD_UDT_LIBS LIBRARY VALID
    ORDMD_UTL_LIBS LIBRARY VALID
    ORDMD_WD_LIBS LIBRARY VALID
    PK_SDO_MASK INDEX VALID
    PK_SRID INDEX VALID
    PRVT_IDX PACKAGE VALID
    PRVT_IDX PACKAGE BODY VALID
    RTREE_FILTER OPERATOR VALID
    RTREE_IDX PACKAGE VALID
    RTREE_IDX PACKAGE BODY VALID
    RTREE_INDEX INDEXTYPE VALID
    RTREE_INDEX_METHOD TYPE VALID
    RTREE_INDEX_METHOD TYPE BODY VALID
    RTREE_NN OPERATOR VALID
    SAMPLE_SEQ SEQUENCE VALID
    SDO PACKAGE VALID
    SDO PACKAGE BODY VALID
    SDOAGGR TYPE VALID
    SDOAGGR TYPE BODY VALID
    SDOAGGRTYPE TYPE VALID
    SDO_3GL PACKAGE VALID
    SDO_3GL PACKAGE BODY VALID
    SDO_ADMIN PACKAGE VALID
    SDO_ADMIN PACKAGE BODY VALID
    SDO_AGGR_CENTROID FUNCTION VALID
    SDO_AGGR_CONVEXHULL FUNCTION VALID
    SDO_AGGR_LRS_CONCAT FUNCTION VALID
    SDO_AGGR_LRS_CONCAT_3D FUNCTION VALID
    SDO_AGGR_MBR FUNCTION VALID
    SDO_AGGR_UNION FUNCTION VALID
    SDO_ANGLE_UNITS TABLE VALID
    SDO_AREA_UNITS TABLE VALID
    SDO_CATALOG PACKAGE VALID
    SDO_CATALOG PACKAGE BODY VALID
    SDO_CONSTRUCT_DIM_ARRAY FUNCTION VALID
    SDO_CS PACKAGE VALID
    SDO_CS PACKAGE BODY VALID
    SDO_DATUMS TABLE VALID
    SDO_DIM_ARRAY TYPE VALID
    SDO_DIM_ELEMENT TYPE VALID
    SDO_DIST_UNITS TABLE VALID
    SDO_DROP_USER TRIGGER VALID
    SDO_ELEM_INFO_ARRAY TYPE VALID
    SDO_ELLIPSOIDS TABLE VALID
    SDO_FILTER OPERATOR VALID
    SDO_GEOM PACKAGE VALID
    SDO_GEOM PACKAGE BODY VALID
    SDO_GEOMETRY TYPE VALID
    SDO_GEOMETRY TYPE BODY VALID
    SDO_GEOM_IDX INDEX VALID
    SDO_GEOM_METADATA_TABLE TABLE VALID
    SDO_GEOM_TRIG_DEL1 TRIGGER VALID
    SDO_GEOM_TRIG_INS1 TRIGGER VALID
    SDO_GEOM_TRIG_UPD1 TRIGGER VALID
    SDO_IDX PACKAGE VALID
    SDO_IDX PACKAGE BODY VALID
    SDO_IDX_MDATA_IDX INDEX VALID
    SDO_IDX_TAB_SEQUENCE SEQUENCE VALID
    SDO_INDEX_METADATA_TABLE TABLE VALID
    SDO_INDEX_METHOD_9I TYPE VALID
    SDO_INDEX_METHOD_9I TYPE BODY VALID
    SDO_INT2_FILTER OPERATOR VALID
    SDO_INT2_RELATE OPERATOR VALID
    SDO_INT_FILTER OPERATOR VALID
    SDO_INT_RELATE OPERATOR VALID
    SDO_LRS PACKAGE VALID
    SDO_LRS PACKAGE BODY VALID
    SDO_LRS_METADATA_TABLE TABLE VALID
    SDO_LRS_META_IDX INDEX VALID
    SDO_LRS_TRIG_DEL TRIGGER VALID
    SDO_LRS_TRIG_INS TRIGGER VALID
    SDO_LRS_TRIG_UPD TRIGGER VALID
    SDO_MAPS_TABLE TABLE VALID
    SDO_MBR TYPE VALID
    SDO_META PACKAGE VALID
    SDO_META PACKAGE BODY VALID
    SDO_MIGRATE PACKAGE VALID
    SDO_MIGRATE PACKAGE BODY INVALID
    SDO_NN OPERATOR VALID
    SDO_NN_DISTANCE OPERATOR VALID
    SDO_NUMTAB TYPE VALID
    SDO_ORDINATE_ARRAY TYPE VALID
    SDO_POINT_TYPE TYPE VALID
    SDO_PRIDX PACKAGE VALID
    SDO_PRIDX PACKAGE BODY VALID
    SDO_PROJECTIONS TABLE VALID
    SDO_RELATE OPERATOR VALID
    SDO_RELATEMASK_TABLE VIEW VALID
    SDO_RELATE_MASK PACKAGE VALID
    SDO_RELATE_MASK PACKAGE BODY VALID
    SDO_RID_ARRAY TYPE VALID
    SDO_RTREE_ADMIN PACKAGE VALID
    SDO_RTREE_ADMIN PACKAGE BODY VALID
    SDO_RTREE_FILTER OPERATOR VALID
    SDO_RTREE_RELATE OPERATOR VALID
    SDO_STAT TYPE VALID
    SDO_STATTAB TYPE VALID
    SDO_STYLES_TABLE TABLE VALID
    SDO_THEMES_IDX INDEX VALID
    SDO_THEMES_TABLE TABLE VALID
    SDO_TUNE PACKAGE VALID
    SDO_TUNE PACKAGE BODY VALID
    SDO_UTIL PACKAGE VALID
    SDO_UTIL PACKAGE BODY VALID
    SDO_VERSION FUNCTION VALID
    SDO_VPOINT_TYPE TYPE VALID
    SDO_WITHIN_DISTANCE OPERATOR VALID
    SPATIAL_INDEX INDEXTYPE VALID
    SYS_C001565 INDEX VALID
    SYS_C001571 INDEX VALID
    SYS_C001706 INDEX VALID
    SYS_LOB0000027008C00040$$ LOB VALID
    SYS_LOB0000027008C00041$$ LOB VALID
    SYS_LOB0000027053C00012$$ LOB VALID
    SYS_LOB0000027053C00013$$ LOB VALID
    SYS_LOB0000027209C00004$$ LOB VALID
    SYS_LOB0000027216C00005$$ LOB VALID
    SYS_LOB0000027216C00006$$ LOB VALID
    SYS_LOB0000027216C00013$$ LOB VALID
    SYS_LOB0000027216C00014$$ LOB VALID
    SYS_LOB0000027229C00006$$ LOB VALID
    SYS_LOB0000028651C00012$$ LOB VALID
    SYS_LOB0000028651C00013$$ LOB VALID
    TRANSFORM_MAP PACKAGE VALID
    TRANSFORM_MAP PACKAGE BODY VALID
    UNIQUE_ANGLE_UNITS INDEX VALID
    UNIQUE_AREA_UNITS INDEX VALID
    UNIQUE_DIST_UNITS INDEX VALID
    UNIQUE_LAYERS INDEX VALID
    UNIQUE_MAPS INDEX VALID
    UNIQUE_STYLES INDEX VALID
    UNIQUE_TABLES INDEX VALID
    UNIQUE_THEMES INDEX VALID
    USER_CS_SRS TABLE VALID
    USER_GEOMETRY_COLUMNS VIEW VALID
    USER_SDO_GEOM_METADATA VIEW VALID
    USER_SDO_INDEX_INFO VIEW VALID
    USER_SDO_INDEX_METADATA VIEW VALID
    USER_SDO_LRS_METADATA VIEW VALID
    USER_SDO_MAPS VIEW VALID
    USER_SDO_STYLES VIEW VALID
    USER_SDO_THEMES VIEW VALID
    USER_TRANSFORM_MAP TABLE VALID
    V81_INDEX_OBJECT TYPE VALID
    V81_INDEX_OBJ_ARRAY TYPE VALID
    V81_NT_IND_TYPE TYPE VALID
    VERTEX_SET_TYPE TYPE VALID
    VERTEX_TYPE TYPE VALID
    250 rows selected.
    SQL>
    SQL> spool off

    Sir! No Sir! :)
    I will try to explain.
    If I have a table with a Russian name, then index creation fails, no matter what the index is called :(
    But if I prepare a copy of my table with an English name (including the metadata), index creation is successful.
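    A minimal sketch of the workaround described above, with hypothetical placeholder names (GEO_COPY_EN, RUSSIAN_NAMED_TABLE, GEOMETRY) standing in for the real objects: copy the data into a table with an ASCII name, register its spatial metadata, and build the index on the copy.
    -- Hedged sketch; all names are hypothetical placeholders for the real objects.
    CREATE TABLE geo_copy_en AS
      SELECT * FROM russian_named_table;
    -- Copy the spatial metadata (DIMINFO and SRID) of the original table.
    INSERT INTO USER_SDO_GEOM_METADATA (TABLE_NAME, COLUMN_NAME, DIMINFO, SRID)
      SELECT 'GEO_COPY_EN', COLUMN_NAME, DIMINFO, SRID
        FROM USER_SDO_GEOM_METADATA
       WHERE TABLE_NAME = 'RUSSIAN_NAMED_TABLE';
    -- Build the spatial index on the English-named copy.
    CREATE INDEX geo_copy_en_g_idx ON geo_copy_en (geometry)
      INDEXTYPE IS MDSYS.SPATIAL_INDEX;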

  • Creation of  rules index failing with ORA-01652 exception

    I am trying to create a rules index in the following way,
    BEGIN
         SEM_APIS.CREATE_RULES_INDEX(
         'APPS_RDF_IDX',
         SEM_Models('SEMANTIC_SEARCH_MODEL'),
         SEM_Rulebases('OWLPRIME','SEMANTIC_SEARCH_RULEBASE'));
    END;
    with semantic_search_rulebase having about 5 rules and with 28839 triples in the model.
    When I try to run the index creation, it fails after a long time with the exception
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    even though TEMP has 5 GB allocated.
    Please clarify the following questions for me:
    1. How much TEMP space should be allocated if the triples are going to be in the millions and the rules number about 10 to 100, and why is indexing taking so much TEMP space with such a small number of triples?
    2. How much time would creating the rules index normally take with triple counts ranging from thousands to millions?
    3. How can the rules index creation be made to run faster?
    Thanks,
    Phani

    First of all, please start using the create_entailment API instead of the create_rules_index API.
    Regarding 1), 5 GB of temp space is not a whole lot.
    It is hard to say exactly how much you need because you have user-defined rules.
    Regarding 2) and 3), please check out the following inference best practice paper.
    http://www.oracle.com/technology/tech/semantic_technologies/pdf/semantic_infer_bestprac_wp.pdf
    Also, if you like, please post your rules and I may be able to help you model
    some of your rules using native OWL constructs.
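    A minimal sketch of the suggested switch to the create_entailment API, reusing the model and rulebase names from the question above:
    -- Hedged sketch: same model and rulebases as in the original call,
    -- using SEM_APIS.CREATE_ENTAILMENT as recommended in the reply.
    BEGIN
      SEM_APIS.CREATE_ENTAILMENT(
        'APPS_RDF_IDX',
        SEM_Models('SEMANTIC_SEARCH_MODEL'),
        SEM_Rulebases('OWLPRIME','SEMANTIC_SEARCH_RULEBASE'));
    END;
    /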

  • Reducing size of datatype involved in index fails whereas increasing size works

    This question is related to my question on SO: "Database events that will cause deletion of index".
    For example, if I create a table like
    CREATE TABLE dbo.Customers (ID INT IDENTITY(1,1) PRIMARY KEY CLUSTERED, CustomerName NVARCHAR(200));
    CREATE INDEX IX_CustomerName ON dbo.Customers(CustomerName);
    and then try to change the column datatype from 200 to 230, as shown below:
    alter table Customers
    alter column CustomerName nvarchar(230) null
    The above ALTER command runs successfully and the index IX_CustomerName is not dropped.
    But if I try to reduce the size of the column from 200 to 150, it shows an error.
    alter table Customers
    alter column CustomerName nvarchar(150) null
    I get an error like:
    Msg 5074, Level 16, State 1, Line 1
    The index 'IX_CustomerName' is dependent on column 'CustomerName'.
    Msg 4922, Level 16, State 9, Line 1
    ALTER TABLE ALTER COLUMN CustomerName failed because one or more objects access this column.
    So reducing the size of a column (where data might be lost) does not work if an index is present on the column, whereas if no index is present the above
    query works successfully. So why is reducing the column size not allowed when an index is present, whereas increasing the size works without any problem?

    Hello,
    Indeed, it is a little bit strange, because BOL documents a different behaviour; see
    ALTER TABLE (Transact-SQL) => Remarks =>
    "Changing the Size of a Column
    You can change the length, precision, or scale of a column by specifying a new size for the column data type in the ALTER COLUMN clause. If data exists in the column, the new size cannot be smaller than the maximum size of the data. Also, the column cannot
    be defined in an index, unless the column is a varchar, nvarchar, or varbinary data type and the index is not the result of a PRIMARY KEY constraint. See example P."
    Olaf Helper
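    A minimal sketch of the usual workaround when shrinking an indexed column, using the table and index from the question (the MAX(LEN(...)) check is just a sanity step before losing data): drop the dependent index, alter the column, then recreate the index.
    -- Hedged sketch; objects are taken from the question above.
    -- Check that no existing value is longer than the new size first.
    SELECT MAX(LEN(CustomerName)) FROM dbo.Customers;
    DROP INDEX IX_CustomerName ON dbo.Customers;
    ALTER TABLE dbo.Customers
        ALTER COLUMN CustomerName NVARCHAR(150) NULL;
    CREATE INDEX IX_CustomerName ON dbo.Customers(CustomerName);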

  • Indexing Failed - User-Defined message search

    Hi
    I am trying to set up a user-defined message search in our PI 7.3 system. After creating a filter and defining a search criterion, SAP says to "create an index of the messages that match the active filters and search criteria" (http://help.sap.com/saphelp_nw73/helpdata/en/48/b2e0186b156ff4e10000000a42189b/content.htm).
    Every time I try to run the indexing it fails. I don't know why, as there is no help button available and I haven't found anyone experiencing the same issue. Has anyone else experienced the same issue?
    Also, using XPath, do I have to consider any particular namespace prefix etc. for PI messages? I have tested my XPath on the payload XML in XMLSpy, but the search functionality seems to expect a different format for the XPath syntax...
    Thank you!
    regards Ole

    Hello,
    I have set this up for an interface now. I am not sure how important the index job is because
    the indexing job worked in the dev system, never stopped in QA and failed in prod. Anyway the message search works in all three systems.
    The following blog made me understand the namespace prefix:
    http://scn.sap.com/people/abinash.nanda/blog/2011/08/17/pi-73--adapter-user-defined-message-search
    Regards,
    Per Rune

  • Spatial index fails for arcs in geodetic data

    Where is the documentation describing the differences between Oracle 8.1.7.2 and Oracle 9.2.0.3 for Spatial? We have 8.1.7.2 in production and are testing 9.2.0.3 in development. Data that loads and builds spatial indexes just fine in 8i fails miserably with the 'arcs in geodetic data' error in 9i. Catpatch.sql has been run twice for the 9i database. What specifically has changed in 9i to prevent the spatial R-tree indexing of geodetic data? I have run sdo_geom.sdo_arc_densify and the index still fails. No problems were found with the geometry using sdo_geom.validate_geometry_with_context, and the indexes still fail. I changed the tolerances on the test data and still no indexes.
    Any information would be appreciated. Thanks!

    Hi,
    Arcs, compound line segments containing arcs, circles, and optimized rectangles are not supported for geodetic data in 9i. Also, the distance between two coordinates specified in spatial cannot be equal to or greater than 1/2 the great circle distance around the Earth, and single polygons must be smaller than 1/2 the Earth's surface area. All of the above is associated with geodetic data.
    Also, tolerance for geodetic data is specified in meters, where previously the tolerance was specified in degrees. Also, the bounds of the coordinate system have to be -180 to 180 in longitude, and -90 to 90 in latitude (tolerance and bounds are set in user_sdo_geom_metadata).
    If you want to continue using your data as you had been in 8i, you should probably remove the spatial reference system id in the data and in user_sdo_geom_metadata.
    However, the whole earth geometry model will get you a lot of functionality, so you should consider modifying your data to adhere to the rules of geodetic data.
    Hope this helps,
    Dan
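    A minimal sketch of the metadata settings the reply refers to, with hypothetical placeholders MY_TABLE / GEOM for the real table and column: for geodetic data (e.g. SRID 8307) the dimension bounds must be -180..180 longitude and -90..90 latitude, and the tolerance is given in meters, all set in user_sdo_geom_metadata before rebuilding the index.
    -- Hedged sketch; MY_TABLE and GEOM are hypothetical placeholders.
    -- 0.05 means a 5 cm tolerance, since geodetic tolerance is in meters.
    DELETE FROM USER_SDO_GEOM_METADATA WHERE TABLE_NAME = 'MY_TABLE';
    INSERT INTO USER_SDO_GEOM_METADATA (TABLE_NAME, COLUMN_NAME, DIMINFO, SRID)
    VALUES (
      'MY_TABLE', 'GEOM',
      MDSYS.SDO_DIM_ARRAY(
        MDSYS.SDO_DIM_ELEMENT('Longitude', -180, 180, 0.05),
        MDSYS.SDO_DIM_ELEMENT('Latitude',  -90,  90, 0.05)),
      8307);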

  • Insert using Spatial Index fails when called from PHP

    I have a stored procedure that inserts into a table with a spatial index on one of its fields.
    When I run the procedure from SQL Server Management Studio it runs just fine.
    When I run it from PHP, it returns the error: "INSERT failed because the following SET options have incorrect settings: 'CONCAT_NULL_YIELDS_NULL, ANSI_WARNINGS, ANSI_PADDING'. Verify that SET options are correct for use with indexed views and/or indexes
    on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations"
    From within Management Studio, I have tried many different combinations of settings, until I wound up with the list I have in the procedure below.
    Within Management Studio the error changes based on the options I have set, but PHP always replies with: 'CONCAT_NULL_YIELDS_NULL, ANSI_WARNINGS, ANSI_PADDING'.
    I have configured PHP to use the same SQL user as Management Studio, and I have tried both mssql_query and mssql_execute with bound parameters.
    The spatial index is on the field post.location. If I remove the spatial index it works, but I need the spatial index.
    CREATE PROCEDURE insert_post ( @subject AS VARCHAR(250), @body AS VARCHAR(2000), @latitude AS FLOAT, @longitude AS FLOAT ) AS BEGIN
    SET NOCOUNT ON
    SET ANSI_WARNINGS ON
    SET ANSI_PADDING ON
    SET CONCAT_NULL_YIELDS_NULL ON
    SET NUMERIC_ROUNDABORT OFF
    DECLARE @location AS geography = geography::Point(@latitude, @longitude, 4326)
    INSERT INTO post (
        subject,
        body,
        location
    ) VALUES (
        @subject,
        @body,
        @location
    )
    END

    Hi Charles
    Your issue looks like it has two basic sources: (1) the connection string properties, (2) the table structure and the data which you try to insert.
    (1) The connection string properties can be specified in various ways.
    You can set most of them during the connection, depending on the provider you are using. We did not get any information from you about the PHP code, so we have no information regarding this part.
    The connection properties can also be changed after the connection was made, which is basically what you have tried to do. We don't know which properties
    are different (since you did not post the PHP code), but you can check those properties dynamically, insert them into a table and then compare. To check all the properties you can use this post:
    http://ariely.info/Blog/tabid/83/EntryId/153/SQL-Server-Get-Connection-Properties.aspx
    (2) We need to understand your
    table structure and compare it to your query. Please post your DDL+DML (queries to create the table and to insert some sample data), and the full SP code
    as it is now.
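    A minimal sketch for comparing the relevant session options between the Management Studio session and the PHP session (SESSIONPROPERTY is standard T-SQL; whether the PHP driver lets you change its defaults is an assumption to verify with your provider):
    -- Hedged sketch: run this from both connections and compare the results.
    SELECT SESSIONPROPERTY('ANSI_NULLS')              AS ansi_nulls,
           SESSIONPROPERTY('ANSI_PADDING')            AS ansi_padding,
           SESSIONPROPERTY('ANSI_WARNINGS')           AS ansi_warnings,
           SESSIONPROPERTY('ARITHABORT')              AS arithabort,
           SESSIONPROPERTY('CONCAT_NULL_YIELDS_NULL') AS concat_null_yields_null,
           SESSIONPROPERTY('NUMERIC_ROUNDABORT')      AS numeric_roundabort,
           SESSIONPROPERTY('QUOTED_IDENTIFIER')       AS quoted_identifier;
    -- Index operations on computed/spatial columns expect the first five ON,
    -- QUOTED_IDENTIFIER ON and NUMERIC_ROUNDABORT OFF; setting them explicitly
    -- at the start of the PHP batch is one way to rule out driver defaults.
    SET ANSI_NULLS ON;
    SET ANSI_PADDING ON;
    SET ANSI_WARNINGS ON;
    SET ARITHABORT ON;
    SET CONCAT_NULL_YIELDS_NULL ON;
    SET QUOTED_IDENTIFIER ON;
    SET NUMERIC_ROUNDABORT OFF;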

  • Create Index fail in process chain

    Hi Friends,
    I have a process chain in the production system which loads inventory data every day at 6:00 AM.
    Two days back the R/3 production system failed and the data was not loaded into BI. The chain stopped at the Delete Index step. I started it again by selecting Repeat; that worked, and the InfoPackage also executed successfully.
    After that it failed at the Create Index step with the message "Job cancelled after system exception ERROR_MESSAGE". Since then it has been giving the same problem every day.
    Could you help me solve this and execute the chain successfully?
    Thanks & Regards

    Hi,
    Try to create the index for the cube manually: go to the InfoCube, right-click -> Manage -> Performance -> perform a check of the indexes; if the status is red, repair the indexes, and if that does not help, create the indexes again. After that, run it through the process chain.
    Hope it will resolve your problem.
    Sangita

  • Create index fails (but worked previously)

    Oracle 8.1.6 / Solaris 7
    I installed interMedia a few weeks ago (after many problems with configuring
    the listener and tnsnames) and created 3 text indexes on 2 tables (one column
    being a CLOB). The searches worked great (using "contains"). Next I tried to
    create a preference using ctx_ddl.create_preference and ctx_ddl.set_attribute
    to allow for streaming of docs on the server. When I tried to create the index
    on the docs, it failed, giving me the
    ORA-29885 error.
    Today, I needed to truncate the tables I created with the 3 indexes; in order
    to do so, I had to drop the indexes (FORCE) and when trying to recreate them, it
    failed, giving me the error below. IT WAS WORKING 2 WEEKS AGO! WHAT
    HAPPENED??? The Metalink site does not have good info and is TOO slow.

    Here's the error message:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-20000: interMedia Text error:
    DRG-50704: Net8 listener is not running or cannot start external procedures
    ORA-28575: unable to open RPC connection to external procedure agent
    ORA-06512: at "CTXSYS.DRUE", line 126
    ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 54
    ORA-06512: at line 1

  • Mailbox Database Content Index Failed and Suspended

    Hello,
    I am having an issue with a mailbox database. The content index shows FailedAndSuspended. There are no bad copy counts and it shows as Mounted. I have no DAG, so this is a single database, and I followed the article
    here:
    http://technet.microsoft.com/en-us/library/ee633475(v=exchg.150).aspx
    After it crawled for a little while it went to Failed, then Unknown, then Crawling, and has finally ended up at FailedAndSuspended again. I am also getting errors as follows:
    MSExchangeFastSearch Event ID 1004
    MSExchangeIS Event ID 1012
    MSExchange  ADAccess Event ID 4020
    MSExchangeRepl Event ID 4087 (though this is weird: as it is a stand-alone database, why would it try to replicate it?)
    MSExchange Common Event ID 4999
    The following Warnings are hitting at the same time:
    MSExchangeFastSearch Event ID 1006, 1009, 1010
    MSExchange Mid-Tier Storage Event ID 6002, 10010
    Anyone have any thoughts on a fix for this?
    Michael R. Mastro II

    OK, I did a Set-MailboxDatabase "Mailbox Database #" -IndexEnabled $false, then stopped both the MS Exchange Search and MS Exchange Search Host Controller services. I started them again and it showed Disabled when I did a Get-MailboxDatabaseCopyStatus. I then ran
    Set-MailboxDatabase "Mailbox Database #" -IndexEnabled $true. After about 10 minutes I got a FailedAndSuspended status. So here are the logs from Event Viewer.
    Log Name:      ApplicationSource:        MSExchange Mid-Tier StorageDate:          7/20/2014 7:53:49 PMEvent ID:      6002Task Category: ResourceHealthLevel:         WarningKeywords:      ClassicUser:          N/AComputer:      MRM2EX1.MRM2Inc.comDescription:Ping of mdb '7d98ce58-997c-41c1-abde-06d7eaf3dbdd' timed out after '00:00:00' minutes.  Last successful ping was at '7/20/2014 11:53:49 PM' UTC.Event Xml:<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">  <System>    <Provider Name="MSExchange Mid-Tier Storage" />    <EventID Qualifiers="32768">6002</EventID>    <Level>3</Level>    <Task>6</Task>    <Keywords>0x80000000000000</Keywords>    <TimeCreated SystemTime="2014-07-20T23:53:49.000000000Z" />    <EventRecordID>703314</EventRecordID>    <Channel>Application</Channel>    <Computer>MRM2EX1.MRM2Inc.com</Computer>    <Security />  </System>  <EventData>    <Data>7d98ce58-997c-41c1-abde-06d7eaf3dbdd</Data>    <Data>00:00:00</Data>    <Data>7/20/2014 11:53:49 PM</Data>  </EventData></Event>
    Log Name:      ApplicationSource:        MSExchangeFastSearchDate:          7/20/2014 7:37:17 PMEvent ID:      1006Task Category: GeneralLevel:         WarningKeywords:      ClassicUser:          N/AComputer:      MRM2EX1.MRM2Inc.comDescription:The FastFeeder component received a connection exception from FAST. Error details: Microsoft.Exchange.Search.Fast.FastConnectionException: Connection to the Content Submission Service has failed. ---> Microsoft.Ceres.External.ContentApi.ConnectionException: Recovery failed after 0 retries   at Microsoft.Ceres.External.ContentApi.DocumentFeeder.DocumentFeeder.CheckRecoveryFailed()   at Microsoft.Ceres.External.ContentApi.DocumentFeeder.DocumentFeeder.SubmitDocument(Document document, TimeSpan timeout)   at Microsoft.Ceres.External.ContentApi.DocumentFeeder.DocumentFeeder.SubmitDocument(Document document)   at Microsoft.Exchange.Search.Fast.FastFeeder.SubmitDocumentInternal(Object state)   --- End of inner exception stack trace ---Event Xml:<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">  <System>    <Provider Name="MSExchangeFastSearch" />    <EventID Qualifiers="32772">1006</EventID>    <Level>3</Level>    <Task>1</Task>    <Keywords>0x80000000000000</Keywords>    <TimeCreated SystemTime="2014-07-20T23:37:17.000000000Z" />    <EventRecordID>703192</EventRecordID>    <Channel>Application</Channel>    <Computer>MRM2EX1.MRM2Inc.com</Computer>    <Security />  </System>  <EventData>    <Data>Microsoft.Exchange.Search.Fast.FastConnectionException: Connection to the Content Submission Service has failed. ---&gt; Microsoft.Ceres.External.ContentApi.ConnectionException: Recovery failed after 0 retries   at Microsoft.Ceres.External.ContentApi.DocumentFeeder.DocumentFeeder.CheckRecoveryFailed()   at Microsoft.Ceres.External.ContentApi.DocumentFeeder.DocumentFeeder.SubmitDocument(Document document, TimeSpan timeout)   at Microsoft.Ceres.External.ContentApi.DocumentFeeder.DocumentFeeder.SubmitDocument(Document document)   at Microsoft.Exchange.Search.Fast.FastFeeder.SubmitDocumentInternal(Object state)   --- End of inner exception stack trace ---</Data>  </EventData></Event>
    Log Name:      ApplicationSource:        MSExchangeFastSearchDate:          7/20/2014 7:43:22 PMEvent ID:      1010Task Category: GeneralLevel:         WarningKeywords:      ClassicUser:          N/AComputer:      MRM2EX1.MRM2Inc.comDescription:An operation attempted against a FAST endpoint exprienced an exception. This operation may be retried. Error details: Microsoft.Exchange.Search.Fast.PerformingFastOperationException: An Exception was received during a FAST operation. ---> System.ServiceModel.CommunicationException: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:01:00'. ---> System.IO.IOException: The write operation failed, see inner exception. ---> System.ServiceModel.CommunicationException: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:01:00'. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host   at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)   at System.ServiceModel.Channels.SocketConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)   --- End of inner exception stack trace ---   at System.ServiceModel.Channels.SocketConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)   at System.ServiceModel.Channels.BufferedConnection.WriteNow(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout, BufferManager bufferManager)   at System.ServiceModel.Channels.BufferedConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)   at System.ServiceModel.Channels.ConnectionStream.Write(Byte[] buffer, Int32 offset, Int32 count)   at System.Net.Security.NegotiateStream.StartWriting(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)   at System.Net.Security.NegotiateStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)   --- End of inner exception stack trace ---   at System.Net.Security.NegotiateStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)   at System.Net.Security.NegotiateStream.Write(Byte[] buffer, Int32 offset, Int32 count)   at System.ServiceModel.Channels.StreamConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)   --- End of inner exception stack trace ---Server stack trace:    at System.ServiceModel.Channels.StreamConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)   at System.ServiceModel.Channels.StreamConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout, BufferManager bufferManager)   at System.ServiceModel.Channels.FramingDuplexSessionChannel.OnSendCore(Message message, TimeSpan timeout)   at System.ServiceModel.Channels.TransportDuplexSessionChannel.OnSend(Message message, TimeSpan timeout)   at System.ServiceModel.Channels.OutputChannel.Send(Message message, TimeSpan timeout)   at System.ServiceModel.Dispatcher.DuplexChannelBinder.Request(Message message, TimeSpan timeout)   at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime 
operation, Object[] ins, Object[] outs, TimeSpan timeout)   at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)   at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)Exception rethrown at [0]:    at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)   at Microsoft.Ceres.ContentEngine.Admin.FlowService.IFlowServiceManagementAgent.GetFlows()   at Microsoft.Exchange.Search.Fast.IndexManager.<GetFlows>b__16()   at Microsoft.Exchange.Search.Fast.IndexManagementClient.PerformFastOperation[T](Func`1 function, String eventLogKey)   --- End of inner exception stack trace ---Event Xml:<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">  <System>    <Provider Name="MSExchangeFastSearch" />    <EventID Qualifiers="32772">1010</EventID>    <Level>3</Level>    <Task>1</Task>    <Keywords>0x80000000000000</Keywords>    <TimeCreated SystemTime="2014-07-20T23:43:22.000000000Z" />    <EventRecordID>703224</EventRecordID>    <Channel>Application</Channel>    <Computer>MRM2EX1.MRM2Inc.com</Computer>    <Security />  </System>  <EventData>    <Data>Microsoft.Exchange.Search.Fast.PerformingFastOperationException: An Exception was received during a FAST operation. ---&gt; System.ServiceModel.CommunicationException: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:01:00'. ---&gt; System.IO.IOException: The write operation failed, see inner exception. ---&gt; System.ServiceModel.CommunicationException: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:01:00'. 
---&gt; System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host   at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)   at System.ServiceModel.Channels.SocketConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)   --- End of inner exception stack trace ---   at System.ServiceModel.Channels.SocketConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)   at System.ServiceModel.Channels.BufferedConnection.WriteNow(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout, BufferManager bufferManager)   at System.ServiceModel.Channels.BufferedConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)   at System.ServiceModel.Channels.ConnectionStream.Write(Byte[] buffer, Int32 offset, Int32 count)   at System.Net.Security.NegotiateStream.StartWriting(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)   at System.Net.Security.NegotiateStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)   --- End of inner exception stack trace ---   at System.Net.Security.NegotiateStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)   at System.Net.Security.NegotiateStream.Write(Byte[] buffer, Int32 offset, Int32 count)   at System.ServiceModel.Channels.StreamConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)   --- End of inner exception stack trace ---Server stack trace:    at System.ServiceModel.Channels.StreamConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)   at System.ServiceModel.Channels.StreamConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout, BufferManager bufferManager)   at System.ServiceModel.Channels.FramingDuplexSessionChannel.OnSendCore(Message message, TimeSpan timeout)   at System.ServiceModel.Channels.TransportDuplexSessionChannel.OnSend(Message message, TimeSpan timeout)   at System.ServiceModel.Channels.OutputChannel.Send(Message message, TimeSpan timeout)   at System.ServiceModel.Dispatcher.DuplexChannelBinder.Request(Message message, TimeSpan timeout)   at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)   at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)   at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)Exception rethrown at [0]:    at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData&amp; msgData, Int32 type)   at Microsoft.Ceres.ContentEngine.Admin.FlowService.IFlowServiceManagementAgent.GetFlows()   at Microsoft.Exchange.Search.Fast.IndexManager.&lt;GetFlows&gt;b__16()   at Microsoft.Exchange.Search.Fast.IndexManagementClient.PerformFastOperation[T](Func`1 function, String eventLogKey)   --- End of inner exception stack trace ---</Data>  </EventData></Event>
    Log Name:      ApplicationSource:        MSExchange Mid-Tier StorageDate:          7/20/2014 7:58:39 PMEvent ID:      10010Task Category: ResourceHealthLevel:         WarningKeywords:      ClassicUser:          N/AComputer:      MRM2EX1.MRM2Inc.comDescription:Process: MSExchangeDelivery (2824), Db:a433e8a2-ca26-438e-a9a3-d71305c6c266,C:1,BT:00:00:00,Db:7d98ce58-997c-41c1-abde-06d7eaf3dbdd,C:1,BT:00:00:00,Event Xml:<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">  <System>    <Provider Name="MSExchange Mid-Tier Storage" />    <EventID Qualifiers="32768">10010</EventID>    <Level>3</Level>    <Task>6</Task>    <Keywords>0x80000000000000</Keywords>    <TimeCreated SystemTime="2014-07-20T23:58:39.000000000Z" />    <EventRecordID>703327</EventRecordID>    <Channel>Application</Channel>    <Computer>MRM2EX1.MRM2Inc.com</Computer>    <Security />  </System>  <EventData>    <Data>Process: MSExchangeDelivery (2824), Db:a433e8a2-ca26-438e-a9a3-d71305c6c266,C:1,BT:00:00:00,Db:7d98ce58-997c-41c1-abde-06d7eaf3dbdd,C:1,BT:00:00:00,</Data>  </EventData></Event>
    Michael R. Mastro II
