Unique constraint not working

Hi,
I originally had a unique constraint on a table on three columns, say C1, C2, C3. Later I added a fourth column, C4. I can see that the fourth column is part of the same index in all_cons_columns.
But when I try to insert a row with the same values for C1, C2, C3 and a different value for C4, I still get the same unique key error, even though the value I am entering for C4 is unique in that table. What could cause this?
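One possible direction to check (offered only as a hedged sketch, not a confirmed diagnosis for this thread, and with placeholder table/constraint names): if only the underlying index was rebuilt with the extra column, the constraint itself may still be defined on the original three columns, so it would have to be dropped and recreated to include C4:

-- Hedged sketch only; my_table / my_table_uk are placeholders, not the poster's objects.
ALTER TABLE my_table DROP CONSTRAINT my_table_uk;

ALTER TABLE my_table
  ADD CONSTRAINT my_table_uk UNIQUE (c1, c2, c3, c4);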

user13755008 wrote:
select * from all_cons_columns where constraint_name='HSRP_TRACK_UK';
OWNER  CONSTRAINT_NAME  TABLE_NAME  COLUMN_NAME  POSITION
NC3    HSRP_TRACK_UK    HSRP_TRACK  ODBID_EQP    1
NC3    HSRP_TRACK_UK    HSRP_TRACK  START_DATE   4
NC3    HSRP_TRACK_UK    HSRP_TRACK  OBJECT_NUM   2
NC3    HSRP_TRACK_UK    HSRP_TRACK
did some details get misplaced here?
select * from all_ind_columns where INDEX_NAME='HSRP_OBJ_INTER_ST_UIX';
INDEX_OWNER  INDEX_NAME             TABLE_OWNER  TABLE_NAME  COLUMN_NAME  COLUMN_POSITION  COLUMN_LENGTH  CHAR_LENGTH  DESC
NC3          HSRP_OBJ_INTER_ST_UIX  NC3          HSRP_TRACK  ODBID_EQP    1                22             0            ASC
NC3          HSRP_OBJ_INTER_ST_UIX  NC3          HSRP_TRACK  OBJECT_NUM   2                4              4            ASC
NC3          HSRP_OBJ_INTER_ST_UIX  NC3          HSRP_TRACK  INTERFACE    3                32             32           ASC
NC3          HSRP_OBJ_INTER_ST_UIX  NC3          HSRP_TRACK  START_DATE   4                7              0            ASC
select * from nc3.hsrp_track where odbid= 87820678;
ODBID              87820678
CREATETMSTMP       2013-01-18-11:29:29.285050
CREATEUSER         NC3USER
CREATETRAN         StoreRouting
LASTUPDTMSTMP      2013-01-18-11:29:29.285050
LASTUPDUSER        NC3USER
LASTUPDTRAN        StoreRouting
ODBID_EQP          87811116
ODBID_START_ORDER  87811106
ODBID_STOP_ORDER
ODBID_PREV
START_DATE         2013-01-18
STOP_DATE          9999-12-31
STATUS             Started
COMPONENT_TYPE     Equipment
OBJECT_NUM         1
INTERFACE          Serial0/0
PROCESS            line-protocol
SQL> insert into NC3.HSRP_TRACK Values(3,'2013-01-18-11:29:29.436033', 'NC3USER ', 'StoreRouting', '2013-01-18-11:29:29.436033', 'NC3USER ', 'StoreRouting ', 87820377,'','','', '2013-01-18','9999-12-31','Started', 'Equipment', '1', 'Serial0/0', '');
insert into NC3.HSRP_TRACK Values(3,'2013-01-18-11:29:29.436033', 'NC3USER ', 'StoreRouting', '2013-01-18-11:29:29.436033', 'NC3USER ', 'StoreRouting ', 87820377,'','','', '2013-01-18','9999-12-31','Started', 'Equipment', '1', 'Serial0/0', '')
ERROR at line 1:
ORA-00001: unique constraint (NC3.HSRP_TRACK_UK) violated
SQL> desc nc3.hsrp_track
Name Null? Type
ODBID NOT NULL NUMBER(11)
CREATETMSTMP NOT NULL TIMESTAMP(6)
CREATEUSER NOT NULL CHAR(8)
CREATETRAN NOT NULL CHAR(16)
LASTUPDTMSTMP NOT NULL TIMESTAMP(6)
LASTUPDUSER NOT NULL CHAR(8)
LASTUPDTRAN NOT NULL CHAR(16)
ODBID_EQP NOT NULL NUMBER(11)
ODBID_START_ORDER NUMBER(11)
ODBID_STOP_ORDER NUMBER(11)
ODBID_PREV NUMBER(11)
START_DATE NOT NULL DATE
STOP_DATE NOT NULL DATE
STATUS NOT NULL CHAR(16)
COMPONENT_TYPE NOT NULL VARCHAR2(32)
OBJECT_NUM NOT NULL VARCHAR2(4)
INTERFACE NOT NULL VARCHAR2(32)
PROCESS VARCHAR2(15)
SQL> select * from nc3.hsrp_track where odbid_eqp=87820377;
no rows selected
Please see: since the odbid_eqp value I am trying to insert is not present in the table, the unique constraint error should not be thrown in any case, right? I am confused.
Edited by: user13755008 on Jan 18, 2013 8:24 PM

Post the results from the SQL below:
SELECT COLUMN_NAME, POSITION, CONSTRAINT_NAME FROM ALL_CONS_COLUMNS
WHERE TABLE_NAME = 'HSRP_TRACK' AND OWNER = 'NC3';
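A slightly fuller diagnostic, offered only as a hedged sketch (it is not something posted in the thread), is to check which index actually enforces the constraint and which columns that index contains:

-- Hedged diagnostic sketch: show the index that enforces the constraint
-- and the columns of that index.
SELECT c.constraint_name, c.index_name, ic.column_name, ic.column_position
FROM   all_constraints c
JOIN   all_ind_columns ic
       ON  ic.index_owner = c.index_owner
       AND ic.index_name  = c.index_name
WHERE  c.owner           = 'NC3'
AND    c.constraint_name = 'HSRP_TRACK_UK'
ORDER  BY ic.column_position;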

Similar Messages

  • Unique Constraint Not Allowed in OA Table design?

    Is it really true, or is my OApps Admin/DBA just plain stupid for not allowing a unique constraint at table level?
    I have to implement an EO attribute level validation that does a select on an unindexed field through the List Validator feature.
    The problem is it is taking 40-60 seconds to commit something to the EO.

    That's true; if we are talking only about OAF, you don't need the constraint, as you can handle that in code!
    --Mukul

  • url-pattern for extension mapping in security-constraint not working

    I'm trying to use extension mapping in a <security-constraint> configuration.
    According to:
    http://download.oracle.com/otn-pub/jcp/servlet-3_1-fr-eval-spec/servlet-3_1-final.pdf?AuthParam=1429824454_de04222eab1b8…
    Section 12.2:
    A string beginning with a ‘*.’ prefix is used as an extension mapping.
    But WebLogic does not take my configuration into consideration. If I use path mapping or exact mapping, it works.
    My configuration is:
    <security-constraint>
        <web-resource-collection>
            <web-resource-name>Unsecured</web-resource-name>
            <url-pattern>*.wsdl</url-pattern>
            <url-pattern>*.xsd</url-pattern>
        </web-resource-collection>
        <user-data-constraint>
            <transport-guarantee>NONE</transport-guarantee>
        </user-data-constraint>
    </security-constraint>
    <security-constraint>
        <web-resource-collection>
            <web-resource-name>HttpAuth</web-resource-name>
            <url-pattern>/ws/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
            <role-name>ws-user</role-name>
        </auth-constraint>
        <user-data-constraint>
            <transport-guarantee>INTEGRAL</transport-guarantee>
        </user-data-constraint>
    </security-constraint>
    <login-config>
        <auth-method>BASIC</auth-method>
        <realm-name>Test1</realm-name>
    </login-config>
    <security-role>
        <role-name>ws-user</role-name>
    </security-role>
    WebLogic Server 12c (12.1.3)
    Has anybody used extension mapping with security-constraint? Is that a WebLogic issue?

    Hi nikita,
    I have dealt with the same problem before. As you say, most JSF actions get posted back to the original page, and the faces servlet internally redirects according to the navigation rules and actions. This can mean the URL seen by the browser does not always correspond to the actual JSP (wrapped by JSF) that produced the content.
    Generally adding the "<redirect/>" tag to all your navigation rules (in faces-config.xml) remedies this, so the actions are still posted back to the original page, but then the JSF servlet sends an http-redirect to the browser before invoking the new page. This way, the URL is always in sync, and the security constraints defined in your web descriptor always get invoked properly.
    regards,
    tony

  • Constraint not working in OBIEE 10G Dashboard prompt

    Hi All,
    We are facing an issue with the constraint option in a dashboard prompt.
    We have two database tables, Employee and Calender, joined through the time_id key.
    We have made a report from columns of both.
    We have made a dashboard prompt consisting of 3 columns, i.e.
    Month-Year,
    Week Ending,
    Shift Date
    The first two are from the same Employee table and Shift Date is from Calender.
    Both dimensions are also in Answers as-is.
    All 3 prompts are multiselect, and the Constraint box is checked in all of them.
    The issue is that when I select a Month-Year in the prompt, the corresponding 4 or 5 week endings come up in the multiselect box.
    But when I select a week ending from there, I don't get the corresponding seven dates for that particular week.
    Instead, all shift dates available in the Calender table come up in the multiselect.
    Does the constraint not work on columns from different dimensions or tables?
    Anybody, please reply with a possible solution.
    Regards,
    Apurv Mishra

    Thanks @786372 for your response.
    But if we make a view for those columns, then how can we apply ''Is Prompted'' for the prompted column in our report filters?
    As per my understanding, to make ''Is Prompted'' work, the Fx (column formula) should be the same in the report and the prompt.
    So how can we create a different view only for the prompt?
    We could use a presentation variable to make the above work, but then I won't be able to create a multi-select prompt.
    Regards
    Apurv

  • Foreign key also refer to unique constraint??

    A foreign key can also refer to a unique constraint.
    (GREAT...)
    1. Then does the table that contains the unique constraint act as a master table?
    2. Can a unique constraint replace a primary key?
    3. Does a unique constraint + not null give all the functionality of a primary key constraint?
    4. If primary key = unique + not null, then what is the use of a primary key?
    thanks
    kuljeet pal singh

    When you are establishing a foreign key relationship between two tables, a child record must point to a unique record in the parent table. Typically, the child record points to the primary key of the parent, although any unique field or fields in the parent will do.
    So, a table with a unique constraint can act as a parent table in a foreign key relationship.
    A unique constraint may be replaced with a primary key, but not necessarily.
    A unique constraint plus a not null constraint is functionally identical to a primary key.
    The principal benefit of a primary key compared to unique plus not null is that it provides additional information to someone looking at the database. The primary key is the unchanging identifier for a particular record. A unique constraint plus a not null constraint only implies uniqueness. It is somewhat common for unique values to change over time, as long as they remain unique, but a primary key should never change.
    TTFN
    John
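    To make that concrete, here is a minimal hedged sketch (table and column names are invented for illustration, not taken from this thread) of a foreign key that references a UNIQUE + NOT NULL column rather than a primary key:

    -- Illustrative only; names are placeholders.
    CREATE TABLE parent_t (
      parent_code VARCHAR2(10) NOT NULL,
      CONSTRAINT parent_code_uk UNIQUE (parent_code)
    );

    CREATE TABLE child_t (
      child_id    NUMBER PRIMARY KEY,
      parent_code VARCHAR2(10),
      CONSTRAINT child_parent_fk FOREIGN KEY (parent_code)
        REFERENCES parent_t (parent_code)
    );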

  • [svn] 1720: Bugs: LCDS-304 - Authentication not working in all cases when using security constraint with NIO endpoints.

    Revision: 1720
    Author: [email protected]
    Date: 2008-05-14 14:50:06 -0700 (Wed, 14 May 2008)
    Log Message:
    Bugs: LCDS-304 - Authentication not working in all cases when using security constraint with NIO endpoints.
    QA: Yes
    Doc: No
    Details:
    Update to the TomcatLoginCommand to work correctly with NIO endpoints.
    Ticket Links:
    http://bugs.adobe.com/jira/browse/LCDS-304
    Modified Paths:
    blazeds/branches/3.0.x/modules/opt/src/tomcat/flex/messaging/security/TomcatLoginCommand.java

  • TRUNCATE TABLE NOT WORKING AFTER DROPPING CONSTRAINTS

    Hi,
    I have a table with a foreign key constraint. I know you can't truncate tables when there are foreign key constraints. So I drop the constraints before running the TRUNCATE TABLE command. But SQL Server is still stating there are foreign key constraints
    even after they have just been dropped.
    When I use SQL Server Management Studio to generate a drop & create script on this table, or any other table with an FK constraint, the generated script fails stating that there are still foreign key constraints?
    I have the same problem for every table that has FK constraints, for those without FK, TRUNCATE table works without issues.
    The end goal is to reset the identity value of the primary key. Since DBCC does not work on Azure, TRUNCATE TABLE is the only way left, especially if you can't even drop and recreate tables with FK constraints.
    What am I missing here?
    Peter

    Hi,
    Thanks for posting here.
    TRUNCATE TABLE is similar to the DELETE statement with no WHERE clause; however, TRUNCATE TABLE is faster and uses fewer system and transaction log resources.
    TRUNCATE TABLE removes all rows from a table, but the table structure and its columns, constraints, indexes, and so on remain. To remove the table definition in addition to its data, use the DROP TABLE statement.
    If the table contains an identity column, the counter for that column is reset to the seed value defined for the column. If no seed was defined, the default value 1 is used. To retain the identity counter, use DELETE instead.
    Restrictions
    You cannot use TRUNCATE TABLE on tables that:
    •Are referenced by a FOREIGN KEY constraint. (You can truncate a table that has a foreign key that references itself.)
    •Participate in an indexed view.
    •Are published by using transactional replication or merge replication.
    For tables with one or more of these characteristics, use the DELETE statement instead.
    TRUNCATE TABLE cannot activate a trigger because the operation does not log individual row deletions. For more information, see CREATE TRIGGER (Transact-SQL).
    Truncating Large Tables
    Microsoft SQL Server has the ability to drop or truncate tables that have more than 128 extents without holding simultaneous locks on all the extents required for the drop.
    Permissions
    The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause.
    You cannot truncate a table which has an FK constraint on it.
    Typically my process for this is:
    Drop the constraints
    Trunc the table
    Recreate the constraints.
    Hope this helps you.
    Girish Prajwal
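    As a hedged sketch of that drop / truncate / recreate sequence (table and constraint names are placeholders, not from the original post):

    -- Placeholder names; illustrative only.
    ALTER TABLE child_table DROP CONSTRAINT fk_child_parent;

    TRUNCATE TABLE parent_table;

    ALTER TABLE child_table
      ADD CONSTRAINT fk_child_parent FOREIGN KEY (parent_id)
      REFERENCES parent_table (parent_id);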

  • Add unique constraint only if it not exists

    Hello Oracle community,
    I would like to create a unique constraint (at least over two columns) on several tables via a script. Is there any way to fire the ALTER TABLE statement only when a constraint over the same columns does not already exist? Or do I have to query the USER_CONSTRAINTS table first?
    Ikrischer

    Yes, something like this:
    set serveroutput on
    declare
      UNIQUE_CONS_EXISTS exception;
      pragma exception_init(UNIQUE_CONS_EXISTS, -2261);
    begin
      for r in (select table_name
                      ,rownum rn
                from all_tables where ....) loop
        begin
          execute immediate 'alter table ' || r.Table_Name || ' add constraint i_' || r.rn || ' unique (col1,col2)';
        exception
          when UNIQUE_CONS_EXISTS then
            dbms_output.put_line(sqlerrm);
        end;
      end loop;
      dbms_output.put_line('Done');
    end;
    /
    The only thing to bear in mind is that your constraint names will simply be numbered increments and will have no meaning.
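    If you would rather check the dictionary first (the poster's other option), a hedged sketch along these lines could work; MY_TABLE, COL1 and COL2 are placeholders, and the count check is only approximate because it does not verify that the constraint contains exactly those two columns:

    -- Does any unique constraint on MY_TABLE already cover COL1 and COL2?
    SELECT cc.constraint_name
    FROM   user_constraints  c
    JOIN   user_cons_columns cc ON cc.constraint_name = c.constraint_name
    WHERE  c.table_name      = 'MY_TABLE'
    AND    c.constraint_type = 'U'
    AND    cc.column_name    IN ('COL1', 'COL2')
    GROUP  BY cc.constraint_name
    HAVING COUNT(*) = 2;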

  • NOT NULL Unique Constraint in Data Modeler

    I've created Unique Constraints in the Relational Model and I'm trying to figure out how to make it a NOT NULL constraint.
    Let's say the table name is category with columns cat_id, cat_name, sort.
    In SQL I would write "ALTER TABLE category MODIFY (cat_name CONSTRAINT xxx_cat_name_nn NOT NULL);", but inside the modeler there is no data entry point in the [Unique Key Properties - xxx_cat_name_nn] dialog box, that I can find, that lets me tell it that it is a NOT NULL constraint. I'm sure there is a way, but I'm just falling over my own feet trying to find it.
    Any help would be greatly appreciated.
    Edited by: 991065 on Feb 28, 2013 1:40 PM

    Hi,
    You can make the column NOT NULL by unsetting the "Allow Nulls" property for the Column.
    If you want a named NOT NULL Constraint, you should also set the "Not Null Constraint Name" property (on the Default and Constraint tab of the Column Properties dialog).
    David

  • Unique constraint is not thrown when used MERGE INSERT (alone) via dblink.

    We found some interesting behaviour of a unique constraint on a MERGE query when we use MERGE WHEN NOT MATCHED INSERT (no update clause) via a db link.
    In one schema S1, on table A1(c1,c2,c3), there is a unique constraint on columns (c1,c2).
    Column c2 is nullable and has null for some records.
    Now I have a table A2 with the same definition as A1 in schema S2. In S2, I have a db link to S1, named S1 itself.
    I have data in S2.A2. Here also I have some records with c2 as null and c1 matching the data of S1.A1.
    Now from schema S2,
    I am using the following Merge Query,
    MERGE INTO S1.A1 target
    USING S2.A2 source
    ON (target.c1 = source.c1 and target.c2 = source.c2)
    WHEN NOT MATCHED THEN
      INSERT (c1, c2, c3) VALUES (source.c1, source.c2, source.c3)
    WHEN MATCHED THEN
      UPDATE SET c3 = source.c3;
    Now when I execute this merge in schema S2, if there is data in S1.A1 and S2.A2 with the same c1 and c2 as null, then, because Oracle does not treat two nulls as equal, the merge goes for an insert and I get a unique constraint violated error.
    But if I execute the MERGE with the INSERT clause alone, the record gets inserted and I do not get a unique constraint violated error.
    The Oracle version we are using is 10g (10.2...).
    Is this a bug in Oracle, or what could have caused this behaviour?

    Dear,
    ERROR at line 1:
    ORA-00001: unique constraint (SYS_C00137508) violated
    You need to think about a few things:
    (a) read consistency: what was the situation of table_1 when the matching clause was initially evaluated; there were 0 rows matching, which means the merge operation will be all inserts.
    (b) your matching clause has a problem: the join columns must be unique in both tables, otherwise the merge will be ambiguous. You don't have a unique key on the source table.
    (c) remember that the merge operation will never insert id = 1 and then update id = 1 within the same operation. This will never happen.
    Hope this helps
    Mohamed Houri
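    For the null-matching part of this, one workaround that is often used (offered here as a hedged sketch, not something suggested in the thread, and assuming c2 is a character column) is to make NULLs in c2 compare as equal in the ON clause via NVL with a sentinel value that cannot occur in real data:

    -- Hedged sketch; '~null~' is an assumed sentinel, not from the thread.
    MERGE INTO S1.A1 target
    USING S2.A2 source
    ON (    target.c1 = source.c1
        AND NVL(target.c2, '~null~') = NVL(source.c2, '~null~'))
    WHEN MATCHED THEN
      UPDATE SET c3 = source.c3
    WHEN NOT MATCHED THEN
      INSERT (c1, c2, c3) VALUES (source.c1, source.c2, source.c3);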

  • EXCLUDEUSER is not working

    db: 11.2.0.3
    ogg: Version 11.1.1.1.2_03
    For some reason which I can't figure out my TRANLOGOPTIONS EXCLUDEUSER <USER> is not working. Could anyone shed some light on this?
    Extract settings:
    EXTRACT ext1
    SETENV (ORACLE_SID = "TEST")
    USERID GGS_OWNER, PASSWORD pass_encrypted, ENCRYPTKEY DEFAULT
    TRANLOGOPTIONS ASMUSER sys@+ASM, ASMPASSWORD pass_encrypted, ENCRYPTKEY DEFAULT
    TRANLOGOPTIONS EXCLUDEUSER GGS_OWNER
    EXTTRAIL /u01/app/oracle/product/11.2.0/ogg/dirdat/lc
    TABLE schema1.test;
    TABLE schema2.*;
    What am I missing? I'm getting the replicat to fail every time due to a unique constraint violation on the table schema1.test (it is the only table where I have bi-directional replication configured). Both the source and the extract target are excluding the GGS user.
    Thanks in Advance
    N K

    stevencallan wrote:
    I don't understand what excluding your GoldenGate user has to do with a constraint violation on another schema's table. You are trying to insert something that already exists (or perhaps update something to an already existing value, which is constrained by a unique constraint). Your problem is with the data between the two tables (on source and target). This has zip to do with excluding your GoldenGate user. All that does is prevent the DML from being ping-ponged back and forth. The exclude will ignore what is done on the end where DML is taking place from a replicat's operation. The insert/update that is failing is legitimate DML that is failing. Had it been successful, the exclude option would prevent that insert/update from being sent back to where it occurred in the first place.
    Actually, Steven, that is what is happening. I have table test1 on both nodes. I make an insert on node 1, it gets captured and replicated to node 2, and from there it gets captured and sent back to node 1, where the REPLICAT fails with a unique constraint violation on the test1 table. Using the discard file I can see it is the very same row I just inserted, hence my EXCLUDEUSER is not working on node 2.
    What else could be the reason for this behaviour, if not EXCLUDEUSER not working?

  • Application Translation Not Working - Primary Key Error

    I had created an application translation to Spanish but it wasn't displaying my Spanish translation. I was going to try and redo it so I tried creating a new mapping and then seeding the translatable text. I got a primary key error "ORA-20001: Seed insert error: WWV_FLOW_TOPLEVEL_TABS.TAB_TEXT ORA-00001: unique constraint (FLOWS_030100.WWV_FLOW_TRANSLATABLE_TEXT_PK) violated".
    I went in and deleted what I had done through APEX for this app and tried to create a new application, but I still get the primary key error when I try to seed it. I gave it a new translated application ID, but that doesn't seem to help. Has anyone had this problem, or does anyone know the reason I'm having this issue?
    Thanks.

    Hi David,
    Thanks for reporting this. This was an interesting problem to solve.
    As it turns out, it was a logic error in Application Express (i.e., bug). When deleting a translation mapping, the associated strings in the translation repository would be deleted for that application, but if and only if you had actually published the application.
    I think this is why you could never get Spanish working properly - you had never actually published the application the first time. So I'll bet what you did is deleted the original mapping, then you recreated the mapping for the same language but with a different translated application ID. Since you had never published the application from the original translation mapping, and there were orphaned rows in the translation repository, you encountered a "collision" when you tried to seed the second time with the different translated application ID.
    Your action of deleting rows from WWV_FLOW_TRANSLATABLE_TEXT$ cleaned up these orphaned rows. As you stated, this isn't recommended to perform DML on the underlying APEX tables. A couple alternatives could have been:
    1) Before deleting the translated application mapping, actually publish the application and then delete the mapping.
    2) If you had deleted the mapping already, you could recreate the mapping for the same language and with the original translated application ID. Then, publish the application and then go back and delete the mapping.
    I realize all this sounds crazy. But it was only an issue because you had not actually published the application. Not your fault, though, as this is a bug in APEX.
    This bug will be fixed in Application Express 4.0. This way, you won't have to worry about if you published or didn't publish. The orphaned rows will be cleaned up when you delete a mapping.
    Thanks again for reporting this.
    Joel

  • Full-Text search is not working with PDF files - SQL Server 2012 64 bit

    Hi,
    We are in the process of storing PDF files in SQL Server 2012 with Full-Text search capability.
    I followed the steps below and it works fine with a Word document but not for PDF files. I tried with PDF iFilter 11 and 9, and both were unsuccessful.
    Server/DB Level Settings:
    1) Enable FileStream
    2) Install Full-Text, then restart
    3) Use [specific db]
       alter database [db name] add filegroup Files contains filestream;
       alter database [db name] add file (name = N'Files', filename = N'D:\SQL\DATA') to filegroup [Files];
    3) Database level settings:
       FileStream Directory name: [Set the name]
       FileStream non-transacted Access: [set Appropriate]
    3a) Add a datafile to the DB with the filestreamdata filetype.
    4) Share the D:\SQL\DATA directory and add specific accounts with read/write access
    5) Give bulkadmin access to those specific accounts at server level
    6) From the page (link) download and install the *.pdf IFilter for FTS. Link: http://www.adobe.com/support/downloads/detail.jsp?ftpID=5542
    7) To the PATH global system variable add the path to the catalog where you installed the plugin. Default for this version is: C:\Program Files\Adobe\Adobe PDF iFilter 9 for 64-bit platforms\bin
    8) From the page (link) download FilterPackx64.exe and install it. Link: http://www.microsoft.com/en-us/download/confirmation.aspx?id=20109
    9) Now from SSMS execute the following procedures:
       sp_fulltext_service 'load_os_resources', 1
       sp_fulltext_service 'verify_signature', 0
       EXEC sp_fulltext_service 'update_languages';    -- update language list
       EXEC sp_fulltext_service 'restart_all_fdhosts'; -- restart daemon
       reconfigure with override;
    10) Restart the server
    11) select document_type, path from sys.fulltext_document_types where document_type = '.pdf'
        select document_type, path from sys.fulltext_document_types where document_type = '.docx'
    12) Results are OK.
    Following is my Table /Index/ catalog script:
    CREATE TABLE dbo.DocumentFilesTest
    (
        DocumentId    INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        AddDate       datetime NOT NULL,
        Name          nvarchar(50) NOT NULL,
        Extension     nvarchar(10) NOT NULL,
        Description   nvarchar(1000) NULL,
        FileStream_Id UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWSEQUENTIALID(),
        FileSource    varbinary(MAX) FILESTREAM DEFAULT(0x)
    )
    go
    --Add default add date for document
    ALTER TABLE dbo.DocumentFilesTest
    ADD CONSTRAINT DF_DocumentFilesTest_AddDate DEFAULT sysdatetime() FOR AddDate

    EXEC sp_fulltext_database 'enable'
    GO
    IF NOT EXISTS (SELECT TOP 1 1 FROM sys.fulltext_catalogs WHERE name = 'Ducuments_Catalog_test')
    BEGIN
        EXEC sp_fulltext_catalog 'Ducuments_Catalog_test', 'create', 'D:\SQL\PDFBlob';
    END
    --EXEC sp_fulltext_catalog 'Ducuments_Catalog_test', 'drop'

    DECLARE @indexName nvarchar(255) = (SELECT Top 1 i.Name
                                        from sys.indexes i
                                        Join sys.tables t on i.object_id = t.object_id
                                        WHERE t.Name = 'DocumentFilesTest'
                                          AND i.type_desc = 'CLUSTERED')
    PRINT @indexName

    EXEC sp_fulltext_table 'DocumentFilesTest', 'create', 'Ducuments_Catalog_test', @indexName
    EXEC sp_fulltext_column 'DocumentFilesTest', 'FileSource', 'add', 0, 'Extension'
    EXEC sp_fulltext_table 'DocumentFilesTest', 'activate'
    EXEC sp_fulltext_catalog 'Ducuments_Catalog_test', 'start_full'

    ALTER FULLTEXT INDEX ON [dbo].[DocumentFilesTest] ENABLE
    ALTER FULLTEXT INDEX ON [dbo].[DocumentFilesTest] SET CHANGE_TRACKING = AUTO
    ALTER FULLTEXT CATALOG Ducuments_Catalog_test REBUILD WITH ACCENT_SENSITIVITY=OFF;

    INSERT INTO DocumentFilesTest(Extension, Name, FileSource)
    SELECT 'pdf', 'BOL12006553.pdf', *
    FROM OPENROWSET(BULK 'd:\SQL\PDFBlob\BOL12006553.pdf', SINGLE_BLOB) AS BLOB;
    GO
    INSERT INTO DocumentFilesTest(Extension, Name, FileSource)
    SELECT 'docx', 'test.docx', *
    FROM OPENROWSET(BULK 'd:\SQL\PDFBlob\test.docx', SINGLE_BLOB) AS Document;
    GO

    SELECT d.* FROM dbo.DocumentFilesTest d WHERE Contains(d.FileSource, 'BILL')
    Returns nothing. It should come from the PDF file.

    SELECT d.* FROM dbo.DocumentFilesTest d WHERE Contains(d.FileSource, 'TEST')
    Returns from the Word document as follows:
    2    2014-06-04 10:11:41.393    test.docx    docx    NULL    [BINARY Value]    [Binary Value]
    Any help is appreciated. It's been a long wait.
    Thanks,
    Vel
    Vel Thavasi

    Hello,
    Did you check the full-text log files for more details about the errors? If the filter isn't working, there should be errors in the log file.
    The following thread is about similar issue, please refer to:
    http://social.msdn.microsoft.com/forums/sqlserver/en-US/69535dbc-c7ef-402d-a347-d3d3e4860d72/sql-server-2008-64bit-fulltext-indexing-pdf-not-working-cant-find-ifilter
    Regards,
    Fanny Liu
    Fanny Liu
    TechNet Community Support

  • Uniques constraint violation error while executing statspack.snap

    Hi,
    I have configured a job to run the Statspack snap at an interval of 20 minutes from 6:00 PM to 3:00 AM. To perform this task, I have two scripts in crontab: one to execute the job at 6 PM and another to break the job at 3 AM. My Oracle version is 9.2.0.7 and the OS environment is AIX 5.3.
    My execute scripts look like:
    sqlplus perfstat/perfstat <<EOF
    exec dbms_job.broken(341,FALSE);
    exec dbms_job.run(341);
    EOF
    The problem is that the job works fine on weekdays, but on weekends it gets aborted with the error:
    ORA-12012: error on auto execute of job 341
    ORA-00001: unique constraint (PERFSTAT.STATS$SQL_SUMMARY_PK) violated
    ORA-06512: at "PERFSTAT.STATSPACK", line 1361
    ORA-06512: at "PERFSTAT.STATSPACK", line 2471
    ORA-06512: at "PERFSTAT.STATSPACK", line 91
    ORA-06512: at line 1
    After looking on Metalink, I came to know that this is a listed bug, 2784796, which was fixed in 10g.
    My question is: why is there no issue on weekdays using the same script? There is no activity on the db on weekends, and the online backup starts quite late at night.
    Thanks
    Anky

    The reasons for hitting this bug are explained in Metalink, "...cursors with same sql text (at least 31 first characters), same hash_value but a different parent cursor...", you can also find the workaround in Note:393300.1.
    Enrique

  • Db Adapter Logical Delete not working

    Hi,
    I have an ESB that contains a dbadapter that performs a logical delete once the esb has finished processing. The problem we are seeing is that this logical delete is not always happening. We update a field in the source table from 0 to 1 on successful completion, but as I said, this does not always work, causing unique constraint violations on our destination tables. Disabling and re-enabling the dbadapter service in the ESB Console usually clears the problem up, though at times a bounce of the SOA Suite using ./opmnctl stopall is necessary. We are using SOA Suite 10.1.3.1.
    Any ideas what could be causing this behavior?

    The 10.1.3.1 release had a number of issues and I would highly recommend upgrading at the earliest opportunity. One common issue that people hit with 10.1.3.1 is developers building SOA objects in 10.1.3.3 or 10.1.3.4. You must make sure that your developers used the same version of JDeveloper, e.g. 10.1.3.1.
    Here is a list of patches that I believe you should have in a 10.1.3.1 environment at a minimum, sorry I don't have the descriptions, hopefully one will address your issue.
    2617419
    5877231
    5838073
    5841736
    5905744
    5742242
    5729652
    5724766
    5664594
    5965376
    5672007
    6033824
    5758956
    5876231
    5900308
    5915792
    5473225
    5853207
    5990764
    5669155
    5149744
    cheers
    James
