DEFECT: SQL Server table displays wrong primary key columns.

This is what SQL Developer shows on the Keys tab of a table display pointing to a table in SQL Server 2000:
FK_ConsolidationAccount_AccountType     Account_Id     1     FOREIGN KEY
FK_ConsolidationAccount_AccountType      AccountType_Id     1     FOREIGN KEY
FK_ConsolidationAccount_AccountType      ParentAccount_Id     1     FOREIGN KEY
FK_ConsolidationAccount_ConsolidationAccount     Account_Id     1     FOREIGN KEY
FK_ConsolidationAccount_ConsolidationAccount     AccountType_Id     1     FOREIGN KEY
FK_ConsolidationAccount_ConsolidationAccount     ParentAccount_Id     1     FOREIGN KEY
PK_ConsolidationAccount     Account_Id     1     PRIMARY KEY
PK_ConsolidationAccount     AccountType_Id     1     PRIMARY KEY
PK_ConsolidationAccount     ParentAccount_Id     1     PRIMARY KEY
This is the correct information for that table:
FK_ConsolidationAccount_AccountType      AccountType_Id     1     FOREIGN KEY
FK_ConsolidationAccount_ConsolidationAccount     ParentAccount_Id     1     FOREIGN KEY
PK_ConsolidationAccount     Account_Id     1     PRIMARY KEY
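For reference, the symptom above looks like each constraint being paired with every key column of the table rather than only its own columns. A query like the following against the SQL Server 2000 INFORMATION_SCHEMA views returns the expected one-row-per-key-column listing shown above (this is only a hedged sketch, not the query SQL Developer actually issues):
-- Sketch: list each key constraint of the table with only its own columns
SELECT tc.CONSTRAINT_NAME,
       kcu.COLUMN_NAME,
       kcu.ORDINAL_POSITION,
       tc.CONSTRAINT_TYPE
  FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
  JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
    ON kcu.CONSTRAINT_NAME   = tc.CONSTRAINT_NAME
   AND kcu.CONSTRAINT_SCHEMA = tc.CONSTRAINT_SCHEMA
 WHERE tc.TABLE_NAME = 'ConsolidationAccount'
   AND tc.CONSTRAINT_TYPE IN ('PRIMARY KEY', 'FOREIGN KEY')
 ORDER BY tc.CONSTRAINT_NAME, kcu.ORDINAL_POSITION;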

Take a look at the action:
ORA-02437: cannot validate (string.string) - primary key violated
Cause: attempted to validate a primary key with duplicate values or null values.
Action: remove the duplicates and null values before enabling a primary key.
Nicolas.

Similar Messages

  • Moving Access table with an autonumber key to SQL Server table with an identity key

    I have an SSIS package that is moving data from an Access 2010 database to a SQL Server 2008 R2 database.  Two of the tables that I am migrating have identity keys in the SQL Server tables and I need to be able to move the autonumber keys to the SQL
    Server tables.  I am executing a SQL Script to set the IDENTITY_INSERT ON before I execute the Data Flow task moving the data and then execute a SQL Script to set the IDENTITY_INSERT OFF after executing the Data Flow task.
    It is failing with an error that says:
    An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80040E21  Description: "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was
    done.".
    Error: 0xC020901C at PGAccountContractDetail, PGAccountContractDetail [208]: There was an error with input column "ID" (246) on input "OLE DB Destination Input" (221). The column status returned was: "User does not have permission to
    write to this column.".
    Error: 0xC0209029 at PGAccountContractDetail, PGAccountContractDetail [208]: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR.  The "input "OLE DB Destination Input" (221)" failed because error code 0xC020907C occurred, and the
    error row disposition on "input "OLE DB Destination Input" (221)" specifies failure on error. An error occurred on the specified object of the specified component.  There may be error messages posted before this with more information
    about the failure.
    Error: 0xC0047022 at PGAccountContractDetail, SSIS.Pipeline: SSIS Error Code DTS_E_PROCESSINPUTFAILED.  The ProcessInput method on component "PGAccountContractDetail" (208) failed with error code 0xC0209029 while processing input "OLE DB
    Destination Input" (221). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.  There may be error messages posted
    before this with more information about the failure.
    Any ideas on what is causing this error?  I am thinking it is the identity key in SQL Server that is not allowing the update.  But I do not understand why, since I set IDENTITY_INSERT ON.
    Thanks in advance for any help/guidance provided.

    I suspect it is the security issue specified in the message, e.g. your DBA set up the ID columns so that no user can override the values in them.
    And I suggest you first put the data into a staging table, then push it to the destination; this does not resolve the issue, but it ensures better processing.
    Arthur
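    A minimal T-SQL sketch of the staging-table pattern Arthur suggests. The table name is taken from the error messages above; the staging table and column names are hypothetical. Note that SET IDENTITY_INSERT is a session-level setting, so it must run on the same connection as the INSERT:
    -- Hypothetical sketch: land the Access rows in a staging table first,
    -- then push them into the destination while keeping the original identity values.
    SET IDENTITY_INSERT dbo.PGAccountContractDetail ON;

    INSERT INTO dbo.PGAccountContractDetail (ID, ContractNo, Amount)   -- explicit column list is required
    SELECT ID, ContractNo, Amount
      FROM dbo.PGAccountContractDetail_Staging;

    SET IDENTITY_INSERT dbo.PGAccountContractDetail OFF;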

  • Unable To Select From SQL Server table with more than 42 columns

    I have set up a link between a Microsoft SQL Server 2003 database and an Oracle 9i database using Heterogeneous Services (HSODBC). It's working well with most of the schema I'm selecting from, except for 3 tables, and I don't know why. The common denominator is that they all have at least 42 columns each: two have 42 columns, one has 56, and the other one 66. Two of the tables are empty, one has almost 100k records, and one has 170k records. So I don't think the size of the table matters.
    Is there a limitation on the number of table columns you can select from through a dblink? Even the following statement errors out:
    select 1
    from "Table_With_42_Cols"@sqlserver_db
    The error message I get is:
    ORA-28500: connection from ORACLE to a non-Oracle system returned this message [Generic Connectivity Using ODBC]
    ORA-02063: preceding 2 lines from sqlserver_db
    Any assistance would be greatly appreciated. Thanks!

    Not a very efficient or space-friendly design to do name-value pairs like that.
    Other methods to consider are splitting those 1500 parameters up into groupings of similar parameters and then having a table per group.
    Another option would be to use "vertical table partitioning" (as opposed to the more standard horizontal partitioning provided by the Oracle partition option) - this can be achieved (kind of) in Oracle using clusters, as sketched below.
    Sooner or later this name-value design is going to bite you hard. It has 1500 rows where there should be only 1 row. It is not scalable, and as you're discovering, it is unnatural to use. I would rather change that table and design sooner than later.
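    As a rough, hypothetical illustration of the cluster-based vertical partitioning mentioned above (all object and column names are invented for the example):
    -- Sketch: group related parameters into normal columns of tables that share a cluster,
    -- so rows for the same entity are stored together.
    CREATE CLUSTER device_param_cluster (device_id NUMBER(10))
      SIZE 8192;

    CREATE INDEX device_param_cluster_idx ON CLUSTER device_param_cluster;

    CREATE TABLE device_params_network (
      device_id  NUMBER(10) NOT NULL,
      ip_address VARCHAR2(39),
      hostname   VARCHAR2(255)
    ) CLUSTER device_param_cluster (device_id);

    CREATE TABLE device_params_hardware (
      device_id NUMBER(10) NOT NULL,
      cpu_count NUMBER(4),
      ram_mb    NUMBER(10)
    ) CLUSTER device_param_cluster (device_id);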

  • How to update primary key column

    Hi,
    Can you suggest me best workaround/algorithm for below task:
    (Oracle 10g, Solaris OS.)
    Situation:
    Table P has primary key column "Code"; child tables F1, F2, ..., F15 reference column "P.Code" with foreign key column "P_Code", and we don't know which of the child tables has data for a particular "P.Code" value.
    Task:
    Change the "P.Code" value from 100 to 200, so that the result would be that record P[Code = 100] is updated as:
    update P set
    Code = 200
    where Code = 100;
    And the child tables' column "P_Code" should be updated as:
    update F1, F2, .., F15 set
    P_code = 200
    where P_code = 100;
    The best solution would be one that can very easily be repeated.
    Edited by: CharlesRoos on 28.12.2010 12:10

    If you are looking for a reusable and repeatable solution, then maybe...
    SQL> CREATE TABLE p (p_code NUMBER PRIMARY KEY);
    Table created.
    SQL> INSERT INTO p VALUES(100);
    1 row created.
    SQL> INSERT INTO p VALUES(300);
    1 row created.
    SQL> INSERT INTO p VALUES(500);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> CREATE TABLE F1 (p_code NUMBER REFERENCES p(p_code));
    Table created.
    SQL> CREATE TABLE F2 (p_code NUMBER REFERENCES p(p_code));
    Table created.
    SQL> CREATE TABLE F3 (p_code NUMBER REFERENCES p(p_code));
    Table created.
    SQL> INSERT INTO F1 VALUES(100);
    1 row created.
    SQL> INSERT INTO F3 VALUES(100);
    1 row created.
    SQL> INSERT INTO F2 VALUES(500);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> CREATE OR REPLACE PROCEDURE update_child_parent(pi_p_code_old NUMBER,
      2                                                  pi_p_code_new NUMBER) IS
      3    CURSOR table_to_update IS
      4      SELECT table_name,
      5             to_number(extractvalue(xmltype(DBMS_XMLGEN.getxml('SELECT count(*) c FROM ' ||
      6                                                               table_name ||
      7                                                               ' WHERE p_code=' ||
      8                                                               pi_p_code_old)),
      9                                    '/ROWSET/ROW/C')) cnt
    10        FROM user_tables
    11       WHERE table_name IN ('F1', 'F2', 'F3');
    12 
    13  BEGIN
    14    EXECUTE IMMEDIATE 'ALTER TABLE p DISABLE PRIMARY KEY CASCADE';
    15    UPDATE p SET p_code = pi_p_code_new WHERE p_code = pi_p_code_old;
    16    FOR i IN table_to_update LOOP
    17      IF i.cnt > 0 THEN
    18        EXECUTE IMMEDIATE 'UPDATE ' || i.table_name || ' SET p_code=' ||
    19                          pi_p_code_new || ' WHERE p_code=' || pi_p_code_old;
    20      END IF;
    21    END LOOP;
    22    EXECUTE IMMEDIATE 'ALTER TABLE p ENABLE VALIDATE PRIMARY KEY';
    23  END update_child_parent;
    24  /
    Procedure created.
    SQL> EXECUTE update_child_parent(100,200);
    PL/SQL procedure successfully completed.
    SQL> SELECT * FROM p;
        P_CODE
           200
           300
           500
    SQL> SELECT * FROM F1;
        P_CODE
           200
    SQL> SELECT * FROM F2;
        P_CODE
           500
    SQL> SELECT * FROM F3;
        P_CODE
           200
    SQL> INSERT INTO p VALUES(300);
    INSERT INTO p VALUES(300)
    ERROR at line 1:
    ORA-00001: unique constraint (HR.SYS_C005931) violated
    SQL> EXECUTE update_child_parent(500,900);
    PL/SQL procedure successfully completed.
    SQL> SELECT * FROM p;
        P_CODE
           200
           300
           900
    SQL>  SELECT * FROM F2;
        P_CODE
           900
    SQL>

  • Can a composite primary key column be null

    Hi All,
    It may be a silly question, but still I would like to ask: can a composite primary key column be null?
    Thanks,
    Rafi.

    Rafi,
    Why do you think it would be allowed?
    SQL> drop table test purge;
    drop table test purge
    ERROR at line 1:
    ORA-00942: table or view does not exist
    SQL> create table test as select * from dba_objects;
    Table created.
    SQL> alter table test add primary key(object_id, owner);
    Table altered.
    SQL> insert into test(object_id, owner) values(null, 'aman');
    insert into test(object_id, owner) values(null, 'aman')
    ERROR at line 1:
    ORA-01400: cannot insert NULL into ("SYS"."TEST"."OBJECT_ID")
    SQL> insert into test(object_id, owner) values(1,null);
    insert into test(object_id, owner) values(1,null)
    ERROR at line 1:
    ORA-01400: cannot insert NULL into ("SYS"."TEST"."OWNER")
    SQL>
    HTH
    Aman....

  • FillSchema picks too many primary key columns

    I don't know whether this is an ODP.NET error or a Microsoft error.
    Oracle9i Release 9.2.0.1.0
    ODP.NET 9.2.0.4
    .NET Framework 1.1
    Create a table with one primary key column and one unique column and name the primary key column like the unique column with the suffix "_ID":
    CREATE TABLE t_bib_uebertrag_kap(
      kapazitaet_id NUMBER(8) NOT NULL
        CONSTRAINT pk_kapa PRIMARY KEY,
      kapazitaet VARCHAR2(15) NOT NULL
        CONSTRAINT a_un_kapa UNIQUE,
      geschwindigkeit NUMBER(12) NOT NULL,
      bemerkung VARCHAR2(255)
    );
    After calling FillSchema on this table the PrimaryKey property of the DataTable contains 2 columns: KAPAZITAET_ID and KAPAZITAET.
    The same happens with other tables and similar column names.
    R. Lüthke

    Tony, I'm gonna find time to try this because I was thinking it probably would be a double check.
    BUT... Oracle is written in C, and I imagine the extra check as a simple if comparison for the new value versus a constant NULL value. Kind of like checking for end of string. It's already doing things like checking datatypes and that as it inserts data, and I don't think this additional check (if it even exists) would add any significant overhead.
    But unless I find a much larger performance hit than I expect, I'm gonna stick with creating the NOT NULL's explicitly. Good data modelling trumps small performance tricks for me.
    My main worry is if/when somebody removes the primary key constraint (such as the example above where they might do a CREATE TABLE x AS SELECT * FROM y). Or if the data model gets updated to where the primary key becomes just a unique key, then the NOT NULL goes away when it shouldn't. I don't like basic table and column properties changing.
    Okay.. test done. Ran about 31,000 records from dba_tables through an insert into two tables, both with primary keys and one with explicit NOT NULLs. No measurable difference in stats detected.
    With NOT NULL and PRIMARY KEY:
    call      count    cpu  elapsed   disk   query  current    rows
    Parse         1   0.35     0.35      0       0        0       0
    Execute       1   1.88     1.83      0   24792    35887   30954
    Fetch         0   0.00     0.00      0       0        0       0
    total         2   2.24     2.18      0   24792    35887   30954
    Now with only PRIMARY KEY:
    call      count    cpu  elapsed   disk   query  current    rows
    Parse         1   0.37     0.36      0       0        0       0
    Execute       1   1.87     1.83      0   24792    35887   30954
    Fetch         0   0.00     0.00      0       0        0       0
    total         2   2.24     2.19      0   24792    35887   30954
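    A rough re-creation of the timing test described above, assuming hypothetical table names (t_pk_only and t_pk_notnull):
    -- Sketch: same data loaded into two tables, one relying on the PK alone,
    -- one with an explicit NOT NULL as well, to compare insert statistics.
    CREATE TABLE t_pk_only    (obj_name VARCHAR2(70) PRIMARY KEY);
    CREATE TABLE t_pk_notnull (obj_name VARCHAR2(70) NOT NULL PRIMARY KEY);

    INSERT INTO t_pk_only    SELECT owner || '.' || table_name FROM dba_tables;
    INSERT INTO t_pk_notnull SELECT owner || '.' || table_name FROM dba_tables;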

  • How to know the primary key column name from a table name in a SQL query

    Suppose I only know the table name. How do I get its primary key column name from the table name?
    Thanks

    Views don't have primary keys, though their underlying tables might. You'd need to pick apart the view to determine where its columns are coming from.
    You can select the text of the view in question from user_views.
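    For an Oracle table (as opposed to a view), the data dictionary gives the primary key columns directly; a small sketch, assuming a hypothetical table name MY_TABLE:
    -- List the primary key column(s) of a table you own
    SELECT cols.column_name, cols.position
      FROM user_constraints cons
      JOIN user_cons_columns cols
        ON cols.constraint_name = cons.constraint_name
     WHERE cons.constraint_type = 'P'
       AND cons.table_name = 'MY_TABLE'
     ORDER BY cols.position;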

  • ORA-02070: Error when updating a SQL Server table thru an Oracle View

    I have a SQL Server table TIMESHEET which contains a number of VARCHAR and NUMERIC columns plus a DATETIME column.
    Only the DATETIME column is giving me trouble.
    On the ORACLE side I have a view which selects from the SQL Server table, but in order to get the SELECT to work, I had to put either a CAST or a TO_DATE function call around the DATETIME field.
    Below is the relevant part of the 2 view definitions I have tried
    create view TIMESHEET as
    SELECT
    "TsKeySeq" as TS_KEY_SEQ,
    "EmployeeNo" as EMPLOYEE_NO,
    CAST("PeriodEnding" AS DATE) as PERIOD_ENDING,
    . . . (more columns - not relevant)
    FROM [email protected];
    An update to the view generates this message
    ORA-02070: database OLEMSQLPSANTDAS6 does not support CAST in this context
    create view TIMESHEET as
    SELECT
    "TsKeySeq" as TS_KEY_SEQ,
    "EmployeeNo" as EMPLOYEE_NO,
    TO_DATE("PeriodEnding") as PERIOD_ENDING,
    . . . (more columns - not relevant)
    FROM [email protected];
    An update to the view generates this message
    ORA-02070: database OLEMSQLPSANTDAS6 does not support TO_DATE in this context
    If I don't include either the TO_DATE() or CAST() then I get
    Select Error: ORA-28527: Heterogeneous Services datatype mapping error
    ORA-02063:preceding line from OLEMSQLSANTDAS6
    Does anyone have any idea how to update a SQL Server DATETIME column thru an ORACLE view?

    You can't cast across heterogeneous databases, and there is no need to. HSODBC treats a SQL Server DATETIME column as DATE. For example, I have this SQL Server table:
    CREATE TABLE [Ops].[T_JobType](
         [JobType] [varchar](50) NOT NULL,
         [JobDesc] [varchar](200) NULL,
         [InsertDt] [datetime] NOT NULL CONSTRAINT [InsertDt_00000006]  DEFAULT (getdate()),
         [InsertBy] [varchar](128) NOT NULL CONSTRAINT [InsertBy_00000006]  DEFAULT (user_name()),
         [LastUpdated] [datetime] NOT NULL CONSTRAINT [LastUpdated_00000006]  DEFAULT (getdate()),
         [LastUpdatedBy] [varchar](128) NOT NULL CONSTRAINT [LastUpdatedBy_00000006]  DEFAULT (user_name()),
    CONSTRAINT [T_JobType_PK] PRIMARY KEY CLUSTERED (
         [JobType] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) ON [DATA01FG]
    ) ON [DATA01FG]
    Now on the Oracle side I do:
    SQL> desc "Ops"."T_JobType"@pbods
    Name                                      Null?    Type
    JobType                                   NOT NULL VARCHAR2(50)
    JobDesc                                            VARCHAR2(200)
    InsertDt                                  NOT NULL DATE
    InsertBy                                  NOT NULL VARCHAR2(128)
    LastUpdated                               NOT NULL DATE
    LastUpdatedBy                             NOT NULL VARCHAR2(128)
    SQL> select "InsertDt" from "Ops"."T_JobType"@pbods;
    InsertDt
    18-AUG-08
    09-OCT-08
    22-OCT-09
    18-AUG-08
    19-NOV-08
    SQL>
    SY.
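    Following that reply, the Oracle-side view from the question would select the DATETIME column with no conversion at all. A hedged sketch, where the remote object name is hypothetical and stands in for whatever the original FROM clause referenced over the same database link:
    create view TIMESHEET as
    SELECT
      "TsKeySeq"     as TS_KEY_SEQ,
      "EmployeeNo"   as EMPLOYEE_NO,
      "PeriodEnding" as PERIOD_ENDING   -- HSODBC already maps DATETIME to DATE; no CAST/TO_DATE
    FROM "Timesheet"@OLEMSQLPSANTDAS6;  -- hypothetical remote table name over the existing link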

  • Need to create a Form on a Table with Report where the table has NO primary key

    Hi, I tried to create some insert/update/delete forms + reports in an application; they work fine only if the table has a primary key. Does anyone know how to create the same functionality with a table that has no primary key? I saw an application built on an older version of HTML DB that is using tables with no primary keys at all.
    Here are the specific issues that I am facing:
    - I am building a Form on a Table with Report; it requires the table to have a primary key for the form to update. Is there a workaround so that I can use tables that have no primary key at all?
    - Say a primary key is necessary for the previous report + form, but the maximum number of columns I can use to compose a primary key for that Form-Report is only 2; I cannot find anything handling more than 2 primary key columns. Do you know if there is some way to compose a primary key from many columns together?
    Your help is really appreciated.
    Thanks,
    Angela

    Sorry to respond so late; I had no time to get back to this issue before now.
    Regarding the triggers, I can make it work for the update, but not the insert.
    Here is my trigger:
    create or replace trigger STATUS_T1
    instead of insert on STATUS
    begin
    insert into STATUS ("LABEL", "AREA", "OWNER", "TEST_NAME", "STATUS", "REMARKS", "BUGS", "DEV_MGR", "TEST_BY_DATE")
    values(:new.LABEL, :new.AREA, :new.OWNER, :new.TEST_NAME, :new.STATUS, :new.REMARKS, :new.BUGS, :new.DEV_MGR, :new.TEST_BY_DATE);
    end;
    By any chance, can you tell me what is wrong?
    I already skip the ROWID when inserting into the view STATUS, but I cannot figure out what is wrong when inserting a new record into that view.
    It gave me the following errors:
    ORA-06550: line 1, column 38: PL/SQL: ORA-00904: "ID": invalid identifier
    ORA-06550: line 1, column 7: PL/SQL: SQL Statement ignored
    Error Unable to process row of table STATUS
    Then I turned to debug mode. I am thinking that maybe it is because I use a HIDDEN item to hold the value of ROW_ID, as I use the rowid (called ID in the view) to retrieve the record via a column link from the previous page. What do you think?
    Thanks again,
    Angela
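    For reference, a minimal sketch of the pattern Angela describes (a view exposing ROWID under the name ID, with an instead-of trigger that does not expect an ID on insert); all object names here are hypothetical:
    -- Hypothetical base table STATUS_TBL and view STATUS_V exposing ROWID as ID
    create or replace view status_v as
      select rowid as id, label, area, owner, test_name, status,
             remarks, bugs, dev_mgr, test_by_date
        from status_tbl;

    create or replace trigger status_v_ioi
      instead of insert on status_v
      for each row
    begin
      -- ID (the ROWID) is generated by the base table, so it is not supplied here
      insert into status_tbl (label, area, owner, test_name, status,
                              remarks, bugs, dev_mgr, test_by_date)
      values (:new.label, :new.area, :new.owner, :new.test_name, :new.status,
              :new.remarks, :new.bugs, :new.dev_mgr, :new.test_by_date);
    end;
    /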

  • Updating table records having 5-6 primary key columns

    I have a table structure with a 12-column composite primary key. I have created a report + form on this table. My requirement is to update the table using the form item fields. I am taking help of the URL option on the report attributes page of the application to fetch data onto the form page when we click on the edit link of the report page. Now I am having two problems, which are as follows:
    (i) I am unable to update data, as the requirement is to update 5 to 6 primary key fields along with other non-primary-key fields. I tried to create a PL/SQL process that runs an update query, but this process is not updating values. Is there any way that I could fetch data directly from the database table in the query rather than taking item values? Is there any other workaround?
    (ii) One of the primary key columns contains records which have ' , ' in them, for example: cluth,bearing. So when I navigate to the edit page I only get the text displayed as clutch, i.e. the text before the ',' is displayed in the text field, while the comma and the text after it are not displayed in the text field of the form page.
    Any solutions will be very helpful.
    Thanks
    Abhi

    Hello Abhi,
    >> I am unable to update data as requirement is to update 5 to 6 primary fields along with other non primary keys
    APEX wizards support a composite PK with up to 2 segments only. For every other scenario, you’ll have to manually create your DML code.
    If you have control over your data model, I would listen very carefully to Andre's advice. Using a single-segment PK is the best-practice approach. If you can't add a PK to your table, it seems that you'll have to write your own DML code. The Object Browser option of creating a Package with methods on database table(s) can be a great help.
    >> … I am taking help of url option … One of the primary key column contains records which have ' , ' in them …
    Using the f?p notation, to pass a comma in an item value, you should enclose the characters with backslashes. For example,
    \cluth,bearing\
    In your case, it should be the item/column notation.
    http://download.oracle.com/docs/cd/E17556_01/doc/user.40/e15517/concept.htm#BCEDJBEH
    Regards,
    Arie.
    ♦ Please remember to mark appropriate posts as correct/helpful. For the long run, it will benefit us all.
    ♦ Author of Oracle Application Express 3.2 – The Essentials and More
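    A minimal sketch of the manual DML Arie refers to above, keying the update on the full composite key from page items; the page, item, and column names below are hypothetical:
    -- Hypothetical APEX page process (PL/SQL): update one row by its 6-column composite key
    BEGIN
      UPDATE parts_status
         SET remarks = :P10_REMARKS
       WHERE key_col1 = :P10_KEY_COL1
         AND key_col2 = :P10_KEY_COL2
         AND key_col3 = :P10_KEY_COL3
         AND key_col4 = :P10_KEY_COL4
         AND key_col5 = :P10_KEY_COL5
         AND key_col6 = :P10_KEY_COL6;
    END;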

  • Hibernate mapping XML files for the two SQL Server tables below.

    Hello all..,
    Question 1:
    I am working on a project that needs to support a database with an inherited legacy schema that cannot be changed. The schema is provided below. I need Hibernate mapping XML files for the two SQL Server tables below; please provide those two XML files. Assume some hypothetical package and class names. Assume that no "fancy" stuff such as lazy initialization, optimistic locking etc. is needed at this time.
    CREATE TABLE [SURVEY_ANSWERS] (
          [ANSWER_ID] [int] IDENTITY (1, 1) NOT NULL,
          [QUESTION_ID] [int] NOT NULL,
          [POSITION] [int] NULL,
          [TEXT] [varchar](350) NULL
    )
    GO
    CREATE TABLE [dbo].[SURVEY_QUESTIONS] (
          [QUESTION_ID] [int] IDENTITY (1, 1) NOT NULL,
          [TEXT] [varchar](350) NULL
    )
    GO
    ALTER TABLE SURVEY_ANSWERS
    ADD CONSTRAINT pk_SURVEY_ANSWERS PRIMARY KEY(ANSWER_ID, QUESTION_ID);
    ALTER TABLE [dbo].[SURVEY_QUESTIONS] ADD
           PRIMARY KEY CLUSTERED (
                [QUESTION_ID]
           )
    GO
    ALTER TABLE [dbo].[SURVEY_ANSWERS] ADD
           FOREIGN KEY (
                [QUESTION_ID]
          ) REFERENCES [dbo].[SURVEY_QUESTIONS] (
                [QUESTION_ID]
          )
    Question 2:
    Assume that you are working on a project developing, say, a banking application. You are the Architect and are thinking that Hibernate ORM should be used for all access to the relational database. As usual, you have created (or auto-generated) a set of HBM XML files as well as the POJOs for which you define the mappings. Assume now that a new requirement has just popped up: the system needs to be able to import new bank accounts and user information in bulk from a very large XML file at once and store it in the database. Assume the XML file contains all the information necessary to populate the fields in the database tables. Performance is very important for this operation. Given this description, how would you approach the problem?
    Please describe briefly.
    -Thanks and regards
    Praveen Soni

    You're not fooling anyone, Dennis_Mox. But nice try.
    Jeez, man. Mail me at denismox[at]yandex.ru, I will show you that exact test, dammit.

  • Data Pump .xlsx into a SQL Server Table and the whole 32-Bit, 64-Bit discussion

    First of all...I have a headache!
    Found LOTS of Google hits when trying to data pump a .xlsx File into a SQL Server Table. And the whole discussion of the Microsoft ACE 64-Bit Driver or the Microsoft Jet 32-Bit Driver.
    Specifically receiving this error...
    An OLE DB record is available.  Source: "Microsoft Office Access Database Engine"  Hresult: 0x80004005  Description: "External table is not in the expected format.".
    Error: 0xC020801C at Data Flow Task to Load Alere Coaching Enrolled, Excel Source [56]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER.  The AcquireConnection method call to the connection manager "Excel Connection Manager"
    failed with error code 0xC0202009.
    Strangely enough, if I simply data pump ONE .xlsx File into a SQL Server Table utilizing my SSIS Package, it seems to work fine. If instead I try to be proactive and allow for multiple .xlsx Files by using a Foreach Loop Container and a variable
    @[User::FileName], it errors out... but not really, because it is indeed storing the rows onto the SQL Server Table. I did check all my Delay
    Why does this have to be sooooooo difficult???
    Can anyone help me out here in trying to set up an SSIS Package in a rather constrictive environment to pump a .xlsx File into a SQL Server Table? What in God's name am I doing wrong? Or is all this a misnomer? But if it's working, how do I disable the error so that it stops erroring out?

    Hi ITBobbyP,
    According to your description, you get the error message when you import data from a .xlsx file into a SQL Server database.
    The error can be caused by the following reasons:
    The Excel file is locked by another process. Please re-save the file under a different name to see if the issue is fixed.
    The ACE (Access Database Engine) is not up to date, as Vaibhav mentioned. Please download the latest ACE and install it from the link:
    https://www.microsoft.com/en-us/download/details.aspx?id=13255.
    The bitness of Office and the server is not the same. To solve the problem, please refer to the following document:
    http://hrvoje.piasevoli.com/2010/09/01/importing-data-from-64-bit-excel-in-ssis/
    If you have any more questions, please feel free to ask.
    Thanks,
    Wendy Fu
    TechNet Community Support

  • Getting errors when updating a column on a table having a primary key

    Hi,
    I have an application on Oracle APEX that raises the following error after an attempt (through the application) to update a column with no specific constraint on it:
    ORA-06550: line 1, column 17: PL/SQL: ORA-00936: missing expression
    ORA-06550: line 1, column 9: PL/SQL: SQL Statement ignored
    Unable to fetch row.
    The involved table has a primary key constraint, and the corresponding column can be populated by a sequence (but there is no trigger to manipulate the sequence).
    The sequence is mentioned in the involved page definition for populating the primary key.
    If I disable the primary key and set to null the corresponding value for the primary key of the record to be updated, then it is possible to update that record (and thus the above column) through the application.
    Has anyone encountered this situation before?
    If yes, what was then your workaround/solution?
    Kind Regards.

    Dear user8058501 ,
    Firstly) Did you check
    Auto Row Fetch (After upgrade to 4.0.1)
    Automated Row Fetch on Table with Synonym causes ORA-00936: missing expr.
    Secondly) If the problem is not resolved, would you provide a sample on apex.oracle.com with a workspace/developer account so that we are able to help you?
    Please, if this solves your question, mark it as Correct. Otherwise as helpful.
    Best Regards
    Mahmoud

  • Updating a table with no Primary key

    What I am trying to find out is if you can uniquely update a single record in a table that has no primary key. I see no way to reference the record I want, other than specifying all the field values in a where clause. This is a problem, though, because if I have another record with the same field values, that will get updated too. (I know it's a bad database design, but some people do it and I need to work around it!!!) There are some situations where you would have identical records as far as a where clause is concerned; for example, you can't use a blob in a where clause, so that might be where your records differ...
    In SQL Server 2000, Enterprise Manager throws an error if you have a table with no primary key and duplicate records that you try to update through the GUI. EM uses stored procedures to execute updates, and it rolls back ones which result in more than one update result.
    I'm not too familiar with cursors in SQL, only that they are quite slow and not implemented on all DB products. But I think they might be able to solve the problem (though I don't logically see how).
    Can anybody explain this to me?

    >> another record with the same field values
    If you have two records and all the field values are the same, then the records are logically the same anyway, so it shouldn't matter if you update both (one would question why there are two records in the first place). In other words, there is no way, either computationally or manually, to tell them apart.
    I suspect that that is rather rare. Instead what is more likely is that you are only using some of the fields. So just keep adding fields until it is unique.
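    One technique the thread does not mention: on SQL Server 2000 specifically, SET ROWCOUNT also limits UPDATE and DELETE statements, so you can change exactly one of several identical rows. A hedged sketch with hypothetical table and column names:
    -- Hypothetical example: update only one of the duplicate rows
    SET ROWCOUNT 1;

    UPDATE dbo.SomeTable
       SET col_b = 'new value'
     WHERE col_a = 'duplicate value';

    SET ROWCOUNT 0;   -- reset so later statements are not limited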

  • Using MS-SQL Server tables to build Universes

    My team has been tasked to create Universes so users can create WEBI reports.  We don't have a business warehouse, but that appeared not to be a problem as our understanding is that we can create Universes directly from MS-SQL Server tables. 
    On attempting to use Microsoft SQL Server Management Studio (2005), we find that once we expand the list of tables for SAP, performance slows down to a 5 to 10 second response time between mouse clicks.
    Is there a setting to work around this?  Are we approaching this the wrong way?  Is there documentation that addresses this?
    Thanks,
    Leo

    Hi Leo,
    There are a couple of options which can help improve the performance of your designer.
    1) Under Tools > Options, on the Graphics tab, if you turn off Show Row Counts, then the DB will not be queried for the number of rows.
    2) Under Tools > Options > General, ensure "Automatic parse upon definition" and "Check universe integrity at opening" are turned off.
    It is not clear from your message, but you mentioned SAP as your source. I am not sure if you intended that you were connecting directly to SAP or not.
    The number of tables can also have an impact on the performance of the Designer tool. You can limit the number of tables by tailoring the tables strategy; this means you can limit the query that returns the list of tables to a specific set.
    Hope this helps
    Alan
