Combining tables with multi-column keys using NOT EQUALS

Hi, I am trying to perform a query in SQL to find a particular person who has a value for an attribute that nobody else has. So far my query looks like this:
SELECT CS1.*
FROM COMPANYSTAFF CS1
WHERE NOT EXISTS
(SELECT * FROM COMPANYSTAFF CS2 WHERE CS1.COMPSTAFFID != CS2.COMPSTAFFID AND CS1.MAININTEREST = CS2.MAININTEREST)
GO
The problem is that the table represents a weak entity, so the primary key is the name of the company the person works for combined with his ID. I want to change the not-equals part of the above query so that I can compare CS1's (CompanyName, ID) pair against CS2's (CompanyName, ID) pair as a whole, rather than comparing the two columns separately.
To clarify what probably sounds really stupid, I would really like the ability to say something like:
(CS1.COMPNAME, CS1.COMPSTAFFID) != (CS2.COMPNAME, CS2.COMPSTAFFID)
Does anyone know if this is possible or if there is another way around it?
Thanks
Dave

So ?
SELECT CS1.*
FROM COMPANYSTAFF CS1
WHERE NOT EXISTS
(SELECT * FROM COMPANYSTAFF CS2 WHERE (CS1.COMPSTAFFID != CS2.COMPSTAFFID OR CS1.COMPNAME != CS2.COMPNAME)
AND CS1.MAININTEREST = CS2.MAININTEREST)
GO ?
Is it MS SQL? :-)
Regards
Dmytro
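A pair (COMPNAME, COMPSTAFFID) differs from another pair exactly when at least one of the two columns differs, which is what the OR above expresses. Some databases (PostgreSQL and MySQL, for example) also accept the tuple comparison written directly; SQL Server, which the GO batch separator suggests, does not allow it in this position, so the expanded form is the portable choice. A sketch of both, reusing the table and columns from the question and otherwise untested:

-- Portable form: the compound key differs if either part differs
SELECT CS1.*
FROM COMPANYSTAFF CS1
WHERE NOT EXISTS
      (SELECT *
       FROM COMPANYSTAFF CS2
       WHERE (CS1.COMPNAME != CS2.COMPNAME OR CS1.COMPSTAFFID != CS2.COMPSTAFFID)
         AND CS1.MAININTEREST = CS2.MAININTEREST)
GO

-- Tuple form: only on databases that support row-value comparisons
SELECT CS1.*
FROM COMPANYSTAFF CS1
WHERE NOT EXISTS
      (SELECT *
       FROM COMPANYSTAFF CS2
       WHERE (CS2.COMPNAME, CS2.COMPSTAFFID) <> (CS1.COMPNAME, CS1.COMPSTAFFID)
         AND CS1.MAININTEREST = CS2.MAININTEREST);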

Similar Messages

  • Error inserting a row into a table with identity column using cfgrid on change

I got an error when trying to insert a row into a table with an identity column using cfgrid onChange; see below.
I would also like to use cfstoredproc instead of cfquery, but which arguments do I need to pass and how do I use it? Usually I use a stored procedure like:
    update table (xxx,xxx,xxx)
    values (uu,uuu,uu)
         My component
    <!--- Edit a Media Type  --->
        <cffunction name="cfn_MediaType_Update" access="remote">
            <cfargument name="gridaction" type="string" required="yes">
            <cfargument name="gridrow" type="struct" required="yes">
            <cfargument name="gridchanged" type="struct" required="yes">
            <!--- Local variables --->
            <cfset var colname="">
            <cfset var value="">
            <!--- Process gridaction --->
            <cfswitch expression="#ARGUMENTS.gridaction#">
                <!--- Process updates --->
                <cfcase value="U">
                    <!--- Get column name and value --->
                    <cfset colname=StructKeyList(ARGUMENTS.gridchanged)>
                    <cfset value=ARGUMENTS.gridchanged[colname]>
                    <!--- Perform actual update --->
                    <cfquery datasource="#application.dsn#">
                    UPDATE SP.MediaType
                    SET #colname# = '#value#'
                    WHERE MediaTypeID = #ARGUMENTS.gridrow.MediaTypeID#
                    </cfquery>
                </cfcase>
                <!--- Process deletes --->
                <cfcase value="D">
                    <!--- Perform actual delete --->
                    <cfquery datasource="#application.dsn#">
                    update SP.MediaType
                    set Deleted=1
                    WHERE MediaTypeID = #ARGUMENTS.gridrow.MediaTypeID#
                    </cfquery>
                </cfcase>
                <cfcase value="I">
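                    <!--- Note: the ajax error quoted below shows gridchanged arriving as an empty struct on insert, so StructKeyList() returns "" and the lookup ARGUMENTS.gridchanged[colname] fails with "Element '' is undefined" --->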
                    <!--- Get column name and value --->
                    <cfset colname=StructKeyList(ARGUMENTS.gridchanged)>
                    <cfset value=ARGUMENTS.gridchanged[colname]>
                    <!--- Perform actual update --->
                   <cfquery datasource="#application.dsn#">
                    insert into  SP.MediaType (#colname#)
                    Values ('#value#')              
                    </cfquery>
                </cfcase>
            </cfswitch>
        </cffunction>
    my table
    mediatype:
    mediatypeid primary key,identity
    mediatypename
    my code is
    <cfform method="post" name="GridExampleForm">
            <cfgrid format="html" name="grid_Tables2" pagesize="3"  selectmode="edit" width="800px" 
            delete="yes"
            insert="yes"
                  bind="cfc:sp3.testing.MediaType.cfn_MediaType_All
                                                                ({cfgridpage},{cfgridpagesize},{cfgridsortcolumn},{cfgridsortdirection})"
                  onchange="cfc:sp3.testing.MediaType.cfn_MediaType_Update({cfgridaction},
                                                {cfgridrow},
                                                {cfgridchanged})">
                <cfgridcolumn name="MediaTypeID" header="ID"  display="no"/>
                <cfgridcolumn name="MediaTypeName" header="Media Type" />
            </cfgrid>
    </cfform>
On insert I get the following error message (ajax logging error message):
    http: Error invoking xxxxxxx/MediaType.cfc : Element '' is undefined in a CFML structure referenced as part of an expression.
    {"gridaction":"I","gridrow":{"MEDIATYPEID":"","MEDIATYPENAME":"uuuuuu","CFGRIDROWINDEX":4} ,"gridchanged":{}}
    Thanks

Is this with the Travel database or another database? If it's another database then make sure your columns allow nulls. To check this in the Server Navigator, expand your DataSource down to the column. Select the column and view the Is Nullable property in the Property Sheet.
If still no luck, check out a tutorial, like Performing Inserts, ...
http://developers.sun.com/prodtech/javatools/jscreator/learning/tutorials/index.jsp
John

  • REMOVE_ELEMENT in a Table with ( tree by key Column )

    Hi all,
I designed a table with a tree by key column (a normal table with a tree).
In one of the rows there is a drop-down field provided for selection.
We have a delete button provided to the customer.
This delete button handles removing the lead-selected row using the method REMOVE_ELEMENT of the context node.
So if the context node has 15 elements, the particular hierarchy selected can have a minimum of 4 elements.
In the UI the whole hierarchy node disappears from the display, but in debug mode the context node still has 14 elements.
Kindly suggest how to handle it so that the whole hierarchy is deleted from the node as well.
    Thank you,
    Usha

    Hello Usha,
For this you need to write the logic yourself. If you are deleting a context element with ID, say, 'ROW1', then you also need to delete every context element whose parent key is 'ROW1'.
    BR, Saravanan

  • Populating a table with two columns using a custom bean

    hello,
    Can someone provide me or give me a link to an example of populating a table (with two columns) with a custom bean?
    thank you
    fwu

1) Create a Java class.
2) Have a list as a class variable.
3) Populate the list in the constructor.
4) Map the values in the af:table.
//Employee pojo
public class Employee {

    private String empName;
    private String empManager;
    private String job;

    public Employee() {
        super();
    }

    public void setEmpName(String empName) {
        this.empName = empName;
    }
    public String getEmpName() {
        return empName;
    }
    public void setEmpManager(String empManager) {
        this.empManager = empManager;
    }
    public String getEmpManager() {
        return empManager;
    }
    public void setJob(String job) {
        this.job = job;
    }
    public String getJob() {
        return job;
    }
}
//managed bean
import java.util.ArrayList;
import java.util.List;

public class Bean {

    private List<Employee> employee;

    public Bean() {
        super();
        employee = new ArrayList<Employee>();
        Employee e1 = new Employee();
        e1.setEmpName("xxxxx");
        e1.setEmpManager("xxxxxxxx");
        e1.setJob("xxxxxxx");
        Employee e2 = new Employee();
        e2.setEmpName("yyyyyyy");
        e2.setEmpManager("yyyyyyy");
        e2.setJob("yyyyyyt");
        Employee e3 = new Employee();
        e3.setEmpName("zzzzzz");
        e3.setEmpManager("zzzzzzz");
        e3.setJob("zzzzzzzz");
        employee.add(e1);
        employee.add(e2);
        employee.add(e3);
    }

    public void setEmployee(List<Employee> employee) {
        this.employee = employee;
    }
    public List<Employee> getEmployee() {
        return employee;
    }
}

In the table, map it like:
    <af:table value="#{Bean.employee}" var="row" rowBandingInterval="0"
                        id="t1">
    <af:column headerText="Employee Name" id="c1">
                  <af:inputText value="#{row.empName}" id="it5"/>
                </af:column>
                <af:column headerText="Employee Manager" id="c2">
                  <af:inputText value="#{row.empManager}" id="it2"/>
                </af:column>
    </af:table>

  • Sparse table with many columns

    Hi,
    I have a table that contains around 800 columns. The table is a sparse table such that many rows
    contain up to 50 populated columns (The others contain NULL).
    My questions are:
1. Can a table that contains many columns cause a performance problem? Is there an alternative way to hold a table with many columns efficiently?
2. Does a row that contains NULL values consume storage space?
    Thanks
    dyahav

    [NULLs Indicate Absence of Value|http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10743/schema.htm#sthref725]
    A null is the absence of a value in a column of a row. Nulls indicate missing, unknown, or inapplicable data. A null should not be used to imply any other value, such as zero. A column allows nulls unless a NOT NULL or PRIMARY KEY integrity constraint has been defined for the column, in which case no row can be inserted without a value for that column.
    Nulls are stored in the database if they fall between columns with data values. In these cases they require 1 byte to store the length of the column (zero).
    Trailing nulls in a row require no storage because a new row header signals that the remaining columns in the previous row are null. For example, if the last three columns of a table are null, no information is stored for those columns. In tables with many columns, the columns more likely to contain nulls should be defined last to conserve disk space.
    Most comparisons between nulls and other values are by definition neither true nor false, but unknown. To identify nulls in SQL, use the IS NULL predicate. Use the SQL function NVL to convert nulls to non-null values.
Nulls are not indexed, except when the cluster key column value is null or the index is a bitmap index.
    My guess for efficiently storing this information would be to take any columns that are almost always null and place them at the end of the table definition so they don't consume any space.
    HTH!
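As a rough sketch of that advice (all names below are invented for illustration), the sparse table would keep its always-populated columns first and the mostly-NULL measurement columns last, so that trailing NULLs cost no storage:

-- Hypothetical layout: frequently populated columns first,
-- sparse columns last so trailing NULLs take no space.
CREATE TABLE sparse_readings (
    reading_id   NUMBER PRIMARY KEY,
    reading_date DATE   NOT NULL,
    source_id    NUMBER NOT NULL,
    -- ... the commonly populated columns ...
    measure_051  NUMBER,   -- rarely populated from here on
    measure_052  NUMBER
    -- ... through measure_800
);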

  • Mapping tables with no primary key defined

    Hi!
    I was just wondering if it's possible to map a table with no primary key? The reason why is that I have a logging table that I want to access, which is built very simple and the only useful filtering criteria that I'll be using there is a date column.
    Regards,
    Bjorn Boe

    TopLink requires that an object have a primary key to determine identity, perform caching, and database updates and deletes. It is best to define a primary key in all your objects; a sequence number can be used if the object has no natural primary key. The primary key does not have to be defined on the database, as long as a set of fields in the table are unique you can define them to be the primary key in your TopLink descriptor.
    If your table truly has no primary key, nor unique set of fields, it is still possible to map the table with some restrictions. If you disable caching, and mark the object as read-only, and never update or delete the object, you can map it for read-only purposes.
    You will need to do the following:
    - Set any field as the primary key to avoid the validation warning.
    - Set the cache type to NoIdentityMap.
    - Set the descriptor to be read-only.

  • Oracle 11.2 - Perform parallel DML on a non partitioned table with LOB column

    Hi,
    Since I wanted to demonstrate new Oracle 12c enhancements on SecureFiles, I tried to use PDML statements on a non partitioned table with LOB column, in both Oracle 11g and Oracle 12c releases. The Oracle 11.2 SecureFiles and Large Objects Developer's Guide of January 2013 clearly says:
    Parallel execution of the following DML operations on tables with LOB columns is supported. These operations run in parallel execution mode only when performed on a partitioned table. DML statements on non-partitioned tables with LOB columns continue to execute in serial execution mode.
    INSERT AS SELECT
    CREATE TABLE AS SELECT
    DELETE
    UPDATE
    MERGE (conditional UPDATE and INSERT)
    Multi-table INSERT
    So I created and populated a simple table with a BLOB column:
    SQL> CREATE TABLE T1 (A BLOB);
    Table created.
    Then, I tried to see the execution plan of a parallel DELETE:
    SQL> EXPLAIN PLAN FOR
      2  delete /*+parallel (t1,8) */ from t1;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3718066193
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |  2048 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |  2048 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    PLAN_TABLE_OUTPUT
    Note
       - dynamic sampling used for this statement (level=2)
    And I finished by executing the statement.
    SQL> commit;
    Commit complete.
    SQL> alter session enable parallel dml;
    Session altered.
    SQL> delete /*+parallel (t1,8) */ from t1;
    2048 rows deleted.
    As we can see, the statement has been run as parallel:
    SQL> select * from v$pq_sesstat;
    STATISTIC                      LAST_QUERY SESSION_TOTAL
    Queries Parallelized                    1             1
    DML Parallelized                        0             0
    DDL Parallelized                        0             0
    DFO Trees                               1             1
    Server Threads                          5             0
    Allocation Height                       5             0
    Allocation Width                        1             0
    Local Msgs Sent                        55            55
    Distr Msgs Sent                         0             0
    Local Msgs Recv'd                      55            55
    Distr Msgs Recv'd                       0             0
    11 rows selected.
Is this normal? It is not supposed to be supported on Oracle 11g with a non-partitioned table containing a LOB column...
    Thank you for your help.
    Michael

Yes, I did. I tried with force parallel dml, and these are the results on my 12c DB, with the non-partitioned table and the SecureFiles LOB column.
    SQL> explain plan for delete from t1;
    Explained.
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |     4 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |     4 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    The DELETE is not performed in Parallel.
    I tried with another statement :
    SQL> explain plan for
    2        insert into t1 select * from t1;
    Here are the results:
    11g
    | Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT         |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  LOAD TABLE CONVENTIONAL | T1       |       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR         |          |       |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)   | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR    |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL   | T1       |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    12c
    | Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT                   |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                    |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)              | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT                  | T1       |       |       |            |          |  Q1,00 | PCWP |            |
    |   4 |     OPTIMIZER STATISTICS GATHERING |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    |   5 |      PX BLOCK ITERATOR             |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    It seems that the DELETE statement has problems but not the INSERT AS SELECT !
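Incidentally, one way to see whether the DML itself (and not just the scan underneath it) ran in parallel is to check the "DML Parallelized" row of v$pq_sesstat right after the statement, since "Queries Parallelized" only counts the query part. A sketch, assuming a session where parallel DML is forced:

ALTER SESSION FORCE PARALLEL DML PARALLEL 8;

DELETE /*+ PARALLEL(t1, 8) */ FROM t1;

-- "DML Parallelized" increments only when the DML portion ran in parallel.
SELECT statistic, last_query
FROM   v$pq_sesstat
WHERE  statistic IN ('DML Parallelized', 'Queries Parallelized');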

  • Cluster table with Spatial column

    Hi,
I tried to create a spatial table (with one SDO_GEOMETRY column) with a cluster on one attribute column, but I keep getting the error ORA-03001: unimplemented feature.
Does this mean that I cannot cluster a table with an SDO_GEOMETRY column?
    Thanks
    Helen

    Hi Helen,
    The parameter you mention is only for real application clusters, not for clustering columns of tables.
    As far as I can tell, when clustering columns of different tables together Oracle will try to
    store all of the data associated with those tables together on disk.
The Oracle Spatial geometry datatype (mdsys.sdo_geometry) includes two varray types of length 1048576. Because these varrays can hold so much data, Oracle "makes arrangements" to store data in these columns outside of the table in a lob segment (in reality, data is only stored out-of-line if there is over 4kb of data in the varray).
Because of this (no ability to ensure the spatial data is stored with the clustering columns), the clustering mechanism is disabled when you have spatial data.
    I read through the doc and it is unclear - the only restriction I could find is using these columns
    as the clustering key.
    Hope this helps,
    Dan
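For reference, the kind of DDL that hits this restriction looks roughly like the sketch below (cluster, index, table, and column names are invented); the final CREATE TABLE is where ORA-03001 would be raised once the SDO_GEOMETRY column is involved, per the discussion above:

CREATE CLUSTER region_cluster (region_id NUMBER)
    SIZE 1024;

CREATE INDEX region_cluster_idx ON CLUSTER region_cluster;

-- Including the SDO_GEOMETRY column is what makes this fail with
-- ORA-03001: unimplemented feature.
CREATE TABLE region_shapes (
    region_id NUMBER,
    shape     MDSYS.SDO_GEOMETRY
) CLUSTER region_cluster (region_id);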

  • Select count(x) on a table with many column numbers?

    Hi all,
I have a table of physical (measurement) data with 850 (!!) columns and ~1 million rows.
The SELECT count(cycle) FROM test_table statement is very, very slow. WHY?
The same SELECT count(cycle) FROM test_table is very fast on a table with e.g. 10 columns. WHY?
What does the number of columns have to do with the SELECT count(cycle) statement?
create table test_table(
cycle number primary key,
stamp date,
sensor_1 number,
sensor_2 number,
sensor_849 number,
sensor_850 number);
    on W2K Oracle 9i Enterprise Edition Release 9.2.0.4.0 Production
    Can anybody help me?
    Many Thanks
    Achim

Hi Lennert, hi all,
many thanks for all the answers. I'm not an Oracle expert.
Sorry for my English.
Hi Lennert,
you are right; what must I do to use the index in the query? Can you give me a pointer in the right direction, please?
Many greetings
Achim
    select count(*) from w4t.v_tfmc_3_blocktime;
    COUNT(*) ==> Table with 3 columns (very fast)
    306057
Ausführungsplan
    0 SELECT STATEMENT Optimizer=CHOOSE
    1 0 SORT (AGGREGATE)
    2 1 TABLE ACCESS (FULL) OF 'V_TFMC_3 _BLOCKTIME'    
    Statistiken
    0 recursive calls
    0 db block gets
    801 consistent gets
    794 physical reads
    0 redo size
    388 bytes sent via SQL*Net to client
    499 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    select count(*) from w4t.v_tfmc_3_value;
    COUNT(*)==> Table with 850 columns (very slow)
    64000
Ausführungsplan
    0 SELECT STATEMENT Optimizer=CHOOSE
    1 0 SORT (AGGREGATE)
    2 1 TABLE ACCESS (FULL) OF 'V_TFMC_3 _VALUE'    
    Statistiken
    1 recursive calls
    1 db block gets
    48410 consistent gets
    38791 physical reads
    13068 redo size
    387 bytes sent via SQL*Net to client
    499 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
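On Achim's follow-up about using the index: both plans above show Optimizer=CHOOSE with a full table scan, and on the 850-column table that means reading roughly 48,000 blocks instead of about 800. Since cycle is the primary key (indexed and NOT NULL), gathering optimizer statistics normally lets the cost-based optimizer answer the count from the much smaller primary-key index instead of the wide table. A sketch, assuming access to the underlying table rather than the view:

-- Gather statistics so the cost-based optimizer is used
-- (DBMS_STATS.GATHER_TABLE_STATS is the preferred 9i interface).
ANALYZE TABLE test_table COMPUTE STATISTICS;

-- With statistics in place, count(cycle)/count(*) can be satisfied by a
-- fast full scan of the primary-key index.
EXPLAIN PLAN FOR
SELECT COUNT(cycle) FROM test_table;

SELECT * FROM table(DBMS_XPLAN.DISPLAY);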

  • Error while importing a table with BLOB column

    Hi,
I have a table with a BLOB column. When I export such a table it gets exported correctly, but when I import it into a different schema with a different tablespace it throws an error:
    IMP-00017: following statement failed with ORACLE error 959:
    "CREATE TABLE "CMM_PARTY_DOC" ("PDOC_DOC_ID" VARCHAR2(10), "PDOC_PTY_ID" VAR"
    "CHAR2(10), "PDOC_DOCDTL_ID" VARCHAR2(10), "PDOC_DOC_DESC" VARCHAR2(100), "P"
    "DOC_DOC_DTL_DESC" VARCHAR2(100), "PDOC_RCVD_YN" VARCHAR2(1), "PDOC_UPLOAD_D"
    "ATA" BLOB, "PDOC_UPD_USER" VARCHAR2(10), "PDOC_UPD_DATE" DATE, "PDOC_CRE_US"
    "ER" VARCHAR2(10) NOT NULL ENABLE, "PDOC_CRE_DATE" DATE NOT NULL ENABLE) PC"
    "TFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS"
    " 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "TS_AGIMSAPPOLOLIVE030"
    "4" LOGGING NOCOMPRESS LOB ("PDOC_UPLOAD_DATA") STORE AS (TABLESPACE "TS_AG"
    "IMSAPPOLOLIVE0304" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10 NOCACHE L"
    "OGGING STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEF"
    "AULT))"
    IMP-00003: ORACLE error 959 encountered
    ORA-00959: tablespace 'TS_AGIMSAPPOLOLIVE0304' does not exist
    I used the import command as follows :
    imp <user/pwd@conn> file=<dmpfile.dmp> fromuser=<fromuser> touser=<touser> log=<logfile.log>
    What can I do so that this table gets imported correctly?
Also, tell me whether the BLOB is stored in a different tablespace than the default tablespace of the user.
    Thanks in advance.

    Hello,
You can either:
1) create a tablespace with the same name in the destination where you are trying to import, or
2) get the DDL of the table, modify the tablespace name to reflect an existing tablespace in the destination, run the DDL in the destination database, and then run your import command with the option ignore=y, which will ignore all the create errors.
    Regards,
    Vinay
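A sketch of option 2, using the column definitions visible in the error message above; the target tablespace (users here) is an assumption and should be whatever exists in the destination:

-- Pre-create the table, pointing both the table and the LOB segment
-- at a tablespace that exists in the destination database.
CREATE TABLE CMM_PARTY_DOC (
    PDOC_DOC_ID       VARCHAR2(10),
    PDOC_PTY_ID       VARCHAR2(10),
    PDOC_DOCDTL_ID    VARCHAR2(10),
    PDOC_DOC_DESC     VARCHAR2(100),
    PDOC_DOC_DTL_DESC VARCHAR2(100),
    PDOC_RCVD_YN      VARCHAR2(1),
    PDOC_UPLOAD_DATA  BLOB,
    PDOC_UPD_USER     VARCHAR2(10),
    PDOC_UPD_DATE     DATE,
    PDOC_CRE_USER     VARCHAR2(10) NOT NULL,
    PDOC_CRE_DATE     DATE NOT NULL
) TABLESPACE users
  LOB (PDOC_UPLOAD_DATA) STORE AS (TABLESPACE users);

-- Then re-run the import so it reuses the pre-created table:
-- imp <user/pwd@conn> file=<dmpfile.dmp> fromuser=<fromuser> touser=<touser> ignore=y log=<logfile.log>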

  • HOW TO CREATE A TABLE WITH 800 COLUMNS?

I have to create a table with 800 columns. I know the CREATE TABLE statement, but writing it out will take a lot of time.
So tell me another method.

If you really think that you have to store 800 values for a given entity, it would be wise to store your information in a key-value (attribute) fashion. Make a main table and an attribute table; keep the primary identifier in the main table and store the other attributes in the attribute table, where you can keep the primary key of the first table as a foreign key (not strictly necessary) to maintain the relationship.
e.g.
emp_id  emp_name  dob         city  state  country  pincode
1       Mr X      01/01/1990  ABC   ZXC    MMM      12345

Can be stored as

Main Table
emp_id  emp_name
1       Mr X

Attribute Table
attr_id  emp_id  attr_name  attr_value
1        1       dob        01/01/1990
2        1       city       ABC
3        1       state      ZXC
4        1       country    MMM
5        1       pincode    12345
Creating a table with a large number of columns is bad design, as suggested by the other gurus.
    Thanks
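A minimal sketch of that main-table/attribute-table layout (table and column names here are illustrative, not from the original post):

-- Main table holds only the identifying columns.
CREATE TABLE emp_main (
    emp_id   NUMBER PRIMARY KEY,
    emp_name VARCHAR2(100)
);

-- Attribute table holds one row per (entity, attribute) pair.
CREATE TABLE emp_attribute (
    attr_id    NUMBER PRIMARY KEY,
    emp_id     NUMBER REFERENCES emp_main (emp_id),
    attr_name  VARCHAR2(30),
    attr_value VARCHAR2(4000)
);

INSERT INTO emp_main VALUES (1, 'Mr X');
INSERT INTO emp_attribute VALUES (1, 1, 'dob',     '01/01/1990');
INSERT INTO emp_attribute VALUES (2, 1, 'city',    'ABC');
INSERT INTO emp_attribute VALUES (3, 1, 'pincode', '12345');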

  • Dynamic Table with two columns

    Hi!
I have to create a dynamic table with two columns, each having five links with some text. Three of the links depend on certain conditions; they are visible only if the condition is true.
If a link is not visible, another link takes its place and fills the cell.
The links/text come from the database.
I am using Struts with JSP (IDE: NetBeans).
    Please help me
    BuntyIndia

I want to do something like this:
    <div class="box_d box_margin_right">
              <ul class="anchor-bullet">
              <c:forEach items="${data.faqList}" var="item" varStatus="status"
                        begin="0" end="${data.faqListSize/2-1}">
                        <li>${item}</li>
                   </c:forEach>
              </ul>
              </div>
              <div class="box_d">
              <ul class="anchor-bullet">
              <c:forEach items="${data.faqList}" var="item" varStatus="status"
                        begin="${data.faqListSize/2}" end="${data.faqListSize}">
                        <li>${item}</li>
                   </c:forEach>
              </ul>
              </div>
I want to divide the table into two columns; if one link is dropped because of a condition, the next one takes its position.
    I have created a textorderedlist
    Bunty

  • Updating a table with no Primary key

What I am trying to find out is whether you can uniquely update a single record in a table that has no primary key. I see no way to reference the record I want other than specifying all the field values in a WHERE clause. This is a problem, though, because if I have another record with the same field values, that will get updated too. (I know it's a bad database design, but some people do it and I need to work around it!!!) There are some situations where you would have identical records as far as a WHERE clause is concerned; for example, you can't use a blob in a WHERE clause, so that might be where your records differ...
    In SQLServer 2000, Enterprise manager throws an error if you have a table with no primary key and duplicate records that you try to update through the GUI. EM uses stored procedures to execute updates, and it rolls back ones which result in more than one update result.
I'm not too familiar with cursors in SQL, only that they are quite slow and not implemented on all DB products. But I think they might be able to solve the problem (though I don't logically see how).
    Can anybody explain this to me?

"another record with the same field values"
If you have two records and all the field values are the same, then the records are logically the same anyway, so it shouldn't matter if you update both (one would question why there are two records in the first place). In other words, there is no way, either computationally or manually, to tell them apart.
    I suspect that that is rather rare. Instead what is more likely is that you are only using some of the fields. So just keep adding fields until it is unique.
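Since the original post mentions SQL Server 2000, one common workaround there is to limit the statement to a single row so that only one of the identical records is touched. A sketch with invented table and column names:

-- SQL Server 2000: restrict the UPDATE to one row, then reset the limit.
SET ROWCOUNT 1

UPDATE LogTable
SET    Status = 'processed'
WHERE  EventDate = '2004-01-01'
  AND  Message   = 'duplicate entry'

SET ROWCOUNT 0  -- back to normal behavior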

  • Table with frozen columns

    In a recent project I've had the need of a Java Swing table with frozen columns.
After a lot of research at the Swing forum at java.sun.org and on Google I found out that:
1. Currently there is no proper open-source solution.
2. All the postings at Sun are good concepts, but far away from solutions.
3. The only (for me) acceptable commercial solution is the JCTable from Quest (formerly KGroup).
Since I want to have full control over the source code of the table, I decided to collect all the tips from this forum and write my own table.
    Here is the my first result:
    http://jroller.com/resources/kriede/CoolTable.java
    The main idea is to have two tables, one for the locked columns (=fixed columns = frozen columns) and one for the scrollable columns. With all the tips from this forum it was more or less a puzzle to make it work pretty. The next task will be to provide a JTable-like interface, meaning to write delegate/adapter methods to the API of the two nested JTable instances.

    The FixedColumnExample from the link that you provided doesn't work with JDK 1.3 or higher.
Anyway, putting a separate table for the fixed columns into the row header is the first step to solving the problem of fixed columns.
The next problem is to synchronize the scrolling behaviour of the two tables. The FixedColumnExample tries to do it by overwriting the valueChange method (maybe this was working with JDK 1.8), but with JDK 1.2 or higher it is recommended to use ChangeListeners. That's what I am doing, as also described here: http://www.chka.de/swing/components/JScrollPane-bugfix.html.
    Another problem is navigation with the tab or arrow keys: The default actions for those events move the selection only within one JTable. It would be nicer, if e.g. the tab key moves the selection within one row across both, the fixed and the scrollable columns.
    Those problems and some more are solved in the CoolTable sample:
    http://jroller.com/resources/kriede/CoolTable.java
    Please try it out and send comments.

  • Exporting Table with CLOB Columns

    Hello All,
I am trying to export a table with CLOB columns, with no luck. It fails with EXP-00011: TABLE does not exist.
I can query the table, and the owner is the same as the schema I am exporting from.
    Please let me know.

    An 8.0.6 client definitely changes things. Other posters have already posted links to information on what versions of exp and imp can be used to move data between versions.
I will just add that if you were using a client to do the export, and the client version is lower than the target database version, you can upgrade the client or, better yet, if possible use the target database's export utility to perform the export.
I will not criticize the existence of an 8.0.6 system, as we had a parent company dump a brand new 8.0.3 application on us less than two years ago. We have since been allowed to update the database and Pro*C modules to 9.2.0.6.
If the target database is really 8.0.3 then I suggest you consider using dbms_metadata to generate the DDL, if needed, and SQL*Plus to extract the data into delimited files that you can then reload via sqlldr. This would allow you to move the data with some potential adjustments for any 10g-only features in the code.
    HTH -- Mark D Powell --
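If it does come down to the dbms_metadata route, the DDL extraction itself is short; a sketch, with the schema and table names as placeholders:

-- Generate the CREATE TABLE DDL for a table in another schema
-- (run from SQL*Plus; SET LONG keeps the output from being truncated).
SET LONG 100000
SELECT DBMS_METADATA.GET_DDL('TABLE', 'MY_TABLE', 'MY_SCHEMA') FROM dual;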
