Lock escalation on table with no key/index

Hi,
I've got a table with a single int column only, and no indexes, primary key or unique constraint, just auto-created statistics.
Now, in a transaction, there is the command: SELECT <COL> FROM <TABLE> WITH (ROWLOCK UPDLOCK) WHERE <COL>=100
The table has a variable number of records, but it never goes over 200k and is usually below 60k. Column values are mostly unique, but a minor portion can be duplicated.
Question 1: when this select executes, what will be locked: the row, the page, or the entire table?
Question 2: if I have a PK/unique constraint/index and execute the same SELECT as above, what will be locked then?

Hi,
1. Please note that every lock taken by SQL Server has some memory associated with it, so the more locks there are, the more memory is used. To avoid this unnecessary usage, SQL Server applies lock escalation when it deems it better to lock the whole table instead of individual rows.
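As an aside (my addition, not part of the original reply): since SQL Server 2008 the escalation behavior can also be tuned per table, and escalation is normally attempted once a single statement holds roughly 5,000 locks on one table. A minimal sketch, assuming the dbo.Logging table used in the example below:
ALTER TABLE dbo.Logging SET (LOCK_ESCALATION = TABLE)   -- default: escalate straight to a full table lock
--ALTER TABLE dbo.Logging SET (LOCK_ESCALATION = DISABLE) -- suppress escalation (use with care)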
There is no real use for UPDLOCK with a SELECT statement here; it just changes the locking behavior and forces an update (U) lock instead of a shared lock. Let's see an example.
I have a table with the following structure:
CREATE TABLE [dbo].[Logging](
[Id] [int] IDENTITY(1,1) NOT NULL,
[c1] [char](50) NOT NULL DEFAULT ('SOME MORE DATA')
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
It has some 20K records. Now let's concentrate on the query you asked about:
begin transaction
select @@SPID,ID,C1 from dbo.Logging with (ROWLOCK,UPDLOCK) where id=12
--commit
To see the locks taken by this query, we would run the query below in another SSMS window:
Select
resource_type,
request_mode,
request_status,
request_session_id
from sys.dm_tran_locks
where resource_database_id = 11 -- database ID of the test database (DB_ID() when run in that database)
The result (the screenshot did not survive the archive) shows an update (U) lock being taken on the row (a RID, since the table is a heap), an intent exclusive (IX) lock on the object, i.e. the table, and a shared (S) lock on the database.
Now, if we change the query to scan the whole table, like below:
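(The modified query itself did not survive the archive; judging from the description that follows, it was presumably along these lines, a reconstruction rather than the original text:)
begin transaction
select @@SPID,ID,C1 from dbo.Logging with (ROWLOCK,UPDLOCK) where c1='SOME MORE DATA'  -- matches every row, forcing a table scan
--commit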
This time only an exclusive (X) lock is taken on the whole table instead of a lock on every row: the locks have been escalated. If you look at the query, every row matches the predicate 'SOME MORE DATA', so ultimately a table scan is required.
The above assumes the default isolation level (READ COMMITTED).
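For Question 2, a sketch I have added (not verified in the original thread): with a unique index on the column, the single-row seek takes its U lock on a KEY resource instead of a RID, and no escalation is needed for a one-row predicate:
CREATE UNIQUE INDEX IX_Logging_Id ON dbo.Logging (Id)
begin transaction
select @@SPID,ID,C1 from dbo.Logging with (ROWLOCK,UPDLOCK) where id=12
-- inspect sys.dm_tran_locks in the other window: expect a U lock on a KEY resource plus IX locks on the page and the table
rollback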

Similar Messages

  • Create table with constraint and index

    If I create a table with a unique constraint and after that I create a unique index on the same columns, I get an error; please see below.
    Does it mean that when I create a table with the constraint, the unique index is automatically created, so I cannot create
    the index as I did below?
    create table test_const(ename varchar2(50) not null,
    key_num number not null,
    descr varchar2(100),
    constraint constraint_test_const unique (ename, key_num));
    create unique index test_const_idx on test_const
    ("ENAME","KEY_NUM")
    tablespace tmp_data;
    Error report:
    SQL Error: ORA-01408: such column list already indexed
    01408. 00000 - "such column list already indexed"

    Not too hard to check (the answer is yes by the way).
    ME_XE?create table test_const(ename varchar2(50) not null,
      2  key_num number not null,
      3  descr varchar2(100),
      4  constraint constraint_test_const unique (ename, key_num));
    Table created.
    Elapsed: 00:00:00.12
    ME_XE?select index_name, index_type
      2  from user_indexes where table_name = 'TEST_CONST';
    INDEX_NAME                     INDEX_TYPE
    CONSTRAINT_TEST_CONST          NORMAL
    1 row selected.
    Elapsed: 00:00:00.14
    ME_XE?
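    As a side note (my addition, not from the original thread): if you want explicit control over the index that backs the constraint, Oracle lets you create the index first and attach the constraint to it, for example:
    create unique index test_const_idx on test_const (ename, key_num) tablespace tmp_data;
    alter table test_const
      add constraint constraint_test_const unique (ename, key_num) using index test_const_idx;
    -- assumes the table was created without the inline constraint in the first place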

  • Deadlock when updating different rows on a single table with one clustered index

    Deadlock when updating different rows on a single table with one clustered index. Can anyone explain why?
    <event name="xml_deadlock_report" package="sqlserver" timestamp="2014-07-30T06:12:17.839Z">
      <data name="xml_report">
        <value>
          <deadlock>
            <victim-list>
              <victimProcess id="process1209f498" />
            </victim-list>
            <process-list>
              <process id="process1209f498" taskpriority="0" logused="1260" waitresource="KEY: 8:72057654588604416 (8ceb12026762)" waittime="1396" ownerId="1145783115" transactionname="implicit_transaction"
    lasttranstarted="2014-07-30T02:12:16.430" XDES="0x3a2daa538" lockMode="X" schedulerid="46" kpid="7868" status="suspended" spid="262" sbid="0" ecid="0" priority="0"
    trancount="2" lastbatchstarted="2014-07-30T02:12:16.440" lastbatchcompleted="2014-07-30T02:12:16.437" lastattention="1900-01-01T00:00:00.437" clientapp="Internet Information Services" hostname="CHTWEB-CH2-11P"
    hostpid="12776" loginname="chatuser" isolationlevel="read uncommitted (1)" xactid="1145783115" currentdb="8" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
               <inputbuf>
    UPDATE analyst_monitor SET cam_status = N'4', cam_event_data = N'sales1', cam_event_time = current_timestamp , cam_modified_time = current_timestamp , cam_room = '' WHERE cam_analyst_name=N'ABCD' AND cam_window= 2   </inputbuf>
              </process>
              <process id="process9cba188" taskpriority="0" logused="2084" waitresource="KEY: 8:72057654588604416 (2280b457674a)" waittime="1397" ownerId="1145783104" transactionname="implicit_transaction"
    lasttranstarted="2014-07-30T02:12:16.427" XDES="0x909616d28" lockMode="X" schedulerid="23" kpid="8704" status="suspended" spid="155" sbid="0" ecid="0" priority="0"
    trancount="2" lastbatchstarted="2014-07-30T02:12:16.440" lastbatchcompleted="2014-07-30T02:12:16.437" lastattention="1900-01-01T00:00:00.437" clientapp="Internet Information Services" hostname="CHTWEB-CH2-11P"
    hostpid="12776" loginname="chatuser" isolationlevel="read uncommitted (1)" xactid="1145783104" currentdb="8" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
                <inputbuf>
    UPDATE analyst_monitor SET cam_status = N'4', cam_event_data = N'sales2', cam_event_time = current_timestamp , cam_modified_time = current_timestamp , cam_room = '' WHERE cam_analyst_name=N'12345' AND cam_window= 1   </inputbuf>
              </process>
            </process-list>
            <resource-list>
              <keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor" indexname="IX_Clust_scam_an_name_window" id="lock4befe1100" mode="X" associatedObjectId="72057654588604416">
                <owner-list>
                  <owner id="process9cba188" mode="X" />
                </owner-list>
                <waiter-list>
                  <waiter id="process1209f498" mode="X" requestType="wait" />
                </waiter-list>
              </keylock>
              <keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor" indexname="IX_Clust_scam_an_name_window" id="lock18ee1ab00" mode="X" associatedObjectId="72057654588604416">
                <owner-list>
                  <owner id="process1209f498" mode="X" />
                </owner-list>
                <waiter-list>
                  <waiter id="process9cba188" mode="X" requestType="wait" />
                </waiter-list>
              </keylock>
            </resource-list>
          </deadlock>
        </value>
      </data>
    </event>

    To be honest, I don't think the transaction is necessary, but the developers put it there anyway. The select statement puts the result cam_status
    into a variable, and then, depending on its value, it decides whether to execute the second update statement or not. I still can't upload the screenshot, because it says it needs to verify my account first. No clue at all. But it is very simple, just
    like:
    Clustered Index Update
    [analyst_monitor].[IX_Clust_scam_an_name_window]
    cost: 100%
    By the way, for some reason, I can't find the object based on the associatedObjectId listed in the XML
    <keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor"
    indexname="IX_Clust_scam_an_name_window" id="lock4befe1100" mode="X" associatedObjectId="72057654588604416">
    For example: 
    SELECT * FROM sys.partition WHERE hobt_id = 72057654588604416
    This returns nothing. Not sure why.
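    A side note (my addition, not in the original reply): the catalog view is sys.partitions (plural), and it has to be queried inside the database that owns the hobt (dbid 8, i.e. CHAT, in the deadlock graph). A minimal sketch:
    USE CHAT;
    SELECT o.name AS table_name, p.object_id, p.index_id
    FROM sys.partitions AS p
    JOIN sys.objects AS o ON o.object_id = p.object_id
    WHERE p.hobt_id = 72057654588604416;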

  • Maintenance View for custom table with foreign key relationship

    Hi Folks,
    I have created a custom table with foreign key relationships to other check tables. I want to create a maintenance view / table maintenance generator. What do I need to take care of for the foreign-key-related fields while creating the maintenance view / table maintenance generator?
    Regards,
      santosh

    Hi,
    You do not have to do anything explicitly for the foreign key relationships in the table maintenance generator.
    Create the table maintenance generator via SE11 and it will take care of all the foreign key checks by itself.
    Regards,
    Ankur Parab

  • ORA-00942 error on truncating a table with a XML Index

    Oracle Version: 11.2.0.1.0
    The truncate command fails with "ORA-00942: table or view does not exist" when run against a table with an XML index defined:
    SQL> CREATE TABLE XML_TEST
    2 (
    3 ID INTEGER,
    4 TESTXML SYS.XMLTYPE
    5 );
    Table created.
    SQL> truncate table XML_TEST;
    Table truncated.
    SQL> CREATE INDEX xmlindex ON XML_TEST(TESTXML)
    2 indextype IS xdb.xmlindex
    3 parameters ('PATH TABLE MY_PATH_TABLE');
    Index created.
    SQL> truncate table XML_TEST;
    truncate table XML_TEST
    ERROR at line 1:
    ORA-00942: table or view does not exist
    SQL> Drop Index xmlindex;
    Index dropped.
    SQL> truncate table XML_TEST;
    Table truncated.

    No, I don't think that explanation is correct. I don't think it has to do with user privileges. Besides, we don't
    adjust rowids on an import -- we recreate the index, just like a b-tree index import would.
    This should be working. It's most likely a bug in our (i.e. Text) import code -- SYS.XMLTYPE is a little
    strange because under the covers it's actually a function-based index.
    I will test it out and file a bug if I can reproduce the behavior on Solaris.

  • How to insert record in child table with foreign key

    Hi,
    I am using JDeveloper 11.1.2.0. I have two master tables and one child table.
    How do I insert and update a record in the child table with a foreign key?
    I have created a VO based on three EOs (one EO is updatable, the other two EOs are references) using a joined query.
    Thanks in Advance
    Edited by: 890233 on Dec 24, 2011 10:40 PM

    ... And here is an example of inserting with SequenceImpl by getting the primary key of the master record and inserting master and detail together.
    Re: Unable to insert a new row with a sequence generated column id
    -Arun

  • Why Segment shrink is not supported for tables with function-based indexes

    As we all know, segment shrink is not supported for tables with function-based indexes.
    But I'm very confused.
    Why is segment shrink not supported for tables with function-based indexes? What is the essential reason?

    Creating a function-based index creates a hidden virtual column (you'll see it if you query user_tab_cols), and once you index a virtual column you can no longer shrink the table:
    orcl> create table t1(c1 number,c2 as (c1 * 2)) segment creation immediate;
    Table created.
    orcl> alter table t1 enable row movement;
    Table altered.
    orcl>
    orcl> alter table t1 shrink space;
    Table altered.
    orcl> create index i2 on t1(c2);
    Index created.
    orcl> alter table t1 shrink space;
    alter table t1 shrink space
    ERROR at line 1:
    ORA-10631: SHRINK clause should not be specified for this object
    orcl>
    So the issue is not with function-based indexes per se; it is a level beneath that. Perhaps because the virtual column has no physical existence, when the row is moved there is no reason for Oracle to realize that an index needs updating? I haven't attempted to reverse engineer this; I would be interested to know if anyone else has.

  • When table with clustered columnstore indexe is partitioned the performance degrades if data is located in multiple partitions

    Hello,
    Below I provide a complete code to re-produce the behavior I am observing.  You could run it in tempdb or any other database, which is not important.  The test query provided at the top of the script is pretty silly, but I have observed the same
    performance degradation with about a dozen of various queries of different complexity, so this is just the simplest one I am using as an example here. Note that I also included approximate run times in the script comments (this is obviously based on what I
    observed on my machine).  Here are the steps with numbers corresponding to the numbers in the script:
    1. Run script from #1 to #7.  This will create the two test tables, populate them with records (40 mln. and 10 mln.) and build regular clustered indexes.
    2. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 5514 ms, 
    elapsed time = 1389 ms.
    3. Run script from #8 to #9. This will replace regular clustered indexes with columnstore clustered indexes.
    4. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 828 ms, 
    elapsed time = 392 ms.
    As you can see the query is clearly faster.  Yay for columnstore indexes!.. But let's continue.
    5. Run script from #10 to #12 (note that this might take some time to execute).  This will move about 80% of the data in both tables to a different partition.  You should be able to see that the data has been moved when running Step #11.
    6. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 8172 ms, 
    elapsed time = 3119 ms.
    And now look, the I/O stats look the same as before, but the performance is the slowest of all our tries!
    I am not going to paste here execution plans or the detailed properties for each of the operators.  They show up as expected -- column store index scan, parallel/partitioned = true, both estimated and actual number of rows is less than during the second
    run (when all of the data resided on the same partition).
    So the question is: why is it slower?
    Thank you for any help!
    Here is the code to re-produce this:
    --==> Test Query - begin --<===
    DBCC DROPCLEANBUFFERS
    DBCC FREEPROCCACHE
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    SELECT COUNT(1)
    FROM Txns AS z WITH(NOLOCK)
    LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
    WHERE z.RecordStatus = 1
    --==> Test Query - end --<===
    --===========================================================
    --1. Clean-up
    IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
    IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
    IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
    IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
    --2. Create partition function
    CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
    --3. Partition scheme
    CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
    --4. Create Main table
    CREATE TABLE dbo.Main(
    SetID int NOT NULL,
    SubSetID int NOT NULL,
    TxnID int NOT NULL,
    ColBatchID int NOT NULL,
    ColMadeId int NOT NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --5. Create Txns table
    CREATE TABLE dbo.Txns(
    TxnID int IDENTITY(1,1) NOT NULL,
    GroupID int NULL,
    SiteID int NULL,
    Period datetime NULL,
    Amount money NULL,
    CreateDate datetime NULL,
    Descr varchar(50) NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
    -- 40 mln. rows - approx. 4 min
    --6.1 Populate Main table
    DECLARE @NumberOfRows INT = 40000000
    INSERT INTO Main (
    SetID,
    SubSetID,
    TxnID,
    ColBatchID,
    ColMadeID,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
    TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
    ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
    ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --6.2 Populate Txns table
    -- 10 mln. rows - approx. 1 min
    SET @NumberOfRows = 10000000
    INSERT INTO Txns (
    GroupID,
    SiteID,
    Period,
    Amount,
    CreateDate,
    Descr,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
    Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
    Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
    CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
    Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --7. Add PK's
    -- 1 min
    ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Replace regular indexes with clustered columnstore indexes
    --===========================================================
    --8. Drop existing indexes
    ALTER TABLE Txns DROP CONSTRAINT PK_Txns
    DROP INDEX Main.CDX_Main
    --9. Create clustered columnstore indexes (on partition scheme!)
    -- 1 min
    CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Move about 80% the data into a different partition
    --===========================================================
    --10. Update "RecordStatus", so that data is moved to a different partition
    -- 14 min (32002557 row(s) affected)
    UPDATE Main
    SET RecordStatus = 2
    WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
    -- 4.5 min (7999999 row(s) affected)
    UPDATE Txns
    SET RecordStatus = 2
    WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
    --11. Check data distribution
    SELECT
    OBJECT_NAME(SI.object_id) AS PartitionedTable
    , DS.name AS PartitionScheme
    , SI.name AS IdxName
    , SI.index_id
    , SP.partition_number
    , SP.rows
    FROM sys.indexes AS SI WITH (NOLOCK)
    JOIN sys.data_spaces AS DS WITH (NOLOCK)
    ON DS.data_space_id = SI.data_space_id
    JOIN sys.partitions AS SP WITH (NOLOCK)
    ON SP.object_id = SI.object_id
    AND SP.index_id = SI.index_id
    WHERE DS.type = 'PS'
    AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
    ORDER BY 1, 2, 3, 4, 5;
    PartitionedTable PartitionScheme IdxName index_id partition_number rows
    Main PS_Scheme CDX_Main 1 1 7997443
    Main PS_Scheme CDX_Main 1 2 32002557
    Main PS_Scheme CDX_Main 1 3 0
    Main PS_Scheme CDX_Main 1 4 0
    Txns PS_Scheme PK_Txns 1 1 2000001
    Txns PS_Scheme PK_Txns 1 2 7999999
    Txns PS_Scheme PK_Txns 1 3 0
    Txns PS_Scheme PK_Txns 1 4 0
    --12. Update statistics
    EXEC sys.sp_updatestats
    --==> Run test Query --<===

    Hello Michael,
    I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
    Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 251 ms, elapsed time = 128 ms.
    As an explanation of the behavior: because an UPDATE against a clustered columnstore index is executed as a DELETE plus an INSERT, you ended up with all the original row groups having almost all of their rows marked as deleted, and roughly the same number of new row groups holding the new data
    (coming from the update). I suppose scanning the deleted bitmap, or something related to that "fragmentation", caused the additional slowness at your end.
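    For reference, the rebuild can be done like this (a sketch using the index names from the script above; the exact statements run are not shown in the archive):
    -- Rebuilding the clustered columnstore indexes compacts the row groups and
    -- physically removes the rows flagged in the deleted bitmap.
    ALTER INDEX CDX_Main ON dbo.Main REBUILD
    ALTER INDEX PK_Txns ON dbo.Txns REBUILD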
    Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer

  • Updating database table with DUPLICATE keys

    i have an internal table having data as follows.
    emp_id  name   date        proj_id  activity_id  Hours  Remarks
    101     Pavan  09.10.2007  123      1            2.00   Coding
    101     Pavan  09.10.2007  124      2            1.00   Documentation
    102     Raj    09.10.2007  123      3            6.00   Testing
    Now i need to update a Ztable with above mentioned data.
    The structure of the Ztable is as follows.
    Mandt     emp_id  name    date    proj_id   activity_id   Hours   Remarks
    Note: I have ticked both check boxes for the field MANDT in the table;
    for the rest I did not select the check boxes.
    I believe the field MANDT alone is now the primary key of the Z table.
    Now I have tried UPDATE/INSERT statements to update the database table,
    but instead of inserting all the rows, the system is overwriting the same emp_id row.
    I even tried statements like INSERT INTO <ztable> VALUES <internal table> ACCEPTING DUPLICATE KEYS,
    but the rows are still not inserted as separate rows in the table.
    The requirement is to insert the multiple rows into the database table without any overwriting.
    Can anyone give me the code to do this?
    Regards
    Pavan

    Hi Pavan,
        Please let me know what are the key fields in your Ztable. Try with below code it may help you. change the "Ztablename" as your database table name and I_INTERNAL TABLE  as your internal table name. still you are facing the problem please let me know.
    lock the custom table before updating the table.
      CALL FUNCTION 'ENQUEUE_E_TABLE'
        EXPORTING
          MODE_RSTABLE         = 'E'
          TABNAME              = 'ZTABLENAME'
    *     VARKEY               =
    *     X_TABNAME            = ' '
    *     X_VARKEY             = ' '
    *     _SCOPE               = '2'
    *     _WAIT                = ' '
    *     _COLLECT             = ' '
        EXCEPTIONS
          FOREIGN_LOCK         = 1
          SYSTEM_FAILURE       = 2
          OTHERS               = 3.
      IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ELSE.
        INSERT ZTABLENAME FROM TABLE I_INTERNALTABLE ACCEPTING DUPLICATE KEYS.
        COMMIT WORK.
      ENDIF.
    Unlock the custom table after the update is done:
      CALL FUNCTION 'DEQUEUE_E_TABLE'
        EXPORTING
          MODE_RSTABLE       = 'E'
          TABNAME            = 'ZTABLENAME'.
    *     VARKEY             =
    *     X_TABNAME          = ' '
    *     X_VARKEY           = ' '
    *     _SCOPE             = '3'
    *     _SYNCHRON          = ' '
    *     _COLLECT           = ' '

  • Issues with reverse key indexes and range scan

    I have a question. Why is it that reverse key indexes do not work in a range scan?
    Thanks

    Chris, well said in simple terms.
    Extract from metalink:
    Oracle8 provides the ability to create reverse key indexes. Reverse key indexes
    reverse the bytes of each indexed column with the exception of ROWID and still
    maintain the column order. Reverse key indexes are useful for Oracle Parallel
    Server environments.
    In an OPS environment, modifications to indexes are focused on a small
    set of leaf blocks. Reversing the keys of the index allows insertions to be
    distributed across all the leaf keys in the index. Reverse key indexes prevent
    queries from performing an index range scan since lexically adjacent keys
    are not stored next to each other. Reverse key indexes can also be used in
    situations where users insert ascending values and delete lower values from the
    table, thus helping to prevent skewed indexes.
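    (A small illustrative sketch, added here and not part of the extract:)
    create table t (id number);
    create index t_id_rev on t(id) reverse;
    select * from t where id = 42;              -- an equality probe can still use the index
    select * from t where id between 40 and 50; -- a range predicate cannot range-scan it,
                                                 -- because adjacent key values are not stored together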
    ===
    Good discussion at
    http://asktom.oracle.com/pls/ask/f?p=4950:8:2737861489787945222::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:627823669999
    http://asktom.oracle.com/pls/ask/f?p=4950:8:2737861489787945222::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:6163160472530
    Jaffar
    Message was edited by:
    The Human Fly

  • How to create a view on tables with different keys?

    I have to create a View on:
    Z3PVR: Transparent Table
    BSEG: Cluster Table
    CKIS: Transparent Table
    BKPF: Transparent Table
    RV61A: Structure
    T001: Transparent Table
    All the tables have different "Key Fields" and the structure has no "Key Fields". When I create the view, what do I mention in the "JOIN FIELDS" tab, and how do I create the view with the structure?
    Please advise.

    How to create a view on a Non-Transparent Tables.
    how to create view?
    HELP.. How to create a view with the tables with ALV

  • Best Practice loading Dimension Table with Surrogate Keys for Levels

    Hi Experts,
    how would you load an Oracle dimension table with a hierarchy of at least 5 levels with surrogate keys in each level and a unique dimension key for the dimension table.
    With OWB it is an integrated feature to use surrogate keys in every level of a hierarchy. You don't have to care about
    the parent child relation. The load process of the mapping generates the right keys and cares about the relation between the parent and child inside the dimension key.
    I tried to use one interface per Level and created a surrogate key with a native Oracle sequence.
    After that I put all the interfaces in to one big Interface with a union data set per level and added look ups for the right parent child relation.
    I think it is a bit too complicated making the interface like that.
    I would be more than happy for any suggestions. Thank you in advance!
    negib
    Edited by: nmarhoul on Jun 14, 2012 2:26 AM

    Hi,
    I do like the level keys feature of OWB: it makes aggregate tables very easy to implement if you're sticking with a star schema.
    Sadly there is nothing off the shelf in the built-in knowledge modules of ODI; it doesn't support creating dimension objects in the database by default, but there is nothing stopping you from coding up your own knowledge module (maybe use flex fields on the datastore to tag column attributes as needed).
    Your approach is what I would have done; possibly use a view (if you don't mind having it external to ODI) to make the interface simpler.

  • Multiple entries in a Z table with same key fields

    Hi
    I have a Z table with 3 key fields defined earlier. It contains around 1 lakh records. Later on, two of the non-key fields were also made key fields.
    This table is populated with records at the time of saving a Z transaction.
    But sometimes the system writes the same record twice, sometimes three times, etc. I found that all fields (both key fields and non-key fields) of those records are identical; that is, records are being written to the database table n number of times, maybe depending on some faulty logic in the program.
    If I try to enter the same record using SM30, it shows an error message stating that the record already exists.
    What can be the reason?

    Hi,
    It seems there is some kind of data inconsistency. Try to fetch all the records and then delete the duplicate entries through a program; once you are done, there won't be any duplicate entries from then on. Also, before updating the table, use filters to avoid the duplication.
    Regards,
    Nagaraj

  • Problem CREATE TABLE with PRIMARY KEY Still in Trouble ! Please Help!

    Hi there !
    I use Oracle 8i, and I don't know why I can't create a table with any primary key. Example:
    SQL> CREATE TABLE O_caisses
    2 (No_caisse NUMBER(3) constraint caisses_pk PRIMARY KEY,
    3 NB_BILLETS NUMBER(5)
    4 )
    5 /
    CREATE TABLE O_caisses
    ERROR at line 1:
    ORA-18008: cannot find OUTLN schema
    ***********Some debugger showed me this way: *********************
    Well, there are certain points you have to notice when creating a table with constraints.
    1) You can create a table with a COLUMN level constraint.
    2) You can create a table with a TABLE level constraint.
    3) In a COLUMN level constraint you can't give the constraint a name
    but only mention the type of constraint.
    4) In a TABLE level constraint you can give a name to the constraint.
    Following are the examples of both
    --COLUMN LEVEL
    CREATE TABLE O_caisses
    (No_caisse NUMBER(3) PRIMARY KEY,
    NB_BILLETS NUMBER(5));
    --TABLE LEVEL
    CREATE TABLE O_caisses
    (No_caisse NUMBER(3),
    NB_BILLETS NUMBER(5),
    constraint pk_caisse primary key (No_caisse));
    ********************And this is another one:*****************
    SQL>grant create any outline to username;
    BUT the problem is still present, and I don't know what to do now!
    Please, could somebody help me!
    Thanks a lot!
    Luong.

    The clue is in the error message: the OUTLN schema is missing.
    This is something Oracle 8i introduced to help manage the CBO (or something equally geeky and internal). For some reason your database no longer has this user. It ought to be created automatically during installation (or upgrade), but catproc may not have completed properly, or some over-zealous admin type has dropped it.
    The solution is to re-install (or re-upgrade), as you cannot create this user on your own. Alas.
    HTH, APC
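    A quick way to confirm the diagnosis (a minimal sketch, assuming you can query the data dictionary):
    select username, created from dba_users where username = 'OUTLN';
    -- if this returns no rows, the OUTLN schema really is missing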

  • Select from 2 tables with common key and than left join(unique result)

    Hi,
    I have 2 tables that have a common id (customer_id), and I have a third table whose key consists of 3 columns (customer_id, rms_customer_id, billind_tree_id) and which has the manager_name field.
    I am using a left join because not all the customers have a manager.
    I need to take only the manager_name field from the left join, but the problem is that I am not getting unique name results because customer_id is not the primary key (only part of it),
    so I have used the following:
    left join
    ( select * from ACCOUNT_Manager am where
    rms_customer_id <= all (select rms_customer_id from ACCOUNT_Manager am2 where
    am2.customer_id = am.customer_id )) am
    on c.ID = am.CUSTOMER_ID (C is one of the first 2 tables with the ID as key)
    Is there any more efficient way of doing it?
    Thanks

    Please consider the following when you post a question; it will help us help you better.
    1. New features keep coming in every Oracle version, so please provide your Oracle DB version to get the best possible answer. You can use the following query and copy/paste the output:
       select * from v$version
    2. This forum has a very good Search feature. Please use it before posting your question, because for most of the questions that are asked the answer is already there.
    3. We don't know your DB structure or what your data looks like, so you need to let us know. The best way is to give some sample data like this:
       I have the following table called sales
       with sales as (
             select 1 sales_id, 1 prod_id, 1001 inv_num, 120 qty from dual
             union all
             select 2 sales_id, 1 prod_id, 1002 inv_num, 25 qty from dual
       )
       select *
         from sales
    4. Rather than describing what you want in words, it is easier when you give your expected output. For example, for the above sales table: I want to know the total quantity and the number of invoices for each product. The output should look like this:
       Prod_id   sum_qty   count_inv
       1         145       2
    5. Whenever you get an error message, post the entire error message, with the error number, the message and the line number.
    6. The next thing is very important to remember: please post only well-formatted code. Unformatted code is very hard to read. Your code format gets lost when you post it in the Oracle Forum, so in order to preserve it you need to use the {noformat}{noformat} tags. The usage of the tags is like this:
       {noformat}
       <place your code here>
       {noformat}
    7. If you are posting a Performance Related Question, please read {thread:id=501834} and {thread:id=863295}. Following those guides will be very helpful.
    8. Please keep in mind that this is a public forum. Here no question is URGENT, so the use of words like URGENT or ASAP (As Soon As Possible) is considered to be rude.
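    Coming back to the actual question: a common, usually more efficient alternative to the "<= all" correlated subquery is an analytic row-number filter. Below is a hedged sketch; the name of the first table (customer) is assumed, since it is not given in the post:
    select c.id, am.manager_name
    from customer c   -- assumed name for "one of the first 2 tables with the ID as key"
    left join (
        select customer_id, manager_name
        from (
            select customer_id,
                   manager_name,
                   row_number() over (partition by customer_id order by rms_customer_id) rn
            from account_manager
        )
        where rn = 1   -- keep one manager row per customer_id (smallest rms_customer_id)
    ) am on am.customer_id = c.id;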
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       
