Delete millions of rows and fragmentation

Hi Gurus,
I have deleted 2 million (20 lakh) rows from a table that had 3 million (30 lakh) rows, and I want to release the fragmented space. Please tell me the procedure, other than exp/imp or ALTER TABLE MOVE, and also the recommended way to do this in a production environment (coalesce / alter move, etc.).
DB version is 11.2.
Thanks for the great help,
Raj

870233 wrote:
Hi Gurus,
I have deleted 2 million (20 lakh) rows from a table that had 3 million (30 lakh) rows, and I want to release the fragmented space. Please tell me the procedure, other than exp/imp or ALTER TABLE MOVE, and also the recommended way to do this in a production environment (coalesce / alter move, etc.).
DB version is 11.2.
Thanks for the great help,
Raj

Instead of deleting 2 million rows out of 3 million, I would suggest creating a temporary table with the data that should be retained and dropping the original table.
I believe this will amount to less work.
The steps would be as follows:
1. CREATE TABLE your_table_temp AS SELECT * FROM your_table WHERE <condition to retain records>;
2. DROP TABLE your_table;
3. ALTER TABLE your_table_temp RENAME TO your_table;
You might also want to exploit the advantage provided by NOLOGGING while loading your temp table.
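A minimal sketch of that approach, assuming a table called your_table and a purely hypothetical retention predicate on a created_date column; note that CTAS does not carry over indexes, constraints, grants, triggers or statistics, so those have to be recreated afterwards:

-- 1. Copy only the rows to keep (NOLOGGING reduces redo for the load itself)
CREATE TABLE your_table_temp NOLOGGING AS
  SELECT *
    FROM your_table
   WHERE created_date >= ADD_MONTHS(SYSDATE, -3);  -- hypothetical retention predicate

-- 2. Swap the tables
DROP TABLE your_table;
ALTER TABLE your_table_temp RENAME TO your_table;

-- 3. Recreate indexes, constraints, grants and triggers, then gather statistics
-- CREATE INDEX ...;  ALTER TABLE your_table ADD CONSTRAINT ...;
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'YOUR_TABLE');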

Similar Messages

  • Delete millions of rows

    What are the best ways to delete a few million rows from a table while keeping the table available for loads and customer access? This table has 50 million rows, and I need to retain a few months' data and delete the rest. Please let me know if you have any ideas.

    See the earlier thread "deletion of 2 million rows in oracle 10g"; the answer does not change.
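    If the table has to stay online while the old rows go, one common approach (shown here only as a sketch, with a hypothetical table big_table and date column order_date standing in for the real names) is to delete in committed batches so that undo, redo and lock time per transaction stay bounded:

    BEGIN
      LOOP
        DELETE FROM big_table
         WHERE order_date < ADD_MONTHS(SYSDATE, -3)  -- keep only the last few months
           AND ROWNUM <= 50000;                      -- batch size is a tuning choice
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;
      END LOOP;
      COMMIT;
    END;
    /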

  • Tried to delete a particular row and out of memory

    Dear all,
    Hi. I have one physical primary and one logical standby, both with the same processing power and configuration. However, whenever I try to delete one specific table row on the logical standby, it goes through the whole table and ends up giving me an out-of-memory error. Has anyone had the same experience?

    857009 wrote:
    Dear all,
    Hi. I have one physical primary and one logical standby, both with the same processing power and configuration. However, whenever I try to delete one specific table row on the logical standby, it goes through the whole table and ends up giving me an out-of-memory error. Has anyone had the same experience?
    Not without an actual error message (ORA-nnnnn).

  • How do I delete multiple rows and columns in an image?

    I am looking into how digital SLRs extract video data from a CMOS sensor. To give an example, the GH1 from Panasonic can record video at 1920 x 1080 from a 12 MPixel sensor that is, say, 4000 pixels horizontal x 3000 vertical. How they do that does not seem to be in the public domain, but there have been a few guesses (see http://www.dvxuser.com/V6/showthread.php?t=206797 and http://luminous-landscape.com/forum/index.php?showtopic=38713).
    One approach would be to simply read every second row of sensor pixels (1500 rows read from the original 3000) and once you have those in memory, delete every second column (2000 columns left). You would end up with a 2000 x 1500 image which could then be resampled to 1920 x 1080.
    I'd like to simulate what the camera appears to be doing by generating a 4000 x 3000 test image and then asking Photoshop CS4 to delete the appropriate rows and columns. It may not necessarily be every second row; the Canon 5D MkII appears to read every third row, so I may need to delete two out of every three rows.
    Can Photoshop do that sort of thing? If so, how?

    Thanks for the suggestions. Yes, I did take a detailed look at your images, but they weren't 100% convincing because it wasn't clear just what was happening. And Adobe's explanation, after reading it again, explains nothing at all to someone who doesn't know how Nearest Neighbor works.
    But you are correct -- Nearest Neighbor does effectively delete pixels. I proved it with the attached 6 x 6 image of coloured pixels (the tiny midget image right after this full stop -- you'll have to go to maximum zoom in PS to see it).
    These are the steps to delete every second row and then every second column.
    1. Select Image > Image Size.
    2. Set Pixel Dimensions > Height to half the original (in this case set to 3 pixels).
    3. Set Resample Image to Nearest Neighbor.
    4. Click OK and the image should now consist of three vertical white stripes and three vertical Green-Red-Blue stripes.
    5. Repeat steps 1-4, but this time set Pixel Dimensions > Width to half the original (in this case set to 3 pixels). The image should now consist of three horizontal stripes Green-Red-Blue.
    Just to make sure the method worked for every third pixel, I repeated the above steps but with 2 pixels instead of 3 and obtained the correct White-Green, White-Green pattern.
    I resampled the Height and Width separately for the test so that I could see what was happening. In a real example, the height and width can be changed at the same time in the one step, achieving the same results as above.
    Finally, how to explain how Nearest Neighbor works in the simple cases described above?
    Division by 2
    In the case of an exact division by two (pixels numbered from the top-left pixel), only the even-numbered rows and columns remain. To put it a different way, every odd-numbered row and column is deleted.
    Division by 3
    Only rows and columns 2, 5, 8, 11... remain.
    Division by N
    Only rows and columns 2, 2+N, 2+2N, 2+3N... remain.
    To put it simply, a resample using Nearest Neighbor (using an exact integer multiple) keeps every Nth row and column, starting from number two. The rest are deleted.

  • How to delete the selected rows with a condition in alv

    Dear all,
    I am using the following code in an object-oriented ALV.
    WHEN 'DEL'.
      PERFORM delete_rows.

    FORM delete_rows.
      DATA: lv_rows LIKE lvc_s_row.
      DATA: wa_rows LIKE lvc_s_row.
      FREE: gt_rows.
    * read the rows currently selected in the grid
      CALL METHOD alv_grid->get_selected_rows
        IMPORTING
          et_index_rows = gt_rows.
      IF gt_rows[] IS INITIAL.
        MESSAGE s000 WITH text-046.
        EXIT.
      ENDIF.
    * delete the selected rows from the internal table behind the grid
      LOOP AT gt_rows INTO wa_rows.
        IF sy-tabix NE 1.
    *     adjust the index for rows already deleted in earlier passes
          wa_rows-index = wa_rows-index - ( sy-tabix - 1 ).
        ENDIF.
        DELETE gt_sim INDEX wa_rows-index.
      ENDLOOP.
    ENDFORM.
    The rows are to be deleted from the internal table gt_sim, not only from the ALV display.
    None of the selected rows should be deleted if one of the fields in gt_sim equals 'R'.
    How can I check this condition?

    Dear Jayanthi,
    OK, if I code it the way you mentioned, it will exit the loop the first time the field value is 'R'.
    If any of the selected rows contains the field value 'R', it should not delete any of the selected rows.
    With your suggestion it only stops deleting after the first row whose field value is 'R'.
    I am deleting by table index, so suppose I select a row without field value 'R' whose tabix is 1, and the next row, with tabix 2, has field value 'R'. It deletes the first row and then exits; it should not delete the first row either.

  • Update millions of rows

    Dear all,
    DB : 10.2.0.4.
    OS : Solaris 5.10
    I have a partitioned table with millions of rows and I am updating a field like:
    update test set amount=amount*10 where update='Y';
    When I run this query it generates many archive logs, as it doesn't commit the transaction anywhere.
    Please give me an idea how I can commit this transaction after every 2000 rows, so that archive log generation will not be as heavy.
    Please guide
    Kai

    There's not a lot you can do about the amount of redo you generate (unless perhaps you make the table unrecoverable, and that might not help much either).
    It is possible that, if the column being updated is in an index, dropping that index during the update and recreating it afterwards might help, but that could land you in more trouble.
    One area of concern is the amount of undo space for the large transaction; this could even be exceeded and your statement might fail, and that might be a reason for splitting it into smaller transactions.
    Certainly there is no point in splitting it down to 2000-record chunks; I'd want to aim much higher than that.
    If you feel you want to divide it, the record may contain a field that could be used, e.g. create_date, or if you are able to work partition by partition that might help.
    If archive log management is the problem then speaking to the DBA should help.
    Hope these thoughts help - but you are responsible for any actions you take. Regards - bigdelboy
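    If the update really does have to run in committed batches, a rough sketch of one way to do it is shown below. The column name update_flag is hypothetical (UPDATE itself is not a legal unquoted column name); the important point is that the same statement clears the flag, so a batch never multiplies a row that an earlier batch already processed:

    BEGIN
      LOOP
        UPDATE test
           SET amount      = amount * 10,
               update_flag = 'N'              -- mark the row as processed
         WHERE update_flag = 'Y'
           AND ROWNUM     <= 100000;          -- batch size: aim much higher than 2000
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;
      END LOOP;
      COMMIT;
    END;
    /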

  • Urgent - Deletion of duplicate rows in a table - Help me

    Hi friends,
    Here is my purchase_order_line table desc,
    SQL> desc purchase_order_line;
    Name Null? Type
    PURCHASE_ORDER_LINE_ID NOT NULL NUMBER
    DESTINATION_ID NOT NULL NUMBER
    PURCHASE_ORDER_ID NOT NULL NUMBER
    LINE_NO NOT NULL NUMBER
    PART_NO VARCHAR2(10)
    PART_DESC NOT NULL VARCHAR2(100)
    ORDER_QTY NOT NULL NUMBER
    ORDER_DATE NOT NULL DATE
    DUE_DATE DATE
    VERSION_NO NOT NULL NUMBER
    VERSION_DATE NOT NULL DATE
    purchase_order_line_id is a primary key.
    If you ignore the primary key column temporarily, then across the remaining columns (2 to 11) I have duplicate data in this table.
    I want to clear out all duplicates.
    Suppose I have 3 identical rows (excluding the primary key): then delete the first two rows and leave the last one, OR delete the last two rows and leave the first one.
    What is the best solution for this?
    thanks for help.
    Sridhar

    Here is an example that might be of help.
    SQL> select case_number, case_status_desc status, case_ownr owner,
      2         case_yr year, doc_id
      3  from wrt_case
      4  order by doc_id;
    CASE_NUMBER          STATUS     OWNER YEAR      DOC_ID
    2006-786             ACTIVE     E     2006    22072734
    2006-786             ACTIVE     E     2006    22072734
    2006-786             ACTIVE     E     2006    22081673
    2006-786             ACTIVE     E     2006    22081673
    2006-786             ACTIVE     E     2006    22143005
    2006-786             ACTIVE     E     2006    22143005
    2006-786             ACTIVE     E     2006    22243094
    2006-786             ACTIVE     E     2006    22243094
    8 rows selected.
    SQL> Select case_number, case_status_desc status, case_ownr owner,
      2                 case_yr year, doc_id, rowid,
      3                 row_number() over (partition by doc_id order by doc_id) as rn
      4    From wrt_case;
    CASE_NUMBER          STATUS     OWNER YEAR      DOC_ID ROWID                      RN
    2006-786             ACTIVE     E     2006    22072734 AAD8bTAAJAAAJ4nAAA          1
    2006-786             ACTIVE     E     2006    22072734 AAD8bTAAJAAAJ4nAAC          2
    2006-786             ACTIVE     E     2006    22081673 AAD8bTAAJAAAJ4nAAB          1
    2006-786             ACTIVE     E     2006    22081673 AAD8bTAAJAAAJ4nAAD          2
    2006-786             ACTIVE     E     2006    22143005 AAD8bTAAJAAAJ4nAAE          1
    2006-786             ACTIVE     E     2006    22143005 AAD8bTAAJAAAJ4nAAF          2
    2006-786             ACTIVE     E     2006    22243094 AAD8bTAAJAAAJ4nAAG          1
    2006-786             ACTIVE     E     2006    22243094 AAD8bTAAJAAAJ4nAAH          2
    8 rows selected.
    SQL> Delete From wrt_case
      2   Where rowid in (Select t.rid
      3                     From (Select case_number, case_status_desc status, case_ownr owner,
      4                                  case_yr year, doc_id, rowid as rid,
      5                                  row_number() over (partition by doc_id order by doc_id) as rn
      6                             From wrt_case) t
      7                   Where t.rn > 1);
    4 rows deleted.
    SQL> select case_number, case_status_desc status, case_ownr owner,
      2         case_yr year, doc_id, rowid
      3  from wrt_case
      4  order by doc_id;
    CASE_NUMBER          STATUS     OWNER YEAR      DOC_ID ROWID
    2006-786             ACTIVE     E     2006    22072734 AAD8bTAAJAAAJ4nAAA
    2006-786             ACTIVE     E     2006    22081673 AAD8bTAAJAAAJ4nAAB
    2006-786             ACTIVE     E     2006    22143005 AAD8bTAAJAAAJ4nAAE
    2006-786             ACTIVE     E     2006    22243094 AAD8bTAAJAAAJ4nAAG
    SQL>
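    Applied to the purchase_order_line table described above, the same pattern might look like the sketch below. The PARTITION BY list names every column except the primary key, and ORDER BY purchase_order_line_id keeps the first row of each duplicate group (use DESC to keep the last one instead):

    DELETE FROM purchase_order_line
     WHERE rowid IN (SELECT t.rid
                       FROM (SELECT rowid AS rid,
                                    ROW_NUMBER() OVER (
                                      PARTITION BY destination_id, purchase_order_id, line_no,
                                                   part_no, part_desc, order_qty, order_date,
                                                   due_date, version_no, version_date
                                      ORDER BY purchase_order_line_id) AS rn
                               FROM purchase_order_line) t
                      WHERE t.rn > 1);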

  • Strategy to Delete/Move Millions of Rows in a Database

    Hello,
    I am using SQL Server 2012 SE.
    I am trying to delete rows from a couple of tables (GetPersonValue has 250 million rows and I am trying to delete 50 million; GetPerson has 35 million rows and I am trying to delete 20 million). These tables are in transactional replication. The plan is to delete data older than 400 days.
    I tried moving the data from the last 400 days to new tables, and it took me about 11 hours. If I delete data in chunks of 500,000 rows then it takes a long time to rebuild the indexes (delete plus index rebuild: 13 hours).
    Since I am using Standard Edition, partitioning won't work.
    Is there a way to speed things up? Experts, I need your valuable inputs.
    Please find the DDL below:
    GO
    CREATE TABLE [dbo].[GetPerson](
    [GetPersonId] [uniqueidentifier] NOT NULL,
    [LinedActivityPersonId] [uniqueidentifier] NOT NULL,
    [CTName] [nvarchar](100) NULL,
    [SNum] [nvarchar](50) NULL,
    [PHPrimary] [nvarchar](50) NULL,
    [PHAlt1] [nvarchar](50) NULL,
    [PHAlt2] [nvarchar](50) NULL,
    [EAdd] [nvarchar](50) NULL,
    [ImportedAt] [datetime] NOT NULL,
    [LinedActivityId] [uniqueidentifier] NOT NULL,
    [Order] [int] NOT NULL,
    [PHAssName] [varchar](255) NULL,
    [TXAssName] [varchar](255) NULL,
    [EMAssName] [varchar](255) NULL,
    CONSTRAINT [PK_GetPerson] PRIMARY KEY NONCLUSTERED
    (
        [GetPersonId] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    CREATE TABLE [dbo].[GetPersonValue](
    [GetPersonValueId] [uniqueidentifier] NOT NULL,
    [GetPersonId] [uniqueidentifier] NOT NULL,
    [ValueDefId] [uniqueidentifier] NULL,
    [ValueDefName] [nvarchar](50) NULL,
    [ValueListItemId] [uniqueidentifier] NULL,
    [Value] [nvarchar](max) NULL,
    CONSTRAINT [PK_GetPersonValue] PRIMARY KEY NONCLUSTERED
    (
        [GetPersonValueId] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
    /****** Object: Table [dbo].[LinedActivity] Script Date: 4/16/2015 10:30:38 AM ******/
    GO
    CREATE TABLE [dbo].[LinedActivity](
    [LinedActivityId] [uniqueidentifier] NOT NULL,
    [AccountTriggerId] [uniqueidentifier] NOT NULL,
    [LinedActivityStatusId] [int] NOT NULL,
    [QueuedAt] [datetime] NOT NULL,
    [LastUpdatedAt] [datetime] NULL,
    [IsLiveMode] [bit] NOT NULL,
    [PHJobId] [uniqueidentifier] NULL,
    [EMJobId] [uniqueidentifier] NULL,
    [TXJobId] [uniqueidentifier] NULL,
    [NotificationTemplateId] [uniqueidentifier] NULL,
    [Size] [int] NOT NULL,
    [ResultsExported] [bit] NOT NULL,
    [JobCompletedEMSent] [bit] NOT NULL,
    [SubStatusId] [int] NULL,
    CONSTRAINT [PK_JobQueue] PRIMARY KEY NONCLUSTERED
    (
        [LinedActivityId] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    GO
    /****** Object: Index [IX_GetPerson_LinedActivityId] Script Date: 4/16/2015 10:30:38 AM ******/
    CREATE NONCLUSTERED INDEX [IX_GetPerson_LinedActivityId] ON [dbo].[GetPerson]
    (
        [LinedActivityId] ASC
    )
    INCLUDE ( [GetPersonId],
    [LinedActivityPersonId],
    [CTName],
    [SNum],
    [PHPrimary],
    [PHAlt1],
    [PHAlt2],
    [EAdd],
    [ImportedAt],
    [Order],
    [PHAssName],
    [TXAssName],
    [EMAssName]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    GO
    /****** Object: Index [IX_GetPerson_LinedActivityPerson] Script Date: 4/16/2015 10:30:38 AM ******/
    CREATE NONCLUSTERED INDEX [IX_GetPerson_LinedActivityPerson] ON [dbo].[GetPerson]
    (
        [LinedActivityPersonId] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
    GO
    GO
    /****** Object: Index [IX_GetPerson_LinedActivityPersonId_GetPersonId] Script Date: 4/16/2015 10:30:38 AM ******/
    CREATE NONCLUSTERED INDEX [IX_GetPerson_LinedActivityPersonId_GetPersonId] ON [dbo].[GetPerson]
    (
        [LinedActivityPersonId] ASC,
        [GetPersonId] ASC
    )
    INCLUDE ( [CTName],
    [SNum],
    [PHPrimary],
    [PHAlt1],
    [PHAlt2],
    [EAdd],
    [ImportedAt]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
    GO
    GO
    /****** Object: Index [IX_GetPersonValue_GetPersonId_GetPersonValueId] Script Date: 4/16/2015 10:30:38 AM ******/
    CREATE NONCLUSTERED INDEX [IX_GetPersonValue_GetPersonId_GetPersonValueId] ON [dbo].[GetPersonValue]
    (
        [GetPersonId] ASC,
        [GetPersonValueId] ASC
    )
    INCLUDE ( [ValueDefId],
    [ValueDefName],
    [ValueListItemId],
    [Value]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
    GO
    /****** Object: Index [IX_LinedActivity_1] Script Date: 4/16/2015 10:30:38 AM ******/
    CREATE NONCLUSTERED INDEX [IX_LinedActivity_1] ON [dbo].[LinedActivity]
    (
        [LinedActivityStatusId] ASC,
        [IsLiveMode] ASC
    )
    INCLUDE ( [LinedActivityId],
    [AccountTriggerId],
    [QueuedAt],
    [LastUpdatedAt],
    [PHJobId],
    [EMJobId],
    [TXJobId],
    [NotificationTemplateId],
    [Size],
    [ResultsExported],
    [JobCompletedEMSent]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    GO
    /****** Object: Index [IX_LinedActivity_2] Script Date: 4/16/2015 10:30:38 AM ******/
    CREATE NONCLUSTERED INDEX [IX_LinedActivity_2] ON [dbo].[LinedActivity]
    (
        [AccountTriggerId] ASC,
        [LinedActivityStatusId] ASC,
        [ResultsExported] ASC
    )
    INCLUDE ( [LinedActivityId],
    [QueuedAt],
    [LastUpdatedAt],
    [IsLiveMode],
    [PHJobId],
    [EMJobId],
    [TXJobId],
    [NotificationTemplateId],
    [Size],
    [JobCompletedEMSent]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    GO
    /****** Object: Index [IX_LinedActivity_3] Script Date: 4/16/2015 10:30:38 AM ******/
    CREATE NONCLUSTERED INDEX [IX_LinedActivity_3] ON [dbo].[LinedActivity]
    (
        [LinedActivityStatusId] ASC,
        [ResultsExported] ASC
    )
    INCLUDE ( [LinedActivityId],
    [AccountTriggerId],
    [QueuedAt],
    [LastUpdatedAt],
    [IsLiveMode],
    [PHJobId],
    [EMJobId],
    [TXJobId],
    [NotificationTemplateId],
    [Size],
    [JobCompletedEMSent]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    GO
    /****** Object: Index [IX_LinedActivity_IL_QJSID_ATID_QJID_QAT_LU_PJID_EJID_SJID_NTID_S_New] Script Date: 4/16/2015 10:30:38 AM ******/
    CREATE NONCLUSTERED INDEX [IX_LinedActivity_IL_QJSID_ATID_QJID_QAT_LU_PJID_EJID_SJID_NTID_S_New] ON [dbo].[LinedActivity]
    (
    [IsLiveMode] ASC,
    [LinedActivityStatusId] ASC,
    [AccountTriggerId] ASC,
    [LinedActivityId] ASC,
    [QueuedAt] ASC,
    [LastUpdatedAt] ASC,
    [PHJobId] ASC,
    [EMJobId] ASC,
    [TXJobId] ASC,
    [NotificationTemplateId] ASC,
    [Size] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
    GO
    /****** Object: Index [IX_LinedActivity_NotificationTemplateID_TXJOBID] Script Date: 4/16/2015 10:30:38 AM ******/
    CREATE NONCLUSTERED INDEX [IX_LinedActivity_NotificationTemplateID_TXJOBID] ON [dbo].[LinedActivity]
    (
        [NotificationTemplateId] ASC
    )
    INCLUDE ( [TXJobId]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    GO
    /****** Object: Index [IX_LinedActivity_LinedActivityID_New] Script Date: 4/16/2015 10:30:38 AM ******/
    CREATE NONCLUSTERED INDEX [IX_LinedActivity_LinedActivityID_New] ON [dbo].[LinedActivity]
    (
        [LinedActivityId] ASC
    )
    INCLUDE ( [QueuedAt]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[GetPerson] ADD DEFAULT ((0)) FOR [Order]
    GO
    ALTER TABLE [dbo].[LinedActivity] ADD CONSTRAINT [DF_LinedActivity_IsLiveMode] DEFAULT ((0)) FOR [IsLiveMode]
    GO
    ALTER TABLE [dbo].[LinedActivity] ADD CONSTRAINT [DF_LinedActivity_SubmittedJobID] DEFAULT (CONVERT([uniqueidentifier],CONVERT([binary],(0),(0)),(0))) FOR [PHJobId]
    GO
    ALTER TABLE [dbo].[LinedActivity] ADD DEFAULT ((0)) FOR [Size]
    GO
    ALTER TABLE [dbo].[LinedActivity] ADD DEFAULT ((0)) FOR [ResultsExported]
    GO
    ALTER TABLE [dbo].[LinedActivity] ADD DEFAULT ((0)) FOR [JobCompletedEMSent]
    GO
    ALTER TABLE [dbo].[GetPerson] WITH CHECK ADD CONSTRAINT [FK_GetPerson_LinedActivity] FOREIGN KEY([LinedActivityId])
    REFERENCES [dbo].[LinedActivity] ([LinedActivityId])
    GO
    ALTER TABLE [dbo].[GetPerson] CHECK CONSTRAINT [FK_GetPerson_LinedActivity]
    GO
    ALTER TABLE [dbo].[GetPersonValue] WITH CHECK ADD CONSTRAINT [FK_GetPersonValue_GetPerson] FOREIGN KEY([GetPersonId])
    REFERENCES [dbo].[GetPerson] ([GetPersonId])
    GO
    ALTER TABLE [dbo].[GetPersonValue] CHECK CONSTRAINT [FK_GetPersonValue_GetPerson]
    GO
    Here is my delete statement:
    select A.GetPersonValueid,B.GetPersonID into temp_table
    From GetPersonValue A inner Join GetPerson B
    on A.GetPersonid =B.GetPersonID inner join LinedActivity C
    on B.LinedActivityId = C.LinedActivityID and C.QueuedAt >Getdate()-400
    delete from GetPersonValue where GetPersonValueid in (select GetPersonValueid from temp_table)
    delete from GetPerson where GetPersonid in (select GetPersonid from temp_table)
    drop table temp_table
    ALTER INDEX ALL ON GetPersonValue REBUILD WITH (FILLFACTOR = 80)
    ALTER INDEX ALL ON GetPerson REBUILD WITH (FILLFACTOR = 80)
    Experts I need your valuable inputs here. Thanks a ton in advance.

    From the code you posted, none of the tables have clustered indexes.  If that is correct, the first thing you need to do is create one.
    This should be a little faster, much faster if you add a clustered index.
    select B.GetPersonID
    into temp_table
    From GetPersonValue A inner Join GetPerson B
    on A.GetPersonid =B.GetPersonID inner join LinedActivity C
    on B.LinedActivityId = C.LinedActivityID and C.QueuedAt >Getdate()-400
    CREATE CLUSTERED INDEX [IX_temp_table] ON [dbo].[temp_table]
    (
        [GetPersonID] ASC
    );
    DECLARE @rowcnt INT;
    SET @rowcnt = 1;
    WHILE @rowcnt > 0
    BEGIN
    delete TOP (50000) a
    from GetPersonValue a
    INNER JOIN temp_table t
    ON t.GetPersonId = a.GetPersonId;
    SET @rowcnt = @@ROWCOUNT;
    END
    SET @rowcnt = 1;
    WHILE @rowcnt > 0
    BEGIN
    delete TOP (50000) a
    from GetPerson a
    INNER JOIN temp_table t
    ON t.GetPersonId = a.GetPersonId;
    SET @rowcnt = @@ROWCOUNT;
    END
    drop table temp_table;
    If I delete based on TOP (n) rows, there is a chance of getting a foreign key violation error when deleting from the GetPerson table.
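    One way to avoid that, sketched below on top of the batching loop above, is to restrict each GetPerson batch to parents whose GetPersonValue children are already gone, so the foreign key can never be violated:

    SET @rowcnt = 1;
    WHILE @rowcnt > 0
    BEGIN
        DELETE TOP (50000) p
        FROM GetPerson p
        INNER JOIN temp_table t
            ON t.GetPersonId = p.GetPersonId
        -- only parents with no remaining GetPersonValue rows
        WHERE NOT EXISTS (SELECT 1
                            FROM GetPersonValue v
                           WHERE v.GetPersonId = p.GetPersonId);
        SET @rowcnt = @@ROWCOUNT;
    END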

  • Deleting Millions of Selected rows from a production Table.

    Hi Friends.
    I have to copy millions of rows from one table to a second one, export that data, and afterwards run a TRUNCATE command (on the second one). The main problem is the time spent in the deleting process. The inserting is nice and easy with /*+ APPEND */, but the deleting is a mess because it requires too much time and makes the whole process really slow. Could someone give me a tip on this issue? Thanks for your time!
    Emmanuel G. Carrillo Trejos.

    It would help if you could quantify "subset" here. If you are deleting 75% of the table, it will likely be faster to follow Syed's suggestion and move the data you want to keep to a new table, drop the old table, and rename the new table to the old table name. If you are going to be deleting a small fraction of rows, but you are going to be doing this regularly (i.e. you delete all rows older than X days), it will be faster to partition the table and drop the partition.
    I see no possible benefit to playing around with any transaction-related commands.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
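    If partitioning is available (it is an Enterprise Edition option, and the table and column names below are purely illustrative), the recurring purge Justin describes becomes a metadata operation instead of a row-by-row delete:

    -- One-time setup: range-partition the history table by month
    CREATE TABLE order_history (
      order_id   NUMBER,
      created_dt DATE,
      payload    VARCHAR2(4000)
    )
    PARTITION BY RANGE (created_dt) (
      PARTITION p_2012_01 VALUES LESS THAN (DATE '2012-02-01'),
      PARTITION p_2012_02 VALUES LESS THAN (DATE '2012-03-01'),
      PARTITION p_2012_03 VALUES LESS THAN (DATE '2012-04-01')
    );

    -- Recurring purge: dropping an old partition removes its rows almost instantly
    ALTER TABLE order_history DROP PARTITION p_2012_01 UPDATE GLOBAL INDEXES;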

  • Missing functionality: Draw Document Wizard - delete/add rows and copy/paste

    Scenario:
    My customer is using 2007 SP0 PL47 and would like the ability to change the sequence of the rows of the draw document wizard and delete/add multiple rows (i.e. when you create an AR Invoice copied from several deliveries).
    This customer requires the sequence of items on the AR invoice to be consistent with the sequence on the original sales order including text lines and subtotals. Currently we cannot achieve this when there are multiple deliveries.
    Steps to reproduce scenario:
    1.Create a Sales order with several items and use text lines, regular and subtotals.
    2.Create more than one delivery based on the sales order and deliver in a different sequence than appears on the sales order.
    3. Open an AR Invoice and 'Copy from' > Deliveries. Choose multiple deliveries. Choose 'Customize'.
    4.Look at the sequence of items on the Invoice. How can the items and subtotals and headings be moved around so they appear in the same sequence as on the sales order?
    Current Behaviour:
    In SAP B1 it's not possible to delete or add more than one row at a time on the AR Invoice or Draw Document Wizard.
    It's not possible to copy/paste a row on the AR Invoice or Draw Document Wizard.
    It's not possible to change the sequence of the rows using the Draw Document Wizard.
    Business Impact: This customer is currently spending a lot of time trying to organize the AR Invoice into a presentable format. They have to go through the invoice and delete the inapplicable rows one by one (because SAP B1 does not have the ability to delete multiple lines at a time) and also have to manually delete and re-add rows to make it follow the same sequence as the sales order.
    Proposals:
    Enable users to delete or add more than one row at a time on the AR Invoice or Draw Document Wizard.
    Enable users to copy/paste rows on the AR Invoice or Draw Document Wizard.

    Hi Rahul,
    You said 'It is not at all concerned with Exchange rate during GRPO...'. If that is the case, how does 'Use Row Exchange Rate from Base Document' in the Draw Document Wizard work? Does this mean 1 GRPO : 1 AP Invoice, so I can use the base document rate?
    How should I go about transactions like these, that is, adding an AP Invoice from multiple GRPOs having different exchange rates? What I am trying to capture here is that in the AP Invoice, the base document rates should be used at the row item level, and not the current rate, when adding the invoice.
    Thanks,
    Michelle

  • Insert row and delete row in a table control

    Hi Experts,
    I am using a table control in module pool programming. How can I insert a row and delete a row in a table control?
    Thanks in Advance....

    Santhosh,
    I am using this code:
    FORM fcode_delete_row
         USING p_tc_name    TYPE dynfnam
               p_table_name
               p_mark_name.
    *-BEGIN OF LOCAL DATA--------------------------------------------*
      DATA l_table_name LIKE feld-name.
      DATA: p_mark_name TYPE c.
      FIELD-SYMBOLS <tc>         TYPE cxtab_control.
      FIELD-SYMBOLS <table>      TYPE STANDARD TABLE.
      FIELD-SYMBOLS <wa>.
      FIELD-SYMBOLS <mark_field>.
    *-END OF LOCAL DATA----------------------------------------------*
      ASSIGN (p_tc_name) TO <tc>.
    * get the table which belongs to the tc                          *
      CONCATENATE p_table_name '[]' INTO l_table_name. "table body
      ASSIGN (l_table_name) TO <table>.                "not headerline
    * delete marked lines                                            *
      DESCRIBE TABLE <table> LINES <tc>-lines.
      LOOP AT <table> ASSIGNING <wa>.
    *   access the component 'FLAG' of the table header              *
        ASSIGN COMPONENT p_mark_name OF STRUCTURE <wa> TO <mark_field>.
        IF <mark_field> = 'X'.
          PERFORM f_save_confirmation_9101.
          IF gv_answer EQ '1'.
            DELETE <table> INDEX syst-tabix.
            IF sy-subrc = 0.
              <tc>-lines = <tc>-lines - 1.
            ENDIF.
          ELSE.
          ENDIF.
        ENDIF.
      ENDLOOP.
    ENDFORM.
    In this code, the lines
    ASSIGN COMPONENT p_mark_name OF STRUCTURE <wa> TO <mark_field>.
    IF <mark_field> = 'X'.
    are not working.

  • How to delete a selected row from datagrid and how to create a datagrid popup

    Hi friends,
    I am new to Flex. I am doing a Flex 4 application and I need help with this.
    I have a data grid with columns. I have two questions.
    Question 1: When I select a particular row in the datagrid and click the delete button, that record should be deleted from the datagrid and its DTO from the cloud tables as well.
    Question 2: When I save the data grid values using the save button, the data should be stored in the respective cloud DTO related to the datagrid.
    My requirement is: I have a search button; when I click it, it should show a datagrid containing the previous datagrid data that was saved in the cloud.
    REQUIREMENT example: First screen: a data grid with 3 columns (Student Roll Number, Student Name, Student Percentage) ---> save ---> the data is stored in the cloud DTO.
    Second screen: search button ---> it needs to show a datagrid popup containing the data saved on the first screen, with the same columns (Student Roll Number, Student Name, Student Percentage).
    This is my requirement.
    Any suggestion is welcome.
    Thanks in advance.
    B.Venkatesan

    Let's break the problem statement into multiple steps:
    1. We need a way to know the selection on all rows.
    2. We need the association of the checkbox with the data.
    The solution is to use an arrayCollection/array that holds all the instances created for the checkbox. This collection should be a property of the component containing the datagrid. We need to use a custom component implementation or an inline ItemRenderer; the way you have used is called a dropInItemRenderer. Preferably use a custom component implementation and add the instance to the arrayCollection at creationComplete. Make sure you use addItemAt so that you add the instance in the same row as the data. To get the rowIndex, the custom CheckBox should implement IDropInListItemRenderer. You can then iterate this collection to check all the instances that are checked.
    Note: this approach assumes your dataProvider doesn't have a selection field.
    Nishant

  • ALV grid oo delete rows and update to table

    Hi all
      How can I delete one row and update the DB table?
    thanks

    Hi,
    Refer:-
    The ALV Grid has the events data_changed and data_changed_finished. The former is
    triggered just after a change to an editable field is detected; here you can make checks on
    the input. The second event is triggered after the change is committed.
    You can select the way how the control perceives data changes by using the method
    register_edit_event. You have two choices:
    1. After return key is pressed: To select this way, to the parameter i_event_id pass cl_gui_alv_grid=>mc_evt_enter.
    2. After the field is modified and the cursor is moved to another field: for this, pass cl_gui_alv_grid=>mc_evt_modified to the same parameter.
    To make events controlling data changes be triggered, you must select either way by
    calling this method. Otherwise, these events will not be triggered.
    To control field data changes, ALV Grid uses an instance of the class
    CL_ALV_CHANGED_DATA_PROTOCOL and passes this via the event data_changed.
    Using methods of this class, you can get and modify cell values and produce error messages.
    Hope this helps you.
    Regards,
    Tarun

  • How to store data from textfile to vector and delete a selected row.

    Can someone teach me how to store data from a text file in a Vector and delete a selected row? After deleting, I want to write the changes back to my text file.
    Does someone have an idea? :)

    nemesisjava wrote:
    Can someone teach me how to store data from a text file in a Vector and delete a selected row? After deleting, I want to write the changes back to my text file.
    Does someone have an idea? :)
    What's the problem? What have you done so far? What failed?
    What you described should be pretty easy to do.
