A better way to delete IT2001 records in batch

Hi Gurus,
A little problem here: due to a data source error, I need to delete IT2001 records in batch.
Currently I am using the standard report RPUREOPN to do this, but RPUREOPN does not update the corresponding deductions in the IT2006 quotas, so I have to delete and re-upload the corresponding IT2006 quota data, which is inconvenient.
Is there a better way to do this job, i.e. delete IT2001 records in batch and also update the deductions in IT2006?
Thanks in advance!
Br, Kee

After running the report you mentioned above to delete the absences/attendances, you can try running report RPTUPD00 to revaluate the attendance and absence records.
Regards,
Divya

Similar Messages

  • Is there a way to delete emails in a batch instead of individually?

    I keep getting a ton of emails on the iPad; even after I delete them they just keep coming back. Is there a way to delete them, like "select all" or something similar?


  • Delete a record in MDT Database from WinPE during Pre install

    Hello Experts,
    This is my requirement
    1) MDT Deployment database will be updated with MAC address of new laptops.
    2) All new laptops will be pre-configured with PXE booting as primary boot option. (When the MAC address is not available on Deployment DB,  System boots via hard disc).
    3) Deployment Tool Kit (MDT) will deploy the image to all the MAC addresses available on deployment DB (Step 1)
    4) After deployment, database entry for corresponding laptop should be deleted automatically such that system boots via hard disc
    Is there any way to delete a record from the MDT database in a Pre-Install step of the task sequence, so that the laptop boots from the hard disc after restarting from the PE install? The idea is to delete the record of the corresponding laptop after the PE install, so that PXE boot doesn't pick up anything from the share.
    Regards, karthik.

    Basically, this would have the same effect as a PXE advertisement for a certain number of machines where the advertisement is always mandatory, except that you only want to install machines which occur in the database. Why not try to solve it another way? Because this approach is very convoluted.
    If this post is helpful please click "Mark for answer", thanks! Kind regards
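
    If you do go ahead with cleaning up the record from within the task sequence, the database side can be a simple SQL step. The following is only a sketch: it assumes the default MDT database schema, in which dbo.ComputerIdentity stores the MAC address and dbo.Settings holds the per-computer settings keyed by the same ID, and the MAC value shown is a placeholder. Verify the table names in your own MDT database before using anything like this.

    -- Sketch only: remove a computer from the MDT database by MAC address.
    DECLARE @mac NVARCHAR(20) = N'00:11:22:33:44:55';  -- placeholder MAC of the laptop just deployed

    -- Delete the per-computer settings first (Type 'C' = computer entries).
    DELETE s
    FROM dbo.Settings AS s
    JOIN dbo.ComputerIdentity AS ci ON ci.ID = s.ID AND s.Type = 'C'
    WHERE ci.MacAddress = @mac;

    -- Then delete the computer identity itself.
    DELETE FROM dbo.ComputerIdentity
    WHERE MacAddress = @mac;

    You could run this from the task sequence with sqlcmd or a small script, using an account that has delete rights on the MDT database.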

  • Best way to delete large number of records but not interfere with tlog backups on a schedule

    I've inherited a system with multiple databases, and there are DB and tlog backups that run on schedules. There is a list of tables that need a lot of records purged from them. What would be a good approach for deleting the old records?
    I've been digging through old posts, reading best practices, etc., but I'm still not sure of the best way to attack it.
    Approach #1
    A one-time delete that does everything: delete all the old records, in batches of, say, 50,000 at a time.
    After each run through all the tables for that DB, execute a tlog backup.
    Approach #2
    Create a job that does a similar process as above, except don't loop; only do the batch once. Have the job scheduled to start, say, on the half hour, assuming the tlog backups run every hour.
    Note:
    Some of these (well, most) are going to have relations on them.

    Hi shiftbit,
    According to your description, I have changed the type of this question to a discussion; that way more experts will notice the issue and be able to assist you. When deleting a large number of records from tables, you can delete in batches so that the transaction log does not keep growing and run out of disk space. If you can
    take the table offline for maintenance, a complete reorganization is always best, because it does the delete and places the table back into a pristine state.
    For more information about deleting a large number of records without affecting the transaction log, see:
    http://www.virtualobjectives.com.au/sqlserver/deleting_records_from_a_large_table.htm
    Hope it can help.
    Regards,
    Sofiya Li
    TechNet Community Support
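
    To make the "delete in batches" idea concrete, here is a minimal sketch of approach #1 for one table. The table name, cutoff column, and batch size are placeholders (nothing in the original post names them), so treat this as a pattern rather than a finished script:

    -- Sketch only: purge old rows in small batches so each chunk stays modest in the transaction log.
    DECLARE @rows INT = 1;
    WHILE @rows > 0
    BEGIN
        DELETE TOP (50000)
        FROM dbo.SomeOldTable              -- placeholder table name
        WHERE CreatedDate < '20130101';    -- placeholder cutoff
        SET @rows = @@ROWCOUNT;
        -- Under the FULL recovery model, let a log backup run between chunks
        -- (or trigger one here) so the log space can be reused.
    END

    If the tables have foreign-key relationships, delete from the child tables first, or the batches will fail on constraint violations.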

  • How to delete the records in a faster way

    The dml is as below:
    delete from GFSTM64_PLANT_INVENTORY a
    where a.PLANT_EOP_PART_SAKEY IN
    (select EOP_PLANT_PART_SAKEY from GFSTM62_EOP_PLANT_PART
    where END_OF_PERIOD_DATE = '31-DEC-2011' )
    This has to delete 15 lakh records. It has been running for more than 6 hours and has not yet finished. I do not have the option to change the data model. Please let me know the fastest way to delete the 15 lakh records.
    thanks,
    Vinodh

    Hi,
    Not having the DDL for the objects referenced in the statement, the table sizes, the explain plan and other essential information, we can only guess, but there is one likely scenario here.
    You wrote the DELETE in the form:
    DELETE FROM table1 WHERE table1.col1 IN (SELECT col1 FROM table2 WHERE <some additional conditions on table2>)
    If Oracle 'takes this literally', it will go through every record in table1 and evaluate the IN condition for it. That means that the query in round brackets will essentially be executed as many times as there are rows in table1!
    Fortunately, the optimizer has ways of transforming this into a simple join between table1 and table2, which in most cases will be much more efficient. However, in some cases there may be things preventing the optimizer from doing this query transformation (optimizer settings, bugs, use of analytic functions in other parts of the query, etc.), so you may try a manual rewrite to help the optimizer, something like:
    DELETE FROM table1 WHERE table1.col1 IN (SELECT table2.col1 FROM table1, table2 WHERE table1.col1 = table2.col1 AND <some additional conditions on table2>)
    Best regards,
    Nikolay
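
    Applied to the original statement, one possible rewrite (a sketch only, using a correlated EXISTS and an explicit TO_DATE instead of relying on implicit conversion of the '31-DEC-2011' literal) would be:

    -- Sketch only: the same delete expressed as a correlated EXISTS.
    DELETE FROM gfstm64_plant_inventory a
    WHERE EXISTS (
            SELECT 1
            FROM   gfstm62_eop_plant_part b
            WHERE  b.eop_plant_part_sakey = a.plant_eop_part_sakey
            AND    b.end_of_period_date   = TO_DATE('31-DEC-2011', 'DD-MON-YYYY'));

    Whether this actually runs faster depends on the data volumes, indexes and the resulting execution plan, so compare plans before and after rather than taking it on faith.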

  • Is there a way to delete records from MDM automatically?

    Is there a way to delete records from MDM automatically?
    I am able to import the data automatically through MDIS, but I have to delete the data first. Is it possible to do this?

    Hi Adam,
    Current scenario
    USER1: call ME to delete old catalog data
    ME: open the MDM & delete it manually
    USER1: Transaction to extract new data file
    MDIS: load the data to catalog
    As per your requirement, you should save the map in the following way, which should solve your purpose. Create an XML file which contains both new and existing records. Then, in Import Manager, set the Default Import Action to Create for newly added records and to Replace for existing records, and save that in the map.
    Using this, every time a new record comes in (not yet available in Data Manager) it will be created, and an existing record (already available in Data Manager) will be replaced, which means the existing record (the old catalog data) is deleted and a new record is created.
    Regards,
    Mandeep Saini

  • Looking for a better way to utilize streams to track deletes in the db.

    I'm trying to figure out a way to track deletes in the database using Streams. I found that a DML handler for deletes could satisfy my needs, but it appears I would need to create a DML handler for each table in the schema. Since I have ~250 tables, I'm thinking there has to be a better way to do this. I simply need a way to capture all deletes and insert them into a table before the rows are deleted from the DB. Is there a better way than creating a handler for each table?
    Thanks,
    Doug

    So far you haven't posted a version number or any information about the use of any auditing tool, whether FGA or other auditing, so it is impossible to comment or advise you further.
    If you want help, you are going to need to do something you did not do in your original post: provide a description of your environment, your business rules, how you have attempted to use FGA or standard auditing, etc. Streams is for replication, not auditing, so perhaps you mean AQ, but so far you haven't said that either.
    The more information you can provide, and perhaps some code or clear descriptions of what you've attempted, the better the help possible.
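
    If auditing does turn out to be acceptable, one hedged sketch is a fine-grained auditing policy per table, registered in a loop over the data dictionary. Note what this does and does not give you: FGA records who issued which DELETE and when (in DBA_FGA_AUDIT_TRAIL), not the old column values of the deleted rows, so it may or may not satisfy the "insert them into a table before they are deleted" requirement. The schema name and policy prefix below are made up for illustration:

    -- Sketch only: register a DELETE-only FGA policy on every table in one schema.
    BEGIN
      FOR t IN (SELECT table_name FROM all_tables WHERE owner = 'APP_OWNER') LOOP  -- placeholder schema
        DBMS_FGA.ADD_POLICY(
          object_schema   => 'APP_OWNER',
          object_name     => t.table_name,
          policy_name     => SUBSTR('TRACK_DEL_' || t.table_name, 1, 30),  -- keep within identifier limits
          statement_types => 'DELETE');
      END LOOP;
    END;
    /

    If you really need the deleted row values themselves, the usual alternative is a generated BEFORE DELETE trigger per table, which could also be created in a loop with EXECUTE IMMEDIATE.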

  • The correct way to delete old transport request record ?

    Dear all,
    We want to delete old transport request records from before 2008 (more than two years old) from our system.
    We have learned that we should delete the files in the two paths below:
    /usr/sap/trans/data
    /usr/sap/trans/cofiles
    Our target is that we no longer see the old records in our PRD system after deleting them.
    Normally, we see the transport request records in the system this way:
    STMS --> Import Overview --> PRD --> Import Queue of system PRD --> Import History
    Our question is: do we need to restart our system after deleting the records to refresh the import history?
    We have not deleted the records yet.
    Any experienced person or expert, please kindly give advice.
    Regards,
    Allen

    Hello Allen,
    There are some notes and documentation for this process. Deleting the files from /cofiles and /data will not erase the request from the import history, as you already know.
    You should follow note #41732 to have the requests/data completely deleted from your system. There is a lot of complementary reading in notes #7224, #556734, #189841, etc.
    Note that you can remove a request from the import queue, import history, and the data and cofiles subdirectories, but still have it in the <SID> buffer (located in /usr/sap/trans/buffer/<SID>). If this happens to you, please save a copy of the buffer file <SID>, then create a new, blank file named <SID> at a time when the import queue is empty (VERY IMPORTANT). Or, alternatively, please execute the tp cleanbuffer FIL command (which will erase the requests that are not found in the /data and /cofiles directories).
    --> Please test this first on a test system, not directly on a productive environment.
    This should be enough to delete old transport requests on your system.
    Best regards,
    Tomas Black

  • A better way than a global temp table to reuse a distinct select?

    I get the impression from other threads that global temp tables are frowned upon so I'm wondering if there is a better way to simplify what I need to do. I have some values scattered about a table with a relatively large number of records. I need to distinct them out and delete from 21 other tables where those values also occur. The values have a really low cardinality to the number of rows. Out of 500K+ rows there might be a dozen distinct values.
    I thought that rather than 21 cases of:
    DELETE FROM x1..21 WHERE value IN (SELECT DISTINCT value FROM Y)
    It would be better for performance to populate a global temp table with the distinct first:
    INSERT INTO gtt SELECT DISTINCT value FROM Y
    DELETE FROM x1..21 WHERE value IN (SELECT value FROM GTT)
    People asking questions about GTTs seem to get blasted, so is this another case where there's a better way to do it? Should I just have the system bite the bullet on the DISTINCT 21 times? The big table truncates and reloads and needs to do so quickly, so I was hoping not to have to index it and meddle with disabling/rebuilding the index, but if that's better than a temp table, I'll have to make do.
    As far as I understand, WITH ... USING can't be used to delete from multiple tables - or can it?
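
    (For reference, the temporary-table variant described above would only need the GTT created once, something like the following sketch; the VARCHAR2 length is a guess, since the question doesn't give the datatype of Y.value:)

    -- Sketch only: the GTT version of the idea described in the question.
    CREATE GLOBAL TEMPORARY TABLE gtt (value VARCHAR2(100))
      ON COMMIT PRESERVE ROWS;  -- rows survive until the end of the session

    INSERT INTO gtt SELECT DISTINCT value FROM y;

    DELETE FROM x1 WHERE value IN (SELECT value FROM gtt);
    -- ... and likewise for x2 through x21 ...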

    Almost, but not quite, as efficient as using a temporary table would be to use a PL/SQL collection and FORALL statements (and/or referencing the collection in your subsequent statements). Something like
    DECLARE
      TYPE value_nt IS TABLE OF y.value%type;
      l_values value_nt;
    BEGIN
      SELECT distinct value
        BULK COLLECT INTO l_values
        FROM y;
      FORALL i IN 1 .. l_values.count
        DELETE FROM x1
         WHERE value = l_values(i);
      FORALL i IN 1 .. l_values.count
        DELETE FROM x2
         WHERE value = l_values(i);
    END;
    or
    CREATE TYPE value_nt
      IS TABLE OF varchar2(100); -- Guessing at the type of y.value
    DECLARE
      l_values value_nt;
    BEGIN
      SELECT distinct value
        BULK COLLECT INTO l_values
        FROM y;
      DELETE FROM x1
       WHERE value IN (SELECT /*+ cardinality(v 10) */ column_value from table( l_values ) v );
      DELETE FROM x2
       WHERE value IN (SELECT /*+ cardinality(v 10) */ column_value from table( l_values ) v );
    END;
    Justin

  • Is There a Better Way to Work in the Marker List Window?

    Is there a better way to sequentially listen to phrases one-by-one in the Marker List window? What I'm doing is Auto-Marking one single long file to break out 271 bits and save each as their own file. It's WAY faster than copying and pasting bits into new files and "saving as" 217 times.
    BUT, after Auto-Marking, I have 300-400 phrases to listen to, deleting the non-keepers as I go, until I'm left with my "keeper" 271 marked phrases. But it's so tedious to move from phrase to phrase. I have to double-click each one before I can hear it (you can move the cursor with the down-arrow, but it won't actually select the audio), so I have to use the mouse (unless I'm missing something) and double-click each of the hundreds of phrases. Then whenever I delete one (which I'll have to do about a hundred times or more to get rid of bad takes, alternates, etc.), inexplicably the cursor jumps way up the list, so you have to scroll back down dozens of files to get to where you were. It took me 35 minutes to do it this last time.
    Contrast that with Reaper's audition/preview functionality (which, ironically, AA has also, but only for files already saved into a folder). Once I had all the files saved into a folder, I QC'd all 217 files in Reaper and MAN what a difference! All I had to do was use the "down" arrow to advance to the next file AND have it play automatically (Audition can do the same thing with the "Open File" feature). It literally took me 5 minutes to check all 217 files that way. If AA could add that kind of functionality to the Marker List window, or if I'm just completely missing something (very possible) I would REALLY be happy.
    Any ideas?
    Thanks again! Happy New Years again!
    Ken

    Wild Duck,
    That doesn't quite do what I need. My end-product is 271 (used to be 116) separate files created from one large file. That large one is made up of WAY more than 271 (the VO actor records different versions of some commands, makes mistakes, etc.).
    So I need the ability to listen to each marker, and then be able to delete it if need be.
    The Playlist makes this impossible in two ways. It only has 2 options for hearing each marker, and neither option allows me to delete that marker after I've heard it. It either plays them all back-to-back without stopping, or it plays each as you click the "Move Down" button. That last one would be great if it showed me which marker was playing! But it doesn't, so there is no way for me to know which marker number I just heard, nor can I delete that marker after I hear it.
    Sigh.
    Thanks for the tip though:).
    Ken

  • Af:query : Delete duplicate records from results manually

    Hi
    I have an ADF page with af:query on a view object. I have created a view criteria to choose a few attributes from the view object.
    The view object is created manually using a SQL query, where the query has joins to various other tables (including outer joins).
    On submit, due to one-to-many relationships in the joins, I am getting duplicate rows in the results.
    I am currently using JDeveloper 11.1.1.7.0 (we can't upgrade to later versions due to other constraints of the project). I thought the property 'Selected in Query = false' on the view object attribute would fix this problem (i.e. eliminate the duplicate rows), but 11.1.1.7.0 does not seem to support this option.
    Hence, I was thinking of manually deleting the duplicates from the results of the af:query before displaying them to the user.
    Please let me know if there is a better way to solve the problem; if not, how can I manually remove duplicates from the result set before displaying the results?
    I created a new QueryListener method to delete the duplicates by executing the view object, but it returns all the records without applying the criteria. Maybe I am doing something wrong.
    Please suggest the best way.
    Thanks
    Pradeep

    Hi Frank,
    I do have a DISTINCT in my SQL, but due to the 1->M joins I get duplicate rows. I can avoid them only if I can unselect those attributes in the SELECT.
    I would like to display the attributes to the user for choosing criteria (which adds a predicate to the SQL), but I would like to unselect the 1->M attributes from the results, so that I get distinct rows, since I have DISTINCT in the SQL.
    Is there a way to restrict duplicate rows?
    Thanks
    Pradeep

  • Error while updating or deleting a record

    Hi to all...
    I have created a table group_master with 2 fields, group_id and group_name.
    The user specifies a group_name in a text box and presses Save. Before committing, a sequence number is generated
    inside a form procedure and inserted.
    The insert is working fine.
    But when I select a record, i.e. a group_name, change its contents and commit, it does not commit.
    When I checked the display error it showed ORA-01400: cannot insert NULL values into the table.
    I am not inserting a value here; I am only updating the group name.
    Even when I delete a record it shows the same error.
    Can anyone help to solve this?
    Thanks in advance.

    I have a WHEN-BUTTON-PRESSED trigger which shows the LOV containing group_id and group_name:
    a := show_lov('lov27');
    if a then
      go_block('group_master');
      execute_query;
    end if;
    It correctly fetches the selected data, but it prompts a message box asking "Do you want to save the record?".
    If I ignore it, modify the group_name and save, it still shows "unable to insert null value".
    I just want a simple way to solve this:
    I want to fetch that particular record and modify or delete it.
    group_id and group_name cannot be null.
    I will be using buttons to add, delete, update and save records, and an LOV to fetch records for modification or deletion.
    How do I avoid the message box which prompts when I call the LOV with execute_query?
    * I tried KEY-LISTVAL but I didn't get any solution; I will try again.
    Thanks for your replies.
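
    One common way to get rid of the "Do you want to save the changes?" prompt is to discard the pending changes in the block before re-querying. This is only a sketch, assuming the unsaved changes in GROUP_MASTER can safely be thrown away at that point (if they must be kept, call COMMIT_FORM first instead):

    -- Sketch only: clear pending changes so Forms does not prompt before EXECUTE_QUERY.
    DECLARE
      a BOOLEAN;
    BEGIN
      a := SHOW_LOV('lov27');
      IF a THEN
        GO_BLOCK('group_master');
        CLEAR_BLOCK(NO_VALIDATE);  -- discard unsaved changes instead of asking
        EXECUTE_QUERY;
      END IF;
    END;

    The ORA-01400 on update/delete is a separate issue; it suggests that something is still firing an INSERT with a NULL group_id during the commit, so it's worth checking whatever PRE-INSERT or procedure logic generates the sequence number.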

  • After I watch a podcast I delete it. When I go back in the next day it shows 500-600 episodes I have to delete manually. So, 2 questions: is there a way to stop this from happening, and is there a way to delete all? This is getting frustrating.

    After I watch a podcast I delete it. When I go back in, there are 500-600 episodes and I have to delete each one manually. Is there a way to stop this from happening, or is there a way to delete them all at once? It's getting very frustrating.

    Had all these issues.  Bit the bullet and downloaded the Downcast app ($1.99).  Much better experience and a lot more features.  Sad.

  • Can some one tell me a better way...

    Hi All..
    I work on a web application where I create user IDs for people logging into my application, and here I have a small problem. This is what I am currently doing:
    When I create a new user, I assign a new user ID to the user and store all their info.
    All our user IDs are stored in the User_ID table and the user info in the User_Info table.
    First I get the max user ID from the User_ID table:
    int iuserid = select max(User_ID) from User_ID;
    Then I add 1 to this and assign it to the new user:
    iuserid = iuserid + 1;
    insert into User_id values(iuserid, <ssn> );
    Then I store all the user info in the User_info table:
    insert into User_info(iuserid, <first_name>, <last_name>, ...);
    Both these SQLs are executed as a transaction.
    The problem that I have:
    It works fine in a normal environment and in my testing.
    But when the load increases, another user can read the max User_ID from the User_ID table before I insert into User_Info, and then the two conflict.
    I have seen occurrences of User_Info storing the info of a different user against a User_ID when many people are accessing the app at the same time.
    Can some one tell me a better way to handle this scenario..
    Appreciate all you help..
    TIA..
    CK

    Hi,
    Assuming that the requirement for user_id is only uniqueness (a primary-key requirement), not that the values be consecutive,
    perhaps you can try this:
    1) Create a table to keep the max ID for each table's key, e.g.
    create table key_id_lookup (
      key_name char(n),
      current_value number(m)
    )
    where n and m are the sizes of the fields.
    2) For each creation of an entry in the user_id table, look up the value in key_id_lookup and increment it after retrieval, e.g.
    current_id = select current_value from key_id_lookup where key_name = 'user_id_key';
    current_id += 1;
    update key_id_lookup set current_value = current_id where key_name = 'user_id_key';
    This is similar to the use of a sequence, if your database supports it.
    3) Continue with the creation of the records in the other tables. Now take the timestamp of creation and append the current_id obtained, e.g. timestamp = 132456872; with m = 5, user_id = 132456872 * 100000 + current_id.
    This should give you a unique key at creation time.
    There are other, better ways of resolving this (like locking the table for the update operation, which affects performance, etc.), depending on the features supported by the database in use.
    Hope this can help as a last resort if other options are not available.
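
    If your database supports sequences (Oracle, for example; the original post doesn't say which database is in use), the cleanest fix is to let the database hand out the IDs atomically instead of doing SELECT MAX()+1 in the application. A minimal sketch, with the column names guessed from the inserts above:

    -- Sketch only: a sequence removes the read-then-increment race entirely.
    create sequence user_id_seq start with 1 increment by 1;

    insert into User_ID (user_id, ssn)
    values (user_id_seq.nextval, :ssn);

    insert into User_Info (user_id, first_name, last_name)
    values (user_id_seq.currval, :first_name, :last_name);

    NEXTVAL and CURRVAL are per session, so two users creating accounts at the same time can never see each other's ID, which is exactly the conflict described above.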

  • A better way to activate an LOV

    I want to allow users to activate the LOVs and run queries from them without having to press the Enter Query button, then move their cursor to the appropriate text box, then press Ctrl+L, then press the Execute Query button. Is there some way to fire an LOV from a button press, or some simpler way than what I have right now?

    That's really a good way.
    Originally posted by Steven Lietuvnikas:
    I follow what you're saying, but I think I found a better way to do it.
    I made a button that says Search on it, and the WHEN-BUTTON-PRESSED trigger is simply ENTER_QUERY.
    This then moves the focus to the first item in the record block.
    I then made a WHEN-NEW-ITEM-INSTANCE trigger on the first item in the record block that says:
    DECLARE
      DUMMY BOOLEAN;
    BEGIN
      IF (:system.Mode = 'ENTER-QUERY') THEN
        DUMMY := SHOW_LOV('LOV_PERSONS', 15, 10);
        EXECUTE_QUERY;
      END IF;
    END;
    This works well.
