Data purging in 11i

Hi All,
I am new to Oracle 11i Apps. I would like to know:
1. Which tables or logs need to be purged on a periodic basis?
2. Why do I need to purge those tables or logs?
3. Do I need to do it manually, or do scripts already exist on the server?
4. Are there any criteria for purging those tables or logs?
Please help me.

Hi,
Please see these notes:
Reducing Your Oracle E-Business Suite Data Footprint using Archiving, Purging, and Information Lifecycle Management [ID 752322.1]
Purging Strategy for eBusiness Suite 11i [ID 732713.1]
How Does Archive & Purge Functionality Work in 11i AP? [ID 205362.1]
Purge Concurrent Request and/or Manager Data Slow Performance [ID 789698.1]
Please also review this forum search:
https://forums.oracle.com/thread/search.jspa?peopleEnabled=true&userID=&containerType=&container=&q=ebs+purge
Thanks &
Best Regards,
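As a quick starting point before scheduling the seeded "Purge Concurrent Request and/or Manager Data" (FNDCPPUR) program, you can check how much concurrent-processing history has accumulated. A minimal sketch (run as the APPS user; the 30-day threshold is only an example, adjust it to your retention policy):

```sql
-- Count concurrent request history older than 30 days.
-- FND_CONCURRENT_REQUESTS is one of the tables FNDCPPUR purges;
-- unbounded growth here slows the concurrent managers and forms.
SELECT COUNT(*) AS old_requests
  FROM fnd_concurrent_requests
 WHERE requested_start_date < SYSDATE - 30;
```

If the count is large, schedule FNDCPPUR from the System Administrator responsibility rather than deleting from the tables directly.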

Similar Messages

  • Data Migration from 11i to R12 Global: open POs, lines, receipts & on-hand upload. Is it possible to upload the on-hand qty while overriding the receipts loaded against open PO lines?

    Hi Friends,
    We are in a data migration phase from 11i to R12.
    I discussed this with the client, and they want an extraction of all open POs generated from 01 Jan 2014 to date in 11i.
    The condition for open POs is: PO qty - received qty >= 0.
    A critical example of an open PO: PO no. 10 has 4 lines; 3 lines have been received in full, and 1 line is partially received (say 50 out of 100).
    In this case, the client wants PO no. 10 entered in R12 as an open PO with all 4 lines: complete receipts for the 3 full lines, and a partial receipt of 50 for the 4th line.
    The question is: if we upload the on-hand qty first, and then the open POs and receipts, the receipts will increase the on-hand qty in the new system (a mismatch of on-hand quantities between 11i and R12).
    Is it possible to upload the on-hand qty while overriding the receipts loaded against the open PO lines?
    Or please advise the best solution.
    Thanks & Regards
    Giri


  • LOG_FILE_NOT_FOUND when running cleaner manually after some data purge

    I hit a LOG_FILE_NOT_FOUND error when running the cleaner manually after some data purging. I searched the forum and found someone who faced the same issue before, but I cannot find any clue on how to fix it. Below is the error trace, followed by our configuration.
    Caused by: com.sleepycat.je.EnvironmentFailureException: (JE 4.1.6)
    Environment must be closed, caused by:
    com.sleepycat.je.EnvironmentFailureException: Environment invalid because of previous exception: (JE 4.1.6) /scratch/tie/thirdeye/index/data-store fetchTarget of 0x50f/0x3fb9dd6 parent IN=368491717 IN class=com.sleepycat.je.tree.IN lastFullVersion=0x510/0x2ca7d18 parent.getDirty()=false state=0 LOG_FILE_NOT_FOUND: Log file missing, log is likely invalid. Environment is invalid and must be closed.
    at com.sleepycat.je.EnvironmentFailureException.wrapSelf(EnvironmentFailureException.java:196)
    at com.sleepycat.je.dbi.EnvironmentImpl.checkIfInvalid(EnvironmentImpl.java:1439)
    at com.sleepycat.je.Environment.checkEnv(Environment.java:2117)
    at com.sleepycat.je.Environment.checkpoint(Environment.java:1440)
    at com.oracle.thirdeye.datastore.DataStoreManager.clean(DataStoreManager.java:402)
    at com.oracle.thirdeye.infostore.InfoStoreManager.clean(InfoStoreManager.java:301)
    ... 11 more
    Caused by: com.sleepycat.je.EnvironmentFailureException: Environment invalid because of previous exception: (JE 4.1.10) /scratch/tie/thirdeye/index/data-store fetchTarget of 0x50f/0x3fb9dd6 parent IN=368491717 IN class=com.sleepycat.je.tree.IN lastFullVersion=0x510/0x2ca7d18 parent.getDirty()=false state=0 LOG_FILE_NOT_FOUND: Log file missing, log is likely invalid. Environment is invalid and must be closed.
    at com.sleepycat.je.tree.IN.fetchTarget(IN.java:1332)
    at com.sleepycat.je.tree.IN.findParent(IN.java:2886)
    at com.sleepycat.je.tree.Tree.getParentINForChildIN(Tree.java:881)
    at com.sleepycat.je.tree.Tree.getParentINForChildIN(Tree.java:809)
    at com.sleepycat.je.cleaner.FileProcessor.findINInTree(FileProcessor.java:1152)
    at com.sleepycat.je.cleaner.FileProcessor.processIN(FileProcessor.java:1090)
    at com.sleepycat.je.cleaner.FileProcessor.processFile(FileProcessor.java:538)
    at com.sleepycat.je.cleaner.FileProcessor.doClean(FileProcessor.java:241)
    at com.sleepycat.je.cleaner.Cleaner.doClean(Cleaner.java:463)
    ------------Configurations-------------------------
    EnvironmentConfig.ENV_RUN_CLEANER -> false
    EnvironmentConfig.CHECKPOINTER_HIGH_PRIORITY -> true
    EnvironmentConfig.CLEANER_EXPUNGE -> false
    Any hints are appreciated. I'm also working for Oracle CDC; feel free to call me at 861065151679 or drop me an email at [email protected] so that we can talk in more detail.
    Anfernee

    Anfernee, I will contact you via email.
    --mark

  • Data purge utility in OIM 9.1.0.2

    Hi All,
    Is anybody aware of any data purge procedures/steps/utilities in OIM?
    I have read that the following tables are used for audit purposes; they hold user profile snapshots (taken whenever user data changes). The reference document is http://docs.oracle.com/cd/E10391_01/doc.910/e10365/useraudit.htm
    UPA
    UPA_USR
    UPA_FIELDS
    UPA_GRP_MEMBERSHIP
    UPA_RESOURCE
    AUD_JMS
    UPA_UD_FORMS
    UPA_UD_FORMFIELDS
    I guess these tables can be truncated (after taking a backup) without any risk?
    Please suggest.
    Ritu

    More about the script is here: docs.oracle.com/cd/E14899_01/doc.9102/e14763/bulkload.htm#CHDHAFHC
    The OIM server should be up and running while the script runs.
    Regards,
    GP
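For the UPA audit tables above, a common pattern (not an official Oracle procedure; confirm with support before using it in production) is to archive the rows first and then truncate. A sketch, where the backup table name is illustrative:

```sql
-- Sketch only: archive, then truncate, one OIM audit table.
-- UPA comes from the audit documentation cited above; repeat for
-- the child tables (UPA_FIELDS, UPA_RESOURCE, ...) as needed.
CREATE TABLE upa_backup AS SELECT * FROM upa;

-- TRUNCATE fails if enabled foreign keys reference the table,
-- so truncate child tables before their parents.
TRUNCATE TABLE upa;
```

Keep the backup tables (or an export dump) until you are certain no compliance process still needs the audit history.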

  • Oracle EBS Data Purging and Archival

    Hi,
    I would like to know if there is any tool available in market for Oracle EBS data purging and Archival?
    Thanks,

    Yes, there are 3rd-party tools available which will apply a set of business rules (e.g. "all data older than Nov 1, 2007") across the various Oracle modules implemented at a customer site.
    You can go to Oracle.com and look under partner validated integration solutions. At the moment there are two partners offering such integrated solutions:
    Solix EDMS Validated Integration with 12.1
    IBM Optim Data Growth Solution
    The only other option is to hire OCS for a custom-developed solution.

  • Sliding window for historical data purge in multiple related tables

    All,
    It is a well-known question: how to efficiently back up and purge historical data based on a sliding window.
    I have a group of tables which all have to be backed up and purged based on a sliding time window. These tables have FKs relating them to each other, and those FKs are not necessarily the timestamp column. I am considering partitioning all of these tables on the timestamp column, so I can export the out-of-date partitions and then drop them. The price I pay for this design is that the timestamp column is duplicated many times across the parent, child, and grandchild tables (even though the value is the same), because I have to partition every table on that column.
    It is very much like the statspack tables: one stats$snapshot table and many child tables storing the actual statistic data. I am wondering how statspack.purge does this, since using DELETE statements is inefficient and time-consuming. In the statspack tables, snap_time is stored only in stats$snapshot, not in its child tables, and they are not partitioned; I guess the procedure uses DELETE statements.
    Any thoughts on other design options? Or how would you optimize the backup and purge of statspack historical data? Thanks!

    hey oracle gurus, any thoughts?
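The partition-based approach described above can be sketched roughly as follows (Oracle syntax; the table, column, and partition names are illustrative, not from any real schema):

```sql
-- Range-partition a child table on the duplicated timestamp column,
-- then age out old data by dropping whole partitions instead of
-- running redo-heavy DELETE statements.
CREATE TABLE stats_child (
  snap_id   NUMBER       NOT NULL,
  snap_time DATE         NOT NULL,   -- duplicated from the parent
  metric    VARCHAR2(30),
  value     NUMBER
)
PARTITION BY RANGE (snap_time) (
  PARTITION p_2023_q1 VALUES LESS THAN (DATE '2023-04-01'),
  PARTITION p_2023_q2 VALUES LESS THAN (DATE '2023-07-01')
);

-- Back up the oldest partition first (e.g. export it, or exchange it
-- with a standalone table), then drop it; the drop is a fast
-- metadata operation.
ALTER TABLE stats_child DROP PARTITION p_2023_q1 UPDATE INDEXES;
```

The design trade-off the poster describes is real: duplicating the timestamp into every child table costs storage, but buys partition-wise purging across the whole table group.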

  • Delete or Truncate statement for data purging

    Hi,
    I am writing a stored procedure to purge data from a table every month. There are no constraints on the table's columns except the primary key.
    I am using a DELETE statement, since it records an entry in the transaction log for each deleted row, but it is a slow process compared to a TRUNCATE statement.
    Can you please suggest which I should choose (DELETE or TRUNCATE) as per best practice?
    Thanks
    Sandy
    SandyLeo

    If you want to delete all rows in the table, use TRUNCATE; otherwise I would suggest the technique below.
    -- SQL 2008 and onwards
    insert into Archive..db
    select getdate(), d.*
    from (delete from db1
          output deleted.*
          where col = <value>
         ) d
    go
    Best Regards, Uri Dimant (SQL Server MVP)

  • LMS 3.2 Data purge

    Hello all,
    We have installed CS 3.2 and RME on one server, on the C: drive. The problem is that the C: drive is now full, with only MBs of free space left. When I went through the CiscoWorks folder, I found that the files under CSCOpx/files/rme/syslogpurgeddata are occupying GBs of space, so I am planning to delete them.
    My concern: if I delete these files, will it crash my CS and RME? And are these files just the syslogs received from devices? Correct me if I am wrong.
    Thanks
    Thanks

    You are correct: syslogfirst.db, syslogsecond.db, and syslogthird.db are files you should never delete manually; you need to contact TAC to get the procedure and the necessary Java file to reclaim the disk space.
    But, as Afroj mentioned, it is safe to delete the files in NMSROOT/files/rme/syslogpurgeddata.
    Syslog consumes a lot of disk space when many messages are coming in. Firewall logging sent to the syslog server could be a reason, as could roaming messages from wireless APs, or it may just be "normal" behaviour; it depends on the network.
    Reinitialization of RME will remove all data from RME: device configurations, inventory, all your RME jobs, and all historical data. I would definitely prefer contacting TAC.
    But if you have had the syslog purge set to 7 days from the beginning, I assume the volume of your syslog messages is huge, and if that is true I doubt you could reduce the size of syslogfirst.db, etc.
    What is the size of the files under /files/rme/syslogpurgeddata?
    [edit]
    As far as I know, DBSpaceReclaimer.class is included in LMS 3.2, but I currently cannot verify it. It will remove the three dataspaces (syslogfirst.db, etc.) and, if I remember correctly, will also wipe out the current content of the RME syslog db.
    The header of this thread mentions LMS 3.2, but in your last post you wrote CS 3.2 and RME 4.2; both belong to LMS 3.1. Verify your version under Common Services > Software Center > Software Update; an overview of all LMS applications and versions can be found here: Cisco Works - Applications and Versions.

  • Oracle Apps data Purge

    Hi Apps Gurus,
    We have a requirement to delete one operating unit's data from the Oracle Projects, SCM, and Finance modules.
    There are huge transaction volumes, interfaced to GL and other modules (e.g. Payables invoices have been transferred to GL).
    There are standard concurrent programs and purge functionalities; however, they check for dependencies. We would like to delete the data completely, including from the GL module.
    Regards
    Dharam

    Hi;
    You cannot delete a module's data from EBS; if you do, consistency will be destroyed. You cannot delete data from the back end either, because that can also cause many problems.
    If you need to purge huge amounts of data, you should log an SR and confirm the approach with Oracle Support.
    Regards
    Helios

  • CRM Data Purge

    Hi
    Could anyone point me in the direction of permanently purging CRM data in e-Business Suite? We have now accumulated significant volumes of CRM data (in millions), namely hz_parties, hz_locations, hz_party_relationships, notes, etc. We are interested in purging historic data from the system relating to a party rather than set the party to inactive.
    Any assistance/pointers would be much appreciated.
    Regards
    SH

    Hi Guys
    Many thanks for all your responses. I will raise an SR for Oracle to confirm that parties cannot be deleted, but I take the point about these being master entities in the system. I can go with the option of inactivating the parties, thereby making them unavailable on the system.
    regards
    SH

  • How to organize partitions and chronological data purging

    I need some help with automatic table purging (10g).
    How do I organize a new table so that it can be chronologically purged?
    It would be helpful if good links were posted.
    Thanks.
    bol
    Message was edited by:
    Bolev

    Therefore ... please expand ... what exactly are you trying to accomplish?
    And here is the answer to your question:
    > Therefore, this kind of purge now gets filed under 'archiving old data' in this tired old brain of mine. Archive to some alternate storage, perhaps reformatted, and the source records eliminated. AKA 'move data to near-line or alternate storage'.
    That is exactly what it is, and in my understanding there is no conflict with the term "chronologically purged" (unfortunately English is my second language :)
    Got it. And now, perhaps, you understand why we needed the explanation.
    > Actually this is not a simple issue, and I am trying to accomplish it the most efficient way. I guess archive-then-truncate will be the best solution in most cases.
    You are quite correct. It is not a simple issue. And we still need more information, simply because there is no 'one size fits all' solution. A significant amount is determined by the table and index statistics, as well as by how many indexes there are.
    So the next set of questions starts with: for each table, describe
    - are there foreign keys (dependents must be handled first);
    - the general table storage statistics (mainly rows per block);
    - is the partitioning option involved;
    - the key, or foreign key, which you can use to purge;
    - is that key indexed;
    - how many other indexes and constraints will be impacted by the purge;
    - roughly how much (percentage) of the table will be purged each time;
    - how often?
    > As I noted in my original post, I was also asking about good related references (links), if any exist.
    Perhaps Niall can help. He admitted to experience <g>
    > Impressed with this board
    We try. It's sometimes also quite a madhouse.

  • Data Purge

    Hello,
    I am using SQL Server 2012 SE. I need to set up a purge job to delete data from 3 tables.
    One of the tables has 200 million records, the next has 30 million, and the third has 500,000, and I will be cleaning up at least half of each. These three tables are related to each other.
    This database is also a publisher database, and I am replicating the deletes as well.
    I have currently scripted it as below. The idea is to clean up records older than 425 days, based on time1 in the QJ table.
    SELECT ip.IPId, qj.QJId INTO #IPid_temp
    FROM QJ qj
    INNER JOIN IP ip ON qj.QJId = ip.QJId
    INNER JOIN IPFlex ipf ON ip.IPId = ipf.IPId
    WHERE qj.time1 < GETDATE() - 425
    DELETE FROM IPFlex WHERE IPId IN (SELECT DISTINCT IPId FROM #IPid_temp) -- cleaning around 60 million
    DELETE FROM IP WHERE IPId IN (SELECT DISTINCT IPId FROM #IPid_temp) -- cleaning around 15 million
    DELETE FROM QJ WHERE QJId IN (SELECT DISTINCT QJId FROM #IPid_temp) -- cleaning around 200,000
    I put the above script in a SQL job, and after 6 hours of execution it timed out. This is in my test environment.
    Error: Time-out occurred while waiting for buffer latch type 2 for page (1:13538335), database ID 7.
    Since this is the first time running it, I have to worry about the run time; I assume subsequent runs should be much faster, since there won't be much data (I will be deleting only one day's worth of data each day). The next step of the job is an index rebuild, and that finishes the job.
    Is there a better way of handling or performing this purge activity?
    Thanks for the inputs, experts.

    >> One of the tables have 200 Million records [sic] and then the next table has 30 Million records [sic] and then the third table has 500000 records [sic] and I will be cleaning up at least half in each of these tables. <<
    Rows are nothing like records. It is always scary when a poster cannot get a basic term right; would you trust a doctor who talks about checking your gills for parasites?
    We usually archive old data, but you checked with the lawyers, so you know this is okay, right?
    >> These three tables are in a relationship. <<
    So, where is the DDL and the DRI actions that enforce this relationship? Does your boss make you debug code without showing it to you? If you do not have such constraints, then you do not have a relational database.
    Your data element names violate ISO-11179 rules, and they are so short as to be useless for a maintenance programmer until he learns all of the cryptic names. This usually means that you are a FORTRAN or BASIC programmer who has not un-learned his old language.
    >> The idea is to cleanup records [sic] older than 425 days based of "time1" in QJ table. <<
    Wow! What does this table look like? Why do you think that "QJ" is a valid table name? Have you ever heard of the ISO-11179 Standards? Why did you alias a table to its own name?
    I would have used DATE for this column and never used a reserved word like TIME (a data type) or put a fake array index on it. We have no idea what this data element is, so let's call it "expiry_date".
    Why did you create a fake scratch tape with the proprietary Sybase SELECT..INTO.. syntax? It looks like the ANSI/ISO Standard SQL singleton, but does not work that way.
    INSERT INTO Fake_Scratch_Tape -- corrected bad code!!
    SELECT IP.ip_id, QJ.qj_id
     FROM QJ, IP, IPFlex AS IPF
     WHERE IP.ip_id = IPF.ip_id
      AND QJ.qj_id = IP.qj_id
      AND expiry_date < DATEADD (DAY, -425, CURRENT_TIMESTAMP);
    Because you did not follow Netiquette, we have to guess at everything. But we cannot make a guess without any specs! Try this skeleton and fix it. I am sure it is wrong, but you can see the REFERENCES clauses and maybe tell us what we need to know.
    CREATE TABLE QJ
    (qj_id CHAR(10) NOT NULL PRIMARY KEY,
     ...);
    CREATE TABLE IP
    (ip_id CHAR(10) NOT NULL PRIMARY KEY,
     qj_id CHAR(10) NOT NULL
       REFERENCES QJ(qj_id)
       ON DELETE CASCADE,
     ...);
    CREATE TABLE IPFlex
    (ip_id CHAR(10) NOT NULL
       REFERENCES IP(ip_id)
       ON DELETE CASCADE,
     qj_id CHAR(10) NOT NULL
       REFERENCES QJ(qj_id)
       ON DELETE CASCADE,
     PRIMARY KEY (ip_id, qj_id),
     ...);
    --CELKO--
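On the performance side of the original question, the usual fix for the buffer latch timeout is to delete in small batches inside the job rather than in three giant statements. A hedged T-SQL sketch (table and column names follow the original post; the 10,000 batch size is a starting point to tune):

```sql
-- Delete child rows in batches until none remain. Small transactions
-- keep log growth, latch contention, and replication latency down.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) ipf
      FROM IPFlex AS ipf
      JOIN IP ON IP.IPId = ipf.IPId
      JOIN QJ ON QJ.QJId = IP.QJId
     WHERE QJ.time1 < DATEADD(DAY, -425, GETDATE());

    SET @rows = @@ROWCOUNT;
END;
-- Repeat the same loop for IP, then QJ, in that child-first order.
```

Running the loop during a quiet window, and deferring the index rebuild until all three tables are done, keeps each transaction short enough that readers and the replication log reader are not starved.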

  • Data Purge PD Data in Sandbox Development Client

    I am trying to establish a Sandbox/Demo client in our development environment for the exclusive use of HR development practitioners.
    I want it to be a copy of the current development client, but without data, i.e. something like a blank canvas. However, my BASIS team is struggling to deliver this. They can eliminate data from the PA structure as part of the copy process, but they have not yet found a way to eliminate data from the PD structure. I find myself having to tediously break down every relationship within an org unit before I can delete the lowest denominators within the PD structure.
    I wonder if you have ever encountered this problem? Perhaps you know someone who has? How did you, or someone you know, overcome it?
    Regards,
    Dan Carr

    Hi Jyothi,
    You can follow the above instructions and inform your company's BASIS team.
    There is no need to worry about it, but be very sure before deleting any data.
    Also, always take a backup of the data before deleting anything from SAP.
    One more option: you can copy the data from the quality system and upload it.
    I hope this will help you.
    Regards,
    Rahul

  • How to purge EBS data without deleting any setup or configuration details

    Today I got a very interesting requirement. We have a one-year-old instance. Senior management wants to remove/purge/delete all transactional data, while keeping all the EBS R12.1 setup and configuration information. I do not have any idea how I can achieve this. Please help me in this regard.

    Hi Su;
    Please see:
    Purging Strategy for eBusiness Suite 11i [ID 732713.1]
    Also see:
    http://oracleappstechnology.blogspot.com/2008/12/in-built-data-purge-concurrent-programs.html (all the 11i purge programs are listed here)
    In r12 What is use of Purge log and Closed system alerts
    Purge Debug Log And System Alerts Performance Issues
    Regard
    Helios
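To see which seeded purge programs an instance already has, you can query the concurrent program definitions. A discovery-only sketch (run as APPS; it lists programs, it does not purge anything):

```sql
-- List seeded concurrent programs whose name mentions "Purge".
SELECT fcpt.user_concurrent_program_name,
       fcp.concurrent_program_name
  FROM fnd_concurrent_programs_tl fcpt
  JOIN fnd_concurrent_programs fcp
    ON fcp.concurrent_program_id = fcpt.concurrent_program_id
   AND fcp.application_id        = fcpt.application_id
 WHERE fcpt.language = 'US'
   AND UPPER(fcpt.user_concurrent_program_name) LIKE '%PURGE%'
 ORDER BY fcpt.user_concurrent_program_name;
```

Each module's purge program enforces its own dependency checks, which is why transactional data should be removed through these programs rather than by deleting from the base tables.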

  • How to purge BPEL data on 10.1.3.1

    Hi Experts,
    I want to purge old data from the BPEL dehydration repository.
    a) I read the notes on Metalink, but they talk about 10.1.2; mine is 10.1.3.
    b) I also read the URL below:
    http://orasoa.blogspot.com/2007/02/delete-bulk-bpel-instances.html
    c) Can someone please suggest a supported and proven method for data purge on 10.1.3.1?
    I'm still reading the archives of this forum... no luck so far.
    In fact, I'm also looking for details of the tables which I can purge.
    Thanks a lot!
    Natrajan

    Hi,
    here are the comments from the PL/SQL package collaxa:
    * procedure insert_sa
    * Stored procedure to do a "smart" insert of a scope activation message.
    * If a scope activation message already exists, don't bother to insert
    * and return 0 (this process can happen if two concurrent threads generate
    * an activation message for the same scope - say the method scope for
    * example - only one will insert properly; but both threads will race to
    * consume the activation message).
    * procedure insert_wx
    * Stored procedure to insert a retry exception message into the
    * wi_exception table. Each failed attempt to retry a work item
    * gets logged in this table; each attempt is keyed by the work item
    * key and an increasing retry count value.
    * procedure update_doc
    * Stored procedure to do a "smart" insert of a document row. If the
    * document row has not been inserted yet, insert the row with an empty
    * blob before returning it.
    * procedure delete_ci
    * Deletes a cube instance and all rows in other Collaxa tables that
    * reference the cube instance. Since we don't have referential
    * integrity on the tables (for performance reasons), we need this
    * method to help clean up the database easily.
    * procedure delete_cis_by_domain_ref
    * Deletes all the cube instances in the system. Since we don't have
    * referential integrity on the tables (for performance reasons), we
    * need this method to help clean up the database easily.
    * procedure delete_cis_by_pcs_id( processId )
    * Deletes all the cube instances in the system for the specified process.
    * Since we don't have referential integrity on the tables
    * (for performance reasons), we need this method to help clean
    * up the database easily.
    * procedure insert_document_ci_ref
    * Stored procedure to do a "smart" insert of a document reference.
    * If a document reference already exists for a cube instance, don't bother to insert
    * and return 0.
    */
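Based on those comments, a purge would call the delete procedures from that package. A heavily hedged sketch: the package and procedure names come from the comments quoted above, but the parameter names and exact signatures are assumptions that vary by 10.1.3.x patch level, so inspect the package spec in the dehydration-store schema first.

```sql
-- Illustrative only: delete one cube instance (and its dependent
-- rows) via the collaxa package described above. Check the real
-- signature first, e.g. with:  DESC collaxa
BEGIN
  collaxa.delete_ci(p_cikey => 12345);  -- parameter name is an assumption
END;
/
```

Take a backup (or export) of the dehydration schema before any bulk delete, since these tables have no referential integrity to protect you from a partial purge.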
