Archive process using partition & datapump

Oracle 10.2
I have a large table that has not been maintained for quite some time. It has over a million and a half records dating back to 2009. I'm working on a one-time archive-and-purge process that will get it into a state where I can start a regular archive-and-purge cycle. This is in production, and I can afford only minimal downtime.
My thoughts are to:
1. Data Pump out the table with only the records where history_date > 12-31-2010, and hold the dump file.
2. Create a new table as a select from the current table, partitioned.
3. Run:
ALTER TABLE schema.new_table
EXCHANGE PARTITION CURR_PART
WITH TABLE schema.old_table
EXCLUDING INDEXES
WITHOUT VALIDATION;
4. Data Pump the held records back into the schema with TABLE_EXISTS_ACTION=REPLACE.
This will give me the original table with only the data I want to keep, and also a new table that I can archive off and purge at my leisure with as little downtime to the clients as possible.
Trying to think of any gotchas. I will be performing this in test of course, but do you think this is a good way, or is there a better, faster way I hadn't thought of?
I think this will give minimal downtime and also keep archive logging to a minimum.
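For concreteness, here is a minimal sketch of steps 1-4 under assumed names (SCOTT.BIG_TAB for the production table, HISTORY_DATE for its date column, KEEP_DIR for a directory object); it is a sketch, not a tested runbook, and the Data Pump calls run from the OS shell, so they appear as comments:

-- Step 1 (OS shell): export only the rows to keep, e.g.
--   expdp scott/tiger TABLES=scott.big_tab DIRECTORY=keep_dir DUMPFILE=keep.dmp
--   with QUERY="WHERE history_date > TO_DATE('2010-12-31','YYYY-MM-DD')"
--   (putting QUERY in a parameter file avoids shell-quoting headaches)

-- Step 2: build a range-partitioned, empty shell table with BIG_TAB's columns.
CREATE TABLE scott.big_tab_arch
PARTITION BY RANGE (history_date)
  (PARTITION curr_part VALUES LESS THAN (MAXVALUE))
AS SELECT * FROM scott.big_tab WHERE 1 = 0;

-- Step 3: swap segments. BIG_TAB's rows now live in CURR_PART and BIG_TAB
-- itself is empty; the exchange is a data dictionary update, so almost no redo.
ALTER TABLE scott.big_tab_arch
  EXCHANGE PARTITION curr_part
  WITH TABLE scott.big_tab
  EXCLUDING INDEXES
  WITHOUT VALIDATION;

-- Step 4 (OS shell): reload the kept rows; REPLACE drops the now-empty table
-- and recreates it from the dump.
--   impdp scott/tiger TABLES=scott.big_tab DIRECTORY=keep_dir DUMPFILE=keep.dmp
--     TABLE_EXISTS_ACTION=REPLACE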

DanceRat wrote:
Okay: I currently have a large table that is not partitioned and needs to remain that way. There is nothing to partition on that would serve the clients (only me). The reason it is so large is that it has not been groomed or pruned in an extremely long time. I need to archive and purge records from that table with the caveats that 1) there is no huge archive logging, 2) the table goes back to the clients with a minimum of downtime, and 3) it's production, so I'm not going to take it out of archivelog mode.
Requirements 1 and 3 are mutually exclusive. Regardless of how you archive the data you don't want to keep, at some point you are going to have to either delete a bunch of rows or create a table and insert a bunch of rows. Either will generate redo.
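One way to put numbers on that redo cost while testing is to sample your own session's 'redo size' statistic before and after each candidate operation (standard v$mystat/v$statname views; the query is generic, not from this thread):

-- redo generated by the current session so far, in bytes
SELECT s.value AS redo_bytes
FROM   v$mystat s
       JOIN v$statname n ON n.statistic# = s.statistic#
WHERE  n.name = 'redo size';

Run it once before and once after the delete or insert; the difference is the redo that operation generated.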
So, back to the original thought of datapumping the metadata and the newest of the data that needs to remain in the table.
Moving the data/table over to a partitioned table using exchange partition
Replacing the original table using the held Data Pump metadata and data.
At my leisure, being able to then archive off the newly partitioned data and put it into media that my clients can keep for future reference.
So hope that is clearer.
Edited by: DanceRat on Feb 16, 2012 12:02 PM

Why not just do a SELECT of the data you want to purge, spooling the results to a text file? That file will then constitute a fixed-width text file that can be read by almost anything, now or in the future. Then, using the same criteria, start deleting the rows from the table. That requires zero downtime. If you are concerned about massive archive logging, find some criterion by which you can delete the rows in chunks and spread the work out over several days. Crude, but it seems to meet all of your criteria. And for a one-off operation, crude isn't so bad.
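A rough sketch of that spool-then-delete approach, reusing the hypothetical SCOTT.BIG_TAB and HISTORY_DATE names from above (run in SQL*Plus; the 10,000-row batch size is arbitrary and should be tuned to your archivelog budget):

-- 1. Spool the doomed rows to a flat file.
SET PAGESIZE 0
SET LINESIZE 2000
SET TRIMSPOOL ON
SPOOL /backup/big_tab_pre2011.txt
SELECT * FROM scott.big_tab
 WHERE history_date <= TO_DATE('2010-12-31', 'YYYY-MM-DD');
SPOOL OFF

-- 2. Delete the same rows in chunks, committing between batches so the
--    redo is generated (and archived) gradually instead of in one burst.
BEGIN
  LOOP
    DELETE FROM scott.big_tab
     WHERE history_date <= TO_DATE('2010-12-31', 'YYYY-MM-DD')
       AND ROWNUM <= 10000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
END;
/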

Similar Messages

  • Arch process using more swap space

    Hi,
    DB : 11.2
    OS : Aix 6
    While taking an archive backup via OEM, only 9 archive processes are consuming more swap space.
    The parameter is set to LOG_ARCHIVE_MAX_PROCESSES=15 at the DB level.
    Why do only 9 archive processes use swap space at the OS level?
    How do I determine how many archive processes are needed?
    Thanks & Regards,
    VN

    user3266490 wrote:
    Hi,
    Thanks for your reply.
    Physical memory is 120 GB, swap space 45 GB.
    We get the top swap-consuming PIDs from the OS team, then check v$session against those PIDs. We found that the PIDs belong to archive processes.
    Nearly 25 log switches per hour are happening; the log files are 8 GB, in 8 groups.
    Then the redo log files are too small by around a factor of six. Increase the redo log file size to 50 GB to reduce the number of switches per hour to around 3 or 4.
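    For reference, the switch rate quoted above can be confirmed straight from v$log_history (a generic query, not part of the original post). The factor of six is simple arithmetic: 25 switches/hour x 8 GB is about 200 GB of redo per hour, and 200 GB in 50 GB logs is about 4 switches per hour.
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hr,
           COUNT(*)                               AS switches
    FROM   v$log_history
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY hr;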

  • Can data be purged without the archiving process in BW system?

    Dear all,
    Can anybody give me some comments on the data purging process? After checking some materials about BI data archiving, I am wondering:
    can data be purged while skipping the archiving process? As long as there is no requirement to retain the historical data, it should be good for performance reasons to purge some useless data.
    My doubt to this question is:
    1. Will there be any inconsistency if we do the data purging manually, outside the SAP standard process?
    2. Will there be any particular issues for DSO and cube data purging, since there should be some business-related dependencies?
    It would be highly appreciated for sharing the best practice in data purging or archiving in BW system!
    thanks in advance!
    Jennifer

    Hi,
    as others have pointed out, you should never delete data by any means other than the SAP standard procedures (request deletion, selective deletion, archiving...); in other words, do not delete data directly from the BW DB tables!! This would definitely lead to big trouble... I am not sure what you mean exactly by "manually"...
    Deleting data (purging and/or archiving) should always be considered carefully:
    - Is the cube/DSO a source of data for other targets (datamart)? If this is the case, then the deletion should be carefully analyzed.
    - For a DSO/ODS it is particularly important to ensure that the keys of the records to be deleted will never be loaded again into the DSO; because of the overwrite capability of these containers, that situation would obviously lead to incorrect data being posted to subsequent targets.
    Currently we've never used the archiving functionality of BW... We closely monitor the data generated by BW itself in datamart scenarios (PSA between providers can grow very fast) and have implemented a customized process for that, since it wasn't possible to archive PSA in BW 3.x...
    Most of the data stored in our providers is still used by our consumers (six years of data, some 3 TB), and we use logical partitioning extensively on the calendar year; this enables us to "move" "old" data to less expensive storage.
    hope this helps....
    Olivier.

  • Error While Deploying the BPEL Process using obant script

    Hi All,
    I am getting the following error while deploying the BPEL process using the obant script. We are using BPEL version 10.1.2.0.2. Any information in this regard will be really helpful.
    Buildfile: build.xml
    main:
    [bpelc] file:/home5102/dibyap/saravana/Test/CreditRatingService.wsdl
    [bpelc] validating "/home5102/dibyap/saravana/Test/CreditRatingService.bpel" ...
    BUILD FAILED
    /home5102/dibyap/saravana/Test/build.xml:15: ORABPEL-01002
    Domain directory not found.
    The process cannot be deployed to domain "default" because the domain directory "/opt02/app/ESIT/oracle/esit10gR2iAS/BPEL10gR2/iAS/integration/orabpel/domains/default/deploy" cannot be found or cannot be written to.
    Please check your -deploy option value; "default" must refer to a domain that has been installed locally on your machine.
    Total time: 23 seconds
    dibyap@ios5102_ESIBT:/home5102/dibyap/saravana/Test>
    Thanks,
    Saravana

    In 10.1.2.0.2 you need to create your own build.xml.
    I have found an example; it may be of some help. It calls a property file.
    cheers
    James
    <?xml version="1.0" ?>
    <!-- Run cxant on this file to build, package and deploy the
         ASB_EFT BPEL process -->
    <project name="ASB_EFT" default="main" basedir=".">
      <!-- Name of the domain the generated BPEL suitcase will be deployed to -->
      <property name="deploy" value="default" />
      <!-- What version number should be used to tag the generated BPEL archive? -->
      <property name="rev" value="1.0" />
      <!-- BPEL Best Practices properties -->
      <!-- Default properties for TARGET environments.
           CHANGE THIS FILE TO REFLECT THE TARGET ENVIRONMENT:
           either dev, test, or prod.properties -->
      <property file="ebusd.properties"/>
      <property name="env" value="${env.name}"/>
      <property name="current.project.name" value="${project.name}"/>
      <property name="target.project.name" value="${project.name}_${env}"/>
      <property name="deployment.profile" value="${env}.properties"/>
      <property name="source.development.directory" location="${basedir}"/>
      <property name="target.env.directory" location="${basedir}/deploy/${project.name}_${env}"/>
      <property file="${deployment.profile}"/>
      <property name="build.fileencoding" value="UTF-8"/>
      <!-- Prints the environment -->
      <target name="print.env" description="Display environment settings">
        <echo message="Base Directory: ${basedir}"/>
        <echo message="Deployment Profile: ${deployment.profile}"/>
        <echo message="target.env.directory: ${target.env.directory}"/>
        <echo message="Deploy to Domain: ${deployToDomain}"/>
        <echo/>
        <echo message="os.name: ${os.name}"/>
        <echo message="os.version: ${os.version}"/>
        <echo message="os.arch: ${os.arch}"/>
        <echo/>
        <echo message="java.home: ${java.home}"/>
        <echo message="java.vm.name: ${java.vm.name}"/>
        <echo message="java.vm.vendor: ${java.vm.vendor}"/>
        <echo message="java.vm.version: ${java.vm.version}"/>
        <echo message="java.class.path: ${java.class.path}"/>
        <echo/>
        <echo message="env: ${env}"/>
        <echo message="current.project.name: ${current.project.name}"/>
        <echo message="target.project.name: ${target.project.name}"/>
        <echo message="server.name: ${server.name}"/>
      </target>
      <!-- Copies the current directory structure, along with all the files,
           into target.env.directory and changes the name of the project -->
      <target name="create.environment">
        <copy todir="${target.env.directory}">
          <fileset dir="${basedir}"/>
          <filterset begintoken="@" endtoken="@">
            <filtersfile file="${deployment.profile}"/>
          </filterset>
        </copy>
        <move file="${target.env.directory}/${current.project.name}.jpr" tofile="${target.env.directory}/${target.project.name}.jpr"/>
      </target>
      <target name="main">
        <!-- The bpelc task compiles and packages BPEL processes into versioned
             BPEL archives (bpel_...jar). See the "Programming BPEL" guide for
             more information on the options of this task. -->
        <bpelc input="${basedir}/bpel.xml" rev="${rev}" deploy="${deploy}" />
      </target>
    </project>
    Here is a property file:
    project.name=ASB_EFT
    env.name=ebusd
    deployToDomain=default
    server.name=[server]
    server.port=7788
    ebusd\:7788=http://[server]:7788/
    IntegrationMailAccount=OracleBPELTest
    IntegrationMailAddress=[email]
    IntegrationMailPassword=[password]
    archivedir=[directory]
    inbounddir=/[directory]
    errordir=[directory]
    outbounddir=[directory]
    bpelpw=bpel
    dbhost1=[dbserver]
    dbhost2=[dbserver]
    dbport=1523
    dbservice=bpel
    dbconnstr=jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=[server])(PORT=1523))(ADDRESS=(PROTOCOL=tcp)(HOST=[server])(PORT=1523)))(CONNECT_DATA=(SERVICE_NAME=ebusd)))

  • Purchase Order is not deleted after archive process

    Hi,
    I want to delete a specific purchase order completely from the system, so I followed the steps below, but it is not working. I'm not sure if I'm doing something wrong.
    Steps:
    1) I marked the PO as deleted in transaction ME22.
    2) In transaction SARA I followed these steps:
    Step 1: Create a variant for archiving. Give it a name and enter your PO number. Flag "One step procedure" and "Detailed log"; clear the test run flag.
    Step 2: Maintain your start date; you can use Immediate in such a small case.
    Step 3: Maintain the spool parameters. Enter your printer and set whether you want to print immediately and keep the printout. I suggest holding the spool in the system; this helps you find any error.
    Step 4: Execute the variant.
    3) If I go to transaction ME23N it says that the PO is archived, but it is still there. Also, I can see it in the Document Overview. How can I delete this PO completely from the system?
    Thanks for the help!!

    Hi Carlos,
    Since you archived the PO, it is stored on the archive's physical storage, not in the SAP database.
    When you use ME23N, the archived PO is fetched from that physical storage, which is why it shows the "already archived" message.
    I'm sure there are no table-level entries for the archived documents.
    Go to transaction SARA - Information System -> Archive Explorer -> MM_EKKO and execute.
    Check whether your PO is listed there.
    So if you don't want to see your PO, you need to deactivate the information system:
    in SARA - Information System -> Customizing,
    search the infrastructure by object name and use the activate/deactivate icon.
    One question I really have to ask: why do you want to delete one particular PO? Archiving is not a small thing, and it usually takes place as a mass activity.
    reg,
    bhg

  • BPM process archiving "Processing of archiving write command failed"

    Can someone help me with the following problem? After archiving a BPM process, I get the following messages (summary):
    ERROR  Processing of archiving write command failed
    ERROR  Job "d5e2a9d9ea8111e081260000124596b3" could not be run as user"E61006".
    LOG -> Processing of archiving write command failed
    [EXCEPTION] com.sap.glx.arch.xml.XmlArchException: Cannot create archivable items from object
    Caused by: java.lang.ClassCastException: ...
    Configuration
    I've completed the following steps based on a blog item.
    1. created an archive user with the corresponding roles
    2. updated the destination DASdefault with the created user -> destination ping = OK
    3. created an archive store BPM_ARCH based on a Unix root folder
    4. created a home path synchronization with home path /<sisid>/bpm_proc/ and archive store BPM_ARCH
    5. start process archiving from manage processes view.
    Process Archiving
    Manage Process -> Select a process from the table -> Archive button -> Start archiving by using the default settings.
    Archiving Monitor
    The following log is created which describe that the write command failed.
    Write phase log:
    [2011.09.29 12:00:18 CEST] INFO   Job bpm_proc_write (ID: d5e2a9d9ea8111e081260000124596b3, JMS ID: ID:124596B30000009D-000000000C08) started on Thu, 29 Sep 2011 12:00:18:133 CEST by scheduler: 5e11a5e0df3111decc2d00237d240438
    [2011.09.29 12:00:18 CEST] INFO   Start execution of job named: bpm_proc_write
    [2011.09.29 12:00:18 CEST] INFO   Job status: RUNNING
    [2011.09.29 12:00:18 CEST] ERROR  Processing of archiving write command failed
    [2011.09.29 12:00:18 CEST] INFO   Start processing of archiving write command ...
    Verify Indexes ...
    Archive XML schema ...
    Resident Policy for object selection is  instanceIds = [9ca38cb2343511e0849600269e82721e] ,  timePeriod = 1317290418551 ,  inError = false ,
    [2011.09.29 12:00:18 CEST] ERROR  Job "d5e2a9d9ea8111e081260000124596b3" could not be run as user"E61006".
    [2011.09.29 12:00:18 CEST] INFO   Job bpm_proc_write (ID: d5e2a9d9ea8111e081260000124596b3, JMS ID: ID:124596B30000009D-000000000C08) ended on Thu, 29 Sep 2011 12:00:18:984 CEST
    Log viewer
    The following message is created in the log viewer.
    Processing of archiving write command failed
    [EXCEPTION]
    com.sap.glx.arch.xml.XmlArchException: Cannot create archivable items from object
    at com.sap.engine.core.thread.execution.CentralExecutor$SingleThread.run(CentralExecutor.java:328)
    Caused by: java.lang.ClassCastException: class com.sap.glx.arch.Archivable:sap.com/tcbpemarchear @[email protected]2@alive incompatible with interface com.sap.glx.util.id.UID:library:tcbpembaselib @[email protected]f@alive
    at com.sap.glx.arch.him.xml.JaxbTaskExtension.createJaxbObjects(JaxbTaskExtension.java:69)
    at com.sap.glx.arch.xml.JaxbSession.fillFromExtensions(JaxbSession.java:73)
    at com.sap.glx.arch.pm.xml.ArchProcessExtension.fillHimObjects(ArchProcessExtension.java:113)
    at com.sap.glx.arch.pm.xml.ArchProcessExtension.createArchObjectItem(ArchProcessExtension.java:60)
    at com.sap.glx.arch.xml.JaxbSession.createArchObjectItems(JaxbSession.java:39)
    at com.sap.glx.arch.xml.Marshaller.createItems(Marshaller.java:29)
    ... 61 more

    Hi Martin,
    I don't have a specific answer, sorry; however, I do recall seeing a number of OSS notes around BPM archiving whilst searching for a different issue last year. Have you checked there for anything relevant to your current version and SP level? There were quite a few notes, if memory serves me well!
    Regards,
    Gareth.

  • How to start Archiving process in production system.

    Dear Experts,
    As I am new to the archiving process, I need help with this. Our client has 1.3 TB of data and wants to clean the unwanted data out of the database. They are using Windows 2003 as the server OS, Oracle as the database, and SAP R/3 4.6C. I want to know where I have to start the process. I have read many threads on this and they were helpful, but I have some queries to clarify:
    1) I have found which tables are growing the most in the database (I checked DB15, SARA). How do I find out what has to be archived?
    2) Who is supposed to delete the data that is not important? I heard data deletion is not Basis work.
    3) After deleting data in the database, the space will not be freed without a reorganization; could this be a problem?
    4) Is there any data loss in reorganizing?
    5) Is there any way to do the reorganization with BRTOOLS in Oracle?
    Please clarify.
    Regards,
    Naveen  N

    Dear Naveen,
    Data archiving is an inter-departmental activity.
    The person responsible for each application (which he/she uses in the SAP solution to carry out the business process) should be involved in the SAP data archiving activities.
    The team should be made up of the persons responsible for each application.
    After the planned analysis, the team executing the data archiving process produces the checklist of data to archive (based on your business process and other dependent factors). They will provide the inputs (archiving objects) for the data archiving requirements.
    Moreover, as part of the data archiving process you will have to delete the archived data from the actual database. Some space will then become free, but it will be scattered randomly across the respective tablespace(s), so you will have to perform a database reorganization to regain the free space. The data is stored in the database in the form of fragmented database objects (tables/indexes) in their respective tablespaces, and reorganization is generally performed to defragment these objects. By doing so, one can improve performance and regain the space within tablespaces that was unused due to fragmentation.
    I hope this information will be helpful.
    Regards,
    Bhavik G. Shroff

  • Process Order Deletion and Archiving process

    Hi all,
    I just want to ask what the proper process is for deleting and archiving process orders, and what the possible impacts on other modules are.
    Thanks!

    Using SARA, the archiving object is PR_ORDER, or use t-code COAC. Follow the steps: preprocess, write the archive file, and delete from the database.
    I don't recall any prerequisite for archiving process orders. However, the costing people often insist on keeping process orders for a very long while, so you might want to check their requirements (note that archiving is more business-oriented than you might think).
    Before you can do so, you must follow the steps of the process order life cycle (create, release, technically complete, close, deletion flag, deletion indicator, accordingly).
    You can set the deletion flag and deletion indicator via t-code COAC. There is a setting for the period that must elapse between deletion flag, deletion indicator, and archiving in the order type configuration (residence times 1 and 2). A process order, for example, must reside in the database for 1 month after the deletion flag and another 1 month after the deletion indicator before you can archive it out, if you set residence times 1 and 2 to 1.
    Hope it helps.

  • Printing process used for iPhoto prints?

    Hi,
    Can anyone tell me what printing process is used when ordering enlargements through iPhoto?
    I've placed one of my photos in an international exhibit (yay!) and in preparation for the show, they want to know the printing process for inclusion on the display card. Is it a Chromogenic print, or a Gelatin silver print, or an Ultra-Chrome archival digital print, or an Archival Pigment print on watercolor paper, or a B&W Fiber print, or a Platinum print, etc.?
    I know that the print order was any of the above, but I'm wondering how I should reply to this request. Is it just a "Digital Print on photographic paper"? Or is there some more accurate way of describing the print process?
    Thanks!!
    -- John

    This is the info from Kodak Print services:
    http://www.kodakgallery.com/HelpAboutPrints.jsp?#printing
    "We print your photos on high-quality Kodak paper, a resin-coated, silver-halide color paper optimized for digital printers. A light source inside our digital printers exposes the photographic paper pixel by pixel. This process mimics traditional photography, in which light from the subject exposes photographic film inside of a camera."
    If you want more info about their archival process, you could try sending them an email.
    http://www.kodakgallery.com/HelpCustomerSupport.jsp?

  • Find resource utilized in archive process....

    Hello, I am new to Oracle.
    I am planning to configure a standby database for my production database (Oracle 9.2.0.6).
    My database is running in archivelog mode and is currently archiving to one location.
    My line manager asked me:
    What resources will be used on the production server if you archive redo logs to two locations (i.e. one mandatory), in terms of CPU, I/O, and the performance of the running database?
    What difference does it make if the second archive destination is local or on a remote server?
    I need to understand the impact on the performance of my production server if I archive to more than one location.
    Are there any specific views that I have to query at the database level or at the OS level?
    Please help.

    Hi,
    If you don't mind, I won't answer your questions in the order you asked them.
    "what difference it makes if I archived to second destination which can be local or on remote server"
    The destination for the 2nd set of archived redo log files must be remote (IMNSHO) if you want to set up anything close to a DR scenario. Standby databases are used in case of a crash of the primary database, when it can't be started up again quickly, to fail over to the standby in order to continue working. If you set up the standby on the prod server, the odds are 99% that it will be useless when the crash happens. And you don't want that, do you?
    So, in that case, it'll "cost" network bandwidth. It's up to you to check how much archive volume is generated on average per hour, to know what kind of bandwidth you need. I posted a query to compute that some time ago, but can't seem to find it anywhere in the forum. Anyway, it can be achieved by:
    SELECT
      TRUNC(ARCHIVED.NBARCHIVEDBLOCKS * BLOCKSIZE.VALUE / 1024 / 1024 / 24 / 3600, 2) "MebiBytes per Second"
    FROM
      (SELECT VALUE FROM V$PARAMETER WHERE NAME = 'db_block_size') BLOCKSIZE,
      (SELECT SUM(BLOCKS) NBARCHIVEDBLOCKS
         FROM V$ARCHIVED_LOG
        WHERE TRUNC(FIRST_TIME) = TRUNC(SYSDATE) - 1
          AND STANDBY_DEST = 'NO') ARCHIVED;
    (as you're archiving to one destination only at the moment).
    "what will be the resource utilized at the production server if you try to archive redologs at 2 location (ie mandatory). interms of cpu, io and performance of current database running"
    There's no rule of thumb to compute such things; they depend on far too many factors. But you can count on a small overhead in CPU and memory (RAM) usage on the production server, though this will not have any noticeable impact on the production database. Except, of course, if you're really running low on those resources!
    Anyway, the first thing you should do if you want to set it up is to start 2 archiver processes (log_archive_max_processes=2) in order to archive fast enough. Oh, and of course you should first try it yourself on a test server!
    HTH,
    Yoann.
    Message was edited by:
    Yoann Mainguy
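    For completeness, the two settings mentioned above boil down to something like this (a sketch; STBY is a hypothetical TNS alias for the standby, and SCOPE=BOTH assumes an spfile):
    ALTER SYSTEM SET log_archive_max_processes = 2 SCOPE=BOTH;
    -- second, mandatory archive destination pointing at the standby
    ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=stby MANDATORY' SCOPE=BOTH;
    ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE' SCOPE=BOTH;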

  • Table for Payroll Archive process

    Hi ,
    I need to know the table in which the data for the payroll archive process is stored.

    Payroll archive information goes in the PAY_ACTION_INFORMATION table.
    Important Points about this table :
    1. When the value of the column "ACTION_CONTEXT_TYPE" is 'PA' then the row stores Payroll Level Information and the corresponding ACTION_CONTEXT_ID is the "PAYROLL_ACTION_ID" of the Payroll Archiver run. Note that "PAYROLL_ACTION_ID" is the primary key of PAY_PAYROLL_ACTIONS
    2. When the value of the column "ACTION_CONTEXT_TYPE" is 'AAP' then the row stores Assignment Level Information and the corresponding ACTION_CONTEXT_ID is the "ASSIGNMENT_ACTION_ID" of the Archiver for the assignment. Note that "ASSIGNMENT_ACTION_ID" is the primary key of the table "PAY_ASSIGNMENT_ACTIONS" .
    The following query can be used to fetch all the archived information for a particular run for a particular assignment :
    SELECT pai.*
    FROM pay_action_information pai,
    pay_payroll_actions ppa,
    pay_assignment_actions paa
    WHERE ppa.request_id = :request_id
    AND ppa.payroll_action_id = paa.payroll_action_id
    AND paa.assignment_id = pai.assignment_id
    AND paa.assignment_action_id = pai.action_context_id
    AND pai.action_context_type = 'AAP'
    AND pai.assignment_id = :assignment_id
    The following query can be used to fetch all payroll level information pertaining to a particular run .
    SELECT pai.*
    FROM pay_action_information pai,
    pay_payroll_actions ppa
    WHERE ppa.request_id = :request_id
    AND ppa.payroll_action_id = pai.action_context_id
    AND pai.action_context_type = 'PA'
    Please note that for 'Year End Archiver' all the information is stored in FF_ARCHIVE_ITEMS

  • Archive process cleared - went through Metalink docs

    If there are 2 archive processes defined, how does the archiver work?
    I've got 2 archiver processes running, resulting in an entry in the log file saying:
    unable to archive log actively being archived by another process
    Message was edited by:
    Maran Viswarayar

    Inside the V2 folder are subfolders representing your Mail accounts. The names refer to the email addresses you use.
    From the Mail menu bar, select
    File ▹ Import Mailboxes...
    Import from the mailboxes in the restored folder. The imported messages will appear in a new mailbox. Move the ones you want to keep wherever you like and delete the rest.
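    A quick way to watch what each of the two archiver processes is doing is the v$archive_processes view (a generic query, not from the thread):
    SELECT process, status, log_sequence, state
    FROM   v$archive_processes
    WHERE  status = 'ACTIVE';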

  • Satellite Pro L300 - using partition magic

    Hi,
    I have a Toshiba Satellite Pro L300 laptop and I would like to make an additional partition beside the already existing C: and the hidden partition. I have created the recovery discs and I want to use Partition Magic for the partitioning process (no new installation of Windows XP). Can anyone tell me whether using Partition Magic can have unwanted effects on a possible recovery from the created discs?
    Thank you in advance

    Usually you can do this with the Windows option in Disk Management, but can you please tell me something first?
    Do you use Vista or WXP? I ask because if you use preinstalled Vista, the HDD must already have two partitions. Please check it in Windows Explorer.
    Is the hidden partition described as an EISA (WinRE) partition?

  • Archival process of PO

    Could anyone give me the name of the package that is used in the archival process of POs?
    I saw one package related to the archival process, but Oracle is not using that package to archive the records when we approve the PO through the form. The package name I saw is ECE_PO_ARCHIVE_PKG.

    Hi Syed,
    STO: Stock Transfer Order
    Stock transfer between two plants within one company code.
    The purchase order type used in this case is "UB",
    and the delivery type used here is "NL".
    STPO: Stock Transfer Purchase Order
    Stock transfer purchase orders between two plants with two different company codes.
    The purchase order type used in this case is "NB",
    and the delivery type used here is "NLCC".
    Hope this clarifies your doubts; please reward if really helpful.
    Thanks and Regards,
    Sateesh.Kandula

  • I have pavilion g6 1313 ax. I want to know the full process of partition.

    I have an HP Pavilion G6 1313AX with Windows 7 64-bit. I deleted the recovery partition (D:). I want to know the full process for partitioning the HDD, or whether there is any way of partitioning without affecting the current OS. The size of C: is 441 GB and I want to reduce it.

    You could use a partitioning program like Partition Magic, but I would still make sure you have all your important pictures and documents saved to an external device or copied to another machine, just in case.
