COPA archiving

Hi Experts,
I have a few questions about the CO-PA archiving objects:
1. What are the differences between archiving objects COPA1_xxxx, COPAA_xxxx, and COPAB_xxxx? I have found the SAP recommendation that archiving object COPA1_xxxx should not be used, but there is no further explanation of why COPAA_xxxx and COPAB_xxxx should be used instead.
2. If I use COPA1_xxxx, is it still possible to archive with the COPAC_xxxx object (archiving of profitability segments)?
Thanks
Barbora

Dear Barbora,
When an operating concern (xxxx) is generated in CO-PA, the following archiving objects are generated:
• COPA1_xxxx for costing-based Profitability Analysis
• COPA2_xxxx for account-based Profitability Analysis
• COPAA_xxxx
• COPAB_xxxx
• COPAC_xxxx for profitability segments
Archiving objects COPAA_xxxx and COPAB_xxxx have replaced archiving object COPA1_xxxx.
Although it is still possible to use archiving object COPA1_xxxx, SAP recommends that you use only the new archiving objects, as they are now the standard ones. For example, in newer releases the Customizing activities in the IMG are available only for the new archiving objects.
Also, if you implement SAP Note 383728, you can use the generated archiving objects COPA1_xxx and COPA2_xxx to archive Profitability Analysis objects from tables CE4xxxx.
When a line item in Profitability Analysis is updated:
• an entry is inserted into table CE1xxxx,
• a newly formed results object is entered in table CE4xxxx, and
• the related totals record is updated in table CE3xxxx.
As of SAP R/3 4.5 you have an additional table called CE4xxxx_ACCT. It contains the detailed account assignment information and can grow a lot faster than the actual database table CE4xxxx. For more information see SAP Note 199467.
Best Regards,
Kaushik

Similar Messages

  • BW COPA Archiving

    Hello Xperts,
I have a requirement to do archiving in BW in the CO-PA area. The archiving targets are cubes and ODSs.
Please guide me on how I should proceed in this direction.
    Points would be assigned.
    Thanks & Regards
    Rohit Parti

    Hi,
    Archiving
Go through transactions SARA and OAAD.
http://help.sap.com/saphelp_470/helpdata/en/6d/56a06a463411d189000000e8323d3a/frameset.htm
http://help.sap.com/saphelp_46c/helpdata/en/6d/56a06a463411d189000000e8323d3a/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/87/b12642aaf1de2ce10000000a1550b0/frameset.htm
    http://help.sap.com/saphelp_erp2005/helpdata/en/5c/11afa1d55711d2b1f80000e8a5b9a5/frameset.htm
    http://www.sap-press.com/downloads/h956_preview.pdf
    http://help.sap.com/saphelp_webas610/helpdata/en/5c/11afaad55711d2b1f80000e8a5b9a5/content.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/e9/c36642ea59c753e10000000a1550b0/frameset.htm
    http://help.sap.com/saphelp_47x200/helpdata/en/2e/9396345788c131e10000009b38f83b/frameset.htm
    Re: Archiving
Please check the following link; it contains a PDF doc:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/a111ae90-0201-0010-4bbe-809ec5627433
    Tarak

  • Two copies archive logs with only one defined

    Hi,
On an 11g database, I have only got flash_recovery_area defined. When I switched into archivelog mode, I expected only one copy of the archive logs to be produced, in the defined USE_DB_RECOVERY_FILE_DEST location, but another copy is generated as well under the $ORACLE_HOME/dbs directory. How can that be explained, and how do I DISABLE the second copy from being produced?
    Thanks for any help
    Zhuang Li
    PS: more info
    In spfile:
    orcl.__java_pool_size=50331648
    orcl.__large_pool_size=16777216
    orcl.__oracle_base='/usr/oracle11g'#ORACLE_BASE set from environment
    orcl.__pga_aggregate_target=1828716544
    orcl.__sga_target=1056964608
    orcl.__shared_io_pool_size=0
    orcl.__shared_pool_size=654311424
    orcl.__streams_pool_size=0
    *.audit_file_dest='/usr/oracle11g/admin/orcl/adump'
    *.audit_trail='db'
    *.compatible='11.1.0.0.0'
    *.control_file_record_keep_time=30
*.control_files='/db/orcl1/control01.ctl','/db/orcl1/control02.ctl','/db/orcl1/control03.ctl'
    *.db_block_size=8192
    *.db_domain=''
    *.db_name='orcl'
    *.db_recovery_file_dest='/usr/oracle11g/flash_recovery_area'
    *.db_recovery_file_dest_size=6442450944
    *.diagnostic_dest='/usr/oracle11g'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
    *.job_queue_processes=5
    *.open_cursors=300
    *.pga_aggregate_target=1824522240
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_max_size=1258291200#internally adjusted
*.sga_target=1056964608
    *.undo_tablespace='UNDOTBS1'
    ==============================
    SQL> select destination from V$ARCHIVE_DEST;
    DESTINATION
    /usr/oracle11g/R1/dbs/arch
    USE_DB_RECOVERY_FILE_DEST
    10 rows selected.
    SQL> SQL> archive log lis
    SP2-0718: illegal ARCHIVE LOG option
    SQL> archive log list
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence 2549
    Next log sequence to archive 2551
    Current log sequence 2551
    SQL>
    ===================
    SQL> show parameter archive
    NAME TYPE VALUE
    archive_lag_target integer 0
    log_archive_config string
    log_archive_dest string
    log_archive_dest_1 string
    log_archive_dest_10 string
    log_archive_dest_2 string
    log_archive_dest_3 string
    log_archive_dest_4 string
    log_archive_dest_5 string
    log_archive_dest_6 string
    log_archive_dest_7 string
    log_archive_dest_8 string
    log_archive_dest_9 string
    log_archive_dest_state_1 string enable
    log_archive_dest_state_10 string enable
    log_archive_dest_state_2 string enable
    log_archive_dest_state_3 string enable
    log_archive_dest_state_4 string enable
    log_archive_dest_state_5 string enable
    log_archive_dest_state_6 string enable
    log_archive_dest_state_7 string enable
    log_archive_dest_state_8 string enable
    log_archive_dest_state_9 string enable
    log_archive_duplex_dest string
    log_archive_format string %t_%s_%r.dbf
    log_archive_local_first boolean TRUE
    log_archive_max_processes integer 4
    log_archive_min_succeed_dest integer 1
    log_archive_start boolean FALSE
    log_archive_trace integer 0
    standby_archive_dest string ?/dbs/arch

This is the way the 11g installer sets up the archive destinations initially after creating the database with DBCA.
What you can do is go to the Recovery Settings in Enterprise Manager. You will notice that archive log destination number 1 is set to /usr/oracle11g/R1/dbs/arch, while number 10 is set to USE_DB_RECOVERY_FILE_DEST.
Remove the entry for number 1 (leave it blank) and apply the settings. This will force Oracle to log only to the flash recovery area.
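The same change can be sketched from SQL*Plus instead of Enterprise Manager (a sketch only: verify the destination numbers against V$ARCHIVE_DEST on your system first, since they come from the Enterprise Manager observation above):

```sql
-- Clear the explicit first destination so archiving goes only to the
-- flash recovery area (USE_DB_RECOVERY_FILE_DEST):
ALTER SYSTEM SET log_archive_dest_1='' SCOPE=BOTH;

-- Verify which destinations remain configured:
SELECT dest_id, destination, status
FROM   V$ARCHIVE_DEST
WHERE  destination IS NOT NULL;
```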
Oracle Database FAQs

  • When I change my email account's password, Thunderbird deletes all my local copies of my emails

    I have a Microsoft Office 365 account I access through IMAP in Thunderbird. None of the Thunderbird "save local copies/archive/etc." options seem to actually work like they did for my Yahoo POP3 account, which could be related to my real problem, but I haven't tried too hard to fix that because downloading all the emails gives me offline access anyway. Now the real problem:
    Whenever I change my password through the Office 365 webpage, the next time I open Thunderbird it tries to access that account before I can stop it, gets the wrong password, and then deletes its entire cache of my emails, so it has to redownload them once I put the right password into Thunderbird. This redownloading takes forever. How can I stop it from deleting them in the first place?

Delete the password in Thunderbird before you change it anywhere else.
Thunderbird is not very smart, but it does know if a password is missing from its store.

  • Archiving - Performance

    Hi Friends,
We archived FI data for the year 2005 for a table.
When we run a select query on that table for 2005, it takes a long time, around 60,000 seconds.
When we run it for 2006, 2007, or 2008, the select query completes within 120 seconds.
I would like to know: does running a select query on a table whose data for a year has been archived affect performance?
    Thanks
    Chandra

    Hi Chandra,
I am also in an archiving project and would like to know which statistics you are talking about. Please provide some information so I can also get it done for my project.
I am also facing some issues in CO-PA data archiving: while using the COPA1_XXXX object, the job took 95 hours to run and ended up being cancelled. I have activated index 3 for table CE1XXXX. If you know something about CO-PA archiving, please tell me.
    Regards,
    Shailesh

  • How do I log in to WLS programatically.?

    Hello All,
    I'm looking to write a simple java program that logs into weblogic server (10.3) and authenticate a user in this process. I've looked at the API's and find methods like login() but also see snippets of code where environments and configurations are being set. I can't get my head around what variables need setting.
    Is there a program or working piece of code that would help me achieve this? I'm using Jdeveloper 11g.
    Many thanks in advance,
    Regards,
    PP.

    Hi PP,
The following code will probably be useful for you (a programmatic deployment example):
import java.io.File;
import weblogic.deploy.api.tools.SessionHelper;
import weblogic.deploy.api.spi.WebLogicDeploymentManager;
import weblogic.deploy.api.spi.DeploymentOptions;
import javax.enterprise.deploy.spi.TargetModuleID;
import javax.enterprise.deploy.shared.ModuleType;
import javax.enterprise.deploy.spi.Target;

public class DeployTest {
    public static void main(String[] args) {
        DeployTest dt = new DeployTest();
        try {
            dt.deploy();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void deploy() throws Exception {
        String protocol = "t3";
        String hostName = "localhost";
        String portString = "7001";
        String adminUser = "weblogic";
        String adminPassword = "weblogic";
        String fileLoc = "C:\\oracle\\wls1033\\user_projects\\domains\\SR3_2484339780\\sessiontest3";
        // Use getRemoteDeploymentManager when the admin server is not on the same
        // machine as the archive; it automatically copies the archive to the
        // admin server's upload directory.
        // WebLogicDeploymentManager deployManager = SessionHelper.getRemoteDeploymentManager(
        //         protocol, hostName, portString, adminUser, adminPassword);
        WebLogicDeploymentManager deployManager = SessionHelper.getDeploymentManager(
                protocol, hostName, portString, adminUser, adminPassword);
        System.out.println("WebLogicDeploymentManager: " + deployManager);
        DeploymentOptions options = new DeploymentOptions();
        options.setStageMode(DeploymentOptions.NOSTAGE);
        System.out.println("\t DeploymentOptions: " + options);
        Target[] targets = deployManager.getTargets();
        for (Target target : targets) {
            System.out.println(target.getName());
            TargetModuleID myModule = deployManager.createTargetModuleID(
                    "SessionApp", ModuleType.WAR, target);
            deployManager.deploy(new TargetModuleID[] {myModule}, new File(fileLoc), null, options);
        }
    }
}
You can see another example of programmatic access in Re: Start Admin Server using NodeManager without 'boot.properties' (in that case WLST is used programmatically).
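Since the original question was specifically about authenticating a user, here is a minimal JNDI-based login sketch as well (untested here: it needs weblogic.jar on the classpath, and the URL and credentials are placeholders you must replace):

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class LoginTest {
    public static void main(String[] args) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:7001"); // placeholder host/port
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");      // user to authenticate
        env.put(Context.SECURITY_CREDENTIALS, "password");    // that user's password
        try {
            // Creating the InitialContext performs the authentication against the server.
            Context ctx = new InitialContext(env);
            System.out.println("Authenticated successfully");
            ctx.close();
        } catch (NamingException e) {
            System.out.println("Authentication failed: " + e);
        }
    }
}
```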

  • Time machine won't see my airdisk for i can't mount it

    hi,
i can't use my LaCie hard drive that is hooked up to my AirPort Extreme base station with Time Machine.
looking around the forums i understood that Time Machine only works with drives that are mounted on the desktop. some apparently do that manually by clicking the drive in Finder; others use an Automator app to mount it automatically.
but my problem is i can't mount the drive at all. clicking on it in Finder doesn't mount it on the desktop in my case: although i can browse it in Finder, it never appears on the desktop.
since it never appears on the desktop i can't drag it to an Automator app.
    does anyone have any solutions?

    TM may start a new backup sequence for any of a number of reasons. See #D3 in the Time Machine - Troubleshooting *User Tip* at the top of this forum.
    Click here to download the +Time Machine Buddy+ widget. It shows the messages from your logs for one TM backup run at a time, in a small window.
    Navigate to the first of the backup attempt that failed, copy and post the messages here. A clue should be lurking in there.

  • Time Machine won't see my previous backups

    hi,
    for some reason, TM has continually told me there isn't enough space to make a backup to my external usb hd.
    it's been backing up to this hd for about 9 months. disk utility checked it out and it seemed okay.
is there a way to get TM to pick up where it left off instead of having to start afresh? i'd like to keep those backups because there's some important stuff in there.
    thanks for any help!
    cheers,
    sean

    TM may start a new backup sequence for any of a number of reasons. See #D3 in the Time Machine - Troubleshooting *User Tip* at the top of this forum.
    Click here to download the +Time Machine Buddy+ widget. It shows the messages from your logs for one TM backup run at a time, in a small window.
    Navigate to the first of the backup attempt that failed, copy and post the messages here. A clue should be lurking in there.
    i'd like to keep those backups because there's some important stuff in there.
    That's a curious comment. Have you been deleting things from your internal HD, relying on TM to keep it's copies archived?

  • Time Machine won't see my backup after resetting Time Capsule

    i had to reset my TC because i got a new modem.  once i got the network up and running again, now time machine wants to start over backing up my MBP, even though i can see the backup files on the TC....
    how do i get time machine to recognize that this time machine has my backups on it already?
    I had changed the name of the TC, then i changed it back.  so i don't think that's the problem? 
    will i have to erase the time capsule and start over backing up?
    thanks in advance...

    TM may start a new backup sequence for any of a number of reasons. See #D3 in the Time Machine - Troubleshooting *User Tip* at the top of this forum.
    Click here to download the +Time Machine Buddy+ widget. It shows the messages from your logs for one TM backup run at a time, in a small window.
    Navigate to the first of the backup attempt that failed, copy and post the messages here. A clue should be lurking in there.

  • Syncing two database via archivelogs

    Hello Gurus of the Oracle World.
I have two databases I want to keep in sync. Both databases are 7.3.4. Yeah, I know they are old and out of date, but this is my world and it is very old. I am using SQL-BackTrack as a backup and recovery tool. I am able to back up my primary database and restore it to a different location. Now my problem is keeping them in sync every 1 to 2 hours. Is there a script I can use to apply the archive logs via some type of cron job, so that when I copy the archive log files over to the secondary database they are applied automatically with no intervention from me or any users?
    Your help is greatly appreciated.
    Thanks.

I don't know exactly how SQL-BackTrack works. I also don't recall a lot about 7.3.4, so take this with a grain of salt...
When you recover the database onto the other machine, do you know whether the recovery process does anything like resetting the archive log sequence? I would expect that your tool has some sort of option to prevent this, at least.
Once you have the standby database, you should be able to put it in manual recovery mode and have a script that copies archived log files from the primary to the standby and applies them there. The standby database cannot be open while this is happening, so users would not be able to access the data unless you stopped the recovery process and opened the database, and even then it would have to be opened read-only.
    Niall Litchfield's presentation "You Probably Don't Need Data Guard" would be a good place to start. The presentation and scripts are linked from his site
    http://www.niall.litchfield.dial.pipex.com/
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • DB mirroring

Hi,
I have to manage 2 servers: 1 master and 1 backup.
The second MUST be a real-time exact mirror of the first.
- May I use the two-phase commit method?
- If yes, must I create triggers to mirror all DML and DDL operations, or is that not necessary?
Thanks in advance.

You have 3 possibilities:
1. Standby database (best way: the primary copies archive logs to the second database, where they are automatically applied)
2. Replication with snapshots (hard work)
3. Your own replication with triggers (even more work)

  • RMAN backup unable to write to share

    Oracle 11gR2 OEL5 64bit
    I am unable to execute my backups because the files cannot be written to the NFS share. The share has been mounted with 'oracle:oinstall' and '777' for permissions. However, I still get the following error:
    ORA-19504: failed to create file "/rman/backup/prod/ctrl_file.bckp"
    ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 22: Invalid argument
I have researched different mount options on MetaLink and tried them as well, but I still get the same issue. It's weird, because the last time I did this it worked, and I was also able to read and recover the files from the share.
    Is there a mount option out there that works so that RMAN can write to an NFS share?
I used the following option:
mount -t cifs -o username=winorcl,rw,dir_mode=0777,file_mode=0777,uid=1003,gid=1003 //myshare/oracletmp /rman/backup/prod
Just some more info: the 'winorcl' user has full RWX privileges on the share. The uid and gid are for 'oracle' and 'oinstall', which in this case are the same.
    Thank you.

    Hi,
    ORA-19504: failed to create file "/rman/backup/prod/ctrl_file.bckp"
    ORA-27040: file create error, unable to create file
    Linux-x86_64 Error: 22: Invalid argument
mount -t cifs -o ....
Is use of the CIFS protocol for RMAN backups supported? [ID 444809.1]
CIFS is fine for RMAN files (backup pieces, datafile copies, archived logs), but it is NOT certified by Oracle.
Oracle Support: if there are any problems involved in using RMAN and CIFS, then we cannot get RDBMS development involved.
See also: Write To CIFS Filesystem on Linux Fails (ORA-01119, ORA-27040) [ID 1417168.1]
    Regards,
    Levi Pereira
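Since the question asked about NFS specifically: for an NFS mount (rather than CIFS), the options commonly documented for Oracle files on Linux look roughly like this. This is a sketch only; the server name and export path are placeholders, and the exact option set should be confirmed against Oracle's NFS mount-option notes for your platform and NFS version:

```shell
# Typical NFS options for Oracle backups/datafiles on Linux:
# hard mount, large rsize/wsize, and actimeo=0 to disable attribute caching.
mount -t nfs \
    -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 \
    nfsserver:/export/oracletmp /rman/backup/prod
```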

  • Archive Smartform Copies - Error - OTF end command // missing in OTF data

    Hi All,
We are printing invoices and archiving them at the same time.
When I try to print and archive just the original invoice, it works fine;
that is, it prints and archives.
However, when I set the flag SSFCOMPOP-TDARCCOP to 'X' (archive copies as well),
I get the error "OTF end command // missing in OTF data".
I'm guessing that the system tries to archive multiple copies and the end-of-file command // at the end of each copy is causing the issue, but I don't know how to go about solving it.
    Regards,
    Nehal.

Implemented SAP Note 1123505 ("OTF-PDF conversion: Archiving several copies").

  • Archive expansion copies to user Downloads

    When I click to expand an archive, no matter what disk it is on, Lion thinks it is a download, and copies the expanded files to ~/Downloads. That is downright irritating. I NEVER want the OS to decide for me that my files need to be copied to another HDD. Any way to change this? I'll even use Terminal if necessary.

    Welcome to SDN.
    Two changes are needed, one to the print program, one to the Smartform.
    In the program specify the number of copies;
    data: params type SSFCOMPOP.
    params-tdcopies = '3'.
    Then pass this params structure to the function module parameter OUTPUT_OPTIONS.
    In the Smartform;
Create three windows, say ORIGINAL, DUPLICATE, and TRIPLICATE, each in the same place on the page with a suitable text element. Make each window type 'copies window'. Then for ORIGINAL choose 'Only Original'; for DUPLICATE and TRIPLICATE choose 'Only copies - copies differ'. Then apply a condition on DUPLICATE and TRIPLICATE based on SFSY-COPYCOUNT being either 2 or 3.
    Regards,
    Nick

  • Archive Smartform copies

    Hi,
Does anyone know how to archive Smartform copies, and about the field TDARCCOP?
When I set it to 'X' and pass an invoice with copies to the Smartform, I get an "OTF //" error.

    Hi,
Please let me know the step-by-step process to archive the Smartform. Can we archive the Smartform to a Unix server?
My requirement is to convert the Smartform into PDF format and write it to a Unix server, but I am facing some conversion problems.
    Thanking you.
    Prathima.
