SAP Archive process

Hi all,
How do we archive data from SAP once the data carries a deletion flag?
Can anyone please explain the archiving process in SAP? We need to delete production order data.
Thanks in advance.
Regards,
veera
Edited by: Veerab on Oct 3, 2011 9:52 AM

Hi,
The main transaction for all SAP archiving objects is SARA:
1) Complete the Customizing steps for the archiving object in SARA.
2) To be able to delete the documents, you first need to archive the object successfully.
3) Then you can run the delete job for the archiving session.
4) Do not forget to reorganize the archived tables and rebuild the database statistics.
The archiving object for production orders is PP_ORDER.
Best regards,
Orkun Gedik
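For a quick overview of which production orders already carry the deletion flag (and are therefore candidates for the PP_ORDER write run), a minimal ABAP sketch could look like the following. It assumes the order master table AUFK and its deletion flag field LOEKZ; verify the field names in your own release.

REPORT z_list_flagged_orders.

" Minimal sketch (assumption: AUFK is the order master table, LOEKZ its deletion flag):
" list orders that already carry a deletion flag, i.e. candidates for archiving.
DATA: lt_orders TYPE TABLE OF aufnr,
      lv_aufnr  TYPE aufnr.

SELECT aufnr FROM aufk
  INTO TABLE lt_orders
  WHERE loekz = 'X'.                    " deletion flag is set

LOOP AT lt_orders INTO lv_aufnr.
  WRITE: / lv_aufnr.
ENDLOOP.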

Similar Messages

  • Purchase Order is not deleted after archive process

    Hi,
    I want to completely delete a specific purchase order from the system, so I followed the steps below, but it is not working. I am not sure whether I am doing something wrong.
    Steps:
    1) I flagged the PO for deletion in transaction ME22.
    2) In transaction SARA I followed these steps:
    Step 1: Create a variant for archiving. Give it a name, enter your PO number, flag "One-step procedure", flag "Detailed log", and remove the flag for the test run.
    Step 2: Maintain your start date; you can use "Immediate" in such a small case.
    Step 3: Maintain the spool parameters. Enter your printer and set whether you want to print immediately and whether you want to keep the printout. I suggest holding the spool in the system; this helps you to investigate any error.
    Step 4: Execute the variant.
    3) If I go to transaction ME23N it says that the PO is archived, but it is still there. Also, I can see it in the "Document Overview". How can I delete this PO completely from the system?
    Thanks for the help!!

    Hi Carlos,
    Since you archived the PO, it is now stored in the archive files on the storage system, not in the SAP database.
    When you use ME23N, the archived PO is read back from the archive, which is why you get the "already archived" message.
    There should be no table-level entries left for the archived documents.
    Go to transaction SARA -> Information System -> Archive Explorer, enter MM_EKKO, and execute.
    Check whether your PO is listed there.
    So if you do not want to see the PO at all, you need to deactivate the information system:
    in SARA -> Information System -> Customizing,
    find the infostructure by object name and use the activate/deactivate icon.
    One question I really need to ask: why do you want to delete one particular PO? Archiving is not a small thing, and it usually runs as a mass activity.
    reg,
    bhg
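    A quick way to confirm that the delete run really removed the table entries is a direct lookup on EKKO. The sketch below is only an illustration; the document number is a made-up example:

    " Minimal sketch: check whether a PO still has a header entry in EKKO.
    " '4500000123' is a hypothetical document number - replace it with the PO in question.
    DATA lv_ebeln TYPE ekko-ebeln.

    SELECT SINGLE ebeln FROM ekko
      INTO lv_ebeln
      WHERE ebeln = '4500000123'.

    IF sy-subrc = 0.
      WRITE: / 'PO header still in the database - delete run not (fully) executed.'.
    ELSE.
      WRITE: / 'No EKKO entry left - the PO now exists only in the archive.'.
    ENDIF.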

  • Mass upload of documents through SAP Archive Link

    Hi Experts,
    Our client has a requirement to upload legacy (old) data from a third-party tool to the DMS content server through ArchiveLink. The tool is implemented and works fine for attaching individual documents to SAP objects such as purchase orders, materials, etc.
    We have no idea how to upload drawings for materials in bulk; we need your advice on this.
    Regards,

    Hi,
    I am trying to get more information on how to bulk-upload employee documentation through SAP ArchiveLink and OpenText Enterprise Scan into the categories in OAWD. Once the documents are in Enterprise Scan, the administrator needs to link them to the categories in SAP. This is a manual process; is there a way to bulk-upload this documentation to the right categories? Since these documents vary, I am not sure that OCR will be of help.
    Please could you help in this regard.
    Thank you.
    Regards,
    -- Gustav de Bruyn

  • How to start Archiving process in production system.

    Dear Experts,
    As I am new to the archiving process, I need help with this. Our client has 1.3 TB of data and wants to clean the unwanted data out of the database. They are running SAP R/3 4.6C with an Oracle database on a Windows 2003 server. I want to know where I have to start the process. I have read many threads on this and they were helpful, but I still have some queries to clarify:
    1) I have found which tables are growing the most in the database (checked via DB15 and SARA). How do I decide what has to be archived?
    2) Who decides which data is unimportant and can be deleted? I heard that deciding on data deletion is not a Basis task.
    3) After deleting data in the database, the space will not be freed without a reorganization. Is that the case?
    4) Is there any data loss during reorganization?
    5) Is there any way to do the reorganization with BRTOOLS on Oracle?
    Please clarify.
    Regards,
    Naveen  N

    Dear Naveen,
    Data archiving is an inter-departmental activity.
    The person responsible for each application (which he or she uses in the SAP solution to carry out the business process) should be involved in the SAP data archiving activities.
    The team should be made up of the persons responsible for each application.
    After the planned analysis, the team executing the data archiving process produces the checklist of data to archive (based on your business process and other dependent factors). They provide the input, i.e. the archiving objects, for the data archiving requirements.
    Moreover, as part of the data archiving process you will have to delete the archived data from the database. Some space will then become free, but it will be scattered randomly across the respective tablespaces, so you will have to perform a database reorganization to regain it. The data is stored in the database as fragmented database objects (tables/indexes) in their respective tablespaces; reorganization is performed to defragment these objects. By doing so, you can improve performance and regain the space within the tablespaces that was unused due to fragmentation.
    I hope this information will be helpful.
    Regards,
    Bhavik G. Shroff

  • External Storage Repository for SAP Archive Link

    We are about to use SAP ArchiveLink as part of a document scanning process. For storage, the intention is to use EMC Centera. We are on SAP ECC 6.0.
    Does anybody know whether any additional software or hardware is required, or would we already be capable of retrieving images from Centera storage without anything additional?
    If we start to store the scanned images on filestore, would it be possible to migrate the images to EMC Centera storage in the future without losing the link to the documents in SAP?
    Many Thanks
    Claire Crosby-Clarke

    Hi,
    > Does anybody know if any additional software or hardware is required, or would we already be capable of retrieving images from Centera storage without anything additional.
    If Centera storage is certified by SAP for ArchiveLink you will be OK.
    In my company, we use Filenet and we had to purchase a special Archivelink connector.
    We scan incoming invoices, do OCR, create automatically the SAP invoice and make the link with the scanned invoice in Filenet.
    > If we start to store the scanned images on filestore, would it be possible to migrate the images to EMC Centera storage in the future without losing the link to the documents in SAP?
    I don't think so, because the link in SAP is an identifier in the storage software.
    Regards,
    Olivier

  • SAP Archiving & Document Access by Open Text installation problem

    Hi!
    Has anybody successfully installed the "SAP Archiving & Document Access by Open Text"?
    Currently I'm trying to Install the "Runtime and Core Services 10.1.0" - a part of "Open Text Enterprise Library Services 10.1.0".
    The install guides I follow: "Open Text Enterprise Library Services 10.1.0 Installation and Upgrade Guide.pdf" and "Open Text Runtime and Core Services - Installation and Upgrade Guide.pdf".
    All the prerequisites are fulfilled, but the installation process fails with the message "The installation of Runtime and Core Services 10.1.0 returned an error". No log or trace file is created during the installation, so I cannot find out what causes the problem.
    System configuration:
    OS: Windows 2003 SP2 32-bit
    DB: MS SQL 2005
    AS: Tomcat 5.5
    JAVA: JDK 1_5_0_22
    Any idea?

    Hi!
    I had the same problem with RCS 10.2.0. I solved it by using the right JRE version; see KB: http://knowledge.opentext.com/knowledge/cs.dll/Open/21121763.

  • Can data be purged without the archiving process in BW system?

    Dear all,
    can anybody give me some comments on the data purging process? After checking some material about BI data archiving, I am wondering:
    can data be purged, skipping the archiving process? As long as there is no requirement to retain the historical data, it should be good for performance to purge some useless data.
    My doubts about this are:
    1. Will there be any inconsistency if we purge the data manually without following the SAP standard process?
    2. Are there any particular issues for DSO and cube data purging, since there should be some business-related dependencies?
    It would be highly appreciated if you could share best practices for data purging or archiving in a BW system!
    thanks in advance!
    Jennifer

    Hi,
    as others are pointing out, you should never delete data by any means other than the SAP standard procedures (request deletion, selective deletion, archiving, ...); in other words, do not delete data directly from the BW database tables!! This would definitely lead to big trouble... I am not sure what you mean exactly by "manually"...
    Deleting data (purging and/or archiving) should always be considered carefully:
    - Is the cube/DSO a source of data for other targets (data mart)? If so, the deletion should be carefully analyzed.
    - For DSO/ODS it is particularly important to ensure that the keys of the records to be deleted will never be loaded again into the DSO; because of the overwrite capabilities of these containers, that situation would obviously lead to incorrect data being posted to subsequent targets.
    Currently we have never used the archiving functionality of BW... We closely monitor the data generated by BW itself in data mart scenarios (PSAs between providers can grow very fast) and have implemented a customized process for that, since it wasn't possible to archive PSA data in BW 3.x....
    Most of the data stored in our providers is still used by our consumers (six years of data, some 3 TB), and we use logical partitioning extensively on the calendar year; this enables us to "move" "old" data to a less expensive storage tier.
    hope this helps....
    Olivier.

  • Archive Process

    Hi,
    Can anyone explain the discontinuation document process in archiving?
    Thanks,
    Santhosh

    Hi Santosh,
    I have never heard of a discontinuation document process in SAP archiving. In case you are referring to a process flow in SAP that is discontinued during its life cycle (maybe with respect to MM, FI/CO, PP, etc.), then, just for your information, SAP will not allow a document to be archived until it has completed the life cycle defined by SAP.
    So if you are planning to archive any data from SAP, you should select only business-complete data. I hope this helped you a bit; in case you are not satisfied, please be clearer about your question on archiving and we will definitely help you.
    --Thanks
    Pankaj.K.S

  • CRM  Data Archiving Process

    t

    Hi Ganesh,
    It would take a couple of days to cover all of the information on archiving, but I will try to give a brief explanation of the archiving FILE configuration.
    Prerequisites for archiving:
    You should have a storage system connected to your system.
    FILE configuration: transaction FILE
    You can create your own logical file name, logical file path, physical file name
    and physical file path. For more detailed information, please see the SAP Help.
    1. Create a logical file path. Example: Z_ARCHIVE_PATH
    2. Assign it to a physical file path. Here you should know what your operating system is; based on that, select the syntax group and create your physical file path. Example for UNIX: /archiving/<FILENAME>
    3. Create a logical file name. Example:
       Logical file: Z_ARCHIVE_DATA_FILE_TECH
       Physical file: D_<PARAM_3>_<PARAM_2>_<DATE>_<TIME>
       (PARAM_3 supplies the archiving object name; PARAM_2 supplies a one-character alphabetic code that guarantees the archive file name is unique.)
       Data format: set to "ASC"
       Application area: set to "BC"
       Logical path: set to the logical file path created in step 1 (Z_ARCHIVE_PATH)
    After creating this, assign the logical file name to the archiving object. This is done in SARA > enter the archiving object > Customizing > Archiving Object-Specific Customizing: Technical Settings > Logical File Name.
    On the same screen you can also set the maximum file size, e.g. 100 MB, meaning that once the first archive file reaches 100 MB a second archive file is created.
    You can also configure whether you want an automatic deletion run or prefer to schedule the deletion job manually.
    Similarly, you can schedule the automatic store job.
    Ganesh, the prerequisites for archiving application data vary between archiving objects. It is better to study the archiving process on help.sap.com.
    -Thanks,
    Ajay
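    To see which physical file a logical file name like the one above actually resolves to, you can call the standard function module FILE_GET_NAME. The sketch below is only an illustration; the logical file name and parameter values are the example placeholders from above, and the exact parameter names may differ slightly by release.

    " Minimal sketch: resolve the logical file name into the physical path and name.
    DATA lv_file(255) TYPE c.

    CALL FUNCTION 'FILE_GET_NAME'
      EXPORTING
        logical_filename = 'Z_ARCHIVE_DATA_FILE_TECH'
        parameter_2      = 'A'              " one-character code (<PARAM_2>)
        parameter_3      = 'PP_ORDER'       " archiving object name (<PARAM_3>)
      IMPORTING
        file_name        = lv_file
      EXCEPTIONS
        file_not_found   = 1
        OTHERS           = 2.

    IF sy-subrc = 0.
      WRITE: / 'Physical file:', lv_file.
    ENDIF.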

  • SAP archived object in MM. Cross-verify and test.

    Hi All,
    We are in the process of archiving data from production; the configuration and objects have been set up as far as the quality client, and we have started testing in quality. As a functional consultant, how do I cross-verify and check whether data such as POs, PRs and material documents has been archived or not?
    At the moment data has been archived only for May 2008. I would like to test, for that period, whether the data has been archived properly and whether there are any differences.
    Can you please let me know how to take care of this and how to do the testing for SAP archiving from an MM perspective?
    Thanks for all your help.

    Hi Hare,
    For SAP data archiving testing you have to look mainly at two things: the residence period and the business completion requirements. The residence period is the time the data needs to stay in SAP before it is eligible to be archived (data old enough), and the business completion check basically verifies that the object is "closed" before archiving it. For instance, you cannot archive a service order that is still open or a piece of equipment that is still installed.
    You should check different combinations like data old enough (for the period that you want) and not complete (should not be archived) and then data old enough and complete (should be archived).
    Check the data before running the archiving jobs and see what should be / not be archived according to you, run the archiving jobs and then check the data again to see if results are expected.
    Hope these guidelines help you.
    Cheers.
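    One simple cross-check along these lines is to count the documents of the archived period before and after the delete run and compare the numbers. A rough sketch for material document headers (table MKPF, posting-date field BUDAT assumed here, May 2008 as in your example):

    " Minimal sketch: count material document headers posted in the archived period.
    " Run it before and after the archiving/delete run and compare the two values.
    DATA lv_count TYPE i.

    SELECT COUNT( * ) FROM mkpf
      INTO lv_count
      WHERE budat BETWEEN '20080501' AND '20080531'.

    WRITE: / 'MKPF headers for 05/2008 still in the database:', lv_count.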

  • SAP archiving steps

    Hi
    We are implementing SAP archiving in our company. I have installed OpenText and configured the archive server by creating pools, buffers and logical archives. I have gone through a couple of online documents that talk about TAANA, SARA, SARI, the ADK and the whole process, but what I am looking for is how to connect my SAP ECC system to the archive server. I have made the connection from the archive server to R/3; now I want to do the configuration in the R/3 system where I can specify which object goes to which pool on the archive server.
    I am not able to find that documentation, so if someone has the steps or a document that explains how to do that configuration, it would be great.
    Thanks in advance
    Andy

    So what I need is basically the steps for "Configuring SAP R/3 for the Archive Server".

  • SAP ARCHIVE

    Hi All,
    Can anybody tell me what the SAP archive is, how we can retrieve data from the archive, and whether there is a function module for retrieving data? If there is a function module, what is its name?
    Thanks  &  Regards,
    Murali

    Hi DMK,
    Please check these links:
    http://help.sap.com/saphelp_crm60/helpdata/en/8f/f3b142304cc511e10000000a1550b0/frameset.htm
    http://help.sap.com/saphelp_crm60/helpdata/en/e1/f5fc37a0ca3144e10000009b38f8cf/frameset.htm
    Data Archiving is a secure and reliable process through which data from closed business processes, meaning data that is no longer needed in online business, is written from the database to an archive, and then deleted from the database. Optionally the archived data can then be moved to an external storage system (ADK-based archiving). An important aspect of this entire process is that the archived data must be accessible at any time, in case it needs to be viewed again, for example as part of an audit or for business reasons. Data archiving provides several functions for the reading of archived data.
    In terms of legal compliance, some data can or even must be deleted as soon as the required data retention period has been reached. This scenario variant offers different functions that allow you to store archived data separately based on specific criteria. This facilitates the deletion of the archived data later on.
    Best regards,
    raam
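    Regarding the function module question: data written with the ADK is normally read back with the ARCHIVE_* function modules. The following is only a rough sketch, not a complete read program; MM_EKKO and structure EKKO are chosen as an example, and the exact parameters and exceptions can vary by release.

    " Minimal sketch: read EKKO records back from archive files of object MM_EKKO.
    DATA: lv_handle TYPE i,
          lt_ekko   TYPE TABLE OF ekko.

    CALL FUNCTION 'ARCHIVE_OPEN_FOR_READ'
      EXPORTING
        object         = 'MM_EKKO'
      IMPORTING
        archive_handle = lv_handle
      EXCEPTIONS
        OTHERS         = 1.
    CHECK sy-subrc = 0.

    DO.
      " Position on the next archived data object; leave the loop at end of file.
      CALL FUNCTION 'ARCHIVE_GET_NEXT_OBJECT'
        EXPORTING
          archive_handle = lv_handle
        EXCEPTIONS
          end_of_file    = 1
          OTHERS         = 2.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.

      " Read all EKKO records of the current data object into lt_ekko.
      CALL FUNCTION 'ARCHIVE_GET_TABLE'
        EXPORTING
          archive_handle        = lv_handle
          record_structure      = 'EKKO'
          all_records_of_object = 'X'
        TABLES
          table                 = lt_ekko
        EXCEPTIONS
          OTHERS                = 1.
    ENDDO.

    CALL FUNCTION 'ARCHIVE_CLOSE_FILE'
      EXPORTING
        archive_handle = lv_handle.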

  • Error when printing a PDF direct from SAP archive

    Hi together,
    I am trying to print a PDF directly from the SAP archive and I get the error ""C:\Program files\Adobe\Acrobat 7.0\reader\AcroRd.exe" /p /h" not found. On the internet I found several topics that describe this way of printing a PDF without opening Acrobat Reader.
    The Customizing for document management (local application) seems OK. If I remove the /h /p print parameters in the registry, Acrobat Reader opens and shows the document when I use the print button in SAP.
    Does anybody have an idea what the reason might be?
    Thank you.
    Regards
    Thomas

    The rundll32 error seems to have disappeared.  Until this morning I had both this error and the rundll32.  All other types of docs open fine.  This one will open if saved to the desktop first, but not from Outlook.

  • MULTIPLE ARCHIVER PROCESSES FAQ ( ORACLE 8I NEW FEATURE )

    Product: ORACLE SERVER
    Date: 2002-04-19
    MULTIPLE ARCHIVER PROCESSES FAQ ( ORACLE 8I NEW FEATURE )
    =========================================================
    PURPOSE
    1. What LOG_ARCHIVE_MAX_PROCESSES does
    2. Whether LOG_ARCHIVE_MAX_PROCESSES can be changed dynamically
    3. The mechanism by which the number of archiver processes changes dynamically
    4. How to determine which archiver process archived an online log
    Explanation
    1. What LOG_ARCHIVE_MAX_PROCESSES does
    Oracle 8i supports multiple archive destinations and can use multiple archiver processes to reduce the load on a single archiver. The LOG_ARCHIVE_MAX_PROCESSES parameter specifies the maximum number of ARCH processes to start.
    If LOG_ARCHIVE_START is set to TRUE, the LOG_ARCHIVE_MAX_PROCESSES value specified in the init file is read at instance startup. If LOG_ARCHIVE_START is TRUE but LOG_ARCHIVE_MAX_PROCESSES is not set, only the arc0 process is started. If LOG_ARCHIVE_MAX_PROCESSES is set explicitly (range 1 - 10), additional processes such as arc0 and arc1 are started.
    However, there is no need to set this parameter explicitly to anything other than the default of 1, because the system determines how many ARCn processes are needed and creates additional ARCn processes itself.
    2. Whether LOG_ARCHIVE_MAX_PROCESSES can be changed dynamically
    The value can be changed dynamically with ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=n, where n must be between 1 and 10. However, if LOG_ARCHIVE_START is set to FALSE, executing the command has no effect.
    3. The mechanism by which the number of archiver processes changes dynamically
    If LOG_ARCHIVE_START is set to TRUE, Oracle starts one archiver process (ARC0) at instance startup. When required, the ALTER SYSTEM command can start as many archiver processes as specified.
    Example:
    SVRMGRL>alter system set LOG_ARCHIVE_MAX_PROCESSES=4;
    When this command is executed, ARC1, ARC2 and ARC3 are added according to the following procedure:
    1) The shadow process asks the primary archiver process to increase the number of processes.
    2) The archiver process calls the kcrrschd function (kcrrschd: schedule multiple ARCH processes).
    3) It checks whether the requested number of processes is smaller than the number of archiver processes currently in use. If the newly specified value is smaller, or if archiving is disabled, it returns without taking any action. Otherwise it checks whether the value exceeds the supported maximum of 10 and, if so, sets the number of processes to 10.
    4) The scheduler function acquires a latch on the kcrrxs{} structure, which represents the ARCH activation status.
    5) The scheduler function loops over the specified number of processes and records the scheduled state in the KCRRSCHED structure.
    6) It then releases the latch and calls the kcrrsmp function (kcrrsmp: start multiple ARCH processes).
    7) The kcrrsmp function acquires a latch on the kcrrxs{} structure (ARCH startup status) to serialize execution, so that even if the function is called concurrently it runs one invocation at a time.
    8) It schedules the archiver processes that are in the pending state and cleans up any dead processes.
    9) The function then loops over the specified number of processes and changes their state from KCRRSCHED to KCRRSTART, putting the archiver processes into a ready-to-start state.
    10) It releases the latch and starts the ARCH processes.
    11) The kcrrsmp function reacquires the latch. Each archiver process is notified to activate itself; after activating itself, each archiver process writes a corresponding entry to the alert file.
    12) The calling function sleeps until all archiver processes have activated themselves and the kcrrxs structure has been updated.
    13) Finally, when the current number of archiver processes matches the requested number, the latch is released and the loop is exited (a C-style break).
    The above procedure is reflected in the alert.log as follows:
    sql: prodding the archiver
    ALTER SYSTEM SET log_archive_max_processes=4;
    Tue Jul 13 02:15:14 1999
    ARC0: changing ARC1 KCRRNOARCH->KCRRSCHED
    ARC0: changing ARC2 KCRRNOARCH->KCRRSCHED
    ARC0: changing ARC3 KCRRNOARCH->KCRRSCHED
    ARC0: STARTING ARCH PROCESSES
    ARC0: changing ARC1 KCRRSCHED->KCRRSTART
    ARC0: changing ARC2 KCRRSCHED->KCRRSTART
    ARC0: changing ARC3 KCRRSCHED->KCRRSTART
    ARC0: invoking ARC1
    Tue Jul 13 02:15:15 1999
    ARC1: changing ARC1 KCRRSTART->KCRRACTIVE
    Tue Jul 13 02:15:15 1999
    ARC0: Initializing ARC1
    ARC0: ARC1 invoked
    ARC0: invoking ARC2
    ARC1 started with pid=10
    ARC1: Archival started
    Tue Jul 13 02:15:15 1999
    ARC2: changing ARC2 KCRRSTART->KCRRACTIVE
    Tue Jul 13 02:15:15 1999
    ARC0: Initializing ARC2
    ARC2 and ARC3 follow the same procedure.
    An interesting point is that the number of processes can also be reduced. For example, when the following command is executed:
    SVRMGRL>alter system set LOG_ARCHIVE_MAX_PROCESSES=2;
    the following steps are carried out in order:
    1) The shadow process contacts the currently active archiver process.
    2) The archiver process calls the kcrrxmp function (kcrrxmp: stop multiple ARCH processes).
    3) The kcrrxmp function acquires a latch on the kcrrxs{} structure (ARCH startup status) so that other processes cannot modify the structure at the same time.
    4) It checks whether the newly requested number of archiver processes is smaller than the number currently in use.
    5) If it is smaller, it selects from the list of archiver processes those that were scheduled most recently and whose turn for archival work will therefore not come around soon.
    6) Each of these processes is asked to change its state from KCRRACTIVE to KCRRSHUTDN.
    7) Once the state has changed, the OS is asked to terminate the process and its state is changed to KCRRDEAD. The related status information is cleaned up and the kcrrxs{} structure is updated.
    Steps 6) and 7) are repeated until the number of archiver processes has been reduced to the specified value.
    8) The kcrrxs structure is updated with the new number of archiver processes.
    9) The latch is released.
    The state changes are reflected in the alert.log file as follows:
    sql: prodding the archiver
    Tue Jul 13 00:34:20 1999
    ARC3: changing ARC0 KCRRACTIVE->KCRRSHUTDN
    ARC3: sending ARC0 shutdown message
    ARC3: changing ARC1 KCRRACTIVE->KCRRSHUTDN
    ARC3: sending ARC1 shutdown message
    ARC3: received prod
    Tue Jul 13 00:34:20 1999
    ALTER SYSTEM SET log_archive_max_processes=2;
    Tue Jul 13 00:34:20 1999
    ARCH shutting down
    ARC0: Archival stopped
    ARC0: changing ARC0 KCRRSHUTDN->KCRRDEAD
    Tue Jul 13 00:34:20 1999
    ARCH shutting down
    ARC1: Archival stopped
    ARC1: changing ARC1 KCRRSHUTDN->KCRRDEAD
    4. How to determine which archiver process archived an online log
    Archiver processes are scheduled to perform the archiving work in a round-robin fashion. If multiple archiver processes have been activated according to the load, several combinations are possible. Because Oracle 8i supports multiple archive log destinations and duplexing of archive logs, it has to record which process archived a given log file.
    In Oracle 8i, every successful archival operation records the archiver process name in the trace file.
    The following is an extract from the relevant trace file:
    Instance name: v815
    Redo thread mounted by this instance: 1
    Oracle process number: 12
    Unix process pid: 3658, image: oracle@oracle8i (ARC3)
    *** Session ID:(12.1) 1999.07.13.02.15.15.000
    *** 1999.07.13.02.15.15.000
    *** 1999.07.13.02.33.06.000
    ARC3: Begin archiving log# 1 seq# 38 thrd# 1
    ARC3: VALIDATE
    ARC3: PREPARE
    ARC3: INITIALIZE
    ARC3: SPOOL
    ARC3: Creating archive destination 1 : '/bigdisk/oracle8i/dbs/arch/1_38.dbf'
    ARC3: Archiving block 1 count 1 to : '/bigdisk/oracle8i/dbs/arch/1_38.dbf'
    ARC3: Closing archive destination 1 : /bigdisk/oracle8i/dbs/arch/1_38.dbf
    ARC3: FINISH
    ARC3: Archival success destination 1 : '/bigdisk/oracle8i/dbs/arch/1_38.dbf'
    ARC3: COMPLETE, all destinations archived
    ARC3: ArchivedLog entry added: /bigdisk/oracle8i/dbs/arch/1_38.dbf
    ARC3: ARCHIVED
    *** 1999.07.13.02.33.06.000
    ARC3: Completed archiving log# 1 seq# 38 thrd# 1
    From this information you can see that archiver process 3 archived log sequence 38 to destination 1: /bigdisk/oracle8i/dbs/arch.
    Reference Document
    <Note:73163.1>

  • Webinar: SAP NetWeaver Process Integration - Advanced Adapter Engine in PI 7

    Please note: This webinar is aimed at consultants, partners and customers in the APJ region and is scheduled at 2.00 - 3.00 p.m. Singapore Time (UTC +8).
    Dear valued SAP Experts,
    Next SAP Intelligence Platform & NetWeaver RIG Expert Call Session will take place on Tuesday, September 1. The SAP Intelligence Platform & NetWeaver RIG Expert Call Sessions are designed to support consultants, partners and customers during their implementation projects. The sessions cover all different aspects of SAP NetWeaver and are aimed at experts, thus provide knowledge which is not available via standard training courses. The session duration is typically 60min and includes questions and answers.
    Tuesday, September 1, 2009:
    SAP NetWeaver Process Integration - Advanced Adapter Engine in PI 7.1 EHP1
    Time: 2.00 - 3.00 p.m. Singapore Time (UTC +8)
    This event will feature Charu Goel with the SAP Intelligence Platform & NetWeaver Regional Implementation Group. Charu provides the following abstract:
    With PI 7.1 Enhancement Package 1 we continue to strive to bring advancements to the AAE. In this APJ expert call we talk about the much-awaited capability of IDoc packaging, along with the enhancements in JMS. You will learn about the interaction between the AAE and the ABAP stack, new enhancements in the JDBC adapter, and overall improvements in the technical adapters available with PI 7.1 EHP1.
    For meeting and dial-in details, please register here: http://www.surveymonkey.com/s.aspx?sm=qkBfQCM5FM_2f_2b4O0ihVpFRw_3d_3d
    Thanks & Best Regards,
    Sarma Sishta

    Hi,
    This was one of the good sessions.
    Especially features such as:
    -- TCP/IP connection control with JMS adapters
    -- empty file handling
    -- setting the various attributes for mail attachments via dynamic configuration (initially only the subject line was possible)
    -- message prioritization
    -- a single IDoc acknowledgment for IDoc packaging
    -- IDoc packaging
    -- and many more.
    Even the future indication of a J2EE-based IDoc adapter will give a new direction to the existing IDoc-related scenarios.
    One thing that changed with IDoc packaging is that it is controlled via the sender communication channel; initially we were doing it either via the ECC system or with transaction IDXPW.
    It works in almost the same way, but with the J2EE-based adapter IDoc packaging will be available on the Java stack, and there will be no need to depend only on the ABAP stack for this feature.
    Thanks
    Swarup
