New features of process chains in BI 7.0

Hi,
I want to know the new features of process chains in BI 7.0. Can anyone share a list of what you have available?
Thank you.
sekhar..

Good, thank you.

Similar Messages

  • Process LOADING has no predecessor in Process chain

    Hi Experts,
    I have copied a process chain with all its processes, and I have added new steps such as change log deletion and delete/create index. (The chain has data mart functionality, meaning it first loads to a DSO and then from the DSO to a cube.) After the start variant I added a change log deletion step; after the activation of the DSO I added a delete index step; and after the InfoPackage load to the cube I added a create index step. The chain therefore has the form:
    1) Start Process
    2) Change log deletion
    3) Load IP (Target to only PSA)
    4) Read PSA and Update to data target (DSO)
    5) Activate the DSO
    6) Delete Index
    7) Load IP (Data mart from DSO to Cube)
    8) Create Index
    9) Delete request from PSA.
    All processes are connected, but when I activated the chain I got the error message that process LOADING has no predecessor.
    Please advise me what went wrong and what I should do.
    Note
    1) The load type is full load, and I am on BI 7.0
    2) No DTP is involved

    Hi,
    During creation or modification of the chain you must have deleted a few links to the loading step.
    Go to Settings -> Default Chains and check whether auto-suggest is on; if it is, this might have caused the problem.
    To resolve it, open the chain in edit mode and choose View -> Detail View On;
    you will now be able to see the unlinked loading step.
    Delete the unlinked and unnecessary steps from there and reactivate the chain.
    Hope this works.
    Regards
    Sudeep
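    The "has no predecessor" error boils down to a simple graph property: every step other than the start process needs at least one incoming link. A rough sketch of that check in plain Python (hypothetical step names; this is not SAP's actual validation logic):

```python
# Illustrative only: model a process chain as (predecessor, successor) links
# and report any step that has no incoming link apart from the start process.

def unlinked_steps(steps, links, start="START"):
    """Return steps (other than the start) that have no predecessor link."""
    has_predecessor = {succ for _, succ in links}
    return [s for s in steps if s != start and s not in has_predecessor]

steps = ["START", "CHANGELOG_DELETE", "LOAD_PSA", "READ_PSA_TO_DSO",
         "ACTIVATE_DSO", "DROP_INDEX", "LOAD_CUBE", "CREATE_INDEX",
         "DELETE_PSA_REQUEST"]

# One link was lost while editing: nothing points at LOAD_CUBE.
links = [("START", "CHANGELOG_DELETE"),
         ("CHANGELOG_DELETE", "LOAD_PSA"),
         ("LOAD_PSA", "READ_PSA_TO_DSO"),
         ("READ_PSA_TO_DSO", "ACTIVATE_DSO"),
         ("ACTIVATE_DSO", "DROP_INDEX"),
         ("LOAD_CUBE", "CREATE_INDEX"),
         ("CREATE_INDEX", "DELETE_PSA_REQUEST")]

print(unlinked_steps(steps, links))  # the loading step with the missing link
```

    Re-adding the missing link (here, DROP_INDEX -> LOAD_CUBE) makes the report empty, which is what "Detail View On" plus relinking achieves in RSPC.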

  • Emails for process chains

    Hi Friends,
    I need your valuable suggestions for a problem we are currently facing regarding emails in process chains.
    An explanation of the issue is as follows:
    The users who load data (present at different geographical locations Ex. A, B, C) use a custom ABAP program to load and this program only triggers the process chain. The final status of the load is to be communicated back to the users.
    If the email feature of the process chain is used and the user list from locations A, B and C is maintained, then users from B and C also get emails about loads triggered from A, and so on. In other words, users from A, B and C end up with irrelevant emails in their mailboxes.
    Also the users have no technical knowledge of BW and hence cannot interpret the Process chain log. So there is another technical team which will receive the detailed email messages.
    All the users need to know is whether the load was successful or not.
    ---Accessing the Monitor using RSMON is not an option
    ---Maintaining email groups is not an option.
    Has anyone faced a similar problem before? If so, how did you resolve it?

    So you currently have one chain that is triggered by A, B and C, and they all get an email when the chain finishes?
    Create three meta-chains C1, C2 and C3. Users A, B and C trigger their respective chains. The chain that actually does the work is C4. C4 is a subchain of C1, C2 and C3, so you need to model it only once. At the end of C1, C2 and C3 you can send the email to your users.
    C1
    - C1 trigger
    - C4
    - C1 finish & email A
    C2
    - C2 trigger
    - C4
    - C2 finish & email B
    C3
    - C3 trigger
    - C4
    - C3 finish & email C
    You could optionally connect the technical email to the last process in C4, and the user emails to C1, C2 and C3.
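    The routing idea can be sketched outside of SAP as well: the shared work is modelled once, and each wrapper runs it and then notifies only its own user. A minimal Python model (hypothetical names; the real chains would of course be built in RSPC):

```python
# Illustrative model of the meta-chain layout: C1/C2/C3 each wrap the shared
# chain C4 and email only their own triggering user when it finishes.

sent = []  # (recipient, message) pairs collected instead of real emails

def run_c4():
    """The shared worker chain, modelled once and reused by every wrapper."""
    return "load finished"

def make_wrapper(user):
    def wrapper():
        status = run_c4()            # subchain C4 does the actual work
        sent.append((user, status))  # wrapper emails only its own user
    return wrapper

c1, c2, c3 = make_wrapper("A"), make_wrapper("B"), make_wrapper("C")

c2()         # user B triggers their chain
print(sent)  # only B is notified
```

    The design point is the same as in the answer above: duplication lives only in the thin wrappers, while the expensive part (C4) stays single-sourced.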

  • Best practices to modify process chain

    Hi,
    What is the best practice for modifying a process chain: directly in production, or in development with a transport to production?
    Should query performance tuning settings such as read modes and cache settings be done in production?
    Thanks
    nikhil

    Hi Nikhil,
    The best practice for modifying process chains is to make the change in the Development system and transport it to Quality and Production. If you are making a simple change, such as changing the scheduling time (editing the scheduled date and time in the variant), you can do it directly in Production. But if you are adding new steps to the process chain, it is better to make the change in Dev and move it to Prod.
    Also, once the change reaches Production you may need to open the chain in edit mode and activate it manually.
    It is a common issue for process chains containing delta DTPs that the chain arrives from transport in the modified (M) version rather than the active version.
    Query read mode and cache settings you can make in Development and move to Production.
    But pre-filling the cache using broadcaster settings can be done directly in Production.
    Creating cube aggregates to improve query performance can also be done directly in Production rather than creating them in Development and transporting to Prod.
    Again, it depends on the project: some projects make changes directly in Prod, while others do not allow changes in Prod at all. I think it is always better to start the changes in Dev and move them to Prod.
    Hope it helps,
    Thanks,
    Vinod-

  • Process chain repair going to Debugger mode

    Hi everyone,
    When I try to repair a failed DTP process in a process chain, it goes into debugger mode. After exiting debug mode, I can see that the DTP has executed successfully.
    Why does the process chain go into debug mode?
    Secondly, after the successful execution of the DTP process, the remaining succeeding processes are not executed.
    Any idea how to solve these issues?
    thanks for help
    Ahmad

    hi guys,
    Thank you all for taking time to reply.
    @Muhammad: there are no breakpoints in the custom routines.
    @Baskaran: the DTP processing mode is set to serial extraction, immediate parallel processing.
    @Amit, Martin: on repairing the DTP process, the debugger stops at the following code:
    * ==== Debugging ====
    * Breakpoint after start routine
        if i_r_trfn_cmd is bound.
          READ TABLE i_r_trfn_cmd->n_th_bp
               TRANSPORTING NO FIELDS
               WITH TABLE KEY bpid    = 3
                              datapid = i_r_inbound->n_datapakid.
          IF sy-subrc = 0.
    * --- Data ---
    *     See datapackage below
    * --- Debugging ---
            BREAK-POINT.                                       "#EC NOBREAK
          ENDIF.
        endif.
    When I simply exit, I can see in the process chain that the DTP has executed successfully, but the remaining processes don't execute.
    In the generated program of the DTP's transformation there are two breakpoints; one is given above and the other is:
    * ==== Debugging ====
    * Breakpoint before end routine
        if i_r_trfn_cmd is bound.
          READ TABLE i_r_trfn_cmd->n_th_bp
               TRANSPORTING NO FIELDS
               WITH TABLE KEY bpid    = 4
                              datapid = i_r_inbound->n_datapakid.
          IF sy-subrc = 0.
    * --- Data ---
    *     See datapackage above..
    * --- Debugging ---
            BREAK-POINT.                                       "#EC NOBREAK
          ENDIF.
        endif.
    I have checked the generated programs of other transformations, both SAP-delivered (Business Content) and custom, and they all contain these two pieces of code.
    One additional question:
    when we do the repair, the box below appears. Can anyone explain what exactly this is, and whether it has anything to do with the repair going into debug mode? What I read on
    https://help.sap.com/saphelp_nw70ehp1/helpdata/en/67/13843b74f7be0fe10000000a114084/content.htm
    is: "Specify how long (in seconds) you want the delay to be between one event being triggered and the next process starting."
    I am thinking of deleting this DTP and using a new one in the process chain.
    regards
    Ahmad

  • MULTIPLE ARCHIVER PROCESSES FAQ ( ORACLE 8I NEW FEATURE )

    Product: ORACLE SERVER
    Date written: 2002-04-19
    MULTIPLE ARCHIVER PROCESSES FAQ ( ORACLE 8I NEW FEATURE )
    =========================================================
    PURPOSE
    1. What LOG_ARCHIVE_MAX_PROCESSES does
    2. Whether LOG_ARCHIVE_MAX_PROCESSES can be changed dynamically
    3. The mechanism by which the number of archiver processes changes dynamically
    4. How to determine which archiver process archived an online log
    Explanation
    1. What LOG_ARCHIVE_MAX_PROCESSES does
    Oracle 8i supports multiple archive destinations, and multiple archiver processes can be used to reduce the load on a single archiver. The LOG_ARCHIVE_MAX_PROCESSES parameter specifies the maximum number of ARCH processes to start.
    If LOG_ARCHIVE_START is set to TRUE, the LOG_ARCHIVE_MAX_PROCESSES value specified in the init file is read at instance startup. If LOG_ARCHIVE_START is TRUE but LOG_ARCHIVE_MAX_PROCESSES is not set separately, only the arc0 process is started. If LOG_ARCHIVE_MAX_PROCESSES is set (range 1 - 10), additional processes such as arc0 and arc1 are started.
    However, there is normally no need to explicitly set this parameter to anything other than its default of 1, because the system determines how many ARCn processes are needed and creates additional ARCn processes accordingly.
    2. Whether LOG_ARCHIVE_MAX_PROCESSES can be changed dynamically
    The value can be changed dynamically with ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=n, where n must be between 1 and 10. However, if LOG_ARCHIVE_START is set to FALSE, executing the command has no effect.
    3. The mechanism by which the number of archiver processes changes dynamically
    If LOG_ARCHIVE_START is set to TRUE, Oracle starts one archiver process (ARC0) at instance startup. When needed, an ALTER SYSTEM command can start as many archiver processes as specified.
    Example)
    SVRMGRL>alter system set LOG_ARCHIVE_MAX_PROCESSES=4;
    When this command is executed, ARC1, ARC2 and ARC3 are added through the following steps:
    1) The shadow process asks the primary archiver process to increase the number of processes.
    2) The archiver process calls the kcrrschd function (kcrrschd: schedule multiple arch processes).
    3) It checks whether the requested number of processes is smaller than the number of archiver processes currently in use. If the new value is smaller, or if archiving is disabled, it returns without taking any further action. Otherwise it checks whether the request exceeds the supported maximum of 10; if it does, the number of processes is set to 10.
    4) The scheduler function acquires a latch on the kcrrxs{} structure, which represents the ARCH activation status.
    5) The scheduler function loops over the specified number of processes, recording the scheduled state in the KCRRSCHED structure.
    6) It then releases the latch and calls the kcrrsmp function (kcrrsmp: start multiple arch processes).
    7) The kcrrsmp function acquires a latch on the kcrrxs{} structure (ARCH startup status) to serialize execution, so that even if the function is invoked concurrently, only one invocation runs at a time.
    8) It schedules the archiver processes that are in the pending state and cleans up any dead processes.
    9) It then loops over the specified number of processes, changing the KCRRSCHED state to KCRRSTART, putting the archiver processes into the ready-to-start state.
    10) It releases the latch and starts the ARCH processes.
    11) The kcrrsmp function reacquires the latch. Each archiver process is notified to activate itself. After activating itself, each archiver process records this in the alert file.
    12) The calling function sleeps until every archiver process has activated itself and updated the contents of the kcrrxs structure.
    13) Finally, when the current number of archiver processes matches the requested number, it releases the latch and breaks (a break in the C sense).
    This sequence is reflected in the alert.log as follows:
    sql: prodding the archiver
    ALTER SYSTEM SET log_archive_max_processes=4;
    Tue Jul 13 02:15:14 1999
    ARC0: changing ARC1 KCRRNOARCH->KCRRSCHED
    ARC0: changing ARC2 KCRRNOARCH->KCRRSCHED
    ARC0: changing ARC3 KCRRNOARCH->KCRRSCHED
    ARC0: STARTING ARCH PROCESSES
    ARC0: changing ARC1 KCRRSCHED->KCRRSTART
    ARC0: changing ARC2 KCRRSCHED->KCRRSTART
    ARC0: changing ARC3 KCRRSCHED->KCRRSTART
    ARC0: invoking ARC1
    Tue Jul 13 02:15:15 1999
    ARC1: changing ARC1 KCRRSTART->KCRRACTIVE
    Tue Jul 13 02:15:15 1999
    ARC0: Initializing ARC1
    ARC0: ARC1 invoked
    ARC0: invoking ARC2
    ARC1 started with pid=10
    ARC1: Archival started
    Tue Jul 13 02:15:15 1999
    ARC2: changing ARC2 KCRRSTART->KCRRACTIVE
    Tue Jul 13 02:15:15 1999
    ARC0: Initializing ARC2
    ARC2 and ARC3 follow the same procedure.
    Interestingly, the number of processes can also be reduced. For example, when the following command is executed:
    SVRMGRL>alter system set LOG_ARCHIVE_MAX_PROCESSES=2;
    the following steps are carried out in order:
    1) The shadow process contacts a currently active archiver process.
    2) The archiver process calls the kcrrxmp function (kcrrxmp: stop multiple arch processes).
    3) The kcrrxmp function acquires a latch on the kcrrxs{} structure (ARCH startup status) so that other processes cannot modify the structure concurrently.
    4) It checks whether the newly requested number of archiver processes is smaller than the number currently in use.
    5) If it is smaller, it picks from the list of archiver processes those that were scheduled most recently, i.e. whose turn in the archival schedule will not come around soon.
    6) Each of those processes is asked to change state from KCRRACTIVE to KCRRSHUTDN.
    7) Once the state has changed, the OS is told to terminate the process, and the state is changed to KCRRDEAD. The related status information is cleaned up and the contents of the kcrrxs{} structure are updated.
    Steps 6) and 7) are repeated until the count has dropped to the specified number of archiver processes.
    8) The kcrrxs structure is updated with the new number of archiver processes.
    9) The latch is released.
    The state changes are reflected in the alert.log file as follows:
    sql: prodding the archiver
    Tue Jul 13 00:34:20 1999
    ARC3: changing ARC0 KCRRACTIVE->KCRRSHUTDN
    ARC3: sending ARC0 shutdown message
    ARC3: changing ARC1 KCRRACTIVE->KCRRSHUTDN
    ARC3: sending ARC1 shutdown message
    ARC3: received prod
    Tue Jul 13 00:34:20 1999
    ALTER SYSTEM SET log_archive_max_processes=2;
    Tue Jul 13 00:34:20 1999
    ARCH shutting down
    ARC0: Archival stopped
    ARC0: changing ARC0 KCRRSHUTDN->KCRRDEAD
    Tue Jul 13 00:34:20 1999
    ARCH shutting down
    ARC1: Archival stopped
    ARC1: changing ARC1 KCRRSHUTDN->KCRRDEAD
    4. How to determine which archiver process archived an online log
    Archiver processes are scheduled to perform archiving in a round-robin fashion. When multiple archiver processes are activated according to load, many combinations are possible. Because Oracle 8i supports multiple archive log destinations and duplexing of archive logs, it needs to record which process archived each log file.
    In Oracle 8i, every successful archival operation records the archiver process name in a trace file.
    The following is the relevant portion of such a trace file:
    Instance name: v815
    Redo thread mounted by this instance: 1
    Oracle process number: 12
    Unix process pid: 3658, image: oracle@oracle8i (ARC3)
    *** Session ID:(12.1) 1999.07.13 02:15:15.000
    *** 1999.07.13 02:15:15.000
    *** 1999.07.13 02:33:06.000
    ARC3: Begin archiving log# 1 seq# 38 thrd# 1
    ARC3: VALIDATE
    ARC3: PREPARE
    ARC3: INITIALIZE
    ARC3: SPOOL
    ARC3: Creating archive destination 1 : '/bigdisk/oracle8i/dbs/arch/1_38.dbf'
    ARC3: Archiving block 1 count 1 to : '/bigdisk/oracle8i/dbs/arch/1_38.dbf'
    ARC3: Closing archive destination 1 : /bigdisk/oracle8i/dbs/arch/1_38.dbf
    ARC3: FINISH
    ARC3: Archival success destination 1 : '/bigdisk/oracle8i/dbs/arch/1_38.dbf'
    ARC3: COMPLETE, all destinations archived
    ARC3: ArchivedLog entry added: /bigdisk/oracle8i/dbs/arch/1_38.dbf
    ARC3: ARCHIVED
    *** 1999.07.13 02:33:06.000
    ARC3: Completed archiving log# 1 seq# 38 thrd# 1
    From this information we can tell that archiver process 3 archived log sequence 38 to destination 1: /bigdisk/oracle8i/dbs/arch.
    Reference Document
    <Note:73163.1>
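    The grow/shrink flow described above can be condensed into a toy state machine. This is an illustrative Python model of the documented behavior (clamp to 10, KCRRSCHED -> KCRRSTART -> KCRRACTIVE on growth, KCRRACTIVE -> KCRRSHUTDN -> KCRRDEAD on shrink), not Oracle's actual kcrrschd/kcrrsmp/kcrrxmp C code:

```python
# Toy model of dynamic archiver (ARCn) resizing in Oracle 8i.
MAX_ARCH = 10  # hard upper limit enforced by the scheduler

def resize_archivers(active, requested):
    """Return the new list of active ARCn names after a resize request."""
    requested = min(requested, MAX_ARCH)  # values above 10 are clamped to 10
    if requested <= 0 or requested == len(active):
        return list(active)  # nothing to do
    if requested > len(active):
        # growth: new processes pass KCRRSCHED -> KCRRSTART -> KCRRACTIVE
        return [f"ARC{i}" for i in range(requested)]
    # shrink: surplus processes go KCRRACTIVE -> KCRRSHUTDN -> KCRRDEAD.
    # (Oracle picks the least-recently scheduled victims; for simplicity
    # this sketch just keeps the first N processes.)
    return list(active)[:requested]

procs = ["ARC0"]                     # LOG_ARCHIVE_START=TRUE starts ARC0
procs = resize_archivers(procs, 4)   # ALTER SYSTEM SET ...=4
print(procs)                         # ARC0..ARC3
procs = resize_archivers(procs, 2)   # ALTER SYSTEM SET ...=2
print(procs)                         # back down to two processes
procs = resize_archivers(procs, 99)  # clamped to the maximum of 10
print(len(procs))
```

    The clamp mirrors step 3) of the growth procedure, and the truncation mirrors steps 6)-7) of the shrink procedure being repeated until the target count is reached.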


  • Better process for introducing new features and explaining how they work

    There are all kinds of new features being added to the Creative Cloud app, but learning what they are and how they work isn't the most user-friendly process. The current process is to read the release notes after updating to a new version. The release notes should hyperlink to the related features in the product help. It would be great if there were a "What's New" section for the Creative Cloud app itself with each new release, including an introduction to new features, just like there is for the desktop products.

    This is a poorly explained case for why having a loupe in Photoshop is so important. I have also not used the best example, but I will. This belongs under ADDITIONAL FEATURES: Loupe/Magnifier View. Also, I will try to make my English clearer, but here it goes:

  • How to move the process chain from Unassigned node to a new node in 3.5?

    Hi all,
    I have created a new process chain in development and it is falling under the Unassigned Nodes. I want to move that process chain to another node, but I am unable to do so.
    Can anyone let me know how to move a process chain out of the unassigned node in BW 3.5? I have tried drag and drop, but it stays the same.
    Thanks
    Poooja

    Hello,
    Try this:
    Double-click on your process chain. Via the menu select:
    Process Chain > Attributes > Display Components
    Press F4 (possible entries).
    At the bottom of the window you will find a Create icon
    for making your own component.
    After you have created it, assign it to your process chain.
    Don't forget to save the process chain.
    Regards,
    Sivaram

  • How to trigger process chain when datasource is loaded with new data? PUSH

    Hi all,
    Until now we have used the pull method to load data into BW, which is done manually. But we would like to work with the PUSH method, where, whenever new data is loaded into the DataSource, an event is triggered which in turn triggers the process chain.
    How is this possible? Can we use a timestamp on the DataSource to trigger the event?
    rgds,
    wills

    hi Geo,
    Thanks for your response. I appreciate it.
    The case is slightly different. I am working with Bank Analyzer data, which resides in a source system defined to load results from the Results Database, a part of Bank Analyzer.
    If it were R/3 we would have the standard calling procedures, but the data is not in R/3; it is in Bank Analyzer.
    I am keen to find some procedure to push the data into BW automatically whenever an end-user execution is done at the BA level.
    Your help would be highly appreciated.
    thks,
    rgds.
    wills
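    In the absence of a native push from the source, one common workaround is a small poller: watch a timestamp on the source and raise the trigger event only when it advances. A generic Python sketch of that idea (all names here are hypothetical; in an SAP system the event would be raised with something like function module BP_EVENT_RAISE, and the process chain's start process would be scheduled on that event):

```python
# Generic polling sketch: raise an event only when the source timestamp
# moves forward. Illustrative only; this is not SAP code.

def watch(get_timestamp, raise_event, last_seen=None):
    """Check the source once; fire the event if new data has arrived."""
    current = get_timestamp()
    if last_seen is not None and current > last_seen:
        raise_event()  # in BW this would trigger the process chain
    return current     # caller stores this for the next poll

fired = []
timestamps = iter([100, 100, 250])  # unchanged, unchanged, then new data

get_ts = lambda: next(timestamps)
last = watch(get_ts, lambda: fired.append("chain-start"))        # baseline
last = watch(get_ts, lambda: fired.append("chain-start"), last)  # no change
last = watch(get_ts, lambda: fired.append("chain-start"), last)  # fires once
print(fired)
```

    The first poll only establishes a baseline, so a freshly started watcher does not spuriously trigger a load; only a genuine timestamp advance fires the event.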

  • New system, problems in process chain

    Hi,
    Recently we made a copy from the Prod system to a new one. The copy was successful, and so was the connection with the R/3 system. But when I try to execute a process chain, I receive the following message: Job BI_PROCESS_CHAIN could not be scheduled. Termination with returncode 8
    and the system log shows:
    BP_STEPLIST_EDITOR: Invalid step values (step 1 ) found. Reason
    > User RFCUSER cannot be scheduled due to its type
    BP_JOB_EDITOR: Job BI_PROCESS_CHAIN is invalid. Reason:
    > Step 1 contains illegal values
    As I see it, the problem is with RFCUSER (the communications user), but I don't know what the cause could be.
    I will appreciate your help
    regards,
    victoria

    Check <a href="https://service.sap.com/sap/support/notes/947690">SAP Note 947690</a>
    <b>Reason and Prerequisites</b>
    1. The user type of background user is incorrect.
    2. Invalid host name setting.
    3. Server name spelt wrong or wrong case-sensitivity for server name in sm51.
    4. Process chain scheduled to run on a server which does not exist.
    5. No instance is defined but the operation mode is set in sm63.
    <b>Solution</b>
    1. Set the background user profile according to note 511475.
    2. Check the host name setting in sm65 and correct if necessary (refer note 23538).
    3. Check the server settings in sm51 i.e., check for the spelling and correct case.
    4. Schedule the chain with a valid server.
    5. Maintain an instance of operation mode in rz04
    Assign points if helpful
    Regards, Uday Pothireddy

  • Creating the New Process Chains in BW System

    Hi SDN,
    What is the best practice for creating a new process chain?
    1. Create the process chain in BW Development first and transfer it through a transport request,
    OR
    2. Create the process chain directly in BW Production and schedule it?
    Please share your views.
    Kind regards,
    Rajesh Giribuwa

    Hi Rajesh,
    Yes, we should create any development in the development system and then move it subsequently to test and then to the production system.
    But my experience was different in one of my projects.
    I developed the PC in dev first and then transported it to test and then to prod.
    It worked fine in dev and test, but in prod it was not running, due to some system resources.
    So I made the change directly in prod, and then it worked.
    But it was a risk.
    Hope this is helpful.
    Jimmy

  • Urgent Issue - Copy DTP and Process Chains in New Client Instance

    We have created a new client instance (CLNT 510) based on our present client (CLNT 500).
    We have all the process chains and application components from source system CLNT 500, and the data (both master and transaction data) in place for CLNT 500.
    Now we need to use CLNT 510 for our further development. The RFC connection has been established.
    We just want to know: do we need to create all the application component transfers, transformations, DTPs, InfoPackages and process chains again for the CLNT 510 data?
    please help.
    Thanks and Kind Regards,
    Pratap Gade.

    Hello Jr Roberto,
    Our company is running multiple SAP implementations in different countries.
    The client for BW Dev is 400.
    Source systems for client BWD:
    ECC client ERP Dev 500 (for the Holland implementation)
    ECC client ERP Dev 510 (for the German implementation, an instance of ECC 500)
    We have completed the client 500 implementation.
    We are now starting the German implementation, and an instance of the Holland ECC Dev client 500 has been created for it with German data.
    Two RFC connections have now been set up in BWD 400.
    Finally, when they move to the QAS client, they are going to integrate the data of both countries into one client.
    Presently I need to use ECC client 510 for our German development.
    We have around 40 SAP BI reports in our first implementation and around 10 in our second.
    I just want to know what issues need to be considered when loading master data (will it overwrite the master data from client 500 if I extract master data from client 510?), and also whether I can copy all the process chains etc. from client ECC 500 to ECC 510 without creating all the transformations, DTPs and InfoPackages again to load all the master data from client 510.
    I need a broad understanding of how to go about the development, starting with the application component transfer from source system client 510 through to report development (if possible reusing the already created objects: InfoPackages, DTPs, etc.).
    Your help would be really appreciated, Mr Roberto.
    Thanks and Kind Regards,
    Shekar.

  • New Process Chain Activation

    When a new process chain is transported to QA or prod, it has to be MANUALLY activated once before the NEW: prefix is removed and the chain can be used by a meta chain.
    Is there a configuration setting or some other way to have the chain AUTOMATICALLY activated when it is transported to QA or prod?
    Thanks,
    Mike S.

    Hi Mike,
    Welcome to SDN!!
    You would need to do that to enable scheduling of the process chain. I did notice an OSS Note, or some document, which described settings with which we can avoid this. I don't have access to my documents now; I will search and post it.
    Meanwhile, all the posts discussed here talk about manual activation:
    Process chain scheduling in QA
    Process Chain Status is new after transport to QA
    Also check this blog: /people/sap.user72/blog/2005/09/05/sap-bw-and-business-content-datasources-in-pursuit-of-the-origins
    Bye
    Dinesh

  • Process Chain statistics (new stats in 2004s)

    Hi,
    We have activated and are loading all the new statistics for 2004s. We are checking the stats through the portal, and we see many stats in the default web templates used in the portal to display them. Process chain stats are the only ones showing no data, which can't be right, because we have been loading exclusively with process chains.
    Where do you "activate" stats for process chains themselves? I am aware of RSPC_API_CHAIN_GET_STATUS, but it appears that with 2004s process chain statistics content is delivered?
    Thanks,  Mark.

    For data-load statistics of process chains and processes:
    use 0TCT_C21, 0TCT_VC21 and 0TCT_MC21
    For more Insights -
    The new InfoProviders are:
    For more highly aggregated query-runtime statistics:
    0TCT_C01, 0TCT_VC01 and 0TCT_MC01
    These replace InfoCube 0BWTC_C02.
    For more detailed query-runtime statistics:
    0TCT_C02, 0TCT_VC02 and 0TCT_MC02 
    These replace InfoCube 0BWTC_C02.
    For data manager statistics:
    0TCT_C03, 0TCT_VC03 and 0TCT_MC03 
    These replace InfoCube 0BWTC_C03.
    For data-load statistics of process chains and processes:
    0TCT_C21, 0TCT_VC21 and 0TCT_MC21
    For data-load statistics of data transfer processes:
    0TCT_C22, 0TCT_VC22 and 0TCT_MC22
    For data-load statistics of InfoPackages:
    0TCT_C23, 0TCT_VC23 and 0TCT_MC23
    These deliver essentially the same information as InfoCube 0BWTC_C05 but they use the new InfoObjects. The remaining new InfoProviders also use the new InfoObjects.
    For the current data-load status of process chains and processes:
    0TCT_VC11 and 0TCT_MC11
    For the current status of requests loaded to InfoProviders, InfoObjects that have been updated flexibly, and PSA tables:
    0TCT_VC11 and 0TCT_MC11
    Hope it Helps
    Chetan
    @CP..

  • Process chain alteration in BI 7 with new DTPs and Infopackages

    Dear All,
    For my BI system, the source system was replaced: the 4.7 system gave way to an ECC 6.0 system. All the DataSources that were part of the old system were recreated in the new one too.
    Now, I have a process chain in my production system which handles all the master data and transaction data loading. It uses the InfoPackages (pointing to the old system) to load the data. Since I have new DataSources, I will have new InfoPackages too (pointing to the new system). I need to replace all the old InfoPackages in the process chain with the newly created ones. Is there any direct step or method for this, or do I have to create a new process chain from scratch?
    Regards,
    Srinivas

    Hi,
    When you right-click on the InfoPackage variant, it shows a number of options, such as Display Variant, Exchange Variant, Connect With, etc.
    So go into the edit mode of the process chain, right-click on the InfoPackage variant and select Exchange Variant. This opens a box listing all the InfoPackages; select the desired InfoPackage, then save and activate the process chain.
    Thanks,
    Neha
