Multiple bzcat processes
Oct 2008 20" Intel iMac running 10.6.8 with 6 GB of memory.
Memory had filled up recently -- due to Firefox, of course! It was down to 35 MB free out of 6 GB.
After quitting FF (which took a few minutes) I noticed several bzcat processes, each using about 25%-30% CPU. Initially the kernel was taking up maybe 70% CPU. I quit all apps, but the bzcat processes were still running. The CPU was maxed out at some point (at least during the FF shutdown). Rebooting fixed it. Any idea what this is? See attached screenshot.
I don't know if this is related, but I'm also having trouble with Time Machine backups taking over an hour (I have a separate discussion thread on that: https://discussions.apple.com/message/22528453#22528453).
Time machine backups take way too long
I found the problem. Time Machine Buddy broke, and seems to have created the following processes:
new-host:~ alanfeldman$ cat a.tmp | egrep ' 220 | 2839 | 2846 '
501 220 124 0 0:07.18 ?? 0:09.65 /System/Library/CoreServices/Dock.app/Contents/Resources/DashboardClient.app/Co ntents/MacOS/DashboardClient
501 2839 220 0 0:00.00 ?? 0:00.01 /bin/sh -c bzcat /var/log/system.log.0.bz2 | cat - /var/log/system.log | grep -E 'backupd\[353\]'
501 2840 2839 0 0:00.33 ?? 0:11.07 bzcat /var/log/system.log.0.bz2
501 2841 2839 0 0:00.44 ?? 0:00.48 cat - /var/log/system.log
501 2842 2839 0 0:00.29 ?? 0:00.54 grep -E backupd\[353\]
501 2846 220 0 0:00.00 ?? 0:00.01 /bin/sh -c bzcat /var/log/system.log.0.bz2 | cat - /var/log/system.log | grep -E 'backupd\[[0-9]+\]: Starting ' | tail -n25 | sed -E 's/^.+backupd\[//; s/]:.+$//' | uniq
501 2847 2846 0 0:00.24 ?? 0:05.99 bzcat /var/log/system.log.0.bz2
501 2848 2846 0 0:00.23 ?? 0:00.25 cat - /var/log/system.log
501 2849 2846 0 0:00.16 ?? 0:00.31 grep -E backupd\[[0-9]+\]: Starting
501 2850 2846 0 0:00.00 ?? 0:00.00 tail -n25
501 2851 2846 0 0:00.00 ?? 0:00.00 sed -E s/^.+backupd\[//; s/]:.+$//
501 2852 2846 0 0:00.00 ?? 0:00.00 uniq
new-host:~ alanfeldman$
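For the curious, the scan those widget processes perform (pull Time Machine backupd entries out of the rotated and current system logs, keep the last few runs, collapse duplicates) can be sketched in Python. The helper name and sample log lines are my own, for illustration only; this is not the widget's actual code:

```python
import re

def recent_backupd_pids(log_lines, limit=25):
    """Mimic the widget's pipeline:
    grep -E 'backupd\\[[0-9]+\\]: Starting ' | tail -n25 |
    sed -E 's/^.+backupd\\[//; s/]:.+$//' | uniq
    Returns the PIDs of the most recent backupd runs, adjacent
    duplicates collapsed."""
    pids = [m.group(1) for line in log_lines
            if (m := re.search(r'backupd\[([0-9]+)\]: Starting ', line))]
    tail = pids[-limit:]  # tail -n25
    # uniq: drop a PID only when it repeats the immediately preceding one
    return [p for i, p in enumerate(tail) if i == 0 or tail[i - 1] != p]
```

Reading `/var/log/system.log.0.bz2` (via the `bz2` module) followed by `/var/log/system.log` would reproduce the `bzcat … | cat - …` part of the pipeline.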
Sorry to have bothered you all.
Similar Messages
-
MULTIPLE ARCHIVER PROCESSES FAQ ( ORACLE 8I NEW FEATURE )
Product: ORACLE SERVER
Date written: 2002-04-19
MULTIPLE ARCHIVER PROCESSES FAQ ( ORACLE 8I NEW FEATURE )
=========================================================
PURPOSE
1. What LOG_ARCHIVE_MAX_PROCESSES does
2. Whether LOG_ARCHIVE_MAX_PROCESSES can be changed dynamically
3. How the number of archiver processes changes dynamically
4. How to determine which archiver process archived an online log
Explanation
1. What LOG_ARCHIVE_MAX_PROCESSES does
Oracle 8i supports multiple archive destinations, and multiple archiver
processes can be used to reduce the load on a single archiver. The
LOG_ARCHIVE_MAX_PROCESSES parameter specifies the maximum number of
ARCH processes to start.
If LOG_ARCHIVE_START is TRUE, the LOG_ARCHIVE_MAX_PROCESSES value
specified in the init file is read at instance startup. If
LOG_ARCHIVE_START is TRUE but LOG_ARCHIVE_MAX_PROCESSES is not set,
only the arc0 process is started. If LOG_ARCHIVE_MAX_PROCESSES is set
(in the range 1 - 10), additional processes such as arc0 and arc1 are
started.
However, there is normally no need to set this parameter explicitly to
anything other than its default of 1, because the system determines how
many ARCn processes are needed and spawns additional ARCn processes
accordingly.
2. Whether LOG_ARCHIVE_MAX_PROCESSES can be changed dynamically
The value can be changed dynamically with an ALTER SYSTEM command using
SET LOG_ARCHIVE_MAX_PROCESSES=n, where n must be a value between 1 and
10. However, if LOG_ARCHIVE_START is FALSE, the command has no effect.
3. How the number of archiver processes changes dynamically
If LOG_ARCHIVE_START is TRUE, Oracle starts a single archiver process
(ARC0) at startup. When needed, an ALTER SYSTEM command can start as
many archiver processes as specified.
Example:
SVRMGRL>alter system set LOG_ARCHIVE_MAX_PROCESSES=4;
Running the above command adds ARC1, ARC2, and ARC3 through the
following steps:
1) The shadow process asks the primary archive process to increase the
number of processes.
2) The archiver process calls the kcrrschd function (kcrrschd:
schedule multiple arch processes).
3) It checks whether the requested number of processes is smaller than
the number of archiver processes currently in use. If the new value is
smaller, or if ARCHIVING is disabled, it returns without taking any
further action. Otherwise it checks whether the value exceeds the
supported maximum of 10, and if it does, it caps the number of
processes at 10.
4) The scheduler function acquires a latch on the kcrrxs{} structure,
which represents the ARCH activation status.
5) The scheduler function loops over the specified number of processes
and records the scheduled state in the KCRRSCHED structure.
6) It then releases the latch and calls the kcrrsmp function (kcrrsmp:
start multiple arch processes).
7) The kcrrsmp function acquires a latch on the kcrrxs{} structure
(ARCH startup status) to serialize execution, so that even if the
function is invoked concurrently, only one invocation runs at a time.
8) It schedules any pending archiver processes and cleans up any dead
processes.
9) The function then loops over the specified number of processes and
changes their state from KCRRSCHED to KCRRSTART, making the archiver
processes ready to start.
10) It releases the latch and starts the ARCH processes.
11) The kcrrsmp function reacquires the latch. Each archiver process is
notified to activate itself. After activating itself, each archiver
process records the event in the alert file.
12) The calling function sleeps until every archiver process has
activated itself and the contents of the kcrrxs structure have been
updated.
13) Finally, once the current number of archiver processes matches the
requested number, it releases the latch and breaks (a break in the C
sense).
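The scale-up sequence above can be pictured as a small state machine over the KCRRNOARCH -> KCRRSCHED -> KCRRSTART -> KCRRACTIVE transitions. The following is a toy model in Python, purely illustrative (the function name and dict representation are mine, not Oracle code); it skips the latching and only tracks the state changes:

```python
# States taken from the alert.log transitions quoted in this note.
NOARCH, SCHED, START, ACTIVE = "KCRRNOARCH", "KCRRSCHED", "KCRRSTART", "KCRRACTIVE"

def grow_archivers(states, requested):
    """states maps 'ARC0', 'ARC1', ... to a state string.
    Returns a new mapping after scaling up to `requested` processes."""
    requested = min(requested, 10)  # step 3: cap at the supported maximum
    active = sum(1 for s in states.values() if s == ACTIVE)
    if requested <= active:         # step 3: nothing to do, return unchanged
        return states
    states = dict(states)
    for n in range(requested):      # step 5: mark new processes scheduled
        name = f"ARC{n}"
        if states.get(name) != ACTIVE:
            states[name] = SCHED
    for name, s in states.items():  # step 9: scheduled -> ready to start
        if s == SCHED:
            states[name] = START
    for name, s in states.items():  # step 11: each process activates itself
        if s == START:
            states[name] = ACTIVE
    return states
```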
The above sequence appears in alert.log as follows:
sql: prodding the archiver
ALTER SYSTEM SET log_archive_max_processes=4;
Tue Jul 13 02:15:14 1999
ARC0: changing ARC1 KCRRNOARCH->KCRRSCHED
ARC0: changing ARC2 KCRRNOARCH->KCRRSCHED
ARC0: changing ARC3 KCRRNOARCH->KCRRSCHED
ARC0: STARTING ARCH PROCESSES
ARC0: changing ARC1 KCRRSCHED->KCRRSTART
ARC0: changing ARC2 KCRRSCHED->KCRRSTART
ARC0: changing ARC3 KCRRSCHED->KCRRSTART
ARC0: invoking ARC1
Tue Jul 13 02:15:15 1999
ARC1: changing ARC1 KCRRSTART->KCRRACTIVE
Tue Jul 13 02:15:15 1999
ARC0: Initializing ARC1
ARC0: ARC1 invoked
ARC0: invoking ARC2
ARC1 started with pid=10
ARC1: Archival started
Tue Jul 13 02:15:15 1999
ARC2: changing ARC2 KCRRSTART->KCRRACTIVE
Tue Jul 13 02:15:15 1999
ARC0: Initializing ARC2
ARC2 and ARC3 follow the same procedure.
Interestingly, the number of processes can also be reduced. For example,
when the following command is run:
SVRMGRL>alter system set LOG_ARCHIVE_MAX_PROCESSES=2;
the following steps are executed in order:
1) The shadow process contacts a currently active archiver process.
2) The archiver process calls the kcrrxmp function (kcrrxmp: stop
multiple arch processes).
3) The kcrrxmp function acquires a latch on the kcrrxs{} structure
(ARCH startup status) so that no other process modifies the structure
concurrently.
4) It checks whether the newly requested number of archiver processes
is smaller than the number currently in use.
5) If it is smaller, it identifies, from the list of archiver
processes, those that were scheduled most recently and whose turn in
the archival schedule will not come around soon.
6) It asks each of these processes to change state from KCRRACTIVE to
KCRRSHUTDN.
7) Once the state has changed, it has the OS terminate the process and
changes its state to KCRRDEAD. The associated status information is
cleaned up and the kcrrxs{} structure is updated.
Steps 6) and 7) are repeated until the number of archiver processes has
been reduced to the specified count.
8) The kcrrxs structure is updated with the new number of archiver
processes.
9) The latch is released.
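The selection rule in the scale-down path (shut down the most recently scheduled processes first) can be sketched as follows. This is an illustrative toy, not Oracle code; the function name and the "oldest-scheduled first" list ordering are my own modeling assumptions:

```python
def shrink_archivers(names_by_schedule_age, requested):
    """Toy model of the kcrrxmp selection described above.
    `names_by_schedule_age` lists active archiver names ordered
    oldest-scheduled first. Returns (survivors, shutdown_order):
    the most recently scheduled processes are shut down first,
    until only `requested` remain."""
    current = list(names_by_schedule_age)
    victims = []
    while len(current) > max(requested, 1):
        victims.append(current.pop())  # most recently scheduled goes first
    return current, victims
```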
The state changes appear in the alert.log file as follows:
sql: prodding the archiver
Tue Jul 13 00:34:20 1999
ARC3: changing ARC0 KCRRACTIVE->KCRRSHUTDN
ARC3: sending ARC0 shutdown message
ARC3: changing ARC1 KCRRACTIVE->KCRRSHUTDN
ARC3: sending ARC1 shutdown message
ARC3: received prod
Tue Jul 13 00:34:20 1999
ALTER SYSTEM SET log_archive_max_processes=2;
Tue Jul 13 00:34:20 1999
ARCH shutting down
ARC0: Archival stopped
ARC0: changing ARC0 KCRRSHUTDN->KCRRDEAD
Tue Jul 13 00:34:20 1999
ARCH shutting down
ARC1: Archival stopped
ARC1: changing ARC1 KCRRSHUTDN->KCRRDEAD
4. How to determine which archiver process archived an online log
Archiver processes are scheduled to perform archiving in round-robin
fashion. When multiple archiver processes are activated depending on
load, many combinations are possible. Because Oracle 8i supports
multiple archive log destinations and duplexing of archive logs, it
needs to record which process archived each log file.
In Oracle 8i, every successful archival operation records the archiver
process name in a trace file.
The following are the key contents of the relevant trace file:
Instance name: v815
Redo thread mounted by this instance: 1
Oracle process number: 12
Unix process pid: 3658, image: oracle@oracle8i (ARC3)
*** Session ID:(12.1) 1999.07.13.02.15.15.000
*** 1999.07.13.02.15.15.000
*** 1999.07.13.02.33.06.000
ARC3: Begin archiving log# 1 seq# 38 thrd# 1
ARC3: VALIDATE
ARC3: PREPARE
ARC3: INITIALIZE
ARC3: SPOOL
ARC3: Creating archive destination 1 : '/bigdisk/oracle8i/dbs/arch/1_38.dbf'
ARC3: Archiving block 1 count 1 to : '/bigdisk/oracle8i/dbs/arch/1_38.dbf'
ARC3: Closing archive destination 1 : /bigdisk/oracle8i/dbs/arch/1_38.dbf
ARC3: FINISH
ARC3: Archival success destination 1 : '/bigdisk/oracle8i/dbs/arch/1_38.dbf'
ARC3: COMPLETE, all destinations archived
ARC3: ArchivedLog entry added: /bigdisk/oracle8i/dbs/arch/1_38.dbf
ARC3: ARCHIVED
*** 1999.07.13.02.33.06.000
ARC3: Completed archiving log# 1 seq# 38 thrd# 1
From this information we can see that archive process 3 archived log
sequence 38 to destination 1, /bigdisk/oracle8i/dbs/arch.
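Pulling that answer out of a trace file mechanically is a one-regex job. A hypothetical helper (the function name is mine; the line format is taken from the excerpt above):

```python
import re

def completed_archivals(trace_text):
    """Extract (process, log#, seq#) tuples from 'Completed archiving'
    lines in an ARCn trace file excerpt."""
    pat = re.compile(r'(ARC\d+): Completed archiving log# (\d+) seq# (\d+)')
    return [(m.group(1), int(m.group(2)), int(m.group(3)))
            for m in pat.finditer(trace_text)]
```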
Reference Document
<Note:73163.1> -
Photoshop elements 8 not proceeding at multiple file processing
Hi everyone,
I've got the following problem with photoshop elements 8. I worked for a long time with it. Using the multiple file processing functionality a lot. I do quite a lot of big shoots and I only use the raw settings for processing. After that I let photoshop elements do the rest of the work, which is converting all my raw images into jpegs and resizing them to prepare them for uploading. These shoots often contain more then 200 photographs.
As of this last weekend when I start up the multiple file processing window it starts opening the first file but does not proceed. Which means that I have to edit all the images by myself and resize them to the proper size etc etc etc. I really don't want to do this. This funcionality is one of the main reasons why I use photoshop. Anyone got any idea why it suddenly stopped working? Anyone else experienced this and solved it?
Thanks a lot.
Ben

One thing to check is the bit depth that camera raw is set to.
Open a raw photo in the camera raw dialog and look down along
the bottom of the dialog where it says bit depth. If it says 16 bit,
change it to 8 bit and press Done.
Photoshop elements sometimes won't process 16 bit files using process
multiple files.
MTSTUNER -
Automatic Payment Programme for multiple vendors process
Hi SAP FICO Experts,
Please send Automatic Payment Programme for multiple vendors process
Step by step
Regards
Maran

Hi,
You need to follow same process.
Regards,
Tejas -
Multiple BPEL Process Managers in one environment?
Hi,
I'm quite new to the Oracle SOA product line, so please be gentle when my question doesn't make sense.
I’m in the process of designing a new enterprise architecture based on service orientation. I’m looking at the Oracle SOA Suite software to get a better understanding of what things will look like when getting to the actual implementation.
Some internal projects are moving fast, so I’m trying to get an understanding on how things will work out in the future. One case has my current attention, related to the BPEL Process Manager.
A 3rd party vender is offering a specific solution which is using or based on the BPEL Process Manager. This solution can be purchased as a black-box. From the perspective of my new architecture I would like to purchase the SOA suite as a whole. New developments will most likely be using the BPEL Process Manager for orchestration. But when purchasing the black-box application, the environment will be extended with an extra BPEL Process Manager. My first thought is to try and incorporate the BPEL processes from the 3rd party vendor onto one single BPEL Process Manager. But I also think two BPEL Process Managers can have advantages.
I’m not sure how to go forward with this. How should I handle this? Is it common to have multiple BPEL Process Managers in one environment, or should I pursue maximum integration and go for only one BPEL Process Manager?

Hi,
We have a similar situation at my current client site, and we are going with two separate BPEL PM installs for various reasons. One of the main drivers in our case is conflicting timelines of two separate projects.
But my suggestion is to go with a single enterprise install if that doesn't break the 3rd-party vendor's support contracts. This has the obvious benefits of reduced infrastructure, administrative, and other related costs. How many processes does the vendor have as part of their solution? If they are a handful, you know it doesn't justify having separate environments.
On the other hand, having two separate installs isn't bad either, depending on how big or small your SOA service/process orchestration footprint is. We don't have a complete handle on the performance/scalability aspects of BPEL PM based on the number of processes deployed. But as you can see, that may be one of the drivers to have redundancy in BPEL PM servers.
At the end of the day there is no clear-cut answer. If you have funding and resources, go with two separate instances. But from an architectural perspective it is better to have a single install with clustering, for better performance/scalability.
HTH
Regards,
Rajesh -
Individualized multiple file processing
Hi everybody,
I would appreciate your answer to the following: does PSE provide a framework for individualized settings in multiple file processing (a macro command function?)? The standardized one does not help me, as the quick fix options do not turn out as I need them.
I usually have about 30-40 photos per day which I need to adjust with "smart fix 1%" -- it takes ages to do these all individually.
Thanks for your reply.

No, there's no macro function. What you see in the "Process Multiple Files" dialog is all you get.
You can use the free XnView image viewer application:
http://www.xnview.com/
which allows batch processing with image adjustments that can be customized.
Ken -
Multiple BPEL processes polling one inbound directory ?
Hi All-
Somewhere it mentioned that :
"Multiple BPEL processes or multiple file adapters polling one inbound directory are not supported. Ensure that all are polling their own unique directory."
Is this issue still there in 10.1.3.3.0?
Please advise.
Regards,
Sreejit

Hi,
I have one directory say c:/files and I have two BPEL process accessing this folder say BPELA and BPELB.
BPELA looking for the file with pattern file_A*.xml
BPELB looking for the file with pattern file_B*.xml
Is the statement *"Multiple file adapters polling one inbound directory are not supported. Ensure that each is polling its own unique directory."* still a problem in this case?
Please advise.
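Whatever the adapter itself supports (the quoted restriction suggests it may not allow this), the intent of the setup described here is to partition one directory by disjoint filename patterns. A hypothetical sketch of that routing, with made-up filenames:

```python
import fnmatch

def route_files(filenames):
    """Split one directory's contents between two consumers purely by
    glob pattern, as in the BPELA/BPELB example above. The patterns are
    disjoint, so no file matches both."""
    for_a = [f for f in filenames if fnmatch.fnmatch(f, "file_A*.xml")]
    for_b = [f for f in filenames if fnmatch.fnmatch(f, "file_B*.xml")]
    return for_a, for_b
```

Note the split only works if both consumers also agree never to touch (or delete) files outside their own pattern.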
Regards,
Sreejit -
Decision Between Multiple Alternatives-Process Chain Formula
Hi,
I need to write a formula in the process type 'Decision Between Multiple Alternatives'.
If the current date is the end of the month, then I need to trigger a monthly snapshot dataflow; if not, the daily dataflow needs to be triggered.
Anyone have any idea how to write a formula for this?

You can check both of these docs, which define how to use the 'Decision Between Multiple Alternatives' process type:
[http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/900be605-7b59-2b10-c6a8-c4f7b2d98bae&overridelayout=true]
[http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/301fb325-9d90-2c10-7199-89fc7b5a17b9&overridelayout=true] -
How to use the 'Decision between multiple alternatives' Process variant
Hi ,
Can anyone tell me how to use the 'Decision between multiple alternatives' Process variant in BI 7 Process chains ?
The requirement is that I have a DSO, e.g. DSO 1, from which I have to load DSO 2 and DSO 3. Now if the number of records in DSO 1 is < 100, I will load DSO 2 from DSO 1; else I will load DSO 3 from DSO 1.
So in the PC I have used an 'ABAP Program' variant (which counts the number of rows in DSO 1) after loading DSO 1, and after this 'ABAP Program' variant I have used the 'Decision between multiple alternatives' process variant. The problem is it is giving me only some system fields in the formula, like sy-datum or sy-timlo, which are of no use here.
It would be great if I can receive some help here.
--Devraj

In RSPC you create an ABAP processing type "ending with specific values".
We use true or false for things as you specify.
Your ABAP program should return true or false to indicate which of the situations is applicable.
Is this helpful?
Marco -
Invoking MBean on multiple OC4J processes
Hi
I have an OAS with a OC4J instance that is running on multiple OC4J processes. Using the javax.management.remote classes how do I tell the JMXConnectorFactory class that it needs to connect to multiple OC4J processes to invoke the mbean methods?
Regards,
Néstor Boscán -
Hello--
I just upgraded to Leopard (I did "Archive and Install") on my iMac and since then I have been having interesting Finder issues... specifically with my desktop. Sometimes when I mount a drive or create a new file/folder on my desktop, I get this "ghost icon" behind it. When it happened today I went to "Force Quit" to relaunch the Finder, and I noticed that I had two Finders in the "Force Quit" dialog box. Quitting one of them got rid of the ghost icons. Anyone know why a second Finder would start up like that?
Potentially helpful information:
- one user on system
- i am using spaces
- does not happen when system just starts up
Thank you!
-Jacob

Do you have JSP pages? It is most likely javac compiling those pages
into servlets.
-- Rob
Ajay Upadhyaya wrote:
Hi,
I'm seeing multiple JAVA processes getting started by WebLogic,
The Environment is -
Sun Sparc E250
JAVA_OPTIONS="-ms256m -mx512m"
WLS 6.1 sp1
JDK 1.3.1 (default with WLS)
The ps command shows this process relationships:
ps -ef -o user,pid,ppid,vsz,fname,args | grep nmurthy | grep -v ksh | grep -v grep
nmurthy 24626 7225 412952 java
/home/nmurthy/bea/jdk131/jre/bin/../bin/sparc/native_threads/java -hotspot -
ms2
nmurthy 27171 7225 413360 java
/home/nmurthy/bea/jdk131/jre/bin/../bin/sparc/native_threads/java -hotspot -
ms2
nmurthy 7222 392 1056 startWeb /bin/sh startWebLogic.sh
nmurthy 21746 7225 412856 java
/home/nmurthy/bea/jdk131/jre/bin/../bin/sparc/native_threads/java -hotspot -
ms2
nmurthy 26138 7225 412976 java
/home/nmurthy/bea/jdk131/jre/bin/../bin/sparc/native_threads/java -hotspot -
ms2
nmurthy 7268 7225 413336 java
/home/nmurthy/bea/jdk131/jre/bin/../bin/sparc/native_threads/java -hotspot -
ms2
nmurthy 7225 7222 417504 java
/home/nmurthy/bea/jdk131/jre/bin/../bin/sparc/native_threads/java -hotspot -
ms2
Summary:
7222 - this is the startWebLogic.sh process (running in nohup)
7225 - this is the main Java process running WebLogic
7268 , 26138 , 21746 , 27171 , 24626 are started from 7225
In which scenarios is it possible to have multiple processes like these
getting started by the main Java process (which runs WLS)? This is causing
heavy load on the system.
Any pointers ?
Regards,
Ajay -
How to protect the creation of a db across multiple threads/processes?
Given a multi-process, multi-threaded application, and one database file to be created, how can I guarantee that only one of the threads in one of the processes successfully creates the database, when ALL of the threads are going to either attempt to create it, or open it (if it already exists) upon startup?
My current logic for all threads is:
set open flags to DB_THREAD
start transaction
attempt to open the db
if ENOENT
abort transaction
change open flags to DB_CREATE | DB_EXCL | DB_THREAD
retry
else if EEXIST
abort transaction
change open flags to DB_THREAD
retry
else if !ok
# some other error
end
commit transaction
I'm testing on Linux right now, with plans to move to Windows, AIX, and Solaris. What I'm experiencing on Linux is that several of the threads (out of the 10 threads I'm testing) will succeed in creating the database. The others will either succeed in opening the first time through, or will receive EEXIST when they do the open with the create flags -- ultimately, they open the same created db (I'm presuming the last one created by one of the other threads). Effectively, the open with DB_CREATE | DB_EXCL is not ensuring that only one DB is created. I was under the impression that opening in a transaction would guarantee this, but it does not, or maybe I'm doing something incorrectly?
Should DB_CREATE | DB_EXCL and opening in a transaction guarantee that only one thread can create the database? Do I need to use another synchronization method?
Note: I am running off of a local disk, not over NFS or anything like that.
I tried taking out my transaction and using DB_AUTO_COMMIT instead, still no go - multiple threads still report they successfully created the DB. Using BDB 4.5.
Thanks,
Kevin Burge

Brian,
Thanks for the reply. I think I'm doing what you said, unless I'm misunderstanding. I do have all threads try to do the DB_CREATE | DB_EXCL. Are you saying I shouldn't use the DB_EXCL flag?
The problem I was seeing with 10 threads calling open w/ DB_CREATE | DB_EXCL on the same db:
* Between 1 and 9 threads would return success from db->open with the creation flags.... but the last one to create "wins".
* All the other threads would get EEXIST, as expected.
The threads that "lost" do get a successful return code from "open" with the create flags, but all data written to them is lost. They act normally except for the fact that they have a deleted file handle that they are writing to. There's no indicator that records written to them are going into the void.
My test:
I had 10 threads each trying to create or open a recno db, then append 10 records, for a total of 100 records expected. Ultimately, I would end up with between 20 and 100 records in the db, depending on how many of the threads said they had successfully created the db. So, if 5 threads said they created the db successfully, then 40 records would be missing, because 4 of those threads were writing to deleted file handles. If 2 threads said they created the db, then 10 records would be missing... if 8 threads, then 70 records missing, etc.
In other words, multiple threads creating the db appears to work correctly, because there are no errors. It was the missing records that caught my attention, and prompted my question on this forum.
For what it's worth, I've worked around the problem by opening a similarly named file via the open() system call, with O_CREAT|O_EXCL, which is guaranteed to be atomic. The first thread that can create this temp file is the only thread that can actually create the db - all others sleep upon open with ENOENT until it's created.
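Kevin's workaround relies on the POSIX guarantee that open() with O_CREAT|O_EXCL is atomic on a local filesystem: exactly one caller creates the file, everyone else gets EEXIST. A minimal sketch of that election step in Python (the function name and sentinel-file approach mirror his description; the rest is illustrative):

```python
import errno
import os

def try_become_creator(sentinel_path):
    """Atomically elect one creator among many threads/processes.
    Returns True for the single winner (which should then create the
    database, i.e. open with DB_CREATE) and False for everyone else
    (which should wait for the creator, then open without DB_CREATE)."""
    try:
        fd = os.open(sentinel_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644)
        os.close(fd)
        return True   # we created the sentinel: we are the designated creator
    except OSError as e:
        if e.errno == errno.EEXIST:
            return False  # someone else won the race
        raise
```

As the original post notes, this guarantee holds for local disks; O_EXCL is historically unreliable over NFS.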
Thanks,
Kevin -
Dump during processing on multiple work processes.
Hi,
I have designed a program "report1" which creates variants for another program "report2" and schedules those variants in the background on multiple work processes.
When I run "report1", it creates 19 variants for "report2", and those variants are scheduled for background processing on multiple work processes.
My problem is, when I execute "report1", 7 out of the 19 jobs are cancelled due to a dump.
The dump analysis is as follows:
Runtime errors TSV_TNEW_PAGE_ALLOC_FAILED
Short dump has not been completely stored. It is too big.
No storage space available for extending the internal table.
What happened?
You attempted to extend an internal table, but the required space was
not available.
Error analysis
The internal table (with the internal identifier "IT_77") could not be
enlarged any further. To enable error handling, the internal table had
to be deleted before this error log was formatted. Consequently, if you
navigate back from this error log to the ABAP Debugger, the table will
be displayed there with 0 lines.
When the program was terminated, the internal table concerned returned
the following information:
Line width: 1240
Number of lines: 340000
Allocated lines: 340000
New no. of requested lines: 10000 (in 1250 blocks).
Last error logged in SAP kernel
Component............ "EM"
Place................ "SAP-Server SR-3110_ISI_30 on host SR-3110 (wp 26)"
Version.............. 37
Error code........... 7
Error text........... "Warning: EM-Memory exhausted: Workprocess gets PRIV "
Description.......... " "
System call.......... " "
Module............... "emxx.c"
Line................. 1886.
Source code extract
001640 FORM get_new_info .
001650
001660 DATA: wa_res LIKE zrisu_openitems_view,
001670 it_temp_final TYPE TABLE OF zrisu_openitems_view.
001680
001690 REFRESH: it_temp_final, it_final.
001700 * collect the free memory
001710 CALL METHOD cl_abap_memory_utilities=>do_garbage_collection.
001720
001730 * First find all the open FI document items on the system satisfying
001740 * the selection criteria.
001750 SELECT opbel opupw opupk opupz gpart vtref vkont abwbl abwtp stakz
001760 waers studt betrw mahnv blart xblnr
001770 INTO CORRESPONDING FIELDS OF TABLE it_temp_final
001780 FROM dfkkop
001790 PACKAGE SIZE c_blksiz
001800 FOR ALL ENTRIES IN s_opbel
001810 WHERE opbel = s_opbel-low
001820 AND augst = space
001830 AND gpart IN s_part
001840 AND abwtp = space
001850 AND abwbl = space
001860 AND bldat IN s_bldat
001870 AND blart IN s_blart.
APPEND LINES OF it_temp_final TO it_final. ---> dump occurs at this statement
001890 REFRESH it_temp_final.
001900 ENDSELECT.
001910 REFRESH s_opbel.
001920
01930 SORT it_final BY opbel opupw opupk opupz.
01940
01950 LOOP AT it_final INTO wa_final.
01960 * Check the documentdate if relevant
01970 IF wa_res-opbel EQ wa_final-opbel.
01980 * Use information from last record if same document to make the
01990 * program faster.
02000 IF wa_res-txt EQ 'DELETE NEXT RECORD'.
02010 * The record should be deleted for the same reason as the last item
02020 DELETE it_final.
02030 CONTINUE.
02040 ELSE.
02050 wa_final-bldat = wa_res-bldat.
02060 ENDIF.
02070 ENDIF.
If I run the jobs for which I got the dump individually, one after another, the jobs execute successfully.
Please help me in this regard.
Can I use field-symbols in this case to avoid the dump?
Thanks in advance.
TajHI
It seems the memory available to a single work process was not enough for the job to continue.
Could you try implementing the same logic by converting Report2 into a function module and calling it asynchronously:
CALL FUNCTION func STARTING NEW TASK task
That way, instead of creating different variants, you could just loop over the FM with different values.
If this does not seem feasible, restrict the amount of data you select.
Revert with your findings! -
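The underlying point of the reply above is language-independent: fetch and process the data in bounded packages (as the ABAP PACKAGE SIZE addition does) and release each block after use, instead of accumulating everything in one internal table. A minimal sketch of that pattern in Java; the names `fetchPackage` and the data source are hypothetical, not from this thread:

```java
import java.util.ArrayList;
import java.util.List;

public class PackagedFetch {
    static final int PACKAGE_SIZE = 3; // analogous to c_blksiz in the ABAP extract

    // Hypothetical data source: pretend this reads the next block of rows.
    static List<Integer> fetchPackage(int offset, int size, int total) {
        List<Integer> block = new ArrayList<>();
        for (int i = offset; i < Math.min(offset + size, total); i++) {
            block.add(i);
        }
        return block;
    }

    public static void main(String[] args) {
        int total = 10, processed = 0;
        for (int offset = 0; offset < total; offset += PACKAGE_SIZE) {
            List<Integer> block = fetchPackage(offset, PACKAGE_SIZE, total);
            processed += block.size();   // process the block...
            block.clear();               // ...then release it, keeping peak memory bounded
        }
        System.out.println(processed);   // prints 10
    }
}
```

Peak memory here is one package, not the whole result set, which is exactly why the ABAP dump occurs at APPEND LINES (where the packages are re-accumulated) rather than at the SELECT itself.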
JDK 1.6 on Solaris. Multiple java processes and thread freezes
Hi, we've come across a really weird behavior on the Solaris JVM, reported by a customer of ours.
Our server application consists of multiple threads. Normally we see them all running within a single Java process, and all is fine.
At some point in time, and only on Solaris 10, the main Java process seems to start a second Java process. This is not our code trying to execute some other application or command; it's the JVM itself forking a new copy of itself. I assumed this was some JVM behaviour on Solaris that uses multiple processes when the number of threads exceeds 128, but at the time of the spawn fewer than 90 threads were running.
In any case, once this second process starts, some of the threads of the application (incidentally, they're the first threads created by the application at startup, in the first threadgroup) stop working. Our application dumps a list of all threads in the system every ten minutes, and even when they're not working, the threads are still there. Our logs also show that when the second process starts, these threads were not in the running state. They had just completed their operations and were sleeping in their thread pool, in a wait() call. Once the second process starts, jobs for these threads just queue up, and the wait() does not return, even after another thread has done a notify() to inform them of the new jobs.
Even more interesting, when the customer manually kills -9 the second process, without doing anything in our application, all threads that were 'frozen' start working again, immediately. This (and the fact that this never happens on other OSes) makes us think that this is some sort of problem (or misconfiguration) specific to the Solaris JVM, and not our application.
The customer initially reported this with JDK 1.5.0_12; we told them to upgrade to the latest JDK 1.6 update 6, but the problem remains. There are no special JVM switches in use (apart from -Xms32m -Xmx256m). We're really at a dead end in diagnosing this problem, as it clearly seems to be outside our app. Any suggestions?

Actually, we've discovered that that's not really what was going on. I still believe there's a bug in the JVM, but the fork was happening because our Java code execs a command-line tool once a minute. After hours of this, we get a rogue child process with this stack (which is the thread where we fork that command-line tool once a minute):
JVM version is 1.5.0_08-b03
Thread t@38: (state = IN_NATIVE)
- java.lang.UNIXProcess.forkAndExec(byte[], byte[], int, byte[], int, byte[], boolean, java.io.FileDescriptor, java.io.FileDescriptor, java.io.FileDescriptor) @bci=168980456 (Interpreted frame)
- java.lang.UNIXProcess.forkAndExec(byte[], byte[], int, byte[], int, byte[], boolean, java.io.FileDescriptor, java.io.FileDescriptor, java.io.FileDescriptor) @bci=0 (Interpreted frame)
- java.lang.UNIXProcess.<init>(byte[], byte[], int, byte[], int, byte[], boolean) @bci=62, line=53 (Interpreted frame)
- java.lang.ProcessImpl.start(java.lang.String[], java.util.Map, java.lang.String, boolean) @bci=182, line=65 (Interpreted frame)
- java.lang.ProcessBuilder.start() @bci=112, line=451 (Interpreted frame)
- java.lang.Runtime.exec(java.lang.String[], java.lang.String[], java.io.File) @bci=16, line=591 (Interpreted frame)
- java.lang.Runtime.exec(java.lang.String, java.lang.String[], java.io.File) @bci=69, line=429 (Interpreted frame)
- java.lang.Runtime.exec(java.lang.String) @bci=4, line=326 (Interpreted frame)
- java.lang.Thread.run() @bci=11, line=595 (Interpreted frame)

There are also several dozen other threads, all with the same stack:
Thread t@32: (state = BLOCKED)
Error occurred during stack walking:
sun.jvm.hotspot.debugger.DebuggerException: can't map thread id to thread handle!
at sun.jvm.hotspot.debugger.proc.ProcDebuggerLocal.getThreadIntegerRegisterSet0(Native Method)
at sun.jvm.hotspot.debugger.proc.ProcDebuggerLocal.getThreadIntegerRegisterSet(ProcDebuggerLocal.java:364)
at sun.jvm.hotspot.debugger.proc.sparc.ProcSPARCThread.getContext(ProcSPARCThread.java:35)
at sun.jvm.hotspot.runtime.solaris_sparc.SolarisSPARCJavaThreadPDAccess.getCurrentFrameGuess(SolarisSPARCJavaThreadPDAccess.java:108)
at sun.jvm.hotspot.runtime.JavaThread.getCurrentFrameGuess(JavaThread.java:252)
at sun.jvm.hotspot.runtime.JavaThread.getLastJavaVFrameDbg(JavaThread.java:211)
at sun.jvm.hotspot.tools.StackTrace.run(StackTrace.java:50)
at sun.jvm.hotspot.tools.JStack.run(JStack.java:41)
at sun.jvm.hotspot.tools.Tool.start(Tool.java:204)
at sun.jvm.hotspot.tools.JStack.main(JStack.java:58)

I'm pretty sure this is because the fork part of UNIXProcess.forkAndExec uses the Solaris fork1 system call: the Java context in the child still thinks all those threads exist, whereas the actual threads don't exist in that process.
It seems to me that something is broken in the native code of UNIXProcess.forkAndExec; it did the fork, but not the exec, and this exec thread just sits there forever. And of course it's still holding all the file descriptors of the original process, which means that if we decide to restart our process, we can't reopen our sockets for listening or do whatever else we want.
There is another possibility, which I can't completely rule out: this child process just happened to be the one being forked when the parent process called Runtime.halt(), which is how the Java process exits. That is, we decided to exit halfway through a Runtime.exec() and got this child process stuck. But I don't think that's what happens: from the data we collected, we see the same child process created at some point in time, and it never goes away.
Yes, I realize that my JVM is very old, but I cannot find any bug fixes in the release notes that claim to fix something like this. And since this only happens once every day or two, I'm reluctant to just throw a new JVM at this--although I'm sure I will shortly.
Has anyone else seen anything like this? -
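For reference, the pattern under discussion, exec'ing an external tool once a minute, looks roughly like the sketch below. This is a generic ProcessBuilder example (the command is a placeholder, not the actual tool from this thread); the points that matter here are draining the child's output so it can never block on a full pipe, and reaping it with waitFor() so no child process is left behind:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class PeriodicExec {
    // Run one command, drain its output, and reap it.
    // Returns the exit code, or -1 if the command could not be run.
    static int runOnce(String... cmd) {
        try {
            ProcessBuilder pb = new ProcessBuilder(cmd);
            pb.redirectErrorStream(true);        // merge stderr into stdout
            Process p = pb.start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                while (r.readLine() != null) { } // drain; returns null once the child exits
            }
            return p.waitFor();                  // reap the child; avoids leftover processes
        } catch (Exception e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        // Placeholder command; the real tool name from the thread is irrelevant here.
        System.out.println("exit=" + runOnce("echo", "hello"));
    }
}
```

None of this prevents a JVM-level fork1 bug like the one described above, but it does rule out the common application-side causes of stuck children (undrained pipes, unreaped processes) when diagnosing.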
How to create multiple pdf files from multiple batch processing
I have several file folders of jpeg images that I need to convert to separate multi-page pdf files. To give a visual explanation:
Folder 1 (contains 100 jpeg image files) converted to File 1 pdf (100 pages; 1 file)
Folder 2 (contains 100 jpeg image files) converted to File 2 pdf (100 pages; 1 file)
and so on.
I know I can convert each folder's content as a batch process, but I can only figure out how to do so one folder at a time. Is it at all possible to convert multiple folders (containing jpegs) to multiple pdf files? Put differently, does anyone know how to process a batch of folders into multiple (corresponding) pdf files?
Many thanks.

There are two approaches to do this:
- First convert all JPG files to PDF files using a "blank" batch process. Then combine all PDF files in the same folder using another batch process and a folder-level script (to do the actual combining). I have developed this tool in the past and it's available here:
http://try67.blogspot.com/2010/10/acrobat-batch-combine-all-files-in.html
- The other option is to do everything in a single process, but that requires an application outside of Acrobat, which I'm currently developing.
If you're interested in any of these tools, feel free to contact me personally.
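Outside Acrobat, the per-folder half of the job, grouping each folder's images so that one folder maps to one output PDF, is straightforward in any language. Here is a sketch in Java using only the standard library; the actual JPEG-to-PDF conversion would still need a PDF library (such as Apache PDFBox) or one of the Acrobat batch tools above, so `convertFolder` below is a hypothetical hook, not a real API:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FolderToPdf {
    // Collect the JPEG files of one folder, in name order: one folder -> one PDF.
    static List<File> jpegsIn(File folder) {
        File[] files = folder.listFiles((dir, name) ->
                name.toLowerCase().endsWith(".jpg")
                || name.toLowerCase().endsWith(".jpeg"));
        List<File> result = new ArrayList<>();
        if (files != null) {          // null when the folder does not exist
            Arrays.sort(files);       // page order = file name order
            result.addAll(Arrays.asList(files));
        }
        return result;
    }

    public static void main(String[] args) {
        File root = new File(args.length > 0 ? args[0] : ".");
        File[] folders = root.listFiles(File::isDirectory);
        if (folders == null) return;
        for (File folder : folders) {
            List<File> pages = jpegsIn(folder);
            // convertFolder(pages, new File(folder.getName() + ".pdf")); // hypothetical: plug in PDFBox etc.
            System.out.println(folder.getName() + " -> " + pages.size() + " pages");
        }
    }
}
```

With the grouping done this way, each folder's file list can be fed to whatever converter is available, which is essentially what the two Acrobat-based approaches above automate.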