Multiple Archive databases

I can split out multiple archive databases just like I split out multiple primary databases on Exchange 2013, correct?
Thank you.

Hi,
Do you mean you want to split one archive database into multiple archive databases? If so, you can move some archive mailboxes from one archive database to another archive database using the New-MoveRequest command.
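For example, a minimal sketch (the mailbox address and database names are placeholders for illustration):
# Move only a user's archive mailbox (not the primary mailbox) to another archive database.
New-MoveRequest -Identity "user@contoso.com" -ArchiveOnly -ArchiveTargetDatabase "Archive-DB02"
# Move every archive currently homed on Archive-DB01 over to Archive-DB02.
Get-Mailbox -ResultSize Unlimited | Where-Object { $_.ArchiveDatabase -like "Archive-DB01" } |
    New-MoveRequest -ArchiveOnly -ArchiveTargetDatabase "Archive-DB02"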
If I have misunderstood your concern, please feel free to let me know. For more details, please refer to the New-MoveRequest documentation.
Best regards,
Belinda Ma
TechNet Community Support

Similar Messages

  • Multiple Stickies databases? Open a few Stickies only per application?

    I love Stickies, but I hate having so many open at the same time. It's always the case that I need only a few stickies at a time, depending on the other apps I'm working on. For example, I might have a few "tips" stickies I created for myself when I am Photoshopping, or a couple other brainstorm stickies when I am trying to do some writing. I might have a dozen subsets of stickies that need to be opened up only when the relevant application is also open. I will never need all my stickies open all the time.
    My questions are whether it's possible to:
    a) create multiple Stickies databases which each contain only a small set of Stickie notes; and/or
    b) to have only a few Sticky notes opened up automatically depending on the app I am using. (Is there some sort of Applescript that could help with this?)
    As it is now my screen is far too cluttered, and that discourages any use of Stickies at all.
    Perhaps I've been ignoring some obvious feature, but in fact, I couldn't find this issue mentioned anywhere else.
    Any ideas? Thanks!

    Hi,
    For question 1, you can use PowerShell; see:
    http://blogs.msdn.com/b/rslaten/archive/2013/07/21/get-all-overrides-in-a-scom-2012-management-group-using-powershell.aspx
    How to get overrides that are in a particular management pack for a group
    http://social.technet.microsoft.com/Forums/systemcenter/en-US/fb1893ff-a88f-4758-81f0-5a3c3f702ba2/how-to-get-overrides-that-are-in-a-particular-management-pack-for-a-group?forum=operationsmanagerextensibility
    For question 2, you can use the PowerShell cmdlet Get-SCOMNotificationSubscription; below is a similar thread:
    http://social.technet.microsoft.com/Forums/systemcenter/en-US/28d9595a-6de3-4a12-a0fa-b794f60a5a48/scom-how-to-get-a-report-or-file-with-all-subscription-criteria?forum=operationsmanagerreporting
    Regards, Yan Li

  • Multiple standby database about FAL_SERVER and FAL_CLIENT parameter in DG

    Hi,
    I am little bit confused about FAL_SERVER and FAL_CLIENT parameter in Data Guard.
    We are planning to configure multiple standby databases in a Data Guard environment. Let's assume I have a production DB named 'PROD' and multiple standbys named standby1, standby2, standby3.
    My environment is:
    DB Version: 11.2.0.1
    OS Version: OE5LU6
    So in this case, how do I specify the above Net service names in the spfile on the production server and also on the other standby servers?
    Kindly suggest.
    Regards
    Athish

    FAL_CLIENT is the Oracle TNS service of the local system and FAL_SERVER is the Oracle TNS service of the remote system.
    If you have three standby databases for the primary, then from the primary you must have three values in FAL_SERVER so that archives will be sent to all the destinations.
    From the standby database, FAL_SERVER should be the Oracle TNS service from which you receive the archived log files.
    Note: if you have a RAC primary, list each service separated by commas, as in the example below.
    From a standby to a RAC primary: FAL_SERVER='PROD1','PROD2'
    These parameters are dynamic, so you can alter them at any time.
    HTH.
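    For an 11.2 environment with one primary (PROD) and three standbys, a minimal sketch of the dynamic settings might look like this (the TNS aliases STANDBY1, STANDBY2 and STANDBY3 are assumed to exist in tnsnames.ora on every host):
    -- On the primary (PROD): list all standby services in FAL_SERVER.
    ALTER SYSTEM SET FAL_SERVER='STANDBY1','STANDBY2','STANDBY3' SCOPE=BOTH;
    ALTER SYSTEM SET FAL_CLIENT='PROD' SCOPE=BOTH;
    -- On standby1: point FAL_SERVER at the database it fetches missing archived logs from.
    ALTER SYSTEM SET FAL_SERVER='PROD' SCOPE=BOTH;
    ALTER SYSTEM SET FAL_CLIENT='STANDBY1' SCOPE=BOTH;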

  • Load Data from a table on one server's database, to the same table structure in multiple server databases

    Hi,
    I have a situation where i have to load data from one server/database table to multiple servers/databases.
    Example:
    I need to load data from dbo.TABLE_A  (on Server: Server_A & Database: Database_A)  to the same table on the list of server databases like
    Server: Server_B , Database: Database_B
    Server: Server_C , Database: Database_C
    Server: Server_D , Database: Database_D
    Server: Server_E , Database: Database_E
    Server: Server_F , Database: Database_F
    Server: Server_G , Database: Database_G
    Server: Server_H , Database: Database_H
    so on and so forth on 250 such server database combinations.
    The table structure is the same on all the servers.
    If I make the source or destination dynamic, it throws an error while mapping.
    I cannot get linked server permissions, and the SQL Server configurations approach doesn't work either.
    Please suggest on how to load data from one source to multiple server/databases.
    Thank you.

    I just need to transfer one table's data. I have to use a query to pick only the most recent data, so I use something like: select A, B, C, D from dbo.table where ETL_TIMESTAMP > (the max(etltimestamp) already present in the destination on the other server). There are no foreign key relationships and the data should not be truncated; it just has to append the new records.
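    For illustration, a minimal T-SQL sketch of that incremental pattern, run once per destination (for example from an SSIS Foreach loop over the 250 connection strings); the variable name is made up:
    -- Step 1: run against the DESTINATION connection to find its high-water mark.
    DECLARE @MaxLoaded DATETIME =
        (SELECT ISNULL(MAX(ETL_TIMESTAMP), '19000101') FROM dbo.TABLE_A);
    -- Step 2: run against the SOURCE connection with @MaxLoaded passed in as a parameter,
    -- and append the result set to the destination table (no truncate).
    SELECT A, B, C, D, ETL_TIMESTAMP
    FROM dbo.TABLE_A
    WHERE ETL_TIMESTAMP > @MaxLoaded;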

  • MULTIPLE ARCHIVER PROCESSES FAQ ( ORACLE 8I NEW FEATURE )

    Product: ORACLE SERVER
    Date written: 2002-04-19
    MULTIPLE ARCHIVER PROCESSES FAQ ( ORACLE 8I NEW FEATURE )
    =========================================================
    PURPOSE
    1. What LOG_ARCHIVE_MAX_PROCESSES does
    2. Whether LOG_ARCHIVE_MAX_PROCESSES can be changed dynamically
    3. How the number of archiver processes changes dynamically
    4. How to determine which archiver process archived an online log
    Explanation
    1. What LOG_ARCHIVE_MAX_PROCESSES does
    Oracle 8i supports multiple archive destinations, and multiple archiver processes can be used to reduce the load on a single archiver. The LOG_ARCHIVE_MAX_PROCESSES parameter specifies the maximum number of ARCH processes to start.
    If LOG_ARCHIVE_START is TRUE, the LOG_ARCHIVE_MAX_PROCESSES value specified in the init file is read at instance startup. If LOG_ARCHIVE_START is TRUE but LOG_ARCHIVE_MAX_PROCESSES is not set, only the arc0 process is started. If LOG_ARCHIVE_MAX_PROCESSES is set explicitly (range 1 - 10), additional processes such as arc0, arc1 are started.
    However, there is no need to set this parameter explicitly to anything other than the default of 1, because the system determines how many ARCn processes are needed and spawns the additional ARCn processes itself.
    2. Whether LOG_ARCHIVE_MAX_PROCESSES can be changed dynamically
    The value can be changed dynamically with ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=n, where n must be between 1 and 10. However, if LOG_ARCHIVE_START is FALSE, the command has no effect.
    3. How the number of archiver processes changes dynamically
    If LOG_ARCHIVE_START is TRUE, Oracle starts one archiver process (ARC0) at startup. When needed, the parameter can be changed with the ALTER SYSTEM command to start the specified number of archive processes.
    Example)
    SVRMGRL>alter system set LOG_ARCHIVE_MAX_PROCESSES=4;
    When this command is executed, ARC1, ARC2 and ARC3 are added through the following steps:
    1) The shadow process asks the primary archive process to increase the number of processes.
    2) The archiver process calls the kcrrschd function (kcrrschd: schedule multiple arch processes).
    3) It checks whether the requested number of processes is smaller than the number of archiver processes currently in use. If the new value is smaller, or if archiving is disabled, it returns without doing anything. Otherwise it checks whether the value exceeds the supported maximum of 10 and, if so, caps the number of processes at 10.
    4) The scheduler function acquires a latch on the kcrrxs{} structure, which represents the ARCH activation status.
    5) The scheduler function loops over the requested number of processes and records the scheduled state in the KCRRSCHED structure.
    6) It then releases the latch and calls the kcrrsmp function (kcrrsmp: start multiple arch processes).
    7) The kcrrsmp function acquires a latch on the kcrrxs{} structure (ARCH startup status) to serialize execution, so that even if the function is invoked concurrently it runs one call at a time.
    8) It schedules the archiver processes that are in the pending state and cleans up any dead processes.
    9) The function then loops over the requested number of processes and changes the KCRRSCHED state to KCRRSTART, putting the archiver processes into a ready-to-start state.
    10) It releases the latch and starts the ARCH processes.
    11) kcrrsmp reacquires the latch. Each archiver process is notified to activate itself; after activating itself, each archiver process records the event in the alert file.
    12) The calling function sleeps until every archiver process has activated itself and updated the contents of the kcrrxs structure.
    13) Finally, when the current number of archiver processes matches the requested number, the latch is released and the loop breaks (a break in the C sense).
    The alert.log reflects the above as follows:
    sql: prodding the archiver
    ALTER SYSTEM SET log_archive_max_processes=4;
    Tue Jul 13 02:15:14 1999
    ARC0: changing ARC1 KCRRNOARCH->KCRRSCHED
    ARC0: changing ARC2 KCRRNOARCH->KCRRSCHED
    ARC0: changing ARC3 KCRRNOARCH->KCRRSCHED
    ARC0: STARTING ARCH PROCESSES
    ARC0: changing ARC1 KCRRSCHED->KCRRSTART
    ARC0: changing ARC2 KCRRSCHED->KCRRSTART
    ARC0: changing ARC3 KCRRSCHED->KCRRSTART
    ARC0: invoking ARC1
    Tue Jul 13 02:15:15 1999
    ARC1: changing ARC1 KCRRSTART->KCRRACTIVE
    Tue Jul 13 02:15:15 1999
    ARC0: Initializing ARC1
    ARC0: ARC1 invoked
    ARC0: invoking ARC2
    ARC1 started with pid=10
    ARC1: Archival started
    Tue Jul 13 02:15:15 1999
    ARC2: changing ARC2 KCRRSTART->KCRRACTIVE
    Tue Jul 13 02:15:15 1999
    ARC0: Initializing ARC2
    ARC2 and ARC3 follow the same procedure.
    Interestingly, the number of processes can also be reduced. For example, if the following command is executed:
    SVRMGRL>alter system set LOG_ARCHIVE_MAX_PROCESSES=2;
    the following steps are carried out in order:
    1) The shadow process contacts an archiver process that is currently active.
    2) The archiver process calls the kcrrxmp function (kcrrxmp: stop multiple arch processes).
    3) The kcrrxmp function acquires a latch on the kcrrxs{} structure (ARCH startup status) so that other processes cannot modify the structure at the same time.
    4) It checks whether the newly requested number of archiver processes is smaller than the number currently in use.
    5) If it is smaller, it finds, among the archiver processes, those that were scheduled most recently and therefore will not come up for archival work again soon.
    6) Each of those processes is asked to change state from KCRRACTIVE to KCRRSHUTDN.
    7) Once the state has changed, the OS is told to terminate the process and the state is changed to KCRRDEAD. The related status information is cleaned up and the kcrrxs{} structure is updated.
    Steps 6) and 7) are repeated until the number of archiver processes has dropped to the requested value.
    8) The kcrrxs structure is updated with the new number of archiver processes.
    9) The latch is released.
    The state changes are reflected in the alert.log file as follows:
    sql: prodding the archiver
    Tue Jul 13 00:34:20 1999
    ARC3: changing ARC0 KCRRACTIVE->KCRRSHUTDN
    ARC3: sending ARC0 shutdown message
    ARC3: changing ARC1 KCRRACTIVE->KCRRSHUTDN
    ARC3: sending ARC1 shutdown message
    ARC3: received prod
    Tue Jul 13 00:34:20 1999
    ALTER SYSTEM SET log_archive_max_processes=2;
    Tue Jul 13 00:34:20 1999
    ARCH shutting down
    ARC0: Archival stopped
    ARC0: changing ARC0 KCRRSHUTDN->KCRRDEAD
    Tue Jul 13 00:34:20 1999
    ARCH shutting down
    ARC1: Archival stopped
    ARC1: changing ARC1 KCRRSHUTDN->KCRRDEAD
    4. How to determine which archiver process archived an online log
    Archiver processes are scheduled to perform archiving in a round-robin fashion. When multiple archiver processes are activated according to the load, several scenarios are possible. Because Oracle 8i supports multiple archive log destinations as well as duplexing of archive logs, it needs to record which process archived each log file.
    In Oracle 8i, every successful archival operation records the archiver process name in a trace file.
    The following is the relevant part of such a trace file:
    Instance name: v815
    Redo thread mounted by this instance: 1
    Oracle process number: 12
    Unix process pid: 3658, image: oracle@oracle8i (ARC3)
    *** Session ID:(12.1) 1999.07.13.02.15.15.000
    *** 1999.07.13.02.15.15.000
    *** 1999.07.13.02.33.06.000
    ARC3: Begin archiving log# 1 seq# 38 thrd# 1
    ARC3: VALIDATE
    ARC3: PREPARE
    ARC3: INITIALIZE
    ARC3: SPOOL
    ARC3: Creating archive destination 1 : '/bigdisk/oracle8i/dbs/arch/1_38.dbf'
    ARC3: Archiving block 1 count 1 to : '/bigdisk/oracle8i/dbs/arch/1_38.dbf'
    ARC3: Closing archive destination 1 : /bigdisk/oracle8i/dbs/arch/1_38.dbf
    ARC3: FINISH
    ARC3: Archival success destination 1 : '/bigdisk/oracle8i/dbs/arch/1_38.dbf'
    ARC3: COMPLETE, all destinations archived
    ARC3: ArchivedLog entry added: /bigdisk/oracle8i/dbs/arch/1_38.dbf
    ARC3: ARCHIVED
    *** 1999.07.13.02.33.06.000
    ARC3: Completed archiving log# 1 seq# 38 thrd# 1
    From this information, you can tell that archive process 3 archived log sequence 38 to destination 1: /bigdisk/oracle8i/dbs/arch.
    Reference Document
    <Note:73163.1>
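    In addition to the trace files described above, the V$ARCHIVE_PROCESSES view (available from 8i onward, I believe) shows the state of each ARCn process; a minimal query sketch:
    -- List archiver processes that are not stopped, with the last log sequence each handled.
    SELECT process, status, log_sequence, state
    FROM v$archive_processes
    WHERE status <> 'STOPPED';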


  • 1 SQL instances with several archive Databases using all AWE RAM memory of server

    Hello,
    I just migrated my accounting system to a new SQL Server deployment of the software.
    We just purchased the expensive SQL Server Enterprise edition to accommodate it.
    I have some lower-priority replicated databases on the same instance that we occasionally query. I also imported a 70 GB old archive DB that we use on very rare occasions. We are not as concerned about performance on these databases
    as we are about the accounting DB on the same instance.
    The max memory was set to unlimited on that instance. As soon as I put in this monster 70 GB archive database, the AWE memory usage used up my full 30 GB of RAM.
    Is there a way to set the memory usage so the archive databases do not get loaded into the AWE but still the critical accounting system DB on the same instance is taken care of?
    Or do I have to shell out another $3-6k for a separate instance? SQL Server Express has a 4 GB limitation, and one of the backup DBs we don't really care about is 20 GB, replicated from Azure.

    Hi,
    >>70GB archived databases the AWE memory usage used up my full 30GB of RAM.
    How did you check that the archive database is using 30 GB? Did you use sys.dm_os_buffer_descriptors? Does the SQL Server service account have Lock Pages in Memory?
    SQL Server brings pages into memory as they are requested. If you access the archive database heavily, it is bound to take memory, but if you stop accessing it and access your other database, SQL Server will flush out the archive's pages if required.
    SQL Server manages memory dynamically, so I guess you do not need to worry.
    >>Is there a way to set the memory usage so the archive databases do not get loaded into the AWE but still the critical accounting system DB on the same instance is taken care of?
    No, there is no way; the buffer pool is a shared region.
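    If you want to verify where the memory is going, a quick sketch using the sys.dm_os_buffer_descriptors DMV shows how much of the buffer pool each database is currently using:
    -- Buffer pool usage per database, in MB (8 KB pages).
    SELECT DB_NAME(database_id) AS database_name,
           COUNT(*) * 8 / 1024 AS buffer_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY buffer_mb DESC;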
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
    My TechNet Wiki Articles

  • Multiple source databases

    Hello,
    I need to load data into dimensions and cubes from multiple source databases. We have separate transaction databases for each country, but the data must be loaded into a common target schema. At the moment we have created mappings which load data from one source database.
    What is the best way to populate, for example, the customers dimension from the customer transaction tables (located in multiple source databases) if business identifiers can overlap?

    I would break this up into two pieces: the load from the source systems into the staging area, and then the load from the staging area into the target table, which you mentioned was a dimension.
    There is no reason you would have to use views or synonyms if you don't want to. You could approach this with a SET operator, in this case a union, that brings all the sources together into the staging table; however, I would more likely create multiple mappings, one per source system.
    I would create a mapping for each table that you have to pull data from, and then create a separate process flow that controls all the tables being pulled from a particular source database. This allows you to schedule the extractions from the source systems independently of each other in case they need to run at different times. The end result of all of this will be a series of staging tables that are loaded from source tables from all the different locations.
    For each of the singular mappings, I would create a constant that defines the source of the data for that mapping, and load that constant into a column called SOURCE_SYSTEM, or perhaps COUNTRY, as you mentioned. Then this SOURCE_SYSTEM/COUNTRY column + SOURCE_SYSTEM_ID would serve as the natural key.
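    For illustration only, a minimal SQL sketch of one per-country extraction and the combined natural key (the table names, the src_de database link and the 'DE' code are made-up placeholders):
    -- One extraction mapping per source system writes into a shared staging table,
    -- tagging every row with the system it came from.
    INSERT INTO stg_customers (source_system, source_system_id, customer_name, city)
    SELECT 'DE', c.customer_id, c.customer_name, c.city
    FROM customers@src_de c;
    -- (source_system, source_system_id) is then the natural key when loading the customers
    -- dimension, so overlapping customer_ids from different countries cannot collide.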
    You can choose to persist these tables (as an ODS of sorts) or not... that is up to you and your requirements.
    Finally, I would create a single mapping that loads the final target table from the single staging table.
    Let me know if this doesn't make sense.
    Regards,
    Stewart Bryson

  • Multiple tempdb databases

    Is it possible to create multiple tempdb databases in SQL Server 2012 Enterprise Edition (64-bit) on Windows Server 2012 Datacenter Edition (64-bit)?

    Hello,
    There is only one tempdb database per instance, but you can add multiple tempdb data files; a common guideline is roughly 1/4 to 1/2 as many data files as cores/processors, as explained in the following article:
    http://www.sqlskills.com/blogs/paul/a-sql-server-dba-myth-a-day-1230-tempdb-should-always-have-one-data-file-per-processor-core/
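    A minimal sketch of adding a data file to tempdb (the file name, path and sizes are placeholders):
    -- Add a second tempdb data file; repeat with tempdev3, tempdev4, ... as needed.
    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'T:\TempDB\tempdev2.ndf',
              SIZE = 4096MB,
              FILEGROWTH = 512MB);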
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • How to create a archive database?

    Emails which are older than 1 year should be archived into a separate database from the main email database. It is
    an Exchange 2010 environment. Please let me know the procedure.
    Thank you

    Hi There,
    An archive database is the same as a normal database, and there is no special method to create it.
    You just assign users a secondary (archive) mailbox on it and apply retention policies to it.
    Once it is configured, the emails stored in the secondary (archive) mailbox will not be cached by Outlook.
    I recommend you set the archive DB to be excluded from automatic mailbox provisioning using the cmdlet below.
    Set-MailboxDatabase "ArchiveDB" -IsExcludedFromProvisioning $true
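    For completeness, a hedged sketch of the surrounding steps (the server name, paths and mailbox address are placeholders):
    # Create and mount the database that will hold the archives.
    New-MailboxDatabase -Name "ArchiveDB" -Server "MBX01" -EdbFilePath "D:\ArchiveDB\ArchiveDB.edb"
    Mount-Database "ArchiveDB"
    # Enable a personal (online) archive for a user, homed on that database.
    Enable-Mailbox -Identity "user@contoso.com" -Archive -ArchiveDatabase "ArchiveDB"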
    Exchange Blog:
    www.ntweekly.com
    MCSA, MCSE, MCITP:SA, MCITP:EA, MCITP:Enterprise Messaging Administrator 2010,MCTS:Virtualization

  • 9300 synchronization of multiple contact databases...

    I currently use multiple Contacts databases on my 9300, such as "professional" and "personal" databases. The default one is systematically synchronized through PC Suite into my outlook/contact database, but I don't know how to synchronize the other one and make it go to an outlook/contact2 database. Any hint would be welcome. Thanks

    Hi nadjibnet,
    According to your description, you want to use SQL Server replication to sync a SQL Server Express database and a LocalDB database. From the comparison of SQL Server Express and LocalDB, LocalDB is the full SQL Server Express engine, but invoked directly
    from the client provider. It does not support being a subscriber for merge replication. However, you need to sync the databases in both directions, and only merge replication merges incremental data changes that occurred at the Publisher or Subscribers after the initial snapshot
    was created, and detects and resolves any conflicts according to rules you configure.
    So personally, I recommend you synchronize the SQL Server Express and LocalDB databases by using Microsoft Sync Framework.
    Sync Framework includes classes that can be adapted to synchronize between a SQL Server database and any other database that is compatible with ADO.NET. For detailed documentation of the Sync Framework database synchronization components, see
    Synchronizing Databases. For a comparison between Sync Framework and Merge Replication, see
    Synchronizing Databases Overview.
    Regards,
    Sofiya Li
    TechNet Community Support

  • Multiple IQ databases on one host ---- NLS for BW-HANA

    Hi, 
    We are planning to build a Sybase IQ NLS landscape to support BW on HANA landscapes.
    We have thought of installing DEV/QA NLS (IQ) on one server and accordingly have created the file systems on the Linux host. My question is whether we have to install Sybase IQ 16 SP8 PL27 twice on the server to have two separate $SYBASE install directories, from which we can source the environment profiles while launching each of these databases.
    The FS layout is as follows --- All FS's will be owned by sybase (2 sets of FS will be created with SIDs for DEV and QAS)
    /usr/sap/SID/sybase ------------- sybase install dir
    /usr/sap/SID/database -------- catalog DB
    /usr/sap/SID/main -------------- IQ_SYSTEM_TEMP
    /usr/sap/SID/log -- ---- Transaction logs and other system logs  ----- Can the transaction log be stored here for Point in Time Recovery?
    /usr/sap/SID/messages --------- messages
    /usr/sap/SID/html ---------- html query plans
    /usr/sap/SID/sapdata1 to /usr/sap/SID/sapdata4 --- for iq db spaces (user db spaces)
    /usr/sap/SID/temp/temp1 to /usr/sap/SID/temp/temp4 --- for iqtmp spaces  (both the sapdatax and tempx have same sizes)
    Question ---- can the base directory /usr/sap/SID/temp become IQTMP16, and will it access the other file systems (temp1, temp2 and so on) underneath it, or do we have to create just one file system, /usr/sap/SID/temp?
    /usr/sap/SID/tmp ----- path for the -dt option in the config file to store server-dependent files. I am not sure what this -dt stands for and how different it is from the iqtmp FS (/usr/sap/SID/temp).
    Question - In the First Guidance guide, this filesystem is mentioned as (IQ for Sort)... Do /usr/sap/SID/temp (iqtmp) and /usr/sap/SID/tmp (.tmp) have the same sizes?
    Question - Do /usr/sap/SID/sapdata, /usr/sap/SID/temp and /usr/sap/SID/tmp have the same sizes?
    We will install DEV under /usr/sap/DEV/sybase and configure it to use the above file systems. Similarly, we will have filesystems for QAS, but the install will begin from /usr/sap/QAS/sybase.
    Is it a viable way of installing multiple NLS databases on one server?
    Please help.

    already answered here - http://scn.sap.com/thread/3717450

  • Reporting data in the Archive Database

    Environment: 10gR3 StandAlone Enterprise.
    I successfully configured Archiving and I can see data being written to the archiving database. I want to now report on the data present in this database. My reports need to be more detailed than what the Archive Viewer displays.
    1. Is there any document that defines the Archive schema?
    2. Are there any SQLs available that make the proper joins against the tables to present the data?
    For example, one report would list every completed instance and, below each instance, the activities and the participants who completed them.
    thanks

    Any help with archive database SQL is appreciated.
    thanks

  • How to enable access of Archive database mails on Outlook.

    We are using Exchange 2010, and I recently created a new database to use as an archive database, to move messages older than 1 year and make them accessible to users as a folder alongside their email. I was able to apply the "All other folders" retention tag and applied it to a test mailbox. It was successful, and I was able to access those archived mails in a separate folder that showed the archive database for the user. It was visible in OWA, whereas when we configured the email ID in Outlook, it was not showing that archive folder or the mails. Please let me know what other settings I should be looking at to get this working.
    FYI, I tried with Microsoft Outlook Professional Plus 2010 and 2013, but it didn't show.

    Hi Shasti,
    According to your description, I understand that Outlook client cannot display the online archive folder, however it works in OWA.
    If I misunderstand your concern, please do not hesitate to let me know.
    I want to double confirm some points, please help to collect answers for following questions:
    1. Is Outlook deployed in a Terminal Server environment?
    2. What is the version of Outlook?
    The reason I'm asking is that other users have experienced a similar issue, and it may be related to the Outlook license.
    Additional, I find a similar thread about your question, for your reference:
    https://social.technet.microsoft.com/Forums/en-US/224019df-cbf7-471a-94c5-5a2cd44d6c6e/outlook-2010-not-showing-exchange-2010-archives-owa-does?forum=exchangesvrclientslegacy
    “This is permissions related. Give the user FULL access to the mailbox. We migrated from 2003 to 2010, then later introduced archiving. Granting or removing and regranting permissions to the primary user will resolve this issue. Once you login to RDP or
    another local machine, it may take a few seconds to update, but it will populate.”
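    If you want to try the permissions workaround quoted above from the Exchange Management Shell, a hedged sketch (the mailbox and account names are placeholders):
    # Re-grant the primary user Full Access to their own mailbox, as the quoted thread suggests.
    Add-MailboxPermission -Identity "Test User" -User "CONTOSO\testuser" -AccessRights FullAccess
    # Review the result.
    Get-MailboxPermission -Identity "Test User" -User "CONTOSO\testuser"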
    Best Regards,
    Allen Wang

  • ACS SE multiple windows databases

    Hi there
    is it possible to have multiple Windows databases on an ACS SE? The problem is that we need access to two different domains that are not trusted and have no common super domain.
    Thanks a lot and best regards
    Dominic

    Hi,
    We would require a two-way external/transitive trust between the two domains.
    There are two ways to work around the problem:
    1. Install another ACS at the remote site/domain and forward all the
    requests for the users of the remote domain to that ACS.
    2. Configure the partner domain as LDAP on the ACS (at the corporate site); this should not require a domain trust. The only problem is that certain authentication methods will not be supported when using LDAP.
    Here is the complete list of what is supported with LDAP:
    http://www.cisco.com/en/US/docs/net_mgmt/cisco_secure_access_control_server_for_windows/4.1/user/Overvw.html#wp824733
    Hope that helps!
    Regards,
    ~JG
    Do rate helpful posts

  • How to prevent duplicate keys in archive database?

    I am struggling with this problem.
    Background: I'm working on a project where I have to make an archive database. The archive database should get all data of the operational database. It should even save every update ever made, so it literally contains the entire history of the operational database (this is a must and the whole project revolves around this idea). This is solved by using Change Data Capture. After that, the data should go through a staging area and eventually end up in the data warehouse database. I came up with a solution, worked it out in the prototype, and it seemed to be working fine. I had stupidly forgotten to include the foreign keys, so the archive database didn't have the original structure, but of course it should (no wonder it went okay without too much hassle).
    Problem: Because we want to store everything in the archive, there will be duplicate primary keys (for instance, many rows with the same contact_id because a telephone number changes a couple of times). I thought to solve this by adding a new auto-increment primary key that exists purely to make a record unique. But when it comes to foreign keys, it's impossible: you want contact_id to be allowed to be duplicated, and in that case it cannot be a primary key, but a foreign key can only reference a primary key or another unique key, not other normal columns.
    Any advice on this? It's an absolute must to store all changes.

    All of you, thanks for replying, I'm happy you're trying to help me out with this problem. 
    Visakh and Louis, thanks that seems like the solution for this case indeed. Yes, the dimensional design appeals more to me as well.
    I read the articles and watched some tutorials, but I can't quite fit it into the solution that I had.
    More background info: I use CDC to track all the changes made in the operational database, and SSIS (following one of Matt Mason's tutorials, with a lot of alterations to make it fit my project). I have this control flow (don't mind that error, haha):
    (Apparently I cannot add images yet, so here's the link to the screenshot:) http://nl.tinypic.com/r/w0p1u0/8
    Basically I create staging tables in my archive database next to my normal archive tables. I start the CDC control task to get the processing range, and then everything from the operational database (joined with a few CDC columns) is copied to the staging tables. After that the processing range ends, so next time only the rows that haven't been processed before are picked up. Then I do some updates on the staging tables and finally insert everything into the archive tables, after which the staging tables can be truncated. From there the data goes to the staging area for transformations and then finally to the DWH. The reason for having a staging area between the archive and the DWH is that the archive will not only be used as the source for the DWH but also on its own. The DWH will not contain 100% the same stuff as the archive (maybe some transformations, extra columns with calculated fields, plus some columns that don't need to be in the DWH at all). When all the ETL work is done in SSIS, I have to use SSAS to define all the facts, dimensions and cubes.
    Example: So I try to work with SCD type 2. If I understood it correctly (and maybe I didn't): for example, the contact table in the archive should have the surrogate key ID (the auto-increment one). The business key is contact_id, which is unique only in combination with the time-range columns.
    Following Visakh's post, the ID becomes the key that the foreign key will reference. For example:
    Contact table:
    ID: 1, contact_id: 100, name: Glenn, start_time: 2014-01-01, end_time: 2014-08-20
    ID: 2, contact_id: 100, name: Danzig, start_time: 2014-08-20, end_time: NULL
    So the employee changed his name, and the time period tells when the first name was valid.
    Organisation table:
    ID: 1, org_id: 20, contact_ID: 1, start_time: 2014-01-01, end_time: NULL
    (it references ID instead of contact_id, as suggested)
    The employee belongs to an organisation. It references ID 1, which is old data, but this is the last version of the organisation record.
    So then I need a table to link the two:
    organisation_contact table:
    contact_id: 100, org_id: 20
    And then I need another one to join with the surrogate key?
    ID: 1, org_id: 20
    ID: 2, org_id: 20
    (I guess it would make more sense to have org_id in the contact table, but for now it's an example.)
    Problems: I don't quite understand how this works. From the example I saw, you have to have another table (the fact table) to link to the surrogate key. Would this mean I have to have fact and dimension tables in my archive database?
    My intention was actually to have all records of the operational databases (all the updates too) in my archive, and after that create the facts and dimensions in the DWH with SSAS. The example looks like I should do it earlier.
    I don't know how to combine this with the CDC solution. I want to get all the data by using CDC, the way every update gets registered in the accompanying CDC table; the archive then receives the CDC data. But how do I combine this with SCD? I have the surrogate key (ID) in the archive, and then I add the start and end time columns. I need to point all references to the ID, then make the other table to keep track of the contact_id (the original PK) and another key, and finally make another table to track all the current data in the fact.
    Another question: would you recommend the SCD task in SSIS? I read it was not that great if you have many rows to work with. What do you think is the best method to implement it?
    Thanks so much again.
    EDIT: What about slowly changing dimensions type 4? It looks like you don't have to change the references of the foreign key then. Why do you prefer type 2 over type 4?
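    For reference, a minimal T-SQL sketch of the SCD type 2 style archive tables discussed above (the names follow the example and are illustrative only, not the project's actual schema):
    -- History table: surrogate key ID is the primary key, contact_id is the business key,
    -- and the start/end columns record when each version was current (end_time NULL = current).
    CREATE TABLE dbo.contact_archive (
        ID         INT IDENTITY(1,1) PRIMARY KEY,
        contact_id INT NOT NULL,
        name       NVARCHAR(100) NOT NULL,
        start_time DATETIME2 NOT NULL,
        end_time   DATETIME2 NULL
    );
    -- A related table references the surrogate key, so duplicated contact_id values are no problem.
    CREATE TABLE dbo.organisation_archive (
        ID         INT IDENTITY(1,1) PRIMARY KEY,
        org_id     INT NOT NULL,
        contact_ID INT NOT NULL REFERENCES dbo.contact_archive (ID),
        start_time DATETIME2 NOT NULL,
        end_time   DATETIME2 NULL
    );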
