Split Etherchannel Clusters

I am looking for documentation on how to set up a series of "split Etherchannel" connections. I believe that this can be done with 802.3ad (LACP), but I need some specific documentation on the subject.
In a split LACP environment, an Etherchannel group is divided between two core switches and those core switches coordinate L2 communication via an Inter-Switch Trunk. I need to be able to "layer" this configuration to create a "cluster of clusters".
See the attached diagram.
I need to understand the proper Cisco terminology for this structure, and I need a reference to documentation on how to set it up using 6500 switches as the "core".
Thanks.

LACP is the IEEE-standardized counterpart of PAgP (Cisco's proprietary Port Aggregation Protocol).
As with PAgP, LACP cannot be configured to create an EtherChannel bundle of links divided across multiple switches.
An LACP channel must be made up of ports with the same parameters within a single chassis; you cannot create an EtherChannel with 2 ports from SwitchA bundled with 2 ports from SwitchB to another endpoint (Switch3).
See this link for more info on EtherChannel configuration:
http://www.cisco.com/en/US/products/hw/switches/ps708/products_configuration_guide_chapter09186a008007e6d3.html
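For contrast, a standard single-chassis LACP bundle is simple to set up. Below is a minimal sketch, assuming two hypothetical ports (Gi1/1 and Gi1/2) on one 6500 trunking toward the far end; both members must have identical parameters (speed, duplex, trunk mode) or they will not bundle:

interface range GigabitEthernet1/1 - 2
 switchport
 switchport mode trunk
 channel-protocol lacp
 channel-group 10 mode active

The matching Port-channel10 interface is created automatically and inherits the port settings.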

Similar Messages

  • Teaming servers to multiple switches

    I am looking for a configuration guide / example that shows how to connect a server with multiple NICs to 2 different switches for failover. The servers could be Windows or Unix based. Thank you.

    Actually, on the 3750s, if stacked at least, I believe it is now possible to channel those ports.
    I'm not sure if that carries over to the 2970s or not.
    The "other company" called this "Split Multi-Link Trunking". I'm not sure if Cisco calls it Split Etherchannel or not, but I'm 99% sure that was one of the benefits of the stacked 3750s for uplinks, and there's no reason it wouldn't apply to servers too, as long as they are capable of EtherChannels.
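    A minimal sketch of what that cross-stack bundle might look like on a stacked 3750, assuming hypothetical ports Gi1/0/10 and Gi2/0/10 (one per stack member) facing the server; note that early 3750 IOS releases supported cross-stack EtherChannel only with mode "on", so check the release notes before relying on LACP:
    interface range GigabitEthernet1/0/10 , GigabitEthernet2/0/10
     switchport mode access
     channel-group 5 mode on
    Because the stack behaves as a single logical switch, the bundle still terminates on one "chassis" from the protocol's point of view, which is why this works where two independent switches cannot.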
    ...N

  • CRS and 10g Real Application Clusters

    Product: ORACLE SERVER
    Date written: 2004-11-30
    CRS AND 10G REAL APPLICATION CLUSTERS
    ===================================
    PURPOSE
    The purpose of this document is to provide additional information about CRS (Cluster Ready
    Services) in 10g Real Application Clusters.
    Explanation
    1. CRS AND 10G REAL APPLICATION CLUSTERS
    CRS (Cluster Ready Services) is a new feature of 10g Real Application Clusters that
    provides a standardized cluster interface on all platforms, as well as new high-availability
    services that did not exist in previous releases.
    2. CRS KEY FACTS
    Before installing CRS and 10g RAC, there are some things to know about CRS and 10g RAC:
    - CRS must be installed and running before 10g RAC can be installed.
    - CRS can run on top of the clusterware supplied by a hardware vendor (e.g. Sun Cluster,
    HP Serviceguard, IBM HACMP, TruCluster, Veritas Cluster, Fujitsu Primecluster,
    etc.) or without any vendor clusterware. Vendor clusterware was required up
    through 9i RAC but is optional with 10g RAC.
    - The CRS HOME and ORACLE_HOME must be installed in different directories.
    - Before installing CRS, a shared directory or device must be set up to hold the
    voting file and the OCR (Oracle Configuration Repository) file. The voting file should be
    at least 20MB in size, and the OCR file at least 100MB.
    - To install CRS and RAC, the following network interfaces must be configured:
    - Public Interface
    - Private Interface
    - Virtual (Public) Interface
    For more related information, see <Bulletin No: 22345>.
    - After the CRS installation, running root.sh starts the CRS services. If CRS does not
    start properly, see Note 240001.1.
    - Only one set of CRS daemons can run per RAC node.
    - On Unix systems, the CRS services are registered as 'respawn' entries in /etc/inittab.
    - If there is a network split (loss of communication between nodes), one or more nodes
    may reboot in order to prevent data corruption.
    - The supported way to start the CRS services is to boot the machine.
    - The supported way to stop the services is to shut down the machine or run the
    "init.crs stop" command.
    - Killing the CRS daemons is not supported, and should only be done when the installed
    CRS is being removed (see Bulletin No: 22343), because it can leave the flag files in an
    inconsistent state.
    - For system maintenance, take the OS to single user mode.
    Once the service stack is running, the related daemon processes can be checked with ps -ef:
    [rac1]/u01/home/beta> ps -ef | grep crs
    oracle 1363 999 0 11:23:21 ? 0:00 /u01/crs_home/bin/evmlogger.bin -o /u01
    oracle 999 1 0 11:21:39 ? 0:01 /u01/crs_home/bin/evmd.bin
    root 1003 1 0 11:21:39 ? 0:01 /u01/crs_home/bin/crsd.bin
    oracle 1002 1 0 11:21:39 ? 0:01 /u01/crs_home/bin/ocssd.bin
    3. CRS DAEMON FUNCTIONALITY
    The following is a short description of each of the CRS daemon processes:
    CRSD:
    - Engine for HA operations
    - Manages 'application resources'
    - Starts, stops, and fails over 'application resources'
    - Spawns separate 'actions' to start, stop, and check application resources
    - Maintains configuration profiles in the OCR (Oracle Configuration Repository)
    - Stores the current known state in the OCR
    - Runs as root
    - Is restarted automatically on failure
    OCSSD:
    - OCSSD is part of RAC and is also present in single-instance configurations with ASM
    - Provides access to node membership
    - Provides group services
    - Provides basic cluster locking
    - Integrates with the vendor clusterware when it is present
    - Can also run without vendor clusterware
    - Runs as the oracle account
    - A failure-induced exit causes the node to reboot
    --- the reboot is intended to prevent data corruption in a split-brain situation.
    EVMD:
    - Generates events when certain things occur
    - Spawns evmlogger as a child process
    - Evmlogger spawns child processes on demand
    - Scans the callout directory and invokes callouts
    - Runs as the oracle account
    - Is restarted automatically after a failure-induced exit
    4. CRS LOG DIRECTORIES
    When tracking down the cause of a CRS problem, it is important to look at the directories
    under the CRS home.
    $ORA_CRS_HOME/crs/log - this directory contains traces for the CRS resources, recording
    the joining, leaving, restarting, and relocating activity identified by CRS.
    $ORA_CRS_HOME/crs/init - any core dumps from the crsd.bin daemon are written here.
    $ORA_CRS_HOME/css/log - the css logs record every action, such as reconfigurations,
    missed checkins, and connects and disconnects from the client css listener. In some
    cases the logger writes messages of type (auth.crit); these appear when a reboot is
    initiated by Oracle, and can be used to determine exactly when the reboot occurred.
    $ORA_CRS_HOME/css/init - mainly holds core dump files from ocssd, and also records the
    pid of the css daemon, whose termination is treated as fatal. If css restarts abnormally,
    a core file is written as core.<pid>.
    $ORA_CRS_HOME/evm/log - log files for the evm and evmlogger daemons. Not used as often
    for debugging as the CRS and CSS directories.
    $ORA_CRS_HOME/evm/init - pid and lock files for EVM. Core files from EVM are also written
    to this directory. For debugging, see Note 1812.1.
    $ORA_CRS_HOME/srvm/log - log files for the OCR.
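    For example, when chasing an unexplained node reboot, a couple of quick checks against these directories can help. A hedged sketch, assuming $ORA_CRS_HOME is set and the default locations described above:
    # find the (auth.crit) messages that mark an Oracle-initiated reboot
    grep -i "auth.crit" $ORA_CRS_HOME/css/log/*
    # look for core files left behind by abnormal ocssd restarts
    ls -l $ORA_CRS_HOME/css/init/core.*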
    5. STATUS OF CRS RESOURCES
    After RAC is installed and the RAC root.sh is run, VIPCA (the Virtual IP Configuration
    Assistant) is started. All CRS resources can be checked with the crs_stat command. Example:
    cd $ORA_CRS_HOME/bin
    ./crs_stat
    NAME=ora.rac1.gsd
    TYPE=application
    TARGET=ONLINE
    STATE=ONLINE
    NAME=ora.rac1.oem
    TYPE=application
    TARGET=ONLINE
    STATE=ONLINE
    NAME=ora.rac1.ons
    TYPE=application
    TARGET=ONLINE
    STATE=ONLINE
    NAME=ora.rac1.vip
    TYPE=application
    TARGET=ONLINE
    STATE=ONLINE
    NAME=ora.rac2.gsd
    TYPE=application
    TARGET=ONLINE
    STATE=ONLINE
    NAME=ora.rac2.oem
    TYPE=application
    TARGET=ONLINE
    STATE=ONLINE
    NAME=ora.rac2.ons
    TYPE=application
    TARGET=ONLINE
    STATE=ONLINE
    NAME=ora.rac2.vip
    TYPE=application
    TARGET=ONLINE
    STATE=ONLINE
    A script can also be used to check the CRS resources in a more readable format.
    The following is an example shell script:
    --------------------------- Begin Shell Script -------------------------------
    #!/usr/bin/ksh
    # Sample 10g CRS resource status query script
    # Description:
    # - Returns formatted version of crs_stat -t, in tabular
    # format, with the complete rsc names and filtering keywords
    # - The argument, $RSC_KEY, is optional and if passed to the script, will
    # limit the output to HA resources whose names match $RSC_KEY.
    # Requirements:
    # - $ORA_CRS_HOME should be set in your environment
    RSC_KEY=$1
    QSTAT=-u
    AWK=/usr/xpg4/bin/awk # if not available use /usr/bin/awk
    # Table header:
    echo ""
    $AWK \
    'BEGIN {printf "%-45s %-10s %-18s\n", "HA Resource", "Target", "State";
              printf "%-45s %-10s %-18s\n", "-----------", "------", "-----";}'
    # Table body:
    $ORA_CRS_HOME/bin/crs_stat $QSTAT | $AWK \
    'BEGIN { FS="="; state = 0; }
    $1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
    state == 0 {next;}
    $1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
    $1~/STATE/ && state == 2 {appstate = $2; state=3;}
    state == 3 {printf "%-45s %-10s %-18s\n", appname, apptarget, appstate; state=0;}'
    --------------------------- End Shell Script -------------------------------
    Sample output:
    [opcbsol1]/u01/home/usupport> ./crsstat
    HA Resource Target State
    ora.V10SN.V10SN1.inst ONLINE ONLINE on opcbsol1
    ora.V10SN.V10SN2.inst ONLINE ONLINE on opcbsol2
    ora.V10SN.db ONLINE ONLINE on opcbsol2
    ora.opcbsol1.ASM1.asm ONLINE ONLINE on opcbsol1
    ora.opcbsol1.LISTENER_OPCBSOL1.lsnr ONLINE ONLINE on opcbsol1
    ora.opcbsol1.gsd ONLINE ONLINE on opcbsol1
    ora.opcbsol1.ons ONLINE ONLINE on opcbsol1
    ora.opcbsol1.vip ONLINE ONLINE on opcbsol1
    ora.opcbsol2.ASM2.asm ONLINE ONLINE on opcbsol2
    ora.opcbsol2.LISTENER_OPCBSOL2.lsnr ONLINE ONLINE on opcbsol2
    ora.opcbsol2.gsd ONLINE ONLINE on opcbsol2
    ora.opcbsol2.ons ONLINE ONLINE on opcbsol2
    ora.opcbsol2.vip ONLINE ONLINE on opcbsol2
    6. MANAGING CRS RESOURCES
    Resources are managed with the srvctl command. The following are examples of the command syntax.
    1) Status of CRS resources
    srvctl status database -d <database-name> [-f] [-v] [-S <level>]
    srvctl status instance -d <database-name> -i <instance-name>[,<instance-name-list>]
    [-f] [-v] [-S <level>]
    srvctl status service -d <database-name> -s <service-name>[,<service-name-list>]
    [-f] [-v] [-S <level>]
    srvctl status nodeapps [-n <node-name>]
    srvctl status asm -n <node_name>
    Examples:
    Status of the database, all instances, and all services:
    srvctl status database -d ORACLE -v
    Status of named instances and their current services:
    srvctl status instance -d ORACLE -i RAC01,RAC02 -v
    Status of a named service:
    srvctl status service -d ORACLE -s ERP -v
    Status of all nodes supporting database applications:
    srvctl status nodeapps
    2) Starting CRS resources
    srvctl start database -d <database-name> [-o < start-options>]
    [-c <connect-string> | -q]
    srvctl start instance -d <database-name> -i <instance-name>
    [,<instance-name-list>] [-o <start-options>] [-c <connect-string> | -q]
    srvctl start service -d <database-name> [-s <service-name>[,<service-name-list>]]
    [-i <instance-name>] [-o <start-options>] [-c <connect-string> | -q]
    srvctl start nodeapps -n <node-name>
    srvctl start asm -n <node_name> [-i <asm_inst_name>] [-o <start_options>]
    Examples:
    Start the database with all enabled instances:
    srvctl start database -d ORACLE
    Start named instances:
    srvctl start instance -d ORACLE -i RAC03,RAC04
    Start a named service; dependent instances are started as needed:
    srvctl start service -d ORACLE -s CRM
    Start a service on a named instance:
    srvctl start service -d ORACLE -s CRM -i RAC04
    Start node applications:
    srvctl start nodeapps -n myclust-4
    3) Stopping CRS resources
    srvctl stop database -d <database-name> [-o <stop-options>]
    [-c <connect-string> | -q]
    srvctl stop instance -d <database-name> -i <instance-name> [,<instance-name-list>]
    [-o <stop-options>][-c <connect-string> | -q]
    srvctl stop service -d <database-name> [-s <service-name>[,<service-name-list>]]
    [-i <instance-name>][-c <connect-string> | -q] [-f]
    srvctl stop nodeapps -n <node-name>
    srvctl stop asm -n <node_name> [-i <asm_inst_name>] [-o <stop_options>]
    Examples:
    Stop the database, all instances, and all services:
    srvctl stop database -d ORACLE
    Stop named instances, first relocating all existing services:
    srvctl stop instance -d ORACLE -i RAC03,RAC04
    Stop a service:
    srvctl stop service -d ORACLE -s CRM
    Stop a service on a named instance:
    srvctl stop service -d ORACLE -s CRM -i RAC04
    Stop node applications; note that instances and services also stop:
    srvctl stop nodeapps -n myclust-4
    4) Adding CRS resources
    srvctl add database -d <name> -o <oracle_home> [-m <domain_name>] [-p <spfile>]
    [-A <name|ip>/netmask] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}]
    [-s <start_options>] [-n <db_name>]
    srvctl add instance -d <name> -i <inst_name> -n <node_name>
    srvctl add service -d <name> -s <service_name> -r <preferred_list>
    [-a <available_list>] [-P <TAF_policy>] [-u]
    srvctl add nodeapps -n <node_name> -o <oracle_home>
    [-A <name|ip>/netmask[/if1[|if2|...]]]
    srvctl add asm -n <node_name> -i <asm_inst_name> -o <oracle_home>
    OPTIONS:
    -A vip range, node, and database address specification. The format of the
    address string is:
    [<logical host name>]/<VIP address>/<net mask>[/<host interface1[ |
    host interface2 |..]>] [,] [<logical host name>]/<VIP address>/<net mask>
    [<host interface1[ | host interface2 |..]>]
    -a for services, list of available instances, this list cannot include
    preferred instances
    -m domain name with the format “us.mydomain.com”
    -n node name that will support one or more instances
    -o $ORACLE_HOME to locate Oracle binaries
    -P for services, TAF preconnect policy - NONE, PRECONNECT
    -r for services, list of preferred instances, this list cannot include
    available instances.
    -s spfile name
    -u updates the preferred or available list for the service to support the
    specified instance. Only one instance may be specified with the -u
    switch. Instances that already support the service should not be
    included.
    Examples:
    Add a new node:
    srvctl add nodeapps -n myclust-1 -o $ORACLE_HOME -A 139.184.201.1/255.255.255.0/hme0
    Add a new database:
    srvctl add database -d ORACLE -o $ORACLE_HOME
    Add named instances to an existing database:
    srvctl add instance -d ORACLE -i RAC01 -n myclust-1
    srvctl add instance -d ORACLE -i RAC02 -n myclust-2
    srvctl add instance -d ORACLE -i RAC03 -n myclust-3
    Add a service to an existing database with preferred instances (-r) and available
    instances (-a), using basic failover for the available instances:
    srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04
    Add a service to an existing database with preferred instances in list one and available
    instances in list two, using preconnect for the available instances:
    srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04 -P PRECONNECT
    5) Removing CRS resources
    srvctl remove database -d <database-name>
    srvctl remove instance -d <database-name> [-i <instance-name>]
    srvctl remove service -d <database-name> -s <service-name> [-i <instance-name>]
    srvctl remove nodeapps -n <node-name>
    Examples:
    Remove the applications for a database:
    srvctl remove database -d ORACLE
    Remove the applications for named instances of an existing database:
    srvctl remove instance -d ORACLE -i RAC03
    srvctl remove instance -d ORACLE -i RAC04
    Remove a service:
    srvctl remove service -d ORACLE -s STD_BATCH
    Remove a service from an instance:
    srvctl remove service -d ORACLE -s STD_BATCH -i RAC03,RAC04
    Remove all node applications from a node:
    srvctl remove nodeapps -n myclust-4
    6) Modifying CRS resources
    srvctl modify database -d <name> [-n <db_name>] [-o <ohome>] [-m <domain>]
    [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}]
    [-s <start_options>]
    srvctl modify instance -d <database-name> -i <instance-name> -n <node-name>
    srvctl modify instance -d <name> -i <inst_name> {-s <asm_inst_name> | -r}
    srvctl modify service -d <database-name> -s <service_name> -i <instance-name>
    -t <instance-name> [-f]
    srvctl modify service -d <database-name> -s <service_name> -i <instance-name>
    -r [-f]
    srvctl modify nodeapps -n <node-name> [-A <address-description> ] [-x]
    OPTIONS:
    -i <instance-name> -t <instance-name> the instance name (-i) is replaced by the
    instance name (-t)
    -i <instance-name> -r the named instance is modified to be a preferred instance
    -A address-list for VIP application, at node level
    -s <asm_inst_name> add or remove ASM dependency
    Examples:
    Modify an instance to run on another node:
    srvctl modify instance -d ORACLE -i RAC01 -n myclust-4
    Modify a service to run on another node:
    srvctl modify service -d ORACLE -s HOT_BATCH -i RAC01 -t RAC02
    Modify an instance to be a preferred instance for a service:
    srvctl modify service -d ORACLE -s HOT_BATCH -i RAC02 -r
    7) Relocating services
    srvctl relocate service -d <database-name> -s <service-name> [-i <instance-name>] -t <instance-name> [-f]
    Examples:
    Relocate a service from one instance to another:
    srvctl relocate service -d ORACLE -s CRM -i RAC04 -t RAC01
    8) Enabling CRS resources (the resource may be running or stopped when this is used)
    srvctl enable database -d <database-name>
    srvctl enable instance -d <database-name> -i <instance-name> [,<instance-name-list>]
    srvctl enable service -d <database-name> -s <service-name>[,<service-name-list>] [-i <instance-name>]
    Examples:
    Enable a database:
    srvctl enable database -d ORACLE
    Enable named instances:
    srvctl enable instance -d ORACLE -i RAC01,RAC02
    Enable services:
    srvctl enable service -d ORACLE -s ERP,CRM
    Enable a service on a named instance:
    srvctl enable service -d ORACLE -s CRM -i RAC03
    9) Disabling CRS resources (the resource must be stopped when this is used)
    srvctl disable database -d <database-name>
    srvctl disable instance -d <database-name> -i <instance-name> [,<instance-name-list>]
    srvctl disable service -d <database-name> -s <service-name>[,<service-name-list>] [-i <instance-name>]
    Examples:
    Disable a database globally:
    srvctl disable database -d ORACLE
    Disable named instances:
    srvctl disable instance -d ORACLE -i RAC01,RAC02
    Disable services globally:
    srvctl disable service -d ORACLE -s ERP,CRM
    Disable a service on named instances:
    srvctl disable service -d ORACLE -s CRM -i RAC03,RAC04
    For more information, see the Oracle 10g Real Application Clusters Administrator's Guide - Appendix B.
    Reference Documents
    <Note:259301.1> CRS and 10g Real Application Clusters

    To download the logos registered by Oracle for the OCP, OCA, OCE, & OCM certifications,
    you have to ask the Oracle Exam Support Team to provide the link and the credentials (User Name/Password) for the download.
    Mail the following e-mail address: [email protected]
    Note 1: To get a response on time, you may need to provide your Prometric information, i.e. exam passed date, Prometric ID, full name, and corresponding address.
    Note 2: You should never disclose the credentials (User Name/Password) to anybody, as Oracle issues them especially for you.
    Regards,
    Sabdar Syed,
    http://sabdarsyed.blogspot.com

  • Error on Data Flow Task MSSQL 2012 Clustered "Description: The version of Lookup is not compatible with this version of the DataFlow. "

    We have an SSIS package that runs on clustered MSSQL 2012 Enterprise nodes and is failing.  We use a job to execute the package.
    Environmental information:
    Product - Microsoft SQL Server Enterprise: Core-based Licensing (64-bit)
    Operating System - Microsoft Windows NT 6.1 (7601)
    Platform - NT x64
    Version - MSSQL Version 11.0.3349.0
    The package is set to 32-bit.  All permissions verified.  It runs in lower environments with the same MSSQL version.  All environments are clustered.  In the failing environment, all nodes are at the same service pack.  I have not verified whether all
    nodes in the failing environment have SSIS installed.  Data access is installed.  We have other, simpler packages that run in this environment, just not this one.  Time to ask the community for help!
    Error:
    Source: Data Flow Task - Data Flow Task (SSIS.Pipeline)     Description: The version of Lookup is not compatible with this version of the DataFlow.  End Error  Error:  Code: 0xC0048020    
    Description: Component "Conditional Split, clsid {7F88F654-4E20-4D14-84F4-AF9C925D3087}" could not be created and returned error code 0x80070005 "Access is denied.". Make sure that the component is registered correctly.  End Error 
    Description: The component is missing, not registered, not upgradeable, or missing required interfaces. The contact information for this component is "Conditional Split;Microsoft Corporation; Microsoft SQL Server; (C) Microsoft Corporation; All Rights
    Reserved; http://www.microsoft.com/sql/support;0".  End Error 
    (Left out shop-specific information.  This is the first error in the errors returned by the job history for this package.)
    Thanks in advance.

    Hi DeveloperMax,
    According to your description, the error occurs when you execute the package with an Agent job on clustered MSSQL 2012 Enterprise nodes.
    As per my understanding, this issue can be caused by using SQL Server Agent to schedule a SQL Server Integration Services package in a 64-bit environment while the SSIS package references 32-bit DLLs or 32-bit drivers which are available
    only in 32-bit versions, so the job fails.
    To fix this issue, we should use the 32-bit version of the DTExec.exe utility, or tell the 64-bit SQL Server Agent to run the package in 32-bit mode. To run a package in 32-bit mode from a 64-bit version of SQL Server Agent, go to the Job Step dialog box, then
    select “32 bit runtime” on the Advanced tab.
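    As a reference, invoking the 32-bit utility directly from a command prompt looks roughly like this; a sketch with hypothetical paths (on a default SQL Server 2012 install, the 32-bit DTExec lives under "Program Files (x86)"):
    "C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\DTExec.exe" /F "D:\Packages\MyPackage.dtsx"
    If the package succeeds this way but fails under the Agent job, the 64-bit/32-bit mismatch is almost certainly the cause.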
    Besides, we should make sure that SQL Server Integration Services is installed on the failing environment.
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • How do I stop my itunes albums in cover flow from being split up?

    It gets a little confusing: for every possible view of the albums except Cover Flow, all of the tracks in one album are clustered together; it works fine.
    When I search for a term in Cover Flow that would bring up only one album as a result, it comes up as just one album; this is great.
    But when I search for a common album title term, or if I view every album together, the different albums randomly get split up.
    Example: If I search "machine" in Cover Flow, the albums "Extraordinary Machine" and "Lungs" (by Florence + the Machine) each get split up 5 times (5 of the same picture, with about 2-5 songs under each picture). But when I search "Florence" I get all the tracks from "Lungs" under one picture (because that is the only result).
    Question: How do I stop this from happening? It's not really affecting anything; it's just inconvenient and frustrating.

    No, an external drive will not always be mounted. If the drive goes to sleep, it can unmount, or it may unmount for other reasons. As Chris said, confirm that the drive is mounted and active before launching iTunes.
    Regards.

  • Clustering across subnets.

    I am trying to split our CF multiserver cluster into separate
    subnets, and I am having great difficulty making it work. I am
    hoping that someone else here has done this and can be of
    assistance.
    Background:
    2 boxes both running mx7 multiserver EE.
    2 Jrun servers each box, one cluster.
    IIS on both boxes pointed to cluster. Browsing successful
    from both webservers.
    System was 100% operational pre-split. Testing confirms all 4
    servers processing CF code in round-robin fashion.
    Changes:
    Moved one server to a separate subnet, separated only by a SOHO
    router.
    SOHO router configured in router mode, "inside" server set as
    DMZ host for 100% port forwarding (no rules).
    JRun servers deleted and re-built with IP addresses instead of
    DNS names.
    Edited security.properties to update jrun.trusted.hosts (see the sketch at the end of this post).
    Results of move:
    host 1 - the cluster controller operates as expected, minus the 2
    remote JRun servers;
    the 2 remote servers show as stopped.
    Can access the JRun admin and the JRun servers' CF admin pages (8301,
    8302) on the remote host.
    host 2 (remote) - wsconfig can see only the default "cfusion"
    server;
    wsconfig is unable to see the cluster or the other JRun servers on host
    1.
    IIS will process pages if connected to host 1 - "cfusion".
    Can access the JRun admin and the JRun servers' CF admin pages (8301,
    8302) on the remote host.
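    For reference, the jrun.trusted.hosts change described above would look something like this in each instance's security.properties; a sketch with hypothetical addresses, listing every JRun host that must participate in the cluster:
    # security.properties on each JRun instance (hypothetical IPs)
    jrun.trusted.hosts=192.168.1.10,192.168.2.10
    A host on the other subnet that is missing from this list could have its cluster traffic rejected, which might explain the "servers show as stopped" symptom.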

    here is the gospel on clusters, although I'm not sure it will answer your question:
    http://download.oracle.com/docs/cd/E10291_01/core.1013/e10294/toc.htm
    What do you mean by subnet? Generally all the machines are on the same subnet, as they sit side by side.
    cheers
    James

  • Windows Clustering Networks question...

    Hi all;
    This is my scenario:
    I have installed Windows Server 2012 on two servers and then enabled the Windows Clustering feature. The shared storage is based on Fibre Channel technology. Each server has 4 NICs, and I have split them as follows:
    One NIC for remote management of the servers, in the range 172.16.105.0/24.
    One NIC dedicated to heartbeat communication.
    Two NICs bundled together with the NIC Teaming feature of the operating system (a sketch of the teaming command appears below).
    But as you can see in the following figure, there are 4 Cluster Network links:
    Is it normal?
    Thanks
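    For reference, the bundled pair described above could be created with the in-box NIC Teaming cmdlet; a sketch, assuming hypothetical adapter names:
    New-NetLbfoTeam -Name "ClusterTeam" -TeamMembers "Ethernet 3","Ethernet 4" -TeamingMode SwitchIndependent
    The teamed pair surfaces as a single adapter, so the cluster counts it as one network rather than two.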

    Hi,
    Just want to confirm the current situation.
    Please feel free to let us know if you need further assistance.
    Regards.

  • How To Split Parameter Values

    Hello,
    I have the following code, which generates a lot of values stored in Parameter Values. How do I split this to see the individual values? Is it an array?
    Nevica
    Attachments:
    Capture.JPG ‏25 KB

    Hi nevica,
    The data type which is produced from this function is called a Cluster. Clusters are essentially a data type which allows the concatenation of multiple entries of data, quite like an array; however they allow entries of different data types rather than just one.
    For example, the error terminals you have wired together contain a special type of Cluster called an Error Cluster. Error Clusters contain three different data types:
    Status - a Boolean which indicates whether an error has occurred.
    Code - a numeric data type which contains a number representing the specific error that occurred.
    Source - a string data type which indicates where in your VI the error occurred.
    These different data types are bundled together because they all logically represent different aspects of an Error.
    In your VI, the parameter that is returned is of type cluster. If you right-click on the cluster indicator and navigate to Create » Cluster, Class and Variant » Unbundle, you will create a small node which can be used to view the individual contents of a cluster separately. You can stretch out the Unbundle function to view more entries of the parameter set. To put these items back into their original cluster once you've manipulated them, use the Bundle function.
    Best regards,
    Alex Thomas, University of Manchester School of EEE LabVIEW Ambassador (CLAD)

  • Array of clusters does not update properly

    Hello...
    I built a VI to split a series of 10000 values into many segments, each one overlapping by 50%. The product of all the values in each segment is calculated, and by means of visual inspection and LED selection (green means the segment will be used, red means the segment will not be used), I can select which segments will be used for the final sum. For example: if I have 20 segments but only segments 4, 8, and 15 are green, only the products of segments 4, 8, and 15 will be summed to generate the final result. The final result should be refreshed if the Length of Segment is changed or if any LED is switched to green or red, but the update does not occur properly. Sometimes all the segments are shown properly, but if I change one LED or the Length of Segment control, the given result is wrong. When the Length of Segment control is changed, all the LEDs need to be adjusted to the green state.
    I don't know where the problem is in my code, but I think it is something related to the cluster constant that is wired to the first Unbundle by Name, since this constant was created from the cluster before it was inserted into the array, and not from the array. If I create the constant from the array, the code returns an error.
    To avoid race conditions I used a flat sequence.
    Thanks in advance for attention and help.
    Dan07
    Message Edited by dan07 on 07-26-2008 08:31 AM
    Attachments:
    Array of clusters.vi ‏43 KB

    I spotted one problematic area:
    where you have an uninitialized shift register, so for every event the value from the last iteration is used.
    What's the meaning of this piece of code?
    I think you divide the length of the array by half the length of the segment?
    Ton
    Message Edited by TonP on 07-26-2008 05:34 PM
    Free Code Capture Tool! Version 2.1.3 with comments, web-upload, back-save and snippets!
    Nederlandse LabVIEW user groep www.lvug.nl
    My LabVIEW Ideas
    LabVIEW, programming like it should be!
    Attachments:
    Array of clusters_BD.png ‏2 KB
    Array of clusters_BD2.png ‏1 KB

  • How to split presentation level and business level using two ATG instances

    Hello All!
    We are investigating the possibility of splitting the ATG presentation level (web store with JSP pages and other presentation components) and business level (ATG components such as Pricing, Catalog, etc.). The first idea we have is simply to start two instances of ATG: one instance will serve client requests (presentation level) and communicate with another ATG instance (business level) where all the ATG components are situated.
    The main problem is the Nucleus container, which is used for accessing all ATG components, and we don't know the right way to point to a Nucleus container that is situated on a remote ATG instance. Right now we have two ideas for establishing communication between two ATG instances:
    - try to replace the local Nucleus container with a remote one using RMI;
    - do not replace the Nucleus container, but implement some custom filter that can redirect all servlet requests to another ATG instance. In that case we will have two Nucleus containers.
    What do you think about all this? Do you have any other solutions for how to solve the task? Maybe we missed something? Can we deploy a cluster of ATG instances that communicate with each other?
    Thanks in advance.
    Andrey.
    Edited by: 945758 on Jul 11, 2012 7:00 AM

    Yes, an ATG system can have multiple nodes grouped in one or more clusters managed by a load balancer.
    If the services you are talking about are inherently ATG services, like login, add to cart, or checkout, then it's better to implement them with ATG.
    ATG provides and supports both REST and SOAP based web services and allows you to expose any ATG component as a service, thus making it available outside the ATG space.
    To manage load better, you can split your servers into page-serving servers and services-oriented servers and place them into multiple clusters.
    Though I haven't seen anyone using this kind of topology, so I am not sure whether there is any challenge in setting it up.

  • Splitting data from waveform output from niScope Multi Fetch Cluster

    I am trying to split the data of a 1-D array of clusters.  I am using a PXI-5421 function generator and a PXI-5122 digitizer.  The niScope Multi Fetch Cluster.vi returns the output data of the waveform as a 1-D array of clusters of 3 elements.  This array contains information from channels 0 and 1.  I am trying to extract the waveform component information from each of the channels so I can operate on them and re-assemble two waveforms.  Can someone point me in the right direction?  I can't seem to come up with the right tool from the array or cluster tools.  Thanks.
    Jeff

    You just use an Index Array and an Unbundle by Name or Unbundle.
    Message Edited by Dennis Knutson on 04-30-2009 10:41 PM
    Attachments:
    Index and Unbundle.PNG ‏4 KB

  • Sliding Window Table Partitioning Problems with RANGE RIGHT, SPLIT, MERGE using Multiple File Groups

    There is misleading information in two system views (sys.data_spaces & sys.destination_data_spaces) about the physical location of data after a partitioning MERGE and before an INDEX REBUILD operation on a partitioned table. In SQL Server 2012 SP1 CU6,
    the script below (SQLCMD mode; set the DataDrive & LogDrive variables for the runtime environment) will create a test database with file groups and files to support a partitioned table. The partition function and scheme spread the test data across
    4 file groups; an empty partition, file group, and file are maintained at the start and end of the range. A problem occurs after the SWITCH and MERGE RANGE operations: the views sys.data_spaces & sys.destination_data_spaces show the logical, not the physical,
    location of data.
    --=================================================================================
    -- PartitionLabSetup_RangeRight.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE RIGHT FOR VALUES
    (
    0,
    15,
    30,
    45,
    60
    );
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    (
    $(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6
    );
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitoning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    (
    DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0)
    )
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    --:SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (15);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber  
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The T-SQL code below illustrates the problem.
    -- PartitionLab_RangeRight
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3 ;
    -- ERROR
    --Msg 5042, Level 16, State 1, Line 1
    --The file 'TestTable_f3 ' cannot be removed because it is not empty.
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f2 ;
    -- Works surprisingly!!
    use workspace;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    --Msg 622, Level 16, State 3, Line 2
    --The filegroup "TestTable_fg2" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added.
    --The statement has been terminated.
    If you run ALTER INDEX REBUILD before trying to remove files from File Group 3, it works. Rerun the database setup script then the code below.
    -- RANGE RIGHT
    -- Rerun PartitionLabSetup_RangeRight.sql before the code below
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3;
    -- Works as expected!!
    The file in File Group 2 appears to contain data but it can be dropped. Although the system views are reporting the data in File Group 2, it still physically resides in File Group 3 and isn’t moved until the index is rebuilt. The RANGE RIGHT function means
    the left file group (File Group 2) is retained when splitting ranges.
    RANGE LEFT would have retained the data in File Group 3 where it already resided, no INDEX REBUILD is necessary to effectively complete the MERGE operation. The script below implements the same partitioning strategy (data distribution between partitions)
    on the test table but uses different boundary definitions and RANGE LEFT.
    --=================================================================================
    -- PartitionLabSetup_RangeLeft.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE LEFT FOR VALUES
    (
    -1,
    14,
    29,
    44,
    59
    );
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    (
    $(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6
    );
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitoning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    (
    DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0)
    )
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    :SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (14);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The data in the File and File Group to be dropped (File Group 2) has already been switched out; File Group 3 contains the data so no index rebuild is needed to move data and complete the MERGE.
    RANGE RIGHT would not be a problem in a ‘Sliding Window’ if the same file group were used for all partitions; when partitions are created and dropped across multiple file groups, it introduces a dependency on full index rebuilds. Larger tables are typically partitioned, and a full index rebuild
    might be an expensive operation. I’m not sure how a RANGE RIGHT partitioning strategy could be implemented, with an ascending partitioning key, using multiple file groups without having to move data. Using a single file group (multiple files) for all partitions
    within a table would avoid physically moving data between file groups; no index rebuild would be necessary to complete a MERGE, and the system views would accurately reflect the physical location of data.
    If a RANGE RIGHT partition function is used, the data is physically in the wrong file group after the MERGE (assuming a typical ascending partitioning key), and the 'Data Spaces' system views might be misleading. Thanks to Manuj and Chris for a lot of help
    investigating this.
    NOTE 10/03/2014 - The solution
    The solution is so easy it's embarrassing: I was using the wrong boundary points for the MERGE (both RANGE LEFT & RANGE RIGHT) to get rid of historic data.
    -- Wrong Boundary Point Range Right
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (15);
    -- Wrong Boundary Point Range Left
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (14);
    -- Correct Boundary Points for MERGE
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (0); -- or -1 for RANGE LEFT
    The empty, switched-out partition (on File Group 2) is then MERGED with the empty partition maintained at the start of the range, and no data movement is necessary. I retract the suggestion that a problem exists with RANGE RIGHT Sliding Windows using multiple
    file groups, and apologize :-)

    Hi Paul Brewer,
    Thanks for your post, and glad to hear that the issue is resolved. It is kind of you to post a reply sharing your solution; that way, other community members can benefit from it.
    Regards.
    Sofiya Li
    TechNet Community Support

  • Multiple stored procedure run across clusters

    Hi there,
    Currently we are running a single Oracle 11g instance. All our stored procedures run on this database, either directly from within the database (DBMS_JOB) or called externally from front-end Java web apps.
    The question is that we now have to cater for scenarios where many more avenues (other Java apps) will be calling the stored procedures, and we want to provision for such scenarios without impacting the performance or the agreed-upon throughput back to the calling application.
    One option I read about was clustering (RAC) and how this can be configured at the database level to transparently cater to a huge volume of stored procedure calls without requiring changes on the calling entities. The front-end Java apps would still refer to a single database, but Oracle RAC would handle the heavy load seamlessly and transparently across the cluster.
    We don't want to split the execution of one single stored procedure run into multiple processes; for performance we have that part covered by optimizing the queries in the stored procedures.
    But we want to provision for a scenario where multiple apps can spawn calls to the stored procedures simultaneously, and the database is efficient about handling these parallel invocations and does not get overwhelmed under the pressure of a large volume of runs, causing degradation of runtime/response time.
    Please provide your thoughts.

    If those stored procedures are making DML calls against more-or-less the same data, you will introduce contention.
    In a single instance (non-RAC), contention is within the SGA (buffer_cache, shared_pool, latches, enqueues). If the application is not scalable within a single instance, you will likely make performance worse by running it in RAC.
    So you must first evaluate how it works (or would work) in a single-instance database -- find out if it is scalable merely by adding hardware. If it is scalable but your current hardware is limited, you can consider RAC. If it is not scalable and you have serialisation or contention, performance would be worse in RAC.
    Hemant K Chitale

  • Layer 3 etherchannel - load 0x00

    folks
    i have a quick question i'm hoping you can help with.
    i have a layer 3 etherchannel between 2 * 3750G stacks; both stacks use gi1/0/25 and gi2/0/25 for the etherchannel, and both are set as 'on'.
    i have traffic passing over the link, but i get the following output from a 'show etherchannel detail'.
    there are no bits being recorded and the load is 0x00.
    can someone help to explain this please?
    thanks to anyone taking the time to reply
    Group: 10
    Group state = L3
    Ports: 2   Maxports = 8
    Port-channels: 1 Max Port-channels = 1
    Protocol:    -
    Minimum Links: 0
                    Ports in the group:
    Port: Gi1/0/25
    Port state    = Up Mstr In-Bndl
    Channel group = 10          Mode = On              Gcchange = -
    Port-channel  = Po10        GC   =   -             Pseudo port-channel = Po10
    Port index    = 0           Load = 0x00            Protocol =    -
    Age of the port in the current state: 10d:01h:43m:57s
    Port: Gi2/0/25
    Port state    = Up Mstr In-Bndl
    Channel group = 10          Mode = On              Gcchange = -
    Port-channel  = Po10        GC   =   -             Pseudo port-channel = Po10
    Port index    = 0           Load = 0x00            Protocol =    -
    Age of the port in the current state: 10d:01h:43m:54s
                    Port-channels in the group:
    Port-channel: Po10
    Age of the Port-channel   = 10d:01h:45m:24s
    Logical slot/port   = 10/10          Number of ports = 2
    GC                  = 0x00000000      HotStandBy port = null
    Passive port list   = Gi1/0/25 Gi2/0/25
    Port state          = Port-channel L3-Ag Ag-Inuse
    Protocol            =    -
    Port security       = Disabled
    Ports in the Port-channel:
    Index   Load   Port     EC state        No of bits
    ------+------+------+------------------+-----------
      0     00     Gi1/0/25 On                 0
      0     00     Gi2/0/25 On                 0
    Time since last port bundled:    10d:01h:43m:54s    Gi1/0/25

    Hi,
    I guess this is just a hardware limitation on the 3k platforms (ASICs).
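    For what it's worth, the load-balancing hash in use can at least be inspected, and changed globally; a hedged sketch of the relevant commands on a 3750:
    show etherchannel load-balance
    configure terminal
     port-channel load-balance src-dst-ip
    As the reply above suggests, on this platform the per-port Load value is computed in the switching ASICs, so the software display can show 0x00 even while traffic is flowing across both links.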

  • How to create Groups and Group Leaders in Clusters.

    Hi,
    As we know, in unicast there is one-to-one communication, and there are groups to control thread
    management. How are the Groups and the Group Leaders created?
    Regards,
    Vardhan.

    Unicast clustering uses TCP/IP sockets to pass cluster messages between members. To avoid requiring each cluster member
    to have connectivity to every other cluster member, WebLogic Server uses a group leader strategy whereby the oldest member
    of the group (in other words, the server that was started first) is designated the group leader. All members of the cluster
    connect to the group leader so that the group leader acts as the relay point for cluster messages between members.
    If the group leader goes down, the next oldest member becomes the new group leader.
    As you can imagine, the group leader strategy works well for small groups but becomes less efficient as the number of members
    of the group grows large. As such, WebLogic Server uses a multiple group leader strategy where it limits the number of members
    in a group to 10. If the cluster is larger than 10 members, WebLogic Server splits it into two or more groups, each with its own
    group leader. The group leaders themselves are all interconnected to minimize the number of hops that a cluster message must
    traverse to reach all cluster members.
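    As an illustration only (not WebLogic code), here is a small sketch of the grouping rule described above: members sorted oldest-first are split into groups of at most 10, and the first (oldest) member of each group acts as its leader.
    import java.util.ArrayList;
    import java.util.List;

    public class GroupLeaderSketch {
        static final int MAX_GROUP_SIZE = 10; // the group size limit described above

        // 'members' is assumed to be sorted by start time, oldest first
        static List<List<String>> splitIntoGroups(List<String> members) {
            List<List<String>> groups = new ArrayList<>();
            for (int i = 0; i < members.size(); i += MAX_GROUP_SIZE) {
                groups.add(members.subList(i, Math.min(i + MAX_GROUP_SIZE, members.size())));
            }
            return groups;
        }

        public static void main(String[] args) {
            List<String> members = new ArrayList<>();
            for (int i = 1; i <= 23; i++) members.add("server" + i);
            // the oldest member of each group (index 0) is the group leader
            for (List<String> group : splitIntoGroups(members)) {
                System.out.println("leader=" + group.get(0) + " members=" + group);
            }
        }
    }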
