APIC Cluster - Why minimum of 3 controllers?

Hi!
I'm just getting started on learning about the Cisco ACI, and one of the things that struck me was that Cisco has recommended (or mandated?) a minimum of 3 APICs in a cluster. Is this a requirement? If so, why can't we just have 2 controllers in a cluster?
This webpage (http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/unified-fabric/white-paper-c11-730021.html) discusses the number of controllers vs. data loss, but there is no explanation of why 2 controllers can't be used.
Thanks!

Hello 
To understand why three APICs is the recommended minimum, you need to understand how the APICs distribute information among themselves. All ACI configuration and state is held as datasets generated and processed by the Distributed Policy Repository, and the data for those APIC functions is partitioned into logically bounded subsets called shards (similar to database shards). Each shard is kept as three replicas (copies); every APIC holds a replica of every shard, but only one APIC is the master (leader) for a particular shard. This distributes the workload evenly across the cluster of three, load-balancing the processing, and also acts as a failsafe in case an APIC goes down.
Now that the theory is out of the way, imagine one of your three APICs goes down. The remaining two negotiate which of them becomes the master for the shards the failed APIC was in charge of. The workload is then load-balanced across the two, and the cluster becomes fully fit again. Running with only 2 APICs is strongly discouraged because of the split-brain condition: APIC 1 and APIC 2 may both think they are the leader for a shard and cannot reach agreement, so the shard is in contention and the cluster becomes unfit ("data layer partially diverged"). With the cluster in this state you should not make changes in the GUI; I don't remember if it's even allowed.
With only 1 APIC, that controller does all the work and is the leader for all shards, but if it goes down you cannot make any changes at all. The data plane will continue forwarding, but with no APIC there is no way to create new policies or make changes.
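To make the split-brain point concrete, here is a small illustrative Python sketch. It is not ACI code: the single-shard model, the vote rule, and the replica counts are simplified assumptions, but it shows why a leader can still be elected by majority when one of three controllers fails, while a partitioned two-controller cluster cannot agree at all.

# Illustrative only: majority-based leader election for a single shard.
def elect_leader(reachable_replicas, total_replicas):
    # A leader needs a strict majority of all replicas to vote for it.
    if len(reachable_replicas) > total_replicas // 2:
        return min(reachable_replicas)   # deterministic pick among the voters
    return None                          # no quorum: shard stays in contention

# 3-APIC cluster, one APIC down: {1, 2} is still a majority of 3.
print(elect_leader({1, 2}, total_replicas=3))   # -> 1, cluster heals

# 2-APIC cluster partitioned: each side only sees itself, neither has a majority,
# which is the "split brain" / "data layer partially diverged" situation.
print(elect_leader({1}, total_replicas=2))      # -> None
print(elect_leader({2}, total_replicas=2))      # -> None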
Thanks for using the support forums! Hope this helps!

Similar Messages

  • Why we need Custom controllers in Model Applications

    Hi Friends
    In an Adaptive RFC model application we can use both controllers for creating the context structure.
    What is the main aim of using a custom controller?
    Does a custom controller have any special features?
    I read that both controllers have the same functionality and flexibility.
    Can anybody differentiate these two controllers,
    and tell me which one is efficient for connecting model applications?
    Regards
    Narayana

    Hi Narayana,
    There are many situations in which you need custom controllers rather than just the component controller.
    Suppose you have one DC and you want to make some part of this DC public, which may include local context nodes, elements, methods, or model context with specified methods to process some functionality. By interfacing a custom controller with all the exportable methods and context you reach a certain level of security.
    In this case you are distributing your module across multiple custom controllers, which ultimately reduces the complexity of the programming and the code, and often the execution time.
    Whether to create custom controllers is up to the developer, but in certain situations you should go for them. It is always good to have multiple custom controllers when you are using a single DC for different modules.
    Regards,
    Amol

  • JMS in cluster - why is it assymetric?

    We use WebLogic 8.1 in a 2-node cluster (plus 1 admin server).
    On each node there is a JMS server on a migratable target; let's call them 'test1' and 'test2'.
    A distributed queue has 2 destinations, one on each node. The JNDI name of the queue is 'test-dist', so the 2 destinations are test-dist@test1 and test-dist@test2.
    The situation seems to be completely symmetric. If I check the JNDI tree of each node I can see all three of the above JNDI names.
    OK.
    Now I turn off one of the nodes.
    If I turn off node1, then in the JNDI tree of node2 there is:
    test-dist and test-dist@test2
    OK.
    But if I turn off node2 instead, then in the JNDI tree of node1 I can only find
    test-dist@test1
    and 'test-dist' cannot be found, and thus cannot be accessed. Am I doing something wrong? Or isn't it supposed to be completely symmetric? If more details are needed, feel free to ask.
    Thanks for your remarks.

    Hello,
    This does sound odd. If you can see the distributed destination and the other destinations in the JNDI tree of both nodes, it suggests to me that everything is deployed correctly.
    Can you double-check that you targeted your distributed destination to your cluster and selected all the WebLogic server instances within that cluster (the default setting)?
    If the problem persists I would raise a support case.
    Hussein Badakhchani
    www.orbism.com

  • APIC error message

    Hi 
    Today, while preparing the ACI PoC, I ran into a question.
    To come back to it later, I turned off all the ACI components (APICs, leaf switches, spine switches).
    Today I powered everything on again and tried to apply a new policy on the APIC,
    but it could not be applied because of the following message:
    SERVER ERROR
    The policy change was not implemented due to internal communication failure from
    the APIC to other APICs/nodes on the fabric, please retry the request
    This message occurs every time power is reapplied.
    Could you tell me what I should do?
    Thank you

    The error "The policy change was not implemented due to internal communication failure from the APIC to other APICs/nodes on the fabric, please retry the request" most likely means that the APIC cluster is not "Fully Fit".
    Policy updates may NOT execute until the APIC cluster is "FULLY FIT". The APICs may be stuck "Waiting for Cluster Convergence" and will not proceed with policy updates.
    This "Waiting for Cluster Convergence" state can be caused by a process that has crashed or cored. If you discover that your cluster is not "Fully Fit", evaluate the processes on each APIC and try to start the process(es) that crashed or cored.
    To evaluate processes of each APIC:
    Access APIC Admin GUI
    Select SYSTEM -> CONTROLLERS
    Expand CONTROLLERS in the left work pane
    Expand Each APIC
    Select the PROCESSES Folder for each APIC
    Change the OBJECTS PER PAGE from 15 to 50 so that you can see & scroll thru all listed processes
    Look at the PROCESS IDs to see if you see any processes with a PROCESS ID of 0 (Exclude the KERNEL process)
    If you see a PROCESS ID of 0 (that is not the KERNEL process), the associated PROCESS NAME is the process that needs to be STARTED.
    Open a terminal window and SSH to the APIC(s) with the failed process(es).
    Use the CLI command "acidiag start" to try to start the failed process(es).
    For example:
    Starting a failed "dhcpd" process on APIC1
    deadbeef@fab1_apic1:~> acidiag start
    usage: acidiag start [-h]
    {xinetd,mgmt,ae,lldpad,observer,dbgr,idmgr,dhcpd,eventmgr,policymgr,reader,bootmgr,topomgr,nginx,vmmmgr,appliancedirector,scripthandler}
    deadbeef@fab1_apic1:~> acidiag start dhcpd
    {u'dme': {u'output': u'dhcpd start/running, process 4264\n', u'error_code': 0, u'error_string': u''}}
    NOTE: If you try starting the failed processes and the APIC cluster is still "NOT FULLY FIT", and you are experiencing errors and cannot make configuration or policy changes, please open a Service Request with the Cisco TAC.
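    As an alternative to clicking through the GUI, you can also watch the cluster state over the APIC REST API. The following is only a minimal sketch, not an official procedure: it assumes the infraWiNode class exposes per-controller "operSt"/"health" attributes, and the host name and credentials are placeholders you would adapt to your fabric.
    # Minimal sketch (assumption-based): check APIC cluster member health via the REST API.
    import requests

    APIC = "https://apic1.example.com"           # placeholder APIC address
    session = requests.Session()
    session.verify = False                       # lab only; use a proper CA bundle in production

    # Authenticate; the session keeps the returned APIC cookie
    login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
    session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

    # Query the appliance-cluster membership objects (assumed class: infraWiNode)
    resp = session.get(f"{APIC}/api/node/class/infraWiNode.json")
    resp.raise_for_status()

    for obj in resp.json()["imdata"]:
        attrs = obj["infraWiNode"]["attributes"]
        # A healthy cluster reports every member as "fully-fit"
        print(attrs.get("nodeName"), attrs.get("operSt"), attrs.get("health"))
    If any member reports something other than "fully-fit", correlate that with the crashed processes above before retrying the policy change.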
    Before opening a Service Request with the Cisco TAC please gather the following:
    (SSH to an APIC in the CLUSTER and run the following commands)
    1. version
    deadbeef@fab2_apic1:~> version
    node type   node id  node name    version
    controller  1        fab2_apic1   1.0(1k)
    leaf        101      fab2_leaf1   n9000-11.0(1d)
    leaf        102      fab2_leaf2   n9000-11.0(1d)
    leaf        103      fab2_leaf3   n9000-11.0(1d)
    leaf        104      fab2_leaf4   n9000-11.0(1d)
    spine       201      fab2_spine1  n9000-11.0(1d)
    spine       202      fab2_spine2  n9000-11.0(1d)
    2. show cores
    deadbeef@fab2_apic1:~> show cores
    # Executing command: 'cat /aci/fabric/inventory/pod-1/troubleshooting/summary; cd /aci/system/controllers/; find . -name troubleshooting -exec echo ';' -exec cat '{}'/summary ';' '
    troubleshooting:
    node  module  creation-time                  file-size  service-name   process  original-location                                                  exit-code  death-reason  last-heartbeat
    202   27      2014-08-26T14:45:57.000-04:00  35002939   policy_mgr     4177     /var/sysmgr/logs/1409078757_0x1b01_policy_mgr_log.4177.tar.gz     11         2             0.000000
    104   1       2014-09-04T16:07:05.000-04:00  53567750   event_manager  4017     /var/sysmgr/logs/1409861225_0x101_event_manager_log.4017.tar.gz   6          2             0.000000
    3. techsupport all
    deadbeef@fab2_apic1:~> techsupport all
    Triggering techsupport for Switch 201 using policy supNode201
    Triggered on demand tech support successfully for node 201, will be available at: /data/techsupport on the controller.
    Triggering techsupport for Switch 202 using policy supNode202
    Triggered on demand tech support successfully for node 202, will be available at: /data/techsupport on the controller.
    Triggering techsupport for Switch 102 using policy supNode102
    Triggered on demand tech support successfully for node 102, will be available at: /data/techsupport on the controller.
    Triggering techsupport for Switch 103 using policy supNode103
    Triggered on demand tech support successfully for node 103, will be available at: /data/techsupport on the controller.
    Triggering techsupport for Switch 101 using policy supNode101
    Triggered on demand tech support successfully for node 101, will be available at: /data/techsupport on the controller.
    Triggering techsupport for Switch 104 using policy supNode104
    Triggered on demand tech support successfully for node 104, will be available at: /data/techsupport on the controller.
    Triggering techsupport for APIC using policy ts_exp_pol
    Triggered on demand tech support successfully for controllers, will be available at: /data/techsupport on the controller.
    Use 'status' option with your command to check techsupport status
    (Note: "techsupport all" may have issues depending on which process has cored or crashed. A "techsupport local" command may need to be run instead.)
    The Tech Support files will be located on the APIC(s) in the following directory:  "/data/techsupport"
    Thank you for using the ACI Cisco Support Forum.

  • 2008 Failover cluster unable to create computer account

    Hello,
    I have created a 2008 R2 failover cluster and I am trying to add a failover file server to it.
    I get the dreaded
    Cluster network name resource 'OfMaClusterFS' failed to create its associated computer object in domain 'xxx.domain' for the following reason: Unable to create computer account.
    The text for the associated error code is: Access is denied.
    Please work with your domain administrator to ensure that:
    - The cluster identity 'OFMACLUSTER$' can create computer objects. By default all computer objects are created in the 'Computers' container; consult the domain administrator if this location has been changed.
    - The quota for computer objects has not been reached.
    - If there is an existing computer object, verify the Cluster Identity 'OFMACLUSTER$' has 'Full Control' permission to that computer object using the Active Directory Users and Computers tool.
    I have created clusters frequently in the past, on my own Domains that I am a domain admin of.  Now I am trying to make one on our larger corporate domain that I am not a domain admin of and get this error.
    By default, domain users cannot add computer accounts to our domain.  I do, however, have a limited account that can add computers to the domain... but I have tried all the tricks I can think of to add the network name to AD, with no luck.
    I have tried running the cluster service with this account, but it still tries to use the OFMACLUSTER$ identity to create the network name.  I have tried manually creating the network name object with my limited account, but that doesn't work either,
    same error.  I don't have the ability to change permissions on the computer object I added for the network name in AD.
    I have raised a ticket to our wintel team to try and get them to help, but they aren't exactly the most responsive bunch.  I'm just wondering what the best way around this problem is if I am not a domain admin and I can't make the changes I need, or
    what concise instructions I can give to the domain admins so that they can help me out without them saying that it is a security breach etc.
    I would appreciate any advice on this, as it's now urgent and also something I will have to do fairly regularly in the future, and I don't want to get caught in this situation again.

    Hi jogdial,
    To create a cluster, the minimum permissions are: administrative permissions on the servers that will become cluster nodes, plus
    Create Computer objects and Read All Properties permissions in the container that is used for computer accounts in the domain.
    If you create the cluster name account (cluster name object) before creating the cluster (that is, prestage the account), you must give it the
    Create Computer objects and Read All Properties permissions in the container that is used for computer accounts in the domain. You must also disable the account and give
    Full Control of it to the account that will be used by the administrator who installs the cluster.
    The related KB:
    Failover Cluster Step-by-Step Guide: Configuring Accounts in Active Directory
    http://technet.microsoft.com/en-us/library/cc731002(v=ws.10).aspx
    More information:
    How to Create a Cluster in a Restrictive Active Directory Environment
    http://blogs.msdn.com/b/clustering/archive/2012/03/30/10289577.aspx
    I’m glad to be of help to you!

  • How can i make a APIC to a factory default ?

    In the case of a fabric domain name mismatch, the mismatched APIC in an APIC cluster can be handled via the console and GUI.
    I've checked a lot of documents regarding APIC troubleshooting, but I couldn't find how to do it.
    I think I have to run the initial setup again to put it back into the APIC cluster.
    Could you tell me what I should do?

    Try this: 
    Resetting APIC 'admin' password 
    1- Connect a USB drive to the APIC. The USB drive should contain a single dummy file named 'aci-admin-passwd-reset'
    2- Reboot the APIC
    3- Interrupt the reboot when the prompt "Press any key to enter the menu" is shown
    4- The next screen will show the version of Linux installed on the APIC
    5- Select the correct version
    6- Type 'e' to edit this command
    7- Add 'aci-admin-password-reset' to the end of the command and press Enter
    8- Press 'b' to boot.
    9- The APIC will boot to a prompt and ask for the new 'admin' password
    Let us know how it goes

  • Solaris 10 cluster:failover project or zone can not have same name?

    Oracle on a Solaris 10 cluster: for a two-node Sun Cluster failover, the SA advised using different accounts (oracle01 for node01, oracle02 for node02) for the failover cluster. Why can't I create the same 'oracle' account on both nodes?
    Can different failover projects or zones not have the same user or group account name?
    thanks.

    Hi Vangelis,
    Building a cluster, requires some planning and understanding the concepts.
    A good start would be reading some of the documents linked to in this url: http://docs.sun.com/app/docs/doc/819-2969/gcbkf?a=view
    Regards,
    Davy

  • Failover cluster node - You do not have administrative privileges on the server 'servername' ?

    Hi, hello and good morning, TechNet.
    I would like to post a question for which I am really expecting a solution.
    I have 2 domains in one single forest.
    Domain 1: hg.corp
    Domain 2: iac.corp (iac.corp is a tree domain under the hg.corp forest)
    Trust: transitive trust between hg.corp and iac.corp
    Domain controller 1: dc.hg.corp (for hg.corp)
    Domain controller 2: iacdc.iac.corp (for iac.corp)
    I want to make a failover cluster between these 2 domain controllers (they are literally in different domains, but in the same forest).
    Validating the cluster on dc.hg.corp:
    dc.hg.corp can be added, but iacdc.iac.corp fails (Error: You do not have administrative privileges on the server 'iacdc')
    Validating the cluster on iacdc.iac.corp:
    iacdc.iac.corp can be added, but dc.hg.corp fails (Error: You do not have administrative privileges on the server 'dc.hg.corp')
    Please provide me a solution for this issue, so I can reduce the number of server boxes.
    Thank you & Have a nice day.
    Shamil Mohamed

    Hi,
    We do not support combining the AD DS role and the failover cluster feature in Windows Server 2012.
    This behavior is by design.
    The related KB:
    You cannot add a domain controller as a node in a Windows Server 2012 failover cluster environment
    http://support.microsoft.com/kb/2795523
    Hope this helps.

  • Decorations outside cluster - part of cluster or not?

    LabVIEW 8.6.1.f1
    If I have a cluster on a panel, and I have a decoration (a flat frame, for example) behind it, then if I hide the cluster, the whole frame appears.
    That's just what I would expect.
    If I build a cluster in the CONTROL EDITOR, and put a flat frame outside of, and behind, the cluster, and call it a TYPEDEF, then when I put an instance on the panel, and hide the cluster, I see no frame.  It's just not there.
    Why?
    The DECOS[ ] property of the cluster won't help - it's not part of the cluster.  But if it's not part of the cluster, why does it get hidden with it? 
    Message Edited by CoastalMaineBird on 07-21-2009 12:58 PM
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks

     You'd see the same thing with a boolean or a numeric or any other data type instead. 
    True, but since I encountered the problem while dealing with a cluster (container), the "container" characteristics led me down the wrong path. 
    A custom control must have one and only one control/indicator to define a basic data type.
    Decorations are attached to it, but not necessarily contained by it.
    Once defined in the control editor, the thing is a whole unit, decorations and all.
    If you can get references to pieces of it (decorations included), then you can deal with them separately (if it's not a STRICT typedef).
    You cannot get references to decorations outside a container.
    I think that about covers the rules. 
    Thanks, Nathan 
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks

  • Cluster or array or other solution

    Hi
    I have a LabVIEW 6i program roughly consisting of 3 components:
    'main program': calls the 'read subvi' every program cycle and displays the output.
    'read subvi': reads a number of ADC channels, converts the ADC voltages to physical values, and returns them as a cluster/array.
    'save subvi': saves the current readback values and names of the ADC channels to a file.
    Every physically measurable quantity has a name, a conversion formula, and an ADC channel. So I want all this information in the 'read subvi' (as a constant) and only once (so it is easy to change).
    The 'save subvi' needs the names from the 'read subvi', so I have changed the 'read subvi' to output both values and names (two different clusters/arrays).
    1:
    If I output the values from the 'read subvi' as a named cluster, they are fairly easy to distribute to different instruments (placed in different tabs) in the 'main program' using 'unbundle by name'. But that way every channel has two names... the label in the cluster, and the string constant used e.g. by the 'save subvi'.
    2:
    If I output the values as an array, there is a big risk that an instrument in the 'main program' will show the wrong channel (if you change the order in the 'read subvi' and forget to do the same thing in the 'main program').
    Is it possible to convert a string constant to a label/name (later to be used by 'unbundle by name') at runtime? Or is there another brilliant solution to my problem?
    / Peter

    Why not define the cluster array in your main VI and pass it (as an array or as individual clusters) to the read and save sub-VI's? The sub-VI's will then be more general purpose: they'll read and save whatever you tell them to.
    You can use Property Nodes to read the names (label text) of controls in a cluster. But you can't use Property Nodes to change the control names: you'll get an error like "Error 1073 occurred at Property Node (arg 1) in Read Cluster Control Names.vi. Possible reasons: LabVIEW: This property is writable or this method is available only when the VI is in edit mode."
    You need to select the item for Unbundle by name while editing the VI, not at run-time.
    If the main VI defines the cluster, why do you need to read the control labels at run-time? Why not just use a string control in your cluster instead of constants and labels?
    Here's an example of reading and trying to write label text for controls in a cluster. I really don't think you need to do anything like this, but I don't really understand your application.
    Attachments:
    Read_Cluster_Control_Names.vi 47 KB

  • ACI - APIC installation

    Hi, 
    Has anyone tried to install the APIC software within ESXi before? Is it possible to virtualize the APIC controller?
    My company recently purchased an ACI lab with N9K switches, but no APIC. I managed to find a UCS C220 M3 server and intended to build the APIC cluster on it.
    Your advice is much appreciated.
    Thank you.
    Regards,
    Alex

    There is an ACI simulator available. See this post for details:
    http://www.linkedin.com/pulse/20141207100323-42273637-virtual-networking-labs-showdown?trk=prof-post

  • Xserve RAID lost RAID 5 array on right/bottom controller

    I cleanly shut down an Xserve G5 with an Xserve RAID attached and then powered off the Xserve RAID this morning. I replaced a failing memory module in the Xserve and moved the rack 6". I then powered on the Xserve RAID, waited for a couple minutes until it was fully booted, and then booted the Xserve. When it came up, my RAID 50 volume failed to mount. Upon further investigation, it appears that the RAID 5 array on the right hand disks is "gone". In RAID Admin (Disks and Drives tab), the left side array is visible, but all disks on the right side list status as "OK" and Type: as "Spare".
    Configuration: Xserve RAID with 14 250 GB disks. Each side was configured with 6 disks in a RAID 5 array and one spare. These were striped using Software RAID on the Xserve into RAID 50 and mounted as a single volume. The RAID firmware is currently 1.5 and the Xserve is running OS X Server 10.4.3.
    To summarize today's activities: under Apple's guidance (as this entire system is covered under a Premium Support contract), I swapped the (bottom) controller for the right side, updated firmware (from v1.3/1.20a to 1.5/1.50), multiple resets of both controllers, repeated forced firmware updates, etc.
    Also, I swapped the left set of disks for the right set, and the array from the left set now shows up on the right side, and the missing array from the right is still missing with the disks on the left side. I replaced the disks to their original positions with the same results.
    RAID Admin's Utility "Recognize Array" will not perform any operations on the right side disks, and Apple tells me that there's nothing further that I can do with it.
    None of this has solved the problem or substantially changed the issue, and the disks on the right side are still missing their associated array. Apple is unable/unwilling to offer any further help except to refer me to 3rd party data recovery services.
    Does anyone have any suggestion at all that might possibly recover the missing array? Is anyone aware of any tools that I might use to recreate the RAID 5 array on the right controller? I was hoping that there might exist some low-level tools with which the disks could be recreated into an array by hand? Are there any commercial products that would work on this? Any other ideas?
    Many thanks for any suggestions.
    - Martin
    Xserve RAID (14 250GB disks) on Xserve G5   Mac OS X (10.4.3)  

    Not sure why you swapped the controllers back and forth
    William, I don't think I was clear on this. I swapped the controller with a new controller that I had in a spare parts kit. This was at Apple's request.
    really, RAID is not a backup...
    I'm well aware, but despite my incessant warnings, users will become lulled into a false sense of security when something "just works" for a very long time. This array wasn't intended to store valuable data that couldn't be lost, but...
    It may be possible for Apple to re-create the RAID set, did you ask AppleCare about the possibility?
    Apple has told me more than once that there's nothing else that they can do for me...and yet I keep calling back.
    I was surprised that this array was lost when there were absolutely no prior signs of a problem and the system was merely shutdown and restarted cleanly. I'm more surprised and quite disappointed to learn that Apple will do nothing else for me (under a Premium Support contract) to attempt to repair a damaged array. When I asked questions about where the RAID information is stored, I received the answers: "I can't tell you that" and "Apple doesn't release that information". I did not get the feeling that they were working with me, but rather holding my hand while they walked me through published documentation.
    I expected that there would be utilities (analogous to filesystem repair utilities) such as RAID Admin's "Recognize Array" that could help repair and recover damaged array data. I think the lesson I've learned today is that I was naive to have expected such a thing without actually having investigated it ahead of time.
    Anyway, thanks for your input, William.

  • Changing MIDI Port Order is wreaking havoc on my Environment

    Anyone seen this before?
    I've got a MOTU Midi Express with 4 ins and outs - it's my main interface.
    I've also got a Novation Remote25SL which works as a control surface/controller with 3 out ports. It requires some special Environment cabling to work properly with Logic.
    Everything was working fine until lately all my projects stopped responding to MIDI from my master keyboard...
    I eventually discovered it was because Logic had somehow changed the order of MIDI ports. It used to be "Midi Express 1.... 4" followed by "Remote25SL 1...3".
    Now if I look at the "outputs & Ports" page in the Environment, it's reversed. "Remote25SL 1...3" followed by "Midi Express 1....4".
    So what? Well, the dumb cables in the environment don't move when the ports do. So the cables that used to be connected to the Remote25SL are now connected to the Motu Midi Express, which wreaks havoc on my project.
    Any ideas a) what causes this, and b) how to re-order the ports so it goes back the way it was before?
    I've tried messing around in the AudioMidi setup tool but don't see any thing here that will help.
    It points up, I think, a flaw in Logic's "logic". Environment cables should be tied to physical/logical ports, not just handed out on a 'first come, first served' basis to whatever shows up on the MIDI bus first. The system apparently can "see" the difference between the Motu and the Novation. So why can't Logic?
    Grrr. Time to start reading the Nuendo brochures again.

    I HAVE A SOLUTION
    and it works fine for me after a series of trial and error experiments.
    Quit Logic (for now) and do the following.
    1) Download the app MidiPipe from this site:
    http://www.apple.com/downloads/macosx/audio/midipipe.html
    2) Plug in all of your MIDI controllers (I mean, all of them)
    3) Start making one 'pipe' at a time. Think of them as 'rules' for MIDI flow.
    I wish I could post screen shots here... I can show you that I have 6 'pipes' happening.
    Why: I have these controllers (according to Logic) happening:
    Korg nanoKEYS, nanoPAD, nanoKONTROL; Alesis Photon X25; midiman (now M-Audio) midisport 2x2; Tascam US-144 (the MIDI portion)
    4) In these pipes, ONE AT A TIME, these pipes are very simple:
    -drag a •Midi In for each controller,
    -click the 'hijack' button.
    -drag a •Midi Out next,
    -create a new virtual Midi Out Port and name it. In my case I went numerically (e.g. 1 nanoPAD, 2 nanoKEYS...) just so I can see the port name is different than the input. Do what works for you here
    5) Create a new pipe, redo step 4 until you've created a new pipe for every controller you have.
    Save this Midipipe file, and open it prior to any Logic session, before you open Logic.
    6) Open Logic. In the global preferences, go to Controller Assignments (Command-K)
    and you'll have to define any custom controller data (this may not be necessary for you). But I had to change the controller listening port for all nanoKONTROL assignments. You may not have to do anything here at all.
    7) In the case of the folks trying to use two (or more) controllers, as in three keyboards at a time you still have to change one setting: and unfortunately this is not global for Logic, you'll have to change it every single project:
    -Under File>Project Settings>Recording, you need to check the box that says
    "Auto Demix by channel if multitrack recording"
    -For each virtual instrument, you need to set the Midi Channel it responds to. Generally on the left above your channel strip, named Inst 1 (2, 3, 4, ...) depending on which track you're on, click the drop-triangle and you'll see: MIDI Channel: ALL --- change it to whatever you assigned your controller to
    -After you've assigned those MIDI Channels per track, arm each track you wish to record onto (or play, trigger, etc.) ---arm meaning 'click the R so it's red'
    You should be good to go!
    To boil it down to simply: download MidiPipe, figure it out, define controller data if necessary, check Demix by Channel in Project Settings, Set each track's Midi Channel, Arm each track, record two (or more) parts at once.
    You'll spend 30 minutes or more figuring out MidiPipe for your setup (less if you're brilliant); anywhere up to an hour or more in Controller Assignments (but you may not need to do anything there); and less than a minute for the rest. Do the heavy work once (and tweak it) -- after that, the checkboxes/Midi Channel assignments/track arming is like 10 clicks every project.
    Good luck!
    Works great for me with this setup:
    all the controllers listed above, Logic 8.0.2, iMac 2.4Ghz, OSX 10.5.6
    ~Robb

  • What are files in $AGENT_HOME/sysman/emd/state/statemgmt/oracle_home

    Greetings.
    I have found my OEM 12c (12.1.0.1) reporting on some targets which don't exist and some which are reported incorrectly. Some poking around on a particular host exhibiting this behavior (this host is a member of a two-node cluster) led me to the directory $AGENT_HOME/sysman/emd/state/statemgmt/oracle_home on that host. In that directory I find the files -
    agent12c1_8_hammer
    cmdevel.ucdavis.edu_cmdevel1_oracle_database_home
    cmupg.ucdavis.edu_cmupg1_oracle_database_home
    hammersteele_cluster_home
    LISTENER_SCAN10_hammersteele_oracle_listener_home
    LISTENER_SCAN2_hammersteele_oracle_listener_home
    cmupg is a database which does not exist; it may have existed at one point, but if it did it was removed some time ago.
    LISTENER_SCAN10 is a non-existent SCAN listener.
    ./emctl config agent listtargets lists both of those targets -
    mytest1:product/agent12g/agent_inst/bin->./emctl config agent listtargets
    Oracle Enterprise Manager 12c Cloud Control 12.1.0.1.0
    Copyright (c) 1996, 2012 Oracle Corporation. All rights reserved.
    [hammer.ucdavis.edu, host]
    [hammer.ucdavis.edu:3872, oracle_emd]
    [LISTENER_SCAN10_hammersteele_oracle_listener_home, oracle_home]
    [LISTENER_SCAN2_hammersteele_oracle_listener_home, oracle_home]
    [cmdevel.ucdavis.edu_cmdevel1_oracle_database_home, oracle_home]
    [cmupg.ucdavis.edu_cmupg1_oracle_database_home, oracle_home]
    [LISTENER_SCAN3_hammersteele, oracle_listener]
    [LISTENER_hammer.ucdavis.edu, oracle_listener]
    [+ASM1_hammer.ucdavis.edu, osm_instance]
    [agent12c1_8_hammer, oracle_home]
    [LISTENER_SCAN2_hammersteele, oracle_listener]
    [has_hammer.ucdavis.edu, has]
    (some data deleted by me)
    My question is, can anyone explain to me where those files come from, how OEM uses them, and whether I should clean them up? I did see SCAN_LISTENER10 reported by the console, but I manually deleted it from the console.
    Another interesting observation is that there are three SCAN listeners on this cluster, 1, 2 & 3. The console reports only LISTENER_SCAN2 as being up; the others are reported as being down. Could this have some bearing on it? SCAN_LISTENER1 is running on the other node, and the file LISTENER_SCAN1_hammersteele_oracle_listener_home exists in the analogous directory on the other cluster node.
    I'm puzzled. If anyone can help me out I would be most appreciative.
    Thank you.
    Bill Wagman
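    (Side note: a rough way to see which of those statemgmt files still correspond to a target the agent actually reports is a simple cross-check like the sketch below. This is only an illustration under assumptions: the directory and emctl paths are the ones mentioned in this thread, and the matching is a plain name comparison, not anything OEM-documented.)
    # Rough cross-check: statemgmt/oracle_home files vs. targets from "emctl config agent listtargets".
    import os
    import subprocess

    STATE_DIR = os.path.expandvars("$AGENT_HOME/sysman/emd/state/statemgmt/oracle_home")
    EMCTL = os.path.expandvars("$AGENT_HOME/bin/emctl")   # assumed emctl location

    out = subprocess.run([EMCTL, "config", "agent", "listtargets"],
                         capture_output=True, text=True, check=True).stdout

    # Target lines look like "[cmdevel.ucdavis.edu_cmdevel1_oracle_database_home, oracle_home]"
    known = set()
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("[") and line.endswith("]"):
            known.add(line[1:-1].split(",")[0].strip())

    # Any state file whose name is not a reported target is a candidate for investigation
    for fname in sorted(os.listdir(STATE_DIR)):
        status = "known target" if fname in known else "NOT in listtargets"
        print(f"{fname}: {status}")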

    Courtney,
    That makes sense, and in fact the 11.2.0.2 installation was upgraded to 11.2.0.3 and 11.2.0.2 was not uninstalled, so these entries from targets.xml make sense -
    <HOME NAME="Ora11g_gridinfrahome1" LOC="/usr/local/oraGrid/product/11.2.0.2-grid" TYPE="O" IDX="1">
    <NODE_LIST>
    <NODE NAME="hammer"/>
    <NODE NAME="steele"/>
    </NODE_LIST>
    </HOME>
    <HOME NAME="OraDb11g_home1" LOC="/usr/local/oracle/product/11.2.0.2/dbhome_1" TYPE="O" IDX="2">
    <NODE_LIST>
    <NODE NAME="hammer"/>
    <NODE NAME="steele"/>
    </NODE_LIST>
    </HOME>
    Now, on to the next question. In the statemgmt/oracle_home directory I see these files (among others) -
    cmupg.ucdavis.edu_cmupg1_oracle_database_home
    LISTENER_SCAN10_hammersteele_oracle_listener_home
    cmupg is a database which no longer exists and LISTENER_SCAN10 no longer exists. Where do those files come from? The database cmdevel does exist and I see the file -
    cmdevel.ucdavis.edu_cmdevel1_oracle_database_home
    There are three other databases on this cluster; why are there not files for those other databases? I'm very confused and would appreciate your help.
    Thank you.
    Bill Wagman

  • What agent_home/sysman/log/agabend.log can hold?

    There are log files in agent_home/sysman/log; agabend.log is one of them. My agabend.log shows the following:
    Thu May 24 09:38:07 2007
    XXXXXXXXXXXXXXXX
    Thu May 24 09:40:29 2007
    XXXXXXXXXXXXXXXX
    I am interested in why this occurs when I start the agent.
    Does anyone know the reason for these entries, or why this happens?

