Multiple NICs on the same subnet

Hey All,
I want to start off with saying I did NOT set these servers up.
I administer two load-balanced Server 2012 R2 servers, each hosting multiple Hyper-V instances. In Network Connections on both servers I see three virtual switches and one cluster switch.
I simply need to figure out which NIC the host itself is using so that I can finish setting up a cloud backup. The only solution I've come across so far is to disconnect the Ethernet cable from each NIC individually and see whether the host can still reach the internet, but
that's not practical until the weekend (a time I'd prefer not to work).
So, is the cluster switch the NIC that the host is using, or is that not necessarily true?
On the primary server the cluster switch appears to be the host's NIC: it has a static IP address and a reservation in DHCP, and it's the only NIC on either server with a static IP. An nslookup of the primary server's hostname resolves
to that IP address. However, on the secondary server an nslookup resolves to the IP addresses of all four NICs, while pinging the hostname returns only one IP address. Is that the correct IP address or not?
Thanks for your help in advance.
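The check implied by the question, which of several DNS-registered addresses falls in the management subnet, can be sketched in a few lines of Python. All addresses below are hypothetical placeholders, not the poster's real ones:

```python
import ipaddress

# Hypothetical stand-ins for the four NIC addresses DNS returns for the
# secondary host; only the cluster/management NIC sits in the management
# subnet, so a membership test singles it out.
registered_ips = ["192.168.10.5", "10.0.50.12", "10.0.60.12", "172.16.0.9"]
management_subnet = ipaddress.ip_network("192.168.10.0/24")

host_ips = [ip for ip in registered_ips
            if ipaddress.ip_address(ip) in management_subnet]
print(host_ips)  # ['192.168.10.5']
```

If the cluster switch's static address is the only one in the management range, that is very likely the NIC the host itself uses; applying the same filter to the nslookup results would confirm it without unplugging any cables.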

Hi Sir,
Sorry for the delay.
Please run the following command to check which physical NIC is bound to the external virtual switch:
PS C:\Users\administrator.SERVERLAB> Get-VMSwitch

Name                        SwitchType NetAdapterInterfaceDescription
----                        ---------- ------------------------------
Internal                    Internal
New Virtual Switch for andy Internal
qostest                     Internal
team2                       Internal
External                    External   Intel(R) 82579LM Gigabit Network Connection
Best Regards,
Elton Ji

Similar Messages

  • Multiple SSIDs on the same subnet?

    Can you have multiple SSIDs on the same subnet?
    SSID1 authenticates clients via RADIUS.
    Our corporation bought printers with wireless cards that only support WPA-PSK, so we created SSID2 for the printers. We can connect to both SSIDs and ping from SSID1 to SSID2, but we cannot perform other functions such as viewing the printer management interface in a browser. Should it be possible to communicate between SSID1 and SSID2 on the same subnet?

    Yes, you should have no issue; the only catch is that you are using a weaker security method. So either put them on different subnets so you can control the traffic via ACLs, or use the same security method on both to keep things simple. The fact that you can ping suggests you should be able to HTTP to the device.

  • Multiple Scopes for the same subnet with Cisco CNR

    I have a requirement from my internal client: he wants to have multiple scopes within the same /21 subnet, with a different scope for each group of servers (in the /21 subnet).
    We have 2 Cisco CNRs (I don't know the version they are running because they are managed by an external vendor), and they are in active/standby mode.
    My question is: is it possible to make the CNR have different scopes (ranges of IPs) for the same subnet (/21) and assign those based on the client (I don't know what criteria could be followed; maybe MAC address?)?
    I appreciate any help or a pointer to the right documentation.
    Thank you in advance.
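    Whatever CNR mechanism ends up assigning clients to scopes (client-classes are the usual approach, though I can't confirm the feature set without knowing the version), the arithmetic of carving several scopes out of one /21 is easy to sanity-check. The parent network and per-group ranges below are hypothetical:

```python
import ipaddress

parent = ipaddress.ip_network("10.20.0.0/21")  # hypothetical /21

# Hypothetical per-server-group scopes carved out of the /21
scopes = {
    "group-a": ipaddress.ip_network("10.20.0.0/24"),
    "group-b": ipaddress.ip_network("10.20.1.0/24"),
    "group-c": ipaddress.ip_network("10.20.2.0/23"),
}

# Every scope must sit inside the parent, and no two scopes may overlap.
for name, net in scopes.items():
    assert net.subnet_of(parent), f"{name} falls outside the /21"

nets = list(scopes.values())
for i in range(len(nets)):
    for j in range(i + 1, len(nets)):
        assert not nets[i].overlaps(nets[j]), "scopes overlap"
print("scope plan is consistent")
```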

    I don't know that you would want a separate PO print. You can print the same PO twice to provide a copy for the freight vendor. I think that should be okay as long as it is clear that the PO is with the vendor for the products and there is a condition specifying the freight deal. The second vendor (say the freight vendor) should be okay with this PO as well. I don't remember how the standard SAP PO forms handled the vendor condition, but it should be enough, I would think.
    I never came across a separate PO for freight, since it is usually a value-added service that is an integrated part of shipping goods, and the paperwork is usually leveraged from the PO, bill of lading, etc. SAP will make it difficult if you do need a separate PO, since there is no way to use account assignment to material; you would therefore need to manually account-code at invoice verification, post to a clearing account in FI for re-class to material, or charge to an internal order with settlement to material, or something like that.
    I would just make sure that the PO print makes sense in terms of being able to see the freight charge and maybe the vendor name (look at the standard SAP PO form to see what the best practice is).

  • Multiple 11gR2 Clusterware installations on the same subnet using GNS

    Hi,
    I am hoping someone can shed some light on an issue I have with the installation of 11gR2 Clusterware. The main issue is that I have a host-vip.subdomain that fails to start up on installation. This is the 3rd cluster in my environment; the other 2 installed and configured fine. This install fails because the host-vip.subdomain does not resolve with its own GNS service.
    Further investigation led me down a path involving my other clusters. I found in my DNS server's /var/log/messages file that the host-vip.subdomain was trying to resolve to host-vip.subdomain.subdomain_clust1 on IP xx.xxx.130.20. However, this cluster's GNS service is listening on xx.xxx.130.22.
    More detail on the environment:
    I currently have two 2-node clusters in production. Not RAC, just 2-node Linux clusters; on the production clusters I have a single-instance database running in a 'warm failover' configuration. I use SCAN to access each of the databases on their respective clusters.
    I am using GNS and DHCP (obviously) to generate the VIPs for each cluster. Cluster1 GNS IP - xx.xxx.130.20; Cluster2 GNS IP - xx.xxx.130.21. Both are configured in DNS and resolve the SCAN address to each of the VIPs on that server.
    When testing SCAN access to the databases, I noticed that a tnsping to DB1, which uses SCAN-name1.subdomain1, connects fine. A tnsping to DB2 using SCAN-name2.subdomain2 connects fine as well.
    However, the weird part is that if I use each other's subdomain, they still connect; i.e., tnsping DB1 using SCAN-name1.subdomain2 still connects to DB1, albeit taking about 10x longer to get a response.
    This has led me to the idea that since GNS is basically an mDNS service broadcasting on the .130 subnet, could it be possible that ANY GNS service on that subnet could resolve a name lookup request for any other GNS on the same subnet, and, during installation of a new server, cause a new VIP to go to the wrong GNS service?
    So, my question is this: is there a requirement I may have missed stating that multiple clusters using GNS/SCAN MUST be on different subnets, so as not to interfere with each other's lookup requests?
    Any info would be helpful
    ~ Allan


  • Multiple Clusters on the same subnet

    Hi,
    We have two separate projects using coherence (3.5.2) in our location.
    Unfortunately they will be deployed on the same subnet.
    We use ExtendTCP on the client side to connect to the cluster.
    What steps should be taken so that Project1 (P1) is kept separate from P2?
    We only have control over P1; is there something that can be done purely from P1's config that will achieve this aim?
    The config for the client is given below; since we can restrict the hostnames and the ports in the tcp-initiator elements, we can easily force the client to connect only to the relevant P1 machines.
    <cache-config xmlns="http://schemas.tangosol.com/cache">
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>dist-*</cache-name>
          <scheme-name>extend-direct</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <remote-cache-scheme>
          <scheme-name>extend-direct</scheme-name>
          <service-name>ExtendTcpCacheService</service-name>
          <initiator-config>
            <tcp-initiator>
              <remote-addresses>
                <socket-address>
                  <address>P1.1</address>
                  <port>8078</port>
                </socket-address>
                <socket-address>
                  <address>P1.2</address>
                  <port>8078</port>
                </socket-address>
              </remote-addresses>
            </tcp-initiator>
            <outgoing-message-handler>
              <heartbeat-interval>50s</heartbeat-interval>
              <heartbeat-timeout>35s</heartbeat-timeout>
              <request-timeout>30s</request-timeout>
            </outgoing-message-handler>
          </initiator-config>
        </remote-cache-scheme>
      </caching-schemes>
    </cache-config>
    On the server(s) there are:
    a. an ExtendTCPService running on each
    b. multiple cache servers with a distributed scheme running
    The cache-config for the server is given below; how can we restrict the hosts it searches for clusters to join?
    We noticed the configuration elements (clusterport etc.) that seem to be able to do this, and also authorized-hosts. Can this be done in the cache-config element, or only in the cluster-config element?
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <logging-config>
        <destination>cache-server.log</destination>
        <destination>stderr</destination>
      </logging-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>dist-*</cache-name>
          <scheme-name>dist-default</scheme-name>
        </cache-mapping>
        <cache-mapping>
          <cache-name>repl-*</cache-name>
          <scheme-name>repl-default</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <distributed-scheme>
          <scheme-name>dist-default</scheme-name>
          <serializer>
            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
            <init-params>
              <init-param>
                <param-type>string</param-type>
                <param-value>custom-types-pof-config.xml</param-value>
              </init-param>
            </init-params>
          </serializer>
          <backing-map-scheme>
            <local-scheme/>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
        <replicated-scheme>
          <scheme-name>repl-default</scheme-name>
          <backing-map-scheme>
            <local-scheme/>
          </backing-map-scheme>
          <autostart>true</autostart>
        </replicated-scheme>
        <proxy-scheme>
          <service-name>ExtendTcpProxyService</service-name>
          <thread-count>5</thread-count>
          <acceptor-config>
            <tcp-acceptor>
              <local-address>
                <address>localhost</address>
                <port>8078</port>
              </local-address>
            </tcp-acceptor>
            <serializer>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
              <init-params>
                <init-param>
                  <param-type>string</param-type>
                  <param-value>custom-types-pof-config.xml</param-value>
                </init-param>
              </init-params>
            </serializer>
          </acceptor-config>
          <autostart>true</autostart>
        </proxy-scheme>
      </caching-schemes>
    </cache-config>
    Thanks for your response,
    Vipb

    Was able to override the multicast address through the command line and have just P1 work with
    the following overrides:
    -Dtangosol.coherence.clusteraddress=P1Cluster -Dtangosol.coherence.clusterport=11111
    where p1Cluster=224.2.1.99 (say)
    Thanks,
    Vipin
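    When overriding the cluster address like this, it's worth confirming that the chosen address really is a multicast address (i.e. inside 224.0.0.0/4), since a unicast value would silently break cluster discovery. A quick check, using the 224.2.1.99 example above:

```python
import ipaddress

addr = ipaddress.ip_address("224.2.1.99")  # the example override above
print(addr.is_multicast)                             # True: inside 224.0.0.0/4
print(addr in ipaddress.ip_network("224.0.0.0/4"))   # True
```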

  • Howto deal with multiple source files having the same filename...?

    Ahoi again.
    I'm currently trying to make a package for the recent version of subversive for Eclipse Ganymede and I'm almost finished.
    Some time ago the svn.connector components were split from the official subversive distribution and have to be packaged separately. And here is where my problem arises.
    The svn.connector consists (among other things) of two files which have the same name:
    http://www.polarion.org/projects/subversive/download/eclipse/2.0/update-site/features/org.polarion.eclipse.team.svn.connector_2.0.3.I20080814-1500.jar
    http://www.polarion.org/projects/subversive/download/eclipse/2.0/update-site/plugins/org.polarion.eclipse.team.svn.connector_2.0.3.I20080814-1500.jar
    At the moment makepkg downloads the first one, looks at its cache, and thinks that it already has the second file, too, because it has the same name. As a result, I can neither fetch both files nor use both of them in the build()-function...
    Are there currently any mechanisms in makepkg to deal with multiple source files having the same name?
    The only solution I see at the moment would be to only include the first file in the source array, install it in the build()-function and then manually download the second one via wget and install it after that (AKA Quick & Dirty).
    But of course I would prefer a nicer solution to this problem if possible. ^^
    TIA!
    G_Syme

    Allan wrote: I think you should file a bug report asking for a way to deal with this (but I'm not sure how to fix this at the moment...)
    OK, I've filed a bug report and have also included a suggestion how to solve this problem.
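    Until makepkg handles this natively (newer makepkg versions support renaming a source with a localname::url entry in the source array, though I haven't verified exactly when that was introduced), the disambiguation idea, prefixing each basename with its parent directory, can be sketched like this:

```python
from urllib.parse import urlparse
from pathlib import PurePosixPath

# The two connector URLs from the thread share a basename; prefixing the
# basename with its parent directory ("features" vs "plugins") yields
# distinct local names.
urls = [
    "http://www.polarion.org/projects/subversive/download/eclipse/2.0/update-site/features/org.polarion.eclipse.team.svn.connector_2.0.3.I20080814-1500.jar",
    "http://www.polarion.org/projects/subversive/download/eclipse/2.0/update-site/plugins/org.polarion.eclipse.team.svn.connector_2.0.3.I20080814-1500.jar",
]

def local_name(url):
    """Derive a collision-free local filename from a URL's last two path parts."""
    path = PurePosixPath(urlparse(url).path)
    return f"{path.parent.name}-{path.name}"

names = [local_name(u) for u in urls]
print(names[0])  # features-org.polarion.eclipse.team.svn.connector_2.0.3.I20080814-1500.jar
print(names[1])  # plugins-org.polarion.eclipse.team.svn.connector_2.0.3.I20080814-1500.jar
```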

  • Why does an IP in the same subnet get routed?

    Hi
    A Windows Server 2008 R2 machine connects to the production network through a teamed network adapter. The teamed adapter's IP is 10.157.86.31 / 255.255.255.0 and its gateway is 10.157.86.1.
    The server is backed up via a backup interface configured with 10.128.141.64 / 255.255.248.0 on one of the server's NICs, connected to a NAS drive with the IP address 10.128.141.28. The backup was working perfectly, as the backup network is non-routed.
    Once the server's motherboard was changed, the backup suddenly stopped working, failing authentication to the NAS drive, because the NAS drive authenticates based on the IP addresses of the servers connecting to it.
    What I found is that even though the server's backup IP address has not changed, the NAS drive IP 10.128.141.28 is now reached via the other interface, 10.157.86.31, even though the NAS drive is connected on the same subnet.
    Since the server reaches the NAS drive from 10.157.86.31, authentication fails because the NAS drive expects the source IP address to be 10.128.141.64.
    How can I force traffic to 10.128.141.28 to originate from the NIC with 10.128.141.64?
    Any suggestions please.
    Below is the answer to the problem; it is already implemented and backup is working,
    but I need clarification.
    The solution:
    In spite of rebuilding the server from scratch by freshly installing the operating system, the backup VLAN would not connect. So I decided to move the cable coming from the switch port to an unused NIC port on the server, and reconfiguring
    the IP address on the new NIC port solved the issue.
    But I need one clarification here:
    Before swapping the cable to the spare NIC (screenshot omitted), the NIC connected to the backup VLAN passed no traffic.
    So I decided to swap the cable from LAN1 (backup), which is a separate NIC, to the spare NIC available in the server (also shown in a screenshot omitted here).
    Even after configuring the backup IP address on the new spare NIC, the traffic was still routed via the production VLAN, which should not be the case. Moreover, why did swapping the cable from LAN1 (the non-teamed adapter)
    show LAN4 (the teamed adapter) as disconnected?
    Currently the setup is working, with the backup traffic flowing through the backup VLAN when configured in the above manner.
    Thanks & Regards S.Swaminathan Live & let others live!!!

    Did you set the interface binding order correctly or to match the previous server?
    DNS: Valid network interfaces should precede invalid interfaces in the binding order
    http://technet.microsoft.com/en-us/library/dd391967(v=WS.10).aspx
    Modify the protocol bindings and network provider order
    http://technet.microsoft.com/en-us/library/cc732472(v=WS.10).aspx
    An incorrect IP address is returned when you ping a server by using its NetBIOS name in Windows Server 2008 or in Windows Server 2008 R2
    http://support2.microsoft.com/kb/981953
    You can view your current binding order by using this script (note that I haven't tried it myself yet):
    Show NIC Binding Order
    http://gallery.technet.microsoft.com/scriptcenter/Get-NIC-Binding-Order-a2dc8087
    Also, prior to setting up the teams, make sure that the NIC is set to obtain IP automatically and not have a static entry on it. I've seen this cause problems in the past.
    If you have any unused NICs, such as Local Area Connection 2, don't just unplug them. You must disable them, otherwise they will try to register the APIPA in DNS and that will cause problems.
    Make sure that the correct DNS are on the interfaces that you need to use, too.
    Ace Fekay
    MVP, MCT, MCSE 2012, MCITP EA & MCTS Windows 2008/R2, Exchange 2013, 2010 EA & 2007, MCSE & MCSA 2003/2000, MCSA Messaging 2003
    Microsoft Certified Trainer
    Microsoft MVP - Directory Services
    Complete List of Technical Blogs: http://www.delawarecountycomputerconsulting.com/technicalblogs.php
    This posting is provided AS-IS with no warranties or guarantees and confers no rights.
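    The symptom in this thread, traffic to an on-link address leaving via the wrong interface, comes down to longest-prefix-match route selection. A minimal sketch of that selection rule (the route table below is a hypothetical reconstruction from the addresses in the thread, not the server's actual table):

```python
import ipaddress

# Hypothetical route table: a default route via the production gateway,
# the production /24 on-link, and the backup /21 on-link.
# (10.128.141.64 with mask 255.255.248.0 lies in network 10.128.136.0/21.)
routes = [
    (ipaddress.ip_network("0.0.0.0/0"),       "production gateway 10.157.86.1"),
    (ipaddress.ip_network("10.157.86.0/24"),  "on-link, production NIC"),
    (ipaddress.ip_network("10.128.136.0/21"), "on-link, backup NIC"),
]

def pick_route(dest):
    """Longest-prefix match: the most specific route containing dest wins."""
    d = ipaddress.ip_address(dest)
    matches = [(net, via) for net, via in routes if d in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(pick_route("10.128.141.28"))  # on-link, backup NIC
```

    With the backup /21 present, the NAS address 10.128.141.28 matches the on-link route. If that route disappears (e.g. the backup NIC is down or misbound after the motherboard swap), only the default route matches, traffic sources from the production side, and the NAS rejects the authentication, which is exactly the failure described.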

  • Two controllers on the same subnet

    One of our offices that already has one 2000-series controller needs to install another one. Can the new controller be on the same subnet as the old one, or does it have to be on a different subnet?
    thanks

    Roaming is an 8 hour conversation in itself, but I will answer your question with a 'yes', you should have a mobility group defined if wireless clients may move between APs associated with different controllers.
    Roaming actually has much more to do with the wireless security in use, the config of the client and back-end user database, and the layer 2 connectivity of the multiple controllers.
    If you are using WEP or WPA pre-shared-key with the same layer-2 termination on the controllers, then your users really aren't 'roaming' at all, they are constantly re-associating to the different APs anyway.
    Roaming, in my mind, means 'fast roaming' meaning less than 100 ms. This would require either Cisco proprietary CCKM, or some of the *sort of* WPA2 fast-reconnect features.

  • No translation group for a statically NAT'd IP connecting to an external IP of a device in the same subnet

    Hi, 
    I currently have an issue where a device configured with static NAT is trying to communicate with the NAT'd IP address of a device in the same subnet.
    I'm getting "No translation group found for tcp src sourceip/80 dst destip/80".
    I'm not 100% sure which areas of the config to post.
    Cheers,
    Neil


  • SCVMM 2012 R2 – two iSCSI network interfaces connected to the same subnet

    I would like to configure two networks in SCVMM 2012 R2 which will be used by VMs to connect to iSCSI SAN. Both of these networks should be connected to the same subnet (192.168.100.0/24) because they will connect VMs to Dell EqualLogic using iSCSI MPIO.
    Those networks should be available on all Windows Server 2012 R2 Hyper-V cluster nodes.
    When I try to create two logical networks in SCVMM with the same subnet, I receive an error ("Unable to assign the subnet 192.168.100.0/24 because it overlaps with an existing subnet").
    How should I configure networking in SCVMM to allow one virtual machine to connect to the same subnet using two network interfaces?

    "How should I configure networking in SCVMM to allow one virtual machine to connect to the same subnet using two network interfaces?"
    You can achieve this by simply adding multiple vNICs to a VM - connected to the same VM Network. 
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )
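    The overlap test SCVMM is applying can be reproduced directly. The 192.168.100.0/24 value is from the thread; the second, distinct subnet is a hypothetical contrast:

```python
import ipaddress

iscsi_a = ipaddress.ip_network("192.168.100.0/24")  # first logical network
iscsi_b = ipaddress.ip_network("192.168.100.0/24")  # attempted second one
other   = ipaddress.ip_network("192.168.101.0/24")  # hypothetical distinct subnet

print(iscsi_a.overlaps(iscsi_b))  # True: identical ranges overlap, hence the SCVMM error
print(iscsi_a.overlaps(other))    # False: a distinct subnet would be accepted
```

    This is why the fix is two vNICs on one VM Network rather than two logical networks: the subnet definition can only exist once, but any number of interfaces can attach to it.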

  • What happens on iCloud (ex. contacts) when multiple family members use the same Apple ID?

    What happens on iCloud when multiple family members use the same Apple ID?  For example if we all choose to use iCloud for contacts, are they all merged together?  We use the same Apple ID so we can use find my iPhone to keep track of the whole family.

    Of course if you are both connected to the same iCloud account you have the same contacts - what did you expect?. The contacts live on the server and are read from there by the devices; so as you've both managed to sync your contacts up to iCloud they are now inextricably mixed. You can only delete your contacts by deleting individual ones, and doing that will delete them from your phone as well.
    You can only unravel this by
    1. In the iCloud contacts page at http://icloud.com, select all the contacts, click on the cogwheel icon at bottom left and choose 'Export vCard'.
    2. Sign out of System Preferences>iCloud
    3. Create a new Apple ID and open a new iCloud account with it for your own use.
    4. Import the vCard back into the iCloud contacts page.
    5. Go to http://icloud.com and sign in with the original ID. This is now his ID. Work through the contacts individually deleting the ones you don't want him to have. When done sign out and advise him to change his password.
    6. Go to the new iCloud account and delete his contacts individually.
    Of course if you have also been syncing calendars and using the same email address there are problems with doing this.

  • How do I use multiple local workspaces for the same website in VS2013?

    For quite some time I've been using VS2010 with multiple workspaces mapped from the same TFS branch to test different variants of a website in parallel. The internal web server (Cassini) seems quite happy to run multiple versions and simply maps a new port
    number for the second and subsequent instances that start up.
    We are now looking to move to VS2013, so have to use IISExpress since the 'internal' web server is no longer provided.
    How do I configure things so that if I open a single workspace, IISExpress runs against that workspace using the default port number (specified in the solution), but if I open a second instance in a different workspace it correctly uses a new port number
    and working folder. Do I need to invoke IISExpress using the command line (presumably from each workspace) and specify the relevant defaults or is there some way to enter this in the local config file for IISExpress?
    My initial attempt at editing the config file resulted in the second and subsequent instances using the same physical folder as the first instance, presumably because the specified site name and port number are the same!

    Hello,
    Thank you for your post.
    I am afraid that the issue is outside the support range of the VS General Questions forum, which mainly discusses
    usage of the Visual Studio IDE, such as the WPF & SL designer, the Visual Studio Guidance Automation Toolkit, Developer Documentation and Help System,
    and the Visual Studio Editor.
    Based on your description, I suggest that you can consult your issue on ASP.NET forum:
    http://forums.asp.net/
     or IIS forums: http://forums.iis.net/ for better solution and support.
    Best regards,

  • How to load multiple HTML5 canvas on the same page (the proper method)

    Hi,
    I've been struggling to load multiple canvas animations on the same page. At the beginning I thought that exporting the movies with different namespaces and reloading the libraries in a sequential flow might work, but it doesn't. It always loads just the last animation loaded. More info here: Coding challenge: what am I doing wrong?
    Here is a sample of what I'm doing:
    1st: Publish two flash movies with custom namespaces for "lib" set in the "publish settings": "libFirst" and "libSecond".
    2nd: Edit the canvas tags in the HTML page. One called "firstCanvas" and the other one called "secondCanvas"
    3rd: Edit the javascript like this:
            <script>
                // change the default namespace for the CreateJS libraries:
                var createjsFirst = createjsFirst||{};
                var createjs = createjsFirst;
            </script>
            <script src="//code.createjs.com/easeljs-0.7.1.min.js"></script>
            <script src="//code.createjs.com/tweenjs-0.5.1.min.js"></script>
            <script src="//code.createjs.com/movieclip-0.7.1.min.js"></script>
            <script src="{{assets}}/js/first.js"></script>
        <script>
            function initFirstAnimation() {
                var canvas, stage, exportRoot;
                canvas = document.getElementById("firstCanvas");
                exportRoot = new libFirst.first();
                stage = new createjsFirst.Stage(canvas);
                stage.addChild(exportRoot);
                stage.update();
                createjsFirst.Ticker.setFPS(libFirst.properties.fps);
                createjsFirst.Ticker.addEventListener("tick", stage);
            }
        </script>
            <script>
                // change the default namespace for the CreateJS libraries:
                var createjsSecond = createjsSecond||{};
                var createjs = createjsSecond;
            </script>
            <script src="//code.createjs.com/easeljs-0.7.1.min.js"></script>
            <script src="//code.createjs.com/tweenjs-0.5.1.min.js"></script>
            <script src="//code.createjs.com/movieclip-0.7.1.min.js"></script>
            <script src="{{assets}}/js/second.js"></script>
        <script>
            function initSecondAnimation() {
                var canvas, stage, exportRoot;
                canvas = document.getElementById("secondCanvas");
                exportRoot = new libSecond.second();
                stage = new createjsSecond.Stage(canvas);
                stage.addChild(exportRoot);
                stage.update();
                createjsSecond.Ticker.setFPS(libSecond.properties.fps);
                createjsSecond.Ticker.addEventListener("tick", stage);
            }
        </script>
    <body onload="initFirstAnimation(); initSecondAnimation();">
    Could someone please reply with the best practice on how to do this? If possible, without the need to reload all the libraries...
    If I only need to show one flash movie at a time, would it be more efficient to cut & paste the canvas tag using jQuery in the DOM and reloading a different lib on it?
    Many thanks!
    #flash #reborn

    I was able to fix it. At the end, it was easier than I thought. Just have to publish using a different "lib" namespace for each movie, load all the scripts at the end of the <body> and then add the following to the onload or ready events:
    $(document).ready(function () {
            var canvas, stage, exportRoot;
            // First movie
            canvas = document.getElementById("firstCanvas");
            exportRoot = new libFirst.first();
            stage = new createjs.Stage(canvas);
            stage.addChild(exportRoot);
            stage.update();
            createjs.Ticker.setFPS(libFirst.properties.fps);
            createjs.Ticker.addEventListener("tick", stage);
            // Second movie
            canvas = document.getElementById("secondCanvas");
            exportRoot = new libSecond.second();
            stage = new createjs.Stage(canvas);
            stage.addChild(exportRoot);
            stage.update();
            createjs.Ticker.setFPS(libSecond.properties.fps);
            createjs.Ticker.addEventListener("tick", stage);
            // Third movie
            canvas = document.getElementById("thirdCanvas");
            exportRoot = new libThird.third();
            stage = new createjs.Stage(canvas);
            stage.addChild(exportRoot);
            stage.update();
            createjs.Ticker.setFPS(libThird.properties.fps);
            createjs.Ticker.addEventListener("tick", stage);
    });
  • Can multiple people post on the same shared photo stream?

    Is there a way to have multiple people post on the same shared photo stream?
    My cousin would like to have a single shared photo stream where we can both post photos on it and comment, instead of having two separate shared photo streams. 
    We think that there should be an option for the album creator to give access to anyone he/she shares it with to enable them to also post pics on the same stream.

    The only link I have is to manage your team account http://forums.adobe.com/thread/1460939?tstart=0
    The only Adobe program/process for team work that I know of is Adobe Anywhere, but that requires a very different process
    Adobe Anywhere http://www.adobe.com/products/adobeanywhere.html
    http://www.creativeimpatience.com/adobe-anywhere-enterprise-solution/

  • Can multiple APEX application use the same parsing schema?

    Hi,
    I have APEX 4.2 through the PL/SQL Gateway, an 11gR2 DB, and I'm using theme 24.
    Due to APEX's limitations for version control, I would be splitting 1 big ERP application into 24 different APEX applications, and each application would be considered one unit for version control.
    I have about 800 tables, and I would assume that all of these would need to be stored in 1 schema, since a lot of these tables are linked through FKs.
    Can I have multiple APEX apps using the same parsing schema, or is there a better way to do this?
    Thanks in advance!

    Hi,
    Multiple applications can have the same (one) parsing schema.
    You can test that on e.g. apex.oracle.com, where you normally have only one schema: you create multiple applications and they all use that same schema.
    Regards,
    Jari
    My Blog: http://dbswh.webhop.net/htmldb/f?p=BLOG:HOME:0
    Twitter: http://www.twitter.com/jariolai
    Edited by: jarola on Jan 28, 2013 7:15 PM
