Primary Interface

Hello,
I've read the documentation, but I'm still a little confused about what the primary interface is used for and how. It appears that the IP associated with this interface is registered with the CM. In my case I need the IP associated with Gi2/0 to be registered with the CM, and by changing the primary interface I can change this, but I'm not sure how it affects WCCP traffic etc. Please see below for my current config.
My Data Centre config is as follows:
2 x Cisco 4507R Core switches
2 x OE7341
WAE1 is connected to Core1 & WAE2 is connected to Core2. An L2 EtherChannel exists between the cores.
I'm using WCCP with L2 redirection and IP forwarding as egress to an HSRP address. Currently the primary interface is GigabitEthernet 1/0; the default gateway is on the same subnet as the primary interface. GigabitEthernet 2/0 is the management interface, with static routing in case Gi1/0 goes down.
Currently I can see all redirected WCCP traffic coming in and out of Gi1/0, which I presume is normal, with Gi2/0 sitting idle.
Should I re-configure so that WCCP traffic comes in via Gi1/0 but returns via Gi2/0, and if so, how do I achieve that configuration? Default gateway on the Gi2/0 interface?
Thanks in advance for any replies.
Regards
Craig

Hello Zach,
Thanks for the clarification!
How do I reset the 'Management IP' if the WAE is already registered to the CM? Do I presume correctly that I would need to de-register and then register again with the correct IP?
With regards to the WCCP config, I understand about the ingress redirection in terms of the lowest IP being used for registration. Should I leave it as default, with traffic being forwarded to the default gateway on the Gi1/0 interface, or is it best to have ingress on Gi1/0 and then egress on Gi2/0?
Basically I'm just trying to understand the best setup for WCCP using both the Gi1/0 & Gi2/0 Ethernet ports.
Thanks!
Craig
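
For reference, a minimal WAE-side sketch of the setup being discussed, with Gi1/0 carrying WCCP traffic and Gi2/0 left for management. The addresses are placeholders and the exact WCCP command syntax differs between WAAS releases, so treat this as an illustration to check against your version's documentation rather than a working config:
primary-interface GigabitEthernet 1/0
!
interface GigabitEthernet 1/0
 ip address 10.1.1.10 255.255.255.0
 exit
! Gi2/0 stays on the management subnet
interface GigabitEthernet 2/0
 ip address 10.1.2.10 255.255.255.0
 exit
!
! egress via IP forwarding to the HSRP address on the Gi1/0 subnet
ip default-gateway 10.1.1.1
!
! register with the real switch addresses (not the HSRP VIP); L2 redirect requires L2 adjacency
wccp router-list 1 10.1.1.2 10.1.1.3
wccp tcp-promiscuous router-list-num 1 l2-redirect mask-assign
wccp version 2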

Similar Messages

  • Solaris 8: Multiple primary interfaces connected to the same network

    I have a machine with Solaris 8, and it has multiple interfaces that are connected to the same network which means they all have metric 0 (1 hop) to the default gateway.
    assume:
    e1000g0: 192.168.30.70
    e1000g2: 192.168.30.72
    e1000g4: 192.168.30.74
    e1000g5: 192.168.30.76
    gateway: 192.168.30.65 (Cisco Router)
    However, despite the fact that they each have a direct connection, the machine seems to be using e1000g0 to access the 192.168.30.0 network to get to the default gateway and then to anywhere else.
    When I send a ping to, say, 192.168.30.74 (the IP of e1000g4) and capture packets on e1000g0, I see the "echo reply" messages going out of it as opposed to e1000g4, even though e1000g4 is the one receiving the "echo request". This should not happen; these interfaces should be completely independent, as they should all be advertising 1 hop to that network.
    The outputs from netstat -rn and ifconfig -a are shown in the picture on the link below
    [http://img836.imageshack.us/img836/7308/ifconfignetstathiddenip.jpg]
    This gets even more confusing when I go into the Cisco router and run the command: "show mac address-table" where only the MAC address of e1000g0 is shown for the switch port it's connected to, but not for the other interfaces which are connected to the switch. Yes, all ports are active (no shut) and are pingable.
    Also, the odd thing is that ALL of these individual MACs show up in the router ARP table when the machine comes up; however, after sending a ping to one of them, after a certain expiry or whatever period, the MACs disappear from the router ARP table and only the MAC for e1000g0 shows up. The ARP table of the Solaris machine, however, shows all the relevant MACs of each port of the router that it's physically connected to (this is actually a Cisco switch with the advanced IP services image and L3 routing turned on).
    Before anyone asks: the local-mac-address? setting does NOT exist in my machine and it never has, but it used to work fine. Also, from the ifconfig command, one can tell that all the MAC addresses are fine.
    I need to somehow assign all these interfaces equal priority and make them understand that they're physically connected to the 192.168.30.0 network and there's no need to go through e1000g0 to get to it.
    This is causing a lot of problems, as eventually all traffic will end up going through the e1000g0 interface and that will become a bottleneck.
    Please help. Thanks in advance.

    Ok thanks. That was a useful response.
    I did think about the trunking software that is claimed to be available for Solaris 8, but it's only available if you've got a paid support contract. Oracle came and ruined everything re: Sun support, which is so expensive now.
    The other confusion is, we never had that OR needed to configure trunking/link aggregation on this machine, so why now?
    Lastly, by your explanation, this should be expected and is "normal" behaviour, which would mean that this machine was always doing this and I only just noticed it this time? I thought if you turn off ipv4 forwarding and router function in the machine, it's every interface for itself. But it's not doing that :(
    So then the question is: can I force it? I've tried a bunch of things by manipulating the routing tables, and it seems to mess things up so that nothing gets through, or it shifts all the traffic to some other port, making the problem no different.
    Is there a way to give equal weight to all interfaces for the traffic to go directly through them that is originating at those ports?
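    One knob that is sometimes suggested for this is Solaris's strict destination multihoming setting. This is only a sketch, assuming the Solaris 8 ndd tunable of that name; note that it tightens which interface will accept a packet rather than forcing replies out a particular NIC, so test it before relying on it:
    # show the current value (0 = weak end-system model, the default)
    ndd -get /dev/ip ip_strict_dst_multihoming
    # enforce strict destination multihoming so an interface only answers for its own address
    ndd -set /dev/ip ip_strict_dst_multihoming 1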

  • Primary network interface and utadm config.

    Hello all,
    I am about to set up the Sun Ray server software for a school that I work at.
    It is a SunFire V880 server; it has a gigabit fibre card and a normal RJ-45 NIC as well.
    Based on our earlier SRSS servers, the fibre card is always dedicated to the Sun Rays, so we have a fibre backbone and switch kit dedicated to the Sun Rays. It does not touch the other network, which is all Windows plus the proxy server.
    So the fibre card is plugged into the dedicated Sun Ray infrastructure.
    The copper NIC is plugged into the normal Windows network.
    When I am installing Solaris it asks which NIC should be the primary; should the fibre card be the primary?
    I have noticed that the utadm script always uses the primary NIC details by default, and it is quite tricky changing the details.
    I want the copper NIC to be the primary interface because that NIC has the correct hostname for the server, which is lanhs01; the fibre NIC is lanhs01-ge0.
    The Windows DNS servers know the Sun server as lanhs01, which corresponds to the copper NIC, but if the fibre interface is the primary, the server will by default use lanhs01-ge0.
    What should I do here?
    Also, should I configure a dedicated Sun Ray interconnect or configure the dedicated network address?
    So basically ./utadm -A OR ./utadm -a ?
    I don't want any DHCP coming out of the copper NIC, nor any sort of Sun Ray service. I have noticed recently that our current Sun Ray server is piping out X sessions on the copper NIC, so a Sun Ray plugged into the Windows network obtains an address from the Windows DHCP servers, then finds the Sun server and uses its X server. This is not desirable, simply because of the bandwidth issues and also network standards.
    I hope that I have given enough info.
    Thanks for your time.

    When I am installing Solaris it asks which NIC should be the primary; should the fibre card be the primary?
    The NIC that connects the machine to your general-use network should be declared to be the primary. From your description that should be the copper RJ-45 NIC.
    I have noticed that the utadm script always uses the primary NIC details by default, and it is quite tricky changing the details.
    'utadm' uses the primary NIC information by default if you invoke it with '-A', which tells 'utadm' that the Sun Ray units will be deployed on your general-use network.
    If your Sun Rays will be deployed on a dedicated private subnet that is not connected at all to your general-use network, then you should invoke 'utadm' with the '-a' option, which tells 'utadm' that this network will be completely separate from your general-use network. In that case 'utadm' will default to using the information for the NIC that is connected to the dedicated subnet. It sounds like this is the option you want.
    It sounds like you've previously used 'utadm -A' (or 'utadm -L on'). That's what allows Sun Rays on the general-use network to get sessions from the Sun Ray server. You can use 'utadm -D' to undo the effects of 'utadm -A', or use 'utadm -L off' to undo the results of 'utadm -L on'. Or you can use 'utadm -r' to tear down all of the existing Sun Ray network configuration, then start fresh with 'utadm -a' for the fibre NIC.
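    Roughly, the sequence described above would look like the following (a sketch only; it assumes the fibre interface is ge0, as the lanhs01-ge0 hostname implies, so verify the interface name and options against the SRSS documentation for your release):
    # undo a previous shared-network setup, if one exists
    /opt/SUNWut/sbin/utadm -D
    # or tear down all existing Sun Ray network configuration
    /opt/SUNWut/sbin/utadm -r
    # configure the dedicated interconnect on the fibre NIC
    /opt/SUNWut/sbin/utadm -a ge0
    # cold-restart the Sun Ray services so the change takes effect
    /opt/SUNWut/sbin/utrestart -c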

  • How do I force LDAP to listen to only one interface?

    So I've been digging through this process and it seems possible, but none of my efforts have been fruitful.
    *The goal*... Have 10.6 slapd (LDAP) listen only on the primary interface (bonded EN) so that a secondary (virtual) IP can be used for another directory service. I know this isn't advised, but running a VM just for the task of KMS would waste resources. Purchasing a second Xserve is out of the question, and I/O (mail) is a heavy demand, so that cute Mini Server won't work in this environment.
    *The setup*... Single quad-core Xeon Xserve with more than enough "umph" running 10.6.2 (files.domain.com with a bonded EN *.20 only running AFP and SMB), using an Xserve RAID as the file storage, and Kerio MailServer (KMS, set up to listen on a secondary *.21 IP) using a mirrored internal array as storage, with a hardware firewall doing the security.
    Everything works now, except LDAP in KMS. So I need to get the 10.6 slapd to ONLY listen on *.20 so that my KMS will start on the same port, but with its own IP, *.21.
    I know this is possible, and Apple pointed me toward a man page for slapd...
    http://developer.apple.com/mac/library/DOCUMENTATION/Darwin/Reference/ManPages/man8/slapd.8.html
    It appears that "-h" is my flag. I've tried a handful of things, but nothing has worked. I also am fearful of preventing the system from seeing itself and having the files.. go down. This is a production server for a small business.
    Thanks for your thoughts!

    It would help to post what you've tried. It does, indeed, appear that -h is the switch, and the man page includes several examples, so if you've followed the examples and it's not working then that hints that you should pass it back to Apple.
    You also don't say how you're implementing your change. Are you invoking slapd manually, or are you editing /System/Library/LaunchDaemons/org.openldap.slapd.plist ?
    You should be doing the latter, and I'd expect to see something like this:
    <array>
      <string>/usr/libexec/slapd</string>
      <string>-d</string>
      <string>0</string>
      <string>-h</string>
      <string>ldap://x.x.x.20/ ldap://127.0.0.1/ ldapi://%2Fvar%2Frun%2Fldapi</string>
    </array>
    (Note the inclusion of ldap://127.0.0.1/ as an entry - you'll need to run on localhost as well as the .20 address, since all the local services will look to localhost.)
    You might also need an ldaps URL if you're using SSL, but let's walk before we try to run.
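    Once the plist has been edited, reloading the job should make launchd restart slapd with the new -h argument. A sketch (back up the plist first, since this is a production box):
    sudo launchctl unload /System/Library/LaunchDaemons/org.openldap.slapd.plist
    sudo launchctl load /System/Library/LaunchDaemons/org.openldap.slapd.plist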

  • Favoring FireWire interface over USB for external HDD

    I have my iMac hooked up to an external HDD via both USB and FireWire 800 as it is triple-interfaced (USB, FW 400 & FW 800).
    Lately, as I turn on the external HDD (pretty much always once the iMac is up & running), the HDD icon shows in the Finder with the USB symbol.
    It requires me to eject the HDD in the Finder and then disconnect the USB cable for the iMac to detect the external HDD again but now through FireWire.
    The reason I use both interfaces is to handle HDD read/write communication via FW, whereas USB serves only as a hub (the external HDD comes with 3 USB ports).
    Is there anything I can do to make sure the primary interface is FireWire and not otherwise?
    Thanks!

    In a recent phone call with OWC's tech support I asked the same question. Answer was "Do not connect both interfaces at the same time". There is no way to arbitrate the connection the Mac will use, and strange things will happen.
    If you think about it, there is no way to specify a preference in the Mac for what ports to connect with, so it does what it wants, not what you'd like.
    I can only imagine what happens on Windows machines. I suspect corruption ahead.

  • Role of multiple network interfaces

    Hi,
    I'm trying to setup a Mac Mini Server to act as a gateway and to offer various services from inside and outside of my office. For me this is some kind of test setup before I may look into getting a bigger machine for this job.
    Because a gateway needs two Ethernet interfaces, I got an Apple USB-Ethernet adapter, which technically works without problems. However, since the USB-Ethernet is slower than the internal Ethernet interface of the Mac Mini, I want to connect the USB interface to the internet (fixed IP, forward DNS set, reverse in the works) and the faster internal interface to my internal network.
    The server is set up to do NAT, but DHCP is done by another server, so the internal address is also manually set. There is also an internal DNS server; I've set up the server to use its own DNS service.
    It took me some time to figure out how to make the USB interface (en2) the primary interface by turning off the internal interface (en0) during installation and adding it at a later time. So when I'm now doing a changeip -checkhostname I can see that my external address is my primary interface and my public hostname is correct.
    However my biggest problem at the moment is that it seems as if all services are bound to the internal interface/network (en0) and I'm not able to access services like VPN, iChat or web from the internet or by using the public hostname.
    Do I have to somehow tell all these services to explicitly bind to the external interface/address? Or is there no other way than to use en0 as the external interface in my configuration?
    Thanks,
    Alex

    Please search for existing discussions of establishing static IP routes; this stuff can and does work, but requires manual set-up. IP selects a primary NIC in the absence of a known route regardless of the NIC the message arrived on, which means the Mac box doesn't work like you'd expect without some help. [Here's one discussion|http://discussions.apple.com/message.jspa?messageID=5697532], and there are others.
    Mac boxes don't make particularly easy nor effective nor efficient nor economical firewalls in my experience, and this is inherent in the design differences between a Mac and a dedicated firewall. External firewalls are an added expense for a small network, but (in my experience) tend to have advantages over using a Mac box doubling as a firewall.
    Given the number of folks that try this particular configuration, it would be useful for Apple to provide some guidance and some tools here to set up the static routes and to operate the Mac as a router (in some future release of Mac OS X Server after Snow Leopard Server 10.6); you're going to be using the bash shell to get this stuff going.
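    As a hedged illustration of the kind of manual route being referred to (the subnet and gateway here are placeholders for a network reachable via the internal interface; on 10.6 you would also need to make the route persistent, for example from a startup item or LaunchDaemon, since it is lost at reboot):
    # send traffic for an internal subnet via the router on the internal interface
    sudo route -n add -net 192.168.2.0/24 192.168.1.1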

  • Ethernet interface priority explanation

    I'm looking for an explanation behind the solution to a problem I recently faced. My configuration (this is on the latest Xserve w/ 10.5.6):
    Ethernet 1
    - NAT'd, static IP
    The Xserve was not providing NAT services to the local subnet. A linksys router with another static IP was acting as a gateway to the local subnet.
    Ethernet 2
    - public, static IP sitting behind a T1 router
    The problem I had was that initially I had configured Ethernet 1 as the "primary interface" so that the server "looked in" rather than out by default. Consequently, I was unable to ping the external IP on Ethernet 2 from my home server (via an SSH connection). (At one point, I thought I was able to ping the external IP from another device sitting behind my T1 router, but I'm not sure this was ever the case.) And yes, I had the firewalls configured to let pings through.
    The only solution (and I was able to see this real time by watching my home server attempt to ping the Xserve's external IP) was to make Ethernet 2 the "primary interface". At this point, ping replies started coming back. I have no problem with this solution--I would just like to understand the reasoning behind it.
    My theory is that the Xserve was trying to send replies out the local NAT'd interface (since it was primary) even though the initial ping message came from the secondary interface.
    Background: my overall goal was to use multiple Xserves to distribute remotely accessible network services filtered through the firewall while still maintaining internal NAT'd interfaces so that the Xserves could be administered via the private, local subnet. DNS is setup so that the NAT'd subnet is a subdomain of the external domain.
    Could somebody provide a little insight into this phenomenon so that more of us may better understand how things work?

    ... that ping reply still should have made it back to the source albeit routed through internal network first and out through the NAT gateway. Is this not the case?
    No. Absolutely not.
    Say your machine has two interfaces, a real-world IP 234.56.78.9 and a NAT'd address 192.168.1.2.
    The NAT interface has a router address set to route through 192.168.1.1. The router has a public interface address of 65.43.2.1
    Now, any outgoing traffic flowing through the NAT device gets NAT'd to the 65.43.2.1 address.
    So here's the scenario. Remote user pings 234.56.78.9
    Mac hears the ping and sends a reply via its 192.168.1.2 to its default router.
    Now, depending on the router make/model one of two things happen. Either:
    1) the router is smart enough to realize that the ping reply is in response to a ping request it never saw and drops the packet (it won't send a reply packet to a query that never came in). or
    2) the router NATs the traffic to 65.43.2.1 and sends the reply.
    In the case of 2, the remote user gets a ping reply from a completely different address than it sent out - it pinged 234.56.78.9 and got a reply from 65.43.2.1
    It quite rightly ignores the ping reply as being bogus. It doesn't know your internal network and doesn't expect to get a ping reply from a completely different address than it pinged in the first place.

  • Choosing VPN interface

    I finally got my Mac Mini server not long ago, and I love it, but I ran into a minor problem with the VPN.
    I'll explain my network setup first:
    - I have two separate WAN connections
    -- WAN1 <==> Airport Extreme =en2=[ Lion Server ]=en0=[ internal network ]
    The network that is behind the Airport Extreme only contains the server's secondary interface.
    -- WAN2 <==> BSD Firewall <==> Switch ==[ internal network ]
    The network that is behind the BSD Firewall contains all our local machines; I do not want to punch any holes in this firewall.
    This is where en0 from the server is connected. En0 is also set as the server's primary interface.
    My thought behind this setup was to have VPN access for me and my friends to the network that is on WAN1, without letting them access
    or see any of the machines located on the internal network.
    The problem is that when we try to connect through WAN1, we get no response at all.
    - the correct ports are open on the Airport Extreme
    - the VPN server is running
    - it's configured to assign IP addresses on the subnet of the Airport Extreme; none of these addresses can be assigned by the Airport Extreme.
    - we tried to connect on the Airport Extreme's subnet, same issues
    If we connect from the subnet behind the BSD firewall
    - no problem at all
    The server will only let us connect to its primary interface (en0); any attempts to connect to its secondary interface (en2) using VPN are ignored.
    Is there some way to choose which interface the VPN server should listen on?
    I have looked at the output from running
    $ sudo serveradmin settings vpn
    but there is no indication of any setting that dictates which interface it should listen on.
    Is this something that is even possible to set or in other ways configure?

    Yes it is. Not sure where it is in PDM, somewhere in the vpn options, but is probably a check box that says something about allowing inbound ipsec sessions to bypass interface access lists. Is this pix 6 or version 7? Oops, skimmed through too fast, you're doing pptp, don't think that will work. Post a clean config.

  • Prevent LBFO Primary/Standby fallback

    Hi All,
    I have a client requirement to implement network redundancy on a Windows Server 2012 R2 platform. So enter two switches, firewalls, routers etc.
    As part of this I have set up an LBFO team on NIC 1 & 2.
    The client has since informed me that they have (in the past) had flaky inter-site links, which resulted in the NIC team flapping between the primary and standby NIC as the primary network interface went up and down. Consequently, one of the tests that I need to implement and pass is that if the active primary interface fails, the standby picks up the active role until such a time as the secondary interface fails, rather than swapping back to the primary interface when the connection is restored.
    TIA
    Jason

    The best way to avoid this problem with Active/Standby is to not use it.  Generally Active/Active mode is better than Active/Standby.
    As you've noticed, Active/Standby mode forces an additional failover event:
    An active NIC goes down, so there is a failover to the standby NIC.
    The previously-active NIC is restored, so there is a failover from the standby NIC back to the newly-restored NIC.
    This additional failover is by design.  The design of Active/Standby is to designate a standby NIC, and the team tries to keep that NIC on standby as much as possible.
    Because of this additional failover, Active/Active mode actually gives you better fault-tolerance than Active/Standby. And of course, Active/Active also makes better use of your available bandwidth, since you get to use the bandwidth of the redundant NIC when there is no fault. Finally, note that Active/Standby does not have any advantage in failover latency or "effectiveness"; the failover performed by Active/Active is just the same as that performed in Active/Standby.
    Therefore, I actually recommend that Active/Active be your configuration of first resort. Active/Active is better than or equal to Active/Standby in most respects. (Exception: if you have a configuration where one NIC really is worse than the others, e.g., it has a lower link speed, then Active/Standby may be appropriate.)
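    A hedged PowerShell sketch of the Active/Active recommendation (the team and NIC names are placeholders; verify the cmdlets and parameter values against your 2012 R2 build):
    # Create a switch-independent team with both members active (no standby adapter designated)
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    # If an existing team has a standby member, returning it to Active makes the team Active/Active
    Set-NetLbfoTeamMember -Team "Team1" -Name "NIC2" -AdministrativeMode Active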

  • Using TCP on the second ethernet interface

    Hello,
    I'm using a PXI 8109 running Pharlap.
    I'm trying to use the second ethernet interface of my PXI to send UDP and TCP packets. The primary interface is used to manage Veristand Channels.
    Here is the configuration of my two ethernet interfaces: 
    - eth0 (primary):
    IP : 10.0.0.3
    subnet mask : 255.0.0.0
    - eth1 :
    IP : 192.168.10.9
    subnet mask : 255.255.255.0
    For UDP, I have no problems; packets are sent via the second interface as I want. I think it works because there is a "network address" input on the "UDP Open" VI, so the system can choose the right interface.
    For TCP, I use the "TCP Open Connection" VI, but there is no such input. And it is not working: I assume the system is trying to use the primary interface but it can't route the packets...
    For information, my two networks are physically independent.
    Can you help me find out what's going on? Is it possible to use TCP on the second Ethernet interface?
    Many thanks,
    Regards,
    Laurent

    Sorry, but I don't understand your input problem.
    Could you give me more details?
    The link below may help you:
    http://digital.ni.com/public.nsf/allkb/67F94BB93BCE32CF86257367006B3659?OpenDocument
    Best regards
    Aurélien Corbin
    National Instruments France

  • Loopback0 vs. "Backup interface GigabitEthernetx/x" for redundancy

    Hi,
    I am a voice engineer. To configure MGCP gateways I have used the Cisco standard method of creating a loopback interface, especially when there are redundant switch connections, e.g. GigabitEthernet0/0 and GigabitEthernet0/1. Can anybody please explain whether the "backup interface" method would work the same way? The backup interface method also has a backup delay; I'm not sure what the default is, but the secondary interface won't become active until the delay elapses. If I configure "no backup delay", would there still be an event for MGCP? Can somebody please share their experience?

    If your primary interface fails, is there an event for MGCP? What happens to active calls?
    Update: I just found that the default delay is 0 seconds (which means "no backup delay" does not need to be explicitly added), so I assume the MGCP gateway should not experience any impact at all. If you have done this testing in your environment, please confirm.
    And thanks a lot for the quick reply!
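    For reference, a hedged IOS sketch of making the delay explicit on the primary interface (interface names are placeholders; "backup delay 0 0" simply states the zero-second enable and disable delays mentioned above as the default):
    interface GigabitEthernet0/0
     backup interface GigabitEthernet0/1
     backup delay 0 0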

  • Hosting all services with a dual (internal / external) interface

    There is no end to the trouble I'm running into trying to get the following setups working. I've re-installed from scratch about five times today to try different configurations, and none work completely.
    Option A. Preferred Configuration
    WAN ↔ Hardware Firewall/Router & Xserve
    Hardware Firewall ↔ Switch ↔ Workstations & Xserve
    Option B. Gateway Configuration
    WAN ↔ Xserve ↔ Switch ↔ Workstations
    General Info
    Xserve host name of 'trinity.cicworks.com' and matching DNS which mirrors the externally hosted DNS, allowing the local network to talk to the server directly, vs. bouncing out then back into the network. Also hosts DHCP, Open Directory, Web, Mail, Update Server, AFP, NFS, NetBoot, etc.
    Website, mail, etc. need to be accessible from both the WAN connection and LAN. Other services are LAN only.
    Questions
    For each setup, which port should the LAN and WAN be connected to?
    How the ** do I get the Open Directory working on the LAN if the primary interface is the WAN? (All attempts at going the gateway route—pun intended—required that the WAN be primary, which instantly killed any hope for my Domain working.)
    A link to step-by-step instructions for one of these setups would be perfect. Multiple links to getting the appropriate network layout set up, then the other services within the network layout would be great.
    I've googled and googled, searched the mailing lists, PDFs, and discussions, and can't find an example setup like this. The last Apple Tech I talked to was stumped and very apologetic for only being able to suggest a re-install.
    My apologies if I just wasn't using the right search terms and there is already a thread regarding this. I'm a n00b.
    Xserve — Mac OS X Server (10.4.10) — 2 × Gigabit Ethernet

    I had the setup you describe (WAN ↔ Hardware Firewall/Router ↔ Switch ↔ Workstations & Xserve) running for a long time. Now that we have external services that we are bringing online (web, mail, VoIP) we need the Xserve to have its own distinct external address. In the future this may be on a dedicated line, separate from the damaging effects of user network usage. Thus, connecting the hardware firewall/router to the WAN and the Xserve to the WAN, in addition to the LAN.
    Option A is having the Xserve connected to the LAN switch and the WAN simultaneously, thus getting around the crappy hardware firewall we have and giving the Xserve an outside address distinct from the LAN's hardware firewall. WAN traffic on the LAN sent to the internet is sent through the hardware firewall, WAN traffic on the Xserve is sent through its own connection, not the LAN.
    Our hardware firewall/router is not smart enough to be able to assign external addresses to internal hosts, which is unfortunate. Previously, we were DMZ'ing the Xserve, which is no longer acceptable.
    I've done a week's worth of research, plus a call with an Apple enterprise support guy that went nowhere - he had no clue how to take our existing system (en0 LAN, en1 WAN, everything going through LAN) to the setup we need - he gave a good effort, but in the end could only suggest a reinstall. Issues like "kadmin: Client not found in Kerberos database while initializing kadmin interface" and "kadmin: Missing parameters in krb5.conf required for kadmin client while initializing kadmin interface" when using changeip didn't help.
    The system was running perfectly, hosting 300GB worth of data across 17 user accounts, with a few hundred megs of IMAP e-mail in its previous configuration. Luckily, this usage is "light" and most people have laptops and can continue doing what they are doing with their mobile accounts.
    Now that I've re-installed from scratch, my basic question is: how do I have the primary network interface be the WAN (and thus have external traffic leaving on the WAN interface, not through the LAN) and still have Open Directory on the LAN? I've read through all of the Apple docs on the associated services and found nothing. I've never found the docs useful.

  • Database design to support parameterised interface with MS Excel

    Hi, I am a novice user of SQL Server and would like some advice on how to solve a problem I have. (I hope I have chosen the correct forum to post this question)
    I have created a SQL Server 2012 database that comprises approx 10 base tables, with a further 40+ views that either summarise the base table data in various ways or build upon other views to create more complex data sets (up to 4 levels of views).
    I then use EXCEL to create a dashboard that has multiple pivot table data connections to the various views.
    The users can then use standard excel features - slicers etc to interrogate the various metrics.
    The underlying database holds a single day's worth of information, but I would like to extend this to cover multiple days' worth of data, with the Excel spreadsheet having a cell that defines the date for which information is to be retrieved. (The underlying data tables would need to be extended to have a date field.)
    I can see how the Excel connection string can be modified to filter the results so that a column value matches the date field, but how can this date value be passed down through all the views to ensure that information from the base tables is restricted to the specified date, rather than the final result set being passed back to Excel? I would rather not have the server resolve the views for the complete data set.
    I considered parameterisation of views, but I don't believe views support parameters. I also considered stored procedures, but I don't believe that stored procedures allow result sets to be used as pseudo tables.
    What other options do I have, or have I failed to grasp the way SQL Server creates its execution plans, and simply having the filter at the top level will ensure the result set is minimised at the lower level? (I don't really want the time taken for the dashboard refresh to increase - it currently takes approx 45 seconds following SQL Server Engine Tuning Advisor recommendations.)
    As an example of 3 of the views, 
    Table A has a row per system event (30,000+ per day), each event having an identity and a TYPE, e.g. Arrival or Departure, with a time of event and a planned time for the event (a specified identity will have a sequence of Arrival and Departure events).
    View A compares separate rows to determine how long elapsed between the Arrival and Departure events for an identity.
    View B compares separate rows to determine how long between the planned Arrival and Departure events for an identity.
    View C uses View A and View B to provide the variance between actual and planned.
    Excel dashboard has graphs showing information retrieved from Views A, B and C. The dashboard is only likely to need to query a single days worth of information.
    Thanks for your time.

    You are posting in the database design forum, but it seems to me that you have 2 separate but highly dependent issues - neither of which is really database-design related at this point. Rather, you have a user interface issue and a database programmability issue. Those I cannot really address, since much of that discussion requires knowledge of your users, how they interface with the database, what they use the data for, etc. In addition, it seems that Excel is the primary interface for your users - so it may be that you should post your question to an Excel forum.
    However, I do have some comments.  First, views based on views is generally a bad approach.  Absent the intention of indexing (i.e., materializing) the views, the db engine does nothing different for a view than it does for any ad-hoc query. 
    Unfortunately, the additional layering of logic can impede the effectiveness of the optimizer.  The more complex your views become and the deeper the layering, the greater the chance that you befuddle the optimizer. 
    I would rather not have the server resolve the views for the complete data set
    I don't understand the above statement but it scares me.  IMO, you DO want the server to do as much work as possible since it is closest to the data and has (or should have) the resources to access and manipulate the data and generate the desired
    results.  You DON'T want to move all the raw data involved in a query over the network and into the client machine's storage (memory or disk) and then attempt to compute the desired values. 
    I considered parameterisation of views, but I dont believe views support parameters, I also considered stored procedures, but I dont believe that stored procedures allow result sets to be used as pseudo tables.
    Correct on the first point, though there is such a thing as a TVF which is similar in effect.  Before you go down that path, let's address the second statement.  I don't understand that last bit about "used as pseudo tables" but that sounds more
    like an Excel issue (or maybe an assumption).  You can execute a stored procedure and use/access the resultset of this procedure in Excel, so I'm not certain what your concern is.  User simplicity perhaps? Maybe just a terminology issue?  Stored
    procedures are something I would highly encourage for a number of reasons.  Since you refer to pivoting specifically, I'll point out that sql server natively supports that function (though perhaps not in the same way/degree Excel does).   It
    is rather complex tsql - and this is one reason to advocate for stored procedures.  Separate the structure of the raw data from the user.
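    To make the TVF point concrete, here is a minimal sketch of an inline table-valued function that takes the report date as a parameter; the table and column names are assumptions based on the description of Table A:
    CREATE FUNCTION dbo.EventsForDate (@ReportDate date)
    RETURNS TABLE
    AS
    RETURN
    (
        SELECT EventId, EventType, EventTime, PlannedTime
        FROM   dbo.TableA
        WHERE  EventTime >= @ReportDate
          AND  EventTime <  DATEADD(day, 1, @ReportDate)
    );
    -- the dashboard (or a stored procedure) can then restrict everything built on top of it to one day:
    -- SELECT * FROM dbo.EventsForDate('2014-06-30');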
    (I dont really want the time taken for the dashboard refresh to increase - it currently takes approx 45 seconds following SQL Server Engine Tuning Advisor recommendations)
    DTA has its limitations.  What it doesn't do is evaluate the "model" - which is where you might have more significant issues.  Tuning your queries and indexing your tables will only go so far to compensate for a poorly designed schema (not that
    yours is - just a generalization).  I did want to point out that your refresh process involves many factors - the time to generate a resultset in the server (including plan compilation, loading the data from disk, etc.), transmitting that data over the
    network, receiving and storing the resultset in the client application, manipulating the resultset into the desired form/format), and then updating the display.  Given that, you need to know how much time is spent in each part of that process - no sense
    wasting time optimizing the smallest time consumer. 
    So now to your sample table - Table A.  First, I'll give you my opinion of a flawed approach.  Your table records separate facts about an entity as multiple rows.  Such an approach is generally a schema issue for a number of reasons. 
    It requires that you outer join in some fashion to get all the information about one thing into a single row - that is why you have a view to compare rows and generate a time interval between arrival and departure.  I'll take this a step further and assume
    that your schema/code likely has an assumption built into it - specifically that a "thing" will have no more than 2 rows and that there will only be one row with type "arrival" and one row with type "departure". Violate that assumption and things begin to
    fall apart.  If you have control over this schema, then I suggest you consider changing it.  Store all the facts about a single entity in a single row.  Given the frequency that I see this pattern, I'll guess that you
    cannot.  So let's move on.
    30 thousand rows is tiny, so your current volume is negligible.  You still need to optimize your tables based on usage, so you need to address that first.  How is the data populated currently?  Is it done once as a batch?  Is it
    done throughout the day - and in what fashion (inserts vs updates vs deletes)?  You only store one day of data - so how do you accomplish that specifically?  Do you purge all data overnight and re-populate?   What indexes
    have you defined?  Do all tables have a clustered index or are some (most?) of them heaps?   OTOH, I'm going to guess that the database is at most a minimal issue now and that most of your concerns are better addressed at the user interface
    and how it accesses your database.  Perhaps now is a good time to step back and reconsider your approach to providing information to the users.  Perhaps there is a better solution - but that requires an understanding of your users, the skillset of
    everyone involved, what you have to work with, etc.  Maybe just some advanced excel training? I can't really say and it might be a better question for a different forum.   
    One last comment - "identity" has a special meaning in sql server (and most database engines I'm guessing).  So when you refer to identity, do you refer to an identity column or the logical identity (i.e., natural key) for the "thing" that Table A is
    attempting to model? 

  • Cisco BackUp Interface Operation Failing

    OK Experts,
    I have made this issue very simple for you guys to help me out.
    I have two routers, R22 and R23. I have configured the backup interface on R22 to be interface fas 0/1. Everything appears to be working fine. I issue the command show backup and I get the following on R22:
    r22#show backup
    Primary Interface   Secondary Interface   Status
    FastEthernet0/0     FastEthernet0/1       normal operation
    r22#
    Also I get the following:
    FastEthernet0/1            10.10.13.2      YES manual standby mode          down
    However, when I shut down interface fas 0/0 to test the backup interface fas 0/1, I get the following:
    r22#show backup
    Primary Interface   Secondary Interface   Status
    FastEthernet0/0     FastEthernet0/1       disabled
    r22#
    FastEthernet0/1            10.10.13.2      YES manual standby mode/disabled down
    So it doesn't work.
    Attached are the configs.
    I was wondering if someone could help me figure out why this won't work.
    Cheers

    Hello Carlton,
    there are some notes about your tests:
    a) The configuration of the primary interface includes two logical interfaces: the main interface and a VLAN-based subinterface; both fail when you disable fas0/0. The secondary interface has IP configuration only at the main interface level.
    from your log files:
    interface FastEthernet0/0
    backup interface FastEthernet0/1
    ip address 10.10.14.2 255.255.255.0
    duplex auto
    speed auto
    interface FastEthernet0/0.1
    encapsulation dot1Q 12
    ip address 10.10.12.2 255.255.255.0
    You should remove the subinterface fas0/0.1, as a minimum, to make the interface configurations compatible.
    b) The specific type of interface you would like to use for backup is LAN-based FastEthernet.
    The backup command was introduced to provide a backup interface for serial interfaces, and the secondary interface may be a serial interface or ISDN-based (in that case a DDR call is triggered over ISDN).
    The dial backup command reference says that support for gigabit interfaces on the C7600 was introduced later.
    see
    http://www.cisco.com/en/US/docs/ios/dial/command/reference/dia_a1.html#wp1012054
    The use of the backup command may or may not be supported for FastEthernet interfaces on your routers.
    If it is not supported, you can easily implement an alternate solution, because you are running OSPF in area 0 on all router interfaces:
    router ospf 1
    log-adjacency-changes
    >>network 0.0.0.0 255.255.255.255 area 0
    So all you need to do  is to increase OSPF cost on fas0/1 to create a backup path
    on R22, R23:
    conf t
    interface fas0/1
    ip ospf cost 50
    Hope to help
    Giuseppe

  • Dedicated hardware interface using iTouch?

    Hello, my company is looking into SDK 3.0 for its Accessory interface. In fact, we've purchased a MacBook, joined the program, and yesterday got my hello-world program down to the device; now I'm learning more about Cocoa.
    We are considering ways to use the itouch as an interface to our hardware devices, and a possible problem occurred to me:
    I'll bet that you can't write a program that takes total control of the device, can you? By this I mean, can you write a program that starts when you power on the device, and that doesn't let you go back to the home screen when you push the button? I'm thinking maybe not, whatever we write will always be an app tied to a button on the main menu, right? On power up, the user will always have to navigate to our app. (Thus it probably shouldn't be a primary interface for some hardware.)
    Am I guessing correctly?
    Thanks in advance...

    a real control interface
    I thought we were discussing a virtual interface?
    User/operator interaction w/any system is a careful balance. Since we're talking about using the dock connector and depending on a harness, permanent control (at least while connected) seems fair.
    I know of more than one system that depends on controller detachment as part of system control security. Just like locking a control box lid, etc. No controller...no control. Not a new concept I think.
    In terms of an app, look at the Nike shoe controller...it stays up during each run, and I'd expect that type of scenario to be part of the new interface controls being put out now. I can imagine overrides, but that doesn't mean your app's interface has to close/shut down entirely.
