5596 - zones best practice

I've never done storage networking before, and while configuring and testing our zone/zoneset config I ended up adding the targets and all the initiators to the same zone.  I'm positive this is not best practice, and I'm hoping someone will point me in the right direction.
This is working fine with the config below; the VMs are seeing their LUNs.  Should I create a separate zone for each target, so that each host would have two zones, one with target VNX-SP-A and one with VNX-SP-B, on each switch? All in the same zoneset?
SW1:
!Full Zone Database Section for vsan 31
zone name NX1-VNX-SP-A vsan 31
    member pwwn 50:06:01:64:08:60:21:65
!               [VNX-SP-A]
    member pwwn 50:06:01:6d:08:60:21:65
!               [VNX-SP-B]
    member pwwn 22:ce:00:2a:6a:9d:d4:7f
!               [UCS-FI-A]
    member fcid 0xef00ef
    member fcid 0xef01ef
    member fcid 0x0c0001
    member pwwn 20:00:00:25:b5:b0:00:1e
!               [ESX-01]
    member fcid 0xef0002
    member pwwn 20:00:00:25:b5:b0:00:3f
    member fcid 0xef0003
    member pwwn 20:00:00:25:b5:b0:00:3e
    member fcid 0xef0004
    member pwwn 20:00:00:25:b5:b0:00:1f
    member fcid 0xef0005
zoneset name ZoneSet_NX1-VNX-SP-A vsan 31
    member NX1-VNX-SP-A
zoneset activate name ZoneSet_NX1-VNX-SP-A vsan 31
SW2:
!Full Zone Database Section for vsan 32
zone name NX1-VNX-SP-A vsan 32
    member pwwn 50:06:01:65:08:60:21:65
!               [VNX-SP-A]
    member fcid 0x1400ef
    member pwwn 50:06:01:6c:08:60:21:65
!               [VNX-SP-B]
    member fcid 0x1401ef
    member pwwn 22:cf:00:2a:6a:9d:d7:3f
!               [UCS-FI-B]
    member fcid 0x140001
    member pwwn 20:00:00:25:b5:b0:00:0e
!               [ESX-01]
    member fcid 0x140002
    member pwwn 20:00:00:25:b5:b0:00:2f
    member fcid 0x140003
    member pwwn 20:00:00:25:b5:b0:00:2e
    member fcid 0x140004
    member pwwn 20:00:00:25:b5:b0:00:0f
    member fcid 0x140005
zoneset name ZoneSet_NX2-VNX-SP-A vsan 32
    member NX1-VNX-SP-A
zoneset activate name ZoneSet_NX2-VNX-SP-A vsan 32

Most storage vendors mandate single-initiator / single-target zoning; this can of course turn into quite a bit of work.
Cisco has a proprietary feature called Smart Zoning that cuts down on that effort; see e.g.
http://www.cisco.com/c/dam/en/us/products/collateral/storage-networking/mds-9100-series-multilayer-fabric-switches/at_a_glance_c45-708533.pdf
Caveat: not every platform supports Smart Zoning (the N5k doesn't for the moment).
My 2c: use device-aliases for building your zones, and use enhanced zoning (to be enabled per VSAN).
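For what it's worth, here is a minimal sketch of what that could look like on SW1, reusing two of the pWWNs from your config above (the device-alias and zone names are made up for illustration):
!Assumed device-alias entries; the commit applies in enhanced device-alias mode
device-alias database
  device-alias name VNX-SP-A pwwn 50:06:01:64:08:60:21:65
  device-alias name VNX-SP-B pwwn 50:06:01:6d:08:60:21:65
  device-alias name ESX-01-A pwwn 20:00:00:25:b5:b0:00:1e
device-alias commit
!Enhanced zoning, enabled per VSAN
zone mode enhanced vsan 31
!One small zone per initiator/target pair
zone name ESX-01-A_VNX-SP-A vsan 31
    member device-alias ESX-01-A
    member device-alias VNX-SP-A
zone name ESX-01-A_VNX-SP-B vsan 31
    member device-alias ESX-01-A
    member device-alias VNX-SP-B
zoneset name ZoneSet_NX1 vsan 31
    member ESX-01-A_VNX-SP-A
    member ESX-01-A_VNX-SP-B
zoneset activate name ZoneSet_NX1 vsan 31
zone commit vsan 31
Repeat the pair-per-host pattern for the other vHBAs (and mirror it on SW2 in vsan 32). With Smart Zoning, where supported, you could instead keep one larger zone and tag each member as initiator or target.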

Similar Messages

  • Software Installs for Zones - Best Practices

    I haven't found anything in the documentation on this subject yet. I'm wondering what the best way is to install software for non-global zones. Let's take a simple setup, say a web server and a database, and I want them in separate non-global zones. I would like to use sparse root zones and have my software in /usr/local. The problem is that if I add the software to /usr/local in the global zone, the web server zone has access to my DB install in /usr/local. I know I will be putting the data elsewhere, but I would rather not have that binary accessible. Is that possible? These are not package installs; they are binary distributions: Glassfish and Postgres.
    If anyone has any answers or input it would be much appreciated.

    We are using zones as part of a whole security solution. I don't see installing whole root zones as a solution; I may as well create another Solaris 10 server in that case, since we are in a VM cluster anyway.
    I was able to find some documentation on this subject, and there is a more sensible solution that applies to sparse root zones.
    Let's say you want each child zone to have its own writable /usr/local directory. First create the /usr/local directory within the global zone. Next, add a file system to the zone configuration. Say your zones are in /zones and you have zone1 and zone2. Create the /zones/zone1/local directory with 700 permissions. In the zone config, set the fs resource's special to that directory and its dir to /usr/local, and give it rw,nodevices for options (see the sketch below). Now zone1 has its own writable /usr/local that zone2 cannot access.
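    A minimal sketch of those steps, assuming the zone is named zone1 and lives under /zones (a directory-backed fs resource is a lofs mount):
    # in the global zone: create the backing directory for zone1
    mkdir -p /zones/zone1/local
    chmod 700 /zones/zone1/local
    # add the fs resource to the zone configuration
    zonecfg -z zone1
    zonecfg:zone1> add fs
    zonecfg:zone1:fs> set special=/zones/zone1/local
    zonecfg:zone1:fs> set dir=/usr/local
    zonecfg:zone1:fs> set type=lofs
    zonecfg:zone1:fs> add options [rw,nodevices]
    zonecfg:zone1:fs> end
    zonecfg:zone1> commit
    zonecfg:zone1> exit
    Repeat with /zones/zone2/local for zone2; the mount shows up inside the zone on its next boot.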

  • Best practice RAC installation in two datacenter zones?

    Datacenter has two separate zones.
    In each zone we have one storage system and one rac node.
    We will install RAC 11gR2 with ASM.
    For data we want to use diskgroup +DATA, normal redundancy mirrored to both storage systems.
    For CRS+Voting we want to use diskgroup +CRS, normal redundancy.
    But for CRS+Voting diskgroup with normal redundancy we need 3 luns and we have only 2 storage systems.
    I believe the third lun is needed to avoid split brain situations.
    If we put two LUNs on storage #1 and one LUN on storage #2, what will happen when storage #1 fails, meaning two of the three disks for diskgroup +CRS are inaccessible?
    What will happen, when all equipment in zone #1 fails?
    Is human intervention required at failure time, or when zone #1 comes back up?
    Is there a best practice for a 2-zone 2-storage rac configuration?
    Joachim

    Hi,
    As far as voting files are concerned, a node must be able to access more than half of the voting files at any time (simple majority). In order to tolerate the failure of n voting files, at least 2n+1 of them must be configured for the cluster.
    The problem in a stretched cluster configuration is that most installations use only two storage systems (one at each site), which means the site that hosts the majority of the voting files is a potential single point of failure for the entire cluster. If the storage or the site where n+1 voting files are configured fails, the whole cluster goes down, because Oracle Clusterware loses the majority of its voting files.
    To prevent a full cluster outage, Oracle supports a third voting file on an inexpensive, low-end, standard NFS-mounted device somewhere in the network. Oracle recommends putting the NFS voting file on a dedicated server that belongs to a production environment.
    Use the White Paper below to accomplish it:
    http://www.oracle.com/technetwork/database/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf
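    Very roughly, the approach in that paper looks like the following (paths, sizes and names are illustrative; take the exact NFS mount options and sizing from the paper):
    # on each cluster node: mount the NFS export that will hold the third voting file
    mount -t nfs -o rw,hard,tcp,noac nfsserver:/votedisk /voting_disk
    # create a zero-filled container file for the voting disk
    dd if=/dev/zero of=/voting_disk/vote_3 bs=1M count=500
    -- in ASM (11.2), add it to +CRS as a quorum failgroup (it will only ever hold a voting file)
    SQL> ALTER DISKGROUP crs ADD QUORUM FAILGROUP nfsvote DISK '/voting_disk/vote_3';
    The ASM disk string must include /voting_disk/* so ASM can discover the file.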
    Also, regarding how the voting files and OCR should be stored in an 11.2 configuration when using ASM, I recommend you read this:
    {message:id=10028550}
    Regards,
    Levi Pereira

  • Best Practices for DB2/UDB in Containers/Zones

    I see there is a best-practices guide for Oracle in zones; does anyone have a similar guide for DB2?
    Thanks

    DB2 works for me in zones; I just set /etc/system like this:
    * System V message queue tunables
    set msgsys:msginfo_msgmax = 65535
    set msgsys:msginfo_msgmnb = 65535
    set msgsys:msginfo_msgssz = 32
    set msgsys:msginfo_msgseg = 32767
    set msgsys:msginfo_msgmap = 65535
    set msgsys:msginfo_msgmni = 3584
    set msgsys:msginfo_msgtql = 3584
    * System V semaphore tunables
    set semsys:seminfo_semmap = 1026
    set semsys:seminfo_semmni = 4096
    set semsys:seminfo_semmns = 16384
    set semsys:seminfo_semmnu = 2048
    set semsys:seminfo_semume = 256
    set semsys:seminfo_semmsl = 100
    set semsys:seminfo_semopm = 100
    * System V shared memory tunables
    set shmsys:shminfo_shmmax = 4294967295
    set shmsys:shminfo_shmmni = 1024
    set shmsys:shminfo_shmseg = 1024
    * file descriptor soft limit
    set rlim_fd_cur = 1024
    ;)

  • Looking for best practices when creating DNS reverse zones for DHCP

    Hello,
    We are migrating from ISC DHCP to Microsoft DHCP. We would like the DHCP server to automatically update DNS A and PTR records for computers when they get an IP. The question is, what is the best practice for creating the reverse lookup zones in DNS? Here is an example:
    10.0.1.0/23
    This would give out IPs from 10.0.1.1-10.0.2.254. So with this in mind, do we then create the following reverse DNS zones?:
    1.0.10.in-addr.arpa AND 2.0.10.in-addr.arpa
    OR do we only create:
    0.10.in-addr.arpa, so that both the 10.0.1.x and 10.0.2.x addresses get stuffed into that one zone?
    Or is there an even better way that I haven't thought about? Thanks in advance.

    Hi,
    Based on your description, both methods work: create the two reverse DNS zones 1.0.10.in-addr.arpa and 2.0.10.in-addr.arpa (which match the /23 exactly), or create the single zone 0.10.in-addr.arpa (which covers all of 10.0.0.0/16).
    Best Regards,
    Tina
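    If it helps, a minimal sketch of creating the two /24 reverse zones from the command line (assuming AD-integrated zones; dnscmd ships with the DNS server tools):
    dnscmd /ZoneAdd 1.0.10.in-addr.arpa /DsPrimary
    dnscmd /ZoneAdd 2.0.10.in-addr.arpa /DsPrimary
    Enable secure dynamic updates on the zones so the DHCP server can register the PTR records.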

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that makes tracking devices (GPS devices). Our SQL Server database is designed with one table per device we sell; currently there are 2,500 tables in our database, all with the same columns, differing only in table name. Each device sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice for designing a database in this situation?
    When accessing the database from a C# application, which is better to use: direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    Tables columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I've come across in my 9 months at the company (I work as a software engineer, but I am planning to take over database maintenance, since no one is maintaining it right now and there is nothing more I can do in the code to make it faster).
    At the end of every month our clients generate reports on the previous month for all their cars; some clients have 100+ cars, some have few. This is when the real issues start: they pull their data from our server over the Internet while 2,000 units are sending data to it, and they keep getting read timeouts because the constant inserts block all the SELECT commands. I solved it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve. The problem is that whoever wrote the C# app used hard-coded SQL statements
    AND
    the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003. 
    Now talking about reports, there are summary reports, stops reports, zone reports ..etc most of them depend usually on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to enable snapshots so that SELECT statements don't get starved by INSERT commands, but does SQL Server automatically read from the snapshots, or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case, since our database size is 78GB.
    When I run code analysis on the app, Visual Studio tells me I'd be better off using stored procedures and views than hard-coded SELECT statements; what difference will that make for performance?
    Thanks in advance. 
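    On the snapshot question: with READ_COMMITTED_SNAPSHOT enabled at the database level, ordinary SELECTs under the default isolation level read row versions automatically; no application changes are needed and readers stop blocking behind the inserts. A sketch, assuming a database named Tracking (switching it on needs exclusive access to the database, and it requires SQL Server 2005 or later):
    ALTER DATABASE Tracking SET READ_COMMITTED_SNAPSHOT ON;
    Unlike the "Read Uncommitted" workaround, this never returns dirty data, at the cost of some tempdb version-store overhead.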

  • Best practices of having a different external/internal domain

    In the midst of migrating from a joint Windows/Mac server environment to a completely Apple one. Previously, DNS was hosted on the Windows machine using the companyname.local internal domain. When we set up the Apple server, our Apple contact created a new internal domain called companyname.ltd. (Supposedly there was some conflict in having a 10.5 server be part of a .local domain; either way, it wasn't a problem.) Companyname.net is our website.
    The goal now is to have the Leopard server run everything - DNS, Kerio mailserver, website, the works. In setting up the DNS on the Mac server this go around, we were advised to just use companyname.net as the internal domain name instead of .ltd or .local or something like that. I happen to like having a separate local domain just for clarity's sake - users know if they are internal/external, but supposedly the Kerio setup would respond much better to just the one companyname.net.
    So after all that - what's the best practice of what I should do? Is it ok to have companyname.net be the local domain, even when companyname.net is also the address to our external website? Or should the local domain be something different from that public URL? Or does it really not matter one way or the other? I've been running companyname.net as the local domain for a week or so now with pretty much no issues, I'd just hate to hit a point where something breaks long term because of an initial setup mixup.
    Thanks in advance for any advice you all can offer!

    Part of this is personal preference, but there are some technical elements to it, too.
    You may find that your decision is swayed by the number of mobile users in your network. If your internal machines are all stationary then it doesn't matter if they're configured for companyname.local (or any other internal-only domain), but if you're a mobile user (e.g. on a laptop that you take to/from work/home/clients/starbucks, etc.) then you'll find it a huge PITA to have to reconfigure things like your mail client to get mail from mail.companyname.local when you're in the office but mail.companyname.net when you're outside.
    For this reason we opted to use the same domain name internally as well as externally. Everyone can set their mail client (and other apps) to use one hostname and DNS controls where they go - e.g. if they're in the office or on VPN, the office DNS server hands out the internal address of the mail server, but if they're remote they get the public address.
    For the most part, users don't know the difference - most of them wouldn't know how to tell anyway - and using one domain name puts the onus on the network administrator to make sure it's correct which IMHO certainly raises the chance of it working correctly when compared to hoping/expecting/praying that all company employees understand your network and know which server name to use when.
    Now one of the downsides of this is that you need to maintain two copies of your companyname.net domain zone data - one for the internal view and one for external (but that's not much more effort than maintaining companyname.net and companyname.local) and make sure you edit the right one.
    It also means you cannot use Apple's Server Admin to manage your DNS on a single machine - Server Admin only understands one view (either internal or external, but not both at the same time). If you have two DNS servers (one for public use and one for internal-only use) then that's not so much of an issue.
    Of course, you can always drive DNS manually by editing the zone files directly.
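    If you go the one-domain route with hand-maintained zone files, the usual mechanism in BIND is views; a minimal sketch (the file names and internal network are illustrative):
    // named.conf: internal clients get one copy of the zone, everyone else the other
    view "internal" {
        match-clients { 10.0.0.0/8; 127.0.0.1; };
        zone "companyname.net" {
            type master;
            file "db.companyname.net.internal";  // mail points at the internal address
        };
    };
    view "external" {
        match-clients { any; };
        zone "companyname.net" {
            type master;
            file "db.companyname.net.external";  // mail points at the public address
        };
    };
    As noted above, Server Admin only understands one view, so a two-view config like this has to be maintained by editing the files directly.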

  • Best practice for default values in EO

    I have and entity called AUTH_USER (a user table) within it has 2 TIMESTAMP WITH TIME ZONE columns like this ...:
    EFF_DATE TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT current_timestamp,
    TERM_DATE TIMESTAMP WITH TIME ZONE
    Notice EFF_DATE has a default constraint and is not nullable.
    In the EO, EFF_DATE is represented as a TIMESTAMPTZ and is checked as MANDATORY in its attribute properties. I cannot commit a NEW RECORD based on a VO derived from this EO because of the MANDATORY constraint on the EFF_DATE attribute, unless I enter a value. My original strategy was to have the field populated with a DEFAULT DATE should the user leave it null.
    This is my dilemma.
    1. I could have the database populate the value based on the default constraint in the table definition. Since EFF_DATE and TERM_DATE resemble the Effective Date (Start, End) properties that the framework already provides then I could set both fields as Effective Date (Start, End) and then check Refresh After Insert. But this still won't work unless I deselect the mandatory property on EFF_DATE.
    2. The previous solution would work. However, I'm not sure that it is part of a "Best Practices" solution. In my mind if a database column is mandatory in the database then it should be mandatory in the Model as well.
    3. If the first option is a poor choice, then what I need to do is to leave the attribute defined and mandatory and have a DEFAULT VALUE set in the RowImpl create method.
    4. Finally, I could just force the user to enter a value. That would seem to be the common sense thing to do. I mean that's what calendar widgets and AJAX enabled JSF are for!
    Regardless to what the correct answer is, I'd like to see some sample code of how the date can be populated inside the RowImpl create method and it pass to setEffDate(TimestampTZ dt). Keep in mind though that in this instance I need the timezone at the database server side and not the client side. I would also ask for advice on doing this with Groovy Scripting or expressions.
    And finally, what is the best practice in this situation?
    Thanks in advance.

    How about setting the default value property of the attribute in the EO to the expression adf.currentDate (assuming you are using 11g)?
    This way a default date is set when the record is created, and the user can change it if he wants to.
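    For the RowImpl route asked about above, a hypothetical sketch (the class name AuthUserRowImpl and the null-check are illustrative; note this captures the middle-tier clock, not the database server's time zone - for true DB-server time you would let the database default fire and check Refresh After Insert, as in option 1):
    // in the entity's RowImpl subclass, e.g. AuthUserRowImpl
    import java.sql.Timestamp;
    import oracle.jbo.AttributeList;
    import oracle.jbo.domain.TimestampTZ;

    @Override
    protected void create(AttributeList attributeList) {
        super.create(attributeList);
        // only default EFF_DATE when the user supplied nothing
        if (getEffDate() == null) {
            setEffDate(new TimestampTZ(new Timestamp(System.currentTimeMillis())));
        }
    }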

  • Auto-Anchor Controller's Best Practice

    Hi All,
    I got confused with this setup. I have 2 WLCs: one is the internal controller, and the other is configured as the anchor controller (different subnet, in the DMZ) for guest traffic. Where do I configure DHCP assignment for these users? Should the production controller take part in the DHCP process, or should I let the anchor take care of everything? Which is recommended?
    Also, is there a best practice doc available for this?
    Please help...
    thanks in advance.

    Prasan,
    Just keep in mind that there are best practices that are published and best practices that you learn from experience. Being a consultant, I get to implement wireless in various networks, and everyone's network is quite different. Code versions can also change a best practice, because of bugs or because of how a standard might have changed and how that standard was implemented in code. The biggest best-practice secret is really working with various client devices - scanners, laptops, smartphones, etc. - and seeing how things change with newer models and firmware updates. It's amazing how some devices require a few checkboxes in the WLAN to be disabled compared to others, even with anchoring for guests and using a custom WebAuth to make sure the splash page works with various types of browsers.
    What I can say is to always try the defaults if possible when you have issues, and then enable things one by one.
    Sent from Cisco Technical Support iPhone App

  • Static NAT refresh and best practice with inside and DMZ

    I've been out of the firewall game for a while and have now been re-tasked with some configuration, both updating ASAs to 8.4 and making some new services available. So I've dug into refreshing my knowledge of NAT operation, and I have a question based on best practice; I would like a sanity check.
    This is very basic, I apologize in advance. I just need the cobwebs dusted off.
    The scenario is this: if I have a SQL server on an inside network that a DMZ host needs access to, is it best to present the inside host (the SQL server in this example) via a static to the DMZ, or the DMZ host (the SQL client in this example) via a static to the inside?
    I think it's best to present the higher-security resource into the lower-security network. For example, when a service from the DMZ is made available to the outside/public, the real IP on the higher-security interface is mapped to the lower one.
    So I would think the same applies to inside/DMZ, making 'static (inside,dmz)' the 'proper' method pre-8.3, and this for 8.3 and up:
    object network insideSQLIP
    host xx.xx.xx.xx
    nat (inside,dmz) static yy.yy.yy.yy
    Am I on the right track?

    Hello Rgnelson,
    It is not related to the security level of the zone; instead, it is about what the behavior should be. What I mean is, for
    nat (inside,dmz) static yy.yy.yy.yy
    - Any traffic hitting translated address yy.yy.yy.yy on the dmz zone should be re-directed to the host xx.xx.xx.xx on the inside interface.
    - Traffic initiated from the real host xx.xx.xx.xx will be translated to yy.yy.yy.yy if the host accesses any resources on the DMZ interface.
    If you reverse it to (dmz,inside) the behavior will be reversed as well, so If you need to translate the address from the DMZ interface going to the inside interface you should use the (dmz,inside).
    For your case I would say what is common: since the server is in the INSIDE zone, you should configure
    object network insideSQLIP
    host xx.xx.xx.xx
    nat (inside,dmz) static yy.yy.yy.yy
    At this time, users from the DMZ zone will be able to access the server using the yy.yy.yy.yy IP Address.
    HTH
    AMatahen
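    For reference, the pre-8.3 equivalent of the object NAT above is the single static line (same placeholder addresses):
    static (inside,dmz) yy.yy.yy.yy xx.xx.xx.xx netmask 255.255.255.255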

  • Best Practices for File Organizati​on/Project Explorer

    So we are finally getting SCC at my organization to manage our LabVIEW development, and that is good! 
    Now, we are starting in on discussions about how we should organize our files on disk and how we should use the Project Explorer. When I started here about 3 years ago, I wasn't very familiar with the project explorer, so I read the article at http://zone.ni.com/devzone/cda/tut/p/id/7197. Two of the main things I took away from that article are:
    1. Organize Files in a logical manner on disk. Whatever that is, it is not a flat file structure.
    2. The top level VI should be separate from other source code. Preferably, it should reside in the application folder.
    Push Back Against These Recommendations
    Before I was hired, most, if not all, LabVIEW development was done using a flat file structure, and the top-level VI lived with the source code. Since we didn't have proper SCC, each individual organized files as he saw fit. So I started using the Project Explorer (even its use is not totally accepted right now) and began following recommendations 1 and 2 above. I haven't always followed #1 very strictly, but I have been working towards it, and I have always followed #2 religiously.
    Since we are starting these discussions on how we should organize files on disk I'm starting to get some push back to following these two recommendations.
    The argument I get in favor of a flat file structure is that you always know where every file is, including the top-level VI. It is also argued that it is a lot of effort to organize and search for VIs when they all reside in different folders. I think the fear is that by getting "clever" and organizing our files in such a manner we'll make things complicated and somehow shoot ourselves in the foot.
    The argument I get against separating the top level VI from the rest of the source code is that it:
    (a) It won't be clear where it is (it would be buried within hundreds of VIs). However, it is argued, you can just put a "!" in front of the file name and then it is always at the top of the flat file structure.
    (b) An extension of argument of (a) is that things either look or seem messy when VIs (including top level VI) don't live in a sub-folder and are just hanging out with the Project Explorer file. 
    (c) I think there may be some fear of breaking the VI by moving it and altering the dependencies for the VI. 
    Convincing Others its Good to Follow These Recommendations
    So, if I want to follow NI's recommendations, I need to come up with reasons why we should follow them. I should also state that I care about these recommendations because they are what NI recommends. They've been around the block a few times, and I'm sure there are good reasons why these are best practices. However, I don't think I've given a very compelling case for why these recommendations should be followed.
    So I'll tell you all what I think good reasons are for these recommendations and perhaps I can get some feedback or additional support? If I'm crazy for wanting to follow these recommendations maybe someone can point out why I'm crazy. 
    (a) Arguments for Following Both
    I. I passed the CLAD a couple of weeks ago, and I have started studying for the CLD. Part of the CLD is following both of these recommendations (see page 6 of http://ftp.ni.com/evaluation/certification/cld/cld_exam_prep_guide_english.pdf). While this isn't a reason in and of itself, it suggests that if it is important for certification, it is important in practice!
    II. If we hire new developers who are familiar with LabVIEW, they will most likely be familiar with these recommendations, especially if they are certified. That will lead to increased productivity right out of the gate, because they won't have to learn our special way of doing things.
    (b) Arguments for Organized File Structure
    I. Unused VIs are easier to identify and remove. Right now we never remove VIs because we don't know if they are used or not. This leads to a lot of VI bloat.
    II. In a flat file structure it is hard to know what a specific VI's function is just by looking at its name.
    (c) Arguments for Separating Top Level VI from Source Code
    I. The application folder is an intuitive place for the top-level VI. As long as the top-level VI is the only VI in the application folder, there is no mistaking it, especially once you open it. This makes it easy for new developers to find the top-level VI. I'd argue it isn't very intuitive for new developers to know that the VI in the source code folder prefaced with a "!" is the top-level one.
    Summary
    So that is what I think so far. Is there anything else I am missing to support following those two recommendations or am I just being inflexible?
    Thanks!

    zenthoef,
    As a CLA, I have struggled with file structure over the years.  Here are my recommendations:
    1.  Put the top level VI and the project in the top-level folder.  This makes it very clear where to begin.
    2.  Put the remaining user interface VIs in a separate folder.  Again, it makes it very clear what the functionality of these VIs are.
    3.  If you are using object, put each object in a separate folder.  Place the family of objects in one folder, with each object in a subfolder.
    4.  Keep the remaining VIs in a single folder.  This can contain a small number of subfolders if your project is large, but too many folders make it hard to figure out where your VIs are.  For example, you might have a DAQ subfolder, an Analysis subfolder, and a Report subfolder.  But if you had a Test1 folder and a Test2 folder, and a VI that was used by both tests, where would it go?  Keep it simple.
    5.  You inferred that it is hard to figure out what a VI does by its name.  That implies that 1) you need better names, and 2) your VIs are too complicated.  A VI should do a single function which can be adequately described by its name.  That VI might be something like Analyze Data.vi, which would contain a bunch more subVIs (like Get 1st Harmonics.vi), but each VI would contain a single function.  You wouldn't save the data to a report in the Analyze Data.vi, for example.
    The most compelling reason for following these suggestions is that it is easier to figure out what the code is doing after you haven't looked at it for a while.  Once you have an application that is working and bug free, you shouldn't have to touch the code until you want to add features.  If that is even 6 months later, you will probably have forgotten how the code works.  As a consultant, I have had to update other people's code, and just figuring how where to start can be a challenge.
    Tom Brass
    Certified LabVIEW Architect
    Saint Bernard Engineering, Inc.
    www.saintbernardengineering.com

  • Best practice for OSB to OSB communication

    Cross posting this message:
    I am currently in a project where we have two OSBs that have to communicate. The OSBs are located in different security zones ("internal" and "secure"). All communication at the network level must be initiated from the secure zone to the internal zone, but the message flow should be initiated from the internal zone to the secure zone. Ideally we would establish a TCP connection from the secure zone to the internal zone and then use SOAP over HTTP on this pre-established connection. Is this at all possible?
    Our best approach now, is to establish a jms-queue in the internal zone and let both OSBs connect to this. All communication between the zone is then done over JMS. This is an approach that would work, but is it the only solution?
    Can the t3/t3s protocol be used to achieve our goal, i.e. to have synchronous communication over a pre-established connection (established in the direction opposite to the communication)?
    Is there any other approach that might work?
    What is considered best practice for sending messages from one OSB to another OSB in a more secure zone?
    Edited by: hilmersen on 11.jun.2009 00:59

    Hi,
    In my experience on a live project, we used secured communication (https) between the internal service bus and the DMZ/external service bus.
    We also used two way SSL with customers.
    The ports were also secured by firewall in between them.
    If you wish more details, please email [email protected]
    Ganapathi.V.Subramanian[VG]
    Sydney, Australia
    Edited by: Ganapathi.V.Subramanian[VG] on Aug 28, 2009 10:50 AM

  • DNS Configured-Best Practice on Snow Leopard Server?

    How many of you configure and run DNS on your Snow Leopard server as a best practice, even if that server is not the primary DNS server on the network and you are not using Open Directory? Is configuring DNS a best practice if your server has an FQDN? Does it run better?
    I had an Apple engineer once tell me (this is back in the Tiger Server days) that the servers just run better when DNS is configured correctly, even if all you are doing is file sharing. Is there some truth to that?
    I'd like to hear from you either way, whether you're an advocate for configuring DNS in such an environment, or if you're not.
    Thanks.

    Ok, local DNS services (unicast DNS) are typically straightforward to set up, very useful to have, and can be necessary for various modern network services, so I'm unsure why this is even particularly an open question - which leads me to wonder what other factors might be under consideration here, or what I'm missing.
    The Bonjour mDNS stuff is certainly very nice, too.  But not everything around supports Bonjour, unfortunately.
    As for being authoritative, the self-hosted out-of-the-box DNS server is authoritative for its own zone.  That's how DNS works for this stuff.
    And as for querying other DNS servers from that local DNS server (or, if you decide to reconfigure it and deploy and start using DNS services on your LAN), then that's how DNS servers work.
    And yes, caching of DNS responses both within the DNS clients and within the local DNS server is typical.  This also means there is no need for references to ISP or other DNS servers on your LAN for frequent translations; no other caching servers and no other forwarding servers are required.

  • SINGLE 9513 with A&B 9216 best practice....

    We have a couple of 9216s (A & B side; VSAN numbers identical) and have reached our port-count limit. We purchased one 9513 with two 48-port FC modules and one X9304-18K4 line-rate card (18 FC / 4 IP). The goal is to make the single 9513 our principal switch, ISL'ed to the 9216s.
    Is having one 9513 a good decision? Our thought was that it has so much redundancy built in, and its performance is so far beyond what we need, that we could just use one. I have never had anything but an A & B side of the fabric, so wrapping my head around this and trying to keep it within best practice is proving difficult.
    The 9216 A & B VSAN numbers are identical. When we ISL them, the zones will merge, thereby collapsing the A & B sides together. I really don't want to do that (it would essentially become one meshed fabric, right?). I am thinking I need to renumber the VSANs on the B side so they don't match A - maybe make the even-numbered VSANs the A side and the odd-numbered VSANs the B side. That way I can keep the ISLs independent of each other and prevent the zones from collapsing into each other.
    Also keep in mind that we may (eventually) purchase another 9513 and move one of the FC48 modules over to separate the A & B sides again at a later date. I want to keep this as flexible as possible in case that happens.
    Thoughts - comments - suggestions all welcome! Just please be nice. I am learning....

    I think you are right on. The single 9513 cannot have duplicate VSANs for the A and B 9216s. The odd/even idea makes the most sense; this way you can leverage the 9513's redundancy. You can match the VSANs that exist in the A fabric and only permit those VSANs across the ISLs to the A 9216. You will have to renumber the VSANs on the B 9216, then match them on the 9513 and, again, only permit those VSANs on the ISLs to the B 9216.
    Things to keep in mind if you renumber the VSANs on the B 9216: if you match the domain numbers used on the corresponding A 9216 VSANs and make them static, then even if someone cross-connects a cable the VSANs will not merge, since there is a domain conflict. If you change the domain from the current one in use, hosts like AIX and HP-UX will get a new FCID, and you may have to rescan the host to resolve the LUN bindings.
    If you have AIX and HP-UX, you may want to ensure that the target devices they use get the same FCID after the VSAN renumbering, to avoid having to perform the rescan (this may prevent matching the same domains used on the A 9216).
    Hope this helps,
    Mike
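    A rough sketch of the renumber-plus-static-domain idea Mike describes, run on the B-side 9216 (the VSAN number, domain ID and interface are illustrative):
    ! create the new B-side VSAN and move a port into it
    vsan database
      vsan 21 name FabricB
      vsan 21 interface fc1/1
    ! pin the domain ID so a cross-connect produces a domain conflict instead of a merge
    fcdomain domain 21 static vsan 21
    fcdomain restart disruptive vsan 21
    The disruptive restart makes the static domain ID take effect immediately, so schedule it for a maintenance window.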

  • Disabling IPv6 on 2008R2 Domain Controllers... Best Practice?

    At the end of last year I had a call with Microsoft Support in which I spoke with a member of the Directory Services team regarding an issue.  The issue was resolved with no further problems, but while conversing with the Technical Support Engineer I asked him about another issue, regarding a second copy of our DNS zone in Active Directory.  He looked at it (remoted in via RDP), then looked at my NIC properties and stated that it happened because we are running IPv6 on our DCs.
    I told him we do that on all our servers (leave IPv6 enabled).  He then stated that we should not do that, expanding by saying that "Microsoft is in the process of rewriting documentation as IPv6 is no longer supported on Domain Controllers."
    Needless to say, I could not believe this.  I told him how Exchange on an SBS server cannot have IPv6 disabled, as the server will stop booting, but he was very adamant about it; he even put me on hold for 10 minutes, then came back saying he had confirmed it with the "Documentation Team" and that the new Best Practices would be released within the next month. In the meantime he recommended I disable IPv6 on all my DCs. (I work in consulting, so that's a lot of DCs at various different business entities.)
    I didn't believe him then, and I don't believe him now.  The FAQ linked through http://support.microsoft.com/kb/929852 says that Microsoft does not recommend disabling IPv6.  Of course no documentation ever came out, nor have I found anything that agrees with his statements. (We solved the duplicate partition issue ourselves.)
    I just wanted to post here and see if anyone else has heard of this; maybe I'm the one not up on my info.  Has Microsoft reversed, or does it plan to reverse, course on the IPv6 technology that 2008 and up are built on?  I would think that quite preposterous!
    Thanks,
    Christopher Long
    Science is a way of thinking much more than it is a body of knowledge. -- Carl Sagan

    There are cases where you DO WANT to disable IPv6 on a domain controller.
    Example: you have an IPv4 network and do not have IPv6 deployed. In this case, if you are not using IPv6 but leave it enabled, Windows will assign itself a random autoconfigured IPv6 address, and that address can and does change when you reboot the server... so I bet you see the problem here.
    If you build a domain controller with IPv6 enabled, it will register its IPv6 address in DNS as offering AD services. Then when you reboot that domain controller and the address changes - BOOM, AD comes crashing down. AD relies heavily on DNS, and Windows thinks it's smarter than you and registers its autoconfigured IPv6 address in DNS. Now that's a problem, particularly because Windows Server 2008+ prefers IPv6 over IPv4. So communication can blow up even when a valid IPv4 network is available.
    So yes - there are instances where you do want to - in fact need to - disable IPv6 on domain controllers. Microsoft's documentation does not reflect this, but it should. At a minimum, if they want you to leave it on, they should at least remind you to set a static IPv6 address if you're running an IPv4 network.
    (ask me how I know all this over a beer some time)
    I opted to just disable it. Despite MS's documentation warning to the contrary, I've seen no adverse impacts: Exchange, SharePoint, AD, etc. all hum along fine.
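    For reference, KB929852 (linked above) documents the DisabledComponents registry value; a sketch that disables all IPv6 components except the loopback interface (a reboot is required afterwards):
    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters /v DisabledComponents /t REG_DWORD /d 0xFF /f
    The gentler alternative suggested above is to keep IPv6 enabled but give each DC a static IPv6 address, so the address registered in DNS never changes across reboots.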
