ASA redundant design questions

Hi, thanks for your time and knowledge. 
I have a topology like below in our data center and plan to make it fully redundant. Currently the primary/secondary core switches, the ASA, and another core switch at HQ are running EIGRP. In particular, the ASA is redistributing all IPsec tunnels (around 70 branches) and the remote-access VPN subnet (10.254.50.0/24) into EIGRP. The blue line is internal and the red line is DMZ. The internal VLANs are routed via EIGRP, which means that
  the default gateways for the internal VLANs are all on the primary/secondary switches via HSRP (virtual IP);
however, the DMZ VLAN is terminated on the ASA interface. For example, from a server's perspective, the default gateway is not the primary/secondary switch but the ASA DMZ interface, so servers in the DMZ see the primary/secondary switches purely as L2 switches.
Question 1) According to my research, I need to run HSRP between the two switches that face the ASAs. Is that right? Can't I run EIGRP? If I can't run EIGRP between the four devices, I will need a lot of static routes on the ASA for the branch offices (70 subnets) and the remote VPN users (1 subnet).
Q2) I like the left topology because I don't need to set up redundant interfaces and it uses fewer cables. In particular, I don't need another IPS sensor (if I choose the right topology, I need one more IPS sensor). Also, we don't have VSS between the primary and secondary switches (just a trunk). Do you see any problem with the left topology? A couple of minutes of downtime due to a device failure is acceptable to me.
Q3) Should the ASA inside/DMZ/outside IP addresses be identical on both units, except for the failover interface? For example, the inside interface IP is 10.254.5.4 now. Will this be the inside IP for both the active and standby units, or do I need different IP addresses on all interfaces?
Thanks. 

What are your devices?  Routers/switches/ASAs?  Your pictures are cut off, so it's hard to understand your topology.
You need two Layer 3 devices to run HSRP: one will be active and one will be standby.  You should be able to run EIGRP on all the devices.
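As a rough sketch (the VLAN/interface numbers, addresses, and EIGRP AS number below are assumptions, not taken from your post), HSRP on the switch pair plus EIGRP on the ASA would look something like this:

```
! Primary switch (the secondary gets a lower standby priority)
interface Vlan5
 ip address 10.254.5.2 255.255.255.0
 standby 5 ip 10.254.5.1
 standby 5 priority 110
 standby 5 preempt
!
! ASA - run EIGRP and redistribute the VPN routes it holds as statics
router eigrp 100
 network 10.254.5.0 255.255.255.0
 redistribute static
```

With EIGRP running on all four devices, the ~70 branch subnets the ASA learns are advertised dynamically, so no per-branch static routes are needed on the switches.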

Similar Messages

  • Redundancy design question

    I have two PIX 520s configured in Active/Standby mode in production. I am adding two Catalyst 6509s behind the PIXes. The inside interface of the active PIX connects to 6509-A; the inside interface of the standby PIX connects to 6509-B. The 6509s are running HSRP and have a trunk between the switches. With this physical connection, it seems that if 6509-A dies, there is no other path to reach 6509-B. Do I have to add another interface card to the PIX so I can connect to 6509-B, or is there another way to achieve redundancy?

    Hi,
    As mentioned by Brandon, you need failover configured on your PIX firewalls to handle the active transactions/traffic when 6509-A or the primary switch goes down.
    To achieve that, PIX B should have all of the transaction state that is being carried by PIX A.
    With that state in place, the transition of traffic from PIX A to PIX B will be smooth when there is a problem with 6509-A.
    Also refer to these links for more on failover configuration and concepts:
    http://cisco.com/en/US/products/sw/secursw/ps2120/products_configuration_guide_chapter09186a00800eb72f.html
    http://cisco.com/en/US/products/hw/vpndevc/ps2030/products_tech_note09186a0080094ea7.shtml
    regds
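    A minimal LAN-based failover sketch for the primary unit (interface names and addresses are illustrative, and the exact commands vary between PIX 6.x and later ASA code):

    ```
    failover
    failover lan unit primary
    failover lan interface FOLINK Ethernet3
    failover interface ip FOLINK 192.168.254.1 255.255.255.0 standby 192.168.254.2
    failover link FOLINK Ethernet3
    ```

    The `failover link` command enables stateful failover, so the standby unit receives the connection state discussed above.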

  • Spanning Tree Redundancy Design

    I am getting ready to deploy a redundant design using three 3750s running EMI software as the core and 2950s for both the distribution and access layers. The fiber-attached 2950s will be dual-homed to the distribution switches, providing a redundant link for every switch. My question: if I set the "stacked" 3750s as the STP root bridge, I thought it would be to my advantage to set the distribution switches to priorities 4096 and 8192 to make sure STP takes the correct path. Or should I let STP take care of itself? Please consult the diagram; I appreciate any input. Thanks.

    I would set the priority on the 3750s to 4096 and set one of your distribution switches to 8192 as a backup root. I am assuming you are not doing load balancing across VLANs; with that in mind, after you set the root, I would let STP take care of the redundant links between your distribution and access switches. After STP chooses the links, you can go back and set interface costs if the traffic does not flow the way you want. Make sure you enable Rapid PVST+ on all your switches.
    Here is a good reference to follow as well.
    http://www.cisco.com/en/US/customer/products/hw/switches/ps5023/products_configuration_guide_chapter09186a00801cdee4.html#1082107
    Frank
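    The root and backup-root priorities described above can be set like this (VLAN 1 is just an example):

    ```
    ! On the 3750 stack (root)
    spanning-tree mode rapid-pvst
    spanning-tree vlan 1 priority 4096
    !
    ! On one distribution switch (backup root)
    spanning-tree mode rapid-pvst
    spanning-tree vlan 1 priority 8192
    ```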

  • About the DWDM 15454 optical ring design questions.

    1 Question
      We have three ONS 15454s, one at each of sites A, B, and C. The connection between locations is a pair of single-mode fibers. The topology is A>B>C>A. A is the main data center.
      We want a redundant design based on ROADM. Normal transmission is A to B to C to A. For example, if a problem occurs between C and A (CxA), we want to change the transfer direction to C>B>A.
      How can I do that through the 15454 configuration software, CTC? Which protocol should I use? I am a newbie in optical networking, so please give me an easy-to-grasp answer. (Please don't change the topology; besides A, B, and C we have three other locations to join the optical ring. I just use A, B, C to make the question easier to answer.)
    2 Question
    The 15454-GE-XPE card modes made me a little confused. Each of our locations has two 15216 MUX/DEMUX patch panels (odd channels): one to MUX toward the next location and one to DEMUX from the previous location. Does that mean I don't need to enable the GE-XPE cards' MXP mode?
    The reason I want to use L2 mode is that we have two GE-XPE cards at each location for different department functions. The 1st SFP port of the GE-XPE connects to the active 6509 Supervisor Engine 2T; the 11th SFP port connects to the standby Supervisor 2T. The 11th SFP carries no data until the active Supervisor 2T goes down. My worry with enabling MXP mode on the GE-XPE is this: the 21st XFP receives data from the 15216 DEMUX panel and passes it to the 6509 Supervisor 2T through the 1st SFP, but the 6509 can't send the data back to the 22nd XFP through the 11th SFP, because the 11th SFP connects to the standby Supervisor 2T.
    The question is: what is the best way to get the right data flow from the 15216 DEMUX, through the 6509, and back to the 15216 MUX panel?
    Thank you for the reading and help!
    I will do my best to reply every feedback.:)

    About Question 1:
    There are two ways to achieve your requirement: use a PSM card, or use WSON.
    The PSM card is easy to install and costs less.
    With the WSON solution, your wavelength will switch automatically on a fiber cut, but for this you have to make two nodes omni-directional: the source and drop nodes for the wavelength that needs protection.
    To make a node omni-directional you need an extra piece of hardware: if your node has SMR2 cards, you need one more SMR2; if it is a WXC, you need one more WXC plus PRE and BST amplifiers.
    So I think, just go for the PSM cards :-)
    About Question 2:
    The main question is: if I enable 10GE MXP mode on the 15454-GE-XPE card, ports 1-10 can't communicate with ports 11-20 unless I connect them through a switch, is that right?
    Yes, this is correct.
    Another point of confusion is the muxponder mode of the GE-XPE versus the 15216 MUX/DEMUX patch panel. Are they doing the same MUX/DEMUX job?
    No, there is a difference. There are three modes on the GE-XPE card:
    10GE MXP mode: all traffic coming in on ports 1-10 is multiplexed and transmitted through port 21-1, and traffic received on ports 11-20 is multiplexed and transmitted through port 22-1. In other words, all traffic on ports 1-10 is encapsulated in OTN/SONET/SDH and transmitted via port 21; the same applies to ports 11-20 and port 22.
    20GE MXP mode: all traffic received on ports 1-20 is multiplexed and transmitted through port 21; port 22 is not used.
    Layer 2 mode: you have to provision QinQ and SVLANs to pass traffic.
    The 15216 MUX/DEMUX is entirely different: it multiplexes and de-multiplexes wavelengths. Suppose we have five GE-XPE cards on each of nodes A and B, and you want to connect each card with its counterpart, say port 21 of each card on each node. You would need 10 fibers, one per Tx-Rx pair. Instead of doing this, people use a MUX/DEMUX (in your case the 15216): trunk port 21 of each of the five cards is connected to the MUX/DEMUX, and the signal is transported that way.
    Please go through this picture.

  • Catalyst 3850 Stack VLANs, layer 2 vs. layer 3 design question

    Hello there:
    Just a generic design question: after doing much reading, I am still not clear on when to use one or the other, and what the benefits/tradeoffs are.
    Should we configure the switch stack w/ layer 3, or layer 2 VLANs?
    We have a Catalyst 3850 Stack, connected to an ASA-X 5545 firewall via 8GB etherchannel.
    We have about 100 servers (some connected w/ bonding or mini-etherchannels), and 30 VLANs.
    We have several 10GB connections to servers.
    We push large (up to TB-sized) files from VLAN to VLAN, mostly using scp.
    No ip phones, no POE.
    Inter-VLAN connectivity/throughput and security are priorities.
    Originally, we planned to use the ASA to filter connections between VLANs, and VACLs or PACLs on the switch stack to filter connections between hosts w/in the same VLAN.
    Thank you.

    If all of your servers are going to the 3850, then I'd say you've got the wrong switch model for a DC job.  If you don't configure QoS properly, your servers will start dropping packets, because Catalyst switches have very, very shallow memory buffers.  These buffers get swamped when servers push non-stop traffic.
    Ideally, Cisco recommends the Nexus line for connecting servers.  One of the guys here, Joseph, regularly recommends the Catalyst 4500-X as a suitable (and more affordable) alternative to the more expensive Nexus range.
    In a DC environment, if you have a lot of VM stuff, then stick with Layer 2.  vMotion and Layer 3 don't go hand in hand.
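    For reference, the Layer 2 vs. Layer 3 choice on the stack comes down to whether a VLAN gets an SVI there (addressing is illustrative, not from the post):

    ```
    ! Layer 2 only - the ASA interface is the inter-VLAN gateway
    vlan 10
     name servers
    !
    ! Layer 3 - the 3850 stack routes between VLANs itself
    ip routing
    interface Vlan10
     ip address 10.0.10.1 255.255.255.0
    ```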

  • Design question: Scheduling a Variable-timeslot Resource

    I originally posted this in general Java programming because it seemed like a high-level design discussion, but now I see some class design questions. Please excuse me if this thread does not belong here (this is my first time using the forum, save answering a couple of questions).
    Forum,
    I am having trouble determining a data structure and applicable algorithm (actually, something more general than the data structure: the overall design to use) for holding a modifiable (but more heavily read/queried than updated), variable-timeslot schedule for a given resource. Here's the situation:
    Let's, for explanation purposes, say we're scheduling a school. The school has many resources. A resource is anything that can be reserved for a given event: classroom, gym, basketball, teacher, janitor, etc.
    Ok, so maybe the school deal isn't the best example. Let's assume, for the sake of explanation, that classes can be any amount of time in length: 50 minutes, 127 minutes, 4 hours, 3 seconds, etc.
    Now, the school has a base operation schedule, e.g. it is open from 8am to 5pm MTWRF and 10am to 2pm on Saturday and Sunday. Events in the school can only occur during these times, obviously.
    Then each resource has its own base operation schedule, e.g. the gym is open from noon to 5pm MTWRF and noon to 2pm on Sat. and Sun. The default base operation schedule for any resource is that of the school which "owns" the resource.
    But then there are exceptions to the base operation schedule. The school (and therefore all its resources) is closed on holidays. The gym is closed on the third Friday of every month for maintenance, or something like that. There are also exceptions to the available schedule due to reservations. I've implemented reservations as exceptions with a different status code to simplify things a little, because the basic idea is that an exception is either an addition to or a removal from the schedulable times of that resource. Each exception (reservation, closed for maintenance, etc.) can be an effectively unrestricted amount of time.
    Ok, enough set up. Somehow I need to be able to "flatten" all this information into a schedule that I can display to the user, query against, and update.
    The issue is complicated more by recurring events, but I think I have that handled already and can make a recurring event be transparent from the application point of view. I just need to figure out how to represent this.
    This is my current idea, and I don't like it at all:
    A TimeSlot object, holding a beginning date and ending date. A data structure that holds list of TimeSlot objects in order by date. I'd probably also hold an index of some sort that maps some constant span of time to a general area in the data structure where times around there can be found, so I avoid O(n) time searching for a given time to find whether or not it is open.
    I don't like this idea, because it requires me to call getBeginningDate() and getEndDate() for every single time slot I search.
    Anyone have any ideas?

    If I am correct, your requirement is to display a schedule showing the occupancy of a resource (open/closed/used/free and other kinds of information) on a timeline.
    I do not say that your design is incorrect. What I state below is strictly my view and should be treated that way.
    I would not go by time slot; instead, I would go by resource. For instance, the gym, the classrooms (identified accordingly), the swimming pool, etc. are all resources. Therefore (for the requirements you have specified), I would create a class, let's say "Resource", to represent all the resources, with two attributes at this stage ("name" and "identifier").
    The primary attribute of interest in this case is a date (starting at 00:00 hrs and ending at 24:00 hrs): a span of 24 hours broken down to the smallest unit of a minute (seconds are really not practical here).
    I would next encapsulate the availability factor in a class, for instance "AvailabilityStatus", with the recommended attributes "date" and "status".
    You have mentioned different statuses: available, booked, closed, under maintenance, etc. Each of these is a category; let us say they are numbered from 0 to n (where n < 128).
    The "date" attribute could be a java.util.Date object representing a date. The "status" is a byte array of 1440 elements (one element for each minute of the day). Each element of the array holds the number designating the status of that minute (i.e., 0, 1, 2, ..., n).
    The "Resource" class would carry an attribute "availabilityStatus": an ordered vector of "AvailabilityStatus" objects.
    The objects could all be populated manually at any time, or the entire process could be automated (that is a separate area).
    The problem of representation is then solved: you can add any number of resources as well as any number of status categories.
    This is a simple solution. I do not address querying this information or rendering the actual schedule, which I believe is straightforward enough.
    It is recognized that there is scope for optimization/design rationalization here; however, this is a simple and effective enough solution.
    regards
    [email protected]
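    The per-minute byte-array idea above can be sketched in Java as follows (class, constant, and method names are illustrative, not from the post):

    ```java
    import java.time.LocalDate;
    import java.util.ArrayList;
    import java.util.List;

    // One day of availability for one resource: one status byte per minute,
    // as described in the reply above.
    class AvailabilityStatus {
        static final byte CLOSED = 0, FREE = 1, BOOKED = 2, MAINTENANCE = 3;
        static final int MINUTES_PER_DAY = 1440;

        final LocalDate date;
        final byte[] status = new byte[MINUTES_PER_DAY]; // defaults to CLOSED

        AvailabilityStatus(LocalDate date) { this.date = date; }

        // Mark the half-open minute range [from, to) with a status code.
        void mark(int from, int to, byte code) {
            for (int m = from; m < to; m++) status[m] = code;
        }

        boolean isFree(int minute) { return status[minute] == FREE; }
    }

    // A resource carries an ordered list of per-day status arrays.
    class Resource {
        final String name;
        final List<AvailabilityStatus> days = new ArrayList<>();
        Resource(String name) { this.name = name; }
    }
    ```

    For example, "the gym is open from noon to 5pm" becomes `day.mark(720, 1020, AvailabilityStatus.FREE)`; a reservation then overwrites part of that range with `BOOKED`.
    
    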

  • LDAP design question for multiple sites

    I'm planning to implement Sun Java System Directory Server 5.2 2005Q1 to replace NIS.
    Currently we have 3 sites with different NIS domains.
    Since NFS over the WAN is very unreliable, I would like to implement the following:
    1. 3 LDAP servers + a replica for each site.
    2. A single username and password for every end user across the 3 sites.
    3. Different auto_master, auto_home and auto_local maps for the three sites, so when a user logs in at a different site, the password is the same but the home directory is different (local).
    So the questions are
    1. Should I need to have 3 domains for LDAP?
    2. If yes for question 1, then how can I keep the username password sync for three domains? If no for question 1, then what is the DIT (Directory Infrastructure Tree) or directory structure I should use?
    3. How to make auto map work on LDAP as well as mount local home directory?
    I would really appreciate it if some LDAP experts could enlighten me on this project.

    Thanks for your information.
    My current environment has 3 sites with 3 different NIS domain names: SiteA: A.com, SiteB: B.A.com, SiteC: C.A.com (A.com is our company domain name).
    So every time I add a new user account, I need to create it on three NIS domains separately. Also, the password gets out of sync if a user changes it at one site.
    I would like to migrate NIS to LDAP.
    I want to have single username and password for each user on 3 sites. However, the home directory is on local NFS filer.
    Say for userA, his home directory is /user/userA in passwd file/map. On location X, his home directory will mount FilerX:/vol/user/userA,
    On location Y, userA's home directory will mount FilerY:/vol/user/userA.
    So the mount drive is determined by auto_user map in NIS.
    In other words, there will be 3 different auto_user maps in 3 different LDAP servers.
    So userA login hostX in location X will mount home directory on local FilerX, and login hostY in location Y will mount home directory on local FilerY.
    But the username and password will be the same on three sites.
    That's my goal.
    Some LDAP experts suggested MMR (multi-master replication), but I am still not quite sure how to set up MMR.
    It would be appreciated if some LDAP guru could give me some guidance as a starting point.
    Best wishes

  • Design question for database connection in multithreaded socket-server

    Dear community,
    I am programming a multithreaded socket server. The server creates a new thread for each connection.
    The threads, and several objects which are instantiated by each thread, need database connectivity. Therefore I implemented a factory class which administers database connections in a pool. At this point I have a design question.
    How should I access the connections from the threads? There are two options:
    a) Should I implement in my server class a new method like "getDatabaseConnection" which calls the factory class and returns a pooled connection to the database? In this case each object has to know the server object and call this method to get a database connection. That could become very complex, as I would have to save an instance of the server object in each object...
    b) Should I develop a static method in my factory class so that each thread could get a database connection by calling the static method of the factory?
    Thank you very much for your answer!
    Kind regards,
    Dak

    So your suggestion is to use a static method from a central class. But those static methods are not really object-oriented, are they?
              There's only one static method, and that's getInstance().
    If I use the singleton pattern, I only create one instance of the database pooling class in order to configure it (driver, access data to the database and so on). The threads then use a static method of this class to get a database connection?
              They use a static method to get the pool instance; getConnection() is not static.
    Kaj
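    A minimal sketch of the pattern described here: a singleton pool class whose only static method is getInstance(), with a non-static getConnection(). A String stands in for java.sql.Connection so the sketch runs without a database; all names are illustrative.

    ```java
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    class ConnectionPool {
        private static final ConnectionPool INSTANCE = new ConnectionPool(4);

        // Stand-in for real java.sql.Connection objects.
        private final BlockingQueue<String> pool;

        private ConnectionPool(int size) {
            pool = new ArrayBlockingQueue<>(size);
            for (int i = 0; i < size; i++) pool.add("conn-" + i);
        }

        // The only static method: access to the shared instance.
        static ConnectionPool getInstance() { return INSTANCE; }

        // Not static - called on the instance; blocks until a connection is free.
        String getConnection() throws InterruptedException { return pool.take(); }

        void release(String conn) { pool.add(conn); }
    }
    ```

    Each worker thread then calls `ConnectionPool.getInstance().getConnection()` without needing a reference to the server object.
    
    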

  • SOA real-time design question

    Hi All,
    We are currently working with SOA Suite 11.1.1.4. I have a SOA application requirement to receive a real-time feed for six data tables from an external third party. The implementation consists of five one-way operations in the WSDL that populate the six database tables.
    I have a design question. The organization plans to use this data across various departments, which requires replicating or supplying the data to other internal databases.
    In my understanding there are two options:
    1) Within the SOA application, fork the data hitting the web service out to the different databases. My concern with this approach: what if departments keep coming with such requests and I keep forking, supplying multiple internal databases with the same data? This feed has to be real-time; too much forking will impact performance and create unwanted dependencies on this critical data-supply link.
    2) I could tell the other internal projects to get the data from the populated main database. My concern here is that the data is pushed into this database flat, without any constraints, and it is difficult to query for specific data. This design was purposely put in place to facilitate real-time performance. Also, asking every internal project to read from the main database will affect its performance.
    Please suggest which approach I should take (advantages/disadvantages). Apart from the above two solutions, is there any other recommended way to mitigate the risks? This link between our organization and the external party is something of a lifeline for BAU, so I certainly don't want to create more dependencies and overhead.
    Thanks

    I had tried implementing the JMS publisher/subscriber pattern before; unfortunately, performance was not good compared to writing directly to the DB adapter. I feel the organization's SOA infrastructure is not set up to cope with the number of messages coming in from the external third party. Our current setup consists of three WebLogic servers (Admin, SOA, BAM) all running on one machine with only 8 GB of physical RAM. Is there an Oracle guideline for sizing infrastructure for a SOA application receiving roughly 600,000 messages a day? I am using SOA 11.1.1.4. The JMS publisher/subscriber pattern just does not cope, and I see significant performance lag after a few hours of running. The JMS server used was WebLogic JMS.
    Thanks

  • Workflow design questions: FM vs WF to call FM

    Here are a couple of workflow design questions.
    1. We have work item 123 that allows the user to navigate to a custom transaction, TX1. The user can make changes in TX1. At save, or at a user command in TX1, the program calls a function module (FM1) to delete WI 123 and create a new WI to send to a different agent.
    Since work item 123 is still open and locked, FM1 cannot delete it immediately; it has to use a DO loop to check whether work item 123 has been dequeued before performing the delete.
    Alternative: instead of calling FM1 directly, the program can raise an event which starts a new workflow with one step/task/method that calls FM1. Even with this alternative, work item 123 can still be locked when the new workflow's task/method calls FM1.
    I do not like the alternative, which calls the same FM1 indirectly via a new workflow/step/task/method.
    2. When an application object changes, the user exit calls an FMx which is related to workflow. The ABAP developer does not want to call FMx directly; she wants to raise an event which calls a workflow .. step .. task .. method .. FMx indirectly. This way, any COMMIT that happens in FMx will not affect the application object's COMMIT.
    My recommendation is to call FMx "in update task" so that FMx is only called after the COMMIT of the application object.
    Any recommendations?
    Amy

    Mike,
    Yes, in my first design, the TX can 1) raise a terminating event for the existing work item/workflow and then 2) raise another event to start another workflow. Both 1 and 2 will be in FM1.
    The design question is then: should FM1 be called from the TX directly, or should the TX raise an event to start a new workflow with one step/task, which calls a method in the business object, which calls FM1?
    In my second design question, when an application object changes, the user exit calls an FMx which is related to workflow. The ABAP developer does not want to call FMx directly; she wants to raise an event which calls a workflow, which has one step/task, which calls a method, which calls FMx indirectly. This way, any COMMIT that happens in FMx will not affect the application object's COMMIT.
    My recommendation is to either call FMx "in update task" so that FMx is only called after the application object's COMMIT, or raise an event to call a receiver FM (FMx).
    Thanks.
    Amy

  • Method design question...and passing object as parameter to webserice

    I am new to web services; one design question.
    I am writing a web service to check whether a user is a valid user or not. Users are categorized as Member, Admin, and Professional, and for each user type I have to hit a different data source to verify.
    I get the user type as a parameter. What is the best approach to defining the method?
    Should I have one single method, isValidUser, that all web service clients call, passing the user type, or should I define a method for each type, like isValidMember and isValidAdmin?
    One more thing: in the future the requirements may change for Professional users to require more fields, in which case the parameter needs more attributes. But on the client side not much changes if I have a single isValidUser method; all they have to do is pass additional values.
    isValidUser(String username, String usertype, String[] userAttributes) {
        if (usertype == member)
            call member code
        else if (usertype == professional)
            call professional code
        else if (usertype == admin)
            call admin code
        else
            throw error
    }
    or
    isValidMember(String username, String[] userAttributes) {
        call member code
    }
    One last question: can a parameter be passed as an object in a web service, like a USER object?
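    One way to keep the single isValidUser entry point while still separating the per-type lookups is a type-to-validator map. Everything below is an illustrative sketch, not the poster's code; the prefix checks stand in for real data-source lookups:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    interface UserValidator {
        boolean isValid(String username, String[] attrs);
    }

    class ValidationService {
        private final Map<String, UserValidator> byType = new HashMap<>();

        ValidationService() {
            // Each entry would hit its own data source; these are stand-ins.
            byType.put("member", (u, a) -> u != null && u.startsWith("m"));
            byType.put("admin", (u, a) -> u != null && u.startsWith("a"));
            byType.put("professional", (u, a) -> u != null && u.startsWith("p"));
        }

        // Single entry point: dispatch on the user type, reject unknown types.
        boolean isValidUser(String username, String usertype, String[] attrs) {
            UserValidator v = byType.get(usertype);
            if (v == null) throw new IllegalArgumentException("unknown user type: " + usertype);
            return v.isValid(username, attrs);
        }
    }
    ```

    With this shape, adding attributes for Professional users later only changes the professional validator; clients keep calling the same method.
    
    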

    First of all, here is my code
    CREATE OR REPLACE
    TYPE USERCONTEXT AS OBJECT
    user_login varchar2,
    user_id integer,
    CONSTRUCTOR FUNCTION USERCONTEXT (
    P_LOGIN IN INTEGER
    P_ID_ID IN INTEGER
    ) RETURN SELF AS RESULT
    Either your type won't compile, or this is not the real code.

  • Aggregation level - design  question

    Hi, All
    We are on BI-IP (NetWeaver 2004s SPS16).
    I have a design question for this scenario:
    The user needs to plan an amount for a duration (start period to end period), e.g. Jan 2008 - Dec 2008 (001.2008 - 012.2008) = 12000.
    We need to distribute this to the periods equally: 001.2008 = 1000, 002.2008 = 1000, ..., 012.2008 = 1000.
    If the user changes the period amounts, the change should be reflected back in the duration amount.
    Please suggest a design for the aggregation levels to achieve this.
    Thanks in advance.
    velu

    Hello Velu,
    As the name suggests, creating an "aggregation level" will only result in aggregation. What you require is called disaggregation or distribution, and this cannot happen automatically: you will either have to use the corresponding feature in input-ready queries, use the distribution planning function, or create your own planning function using FOX or an exit.

  • Centralized WLC Design Question

    Dears,
    In my scenario, I am designing a centralized WLC deployment. I have 30 APs in Building X (200 users) and 20 APs in Building Y (150 users). I am planning to install an HA WLC cluster where the primary and secondary WLCs will reside in physically different data centers, A and B.
    I have a wireless design question and am not able to get clear answers. Please refer to the attached drawing and answer the following queries:
    1. If Building X users want to talk to Building Y users, how will the control and data traffic flow between Buildings X and Y? Will all the traffic go to the primary WLC from the Building X APs first and then be re-routed back to the Building Y APs? Can I achieve direct switching between Building X and Y APs without going through the WLC?
    2. If Building X and Y users want to access the internet, how will the traffic flow? Will the APs in X and Y tunnel all traffic to the WLC, which then routes it to the internet gateway? Is it possible for the Building X and Y APs to send traffic directly to the internet gateway without going to the controllers?
    3. I have planned to put the WLCs at physically different locations in DCs A and B. Is such a design recommended? What would the failover traffic volume be if the primary WLC goes down and the secondary controller takes over?
    My reason for going with a centralized deployment is that I want centralized authentication with local switching. Please give your recommendations and feedback.
    Regards,
    Rameez

    If Building X users want to talk to Building Y users, how will the control and data traffic flow between Buildings X and Y? Will all the traffic go to the primary WLC from the Building X APs first and then be re-routed back to the Building Y APs? Can I achieve direct switching between Building X and Y APs without going through the WLC?
              Traffic flows to the WLC that is the primary for the APs; then it is routed over your network.
    If Building X and Y users want to access the Internet, how will the traffic flow? Will the APs in X and Y tunnel all traffic to the WLC, which then routes it to the Internet gateway? Is it possible for the Building X and Y APs to send traffic directly to the Internet gateway without going to the controllers?
              The WLC isn't a router, so you would have to put the Internet traffic on a subnet and route it.
    I have planned to put the WLCs at physically different locations in DCs A and B. Is such a design recommended? What would the failover traffic volume be if the primary WLC goes down and the secondary controller takes over?
    Like I mentioned earlier, the two HA WLCs have to be on the same Layer 2 subnet in order for you to use HA.  The guide mentions an Ethernet cable connecting the HA ports on the two WLCs.
    Thanks,
    Scott
    Help out others by using the rating system and marking answered questions as "Answered"

  • Dreamweaver design question

    Hi all. I'm new to the forum and had a design question. My site took about 3 weeks to complete, and after finishing what I thought was a pretty error-free website, I noticed that Dreamweaver 8 was coming up with numerous errors matching http://validator.w3.org's scans. My question is this: why does Dreamweaver (regardless of the release) let the designer build the website without pointing out the errors as they go along, with simple instructions on how to fix them? As an example, my meta tags
    <META NAME="keywords" CONTENT="xxxxxxx">
    <META NAME="description" CONTENT="xxxxxxxx">
    <META NAME="robots" CONTENT="xxxxx">
    <META NAME="author" CONTENT="xxxxxx">
    <META NAME="copyright" CONTENT="xxxxxx">
    all had to be changed over to
    <meta name="keywords" CONTENT="xxxxxxxxxxxxx">
    <meta name="description" CONTENT="xxxxxxx">
    <meta name="robots" CONTENT="xxxxxx">
    <meta name="author" CONTENT="xxxxxxxx">
    <meta name="copyright" CONTENT="xxxxxxxx">
    all because dreamweaver didnt tell me that the <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
       "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    didnt fit the original design. Now my site ( if you wish to view the code ) is www.gamblingwhore.com and if you look at the page source you will see that the code has been corrected on dw 8 but still shows more than 30 errors on http://validator.w3.org. Does dreamwevaer not have the basic tool available to fix these errors without such hassle. Its not just my site either, many sites built in dreamwever can be checked with the http://validator.w3.org website only to find more than 20 -100 different errors.
Dreamweaver's creators need to focus on these errors, because they hinder SEO and create a lot of extra work.
    Thank you
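For reference, here is a minimal XHTML 1.0 Transitional document with the meta elements written the way the validator expects: lowercase element and attribute names, quoted attribute values, and every element closed (the name/content values below are placeholders, not the poster's actual tags):

```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
   "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <title>Example page</title>
  <!-- XHTML requires lowercase names: name/content, not NAME/CONTENT.
       Empty elements like meta are self-closed with " />". -->
  <meta name="keywords" content="placeholder, keywords" />
  <meta name="description" content="Placeholder description." />
  <meta name="robots" content="index, follow" />
  <meta name="author" content="Placeholder Author" />
  <meta name="copyright" content="Placeholder Copyright" />
</head>
<body><p>...</p></body>
</html>
```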

The W3C and XHTML have come a long way since the release of Dreamweaver 8 (I used it in late 2004 and 2005).
    Dreamweaver 8 will build transitional XHTML files as well as old style single tag HTML. It all depends on the personal preferences of the designer.
Just for kicks, go to, say, 20 random websites and see just how many get a green light when you validate them. If it's half, you're lucky. This page doesn't even validate.
Dreamweaver has a menu option (at least in CS3 and CS4) under the Commands menu to "Clean Up HTML" or "Clean Up XHTML", depending on what you're building. I make a point of running that command as I build, along with Apply Source Formatting.
I also use a local validator program to check my code before putting anything live.
    That's why they call it WYSIWYG software.
    If it did everything perfectly for everyone every single time, good web designers would find themselves out of work.

  • OSPF Area Addition - Design Question

    Hello,
I have a design question regarding OSPF. I am looking to add a new OSPF area (area 1). The area will live on two core routers and two distribution routers. Can you please look at the attached pics and tell me which design is better?
    I would like to be able to connect Core-01 to Dist-01 and Core-02 to Dist-02 with a connection between Dist-01 and Dist-02, but this will result in a discontiguous area, correct?
    Thanks,
    Lee

    I would say that the more common design is to have just backbone area links between the core routers. But there is no real issue with having an area 1 link between them...
If I were you, I would not make the area a totally stubby NSSA. Here are my reasons:
    - you will get sub-optimal routing out of the area since you have two ABRs and each distribution router will pick the closest one of them to get out to the backbone even though it may be more optimal to use the other one
    - in an NSSA case, one of the two ABRs will be designated as the NSSA translator, which means that if you are doing summarisation on the ABRs, all traffic destined for these summarised routes will be drawn to the area through that one ABR.
    Paresh
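
As a rough sketch of what Paresh describes (hypothetical process ID, router names, and summary range), a plain NSSA is configured without the no-summary keyword; adding no-summary on the ABRs is what would make it a totally stubby NSSA:

```
! On each ABR (Core-01 / Core-02) - plain NSSA, as suggested:
router ospf 1
 area 1 nssa
 area 1 range 10.1.0.0 255.255.0.0   ! optional summarisation at the ABR

! On the internal area 1 routers (Dist-01 / Dist-02):
router ospf 1
 area 1 nssa

! What is advised against would instead use, on the ABRs:
! area 1 nssa no-summary
```

Note the nssa keyword must match on every router in the area, or adjacencies will not form.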
