SLD Landscape Design Question

After reading the Planning Guide - System Landscape Directory, I am left with a question.
The Setup:
Standard three-system landscape
ECC 6.0 EhP5 on NW 7.02
ECC Instances: DE1, QE1, PE1
PI 7.31
PI Instances: DP1, QP1, PP1
PI Business Systems: DE1CLNT100, QE1CLNT100, PE1CLNT100
Each PI instance has a local SLD
Management Landscape
Solution Manager 7.1 on NW 7.02
SolMan Instance: SP1
Local SLD
LMDB
Reading through the Planning Guide, the recommendation (pg. 45, picture below) seems to be to have all systems self-report to PP1. That is, without synchronization, only PP1 knows about any systems, because they all report to PP1 and do not report to their local SLDs. The data is then sent back to DP1 & QP1 via bridge forwarding. Business systems are manually entered into their respective local SLDs, but then have to be manually exported/imported from DP1 --> QP1 --> PP1. Once it is all gathered into PP1 again, all data is pushed to SolMan's SLD on SP1 via a unidirectional content synchronization.
If I read this correctly, then all SLDs will end up "knowing" all systems and business systems: e.g. the SLD of DP1 will have entries for QP1 and PP1 and for business systems QE1CLNT100 and PE1CLNT100.
Earlier in the guide, I gathered that there is value in not having all of the data in Development, because that gives a clear separation between non-production SLDs and production SLDs. The recommended setup above seems to go against this idea of separation. Plus, it requires a significant amount of manual upkeep.
As an alternative (pictured below), what if each environment only reported to its local SLD (DE1 --> DP1, QE1 --> QP1, PE1 --> PP1)? (Note that in the picture the SLD is drawn separately, but I've heard that most customers run their local SLD on PI, so the SLD depicted in each environment actually runs on the PI system in this scenario.) Further, what if the SLDs then synchronized sequentially in a full unidirectional content sync fashion? (DP1 does a full unidirectional content sync to QP1, QP1 does a full unidirectional content sync to PP1, and PP1 does a full unidirectional content sync to SP1.)
DP1 would only know about its own systems. QP1 would know about its own plus DP1's (which it needs). PP1 and SP1 would both have full visibility into all systems. Furthermore, the manually entered business systems would also be automatically synchronized, so the whole manual export/import process could be avoided; alternatively, a manual export/import could be kept between QA and Production for safety, with an automatic bridge forward of Data Supplier data.
Are there downsides to the second scenario that I am not seeing? Please comment!
Best regards,
  --Tom

Although the guide states that DEV systems only need to know about other systems in the DEV environment, QAS systems only need to know about systems in the DEV and QAS environments, and only PRD needs to know all systems, I am finding that I can't get the groups set up correctly when the DEV environment has no visibility into the QAS environment.
It seems that the recommended structure (where all systems are known in all environments) facilitates the correct setup of the PI transports. So far, that is the biggest argument I've been able to find against the second approach outlined above. Using a full sync all the way through is also not an option if you're doing any development, as any changes in the DEV system flow through to PRD almost instantly. Using automatic bridge forwarding and export/import might be a usable alternative, but the recommended approach from page 41 of the planning guide does seem to have the best of all worlds.

Similar Messages

  • SLD Landscape Question

    Hi
    We are about to implement the SLD landscape for our PI systems.
    From the documentation (SDN and HELP.SAP.COM), it is recommended to have all SLD Data Suppliers send to the PI Production SLD Bridge.  The SLD data is then automatically forwarded to the DEV/QA SLD Bridge (Design Time SLD).
    This is straight forward and I can see how to get the ABAP stack to look at the correct SLD Server.
    However, when it comes to the JAVA stack, I can't see how to configure the SLD CLIENT of an instance in the design-time landscape to use the design-time SLD Server.
    The NetWeaver version is 7.0.
    Thanks
    Doug

    SAP Web AS (Java) 6.30/6.40 Start-up: JProbe's Direct Support
    This feature is available starting with JProbe 5.2.2. For details of application server integration, refer to the JProbe documentation. Here, the single steps for the SAP application server are briefly described.
    Step 1: Create an Integration for the instance to be profiled.
          Click Tools -> Application Server Integration.
          Choose SAP Application Server 6.30.
          Click Create.
          To fill out the columns, follow the help you get via tooltip.
          Specify whether server or dispatcher should be profiled.
    Step 2: Create a Configuration.
          Click Tools -> Manage J2EE Configurations.
          Click Add.
          Fill in Configuration Name and Integration.
          Optionally, you could specify an application to be profiled; this is just used to help to set the correct filters for profiling. If you know which packages or classes should be profiled, you can leave these fields empty; otherwise, specify the directory where the application is deployed and the .ear or .war file in this directory.
    Step 3: Create the J2EE Settings.
          Click Session -> New J2EE Settings.
          Specify the configuration.
          Specify the other options, especially set filters (optionally, you could use the application specified in the configuration for filtering).
    Step 4: Use the Connection Manager.
          Unlike "startup via JLaunch" below, the Connection Manager has to be used.
          Click Tools -> Options -> Advanced Session; mark "Use Connection Manager."
    Step 5: Check the property 'IDxxxxxx.JavaPath' in instance.properties in the directory /usr/sap/<SID>/JCxx/j2ee/cluster
          JProbe relies on the property to find Java home, but there are Web AS 6.30/6.40 SPs that do not have this property.
          If it is not included, add it for the dispatcher or server (depending on which one should be profiled).
          For example, add "ID8156650.JavaPath=C:/java/jdk1.4.2_06".
          Be aware that instance.properties is overwritten by the standard startup procedure, so you might have to repeat this step.
    Step 6: Start the J2EE Engine with JProbe
          Start the database and the central instance (enqueue and message server).
          In JProbe click "Run" for the J2EE settings defined before.
          Dispatcher and server of the J2EE Engine will be started; the one specified in the Integration will be profiled.
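    For what it's worth, the check in Step 5 can be automated. Here is a minimal sketch, assuming a standard java.util.Properties-format instance.properties file; the class and method names are mine, not SAP's or JProbe's:

```java
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Properties;

public class JavaPathCheck {
    // Returns true if the given instance.properties already defines
    // <nodeId>.JavaPath; otherwise appends the property (Step 5 above)
    // and returns false so the caller knows it was missing.
    static boolean ensureJavaPath(File props, String nodeId, String javaHome) throws IOException {
        Properties p = new Properties();
        try (FileReader r = new FileReader(props)) {
            p.load(r);
        }
        String key = nodeId + ".JavaPath";
        if (p.getProperty(key) != null) {
            return true;
        }
        try (FileWriter w = new FileWriter(props, true)) { // append, keep existing lines
            w.write(System.lineSeparator() + key + "=" + javaHome + System.lineSeparator());
        }
        return false;
    }
}
```

    Remember that, as noted in Step 5, the standard startup procedure rewrites instance.properties, so the check may have to be run again after a normal start.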
    SAP Web AS (Java) 6.30/6.40 Start-up: Using JProbe 5.2.x via JLaunch
    Starting the Server via JLaunch
    Detailed instructions on how to start JLaunch can be found in the documentation of SAP Web AS (Java). Here we give only a minimal list of steps:
       1. Include the location of JLaunch in the PATH.
       2. Go to the base directory of the dispatcher or server (depending on which one you want to start), then run:
          jlaunch pf=c:\usr\sap\<SID>\SYS\profile\<SID>_JCxx_<Host> -nodename=IDxxxxxxx
          where "IDxxxxxxx" can be figured out from the "instance.properties" in the "cluster" directory.
    Creating a JProbe Startup File
    In order to start up JProbe via JLaunch, you need to create a start-up file (.jpl) with the JProbe launchpad as follows:
       1. Click Session -> New J2SE Settings.
       2. Enter filters, triggers, etc.
       3. In the Configuration, the main class and classpath fields are mandatory; a dummy entry for the main class is sufficient, and for the classpath you could use %CLASSPATH%.
       4. Save the .jpl file.
       5. Switch off the Connection Manager: JProbe 5.2.x introduces a Connection Manager, which does not work if a Java application is started from a C framework; therefore use Tools -> Options -> Session -> Advanced to switch it off.
       6. Change the .jpl file with a text editor:
             Comment out the line starting with -jp_java.
             Add the connection information and the snapshot directory information; because of the introduction of the Connection Manager, this is not written when the .jpl file is saved in the launchpad:
                 o -jp_socket=<host for analysis>:<port> (e.g. localhost:4444)
                 o -jp_snapshot_dir=<snapshot directory> (e.g. c:\Jprobe\snapshots)
    Starting JProbe with the Server
    Start JLaunch as above, but attach the following parameters to the JLaunch command line:
    jlaunch pf=... -nodename=... -Xbootclasspath/a:JProbe-Base-Dir\lib\jpagent.jar
                -Xrunjprobeagent:-jp_input=.jpl-file
    For the .jpl file, please specify the complete path. If there are blanks in the file or directory names, use double quotes or the DOS notation (e.g. PROGRA~1). After starting JLaunch, you can attach the viewer to it (JProbe -> Program -> Attach to Remote/Running Session).
    regards
    chandra

  • SLD Landscape Best Practice recommendation

    I am seeking advice on setting up SLD in my ever-growing landscape.
    I currently have a master SLD on the same system as my NWDI system and a local SLD on my Solution Manager 7.0 system that is updated from the master.
    The master SLD is shared by our ECC6 dual-stack landscape and my BI 7.0 dual-stack portal landscape.
    I have upcoming projects of implementing a PI 7.1 landscape, implementing CTS+, and a Solution Manager Enterprise upgrade, all of which will be heavily dependent on SLD.
    I have seen documentation that PI would like its own local SLD.
    My question is what would be the preferred SLD landscape and how do I get there. Any recommendations or best practices would be most appreciated.
    Bill Stouffer
    Basis Administrator

    Hi,
    The SLD setup we have implemented in our landscape is like below:
    1) All PI and Portal systems have a local SLD.
    2) We have one SLD for all non-production systems and a separate SLD for production systems.
    3) This means we are following the 3-tier SLD landscape recommended by SAP.
    4) The main SLD lies on SolMan; production and non-production each have their own SLD, and both send their data to the main SLD on SolMan.
    5) All systems except PI and Portal send data directly to the production or non-production SLD. PI and Portal systems first send data to their local SLD, which in turn sends it to the production or non-production SLD.
    6) This way your whole environment is secure, as the production SLD is separate.
    So, I recommend the 3-tier SLD approach. One important thing: don't use a central user to send data across SLDs, as a lock on that one user would in turn bring down the whole environment. Instead, create a system-specific user for each system's data transfer, so that a user lock on one system will not impact the others.
    If you need any other information, please let me know.
    Thanks
    Sunny

  • Design question: Scheduling a Variable-timeslot Resource

    I originally posted this in general Java programming, because this seemed like a more high-level design discussion. But now I see some class design questions. Please excuse me if this thread does not belong here (this is my first time using the forum, save answering a couple of questions).
    Forum,
    I am having trouble determining a data structure and applicable algorithm (actually, even more general than the data structure -- the general design to use) for holding a modifiable (but more heavily read/queried than updated), variable-timeslot schedule for a given resource. Here's the situation:
    Let's, for explanation purposes, say we're scheduling a school. The school has many resources. A resource is anything that can be reserved for a given event: classroom, gym, basketball, teacher, janitor, etc.
    Ok, so maybe the school deal isn't the best example. Let's assume, for the sake of explanation, that classes can be any amount of time in length: 50 minutes, 127 minutes, 4 hours, 3 seconds, etc.
    Now, the school has a base operation schedule, e.g. they're open from 8am to 5pm MTWRF and 10am to 2pm on Saturday and Sunday. Events in the school can only occur during these times, obviously.
    Then, each resource has its own base operation schedule, e.g. the gym is open from noon to 5pm MTWRF and noon to 2pm on sat. and sun. The default base operation schedule for any resource is the school which "owns" the resource.
    But then there are exceptions to the base operation schedule. The school (and therefore all its resources) are closed on holidays. The gym is closed on the third friday of every month for maintenance, or something like that. There are also exceptions to the available schedule due to reservations. I've implemented reservations as exceptions with a different status code to simplify things a little bit: because the basic idea is that an exception is either an addition to or removal from the scheduleable times of that resource. Each exception (reservation, closed for maintenance, etc) can be an (effectively) unrestricted amount of time.
    Ok, enough set up. Somehow I need to be able to "flatten" all this information into a schedule that I can display to the user, query against, and update.
    The issue is complicated more by recurring events, but I think I have that handled already and can make a recurring event be transparent from the application point of view. I just need to figure out how to represent this.
    This is my current idea, and I don't like it at all:
    A TimeSlot object, holding a beginning date and ending date. A data structure that holds list of TimeSlot objects in order by date. I'd probably also hold an index of some sort that maps some constant span of time to a general area in the data structure where times around there can be found, so I avoid O(n) time searching for a given time to find whether or not it is open.
    I don't like this idea, because it requires me to call getBeginningDate() and getEndDate() for every single time slot I search.
    Anyone have any ideas?
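    The ordered-list-plus-index idea above can be collapsed into a single sorted map, which gives O(log n) containment lookups without maintaining a separate index. A minimal sketch (class and method names are illustrative, and it assumes non-overlapping slots):

```java
import java.time.LocalDateTime;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class TimeSlotIndex {
    // Slots keyed by begin time; assumes slots do not overlap.
    private final NavigableMap<LocalDateTime, LocalDateTime> slots = new TreeMap<>();

    public void add(LocalDateTime begin, LocalDateTime end) {
        slots.put(begin, end);
    }

    // O(log n): the only candidate containing t is the slot with the
    // latest begin time at or before t, so one floor lookup suffices.
    public boolean isOpenAt(LocalDateTime t) {
        Map.Entry<LocalDateTime, LocalDateTime> e = slots.floorEntry(t);
        return e != null && t.isBefore(e.getValue());
    }
}
```

    This avoids calling getBeginningDate()/getEndDate() on every slot during a search; the tree does the narrowing.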

    If I am correct, your requirement is to display a schedule, showing the occupancy of a resource (open/closed/used/free and other kind of information) on a time line.
    I do not say that your design is incorrect. What I state below is strictly my views and should be treated that way.
    I would not go by time-slot; instead, I would go by resource. For instance, the gym, the classrooms (identified accordingly), the swimming pool, etc. are all resources. Therefore (for the requirements you have specified), I would create a class, let's say "Resource", to represent all the resources. I would recommend two attributes at this stage ("name" & "identifier").
    The primary attribute of interest in this case would be a date (starting at 00:00 hrs and ending at 24:00 hrs), a span of 24 hours broken down to the smallest unit of a minute (seconds really are not very practical here).
    I would next encapsulate the availability factor, which represents the concept of availability, in a class, for instance "AvailabilityStatus". The recommended attributes would be "date" and "status".
    You have mentioned different statuses, for instance available, booked, closed, under maintenance, etc. Each of these is a category. Let us say they are numbered from 0 to n (where n < 128).
    The "date" attribute could be a java.util.Date object, representing a date. The "status" is a byte array of 1440 elements (one element for each minute of the day). Each element of the byte array is populated by the number designation of the status (i.e. 0, 1, 2, ... n), where the number represents the status of that minute.
    The "Resource" class would carry an attribute "resourceStatus", an ordered vector of "AvailabilityStatus" objects.
    The objects could all be populated manually at any time, or the entire process could be automated (that is a separate area).
    The problem of representation is over. You could add any number of resources as well as any number of status categories.
    This is a simple solution; I do not address the issues of querying this information and rendering the actual schedule, which I believe are straightforward enough.
    It is recognized that there is scope for optimization/design rationalization here; however, this is a simple and effective enough solution.
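    The byte-array representation described above could be sketched like this; the status codes and class layout are illustrative assumptions, not a definitive implementation:

```java
import java.time.LocalDate;
import java.util.Arrays;

public class AvailabilityStatus {
    // Illustrative status categories, numbered 0..n as described above.
    public static final byte CLOSED = 0, OPEN = 1, BOOKED = 2, MAINTENANCE = 3;

    private final LocalDate date;
    private final byte[] status = new byte[1440]; // one element per minute of the day, CLOSED by default

    public AvailabilityStatus(LocalDate date) {
        this.date = date;
    }

    // Mark [startMinute, endMinute) with a status code, e.g. OPEN for base hours,
    // then overlay BOOKED or MAINTENANCE exceptions on top.
    public void set(int startMinute, int endMinute, byte code) {
        Arrays.fill(status, startMinute, endMinute, code);
    }

    // O(1) lookup: what state is the resource in at a given minute?
    public byte statusAt(int minuteOfDay) {
        return status[minuteOfDay];
    }

    public LocalDate getDate() {
        return date;
    }
}
```

    For example, the gym's noon-to-5pm base hours become set(12 * 60, 17 * 60, OPEN), and a 1pm reservation simply overwrites that range with BOOKED; later writes flatten earlier ones, which matches the "exceptions overlay the base schedule" idea.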
    regards
    [email protected]

  • LDAP design question for multiple sites

    I'm planning to implement Sun Java System Directory Server 5.2 2005Q1 to replace NIS.
    Currently we have 3 sites with different NIS domains.
    Since NFS over the WAN connection is very unreliable, I would like to implement the following:
    1. 3 LDAP servers + a replica for each site.
    2. A single username and password for every end user across those 3 sites.
    3. Different auto_master, auto_home and auto_local maps for the three sites, so when a user logs in at a different site, the password is the same but the home directory is different (local).
    So the questions are:
    1. Do I need to have 3 domains for LDAP?
    2. If yes to question 1, how can I keep the username/password in sync across the three domains? If no, what DIT (Directory Information Tree) or directory structure should I use?
    3. How do I make the automount maps work in LDAP and mount the local home directory?
    I would really appreciate it if some LDAP experts could enlighten me on this project.

    Thanks for your information.
    My current environment has 3 sites with 3 different NIS domain names: Site A: A.com, Site B: B.A.com, Site C: C.A.com (A.com is our company domain name).
    So every time I add a new user account, I need to create it on the three NIS domains separately. Also, the password goes out of sync if a user changes it at one site.
    I would like to migrate NIS to LDAP.
    I want to have a single username and password for each user across the 3 sites. However, the home directory is on a local NFS filer.
    Say for userA, his home directory is /user/userA in the passwd file/map. At location X, his home directory will mount FilerX:/vol/user/userA;
    at location Y, userA's home directory will mount FilerY:/vol/user/userA.
    So the mount is determined by the auto_user map in NIS.
    In other words, there will be 3 different auto_user maps in 3 different LDAP servers.
    So userA logging in to hostX at location X will mount his home directory on the local FilerX, and logging in to hostY at location Y will mount it on the local FilerY.
    But the username and password will be the same at all three sites.
    That's my goal.
    Some LDAP experts have suggested MMR (Multi-Master Replication), but I'm still not quite sure how to do MMR.
    It would be appreciated if some LDAP guru could give me some guidelines on a starting point.
    Best wishes

  • Design question for database connection in multithreaded socket-server

    Dear community,
    I am programming a multithreaded socket server. The server creates a new thread for each connection.
    The threads, and several objects which are instantiated by each thread, have to access database connectivity. Therefore I implemented a factory class which administers database connections in a pool. At this point I have a design question.
    How should I access the connections from the threads? There are two options:
    a) Should I implement in my server class a new method like "getDatabaseConnection" which calls the factory class and returns a pooled connection to the database? In this case each object has to know the server object and has to call this method in order to get a database connection. That could become very complex, as I would have to store an instance of the server object in each object ...
    b) Should I develop a static method in my factory class so that each thread could get a database connection by calling the static method of the factory?
    Thank you very much for your answer!
    Kind regards,
    Dak

    "So your suggestion is to use a static method from a central class. But those static methods are not really object oriented, are they?"
    There's only one static method, and that's getInstance.
    "If I use the singleton pattern, I only create one instance of the database pooling class in order to configure it (driver, access data for the database and so on). The threads then use a static method of this class to get a database connection?"
    They use a static method to get the pool instance; getConnection is not static.
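    A minimal sketch of the singleton pool being described; the class and method names are illustrative, and the JDBC URL in any real use would be your own:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.ArrayDeque;
import java.util.Deque;

public final class ConnectionPool {
    private static ConnectionPool instance;              // the single shared pool
    private final Deque<Connection> idle = new ArrayDeque<>();
    private final String url, user, password;

    private ConnectionPool(String url, String user, String password) {
        this.url = url;
        this.user = user;
        this.password = password;
    }

    // Configure once at startup (driver URL, credentials, and so on)...
    public static synchronized void configure(String url, String user, String password) {
        if (instance == null) instance = new ConnectionPool(url, user, password);
    }

    // ...then every thread reaches the pool through the one static method.
    public static synchronized ConnectionPool getInstance() {
        if (instance == null) throw new IllegalStateException("call configure() first");
        return instance;
    }

    // Instance methods: getConnection itself is not static.
    public synchronized Connection getConnection() throws SQLException {
        Connection c = idle.poll();                      // reuse an idle connection if one exists
        return (c != null) ? c : DriverManager.getConnection(url, user, password);
    }

    public synchronized void release(Connection c) {
        idle.push(c);                                    // return the connection for reuse
    }
}
```

    Each thread then just calls ConnectionPool.getInstance().getConnection(), without needing a reference to the server object.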
    Kaj

  • SOA real-time design question

    Hi All,
    We are currently working with SOA Suite 11.1.1.4. I have a SOA application requirement to receive real-time feed for six data tables from an external third party. The implementation consists of five one-way operations in the WSDL to populate the six database tables.
    I have a design question. The organization plans to use this data across various departments, which requires replicating or supplying the data to other internal databases.
    In my understanding there are two options:
    1) Within the SOA application, fork the data hitting the web service out to the different databases.
    My concern with this approach is: what if departments keep coming with such requests and I keep forking and supplying multiple internal databases with the same data? This feed has to be real-time; too much forking will impact performance and create unwanted dependencies on this critical link for data supply.
    2) I could tell other internal projects to get the data from the populated main database.
    My concern here is that, firstly, the data is pushed into this database flat, without any constraints, and it is difficult to query for specific data. This design was purposely put in place to facilitate real-time performance. Also, asking every internal project to get data from the main database will affect its performance.
    Please suggest which approach I should take (advantages/disadvantages). Apart from the above two solutions, is there any other recommended solution to mitigate the risks? This link between our organization and the external party is somewhat of a lifeline for BAU, so I certainly don't want to create more dependencies and overhead.
    Thanks

    I had tried implementing the JMS publisher/subscriber pattern before; unfortunately, performance was not as good as writing directly to the DB adapter. I feel the organization's SOA infrastructure is not set up to cope with the number of messages coming through from the external third party. Our current setup consists of three WebLogic servers (Admin, SOA, BAM), all running on only 8 GB of physical RAM on one machine. Is there an Oracle guideline for setting up infrastructure for a SOA application receiving roughly 600,000 messages a day? I am using SOA 11.1.1.4. The JMS publisher/subscriber pattern just does not cope, and I see significant performance lag after a few hours of running. The JMS server used was WebLogic JMS.
    Thanks

  • Workflow design questions: FM vs WF to call FM

    Here's a couple of workflow design questions.
    1. We have work item 123 that allows the user to navigate to a custom transaction TX1. The user can make changes in TX1. At save, or at a user command of TX1, the program will call a function module (FM1) to delete WI 123 and create a new WI to send to a different agent.
    Since work item 123 is still open and locked, FM1 cannot delete it immediately; it has to use a DO loop to check whether work item 123 is dequeued before performing the WI delete.
    Alternative: instead of calling FM1 directly, the program can raise an event which calls a new workflow, which has one step/task/method which calls FM1. Even with this alternative, work item 123 can still be locked when the new workflow's task/method calls FM1.
    I do not like the alternative, which calls the same FM1 indirectly via a new workflow/step/task/method.
    2. When an application object changes, the user exit will call an FMx which is related to workflow. The ABAP developer does not want to call the FMx directly; she wants to raise an event which calls a workflow .. step .. task .. method .. FMx indirectly. This way any commit that happens in the FMx will not affect the application object's COMMIT.
    My recommendation is to call the FMx using 'in update task' so that the FMx is only called after the COMMIT of the application object.
    Any recommendations?
    Amy

    Mike,
    Yes, in my first design, the TX can 1. raise a terminating event for the existing work item/workflow and then 2. raise another event to call another workflow. Both 1 and 2 will be in FM1.
    Then the design question is: should FM1 be called from the TX directly, or should the TX raise an event to call a new workflow which has one step/task, which calls a method in the business object, which calls FM1?
    In my second design question, when an application object changes, the user exit will call an FMx which is related to workflow. The ABAP developer does not want to call the FMx directly; she wants to raise an event which calls a workflow, which has one step/task, which calls a method, which calls the FMx indirectly. This way any commit that happens in the FMx will not affect the application object's COMMIT.
    My recommendation is to either call the FMx using 'in update task', so that the FMx is only called after the COMMIT of the application object, or raise an event to call a receiver FM (FMx).
    Thanks.
    Amy

  • Method design question... and passing an object as a parameter to a webservice

    I am new to web services... one design question.
    I am writing a web service to check whether a user is a valid user or not. The users are categorized as Member, Admin and Professional. For each user type I have to hit a different data source to verify.
    I get this user type as a parameter. What is the best approach to define the method?
    Should I have one single method "isValidUser" that all the web service clients can always call, providing the user type, or should I define a method for each type, like isValidMember, isValidAdmin?
    One more thing... in the future the requirement may change for Professional to have more required fields, in which case the parameter needs to have more attributes. But on the client side there is not much change if I have a single isValidUser method... all they have to do is pass additional values.
    boolean isValidUser(String username, String usertype, String[] userAttributes) {
        if (usertype.equals("member"))
            return isValidMember(username, userAttributes);
        else if (usertype.equals("professional"))
            return isValidProfessional(username, userAttributes);
        else if (usertype.equals("admin"))
            return isValidAdmin(username, userAttributes);
        else
            throw new IllegalArgumentException("unknown usertype: " + usertype);
    }
    or
    boolean isValidMember(String username, String[] userAttributes) {
        // member validation code
    }
    One last question, can the parameter be passed as object in web service like USER object.

    First of all, here is my code
    CREATE OR REPLACE
    TYPE USERCONTEXT AS OBJECT
    user_login varchar2,
    user_id integer,
    CONSTRUCTOR FUNCTION USERCONTEXT (
    P_LOGIN IN INTEGER
    P_ID_ID IN INTEGER
    ) RETURN SELF AS RESULT
    Either your type won't compile or this is not the real code..

  • Aggregation level - design  question

    Hi, All
    we are in BI-IP ( Netweaver 2004s SPS16).
    I have a design question for this scenario.
    The user needs to plan amounts for a duration (start period and end period), e.g. Jan 2008 - Dec 2008 (001.2008 - 012.2008) = 12000.
    We need to distribute this to the periods equally: 001.2008 = 1000, 002.2008 = 1000, ..., 012.2008 = 1000.
    If the user changes the period amounts, it should be reflected back in the duration amount.
    Please suggest the design for the aggregation levels to achieve this.
    Thanks in advance.
    velu

    Hello Velu,
    As the name suggests, creating an "aggregation level" will only result in aggregation. What your requirement calls for is disaggregation, or distribution; this cannot happen automatically. You will either have to use the feature in input-ready queries, use the distribution planning function, or create your own planning function using FOX or an exit.
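    For reference, the arithmetic behind the equal distribution (and its round-trip back to the total) is simple; this is an illustrative sketch of what a custom planning function would compute, not BI-IP/FOX code:

```java
public class Distribute {
    // Spread a total equally over n periods, pushing any remainder
    // into the earliest periods so the period values sum back to the total.
    static int[] distributeEqually(int total, int periods) {
        int[] out = new int[periods];
        int base = total / periods;
        int remainder = total % periods;
        for (int i = 0; i < periods; i++) {
            out[i] = base + (i < remainder ? 1 : 0);
        }
        return out;
    }
}
```

    Going the other way (period edits reflected back into the duration amount) is just the sum of the period values, which is what aggregation already gives you for free.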

  • Landscape Design Plans

    Hi
    I'm looking for help creating plans for a landscape design, and I would like it to be vector-based if possible.
    I have outlines of trees as shown below (from ArchiCAD), but I'm not sure of the technique to make the grass look the way it does, or how to make the trees look that way (apart from a drop shadow).
    The site I'm creating a landscape plan for is around 60% grassland, so I want to use the same effect as in the first image to represent grass (rather than a solid colour). The next biggest element of the site is trees, around 20% in various sizes, so I want to re-create the trees as shown in the first image. I have vector outlines for the trees (from ArchiCAD), but I need to apply some sort of style to achieve the effect shown above. The other main part of the landscape is limestone pathways, which are very 'rough', i.e. they merge into the grass, as in the second image.
    What I'm asking for is techniques in Illustrator to basically re-create the first image and add some limestone pathways... I can't really give you a plan because it's basically just grass and a few trees... I have to plan the landscape for a project.
    The overall 'theme' for the plan is 'sketchy', if that helps.
    Thank You

    For the edges of the grassy fills, you can apply a solid green fill, apply an Inner Glow Effect to the fill to get the vignette. To get the stipple and/or grain, combine this with another raster effect, like Spatter or Grain.
    You do not need a high Document Raster Effects Resolution setting for this kind of thing, even if it's intended for commercial print. Screen res or 150 PPI is a gracious plenty. When you hit upon a combination you like, store it as a Graphic Style so it can be applied to other objects with a click.
    For your trees, draw three or four variations. Store each one as a Symbol. Place instances of the Symbols wherever you want on the drawing. Use Transform Each to randomly alter their scale/rotation without disturbing their positions. You can paint an entire forest this way in minutes, and it's also a data-economical way to build the file.
    JET

  • Centralized WLC Design Question

    Dears,
    In my scenario, I am designing a centralized WLC deployment. I have 30 APs in Building X (200 users) and 20 APs in Building Y (150 users). I am planning to install an HA WLC cluster where the primary & secondary WLCs will reside in physically different data centers A & B.
    I have a wireless design question and I am not able to get clear answers. Please refer to the attached drawing and answer the following queries:
    If Building X users want to talk to Building Y users, how will the control & data traffic flow between Buildings X & Y? Would all the traffic go to the primary WLC from Bldg X APs first and then be routed back to Building Y APs? Can I achieve direct switching between Bldg X & Y APs without going through the WLC?
    If Building X & Y users want to access the internet, how would the traffic flow? Would the X & Y APs tunnel all the traffic towards the WLC, which would then route it to the internet gateway? Is it possible for Bldg X & Y APs to send traffic directly to the internet gateway without going through the controllers?
    I have planned to put the WLCs at physically different locations in different DCs A & B. Is such a design recommended? What would be the failover traffic volume if the primary WLC goes down and the secondary controller takes over?
    My reason for going with a centralized deployment is that I want to achieve centralized authentication with local switching. Please give your recommendations and feedback.
    Regards,
    Rameez

    If Building X users want to talk to Building Y users, how will control and data traffic flow between Buildings X and Y? Will all traffic go from the Building X APs to the primary WLC first and then be routed back to the Building Y APs? Can I achieve direct switching between the Building X and Y APs without going through the WLC?
              Traffic flows to the WLC that is the primary for the APs; then it's routed over your network.
    If Building X and Y users want to access the Internet, how will the traffic flow? Will the APs tunnel all traffic to the WLC, which then routes it to the Internet gateway? Is it possible for the Building X and Y APs to send traffic directly to the Internet gateway without going through the controllers?
              The WLC isn't a router, so you would have to put the Internet traffic on a subnet and route it.
    I have planned to put the WLCs at physically different locations in data centers A and B. Is it recommended to have such a design? What would the failover traffic volume be if the primary WLC goes down and the secondary controller takes over?
    Like I mentioned earlier, the two HA WLCs have to be on the same Layer 2 subnet in order for you to use HA. The guide mentions an Ethernet cable connecting the HA ports on the two WLCs.
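    On the "centralized authentication with local switching" goal: that is what Cisco calls FlexConnect (formerly H-REAP). As a sketch only, on an AireOS controller the relevant knobs look like this (the WLAN ID 1 and the AP name are placeholders, not from your drawing):

    ```
    config wlan disable 1
    config wlan flexconnect local-switching 1 enable
    config wlan enable 1
    config ap mode flexconnect AP-BLDG-X-01
    ```

    With local switching enabled, client data is dropped locally at the AP's switchport instead of being tunneled in CAPWAP to the WLC, while authentication still goes through the controller.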
    Thanks,
    Scott
    Help out others by using the rating system and marking answered questions as "Answered"

  • Dreamweaver design question

    Hi all. I'm new to the forum and had a design question. My site took about 3 weeks to complete, and after finishing what I thought was a pretty error-free website, I noticed that Dreamweaver 8 was coming up with numerous errors that matched http://validator.w3.org's scans. My question is this: why does Dreamweaver (regardless of the release) let the designer build the website without pointing out the errors as they go along, with simple instructions on how to fix them? As an example, my meta tags
    <META NAME="keywords" CONTENT="xxxxxxx">
    <META NAME="description" CONTENT="xxxxxxxx">
    <META NAME="robots" CONTENT="xxxxx">
    <META NAME="author" CONTENT="xxxxxx">
    <META NAME="copyright" CONTENT="xxxxxx">
    all had to be changed over to
    <meta name="keywords" content="xxxxxxxxxxxxx" />
    <meta name="description" content="xxxxxxx" />
    <meta name="robots" content="xxxxxx" />
    <meta name="author" content="xxxxxxxx" />
    <meta name="copyright" content="xxxxxxxx" />
    all because Dreamweaver didn't tell me that the <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
       "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    didn't fit the original design. Now my site (if you wish to view the code) is www.gamblingwhore.com, and if you look at the page source you will see that the code has been corrected in DW 8 but still shows more than 30 errors on http://validator.w3.org. Does Dreamweaver not have a basic tool available to fix these errors without such hassle? It's not just my site, either; many sites built in Dreamweaver can be checked with http://validator.w3.org only to find 20 to 100 different errors.
    Dreamweaver's creators need to focus on these errors, because they hinder SEO and create a lot of extra work.
    Thank you

    The W3C and XHTML have come a long way since the release of Dreamweaver 8 (I used it in late 2004 and 2005).
    Dreamweaver 8 will build transitional XHTML files as well as old style single tag HTML. It all depends on the personal preferences of the designer.
    Just for kicks, go to, say, 20 random websites and see just how many get a green light when you validate them. If it's half, you're lucky. This page doesn't even validate.
    Dreamweaver has a menu option (at least in CS3 and CS4) under the Commands menu to "Clean Up HTML" or "Clean Up XHTML", depending on what you're building. I make a point of running that command as I build, along with Apply Source Formatting.
    I also use a local validator program to check my code before publishing anything.
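    The particular batch fix the original poster describes (uppercase META attribute names to the lowercase XHTML requires) is also easy to script outside Dreamweaver. A minimal sketch in Python; the function name and attribute list are my own, not a Dreamweaver feature:

    ```python
    import re

    def lowercase_meta(html: str) -> str:
        """Lowercase the tag and attribute names of <META ...> tags,
        as XHTML requires, leaving attribute values untouched.
        Naive regex approach: fine for simple tags like these."""
        def fix(match):
            tag = match.group(0)
            tag = re.sub(r'<META\b', '<meta', tag, flags=re.I)
            # lowercase common attribute names only when followed by '='
            return re.sub(r'\b(NAME|CONTENT|HTTP-EQUIV)(?==)',
                          lambda m: m.group(1).lower(), tag, flags=re.I)
        return re.sub(r'<META\b[^>]*>', fix, html, flags=re.I)

    print(lowercase_meta('<META NAME="keywords" CONTENT="games">'))
    # -> <meta name="keywords" content="games">
    ```

    A proper HTML parser would be more robust, but for a one-off cleanup of tags like the ones above this is enough.
    
    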
    That's why they call it WYSIWYG software.
    If it did everything perfectly for everyone every single time, good web designers would find themselves out of work.

  • OSPF Area Addition - Design Question

    Hello,
    I have a design question regarding OSPF. I am looking to add a new OSPF area (1). The area will live on two core routers and two distribution routers. Can you please look at the attached pictures and tell me which design is better?
    I would like to be able to connect Core-01 to Dist-01 and Core-02 to Dist-02 with a connection between Dist-01 and Dist-02, but this will result in a discontiguous area, correct?
    Thanks,
    Lee

    I would say that the more common design is to have only backbone-area links between the core routers, but there is no real issue with having an area 1 link between them.
    If I were you, I would not make the area an NSSA totally stub area. Here are my reasons:
    - you will get sub-optimal routing out of the area, since you have two ABRs and each distribution router will pick the closer of them to get out to the backbone, even though it may be more optimal to use the other one
    - in the NSSA case, one of the two ABRs will be designated as the NSSA translator, which means that if you are doing summarisation on the ABRs, all traffic destined for those summarised routes will be drawn into the area through that one ABR.
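    For reference, the difference between the two flavours on a Cisco IOS ABR is just the no-summary keyword (a sketch, assuming OSPF process 1):

    ```
    router ospf 1
     area 1 nssa
    ! a plain NSSA still floods Type 3 inter-area summaries into the area;
    ! adding no-summary makes it totally stubby, so the ABRs inject only a default:
    ! area 1 nssa no-summary
    ```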
    Paresh

  • WLPI Design Question

    I've got a bit of a design question for Process Integrator. Currently I'm building
    a prototype for an exception handling system using Process Integrator. The application
    has to be web based and I'm using the Front Controller design pattern that is
    described in the J2EE Blueprint docs.
    I've come across a bit of a design problem. Should I design the application so
    that all the user actions in a task are accessed via the API set, or should I build
    this functionality into the template? For example, a user will action a task which
    requires the user to update some variables in the template. In the template definition,
    should I use a Send XML to Client action and then use the taskExecute method on
    the worklist, or should I do it all programmatically?
    Also, if I do use the Send XML to Client action, should I then mark the task done using
    the APIs or using the Studio? I have noticed that if I mark the task done within
    the Studio after sending the XML, the task becomes available for the next user,
    even if the variables haven't been updated yet.
    Sorry about the rambling nature of this post.
    Thanks in advance.
    Preyesh

    If you want to write code that's easier for you to write, do whatever the hell you want.
    If you want to write good code, retain the ID.
