Best practice for having an access point giving out only a specific range

Hey All,
I have an access point which is currently set to relay all DHCP requests to the server DC-01. However, the range that has been set up is running low on available IP addresses, so I have been asked whether it is possible to set up another range for the AP only.
Is there a way to set DHCP up with a new range so that anything coming from that access point is given a 192.168.2.x address, as opposed to the standard 192.168.1.x subnet?
Or would it be easier/better to create a superscope and slowly migrate the users to a new subnet with a larger range?
Any help or suggestions would be appreciated.
thanks
Anthony

Hi,
You could configure a DHCP superscope to achieve this.
For details, please refer to the following articles.
Configuring a DHCP Superscope
http://technet.microsoft.com/en-us/library/dd759168.aspx
Create a superscope to solve the problem of dwindling IP addresses
http://www.techrepublic.com/article/create-a-superscope-to-solve-the-problem-of-dwindling-ip-addresses/
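As a rough sketch of what the superscope route looks like on a Windows DHCP server (the scope and superscope names and the lease range below are invented for illustration; adjust to your addressing), the existing 192.168.1.0 scope and a new 192.168.2.0 scope can be grouped with netsh:

```
rem Create the additional scope with its lease range
netsh dhcp server add scope 192.168.2.0 255.255.255.0 "Scope-2"
netsh dhcp server scope 192.168.2.0 add iprange 192.168.2.10 192.168.2.250

rem Put both scopes into one superscope so the server can lease from
rem either range on the same wire once the first range is exhausted
netsh dhcp server scope 192.168.1.0 set superscope "LAN-Super" 1
netsh dhcp server scope 192.168.2.0 set superscope "LAN-Super" 1
```

One caveat: a superscope by itself does not tie the new range to the AP. The server picks a scope based on the relay agent's giaddr, so handing out 192.168.2.x only to AP clients would require the AP (or its VLAN) to relay from its own subnet.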
Best Regards,
Andy Qi
TechNet Community Support

Similar Messages

  • Best practice for integrating a 3 point metro-e in to our network.

    Hello,
    We have just started to integrate a new 3-point metro-E WAN connection to our main school office. We are moving from point-to-point T-1s to 10 MB metro-E. At the main office we have a 50 MB circuit going out to 3 other sites at 10 MB each. For two of the remote sites we have purchased new routers, which should be straightforward configurations. We are having an issue connecting the main office with the 3rd site.
    At the main office we have a Catalyst 4006, and at the 3rd site we are trying to connect to a Catalyst 4503.
    I have attached configurations from both the main office and the 3rd remote site, as well as a basic diagram of how everything physically connects. These configurations are not working. We feel that it is a gateway-type problem, but have reached no great solutions. We have tried posting to a different forum, but so far have been unable to find a solution that helps.
    The problem I am having is on the remote side. I can reach the remote Catalyst from the main site, but I cannot reach the devices on the other side of the remote Catalyst; however, the remote Catalyst can see devices on its side as well as devices at the main site.
    We have also tried trunking the ports on both sides and using encapsulation dot1q, but when we do this the 3rd site is able to pick up a DHCP address from the main office, and we do not feel that is correct. But it works. Is this not creating a large broadcast domain?
    If you have any questions or need further configuration data, please let me know.
    The previous connection was a T1 through a 2620, but this is not compatible with metro-E, so we are trying to connect directly through the Catalysts.
    The other two connection points will be connecting through Cisco routers that are compatible with metro-E, so I don't think I'll have problems with those sites.
    Any and all help is greatly welcome, as this is our 1st metro-E project and we want to make sure we are following best practices for this type of integration.
    Thank you in advance for your help.
    Jeff

    Jeff, from your config it seems your main site and remote site are not adjacent in EIGRP.
    Try adding a network statement for the 171.0 link to form a neighborship between the main and remote sites, so the L3 routing works.
    After that you should be able to reach the remote site hosts.
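    The network statement would look something like this on both Catalysts (the AS number and the classful 171.0.0.0 network are placeholders inferred from the description; match them to your actual EIGRP process and link addressing):

    ```
    router eigrp 100
     ! advertise the metro-E link subnet so the two switches can become neighbors
     network 171.0.0.0
     no auto-summary
    ```

    After that, `show ip eigrp neighbors` on either switch should list the far end.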
    HTH-Cheers,
    Swaroop

  • Best practice for how to access a set of wsdl and xsd files

    I've recently been poking around with the Oracle ESB, which requires a bunch of WSDL and XSD files from HOME/bpel/system/xmllib. What is the best practice for including these files in a BPEL project? It seems like a bad idea to copy all these files into every project that uses the ESB, especially if there are quite a few consumers of the bus. Is there a way I can reference this directory from the project, so that the files can just stay in a common place for all the projects that use them?
    Bret

    Hi,
    I created a project (JDeveloper) with local XSD files, and tried to delete and recreate them in the structure pane with references to a version on the application server. After reopening the project, I deployed it successfully to the BPEL server. The process is working fine, but in the structure pane there is no information about any of the XSDs anymore, and the payload in the variables throws an exception (problem building schema).
    How does bpel know where to look for the xsd-files and how does the mapping still work?
    This cannot be the way to do it correctly. Do I have a chance to rework an existing project or do I have to rebuild it from scratch in order to have all the references right?
    Thanks for any clue.
    Bette

  • Best practice for having separate clone data for development purposes?

    Hi
    I am on a hosted Apex environment
    I have a workspace containing two instances/ copies of the application: DEV and PROD
    I would like to be able to develop functionality and data in/with the DEV instance and then promote it to PROD.
    I gather that I can insert pages from DEV to PROD via Create -> New page as copy -> Page in another application
    But I don't know how I can mimic this process with database objects, eg. if I want to create a new table or manipulate the data in an existing table in a DEV environment before implementing in a PROD environment.
    Ideally this would be done in such a way that minimises changing table names etc when elevating pages from DEV to PROD.
    Would it be possible to create a clone schema that could contain the same tables (with the same names) as PROD?
    Any tips, best practices appreciated :)
    Thanks

    Hi,
    Ideally you should have a little more separation between your dev and prod environments. At a minimum you should have separate workspaces, each addressing separate schemas. Apex can be a little difficult if you want to move individual Apex application objects, such as pages, between applications (a much-requested improvement), but this can be overcome by exporting and importing the whole application. You should also have some form of version control/backup of export files.
    As far as database objects go, tables etc, if you have tns access to your hosted environment, then you can use SQL Developer to develop, maintain and synchronize between your development and production schemas and objects in the different environments should have identical names. If you don't have that access, then you can use the Apex SQL Workshop features, but these are a little more cumbersome than a tool like SQL Developer. Once again, scripts for creating and upgrading your database schemas should be kept under some sort of version control.
    All of this is supposing your hosting solution allows more than one workspace and schema, if not you may have to incur the cost of a second environment. One other option would be to do your development locally in an instance of Oracle XE, ensuring you don't have any version conflicts between the different database object features and the Apex version.
    I hope this helps.
    Regards
    Andre

  • Best Practice for Developer update access to database in Production

    I am curious to find out what other organizations are doing about developer access to SYSADM in production. For example, a database account created like SYSADM that can be checked out for use and locked when not in use? Or something else?

    Developers can be provided with read-only access to the SYSADM schema.
    Thanks
    Soundappan
    Edited by: Soundappan on Dec 26, 2011 12:00 PM

  • Best Practice for Updating Administrative Installation Point for Reader 9.4.1 Security Update?

    I deployed adobe reader 9.4 using an administrative installation point with group policy when it was released. This deployment included a transform file.  It's now time to update reader with the 9.4.1 security msp.
    My question is, can I simply patch the existing AIP in place with the 9.4.1 security update and redeploy it, or do I need to create a brand new AIP and GPO?
    Any help in answering this would be appreciated.
    Thanks in advance.

    I wouldn't update your AIP in place. I keep multiple AIPs on hand: each time a security update comes out, I make a copy and apply the update to that. One reason is this: when creating AIPs, you must apply the MSPs in the correct order; you cannot simply apply a new MSP to the previous AIP.
    Adobe's support patch order is documented here:  http://kb2.adobe.com/cps/498/cpsid_49880.html.
    That link covers Adobe Acrobat and Reader, versions 7.x through 9.x. A quarterly update MSP can only be applied to the previous quarterly. Should Adobe Reader 9.4.2 come out tomorrow as a quarterly update, you will not be able to apply it to the 9.4.1 AIP; you must apply it to the previous quarterly AIP, 9.4.0. At a minimum I keep the previous 2 or 3 quarterly AIPs around, as well as the MSPs to update them. The only time I delete my old AIPs is when I am 1000% certain they are no longer needed.
    Also, when Adobe's developers author the MSPs, they don't include the correct metadata entries for in-place upgrades of AIPs: any AIP based on the original 9.4.0 MSI will not in-place upgrade an installation that is based on the 9.4.0 MSI and AIP; you must uninstall Adobe Reader, then re-install. This deficiency affects all versions of Adobe Reader 7.x through 9.x. Oddly, Adobe Acrobat AIPs will correctly in-place upgrade.
    Ultimately, the in-place upgrade issue and the patch order requirements are why I say to make a copy, then update and deploy the copy.
    As for creating the AIPs:
    This is what my directory structure looks like for my Reader AIPs:
    F:\Applications\Adobe\Reader\9.3.0
    F:\Applications\Adobe\Reader\9.3.1
    F:\Applications\Adobe\Reader\9.3.2
    F:\Applications\Adobe\Reader\9.3.3
    F:\Applications\Adobe\Reader\9.3.4
    F:\Applications\Adobe\Reader\9.4.0
    F:\Applications\Adobe\Reader\9.4.1
    The 9.4.0 -> 9.4.1 MSP is F:\Applications\Adobe\Reader\AdbeRdrUpd941_all_incr.msp
    When I created my 9.4.1 AIP, I entered these at a cmd.exe prompt (if you don't have robocopy on your machine, you can get it from the Server 2003 Resource Kit):
    F:
    cd \Applications\Adobe\Reader\
    robocopy /s /e 9.4.0 9.4.1
    cd 9.4.1
    rename AdbeRdr940_en_US.msi AdbeRdr941_en_US.msi
    msiexec /a AdbeRdr941_en_US.msi /update F:\Applications\Adobe\Reader\AdbeRdrUpd941_all_incr.msp /qb

  • Best practice for batch id access

    I'm not sure if this is the right place to post this so here it goes.
    I have a client who wants to have a super user create background jobs. These jobs will be assigned to a specific generic ID (e.g. BATCH_FI). The client only wants to give FI-CO access to this ID. They are convinced that this is the way to do it in 2011. I know the other clients I support have a role that is pretty much a copy of SAP_ALL. This client does not want this unless I can prove to them that SAP experts recommend using an SAP_ALL-type role for background users to eliminate the possibility of jobs failing. If it comes from me, they want nothing to do with it. However, if it comes from SAP experts, they'll listen.
    Can anyone help me here? What can I do?
    Cheers!!!
    Randy


  • Best practice for @EJB injection in junit test (out-of-container) ?

    Hi all,
    I'd like to run a JUnit test for a Stateless bean A which has another bean B injected via the @EJB annotation. Both beans are pure EJB 3 POJOs.
    The JUnit test should run out-of-container and is not meant to be an EJB client, either.
    What is the easiest/suggested way of getting this injection to happen without explicitly having to instantiate bean B in my test setup?

    You can deal with entity beans without the container-managed scenario: obtain an EntityManager from an EntityManagerFactory, providing a persistence.xml file that specifies the provider (TopLink, Hibernate, ...). Then you can use your entities as plain, unmanaged classes.
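    The reply above covers the JPA side; for the @EJB injection in the original question, an out-of-container JUnit test usually has to perform the injection itself. A minimal reflection-based sketch (the EJB annotation here is a local stand-in so the example compiles without any container jars; all bean names are invented):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Stand-in for javax.ejb.EJB so this sketch compiles without an EJB jar.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface EJB {}

class GreeterBean {
    String greet(String name) { return "Hello, " + name; }
}

class FacadeBean {
    @EJB
    GreeterBean greeter;  // would be container-injected in a real deployment

    String welcome(String name) { return greeter.greet(name) + "!"; }
}

public class InjectionSketch {
    // Minimal "injector": find @EJB fields and set a freshly built instance,
    // which is roughly what the test setup would otherwise do by hand.
    static void inject(Object target) throws Exception {
        for (Field f : target.getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(EJB.class)) {
                f.setAccessible(true);
                f.set(target, f.getType().getDeclaredConstructor().newInstance());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        FacadeBean facade = new FacadeBean();
        inject(facade);  // stands in for the container's injection step
        System.out.println(facade.welcome("JUnit"));
    }
}
```

    In a real test you would call inject(beanA) in @Before, optionally setting a mock of bean B instead of a real instance. Embeddable containers and test frameworks can do this for you, but the reflection approach keeps the test free of container dependencies.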

  • Best practice for Wireless ap vlan

    Is there a best practice for grouping lightweight access points in one VLAN, or for allowing them to be spread across several?

    Whether you have multiple sites or not, it's good practice to put your APs in a separate, dedicated VLAN.
    If your sites are routed sites, you can re-use the same VLAN numbers, but make sure they are on separate subnets and/or VRF instances.
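    A minimal sketch of what a dedicated AP VLAN looks like on an IOS access switch (the VLAN number, name, and interface are invented for illustration):

    ```
    vlan 210
     name WIRELESS-AP
    !
    interface GigabitEthernet1/0/10
     description Lightweight AP
     switchport mode access
     switchport access vlan 210
     spanning-tree portfast
    ```

    The lightweight APs then just need DHCP on that VLAN and a way to discover the controller (e.g. DHCP option 43 or a DNS entry).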

  • Best practice for server configuration for iTunes U

    Hello all, I'm completely new to iTunes U, never heard of this until now and we have zero documentation on how to set it up. I was given the task to look at best practice for setting up the server for iTunes U, and I need your help.
    *My first question*: Can anyone explain to me how iTunes U works in general? My brief understanding is that you design/set up a welcome page for your school with subcategories like programs/courses, and within those you have things like lecture audio/video files that students can download/view on iTunes. So where are these files hosted? Is it on your own server or on Apple's server? Where and how do you manage the content?
    *2nd question:* We have two Xserves sitting in our server room ready to roll. My question is: what is the best method to configure them to meet our need for "high availability in active/active mode, load balancing, and server scaling"? Originally I was thinking about using a 3rd-party load-balancing device to meet these needs, but I was told there is no budget for it, so that is not going to happen. I know there is IP failover, but one server has to sit in standby mode, which is a waste. So the most likely scenario is to set up DNS round robin and put both Xserves in active/active. My question now is (this may be related to question 1): given that all the content such as audio/video files is stored by us (we are going to link a portion of our SAN space to the Xserves for storage), if we go with DNS round robin in active/active mode, can both servers access a common shared network space? Or is this not possible, so that each server must have its own storage space and I must use something like rsync to make sure the contents on both servers are identical? Should I use Xsan, or is rsync good enough?
    Since I have no experience with iTunes U whatsoever, I hope you understand my questions, any advice and suggestion are most welcome, thanks!


  • Best practices for handling elements and symbols (including preloading)

    I am trying to learn Edge Animate and I have not seen enough animations to know how this is typically handled and I searched the forum and have not found an answer either.
    If you have many different elements and symbols for a project, what is the best practice for having them appear, disappear, etc. on the timeline? I ask this question not only from a performance based perspective, but also keeping in mind the idea of preloading. This is a 2 part question:
    Part 1: Using elements and symbols later in the timeline:
    Since artwork is always imported directly to the stage in an "always on" status, should we place a visibility OFF on every item until we need it?
    or should they be opacity 0 until I need them?
    or should they be set to visibility hidden until I need them?
    Which of these is the best option if you don't want the element / symbol visible until later in the timeline? Does it matter?
    Part 2: Impact on page loading
    Does the above question have any impact upon page loading speed
    or is this something handled in preloading?
    or do you need to make a special preloader?
    Thanks for the help.

    Hi, escargo-
    Good questions!
    Part 1: Using elements and symbols later in the timeline:
    Since artwork is always imported directly to the stage in an "always on" status, should we place a visibility OFF on every item until we need it?
    or should they be opacity 0 until I need them?
    or should they be set to visibility hidden until I need them?
    Which of these is the best option if you don't want the element / symbol visible until later in the timeline? Does it matter?
    I would recommend that you set your visibility to "off" instead of simply changing the opacity.  The reason I suggest this is that when your visibility is set to off, your object's hit points also disappear.  If you have any type of interactivity, having the object still visible but with 0 opacity will interfere with anything you have underneath it in the display order.
    Part 2: Impact on page loading
    Does the above question have any impact upon page loading speed
    or is this something handled in preloading?
    or do you need to make a special preloader?
    Thanks for the help.
    No, none of this has any impact on page load.  As you already noticed, all of the assets of your project will load before it displays.  If you want only part of your composition to load, you may want to do what we call a multi-composition project.  There's a sample of that in the Edge Animate API in the Advanced section, and plenty of posts in the forums (and one in the team's blog) explaining how to do that.
    http://www.adobe.com/devnet-docs/edgeanimate/api/current/index.html
    https://blogs.adobe.com/edge/
    Hope that helps!
    -Elaine

  • Best Practice for a Print Server

    What is the best practice for having a print server serving over 25 printers, 10 of which are colour lasers and the rest black-and-white lasers?
    Hardware
    At the moment we have one server, a 2 GHz Dual G5 with 4 GB RAM and an Xserve RAID. The server is also our main Open Directory server, with about 400+ clients.
    I want to order a new server and want to know the best type of setup for the optimal print server.
    Thanks

    Since print servers need RAM and spool space, but not a lot of processing power, I'd go with a Mac mini packed with RAM and the biggest HD you can get into it. Then load a copy of Mac OS X Server (Tiger) on it and configure your print server there.
    Another option, if you don't mind used equipment, is to pick up an old G4 or G5 Xserve, load it up with RAM and disk space, and put Tiger on that.
    Good luck!
    -Gregg

  • What is the Best Practice for Server Pool...

    hi,
    What is the best practice for having a server pool, i.e.:
    1) a single large server pool consisting of "n" guest VMs, or
    2) multiple small server pools, each consisting of a smaller number of guest VMs?
    Please suggest.

    Raja Kondar wrote:
    wht is the Best Practice for having server pool i.e
    1) having a single large serverpool consisting of "n" number of guest vm
    2) having a multiple small serverpool consisting of less of number of guest vm

    I prefer option 1, as this gives me the greatest amount of resources available. I don't have to worry about resources in smaller pools. It also means there are more resources across the pool for HA purposes. Not sure if this is Official Best Practice, but it is a simpler configuration.
    Keep in mind that a server pool should probably have no more than about 20 servers in it: OCFS2 starts to strain after that.

  • Best practice for external but secure access to internal data?

    We need external customers/vendors/partners to access some of our company data (view/add/edit). It's not easy to segment those databases/tables/records out from the existing ones (and put separate database(s) in the DMZ where our server is). Our current solution is to have a hole on port 1433 from the web server into our database server. The user credentials are not in any sort of web.config but rather compiled into our DLLs, and that SQL login has read/write access to a very limited number of databases.
    Our security group says this is still not secure, but how else are we to do it? Even with a web service, there still has to be a hole somewhere. Is there any standard best practice for this?
    Thanks.

    Security is mainly about mitigation rather than being 100% secure; "we have unknown unknowns". The component needs to talk to SQL Server. You could continue to use HTTP to talk to SQL Server, perhaps even get SOAP transactions working, but personally I'd have more worries about using such a less-trodden path, since that is exactly the area where more security problems are discovered. I don't know your specific design issues, so there might be even more ways to mitigate the risk, but in general you're using a DMZ as a decent way to mitigate risk. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com

  • Best Practice for Security Point-Multipoint 802.11a Bridge Connection

    I am trying to find the best practice for securing a point-to-multipoint wireless bridge link: point A to B, C, & D, and B, C, & D back to A. Which authentication method and configuration included in the Aironet 1410 IOS is best? Thanks for your assistance.
    Greg

    The following document on the types of authentication available on the 1400 should help you:
    http://www.cisco.com/univercd/cc/td/doc/product/wireless/aero1400/br1410/brscg/p11auth.htm
