Any Best Practices for Guest Access?

Looking to create a guest-access WLAN so that vendors can have internet access, along with VPN into their own networks, while being denied access to our internal systems.
I have created a guest WLAN and configured it on the WLC side. I think all I have to do now is configure the core switch with the new VLAN 99 and configure the trunk ports connected to the WLCs.
My question is: am I missing anything in the setup, and are there any "best practices" when it comes to guest access? I am hoping to use web-passthrough authentication; I don't believe this requires any AAA or RADIUS servers, which we don't have set up. I will probably just want a single "guest" account which will provide internet access without allowing access to the internal LAN. Am I on the right track here?
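For the switch side, here is a minimal sketch of what "configure the core switch with VLAN 99 and the trunks" might look like. IOS syntax is assumed; the interface name, guest subnet, and ACL addresses are placeholders, not values from this setup:

```
! Hypothetical core-switch config (Cisco IOS assumed).
vlan 99
 name GUEST
!
! SVI only if the core routes the guest subnet (addressing is an example)
interface Vlan99
 ip address 192.168.99.1 255.255.255.0
 ip access-group GUEST-IN in
!
! Trunk to the WLC: add VLAN 99 to the allowed list
interface GigabitEthernet1/0/1
 description Trunk-to-WLC
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan add 99
 switchport mode trunk
!
! Common guest best practice: block the internal RFC 1918 ranges,
! permit everything else. Permit any required services (e.g. DNS)
! before the deny lines if they live on internal addresses.
ip access-list extended GUEST-IN
 deny   ip any 10.0.0.0 0.255.255.255
 deny   ip any 172.16.0.0 0.15.255.255
 deny   ip any 192.168.0.0 0.0.255.255
 permit ip any any
```

This is only a sketch of the usual pattern (guest VLAN trunked to the WLC, ACL on the guest SVI); adapt the addresses and interface to your environment.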

*************** Guest WLC ***************
(Cisco Controller) >show mobility summary
Symmetric Mobility Tunneling (current) .......... Enabled
Symmetric Mobility Tunneling (after reboot) ..... Enabled
Mobility Protocol Port........................... 16666
Default Mobility Domain.......................... DMZ
Multicast Mode .................................. Disabled
Mobility Domain ID for 802.11r................... 0x43cd
Mobility Keepalive Interval...................... 10
Mobility Keepalive Count......................... 3
Mobility Group Members Configured................ 2
Mobility Control Message DSCP Value.............. 0
Controllers configured in the Mobility Group
MAC Address        IP Address      Group Name     Multicast
00:19:aa:72:2e:e0  10.192.60.44    Champion Corp  0.0.0.0
00:19:aa:72:39:80  10.100.100.20   DMZ            0.0.0.0
(Cisco Controller) >

*************** Corp WLC ***************
(Cisco Controller) >show mobility summary
Symmetric Mobility Tunneling (current) .......... Enabled
Symmetric Mobility Tunneling (after reboot) ..... Enabled
Mobility Protocol Port........................... 16666
Default Mobility Domain.......................... Champion Corp
Multicast Mode .................................. Disabled
Mobility Domain ID for 802.11r................... 0x46d5
Mobility Keepalive Interval...................... 10
Mobility Keepalive Count......................... 3
Mobility Group Members Configured................ 2
Mobility Control Message DSCP Value.............. 0
Controllers configured in the Mobility Group
MAC Address        IP Address      Group Name     Multicast IP    Status
00:19:aa:72:2e:e0  10.192.60.44    Champion Corp  0.0.0.0         Up
00:19:aa:72:39:80  10.100.100.20   DMZ            0.0.0.0         Up
(Cisco Controller) >

Similar Messages

  • Best practice for select access to users

Not sure if this is the correct forum to post in; if not, let me know where I should post.
From my understanding this is the best forum to ask these questions.
Are you aware of any "Best Practice" document for granting SELECT access to users on databases? These users are developers who select data out of the database for investigation and application bug fixes.
From time to time users want more and more access to different tables so that they can investigate properly.
Let me know if there is a best practice document around this space.
Asked in this forum as this is related to PL/SQL access.

Welcome to the forum!
Whenever you post, provide your 4-digit Oracle version.
>
Are you aware of any "Best Practice" document for granting SELECT access to users on databases? These users are developers who select data out of the database for investigation and application bug fixes.
From time to time users want more and more access to different tables so that they can investigate properly.
Let me know if there is a best practice document around this space.
>
There are many best-practices documents about various aspects of security for Oracle DBs, but none are specific to developers doing investigation.
Here is the main page for Oracle's OPAC white papers about security:
    http://www.oracletechnetwork-ap.com/topics/201207-Security/resources_whitepaper.cfm
    Take a look at the ones on 'Oracle Identity Management' and on 'Developers and Identity Services'.
    http://www.dbspecialists.com/files/presentations/implementing_oracle_11g_enterprise_user_security.pdf
    This paper by Database Specialists shows how to use Oracle Identity Management to limit access to users such as developers through the use of roles. It shows some examples of users using their own account but having limited privileges based on the role they are given.
    And this Oracle White Paper, 'Oracle Database Security Checklist', is a more basic security doc that discusses the entire range of security issues that should be considered for an Oracle Database.
    http://www.oracle.com/technetwork/database/security/twp-security-checklist-database-1-132870.pdf
    You don't mention what environment (PROD/QA/TEST/DEV) you are even talking about or whether the access is to deal with emergency issues or general reproduction and fixing of bugs.
Many sites create special READONLY roles, e.g. READ_ONLY_APP1, and then grant privileges to those roles for the tables/objects that the application uses. That role can then be granted to users who need privileges for that application and revoked when they no longer need it.
Some sites prefer creating special READONLY users that have those read-only roles. If a user needs access, the DBA changes the password and provides the account info to the user. When the user has completed their duties, the DBA resets the password to something no one else knows.
Those special users have auditing enabled on them, and the user using them is responsible for all activity recorded in the logs during the time they have access to that account.
In general you grant the minimum privileges needed and revoke them when they are no longer needed, generally through the use of roles.
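The READ_ONLY_APP1 pattern described above can be sketched in SQL; the schema, table, and user names here are invented for illustration.

```sql
-- Run as a DBA or the application schema owner; all names are examples.
CREATE ROLE read_only_app1;

GRANT SELECT ON app1.orders    TO read_only_app1;
GRANT SELECT ON app1.customers TO read_only_app1;

-- Grant while the developer needs it for an investigation ...
GRANT read_only_app1 TO jsmith;

-- ... and revoke as soon as the investigation is done.
REVOKE read_only_app1 FROM jsmith;
```

The grant/revoke pair is the whole point: access is temporary and centrally controlled through the role rather than through per-table grants to individuals.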
    >
    Asked in this forum as this is related to PL/SQL access.
    >
    Please explain that. Your question was about 'access to different tables'. How does PL/SQL access fit into that?
The important reason for the difference is that access is easily controlled through the use of roles, but in named PL/SQL blocks roles are disabled. So those special roles and accounts mentioned above are well suited to letting developers query data, but are not well suited if the user needs to execute PL/SQL code belonging to another schema (the app schema).
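A minimal illustration of the difference, with made-up object names:

```sql
-- Works for ad-hoc queries: privileges arrive via the role
-- (READ_ONLY_APP1 as described earlier in this reply).
GRANT read_only_app1 TO dev_user;

-- Does NOT help inside dev_user's definer's-rights stored PL/SQL,
-- where roles are disabled; stored code needs direct grants instead:
GRANT SELECT ON app1.orders TO dev_user;

-- Likewise, running the app's own packages needs a direct grant:
GRANT EXECUTE ON app1.report_pkg TO dev_user;
```

This is why sites that let developers write or run stored code usually end up with a small set of direct grants alongside the read-only roles.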

  • Any best practice for Key Management with Oracle Obfuscation?

    Hi,
    I was wondering if anyone is aware if there are any best practices regarding key management when using Oracle's DBMS_OBFUSCATION_TOOLKIT? I'm particularly interested in how we can protect the encryption/decryption key that we would use.
    Thanks,
    Jim

    Oracle offers this document, which includes a strategy for what you're after:
    http://download-west.oracle.com/docs/cd/B13789_01/network.101/b10773/apdvncrp.htm#1006234
    -Chuck

  • Any "best practices" for managing a 1.3TB iPhoto library?

Does anyone have any "best practices" or suggestions for managing and dealing with a large iPhoto library?  I currently have a 1.3 TB library.  This is made up of everything shot in the past 8 years, culminating with the past 2 years being 5D Mark II images.  It also includes a big dose of 1080p video shot with the camera.  These are only our family photos, so I would hate to break up the library by year, as that would really hurt a lot of iPhoto's "power".
    It runs fine in day to day use, but I recently tried to upgrade to iPhoto 11 and it crashes repeatedly during the upgrading library process.
    (I have backups, so no worries there.)
    I just know with Lion and iPhoto 9 being a bit old my upgrade day is coming and I'm not sure what my path is.

If you have both versions of iPhoto on your Mac, then try the following: while running iPhoto 09, create a new test library and import a few photos into it.  Then try to convert that test library with iPhoto 11.  If it converts OK, then your big library is causing the problem.
If that's the case, you can try rebuilding your working library as follows.  Make a temporary backup copy of the library, then:
launch iPhoto with the Command+Option keys held down and rebuild the library,
selecting the options identified in the screenshot.
Once rebuilt, try converting it to iPhoto 11.
NOTE:  if you already have a backup copy of your library, you don't need to make the temporary backup copy.
    OT

  • Best Practices for Data Access

    Good morning!
I was wondering if someone might give me some advice on best practices for retrieving data from a SQL Server in the cloud via a desktop application?
I'm curious: if I embed the server address (IP, or domain, or whatever) into my desktop application and let the users provide their own usernames and passwords when using the application, is there anything "wrong" with that? My application would collect the username and password from the user, connect to the server with that username and password, retrieve the data, and use it in-app.
I'm petrified of security issues and I would hate to start using a SQL database with this setup only to find out that anyone could download x, y or z and connect to the database and see everything.
Assuming I secure all of the users with limited permissions, is there anything wrong with exposing a SQL server to the web for my application to use? If so, what, and what would be a reasonable alternative?
    I really appreciate any help and feedback!

There are two options, neither of them very palatable:
1) Create a domain, and add the VM and your local box to it.
2) Stick to a workgroup, but have the same user name and password on both machines.
In practice, a better option is to create an SQL login that is a member of sysadmin, or that has rights to impersonate an account that is a member of sysadmin. For that matter, you could use the built-in sa account, but rename it to something else.
The other day I was looking at the error log from a server that apparently had been exposed on the net. The log was full of failed login attempts for sa, with occasional attempts for names like usera and so on. The server is in Sweden; the IP addresses for the login attempts were in China.
Just so you know what you can expect.
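The credential flow described in the question (the app ships only the server address, and each user supplies their own login at runtime) can be sketched in a few lines. The server name, database, and ODBC driver string below are placeholder assumptions, not anything from this thread:

```python
# Sketch of a per-user credential flow for a desktop app talking to a
# cloud-hosted SQL Server. No credentials are compiled into the binary;
# the user types them in and they only live in memory for the session.

def build_conn_str(server: str, database: str, user: str, password: str) -> str:
    """Assemble an ODBC connection string from user-supplied credentials."""
    return (
        "DRIVER={ODBC Driver 17 for SQL Server};"   # driver name is an assumption
        f"SERVER={server};DATABASE={database};"
        f"UID={user};PWD={password};"
        "Encrypt=yes;"                               # insist on TLS on the wire
    )

if __name__ == "__main__":
    # Username/password would come from a login dialog, not constants.
    print(build_conn_str("db.example.com", "AppDb", "alice", "s3cret"))
```

Combined with least-privilege SQL logins (so a leaked password exposes only that user's data) and TLS, this is a reasonable baseline if the server must be directly reachable; a web-service tier in front of the database is the usual stricter alternative.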
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Any best practices for secondary interface/IP

    Hello,
I am working on migrating a firewall to an ASA now.  As far as I know, the ASA does not support secondary interface IP addresses.
However, my existing firewall setup uses this method to bind different subnets to a single interface.
Are there any best practices for migrating this to an ASA environment?
    Thanks!

    Hi,
This depends on your current environment, which we don't know about.
Since ASA firewalls cannot have secondary IP addresses on a single interface, the typical options would be to either:
Move the gateway of these internal subnets (which need to be under the same interface) to an internal L3 switch or router. Then configure a link network between that device and the ASA interface and route the subnets through that link subnet.
Configure the subnets on different ASA interfaces (actual physical interfaces, or subinterfaces if using trunking) and separate those subnets into different VLANs on your switch network (or, if not using VLANs, simply onto different switches).
I guess it would also be possible to have two separate physical ASA interfaces connected to the same switch network (VLAN) where the two subnets are used, and just configure one subnet's gateway on one interface and the other subnet's gateway on the other physical interface. I assume it could work, but I am really hesitant to even write this, as it is something I would not consider except in some really urgent situation where there were no other options.
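The subinterface/trunking option described above might look roughly like this on the ASA side; the interface number, VLAN IDs, nameifs, and addresses are all placeholders:

```
! Hedged sketch: one physical ASA port trunked, one subinterface per subnet.
interface GigabitEthernet0/1
 no shutdown
!
interface GigabitEthernet0/1.10
 vlan 10
 nameif inside-a
 security-level 100
 ip address 10.10.10.1 255.255.255.0
!
interface GigabitEthernet0/1.20
 vlan 20
 nameif inside-b
 security-level 100
 ip address 10.10.20.1 255.255.255.0
```

The matching switch port would be a dot1q trunk carrying VLANs 10 and 20, with the hosts moved into the appropriate VLANs.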
    - Jouni

  • Any best practices for the iPad mini????

I am in the beginning stages of designing my mag for the iPad... now the iPad mini seems to be all the hype, and the latest news states that the mini may outsell the larger one.
So... I know that the dimensions scale 1:1 when bumping down to the smaller screen, but what about font sizes? What about the experience? Anyone already ahead of the game?
I have my own answers to these questions, but does anyone out there have some best-practice advice or links to some articles they find informative?

    I think 18-pt body text is fine for the iPad 2 but too small for the iPad mini. Obviously, it depends on your design and which font you're using, but it seems like a good idea to bump up the font size a couple points to account for the smaller screen.
    For the same reason, be careful with small buttons and small tap areas.
    I've also noticed that for whatever reason, MSOs and scrollable frames in PNG/JPG articles look great on the iPad 2 but look slightly more pixelated on the iPad Mini. It might just be my imagination.
    Make sure that you test your design on the Mini.

  • Any best practices for proxy databases

    Dear all,
is there any caveat or best practice when using a proxy database?
Is it secure and wise to create them on the master device? Can they grow? Or are they similar to an MSSQL linked server?
    Thank You for your patience,
    Arthur

    Hello,
This statement applies to proxy databases as well:
Note: For recovery purposes, Sybase recommends that you do not create other system or user databases or user objects on the master device.
Adaptive Server Enterprise 15.7 ESD #2 > Configuration Guide for Windows > Adaptive Server Devices and System Databases
http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc38421.1572/doc/html/san1335472527967.html?resultof=%22%6d%61%73%74%65%72%22%20%22%64%65%76%69%63%65%22%20%22%64%65%76%69%63%22%20%22%75%73%65%72%22%20%22%64%61%74%61%62%61%73%65%22%20%22%64%61%74%61%62%61%73%22%20
The Component Integration Services Users Guide is a very good start. In some ways a proxy database is like a linked server, but the options are many, and it all depends on your use case and remote source.
    Niclas

  • Any Best Practices for developing custom ABAP reports for Portal?

    Hello,
The developers on our project are debating the best way to develop custom reports and make them available on the portal.  For the options we can think of, can you give any pros and cons, experiences, or other options?
    - Web-enabled Abap report programs
    - WebDynpro for Abap
    - WebDynpro for Abap using ALV
    - Adobe forms
    Does a "Best Practices" document or blog exist on this topic?
    Thanks,
    Colleen


  • Best practice for SSH access by a user across multiple Xserves?

    Hello.
    I have 3 Xserves and a Mac Mini server I'm working with and I need SSH access to all these machines. I have given myself access via SSH in Server Admin access settings and since all 4 servers are connected to an OD Master (one of the three Xserves), I'm able to SSH into all 4 machines using my username/password combination.
    What I'm unsure of though is, how do I deal with my home folder when accessing these machines? For example, currently, when I SSH into any of the machines, I get an error saying...
    CFPreferences: user home directory at /99 is unavailable. User domains will be volatile.
    It then asks for my password, which I enter, and then I get the following error...
    Could not chdir to home directory 99: No such file or directory
    And then it just dumps me into the root of the server I'm trying to connect to.
How should I go about dealing with this? Since I don't have a local home directory on any of these servers, it has nowhere to put me. I tried enabling/using a network home folder, but I end up with the same issue. Since the volume/location designated as my home folder isn't mounted on the servers I'm trying to connect to (and since logging in via SSH doesn't auto-mount the share point like AFP would if I were actually logging into OS X via the GUI), it again says it can't find my home directory and dumps me into the root of the server I've logged in to.
    If anyone could lend some advice on how to properly set this up, it would be much appreciated!
    Thanks,
    Kristin.

    Should logging in via SSH auto-mount the share point?
    Yes, of course, but only if you've set it up that way.
    What you need to do is designate one of the servers as being the repository of home directories. You do this by simply setting up an AFP sharepoint on that server (using Server Admin) and checking the 'enable user home directories' option.
    Then you go to Workgroup Manager and select your account. Under the Home tab you'll see the options for where this user's home directory is. It'll currently say 'None' (indicating a local home directory on each server). Just change this to select the recently-created sharepoint from above.
    Save the account and you're done. When you login each server will recognize that your home directory is stored on a network volume and will automatically mount that home directory for you.

  • Is there any best practice for printer configuration for test environment?

My Oracle EBS test environment has a scheduled refresh period, where it is refreshed with production data and configuration from time to time for the BAs and developers to work on their enhancements.
However, all the printer configurations still point to the production ones after a refresh, which means that whenever testing is ongoing, the docs/labels/invoices are printed to a production printer.
May I know if there is a way to globally change all the printer setup for the test environment, so that nothing is sent directly to production? Please assist.
(Currently, the only way is to change them one by one, and only if you are aware of it...)
In case your organization has any other strategy on this matter, please share also... Thanks a lot.

    user6775047 wrote:
    My Oracle ebs test environment has a scheduled refresh period where it will be refreshed with production test data and configuration time to time for BA, Dev to work on their enhancement.
    However, all the printer configurations are still pointing to production ones after refreshed, which means whenever there is a testing on-going, the doc / label / invoices will be printed to production printer.
    May I know if there is a way to globally change all the printer setup for the test environment, so that nothing will be directly to production?... Please assist.
    (currently, the only way is to change one by one and only if you are aware of it...)
In case your organization has any other strategy on this matter, pls share also... Thanks a lot.
If you do not want to use the same printers, then you have to point them to the TEST/BA printers after you are done with Rapid Clone. Any new printers need to be added manually (the same way you defined your PROD printers) -- Rapid Clone Documentation Resources For Release 11i and 12 [ID 799735.1]
    If you want to edit the configuration, you could use FNDLOAD to download printers definition from the source instance, update the ldt file and use FNDLOAD to upload printers definitions.
    Is It Possible To Download/Upload Printer Definitions Using Fndload [ID 432616.1]
    Tips About FNDLOAD [ID 735338.1]
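The FNDLOAD round-trip Hussein describes might look roughly like this; the .lct control file and entity names are assumptions, so verify them against the MOS notes cited above before using anything like this:

```
# Hedged sketch only -- confirm the .lct file and entity via
# "Is It Possible To Download/Upload Printer Definitions Using Fndload".

# 1. Download printer definitions from the source instance to an .ldt file
FNDLOAD apps/<apps_pwd> 0 Y DOWNLOAD \
  $FND_TOP/patch/115/import/afcppstl.lct printers.ldt FND_PRINTER

# 2. Edit printers.ldt to point at the TEST printers, then upload it
FNDLOAD apps/<apps_pwd> 0 Y UPLOAD \
  $FND_TOP/patch/115/import/afcppstl.lct printers.ldt
```

Scripting this as a post-clone step is what makes the fix "global" instead of changing printers one by one.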
    Thanks,
    Hussein

  • Are there any "Best Practices" for the setting of the variables in the magnus.conf file when configuring iWS4.1 ?

     

    The default values written to magnus.conf are suitable for most installations.
    If you are interested in tuning your web server for performance, the "Performance Tuning, Sizing, and Scaling Guide" at http://docs.iplanet.com/docs/manuals/enterprise.html
    has some suggestions for magnus.conf values.

  • Best Practices for zVM/SLES10/zDB2 environment for dialog instances.

    Hi,  I am a zSeries system programmer who has just completed an IBM led Proof of Concept which demonstrated the viability of running SAP instances on SUSE SLES10 Linux booted in zVM guests and accessing zDB2 data via hipersockets. Before we build a Linux infrastructure using the 62 IFLs we just procured, we are wondering if any best practices for this environment have been developed as an OSS note or something else by SAP.    Below you will find an email which was sent and responded to by IBM and Novell on these topics...
    "As you may know, Home Depot has embarked on an IBM led proof of concept using SUSE SLES10 running in zVM guests on IBM zSeries hardware to host SAP server instances.  The Home Depot IT organization is currently in the midst of a large scale push to modernize our merchandising and people systems on SAP platforms.  The zVM/SUSE/SAP POC is part of that effort, as is a parallel POC of an Intel Blade/Red Hat/SAP platform.  For our production financial systems we now use a pSeries/AIX/SAP platform.
          So far in the zVM/SUSE/SAP POC, we have been able to create four zVM LPARS on IBM z9 hardware, create twelve zVM guests on those LPARS, boot SLES10 in those guests, install and run SAP instances in those guests using hipersockets for access to our DB2 SAP databases running on zOS, and direct user workloads to the SAP instances with good results.  We have also successfully developed cloning scripts that have made it possible to create new SLES10 instances, configured and ready for SAP installs, in about 10 seconds using FLASHCOPY and IBM DASD.
          I am writing in the hope that you can direct us to technical resources at IBM/Novell/SAP who may be able to field a few questions that have arisen.  In our discussions about optimization of the zVM/SUSE/SAP platform, we wondered if any wisdom about the appropriateness of and support for using zVM capabilities to virtualize SAP has ever been developed or any best practices drafted.  Attached you will find an IBM Redbook and a PowerPoint presentation which describes the use of the zVM discontiguous shared segments and the zVM named saved system features for the sharing of reentrant code and other  elements of Linux and its applications, thereby conserving storage and disk resources allocated to guest machines.   The specific question of the hour is, can any SAP code be handled similarly?  Have specific SAP elements eligible for this treatment been identified? 
          I've searched the SUSE Knowledgebase for articles on this topic to no avail.  Any similar techniques that might help us reduce the total cost of ownership of a zVM/SUSE/SAP platform as we compare it to Intel Blade/Red Hat/SAP and pSeries/AIX/SAP platforms are of great interest as we approach the end of our POC.  Can you help?
          Greg McKelvey is a Client I/T Architect at IBM.  He found the attached IBM documents and could give a fuller account of our POC.  Pat Downs, IBM zSeries IT Architect, has also worked to guide our POC. Akshay Rao, IBM Systems IT Specialist - Linux | Virtualization | SOA, is acting as project manager for the POC.  Jim Hawkins is the Home Depot Architect directing the POC.  I've CC:ed their email addresses.  I am sure they would be pleased to hear from you if there are the likely questions about what the heck I am asking about here.  And while writing, I thought of yet another question that I hoping somebody at SAP might weigh in on; are there any performance or operational benefits to using Linux LVM to apportion disk to filesystems vs. using zVM to create appropriately sized minidisks for filesystems without LVM getting involved?"
    As you can see, implementation questions need to be resolved.  We have heard from Novell that the SLES10 Kernel and other SUSE artifacts can reside in memory and be shared by multiple operating system images.  Does SAP support this configuration?  Also, has SAP identified SAP components which are eligible for similar treatment?  We would like to make sure that any decisions we make about the SAP platforms we are building will be supportable.  Any help you can provide will be greatly appreciated.  I will supply the documents referenced above if they are not known to any answerer.  Thanks,  Al Brasher 770-433-8211 x11895 [email protected]

Hello Al,
First, let me welcome you on board. I am sure you won't be disappointed with your choice to run SAP on z/OS.
As for your questions:
it wasn't easy to find them in this long post, so I suggest you take the time to write a short summary containing a very short list of questions.
As for answers,
here are a few useful sources of information:
1. The SAP on DB2 for z/OS SDN page:
SAP on DB2 for z/OS
In it you can find two relevant docs:
a. Best practices for ...
b. Database administration for DB2 UDB for z/OS.
This second publication is excellent; apart from DB2-specific info, it contains information on all the components of the SAP on DB2 for z/OS stack, like zLinux, z/VM, and so on.
2. I can see that you are already familiar with the IBM Redbooks, but it seems you haven't taken the time to get the most out of that resource.
From your post it is clear that you have found one useful publication, but I know there are several.
3. A few months ago I wrote a short post on a similar subject.
I'm sure it's not exactly what you are looking for at this moment, but it's a good start, and with some patience you may be able to get some answers.
Here's a link:
http://blogs.ittoolbox.com/sap/db2/archives/index-of-free-documentation-on-sap-db2-administration-14245
Good luck,
Omer Brandis

  • Best Practices for new iMac

I posted a few days ago re: a failing HDD on a mid-2007 iMac. Long story short, I took it into the Apple Store, and a Genius worked on it for 45 minutes before decreeing it in need of a new HDD. After considering the expense of adding memory, a new drive, hardware and installation costs, I got a brand-new entry-level iMac (21.5" screen, 2.7 GHz Intel Core i5, 8 GB 1600 MHz DDR3 memory, 1 TB HDD, running Mavericks). I also got a SuperDrive. I do not need to migrate anything from the old iMac.
I was surprised that a physical disc for the OS was not included. So I am looking for any best practices for setting up this iMac, specifically in the area of backup and recovery. Do I need to make a boot DVD? Would that be in addition to making a full Time Machine backup (using an external G-Drive)? I have searched this community and the Help topics on Apple Support and have not found any "checklist" of recommended actions. I realize the value of everyone's time, so any feedback is very appreciated.

    OS X has not been officially issued on physical media since OS X 10.6 (arguably 10.7 was issued on some USB drives, but this was a non-standard approach for purchasing and installing it).
    To reinstall the OS, your system comes with a recovery partition that can be booted to by holding the Command-R keys immediately after hearing the boot chimes sound. This partition boots to the OS X tools window, where you can select options to restore from backup or reinstall the OS. If you choose the option to reinstall, then the OS installation files will be downloaded from Apple's servers.
    If for some reason your entire hard drive is damaged and even the recovery partition is not accessible, then your system supports the ability to use Internet Recovery, which is the same thing except instead of accessing the recovery boot drive from your hard drive, the system will download it as a disk image (again from Apple's servers) and then boot from that image.
    Both of these options will require you have broadband internet access, as you will ultimately need to download several gigabytes of installation data to proceed with the reinstallation.
    There are some options available for creating your own boot and installation DVD or external hard drive, but for most intents and purposes this is not necessary.
    The only "checklist" option I would recommend for anyone with a new Mac system, is to get a 1TB external drive (or a drive that is at least as big as your internal boot drive) and set it up as a Time Machine backup. This will ensure you have a fully restorable backup of your entire system, which you can access via the recovery partition for restoring if needed, or for migrating data to a fresh OS installation.

  • Best practice for RDGW placement in RDS 2012 R2 deployment

    Hi,
    I have been setting up a RDS 2012 R2 farm deployment and the time has come for setting up the RDGW servers. I have a farm with 4 SH servers, 2 WA servers, 2 CB servers and 1 LS.
    Farm works great for LAN and VPN users.
    Now i want to add two domain joined RDGW servers.
The question is: I've read a lot on TechNet and different sites about how to set this up, but no one mentions any best practices for where to place them.
Should I:
- set up WAP in my DMZ with ADFS in the LAN, then place the RDGW in the LAN and reverse proxy in
    - place RDGW in the DMZ, opening all those required ports into the LAN
    - place the RDGW in the LAN, then port forward port 443 into it from internet
    Any help is greatly appreciated.
    This posting is provided "AS IS" with no warranties or guarantees and confers no rights

    Hi,
The deployment totally depends on your and your company's requirements, as many things need to be taken care of, such as hardware, network, security, and other related matters. Personally, to set up an RD Gateway server, I would not advise the first option. As per my research, for the best result you can use option 2 (place the RDG server in the DMZ and then allow the required ports). By doing so, the outside network can't connect directly to your internal servers, and it is difficult for an attacker to break into the network. A perimeter network (DMZ) is a small network that is set up separately from an organization's private network and the Internet. In a network, the hosts most vulnerable to attack are those that provide services to users outside of the LAN, such as e-mail, web, RD Gateway, RD Web Access, and DNS servers. Because of the increased potential of these hosts being compromised, they are placed into their own sub-network, called a perimeter network, in order to protect the rest of the network if an intruder were to succeed. You can refer to the article beneath for more information.
RD Gateway deployment in a perimeter network & Firewall rules
http://blogs.msdn.com/b/rds/archive/2009/07/31/rd-gateway-deployment-in-a-perimeter-network-firewall-rules.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]
