Best Practice for Connecting ASA to Catalyst Switch with Multiple VLANs

Hi all,
I have the following network topology that was in place when I started the job (see attached PDF).  I am thinking it might be better if I could eliminate the Cisco 2811 router and connect directly from the ASA to my 12-port fiber switch (192.168.7.1).  In my thinking this would eliminate an unnecessary piece of equipment and also give me a gig link to my ASA, as opposed to the 100 Mb link I have now with the old router.  The 12-port fiber switch has links to most of my IDFs and is acting as my VLAN gateway for all inter-VLAN routing.
Is my current topology ideal, or would I be better served to remove the router and connect directly to the 3750G-12S fiber switch or my Master Switch (192.168.7.4)?  The only thing I don't like about connecting directly to the Master Switch is that it takes scheduling a major outage for me to reboot it.  However, if that is best practice in this case, I can live with it.
It appears the 12-port fiber switch cannot have IP addresses assigned directly to ports, only to VLANs.  So would I have to create a separate VLAN for my ASA and assign IPs to that VLAN on each end of the connection?
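A rough sketch of that separate-VLAN idea, with a hypothetical VLAN number and addressing (illustrative only, not taken from the attached drawing):
    ! On the 3750G-12S: a dedicated transit VLAN toward the ASA
    vlan 50
     name ASA-TRANSIT
    interface GigabitEthernet1/0/1
     switchport mode access
     switchport access vlan 50
    interface Vlan50
     ip address 192.168.50.2 255.255.255.252
    ip route 0.0.0.0 0.0.0.0 192.168.50.1
    !
    ! On the ASA inside interface
    interface GigabitEthernet0/1
     nameif inside
     security-level 100
     ip address 192.168.50.1 255.255.255.252
    ! plus static routes on the ASA back toward the internal VLAN subnets, e.g.
    route inside 192.168.0.0 255.255.0.0 192.168.50.2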
I have read some suggestions that say it is better to terminate all VLANs on the ASA.  As I understand it, that would require creating subinterfaces on my ASA LAN port and assigning each subinterface to its own VLAN; inter-VLAN routing would then be controlled by the ASA.
That does not seem practical to me, as I have about 15 VLANs total.  (I am not showing everything in the drawing.)
I guess my main question is: "What is best practice for topology and routing in my scenario?"

Hi Mcreilly,
You should be able to assign an IP address on the cat6k Sup720 if you are running native IOS on the Sup720.
If you are running CatOS then you will not be able to do that; in that case you can configure the link as a trunk and connect it to the router. Also, I do not think that you need subinterfaces on the router and a trunk on the switch, because your cat6k with the Sup720 must already be doing inter-VLAN routing between the VLANs.
You can just connect the router to a port in any VLAN and give the router interface an IP address in the same subnet that the MSFC uses for that VLAN; anybody who wants to go out via the T3 link will get routed on the Sup720 and move out via the router's VLAN.
If you do not want the router to be part of an existing VLAN, you can create a new VLAN on the cat6k Sup720, assign one port to that new VLAN, connect the router to that port, then create a logical (SVI) interface on the MSFC for the new VLAN, assign an IP address range to that logical VLAN interface, and assign an address from the same subnet to the router's physical interface.
Anyone from another VLAN then gets routed on the Sup720 MSFC and goes out via the VLAN on which you have connected the router.
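Roughly, on the cat6k (native IOS) that would look something like this; the VLAN number and addresses below are made up for illustration:
    vlan 100
     name WAN-ROUTER
    interface GigabitEthernet1/1
     switchport
     switchport mode access
     switchport access vlan 100
    interface Vlan100
     ip address 192.0.2.1 255.255.255.252
    ! and on the router's LAN interface, an address in the same subnet:
    interface FastEthernet0/0
     ip address 192.0.2.2 255.255.255.252
     no shutdown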
Because you have only one router, you will not be able to maintain box-level redundancy; by that I mean that if the router goes down, the T3 will be unreachable.
HTH
Ankur

Similar Messages

  • What are best practices for connecting ASA to Nexus 5000

    Just trying to get a feel for the best way to connect redundant ASAs to redundant Nexus 5000s.
    Using a vPC VLAN is fine, but then running a routing protocol isn't supported. Putting static routes on the 5000 works, but it doesn't support IP SLA yet, so you can't really stop distributing the default if your Internet goes down. Just looking for what is recommended.
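    For what it's worth, the static-route piece is just a default on each 5000 pointing at the ASA (the next-hop below is hypothetical); as noted above, nothing withdraws it automatically if the link behind the ASA fails:
        ip route 0.0.0.0/0 10.0.0.1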

    You want to test a RAC upgrade on a non-RAC database. If you ask me, that is a risk, but it depends on many things:
    Application configuration - If your application is configured for RAC, FAN, etc., you cannot test it on non-RAC systems.
    Cluster upgrade - If your standalone database is RAC One Node you can probably test your cluster upgrade there. If you have a non-RAC database then you will not be able to test the cluster upgrade or CRS.
    Database upgrade - There are differences when you upgrade a RAC vs. a non-RAC database which you will not be able to test.
    I think the best way for you is to convert your standalone database to a RAC One Node database and test it. That will take you close to multi-node RAC.

  • Best practices for connecting to DB

    Hi,
    I have 3 different Java classes which will contact the DB to get data from a table. I wrote a separate Java class for the DB connection, like this:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class DBConnection {
        private Connection con;

        public Connection getConnection() {
            try {
                Class.forName("org.apache.derby.jdbc.ClientDriver");
                con = DriverManager.getConnection("jdbc:derby://localhost:1527/StrutsDB", "username", "password");
            } catch (SQLException ex) {
                Logger.getLogger(DBConnection.class.getName()).log(Level.SEVERE, null, ex);
            } catch (ClassNotFoundException ex) {
                Logger.getLogger(DBConnection.class.getName()).log(Level.SEVERE, null, ex);
            }
            return con;
        }
    }
    How to use this single connection object in all of the 3 classes? What are all the ways of doing so?
    Are there any best practices for connecting to the DB?
    Thanks in advance,

    The problem with "best practice" is that it really depends on your situation. If you are creating a single-user desktop application, then what you are doing will work, but it would be more efficient if the connection were declared static and you used the singleton pattern.
    public class DBConnection {
        private static final DBConnection instance = new DBConnection();
        private Connection con;

        private DBConnection() {
            try {
                Class.forName("org.apache.derby.jdbc.ClientDriver");
                con = DriverManager.getConnection("jdbc:derby://localhost:1527/StrutsDB", "username", "password");
            } catch (SQLException ex) {
                Logger.getLogger(DBConnection.class.getName()).log(Level.SEVERE, null, ex);
            } catch (ClassNotFoundException ex) {
                Logger.getLogger(DBConnection.class.getName()).log(Level.SEVERE, null, ex);
            }
        }

        public static Connection getConnection() {
            return instance.con;
        }
    }
    The private constructor guarantees only one instance of the class will be created (since you can't use 'new' to create one) and initializes the database connection. Then any other object that requires a connection simply calls DBConnection.getConnection() and gets the same database connection each time.
    Note that this is a little simplistic and is not thread-safe. If your classes will be executing on different threads, you will need a more sophisticated approach. You will also need to make sure you commit or roll back any transactions when done, or the next time you get the connection you may be in the middle of an existing transaction.
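    For example, one rough sketch of a "more sophisticated approach" is to give each thread its own connection via a ThreadLocal (the class name is invented and this is illustrative only; a real multi-threaded application would more commonly use a connection pool / DataSource):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class ThreadLocalDBConnection {
        // Each thread lazily gets its own Connection, so transactions running on
        // different threads do not interleave on one shared connection.
        private static final ThreadLocal<Connection> con = ThreadLocal.withInitial(() -> {
            try {
                Class.forName("org.apache.derby.jdbc.ClientDriver");
                return DriverManager.getConnection("jdbc:derby://localhost:1527/StrutsDB", "username", "password");
            } catch (SQLException | ClassNotFoundException ex) {
                Logger.getLogger(ThreadLocalDBConnection.class.getName()).log(Level.SEVERE, null, ex);
                return null;
            }
        });

        public static Connection getConnection() {
            return con.get();
        }
    }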

  • Where to find the best application for cleaning out my MacBook Air with OS X 10.7.5? I've been using MacKeeper but believe it's slowing down my laptop considerably.

    Where to find the best application for cleaning out my MacBook Air with OS X 10.7.5? I've been using MacKeeper but believe it's slowing down my laptop considerably. Thank you.

    How to maintain a Mac
    1. Make redundant backups, keeping at least one off site at all times. One backup is not enough. Don’t back up your backups; make them independent of each other. Don’t rely completely on any single backup method, such as Time Machine. If you get an indication that a backup has failed, don't ignore it.
    2. Keep your software up to date. In the Software Update preference pane, you can configure automatic notifications of updates to OS X and other Mac App Store products. Some third-party applications from other sources have a similar feature, if you don’t mind letting them phone home. Otherwise you have to check yourself on a regular basis. This is especially important for complex software that modifies the operating system, such as device drivers. Before installing any Apple update, you must check that all such modifications that you use are compatible.
    3. Don't install crapware, such as “themes,” "haxies," “add-ons,” “toolbars,” “enhancers," “optimizers,” “accelerators,” "boosters," “extenders,” “cleaners,” "doctors," "tune-ups," “defragmenters,” “firewalls,” "barriers," “guardians,” “defenders,” “protectors,” most “plugins,” commercial "virus scanners,” "disk tools," or "utilities." With very few exceptions, this stuff is useless, or worse than useless. Above all, avoid any software that purports to change the look and feel of the user interface.
    The more actively promoted the product, the more likely it is to be garbage. The most extreme example is the “MacKeeper” scam.
    As a rule, the only software you should install is that which directly enables you to do the things you use a computer for — such as creating, communicating, and playing — and does not modify the way other software works. Use your computer; don't fuss with it.
    Safari extensions, and perhaps the equivalent for other web browsers, are a partial exception to the above rule. Most are safe, and they're easy to get rid of if they don't work. Some may cause the browser to crash or otherwise malfunction.  Some are malicious. Use with caution, and install only well-known extensions from relatively trustworthy sources, such as the Safari Extensions Gallery.
    Never install any third-party software unless you know how to uninstall it. Otherwise you may create problems that are very hard to solve.
    4. Beware of trojans. A trojan is malicious software (“malware”) that the user is duped into installing voluntarily. Such attacks were rare on the Mac platform until sometime in 2011, but are now increasingly common, and increasingly dangerous.
    There is some built-in protection against downloading malware, but you can’t rely on it — the attackers are always at least one day ahead of the defense. You can’t rely on third-party protection either. What you can rely on is common-sense awareness — not paranoia, which only makes you more vulnerable.
    Never install software from an untrustworthy or unknown source. If in doubt, do some research. Any website that prompts you to install a “codec” or “plugin” that comes from the same site, or an unknown site, is untrustworthy. Software with a corporate brand, such as Adobe Flash Player, must be acquired directly from the developer. No intermediary is acceptable, and don’t trust links unless you know how to parse them. Any file that is automatically downloaded from a web page without your having requested it should go straight into the Trash. A website that claims you have a “virus,” or that anything else is wrong with your computer, is rogue.
    In OS X 10.7.5 or later, downloaded applications and Installer packages that have not been digitally signed by a developer registered with Apple are blocked from loading by default. The block can be overridden, but think carefully before you do so.
    Because of recurring security issues in Java, it’s best to disable it in your web browsers, if it’s installed. Few websites have Java content nowadays, so you won’t be missing much. This action is mandatory if you’re running any version of OS X older than 10.6.8 with the latest Java update. Note: Java has nothing to do with JavaScript, despite the similar names. Don't install Java unless you're sure you need it. Most people don't.
    5. Don't fill up your boot volume. A common mistake is adding more and more large files to your home folder until you start to get warnings that you're out of space, which may be followed in short order by a boot failure. This is more prone to happen on the newer Macs that come with an internal SSD instead of the traditional hard drive. The drive can be very nearly full before you become aware of the problem. While it's not true that you should or must keep any particular percentage of space free, you should monitor your storage consumption and make sure you're not in immediate danger of using it up. According to Apple documentation, you need at least 9 GB of free space on the startup volume for normal operation.
    If storage space is running low, use a tool such as the free application OmniDiskSweeper to explore your volume and find out what's taking up the most space. Move rarely-used large files to secondary storage.
    6. Relax, don’t do it. Besides the above, no routine maintenance is necessary or beneficial for the vast majority of users; specifically not “cleaning caches,” “zapping the PRAM,” "resetting the SMC," “rebuilding the directory,” "defragmenting the drive," “running periodic scripts,” “dumping logs,” "deleting temp files," “scanning for viruses,” "purging memory," "checking for bad blocks," "testing the hardware," or “repairing permissions.” Such measures are either completely pointless or are useful only for solving problems, not for prevention.
    The very height of futility is running an expensive third-party application called “Disk Warrior” when nothing is wrong, or even when something is wrong and you have backups, which you must have. Disk Warrior is a data-salvage tool, not a maintenance tool, and you will never need it if your backups are adequate. Don’t waste money on it or anything like it.

  • Hi - I was looking to buy a MacBook Air soon but wanted to get it with Lion software. Anyone know when that will be available? Second question: what is the best software for personal money management that works with Lion? I use Quicken now. Thanks

    Hi - I was looking to buy a MacBook Air soon but wanted to get it with Lion software. Anyone know when that will be available? Second question: what is the best software for personal money management that works with Lion? I use Quicken now. Thanks

    Quicken should work with Lion.
    Quicken Essentials will work with Lion.  Most people that have used Quicken for a while (i.e. Quicken 2007) have found that Quicken Essentials isn't much better than a basic spreadsheet.  It is a significant step down from previous versions and does not offer many of the features previously offered.  Right now, it seems like the two most common options are iBank and MoneyDance.
    Frankly... this is a major opportunity for these companies.  The largest commercial distributor of this type of product has been Intuit (Quicken).  That makes it hard for any other company to get any of that market, as so many people were already using Quicken and may have had years of data stored in it.  Now, with Quicken effectively out of the picture, it's a great chance for another company.  Just imagine if iPhones were suddenly off the market.  That would give other manufacturers a tremendous opportunity since they wouldn't have to fight an uphill battle against a market giant.

  • What is best practice for remotely managing a bank of switches over POTS

    I need to be able to have a back door into several Catalyst switches and an ASA.
    What is the best practice for accessing them remotely?

    Just place a modem on any console port. Ideally you would use a terminal server, but it is not always really needed.
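    For reference, a rough sketch of the kind of line configuration used for dial-in access on an IOS device (the line, speed, and credentials below are examples only; on routers the AUX port is often used because it has full modem-control signalling):
    username admin privilege 15 secret ExamplePass123
    line aux 0
     login local
     modem inout
     speed 38400
     flowcontrol hardware
     stopbits 1
     transport input all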

  • Best Practices for Connecting to WebHelp via an application?

    Greetings,
    My first post on these forums, so I apologize if this has already been covered (I've done some limited searching w/o success).  I'm developing a .Net application which is accessing my organization's RoboHelp-generated webhelp.  My organization's RoboHelp documentation team is still new to the software, and so it's been up to me to chart the course for establishing the workflow for connecting to the help from the application.  I've read up on Peter Grange's 'calling webhelp' section off his blog, but I'm still a bit unclear about what might be the best-practices approach for connecting to webhelp.
    To date, my org. has been delayed in letting me know their TopicIDs or MapIDs for their various documented topics.  However, I have been able to acquire the relative paths to those topics (I achieved this by manually browsing their online help and extracting out the paths).  And I've been able to use the strategy of creating the link via constructing a URL (following the strategy of using the following syntax: "<root URL>?#<relative URI path>" alternating with "<root URL>??#<relative URI path>").  It strikes me, however, that this approach is somewhat of a hack - since RoboHelp provides other approaches to linking to their documentation via TopicID and MapID.
    What is the recommended/best-practices approach here?  Are they all equally valid, or are there pitfalls I'm missing?  I'm inclined to use the URI methodology that I've established above since it works for my needs so far, but I'm worried that I'm not seeing the forest for the trees...
    Regards,
    Brett
    contractor to the USGS
    Lakewood, CO
    PS: we're using RoboHelp 9.0

    I've been giving this some thought over the weekend and this is the best answer I've come up with from a developer's perspective:
    (1) Connecting via URL is convenient if (#1) you have an established naming convention that works for everyone (as Peter mentioned in his reply above)
    (2) Connecting via URL has the disadvantage that changes to the file names and/or folder structure by the author will break connectivity
    (3) Connecting via TopicID/MapID has the advantage that if there is no naming convention or if it's fluid or under construction, the author can maintain that ID after making changes to his/her file or folder structure and still maintain the application connectivity.  Another approach to solving this problem if you're working with URLs would be to set up a web service that would match file addresses to some identifier utilized by the developer (basically a TopicID/MapID coming from the other direction).
    (4) Connecting via TopicID has an aesthetic appeal in the code since it's easy to provide a more English-readable identifier.  As a .Net developer, I find it easy and convenient to construct an enum that matches my TopicIDs and to utilize that enum to construct my identifier when it comes time to make the documentation call (see the sketch after this reply).
    (5) Connecting via URL is more convenient for the author, since he/she doesn't have to worry about maintaining IDs
    (6) Connecting via TopicIDs/MapIDs forces the author to maintain those IDs and allows the documentation to be more easily used into the future by other applications worked on by developers who might have their own preference in one direction or another as to how they make their connection.
    Hope that helps for posterity.  I'd be interested if anyone else had thoughts to add.
    -Brett
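    To make point (4) concrete, here is a rough sketch of the enum idea, shown in Java purely for illustration (the topic names, relative paths, and help root are invented; the same pattern works whether the values are TopicIDs or, as here, relative paths used with the "<root URL>?#<relative URI path>" syntax described earlier in this thread):
    public enum HelpTopic {
        GETTING_STARTED("getting_started/overview.htm"),
        DATA_EXPORT("tasks/export_data.htm");

        private final String relativePath;

        HelpTopic(String relativePath) {
            this.relativePath = relativePath;
        }

        // Build the WebHelp call URL from the application side.
        public String toUrl(String helpRoot) {
            return helpRoot + "?#" + relativePath;
        }
    }
    // e.g. HelpTopic.DATA_EXPORT.toUrl("http://example.org/webhelp/index.htm")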

  • Best Practice for connecting to an Ethernet-based device

    Hi,
    I have inherited a system where we have a cDAQ-9181 controlling a vehicle access barrier, with a LabVIEW application on a PC talking to it via Ethernet.
    (The application is very simple - press a button > send a value to the 9181 unit > opens the barrier )
    All works fine most of the time.
    ( We occasionally get network-related errors. The LabVIEW application sometimes thinks another PC has reserved the unit, or gives “error 89130 - device not available for routing” )
    The users would now like to be able to easily run the application from a second PC ( not at the same time ), but this seems to be a problem. If I exit the application on PC “A” and run it on PC “B” it struggles to reserve the chassis, throws the “89130” error, and I have to restart the unit via MAC.
    While I’m a “veteran” control programmer, I’m new to LabVIEW, and would be very grateful for any pointers on “best practice” for talking to devices via Ethernet, or any specific suggestions for handling multiple PCs talking to a single device.
    Thank You.
    Tim.

    Hi Tim,
    Thank you for your post and welcome to the NI forums.
    There are lots of knowledgebase articles on our website and you should be able to find documentation for most of our hardware.
    There is a good troubleshooting guide for cDAQ Ethernet here (http://ae.natinst.com/public.nsf/web/searchinternal/e67b4e4749f378ff862577270059bd4b?OpenDocument) - it outlines the steps to take to ensure you have as stable a connection as possible. You may have already seen it, but the quick-start guide for your specific device may also be worth consulting for best practices. Are these helpful?
    As for using more than one PC - this shouldn't be too much of an issue. I would expect that the resource isn't being closed correctly - when you exit the App on PC 'A', how are you closing off the resource?
    Best regards,
    Eden S
    Applications Engineer
    National Instruments UK & Ireland

  • Which is the best port for connecting an external PC VGA monitor to a MacBook Pro

    Which is the best port to connect my MacBook Pro to an external VGA monitor?

    Not sure what you feel your options are; most MacBook Pros have a Mini DisplayPort or a Thunderbolt port, which is the same connector,
    and all Mini DisplayPort -> HDMI, DVI, or VGA adapters work.

  • What Is The Best Way To Connect iPod To Stereo Receiver With High Quality?

    I recently purchased an 80 gig and am a newbie. I ripped all my music at around 280 kbps to retain good audio quality when playing in my car and home theatre system. There are tons of accessories out there (docks, jacks, etc.) - which is the highest-quality way to connect to my Yamaha receiver? I would think a connection to the bottom of my iPod would be of a higher quality than using the headphone jack. Any suggestions would be great. I was thinking about using the Monster iTV Link, but mostly for audio. Thanks

    OK - So now that you have all completely ignored the original question - in hopes of proving your manly-hood....
    Does anyone have an answer to the original question....
    "What is the best way to connect to a high end home stereo to retain the best sound?"
    For sure any connection going through the headphone jack will be a big step back - enough so that you would easily hear a sound quality loss in doing so.
    For the few of you who would like to argue/bicker/rant on the nominal sound quality loss....keep on doing so, but for those who actually have some respectable advice we would love to hear it.
    For the record, I tend to hear just outside the normal human auditory range...
    To help myself put it to rest already... a few years back I decided to conduct my own little listening test.
    I took 6 different common ripping modalities (from lossless - to - 128kbps in both MP3 & AAC.... then transferred all of them in order to a blank. Used the same song for each - the song chosen opened with a few clean crisp bars of high freq. music, followed by an instant explosion/drop into a tight & low bassline.
    The final cutoff that was achieved before I was unable to detect an audible difference was 198 VBR AAC. Having convinced myself that there was the ever sooooooo slight difference b/t 198 & 256, I went with my current ripping status of 256.... Just in case my audible range should become finely honed with age (yeah, like that is really going to happen) !
    This test was run over a Yamaha DSP receiver, running at a constant 110w per channel (w/a max of 220w). The speaker setup was a full surround set of Klipsch Reference Series..... each speaker housing 2 x 6.5 woofer (each housing 2 x 1" Titanium tweeters) And sitting along side the 5 surrounds was/is a 150w continuous 12in. Subwoofer.
    Now I am hardly stating that I have hearing as good as some of you extremely technically profound music enthusiasts out there, but for the majority of people making comments about sound quality, and lack there of - when played over a $120/40w Audiovox radio.... I'm sorry, but I would have to side on that of......"Hogwash, that you can hear a difference!"
    So.... if possible would someone please offer some advice on a dock or cable connection that will allow us to enjoy the music we ripped at whatever
    so-called range we did!
    For those interested in seeing who could hit the further bullseye with their eyes closed, You win!!! Now can you help us?
    MBP   Mac OS X (10.4.9)  

  • Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

    What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers, but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both
    versions, or is it better to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
    I'm currently leaning towards creating two separate deployment shares, just so that I don't have to keep typing (x86) and (x64) for every application I import, as well as making it easier when choosing applications in the Lite Touch installation. But I know
    each deployment share has the option to create both an x86 and x64 boot image, so that's why I am confused. 

    Supporting two task sequences is way easier than supporting two shares. Two shares means two boot media, or maintaining a method of directing the user to one or the other. Everything needs to be imported or configured twice. Not to mention doubling storage
    space. MDT is designed to have multiple task sequences, why wouldn't you use them?
    Supporting multiple task sequences can be a pain, but not bad once you get a system. Supporting app installs intelligently is a large part of that. We have one folder per app install, with a wrapper vbscript that handles OS detection. If there are separate
    binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So, import once, assign once, and forget it. It's the same install package we use for Altiris, and we'll be using a PowerShell
    version of it when we fully migrate to SCCM.
    Others handle x86 and x64 apps separately, and use the MDT app details to select what platform the app is meant for. I've done that, but we have a template for the vbscript wrapper and it's a standard process, so I believe it's easier. YMMV.
    Once you get your apps into MDT, create bundles. Core build bundle, core deploy bundle, Laptop deploy bundle, etcetera. Now you don't have to assign twenty apps to both task sequences, just one bundle. When you replace one app in the bundle, all TS'es are
    updated automatically. It's kind of the same mentality as Active Directory: users, groups and resources = apps, bundles and task sequences.
    If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share. Use a selection profile to upload only your deploy side to production. In fact I separate everything (except
    drivers) into Build and deploy folders on my lab server. Don't mix build and deploy, and don't mix Lab/QA and production. I also keep a "Retired" folder. When I replace an app, TS, OS, etcetera, I move it to the retired folder and append "RETIRED - " to the
    front of it  so I can instantly spot it if it happens to show up somewhere it shouldn't.
    To me, the biggest "weakness" of MDT is its flexibility. There are literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things
    get complicated. Tossing everything into one giant bucket will have you pulling your hair out.

  • Best practice for including additional DLLs/data files with plug-in

    Hi,
    Let's say I'm writing a plug-in which calls code in additional DLLs, and I want to ship these DLLs as part of the plug-in.  I'd like to know what is considered "best practice" in terms of whether this is ok  (assuming of course that the un-installer is set up to remove them correctly), and if so, where is the best place to put the DLLs.
    Is it considered ok at all to ship additional DLLs, or should I try and statically link everything?
    If it's ok to ship additional DLLs, should I install them in the same folder as the plug-in DLL (e.g. the .8BF or whatever), in a subfolder of the plug-in folder or somewhere else?
    (I have the same question about shipping additional files too, such as data or resource files.)
    Thanks
                             -Matthew

    Brother wrote:
    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    OO is a choice, not a mandate. Using Java in a procedural way is certainly not ideal, but given that it is existing code I would look more into whether it is well-written procedural code rather than at the lack of OO.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Normally you create a data model driven by business need. You then implement, using whatever means seem expedient in terms of other business constraints, to closely model that data model.
    It is often the case that there is a strong correlation between data models and tables but certainly in my experience it is rare when there are not other needs driven by the data model (such as how foreign keys and link tables are implemented and used.)
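    As a rough illustration of the kind of class being described - one Java class modeling one ERP table, with attributes for its fields (the table and field names here are invented):
    public class CustomerRecord {
        // Each attribute corresponds to a column in a hypothetical ERP "customer" table.
        private final String customerId;
        private final String name;

        public CustomerRecord(String customerId, String name) {
            this.customerId = customerId;
            this.name = name;
        }

        public String getCustomerId() { return customerId; }
        public String getName() { return name; }
    }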

  • What is the best practice for creating master pages and styles with translated text?

    I format translated text all the time for my company. I want to create a set of master pages and styles for each language and then import those styles into future translated documents. That way, the formatting can be done quickly and easily.
    What are the best practices for doing this? As a company this has been tried in the past, but without success. I'd like to know what other people are doing in this regard.
    Thank you!

    I create a master template that is usually void of content, with the exception that I define as many of the paragraph styles as I believe can/will be used, with examples of their use in the body of the document--a style guide for that client. When beginning a new document for that client, I import those styles from the paragraph styles panel.
    An exception to this is when, in a rush, I begin documentation first, then begin new work. In the new work, I still pull in those defined paragraph and/or object styles via their panels.
    There are times I need new styles. If they have broader applicability than a one-off instance or publication, then I open the style template for that client and import that style(s) from the publication containing the new style(s) and create example paragraphs and usage instructions.
    Take care, Mike

  • What is the best practice for connecting to different schemas?

    Hi all,
    We are porting an application from SQL Server to oracle and would like to know what the best practices are in oracle for user connections to an Oracle instance.
    More or less the question could be put like this:
    1) The equivalent of a SQL Server Database in Oracle is a Schema. (more or less)
    2) A specific application has its own schema where it keeps all related objects (Tables, etc)
    3) In SQL Server you grant access to the Database and its objects (Tables, etc) to all users of the application.
    4) In Oracle do you grant access to the Schema and its objects (Tables, etc) to all users of the application also? Or do all users log
    in as the schema owner?
    So in Oracle if there existed [SchemaApplication].[table1], how would [userChris] and [userDave] query [SchemaApplication].[table1]?
    Would Chris and Dave log in as [userChris] and [userDave], or would they normally log in as [userApplication]?
    finally, is it good practice to log in as a unique user eg [userChris] and then issue the
    alter session set current_schema = schemaApplication;
    command to change the way references to tables are interpreted?

    We are porting an application from SQL Server to oracle and would like to know what the best practices are in oracle for user connections to an Oracle instance.
    More or less the question could be put like this:
    1) The equivalent of a SQL Server Database in Oracle is a Schema. (more or less)
    2) A specific application has its own schema where it keeps all related objects (Tables, etc)
    3) In SQL Server you grant access to the Database and its objects (Tables, etc) to all users of the application.
    4) In Oracle do you grant access to the Schema and its objects (Tables, etc) to all users of the application also? Or do all users log
    in as the schema owner?
    There are ways to implement the same.
    Case 1.
    Create different roles, such as APP_ROLE and READONLY_ROLE. Create public synonyms for the objects owned by the SchemaApplication user. Grant these roles to a single user, say appUser; this is different from your SchemaApplication user. Use appUser to connect to the application, and for the different users like userChris and userDave provide another layer of security. Say userDave is allowed to deal only with cash-related transactions; then allow him to open only those screens that are related to cash transactions.
    Case 2.
    Create public synonyms and grant privileges on the tables from SchemaApplication to the different users (say userChris and userDave).
    So in Oracle if there existed [SchemaApplication].[table1], how would [userChris] and [userDave] query [SchemaApplication].[table1]?
    This is resolved by a public synonym. There are private synonyms as well; you can create those too, but in that case you have to create a private synonym for each of the users.
    Would Chris and Dave log in as [userChris] and [userDave], or would they normally log in as [userApplication]?
    I would suggest connecting either with a new user (Case 1) or with each user's own account in the database (Case 2).
    finally, is it good practice to log in as a unique user eg [userChris] and then issue the
    alter session set current_schema = schemaApplication;
    command to change the way references to tables are interpreted?
    No, it is not a good practice to allow users to log in to the database using the application owner account. The public/private synonym can be used to resolve the schema.object value. For example, if SchemaApplication has a table T, then you can create a public synonym with 'CREATE PUBLIC SYNONYM T FOR SchemaApplication.T'; now you can refer to this table as T from any other schema (user).
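    For example, a minimal sketch of Case 2 using the names above (the privilege list is just illustrative):
    CREATE PUBLIC SYNONYM table1 FOR SchemaApplication.table1;
    GRANT SELECT, INSERT, UPDATE, DELETE ON SchemaApplication.table1 TO userChris;
    GRANT SELECT, INSERT, UPDATE, DELETE ON SchemaApplication.table1 TO userDave;
    -- userChris and userDave can then connect with their own accounts and simply run:
    SELECT * FROM table1;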
    HTH
    Virendra

  • 4400 Controllers - Best Practice for connecting to wired network

    At one time the best practice recommendation for wireless was to treat the traffic as untrusted and separate it from the wired network by firewalls and intrusion detection. A lot of the reason for this was the weakness of WEP. Now with strong authentication and encryption (e.g., WPA2 and EAP-TLS) in use, and the use of wireless controllers, I'm wondering what the industry is recommending (and doing, in case the actions aren't the same as the recommendations).
    Are organizations connecting the wireless controllers directly to the internal network or are they separating them with a firewall and IDS infrastructure? If the latter, what does the architecture look like? Are there documents on the Cisco site or on the Internet that show how the controllers could be firewalled? Everything I've seen shows connections directly to the internal network. Is firewalling the controller an overreaction to the historical paranoia from the WEP days?

    The argument would be that regardless of what security you put on the wireless, you still don't have the physical security - i.e. someone doesn't need to walk into your building to use your network.
    Beyond that if you're using strong auth/enc you can currently be considered safe, we have customers using that direct into their LANs (but then, we also have customers with WEP direct into their LANs!)...
    If you are concerned or really need belt 'n' braces security, then go down the firewall/IDS route - there's no harm in it if you have the money. It really depends how much functionality and ease of use you need to balance against it.
    Aaron
    Please rate helpful posts

Maybe you are looking for

  • Messages in a batch input session

    Hi, I would like to catch the messages of a batch input session in order to inform the user of clearing items posted and not posted. So I thought that SX_MESSAGE_TEXT_BUILD would work fine. Can somebody give an example of this kind of code? Of course if

  • Which Express34 Cards work with the latest Yosemite?

    I'm running OS X 10.10.1 on a MBP 17" which I just bought 2nd hand - it all seems fine EXCEPT the Thunderbolt port doesn't seem to work - "No Hardware Found". It is a Mid 2010 machine. I suppose I will have to go to the apple store to fix the Thunder

  • Item status in delivery,

    Hello friends, I have a question, in SD I have to make a following check .. "Delivery items should not be checked if they have already been picked (item status B or C) " So which field is the item status, in which table in SD ? Many thanks, and kind

  • Odd letters show up

    hp pavilion dm4  keyboard is acting funny - I get extra z's in many words and some skipped letters

  • Transform using the bounding box

    This question was posted in response to the following article: http://help.adobe.com/en_US/illustrator/cs/using/WS714a382cdf7d304e7e07d0100196cbc5f-647ba.html