How to handle multiple site-to-site IPsec VPNs on an ASA? Any best practices for managing multiple IPsec VPN configurations?

How do I handle multiple site-to-site IPsec VPNs on an ASA? Are there any best practices for managing multiple IPsec VPN configurations, both before version 8.3 and on the 8.3, 8.4, and 9.x versions?

Hi,
To my understanding, you should be able to attach the same crypto map to the other "outside" interface, or alternatively create a new crypto map that you attach only to your new "outside" interface.
Also, I think you will probably need to route the remote peer IP of the VPN connection towards the gateway IP address of that new "outside" interface, and likewise the remote network found behind the VPN connection.
If you attempt to use a VPN Client connection instead of an L2L VPN connection on the new "outside" interface, then you will run into routing problems, as naturally you can't have 2 default routes active at the same time (a default route would be required on the new "outside" interface for VPN Clients, since you don't know in advance where the VPN Clients will be connecting from).
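On the general question of managing several L2L VPNs, the usual pattern is a single crypto map per interface with one sequence-number entry per peer. Below is a minimal sketch in 8.4+ syntax; all names, networks, and peer addresses are hypothetical, and pre-8.3 images differ slightly (e.g. "set transform-set" without the ikev1 keyword, and the old NAT-exemption model):

    crypto ikev1 enable outside

    access-list L2L-SITEA extended permit ip 10.1.0.0 255.255.255.0 10.2.0.0 255.255.255.0
    access-list L2L-SITEB extended permit ip 10.1.0.0 255.255.255.0 10.3.0.0 255.255.255.0

    crypto ipsec ikev1 transform-set ESP-AES-SHA esp-aes-256 esp-sha-hmac

    ! one crypto map on the interface, one sequence number per peer
    crypto map OUTSIDE_MAP 10 match address L2L-SITEA
    crypto map OUTSIDE_MAP 10 set peer 198.51.100.10
    crypto map OUTSIDE_MAP 10 set ikev1 transform-set ESP-AES-SHA
    crypto map OUTSIDE_MAP 20 match address L2L-SITEB
    crypto map OUTSIDE_MAP 20 set peer 203.0.113.20
    crypto map OUTSIDE_MAP 20 set ikev1 transform-set ESP-AES-SHA
    crypto map OUTSIDE_MAP interface outside

    ! one tunnel-group per peer holds that peer's pre-shared key
    tunnel-group 198.51.100.10 type ipsec-l2l
    tunnel-group 198.51.100.10 ipsec-attributes
     ikev1 pre-shared-key SiteA-example-key

Each additional site then only needs its own crypto ACL, a new sequence number, and a tunnel-group, which keeps the per-peer pieces easy to audit.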
Hope this helps
- Jouni

Similar Messages

  • Best practice for working on multiple computers

    How do I handle working on multiple devices without having to sync the local files with the remote/test server every time I change machines?
    I have 2 computers, a desktop and a laptop. Usually I code on my desktop, but from time to time I need to make a few edits on my laptop, e.g. when I'm not at home.
    In my early days (CS3) I used to edit the files directly on the remote server, which is not possible anymore since - I think - CS5. Moreover, I'm quite happy to have local files I can browse and search through very quickly.
    However, every time I need to make a quick edit, I need to sync the whole site with my local files - which is very inconvenient for me. And sometimes I forget that I edited a file on my laptop and uploaded it to the server, and then I start working again on the desktop with the old local version of that file. Some projects are quite large, with thousands of files due to plugins (e.g. TinyMCE), for example a webshop. It is a real pain to wait for the sync when I just need to edit one word.
    So what is the default solution for this problem?

    Well, thank you for your answers.
    Using an online drive system like Dropbox seems to be a fine solution - however, I wish I didn't need 3rd-party software to do so. There are two concerns about this solution:
    Syncing problems: when I hit CTRL+S, Dreamweaver automatically saves my local files and uploads them to the server. If there is an additional Dropbox sync, isn't the whole solution prone to errors? (Any experience with OneDrive? As it comes preinstalled and has 25 gigs free, I might give it a try for syncing the local DW data.)
    Most important: password security. I store my MySQL connection information (dbname, passwords, hosts...) in a PHP file. As this connection information is in plain text, I'm not very happy that MS (or Dropbox, Google, ...) can see and scan this data.
    @Nancy O.: I will start using check-in/check-out; it seems to be a great feature. Just to define what it does and does not do: as long as I have checked out a file, I can't edit it on my other machine, which is nice. However, back to the new-file.html example, I won't see this file on my desktop unless I sync it (using DW sync, Dropbox, or anything else), correct?

  • Best Practices: BIP Infrastructure and Multiple Installations/Environments

    Hi all,
    We are in process of implementing BI Publisher as the main reporting tool to replace Oracle Reports for a number of Oracle Form Applications within our organization. Almost all of our Forms environments are (or will be) SSO enabled.
    We have done a server install of BIP (AS 10gR3) and enabled BIP with SSO (test) and everything seems in order for this one dev/test environment. I was hoping to find out how others out there are dealing with some of the following issues regarding multiple environments/installs (and licensing):
    Is it better to have one production BIP server, or as many BIP servers as there are middle-tier Forms servers? (Keeping in mind all of these need to be SSO enabled.) Multiple installs would mean higher maintenance/resource costs, but is there any significant gain from the extra autonomy of each application having its own BIP install?
    Can we get away with standalone installations for dev/test environments? If so, how do we implement/migrate reports to production if the BIP server is only accessible to DBAs in production (and even in a real UAT environment, where the developer needs to script the work for migration)? In general, what is the best way to handle security when it comes to administration/development?
    I have looked at the Oracle iStore for some figures, but this last question is perhaps one for Oracle sales people - just in case anybody knows: how is licensing affected by multiple installations? Do we pay per installation or per user? Do production and test/dev cost the same? Is the cost of a standalone environment different?
    I would appreciate if you can share your thoughts/experiences in regards to any of the above topics. Thank you in advance for your time.
    Regards,
    Yahya

    Your data set is bigger than what I run, but what I have done in the past is to restrict such accounts to a separate datafile and limit its size to the maximum I want them to use: create the objects restricted to that location.

  • Question: Best practices for dealing with multiple AM configurations

    Hello all,
    I have a project using ADF Business Components and ADF Faces. I would like to set up multiple configurations for the Application Modules to support the following scenarios:
    1). Local testing and debugging - using a connection defined in JDeveloper and AM Pooling turned off.
    2). Testing and debugging on an application server - using a JDBC Data Source and AM Pooling turned off
    3). Production deployment - using a JDBC Data Source and AM Pooling turned on.
    It is no problem to create multiple AM configurations to reflect this scenario. In order for the web part of the application to use the correct configurations, the DataBindings.cpx file must specify the correct ones. I was thinking to have 3 different DataBindings.cpx files and to change the CpxFileName context-param in the web.xml file as needed.
    My questions:
    1). Does this make sense as an approach? It should be better than having to change a single AM configuration every time I deploy or test. Is there any easy way to keep multiple DataBindings.cpx files in sync, given that we may add new pages from time to time? Alternatively, can we do some type of "include" processing to include just the dataControlUsages section into a common DataBindings.cpx file?
    2). How would you manage the build-and-deploy process? For the most part, in JDev we would be using configuration #1. The only time to switch to configuration #2 or #3 would be to build an EAR file for deployment. Is this something that it would make sense to accomplish with ANT? I'm not an ANT expert at all. The ANT script would have "build-test-ear" and "build-prod-ear" targets which would swap in the correct web.xml file, recompile everything, build the EAR, then put the development web.xml file back. I'm relatively sure this is possible... comments?
    3). Is there some other recommended approach?
    I appreciate any insights from experience, or even just ideas or thoughts that I can test out.
    Best regards,
    John

    Hi K,
    Sorry for the long, long delay in responding - I've been traveling - and thanks for the e-mail tickler too...
    To answer your question in short, I do think that ANT is the right way to go; there is an extra ANT task called XMLTask that I was able to download and play with, and it seems it would make this manipulation of the cpx file (or the xcfg file, for that matter) pretty straightforward. I don't have any code to post; it's just in the conceptual stage for me right now. I didn't see anything magical in JDev 11 TP3 that solved this problem for me either.
    Having said all of that, it's more complicated than it might appear. In addition to the DataBindings.cpx file (stores, among other things, which AM configuration to use for each data control), it's certainly possible to programmatically access an AM (specifying the configuration either directly in the code or via a properties file/etc). I'm not sure what the most common use case for AM configurations is, but in my case, I have a Test configuration and a Prod configuration. The Test config, among other things, disables AM pooling. When I am developing/testing, I always use the Test config; in Production, I always use the Prod config. Perhaps the best way for me to do this would be to have an "Active" config and use ANT tasks to copy either Test or Prod to "Active." However, our Subversion repository is going to have a few complaints about this.
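    To make the idea concrete, here is a rough sketch of what such an ANT build might look like (target, property, and file names are all hypothetical; XMLTask could replace the plain copies if you prefer editing the cpx/xcfg in place over keeping parallel copies):

        <project name="adf-app" default="build-test-ear" basedir=".">

          <!-- Swap in the environment-specific web.xml, build the EAR, then restore. -->
          <target name="build-test-ear">
            <antcall target="build-ear">
              <param name="env.webxml" value="config/web-test.xml"/>
              <param name="ear.name" value="myapp-test.ear"/>
            </antcall>
          </target>

          <target name="build-prod-ear">
            <antcall target="build-ear">
              <param name="env.webxml" value="config/web-prod.xml"/>
              <param name="ear.name" value="myapp-prod.ear"/>
            </antcall>
          </target>

          <target name="build-ear">
            <copy file="public_html/WEB-INF/web.xml" tofile="build/web.xml.bak" overwrite="true"/>
            <copy file="${env.webxml}" tofile="public_html/WEB-INF/web.xml" overwrite="true"/>
            <!-- compile and packaging steps elided -->
            <ear destfile="dist/${ear.name}" appxml="src/META-INF/application.xml">
              <fileset dir="build/ear"/>
            </ear>
            <copy file="build/web.xml.bak" tofile="public_html/WEB-INF/web.xml" overwrite="true"/>
          </target>
        </project>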
    John

  • Best practice for JSP app- multiple windows

    Hi all,
    I am looking for some opinions on whether I should construct my app so it uses multiple windows, e.g. if someone opens a master record and clicks a link, I can display the details in a separate window. Obviously this has the advantage of being more user-friendly, à la a thick client, as the user can view multiple windows at once, etc. I've seen many web apps that do this, but many that have opted not to as well.
    I am interested in the common pitfalls associated with multiple-window web apps and why I should/should not use them. Obviously opinions vary, but I'd like to get some feedback on this. Also, any link(s) discussing this would be greatly appreciated.
    Thanks,
    Mike

    Err...
    JSP is generally used to embed snippets of Java code within otherwise ordinary HTML pages. It is possible to dynamically generate Javascript code that will create additional windows, or Applet code that will likewise spawn new windows in response to user activity.
    For the most part, I do something along the lines of the former suggestion, that is, I write code like:
    out.write( "<A HREF=\"show_detail.jsp?id=" + id + "\" TARGET=\"detail_window\">" + id + "</A>\n" );
    ...and the result is that if the user clicks on a link, a new web page is loaded into some other window.
    Is this a valid way to go? Certainly. Are there any drawbacks? Yes. Many users actually don't want multiple windows, that is, they click on the link, it opens a window - fine. They click on a similar link, it should load the relevant data into that companion window, not load yet another window. The primary reason being that closing all those windows is a bit of a pain in the behind.
    Beyond that, I can't really offer any suggestions that are not application specific. If I think of any, I'll come back to the thread.
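    As a side note, the same link is often written as template text with an expression rather than via out.write(); a sketch using the names from the example above:

        <%-- equivalent to the out.write() version; reuses one named window --%>
        <a href="show_detail.jsp?id=<%= id %>" target="detail_window"><%= id %></a>

    Because the target name is fixed, repeated clicks load into the same companion window rather than spawning new ones, which is the behavior most users want per the note above.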

  • Best practices/workflow in syncing multiple audio tracks to a clip

    Hello
    Master Collection CS5
    I have a couple of webisodes that I'm editing.
    Each actor had their own wireless lav microphone recording to an external audio system that recorded wav files.
    And the camera was also recording audio.
    We also had a clapboard that marked each scene. The camera and the external audio recording system both captured the clapboard information.
    I am replacing the camera audio with the wav tracks from each lav microphone. For the most part, when Actor "A" speaks I use Actor "A"'s lav wav, and when Actor "B" talks I use Actor "B"'s wav file.
    The timeline is already fine tuned and I don't want to have to extend the beginning of each video clip on the timeline so that I can sync it with the clapboard.
    I've tried a number of tricks including a separate sequence just for syncing up sound.  But it doesn't seem really practical or time efficient.
    I was thinking that maybe I should be syncing up the clips in Soundbooth. I see that when I right-click on an audio track I am given the option to "Edit in Soundbooth".
    Would incorporating Soundbooth into my workflow speed up the audio syncing?
    I hope I am being clear!
    Thanks
    Rowby

    I personally wouldn't mess around with copying and pasting timecodes. Yes, that'll work, but it's going to take more time than is really necessary to accomplish what you need. This is how I'd do this, though note that it's going to require some manual effort, no matter what:
    In your sequence, move the CTI to the in point of your clip's audio portion; make sure the track that that clip is on is targeted. Hit the M key, which will match frame the clip and load it into the Source Monitor. Don't double-click the sequence clip to load it; you need an independent reference (e.g. the original master clip) for "measurement" purposes.
    In the source monitor, press O to mark an out point before you move the CTI. Now, move earlier in the clip until you get to the clapboard, and park the CTI on the frame where the "clap" happens. You could also toggle over to waveform view to make this easier (I've actually mapped Composite Video to F3 and Audio Waveform to F4 on my keyboard to make this a quick and easy toggle). Mark an in point (I key), and take note of the in-out duration in the lower-right corner of the source monitor; it's the white numbers. For this example, let's say those numbers read (for a 29.97fps clip) 00;00;05;00, or five seconds.
    With that number in mind, load up the matching audio-only clip into the source monitor, and move the CTI to the "clap", which you should see pretty clearly in the waveform. Once the CTI is positioned, you need to move the CTI forward the number of frames/seconds that you noted above... minus one frame. So, on your numeric keypad, type a "plus" (+) and the digits 429, or 4 seconds 29 frames (obviously this will depend on your sequence/source frame rate; one second less one frame in a 23.976 timebase is 23 frames, etc.). Once the CTI is positioned, hit the I key to mark an in point, and then drag the audio to the sequence; hold the Alt key and drop it onto the audio portion of the clip you're replacing. This will map the original audio's in point to the replacement audio's in point, and maintain the duration of the original sequence clip.
    The reason you need to go one frame less is because when you mark an out point on a clip, you're actually placing it at the "end" of the current frame you are viewing, not the "beginning." You can see this in action if you park the CTI at a given frame and mark both an in point and an out point without moving the CTI; you end up with a selection duration of 1 frame. As such, you need to offset this duration when backtiming and then applying the duration to the replacement clip. It'll make sense after you do it once.
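    In compact form, the offset arithmetic from this example (29.97fps, so frame numbers run 00-29):

        in-to-out duration measured in the source:  00;00;05;00  (five seconds)
        offset typed after parking on the clap:     +429 = 4;29  (5;00 minus one frame)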
    I know that sounds like a lot, but I was just trying to be as detailed as possible. Hope it helps...

  • Best Practice Table Creation for Multiple Customers, Weekly/Monthly Sales Data in Multiple Fields

    We have a homegrown Access database, originally designed in 2000, that now has a SQL back-end. The database has not yet been converted to a newer format such as Access 2007, since at least 2 users are still on Access 2003. It is fine if suggestions will only work with Access 2007 or higher.
    I'm trying to determine if our database is the best place to do this or if we should look at another solution. We have thousands of products, each with a single identifier. There are customers who provide us regular sales reporting for what was sold in a given time period - weekly, monthly, quarterly, and yearly time periods being most important. This reporting may or may not include all of our product identifiers. The reporting is typically based on calendar-defined timing, although we have some customers who have their own calendars which may not align to a calendar month or calendar year, so recording the time period can be helpful.
    Each customer's sales report can contain anything from 1,000-20,000 rows of products per report. Each customer report is different, and they typically have between 4-30 columns of data for each product; headers are consistently named. The product identifiers included may vary by customer and even within each report for a customer; the data in the product identifier row changes each week. Headers include a wide variety of data such as overall on hand, overall on order, unsellable on hand, returns, on-hand information for each location or customer grouping, sell-through units and dollars for each location or customer grouping for the given time period and for a cumulative time period, warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's current status code for that product, and so on.
    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to the overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, by specific products or categories across specific weeks/months/years, etc.? We do have a separate product table, so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries. We do need to maintain the sales reporting information indefinitely.
    I welcome any suggestions, best practices or resources (books, web, etc).
    Many thanks!

    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to the overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables .....
    I assume you want to migrate to SQL Server.
    Your best course of action is to hire a professional database designer for a short period like a month.
    Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
    Finally you have to hire an SSRS professional to design reports for your company.
    It is also beneficial if the above professionals train your staff while building the new RDBMS.
    Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
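    If you do want to sketch the model yourselves first, one starting point is a narrow fact table rather than one wide table per customer, since the 4-30 measure columns vary by report. This is an illustration only, with hypothetical table and column names (assuming the SQL Server back-end you already have):

        CREATE TABLE dbo.Customer (
            CustomerID   int IDENTITY(1,1) PRIMARY KEY,
            CustomerName nvarchar(100) NOT NULL
        );

        CREATE TABLE dbo.SalesReport (
            ReportID    int IDENTITY(1,1) PRIMARY KEY,
            CustomerID  int NOT NULL REFERENCES dbo.Customer(CustomerID),
            PeriodStart date NOT NULL,  -- supports customer-specific calendars
            PeriodEnd   date NOT NULL
        );

        -- One row per report/product/measure absorbs the 4-30 varying columns.
        CREATE TABLE dbo.SalesFact (
            ReportID     int NOT NULL REFERENCES dbo.SalesReport(ReportID),
            ProductID    int NOT NULL,           -- joins to the existing product table
            MeasureName  nvarchar(50) NOT NULL,  -- e.g. 'OnHand', 'SellThroughUnits'
            MeasureValue decimal(18,2) NULL,
            PRIMARY KEY (ReportID, ProductID, MeasureName)
        );

    With one row per product, report, and measure, a new customer column becomes a new MeasureName value instead of a schema change, and pivot queries (or Access crosstabs) can reshape the rows for reporting.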
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • Best practice for listeners for multiple oracle homes - 1 listener?

    I have a machine with 9i and 10g on it, and I have set up a 10g listener to serve connections to the 9i and 10g databases.
    Is this best practice, or is it better to have two listeners - a 9i listener for the 9i databases and a 10g listener for the 10g databases?

    Hi,
    Are they two production databases? In that case, like Laurent said, it's perhaps better to have one listener per instance; if one shuts down, it doesn't disturb the other.
    Are they at different application levels (1 for dev, 1 for validation...)? Here too, it's perhaps better to have one listener per instance, for the same reason.
    Are they two development databases? In that case, you can have one listener for all the databases.
    I don't think there is only one way; there are too many configurations for any one solution to always be better than another.
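    For reference, a minimal listener.ora sketch of the one-listener-per-instance layout (host, ports, SIDs, and paths are hypothetical; each listener is started from its own ORACLE_HOME):

        LISTENER9I =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521)))

        SID_LIST_LISTENER9I =
          (SID_LIST =
            (SID_DESC =
              (ORACLE_HOME = /u01/app/oracle/product/9.2.0)
              (SID_NAME = DB9I)))

        LISTENER10G =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1522)))

        SID_LIST_LISTENER10G =
          (SID_LIST =
            (SID_DESC =
              (ORACLE_HOME = /u01/app/oracle/product/10.2.0)
              (SID_NAME = DB10G)))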
    Nicolas.

  • Music on Hold: Best Practice and site assignment

    Hi guys,
    I have a client with multiple sites, a large number of remote workers (on and off domain) and Lync Phone Edition devices.
    We want to deploy a custom music on hold file. What's the best way of doing this? I'm thinking
    Placing the file on a share on one of the Lync servers. However, this would mean (I assume) that clients will always try to contact the UNC path every time a call is placed on hold, which would result in site B connecting to site A for its MoH file. This is very inefficient and adds delay to placing a call on hold. If accessing the file from a central share is best practice, how could I do this per site? Site policies I've tried haven't worked very well. For example, if a file is on \\serverB\MoH\file.wma for a site called "London Site", what commands do I need to run to create a policy that will force clients located at that site to use that UNC path? Also, how do clients know what site they are in?
    Alternatively, I was thinking of pushing out the WMA file to local devices via a Group Policy, and then setting Lync globally to point to %systemdrive%\MoH\file.wma. Again, how do I go about doing this? Also, what would happen to LPE devices that wouldn't have the file (as they wouldn't get the GPO)?
    Any help with this would be appreciated. Particularly around how users are assigned to sites, and the syntax used to create a site policy for the first option. Any best practice guidance would be great!
    Thanks - Steve

    Hi StevehootMITS,
    If Lync Phone Edition devices, or other devices that don't provide endpoint MoH, are in use, you can use PSTN gateways to provide music on hold. For more information about Music on Hold, you can check
    http://windowspbx.blogspot.in/2011/07/questions-about-microsoft-lync-server.html
    Note: Microsoft is providing this information as a convenience to you. The sites are not controlled by Microsoft. Microsoft cannot make any representations regarding the quality, safety, or
    suitability of any software or information found there. Please make sure that you completely understand the risk before retrieving any suggestions from the above link.
    Best regards,
    Eric
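    On Steve's first option (a site-scoped policy pointing at a local share), a hedged sketch using the Lync Server client policy cmdlets; "site:" scopes the policy to a site defined in the Lync topology, which is also how clients are associated with a site. Treat the cmdlet and parameter names as assumptions to verify against your Lync version; the path is the one from the question:

        # Client policy scoped to the Lync topology site "LondonSite" (hypothetical site name)
        New-CsClientPolicy -Identity site:LondonSite `
            -EnableClientMusicOnHold $true `
            -MusicOnHoldAudioFile "\\serverB\MoH\file.wma"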

  • Good CCIE question: Can multiple site-2-site VPNs support dynamic routing protocols?

    Hi All,
    Was not sure if this should be posted in LAN routing, WAN routing or VPN forums: I have posted here as the VPN tunnels are the limiting factors...
    I am trying to understand if it is possible to have dynamic routing between LANs when using site to site VPNs on three or more ASA55x5-x (9.0).
    To best explain the question I have put together an example scenario:
    Let's say we have three sites, which are all connected via separate site-to-site IKEv2 VPNs in a full-mesh topology (6 x SAs).
    Across the whole system there would be a 192.168.0.0/16 subnet which is divided up by VLSM across all sites.
    The inside / outside interfaces of the ASA would be static IPs from a /30 subnet.
    Routing on the outside interface is not of concern in this scenario.
    The inside interface of the ASA connects directly to a router, which further uses VLSM to assign additional subnets.
    VLSM is not cleanly summarised per site. (I know this flies against VLSM best practice, but it makes the scenario clearer...)
    New subnets are added and removed at each site on a frequent basis.
    EIGRP will be running on each core router, and any stub routers at each site.
    So this results in the following example topology, of which I have exaggerated the VLSM position:
    (http://www.diagram.ly/?share=#OtprIYuOeKRb3HBV6Qy8CL8ZUE6Bkc2FPg2gKHnzVliaJBhuIG)
    Now, using static route redistribution from the ASAs into EIGRP and making the ASAs EIGRP neighbours would be one way. This would mean an isolated EIGRP AS per site, but each site would only learn about a new remote subnet if the crypto map match ACL was altered. The bit that I am confused over is the potential for new subnets to be added or removed, which would require the EIGRP routing processes on the relevant site X router to be altered, as well as crypto map ACLs being altered at all sites. This doesn't seem a sensible approach...
    The second method could be to have the 192.168.0.0/16 network defined in the crypto map on all tunnels and allow the ASA's routing table to choose which tunnel to send the traffic over. This would require multiple neighbours for the ASA, but, for example, OSPF can only support one neighbour over an S2S VPN when manually defined (point-to-point). The only way round this I can see is to share our internal routing tables with the IP cloud, but this then discloses information that would otherwise be protected by the IPsec tunnel...
    Is there a better method to propagate the routing information dynamically around the example scenario above?
    Is there a way to have dynamic crypto maps based on router information?
    P.S. Diagram above produced via http://www.diagram.ly/

    Hi Guys,
    Thanks for your responses!  I am learning here, hence the post.
    David: I had looked into the potential for GRE tunnels, but the side-effects could outweigh the benefits. The link provided shows how to pass IKEv1 and ISAKMP traffic through the ASA. In my example (maybe not too clear?) the IPsec traffic would be terminated on the ASA and not on the core router behind it.
    Marcin: Was looking at OSPF, but is that not limited to one neighbour, due to the "ospf network point-to-point non-broadcast" command in the example (needed to force the unicast over the IPSEC tunnel)? Have had a look in the ASA CLI 9.0 config guide and it is still limited to one neighbour per interface when in point-to-point:
    ospf network point-to-point non-broadcast - Specifies the interface as a point-to-point, non-broadcast network. When you designate an interface as point-to-point and non-broadcast, you must manually define the OSPF neighbor; dynamic neighbor discovery is not possible. See the "Defining Static OSPFv2 Neighbors" section for more information. Additionally, you can only define one OSPF neighbor on that interface.
    Otherwise I would agree it would be happy days...
    Any other ideas (maybe around iBGP, like OSPF) which do not involve GRE tunnels or terminating the IPsec on the core router, please?
    Kindest Regards,
    James.
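    For anyone following along, a minimal sketch of the single-neighbour OSPF configuration being discussed, matching the config guide quote above (interface names and addresses are hypothetical; ASA 9.0 syntax):

        interface GigabitEthernet0/0
         nameif outside
         ospf network point-to-point non-broadcast
        !
        router ospf 1
         network 192.168.1.0 255.255.255.0 area 0
         ! the single static neighbour reachable through the VPN tunnel
         neighbor 198.51.100.2 interface outside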

  • Site System Roles - Best Practices

    Hi all -
    I was wondering if there were any best practice recommendations for how to configure Site System Roles. We had a vendor come onsite and set up our environment, and without going into a lot of detail on why, I wasn't able to work with the vendor. I am trying to understand why they did certain things after the fact.
    For scoping purposes we have about 12,000 clients, and this is how our environment was set up:
    SERVERA - Site Server, Management Point
    SERVERB - Management Point, Software Update Point
    SERVERC - Asset Intelligence Synchronization Point, Application Catalog Web Service Point, Application Catalog Website Point, Fallback Status Point, Software Update Point
    SERVERD - Distribution Point (we will add more DPs later)
    SERVERE - Distribution Point (we will add more DPs later)
    SERVERF - Reporting Services Point
    The rest is dedicated to our SQL cluster.
    I was wondering if this seems like a good setup, and had a few specific questions:
    Our Site Server is also a Management Point. We have a second Management Point as well, but I was curious if that was best practice?
    Should our Fallback Status Point be a Distribution Point?
    I really appreciate any help on this.

    The FSP role has nothing to do with the 'Allow fallback source location for content' option on the DP.
    http://technet.microsoft.com/en-us/library/gg681976.aspx
    http://blogs.technet.com/b/cmpfekevin/archive/2013/03/05/what-is-fallback-and-what-does-it-mean.aspx
    Benoit Lecours | Blog: System Center Dudes

  • How to handle rules and activities in AII

    Hi experts,
    How do I handle rules and activities in AII? Is there any transaction code for it? Please give some information; it would be helpful for me.

    In the IMG, go to Conditions & Rules to configure the relevant rules.
    Rules in AII represent the business processes you intend to RFID-enable. Each rule is made up of multiple activities that define exactly how a particular rule functions.
    SAP delivers standard rules & activities for standard scenarios of Pack, Load etc. processes.
    For more information on AII rules and activities refer to the performance assistant available in IMG or read more at the link below:
    http://help.sap.com/saphelp_autoid40/helpdata/en/index.htm
    Hope this helps.
    -Ashish

  • Change scope / Multiscope option regarding Server 2008 R2 with TMG 2010 PPTP Site-to-Site VPN

    Hey guys,
    I've been looking through the forum for some answers regarding the following setup:
    Host with 2 VMs: DC and TMG
    Internal range is 192.168.100.x/24
    Host02 with 2 VMs: DC02 and TMG02
    Internal range is 192.168.200.x/24
    Now I'm looking to expand DC02, as that is my DHCP server. (TMG is the gateway.)
    However, there is a site-to-site VPN between both TMG servers.
    What is best practice in this situation, and how should I go about this?
    Any thoughts on this?
    With kind regards, René de Meijer. MIEGroup.

    Hi,
    To serve the client in the subnet that doesn't have a DHCP server, we need to add a DHCP relay agent in this subnet.
    To enable the DHCP relay agent, we need to install the RRAS. For the detailed steps, please refer to the link below:
    Configure the IPv4 DHCP Relay Agent
    https://technet.microsoft.com/en-us/library/dd469685.aspx
    Besides, since we have installed the TMG server, a bit more configuration is needed to allow the DHCP traffic.
    Here is a related article, it may be helpful:
    https://technet.microsoft.com/en-us/library/cc302680.aspx
    Best Regards.
    Steven Lee Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • How to handle source code changes in apex

    Hi all,
    Can anybody help me, please?
    How do I handle source code changes in APEX?
    Which development process is best suited for APEX?
    Regards
    Alekh

    Thanks Andy. So, as per the suggestion, we have to handle the above snippet as individual IF blocks, as I have shown below.
    But in this case, how do we express the else part as NULL?
    Correct me if my understanding is wrong.
    if 'Products' in (:P1_ENG_GRP1, :P1_ENG_GRP2, :P_ENG_GRP3) then
        lv_to_email_id := '[email protected]';
    end if;

    if 'Materials' in (:P1_ENG_GRP1, :P1_ENG_GRP2, :P_ENG_GRP3) then
        lv_to_email_id := '[email protected]';
    end if;

    Thanks,
    Anoo..
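    On the "else part as NULL" question, one hedged option: if only one of the groups can match at a time, the separate IF blocks can be collapsed into a single IF/ELSIF with an explicit ELSE. The e-mail literals below are placeholders, since the real addresses are redacted above:

        if 'Products' in (:P1_ENG_GRP1, :P1_ENG_GRP2, :P_ENG_GRP3) then
            lv_to_email_id := 'products_team@example.com';   -- placeholder address
        elsif 'Materials' in (:P1_ENG_GRP1, :P1_ENG_GRP2, :P_ENG_GRP3) then
            lv_to_email_id := 'materials_team@example.com';  -- placeholder address
        else
            lv_to_email_id := null;  -- the explicit "no match" case
        end if;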

  • DW:101 Question - Site folder and file naming - best practices

    OK - My 1st post! I'm new to DW and fairly new to developing websites (I have done a couple in FrontPage and a couple in SiteGrinder), although not new at all to technical concepts - building PCs, figuring things out, etc.
    For websites, I know I have a lot to learn and I'll do my best to look for answers, RTFM and all that before I post. I even purchased a few months of access to lynda.com for technical reference.
    So no more introduction. I did some research (and I kind of already knew) that for file names and folder names: no spaces, just dashes or underscores, don't start with a number, keep the names short, no special characters.
    I've noticed in some of the example sites in the training I'm looking at that some folders start with an underscore and some don't. And some start with a capital letter and some don't.
    So the question is - what is the best practice for naming files, and especially folders? And what's the best way to organize the files in the folders? For example, all the .css files in a folder called 'css' or '_css'?
    While I'm asking, are there any other things along the lines of just starting out I should be looking at? (If this is way too general a question, I understand.)
    Thanks…
    \Dave
    www.beacondigitalvideo.com
    By the way I built this site from a template – (modified quite a bit) in 2004 with FrontPage. I know it needs a re-design but I have to say, we get about 80% of our video conversion business from this site.

    So the question is - what is the best practice for naming files, and especially folders? And what's the best way to organize the files in the folders? For example, all the .css files in a folder called 'css' or '_css'?
    For me, best practice is always the nomenclature and structure that makes most sense to you, your way of thinking and your workflow.
    Logical and hierarchical always helps me.
    Beyond that:
    Some seem to use _css rather than css because (I guess) those file/folder names rise to the top in an alphabetical sort. Or perhaps they're used to that from a programming environment.
    Some use CamelCase, some use all lowercase or special_characters to separate words.
    Some work with CMSes or in team environments which have agreed schemes.
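    Purely as an illustration of "logical and hierarchical", one common layout (the folder names are a matter of taste, as discussed above):

        site-root/
            index.html
            css/       (or _css/ if you want it sorted to the top)
            js/
            images/
            downloads/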
