Best-practice to redistribute NAT entries into OSPF

I have several different subnets that are either NAT'd or reachable via a VPN. There is no actual route to these addresses on the ASA, and they are not directly connected, which rules out the usual redistribution commands.
What is the best practice for redistributing such entries into an OSPF area? In the past I've kept static entries on the upstream firewall so the rest of the network could reach these subnets. I'm trying to get rid of as many static routes as possible (or at least turn them into floating routes so they only act as backup if something in OSPF fails), but I'm having difficulty figuring out how to redistribute these prefixes into the OSPF area.
I can't use a summary-address command because there are no external routes being redistributed, and the area range command is out because I don't have a separate area to summarize from.
One thought I've had is to create a static null route for each subnet (letting me redistribute static routes and keep the static entries only on the originating box), but I suspect that, rather than NATing the traffic or bringing up the site-to-site VPN, the box would simply discard it (since the destination is null).
Any ideas on what to do when you have "imaginary" addresses that don't exist anywhere except in NAT entries or as interesting traffic for a site-to-site VPN?
Thanks in advance.
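
For what it's worth, here is a minimal sketch of the null-route idea on an IOS-style device that supports Null0 routes. The prefixes and OSPF process ID are made up, and whether NAT or the VPN/crypto processing grabs matching traffic before the null route can black-hole it depends on the platform and software version, so treat this as something to lab-test rather than a definitive answer:

    ! Hypothetical NAT/VPN-only prefixes anchored with static null routes
    ip route 192.0.2.0 255.255.255.0 Null0
    ip route 198.51.100.0 255.255.255.0 Null0
    !
    router ospf 1
     ! advertise the statics into the area as external routes
     redistribute static subnets

On an ASA the equivalent idea would be route Null0 plus redistribute static subnets under router ospf, assuming your release supports null routes; that is an assumption to verify against your version.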

I have the code working without using config files. I am just disappointed that it is not working with the configuration files, since that was one of the primary intents of my code refactoring.
Katherine Xiong, if you are proposing this as an answer, does that imply that Microsoft's stance is not to use configuration files with SSIS? Please answer.
SM

Similar Messages

  • What is the best practice for inserting (unique) rows into a table with a two-column key constraint when the source may contain duplicate (already existing) rows?

    My final data table has a unique key constraint across two columns. I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but without a uniqueness constraint).
    I don't want to insert rows from daily capture that already exist in the final data table (based on the two key columns). Currently, I select * into a #temp table from a join of the daily capture and final data tables on these two key columns, delete the rows in the daily capture table that match the #temp table, and then insert the remaining rows from daily capture into the final data table.
    Would it be possible to simplify this process by using an INSTEAD OF trigger on the final table and just inserting directly from the daily capture table? What would that look like?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need
    to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make
    up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the fist day of your RDBMS class? A table has to have a key.  You need to fix this error. What ETL tool do you use? 
    >> I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
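
    Since the reply above stops at "MERGE statement; Google it", here is a minimal T-SQL sketch of that approach. The table and column names (FinalData, DailyCapture, key_col1, key_col2, other_col) are invented, since no DDL was posted:

    -- Insert only rows whose two-column key is not already in the target;
    -- DISTINCT also collapses duplicates within the daily capture itself.
    MERGE INTO FinalData AS tgt
    USING (SELECT DISTINCT key_col1, key_col2, other_col
           FROM DailyCapture) AS src
       ON tgt.key_col1 = src.key_col1
      AND tgt.key_col2 = src.key_col2
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (key_col1, key_col2, other_col)
        VALUES (src.key_col1, src.key_col2, src.other_col);

    An INSERT ... SELECT with a WHERE NOT EXISTS against FinalData achieves the same thing on versions without MERGE, and neither variant needs the #temp table or the deletes from DailyCapture.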

  • Best practice to split up documents into articles?

    Dear Adobe,
    At today's InDesign and DPS session I asked how the DPS folks split up InDesign documents to upload them as separate articles to DPS. Bob wanted to ask Collin, but forgot to do it!
    I will explain my situation (supposing that most producers are in the same boat):
    I have a monthly magazine with about 30 different articles. I receive the print file and create an iPad 1/2 and iPad 3 version, and beginning next month also an iPhone and Android (including Ice Cream Sandwich) version. The magazine is vertical orientation only.
    Now with CS6, you do a lot of promotional work for the alternate layout feature.
    But what is Adobe's recommendation or best practice for uploading this one InDesign document with Viewer Builder to get iPad 1/2, iPad 3, and iPhone renditions that keep the separation of the different articles? Please let me know how you handle this!
    Kind regards
    Yves

    As you know, you need one InDesign file for each article (that file can cover all devices down to the iPhone version). The next article needs a new InDesign file. You can drag and drop pages from one InDesign file into the new layout to move pages across documents.
    If you want to synchronize settings, try the book feature, but I have never tested the book feature for CS6/alternate layout compatibility.
    —Johannes

  • Best practice to pass a value into a sub-process

    Hi
    I'm new to Oracle workflow and have the following problem/question.
    I have three identical subprocesses (one for each category) which should run in parallel, and I need to know for which category I'm running inside the activities of these subprocesses.
    Of course I would like to define the subprocess only once and reuse it for the other two parallel paths.
    What's the best way of doing this, and how should I pass the category into the subprocess?
    I can think of the following
    - use an item attribute to pass the CATEGORY
    - use a process attribute to pass the CATEGORY into the subprocess
    - use the <process_name> of WF_ENGINE.GET_ACTIVITY_LABEL (each subprocess having a different process_name)
    Is there a better way of doing this ?
    Thanks
    Guido

    Unfortunately there is a limitation that Oracle Workflow does not support using a subprocess activity multiple times within a process hierarchy.
    See http://download-west.oracle.com/docs/cd/B10501_01/workflow.920/a95265/defcom36.htm#pact
    You could create one subprocess and then make copies with different names, and include logic in the main process to transition to the appropriate one.
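
    For the "which category am I in" part, the original post notes that WF_ENGINE.GET_ACTIVITY_LABEL exposes the <process_name>, so the prefix of the returned label can identify which subprocess copy is executing. A rough PL/SQL sketch, as it might appear in a package body; the procedure name and the subprocess-copy names (CAT_A_SUB etc.) are invented:

    -- Standard Workflow activity-function signature.
    PROCEDURE get_category (itemtype  IN VARCHAR2,
                            itemkey   IN VARCHAR2,
                            actid     IN NUMBER,
                            funcmode  IN VARCHAR2,
                            resultout IN OUT NOCOPY VARCHAR2)
    IS
      l_label    VARCHAR2(240);
      l_category VARCHAR2(30);
    BEGIN
      IF funcmode = 'RUN' THEN
        l_label := wf_engine.GetActivityLabel(actid);
        -- e.g. 'CAT_A_SUB:CHECK_CATEGORY' -> take the process-name prefix
        l_category := SUBSTR(l_label, 1, INSTR(l_label, ':') - 1);
        -- ... branch on l_category here ...
      END IF;
      resultout := 'COMPLETE';
    END get_category;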

  • Best Practice for CATTS time entry to Purchase Order

    Hello All -
    We have a scenario where we create a purchase order to a recruiting company for a contractor.
    When the contractor works on our project and books hours via CATTs, we would like the hours to post back to the purchase order like a goods receipt.
    What is the best way to do this: a service entry sheet or a goods receipt?
    And technically, how should it be set up?
    Thanks for your help!

    Well, their first requirement was to get real-time data: if I am in Xcelsius and click refresh, I want it to load my latest data.
    With Live Office, I can either schedule a Crystal Report and get the data delayed, or use the Live Office option to refresh it right now. Is that a correct assumption?
    I was talking about BW, just in case they are willing to change the requirement from real time to every 5 minutes.
    Just so you know, we are also thinking of the following option:
    1 - modify the virtual provider on the CRM machine to get all the custom fields needed for the Xcelsius dashboard
    2 - build some interactive reports on top of these virtual providers within CRM
    3 - get the link to this report (it is one of the report features within CRM)
    4 - design and build your dashboard on top of it
    5 - export your SWF file to the CRM Web UI
    We are trying to see which one is the best option.
    Philippe

  • Best practice for loading from mysql into oracle?

    Hi!
    We're planning to migrate our software from MySQL to Oracle, so we need a migration path for moving customers' data from MySQL to Oracle. The installation and the data migration/transfer have to run in many different customer environments, so approaches like installing the Oracle gateway and connecting to MySQL via ODBC are not an option because they complicate the installation process... Also, the installation with a preconfigured Oracle database has to fit on a 4.6 GB DVD...
    I would prefer the following:
    - spool mysql table data into flat files
    - create oracle external tables on the flat files
    - load data with insert into from external tables
    Are there other "easy" ways of doing such migrations, or what do you think about the preferred approach above?
    Thanks
    Markus

    Hi!
    Hasn't anyone else had this requirement for migrations? I have tested the MySQL SELECT ... INTO OUTFILE clause. It seems to work for simple data types; we're now testing with BLOBs...
    Markus
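
    To make the preferred path above a little more concrete, here is a minimal sketch of the Oracle side. The directory path, file name, table, and columns are invented, and the flat file is assumed to be pipe-delimited output from MySQL's SELECT ... INTO OUTFILE:

    -- Directory object pointing at where the MySQL dump files land
    CREATE DIRECTORY mig_dir AS '/u01/migration/flatfiles';

    -- External table over one exported MySQL table
    CREATE TABLE customers_ext (
      customer_id NUMBER,
      name        VARCHAR2(200),
      created_at  VARCHAR2(19)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY mig_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY '|'
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('customers.txt')
    )
    REJECT LIMIT UNLIMITED;

    -- Load into the real table, converting types in the SELECT
    INSERT INTO customers (customer_id, name, created_at)
    SELECT customer_id, name, TO_DATE(created_at, 'YYYY-MM-DD HH24:MI:SS')
    FROM customers_ext;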

  • Best practice for Source NATTING ?

    Is there a general design rule for configuring source NATing? Is it best to configure the CSS in one-armed or two-armed mode?
    What are the performance limitations of doing this?
    Can source-NATed and non-source-NATed content rules be configured on the CSS with no impact?
    Cheers, Mike

    Source groups translate the source address of packets from back-end services before forwarding them. When a flow is originated from the back-end server with a private address, the request appears to come from the public Virtual IP (VIP) of the source group. You can also use source groups (with Access Lists (ACLs)) to translate clients' private IP addresses (which reside on the back-end of the CSS) to a public IP address (the VIP).
    The use of this type of source group is useful when setting up a one-armed configuration where client and server traffic flows through the same CSS switch. For more information read the following document.
    http://www.cisco.com/en/US/products/hw/contnetw/ps789/products_tech_note09186a0080093dfc.shtml

  • Best Practices for importing FrameMaker books into RoboHelp?

    We are getting ready to purchase RoboHelp, and I'm trying to get a head start on the conversion. We have ~600 base FrameMaker (v7) files that are pieced together to create ~11 different manuals. I've got a couple of simpler books picked out for testing purposes, but if there's a white paper or anything on the subject floating about, I would appreciate it.

    I've tried to use RH8 to produce HTML help from a FM9 book. I found that it mostly worked as advertised, with the exception of automatic TOC generation from a book TOC (which can be worked around) and disappearing HTML tags when conditional text is used (no work around found yet). I've posted about both of these issues in this forum, and while many have read my posts, no-one has yet acknowledged that they have seen similar problems or pointed out incorrect usage on my part. So I might just be mad (or furious, depending).
    Before purchasing and committing to an approach, I recommend that you use the 30 day eval to thoroughly exercise these products and see if they work as advertised for you. If you need more than FM9 and RH8 (which can be downloaded as full-featured trials), then you can get a DVD of a 30 day trial of TCS2 from Adobe. I ordered the DVD and it arrived in about 3-5 days (at least, it did for me, and I'm in the USA).
    -Adam

  • What is the best practice to deploy a web part into 1. the Solution Gallery, 2. the GAC, 3. the BIN?

    I am trying various ways to deploy a web part. Can you please give me the best-practice methods for deploying a web part into:
    Case 1. The Solution Gallery: ?
    Case 2. The GAC: ?
    Case 3. The BIN: ?

    That is going to depend on what is in the web part...
    There are "apps", "sandboxed solutions" (becoming deprecated in 2013), "Farm Solutions" if you have dlls that need to be deployed to the gac.
    Apps - More for javascript (or if you have server side code that you want to run on a server that is not in sharepoint
    sandboxed solutions - run in the context of a site, but cannot add dll to gac (or consume certain dlls such as system.web, etc...) so anything that you want to do outside the context of the current site collection is not allowed
    Farm Solution - allows you to deploy .Net code to the GAC.  Would package as a wsp and give it to an admin to install (requires app pool resets and/or iis resets).
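
    As a rough illustration of the farm-solution route (path, solution name, and web application URL are placeholders), deployment from the SharePoint Management Shell typically looks something like this:

    # Add the packaged WSP to the farm's solution store
    Add-SPSolution -LiteralPath "C:\deploy\MyWebParts.wsp"

    # Deploy it to a web application, pushing the assembly to the GAC
    Install-SPSolution -Identity "MyWebParts.wsp" -WebApplication "http://intranet.contoso.local" -GACDeployment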

  • Best Practice on Moving Standard Hierarchy (Cost/Profit) into Production

    Hi,
    What is the best practice to move standard hierarchy into production? Is it better to move it as a transport? Or is it better to upload into production with LSMW?
    Thanks,
    Santoshi

    Hi,
    Best practices apply to all development, whether it is R/3, BI modelling, or reporting: as per best practice, we develop in the development system, test in the test box, and finally deploy successful development to production. For user analysis purposes, users can do ad hoc analysis, or in some scenarios they create user-specific custom queries (sometimes referred to as X-queries, created by a super user).
    So it is always best to do all your development in the development box and then transport it to production after successful QA testing.
    Dev

  • Java WebDynpro context mapping  best practices

    Hi Friends,
    The data provided in the context of each view controller and the component controller can be maintained in different ways.
    1. Map view controller fields to the component controller only when the data needs to be accessed in both places; all other fields that do not need to be accessed in both places are left unmapped. Or, put differently: what is the advantage of not mapping fields between view controllers and the component controller?
    2. Instead of individual value attributes, a value node may be used to group a particular set of fields. Is it best practice to group fields into value nodes according to the screen grouping? For example, if a screen has three sub-parts, use three value nodes, each containing different value attributes. Which approach should be considered best practice?
    Thanks!

    >> 1) The advantage of not mapping is performance. <<
    Very weak argument. There is no significant performance loss when mapping is used (I bet you save less than a percent compared to "direct" access).
    Put simply: your business data originates in the component controller. You must show it on the view, hence the need for mapping.
    Also, a view may require certain context nodes just to set up and control UI elements. Declare those nodes directly in the view controller and you need no mapping in that case.
    Valery Silaev
    EPAM Systems
    http://www.NetWeaverTeam.com

  • Oracle Custom Workflow Redesign best practices

    Hi All,
    Morning , need some help with this scenario.
    We are in the process of redesigning existing custom Oracle Workflows in our system (Oracle Apps Release 12.0.6).
    I wanted to know if there are steps or guidelines/best practices that could be followed in this situation, on points like handling workflow performance issues, purging obsolete workflow data, designing notifications, handling error conditions in workflow activities, and retrying activities (i.e. if any activity within the workflow process errors out, how it can be retried or re-executed immediately, without delay).
    Any pointers for this redesign activity, or any best-practice documents, steps, and guidelines, would be very helpful here...
    Regards

    This is a very broad question - narrowing it to specifics might help folks respond better.
    There are a lot of documents on MOS that refer to best practices from a technology stack perspective.
    Oracle Workflow Best Practices Release 12 and Release 11i (Doc ID 453137.1)
    As far as functional practices are concerned, these may vary from module to module, as functionality and workflow implementation vary from module to module.
    FAQ: Best Practices For Custom Order Entry Workflow Design (Doc ID 402144.1)
    HTH
    Srini

  • Info on best practices to add jar references in server.xml

    Hi
    I want to add some jar references (pcl.jar & struts.jar) in server.xml.
    Can someone let me know if this can be added to <shared-library name="global.tag.libraries" version="1.0" library-compatible="true"> ?
    What is the best practice for adding such entries to server.xml?
    Thanks
    Badri

    If you want to use it in BPEL it should be placed in oracle.bpel.common
    cheers
    James
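
    For reference, a shared-library entry in an OC4J server.xml generally looks something like the sketch below; the library name and jar paths here are placeholders (adding <code-source> entries to the existing global.tag.libraries block is the other option):

    <!-- hypothetical shared library exposing the two jars to applications -->
    <shared-library name="my.shared.libraries" version="1.0" library-compatible="true">
        <code-source path="../applib/struts.jar"/>
        <code-source path="../applib/pcl.jar"/>
    </shared-library>

    Applications (or the BPEL container, per the reply above) would then import it, e.g. via an <import-shared-library name="my.shared.libraries"/> entry in their orion-application.xml; verify the exact element against your OC4J version.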

  • Best practice for Extracting sales contracts from R/3

    Hi Everyone,
    We have a requirement to bring sales contract data from R/3, including most of the fields from table VEDA, but 2LIS_11_VAITM doesn't carry the VEDA fields.
    What is the best practice for bringing the VEDA fields into BW?
    Thanks,
    D. Eranezhath

    New URLs are:
    LOGISTIC COCKPIT - WHEN YOU NEED MORE - First option: enhance it !
    Custom fields and BW extractors : Making a mixed marriage work! (Part-II)

  • Best Practice Directory Structure

    I'm new to programming and I'm creating a large application
    that will have a lot of pages and components. As I continue to add
    to the project, I find it harder and harder to easily locate the
    pages and components that I need to work with. To help me with this
    problem, I've decided to add a label to every component with the
    location of where it can be found. Now when I view the application
    and I see a component I want to work with, I know where it is
    located.
    This is probably not the best way to handle the problem, and it will be a pain to go back and remove or comment out these labels when I'm finished with the project, but it will help for now.
    Short of having a better memory :>), are there any best-practice guidelines I should look into and follow when I create my directory structure of files and components, so that it intuitively helps me quickly locate a component? What do you use to help you?
    Thanks for your ideas and suggestions

    There are many ways to answer this. If you have a big project
    you might consider using a Flex Library Project for your components
    - think of it as writing a set of components you (or your company)
    might one day sell to others - even if you never have that intent.
    The idea is to make the components as reusable as possible in case
    you need them for the next project. Having them in a separate Flex
    Library project (which creates a .swc file) would make that easier.
    The industry best-practice seems to be to create packages
    that begin with your company's domain name, but in reverse order.
    For example, components I write for Adobe go into a package
    beginning with com.adobe which works out to be the file structure
    com/adobe and then is further divided by the application and then
    its parts (eg, com.adobe.scrapbook.editor). Using your company's
    domain to distinguish your components enables you and others to
    combine components from different places without naming conflicts.
    If you divide your application into logical parts you can
    figure out a good package naming convention that is easier to
    remember. For example, I might put all of the skins for my
    Scrapbook Editor into the com.adobe.scrapbook.editor.skins package.
    I can easily find those files and add to them as necessary.
    Other people follow a similar pattern and there are books on
    the subject, too.
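
    A tiny illustration of the reverse-domain package convention described above (the domain, application, and class names are made up):

    // File: src/com/example/scrapbook/editor/components/RatingStar.as
    package com.example.scrapbook.editor.components
    {
        import mx.core.UIComponent;

        // The file path mirrors the package name, so the class can be located
        // in the source tree straight from its fully qualified name.
        public class RatingStar extends UIComponent
        {
        }
    }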
