BGP Best Practice / Private-AS vs. Public-AS in the MPLS Core

Dears,
We have recently acquired a large network with ASR9Ks as Internet Gateways (IGWs) and non-Cisco devices in the MPLS core.
We would like to know which is the recommended solution: use a private MP-BGP AS in the MPLS core, or extend the IGW's public AS, knowing that the IGW will be in a VRF and not in the global routing table. Moreover, the clients of the MPLS core have their own public BGP ASes and need to connect to the MPLS core to obtain Internet service from the IGW.
(Cust1)------EBGP------[VRF_Cust_1](MPLS CORE AS_2)[VRF_IGW]------EBGP-----(IGW AS_1) in the case of having a private BGP AS in the core
(Cust1)------EBGP------[VRF_Cust_1](MPLS CORE AS_1)[VRF_IGW]------iBGP-----(IGW AS_1) in the case of having same public BGP AS in the core
Waiting for your feedback and thoughts.
Thanks,
Michel.

Michel,
if your MPLS core is also used for Internet transit, then it is best to use a public AS.
if not, then you can leave it as is and remove the private AS at your border routers.
If you are connecting multiple MPLS networks together to link L2 or L3 VPN services, I think it is easiest to have it all in one AS; otherwise you end up with complex designs such as Carrier Supporting Carrier (CSC) or Inter-AS option A (VRF lite), B (using VPNv4 at the inter-AS gateway), or C (using VPNv4 at the inter-AS gateway with route reflectors in each AS peering with each other).
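For example, if the core keeps a private AS, the IGW (public AS) can strip it from updates sent to its Internet peers with IOS-XR's per-neighbor remove-private-AS knob; a sketch with placeholder AS numbers and peer address:

router bgp 100
 neighbor 203.0.113.1
  remote-as 200
  address-family ipv4 unicast
   remove-private-AS

That way the core's private AS never appears in AS paths advertised beyond your border.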
regards
xander
Xander Thuijs CCIE #6775
Principal Engineer 
ASR9000, CRS, NCS6000 & IOS-XR

Similar Messages

  • Swing best practice - private modifier vs. many parameters

    Dear Experts,
    I have a comboBox that has a customized editor and a KeyListener that responds to several keyPressed and keyTyped events. The comboBox is used in two different JFrames, say JFrame frmA and JFrame frmB.
    Since the KeyListener changes the state of 8 other components in frmA, I have two options:
    Option 1:
    - Code the comboBox in a separate class and pass all the affected components as parameters. I will have around 10 parameters, but the components can be kept private to frmA or frmB.
    Option 2:
    - Code the comboBox in a separate class and pass the instance of the caller (frmA or frmB), so that the comboBox can change the state of the other components in frmA or frmB according to its caller. However, the components must then not be private, and must be accessible to the comboBox class.
    My questions:
    1. I have not implemented option 2, so I have not proved that it will work. Will it work?
    2. Which option is more efficient and requires less CPU time? If they are the same, which option is the best practice?
    3. Is there any other option that is better than these two options?
    Thanks for your advice,
    Patrick

    1. I have not implemented option 2, so I have not proved that it will work. Will it work?
    It doesn't stand up in the long run. Doing so couples your specific ComboBox class to all the widgets that react to the ComboBox changes. If you happen to add a new button in either JFrame that should also be affected by the combo-box selection, you'll have to modify, and re-test, the ComboBox code. Moreover, if a new button were needed in one of the JFrames but not the other, you'd have to introduce a special case in your ComboBox code.
    Instead of having the ComboBox's listeners invoke methods on each piloted widget, have them invoke one method (selectionChanged(...)) on these widgets' common parent (not necessarily a graphical container, but an object that has, maybe indirect, references to each of the dependent widgets).
    2. Which option will be more efficient and require less CPU time?
    I wouldn't worry about it. In the graphical layer of an application, and unless the graphical representation performs computations on the bitmap, any action is normally much quicker than business-logic computation. Any counter-example is likely to be a bug in the UI implementation (such as not observing Swing's threading rules) or a severe flaw in the design (such as having a hierarchy of several hundred JComponents). Swing widgets are pretty responsive to genuine calls such as setEnabled(), setBackground(), setText(), ...
    If it is the same, which option is the best practice?
    Neither. Hardcoding relationships between widgets may be OK within a single, and single-purpose, form. But if you want to code a reusable component, design it for reuse (that is, the less it knows about which context it is used in, the more contexts it can be used in). In general, widgets that know each other involve a quadratic number of references, which accordingly impacts code readability (and bug rate). This is the primary reason for introducing a Mediator pattern (of which my reply to 1 above is a degenerate form).
    3. Is there any other option that is better than these two options?
    Yes. Look into the Mediator pattern (http://en.wikipedia.org/wiki/Mediator_pattern). The Wikipedia page is not compelling, but you'll easily find lots of resources on the Web.
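    To make the suggestion in 1 concrete, here is a minimal sketch of that degenerate Mediator (class and method names are illustrative, not from your code): the combo box knows only one small interface, and each frame implements it to drive its own private widgets.
    import java.awt.BorderLayout;
    import javax.swing.JButton;
    import javax.swing.JComboBox;
    import javax.swing.JFrame;
    // The only thing the reusable combo box knows about its surroundings.
    interface SelectionMediator {
        void selectionChanged(Object newValue);
    }
    // Reusable component: no references to buttons, frames, or other widgets.
    class SharedComboBox extends JComboBox<String> {
        SharedComboBox(SelectionMediator mediator) {
            super(new String[] { "A", "B" });
            addActionListener(e -> mediator.selectionChanged(getSelectedItem()));
        }
    }
    // Each frame keeps its components private and reacts in its own way.
    class FrmA extends JFrame implements SelectionMediator {
        private final JButton button = new JButton("Only in frmA");
        FrmA() {
            add(new SharedComboBox(this), BorderLayout.NORTH);
            add(button, BorderLayout.SOUTH);
            pack();
        }
        public void selectionChanged(Object newValue) {
            // update as many private widgets here as needed
            button.setEnabled("A".equals(newValue));
        }
    }
    Adding a widget to frmB then only touches FrmB's selectionChanged(...), never the combo box class.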

  • Best practice when FACT and DIMENSION table are the same

    Hi,
    In my physical model I have some tables that are both fact and dimension tables, i.e. in the BMM they are of course separated into a Fact and a Dim source (2 different units) and it works fine. But I can see that there will be trouble when having more fact tables and, e.g., a Period dimension pointing to all the different fact tables (different sources).
    It seems like the best solution to this is to have an alias of the fact/transaction table, i.e. 2 "copies" of the transaction table (one for the fact and one for the dimension table) in the physical layer. The only bad thing is that there will then always be 2 lookups on the same table when fetching data from both the dimension and the fact table.
    This is not built on a data warehouse, so the architecture is thereby more complex. Hope this was understandable (trying to make a short story of it).
    Any best practice on this? Or other suggestions?

    I'd recommend creating a view in the database. If it's an Oracle DB, materialized views could be a huge performance benefit; you just need to make sure that the MVs are refreshed when the source is updated.
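    For illustration, a materialized view along these lines (table and column names are hypothetical) gives the dimension its own refreshable object in the database:
    -- dimension columns of the combined fact/transaction table,
    -- exposed as a separate object for the physical layer to map
    CREATE MATERIALIZED VIEW period_dim_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
    AS
    SELECT DISTINCT period_id, period_name, period_year
    FROM   transactions;
    A scheduled DBMS_MVIEW.REFRESH('PERIOD_DIM_MV') after each load keeps it in sync with the source, which is the update caveat just mentioned.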
    -Domnic

  • What is the best practice for full browser video to achieve the highest quality?

    I'd like to get your thoughts on the best way to deliver full-browser (scaled to the size of the browser window) video. I'm skilled in the creation of the content but still learning to make the most of Flash CS5, and would love to hear what you would suggest.
    Most of the tutorials I can find on full-browser/scalable video are for earlier versions of Flash; what is the best practice today? What resolution/format is best for the video?
    If there is an Adobe guide to this I'm happy to eat humble pie if someone can redirect me to it; I'm using CS5 Production Premium.
    I like the full-screen video effect they have on the "Sounds of Pertussis" website; this is exactly what I'm trying to create, but I'm not sure of the best way to approach it - any hints/tips you can offer would be great.
    Thanks in advance!

    Use the little squares over your video to mask the quality. Sounds of Pertussis is not full-screen video, but rather full-stage video, which is easier to work with since all the controls and other assets stay on screen. You set up your HTML file to allow full screen, then bring in your video (NetStream or FLVPlayback component) and scale it to the full size of your stage (since in this case it's basically the background). I made a quickie demo here. (The video is from a cheapo SD consumer camera, so pretty poor quality to start.)
    In AS3 it would look something like this:
    import flash.display.Loader;
    import flash.net.URLRequest;
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.ui.Mouse;
    import flash.events.Event;
    import flash.events.MouseEvent;
    import flash.display.StageAlign;
    import flash.display.StageDisplayState;
    import flash.display.StageScaleMode;
    import flash.media.Video;
    import flash.net.NetConnection;
    import flash.net.NetStream;
    stage.align = StageAlign.TOP_LEFT;
    stage.scaleMode = StageScaleMode.NO_SCALE;
    // determine current stage size
    var sw:int = int(stage.stageWidth);
    var sh:int = int(stage.stageHeight);
    // load video
    var nc:NetConnection = new NetConnection();
    nc.connect(null);
    var ns:NetStream = new NetStream(nc);
    var vid:Video = new Video(656, 480); // size of video
    this.addChildAt(vid, 0);
    vid.attachNetStream(ns);
    // path to your video file
    ns.play("content/GS.f4v");
    var netClient:Object = new Object();
    netClient.onMetaData = function(info:Object):void {}; // stub so metadata callbacks don't raise async errors
    ns.client = netClient;
    // add listener for resizing of the stage so we can scale our assets
    stage.addEventListener(Event.RESIZE, resizeHandler);
    stage.dispatchEvent(new Event(Event.RESIZE));
    function resizeHandler(e:Event = null):void
    {
        // determine current stage size
        var sw:int = stage.stageWidth;
        var sh:int = stage.stageHeight;
        // scale video size depending on stage size
        vid.width = sw;
        vid.height = sh;
        // don't scale video smaller than a certain size
        if (vid.height < 480)
            vid.height = 480;
        if (vid.width < 656)
            vid.width = 656;
        // match both scale properties to the larger one so the video fills the stage proportionally
        if (vid.scaleX > vid.scaleY)
            vid.scaleY = vid.scaleX;
        else
            vid.scaleX = vid.scaleY;
    }
    // add event listener for full screen button
    fullScreenStage_mc.buttonMode = true;
    fullScreenStage_mc.mouseChildren = false;
    fullScreenStage_mc.addEventListener(MouseEvent.CLICK, goFullStage, false, 0, true);
    function goFullStage(event:MouseEvent):void
    {
        //vid.fullScreenTakeOver = false; // keeps the FLVPlayback component from going full screen if you use it instead
        if (stage.displayState == StageDisplayState.NORMAL)
            stage.displayState = StageDisplayState.FULL_SCREEN;
        else
            stage.displayState = StageDisplayState.NORMAL;
    }

  • Best Practice? - Implementing different sap portals on the same hardware

    We have a very large intranet portal implementation today spanning multiple boxes with 30k+ users on it.
    A different business group is asking us to build a SAP vendor portal system, but would like to know if we can run it on the same equipment.
    The intranet uses LDAP, whereas the vendor portal will authenticate/authorize against the database. Aside from this, other configurations will differ as well. My gut feeling is that this is something we should not do (mixing intranet and vendor systems on the same hardware with different configs).
    Is there a best practice document that outlines whether this is something that should be done or avoided? Also, if you have run into this and have an answer, I would appreciate the feedback.
    Thanks in advance for the assistance,
    Todd

    Hi Todd,
    Technically there isn't a reason you couldn't run both portals on the same hardware, assuming it is sized properly. You could even use the same portal if you wanted to.
    The thing I would be concerned with is security. I assume you have more stringent security requirements for external-facing applications than for internal applications, like the need for additional firewalls and reverse proxies. Usually if you pursue the security requirements, you will find the need for separate portal hardware.
    Hope this helps
    John

  • Best Practice for closing a blocking connection to the database.

    Background:
    We use TopLink/EclipseLink to manage the connection to the database.
    We support 3 databases, Oracle, Mysql, and MSSQL.
    We host the application ourselves (SaaS) and we also distribute it to our clients so they can run it on premise.
    Problem:
    From time to time the application will execute a route that queries the DB and takes 10+ minutes. When we find these queries we generally fix them, but it takes time to fix the problem and roll new code to production. We have also found that if an end user executes the query too many times, they can potentially bring the service down - maxing out the database connections, or putting a huge load on the DB, effectively slowing the system to a crawl.
    During this time period the IT team needs a way to abort a transaction. The current approach, for our SaaS solution, is to scan the database, find the long-running query, and kill it. The application is smart enough to recover and everything seems to work fine. This doesn't always work for our on-premise customers - often they don't have a DBA or adequate resources to monitor the system. Therefore, I'm looking for a way to have the application kill/close any query that lasts longer than X minutes. We currently track the duration of every thread, so I can get the time a thread has been actively running.
    Stack Trace
    Currently we keep track of how long each thread has been waiting for a response from the database, as in the stack trace below. I would like to grab the connection and physically close it. Since we are using a Java socket and it's in a blocking state, there is no way for us to interrupt the thread; the only options are to wait for a response from the DB, to kill the SQL process, or to close the Java socket.
    java.net.SocketInputStream.socketRead0(Native Method)
    java.net.SocketInputStream.read(SocketInputStream.java:129)
    com.mysql.jdbc.util.ReadAheadInputStream.fill(ReadAheadInputStream.java:113)
    com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:160)
    com.mysql.jdbc.util.ReadAheadInputStream.read(ReadAheadInputStream.java:188)
    com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:1931)
    com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2380)
    com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2909)
    com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1600)
    com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1695)
    com.mysql.jdbc.Connection.execSQL(Connection.java:2998)
    com.mysql.jdbc.Connection.execSQL(Connection.java:2927)
    com.mysql.jdbc.Statement.executeQuery(Statement.java:956)
    org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeSelect(DatabaseAccessor.java:854)
    org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:573)
    org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:501)
    org.eclipse.persistence.sessions.server.ServerSession.executeCall(ServerSession.java:536)
    org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:205)
    org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:191)
    org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeSelectCall(DatasourceCallQueryMechanism.java:262)
    org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeSelect(DatasourceCallQueryMechanism.java:244)
    org.eclipse.persistence.queries.DataReadQuery.executeNonCursor(DataReadQuery.java:188)
    org.eclipse.persistence.queries.DataReadQuery.executeDatabaseQuery(DataReadQuery.java:144)
    org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:664)
    org.eclipse.persistence.queries.DataReadQuery.execute(DataReadQuery.java:130)
    org.eclipse.persistence.internal.sessions.AbstractSession.internalExecuteQuery(AbstractSession.java:2243)
    org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1181)
    org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1165)
    org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1125)
    Question
    What is the best way to grab the connection and close it - from another thread?

    The reason for changing the namespace is to guarantee that they appear as different versions. The namespace is part of the strong name of the DLL when deployed. Yes, they will be deployed to separate servers, because a server can only belong to one farm. But when you are doing a migration it won't be clear that you need to install the 2013 version, since the .DLL will report that it already exists in the 2010 farm. It will work without changing the namespace, but it is best to differentiate versions at that level to avoid confusion.
    Paul Stork SharePoint Server MVP
    Principal Architect: Blue Chip Consulting Group
    Blog: http://dontpapanic.com/blog
    Twitter: Follow @pstork
    Please remember to mark your question as "answered" if this solves your problem.
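    On the original question: JDBC itself offers a sanctioned way to abort a call that is blocked on the socket - java.sql.Statement.cancel() may be called from another thread (and Statement.setQueryTimeout() does the same thing declaratively). A minimal watchdog sketch, assuming each worker can register its Statement before executing it (the registry class below is hypothetical, not EclipseLink API):
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    public class QueryWatchdog {
        private final Map<Statement, Long> running = new ConcurrentHashMap<Statement, Long>();
        private final ScheduledExecutorService timer =
                Executors.newSingleThreadScheduledExecutor();
        public QueryWatchdog(final long maxMillis) {
            // scan once per second for statements that have run too long
            timer.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    long now = System.currentTimeMillis();
                    for (Map.Entry<Statement, Long> e : running.entrySet()) {
                        if (now - e.getValue() > maxMillis) {
                            try {
                                e.getKey().cancel(); // thread-safe per the JDBC javadoc
                            } catch (SQLException ignored) {
                                // some drivers don't support cancel(); closing the
                                // connection is then the fallback of last resort
                            }
                            running.remove(e.getKey());
                        }
                    }
                }
            }, 1, 1, TimeUnit.SECONDS);
        }
        public void register(Statement stmt) { running.put(stmt, System.currentTimeMillis()); }
        public void unregister(Statement stmt) { running.remove(stmt); }
    }
    Whether cancel() actually interrupts the blocked read varies by driver, so it is worth testing against MySQL, Oracle, and MSSQL alike.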

  • Best practices for saving files when emailing to the media?

    I'm pretty much an Ai imposter. I have Illustrator cs4 (14.0.0) and the way I use it is to cobble my designs together from iStock vector files and a lot of trial and error. Occasionally my husband throws me a lifeline because he is skilled with PhotoShop. As a small business owner, I am dying to find the time to take a course but right now I am just trying to float around the forums and learn what I can. Basically I flail about near the computer and the design sloooowly and somewhat mysteriously comes together. It's getting faster and is kind of fun, unless I get stuck.
    This week I had a problem with an ad I designed for a very small community orchestra's printed programs. I sent them a PDF first, and when they viewed it on their screens in InDesign they saw the whole image, but when it printed it omitted an element (a sunburst in the background). They thought it had something to do with that element being in color rather than greyscale (though other elements that survived were the exact same color, so I was skeptical). I sent a greyscale file; no luck. I sent them the .ai file, but that apparently "crashed" their InDesign and now they believe I've sent a corrupted file. They aren't very Adobe-savvy, either.
    I've designed and emailed no fewer than 9 other ads to other print & online media organizations this year and never had a problem. The file looks fine to me in all versions I open/upload/email to myself.
    You can see it here if you like: http://www.scribd.com/doc/27776704/RCMA-for-CSO-Greyscale
    So here are my specific questions:
    1. How SHOULD I be saving this stuff? Is PDF the mark of a rookie?
    2. What settings should I be looking at when/before saving? I read about overprint, for example, and did try that with one of the versions I sent them, to no avail. I don't really know what that does, so I was just throwing a hail mary there anyway.
    Thanks for your time!

    PDF is the modern way of sending files, but what you might want to do in this case is select the art in question and go to Object > Flatten Transparency,
    then save it as a PDF or as an .ai file. When sending the file to them, zip it if they have Windows-based computers or StuffIt if they have Macs - actually, zip is good for both.
    It is safer to send it as an archive than as a bare .ai file.

  • Is there a best practice way of segregating biztalk permissions across the SDLC?

    I'm working on the SQL Server side of things, helping some people set up a biztalk project. There are some issues with figuring out the SQL permissions required by biztalk and how to set them up.
    I have read the guide here, which indicates which SQL permissions to assign to which AD groups. I assume these groups are created by the BizTalk install process (if that's not true, please let me know).
    I generally try to grant SQL permissions to AD groups rather than individual users, but it seems like all of the BizTalk environments (dev, UAT and production) would be using the same AD groups (e.g., "SSO Administrators"). So if I grant permission to the groups, then all BizTalk services would be able to access all SQL Servers. For example, someone could point the development BizTalk services at the production BizTalk SQL Server, and it would work, because on both SQL Servers the permission would be assigned to the same AD group.
    Is there any way to have BizTalk create, for example, the AD groups "SSO Administrators - DEV" and "SSO Administrators - UAT", so that I can prevent BizTalk from violating SDLC boundaries, or do I just have to accept this in BizTalk-land?

    Yes, our SQL services and BizTalk services are hosted on different machines, but that's not really relevant to this particular topic.
    Yep, the goal here is to assign the appropriate rights to the appropriate groups.
    You said "Separate AD windows group for Production and Test Environment is preferred" - I definitely agree. But this is the reason for the question. If the BizTalk install creates the AD groups (SSO Administrators, SSO Affiliate Administrators, BizTalk Administrators, BizTalk Host Users, etc.), then every BizTalk install (dev, UAT and prod) will use the same AD groups. If permissions in SQL are assigned to these groups, then it doesn't seem possible for different phases of the SDLC to be accessible only to specific BizTalk services. In other words, it doesn't seem like I can enforce the idea that the BizTalk dev services should only be able to access the BizTalk dev databases.

  • Select One Choice attribute' LoV based on two bind variables, best practice

    Hello there,
    I am in the process of learning ADF 11g, and I have the following requirement:
    A page must contain a list of school names, which needs to be fetched based on two parameters; the parameters are student information entered on the previous page.
    I have defined a read-only view, "SchoolNamesViewRO", whose query depends on two bind variables, :stdDegree and :stdCateg.
    I added that RO view as a view accessor to the entity to which the name attribute belongs, then added an LoV for the name attribute using the read-only view,
    and added the name attribute as a Select One Choice to page2.
    Now I need to pass the values for the bind variables of the read-only view. The information to be passed as the bind variables was entered on the previous page; I have the data available as binding attribute values in page2's definition.
    I have implemented the following two approaches, but both resulted in an empty list:
    * added an ExecuteWithParams action to the page bindings and then defined an Invoke Action (with a refresh condition) in the executables, setting the default values of the parameters to the attributes' input values;
    in the trace I can see that the binding fetches the correct values as expected, but the select list appears empty. Is this execution of the query actually connected to the list?
    * added a method to the read-only view's Impl Java class to set the bind variables, defined it as a MethodAction in the bindings, and then created an Invoke Action for it; the select list is also empty.
    If the query is executed with the passed variables, why is the list empty? Is it reading data from somewhere other than the page?
    And what is the best practice for implementing this requirement?
    Would the solution be to set the default values of the bind variables to some kind of expression?
    Please note that the query execution has the bind variables set to the correct values (I can see this in the trace).
    Could you give me some hints or redirect me to a useful link?
    Thanks in advance
    Regards,

    Please give me an example using a backing bean, for example:
    <?xml version='1.0' encoding='UTF-8'?>
    <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1"
    xmlns:f="http://java.sun.com/jsf/core"
    xmlns:h="http://java.sun.com/jsf/html"
    xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
    <jsp:directive.page contentType="text/html;charset=UTF-8"/>
    <f:view>
    <af:document id="d1">
    <af:form id="f1">
    <af:selectOneChoice label="Label 1" id="soc1" binding="#{Af.l1}"
    autoSubmit="true">
    <af:selectItem label="A" value="1" id="si1"/>
    <af:selectItem label="B" value="2" id="si2"/>
    </af:selectOneChoice>
    <af:selectOneChoice label="Label 2" id="soc2" disabled="#{Af.l1=='2'}"
    partialTriggers="soc1">
    <af:selectItem label="C" value="3" id="si3"/>
    <af:selectItem label="D" value="4" id="si4"/>
    </af:selectOneChoice>
    </af:form>
    </af:document>
    </f:view>
    </jsp:root>
    package a;
    import oracle.adf.view.rich.component.rich.input.RichSelectOneChoice;
    public class A {
        private RichSelectOneChoice l1;
        public A() {
        }
        public void setL1(RichSelectOneChoice l1) {
            this.l1 = l1;
        }
        public RichSelectOneChoice getL1() {
            return l1;
        }
    }
    Is there any mistake?

  • Oim11g - Best Practice for Logging

    Hello all,
    I want to know the best practice or common usage for error logging in OIM 11g. As we know, OIM runs a sequence of processes that ends up in Java code. For best practice, where and when should I create log output? Should I log inside each function called by an OIM adapter? Is it enough to print, at the beginning of the function, the parameter names and values; e.getMessage() inside catch (Exception e); and a line at the end of the function? Or is there a better implementation? Using sysout, or log4j, commons-logging, etc.?
    My idea (in each function):
    <function_name>::BEGIN
    - Time: <dd-MMM-yyyy>
    <function_name>::PARAMETER
    - <param_name1>: <param1_value>
    - <param_name2>: <param2_value>
    - <param_name3>: <param3_value>
    on error, inside the catch block:
    <function_name>::ERROR
    - Message: <e.getMessage()>
    <function_name>::END
    is it good?

    This is what I've been doing with 11g logging. In every custom code class I run, I use this to declare my logger:
    private final static Logger LOGGER = Logger.getLogger(<Class_Name>.class.getName().toUpperCase()); // Replace <Class_Name> with the actual class name
    This lets me go to Enterprise Manager and change the logging level once the class has been used.
    You can then use the following code:
    LOGGER.log(Level.INFO, "Insert Information Message Here");
    LOGGER.log(Level.CONFIG, "Insert More Detailed Debugger Information Message Here");
    LOGGER.log(Level.WARNING, "Insert Error Message Information Here", e); // e is the exception that was caught
    Personally, I like to put a start and an end output in my logging, and for any details in the middle I use the CONFIG level. This lets me know the pieces are running successfully, and I only need to see the details during testing or when something goes wrong. When deployed to production, I set the logger to WARNING level to only hear about problems.
    By using these, you can set your logger appropriately in Enterprise Manager to output more detail when needed.
    -Kevin
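    For reference, putting those pieces together in one adapter-style method might look like this (class, method, and parameter names are illustrative only):
    import java.util.logging.Level;
    import java.util.logging.Logger;
    public class ProvisionUser {
        private static final Logger LOGGER =
                Logger.getLogger(ProvisionUser.class.getName().toUpperCase());
        public String provision(String userLogin, String resource) {
            LOGGER.log(Level.INFO, "provision::BEGIN");
            // parameter details only show up when the level is CONFIG or finer
            LOGGER.log(Level.CONFIG, "userLogin={0}, resource={1}",
                    new Object[] { userLogin, resource });
            try {
                // ... adapter logic goes here ...
                return "SUCCESS";
            } catch (Exception e) {
                LOGGER.log(Level.WARNING, "provision failed", e);
                return "ERROR";
            } finally {
                LOGGER.log(Level.INFO, "provision::END");
            }
        }
    }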

  • Trade offs for spreading oraganizatons across suffixes - best practices?

    Hey everyone, I am trying to figure out some best practices here. I've looked through the docs but have not found anything that quite touches on this.
    In the past, here is how I created my directory (basically using dsconf create-suffix for each branch I needed):
    dsconf list-suffixes
    dc=example,dc=com
    ou=People,dc=example,dc=com
    ou=Groups,dc=example,dc=com
    o=Services,dc=example,dc=com
    ou=Groups,o=Services,dc=example,dc=com
    ou=People,o=Services,dc=example,dc=com
    o=listserv,dc=example,dc=com
    ou=lists,o=listserv,dc=example,dc=com
    A few years later, having learned more and having set up replication, it seems I may have made my life a bit more complicated than it should be. It seems I would need many more replication agreements to get every branch of the tree replicated. It also means that different parts of the directory are stored in different backend database files.
    It seems like I should have something like this:
    dsconf list-suffixes
    dc=example,dc=com
    Instead of creating all the branches as suffixes or sub-suffixes, maybe I should have just created organization and organizationalUnit entries within a single suffix, "dc=example,dc=com". That way I can replicate all the data by replicating just one suffix. Is there a downside to having one backend db file containing all the data instead of spreading it across multiple files (we're talking possibly 90K entries across the entire directory)?
    Can anyone confirm the logic here or provide any insight?
    Thanks much in Advance,
    Deejam

    Well, there are a couple of dimensions to this question. The first is simply whether your DIT ought to have more or less depth. This is an old design debate that goes back to problems with changing DNs in X500 style DITs with lots of organizational information embedded in the DN. Nowadays DITs tend to be flatter even though there are more tools for renaming entries. You still can't rename entries across backends, though. The second dimension is, given a DIT, how should you distribute the containers in your DIT across the backend databases.
    As you have already determined, the principal design consideration for your backend configuration will be replication, though scalability and backup configuration might also come into it. From what you have posted, though, it does not look like you have that much data. So yes, you should configure database backends and associated suffixes with sufficient granularity to support your replication requirements. So, if a particular suffix needs to be replicated differently than another suffix, they need to be defined as distinct suffixes/backends. Usually we define the minimal number of suffixes and backends needed to satisfy the topological requirements, though I can imagine there might be cases where suffixes might be more fine grained.
    For large, extensible Directory topologies, I usually look for data that's sensibly divisible into "building blocks". So for instance you might have a top-level suffix "dc=example,dc=com" with a bunch of global ACIs, system users and groups that are going to need to be everywhere. Then you might have a large chunk of external customer data, and a small amount of internal employee data. I would consider putting the external users in a distinct suffix from the employees, because the two types of entries are likely to be quite different. If I have a need to build a public Directory somewhere, all I have to do is configure the external suffix and replicate it. The basic question I would be asking there is if I might ever need to expose a subset of the Directory, will the data already be partitioned for me or will I have to do data reorganization.
    In your case, it does not look likely you will need to chop up your data much, so it's probably simpler to stay monolithic and use only one backend.
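    For illustration, with the single-suffix layout the old branches become ordinary entries inside dc=example,dc=com rather than suffixes of their own; in LDIF (values taken from the example DIT above):

    dn: o=Services,dc=example,dc=com
    objectClass: organization
    o: Services

    dn: ou=People,o=Services,dc=example,dc=com
    objectClass: organizationalUnit
    ou: People

    Everything then lives in one backend, and a single replication agreement per consumer covers the whole tree.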

  • Best practice for a same query against 2 different tables

    Hello all,
    I want to extract info about tablespace storage, both permanent and temporary. For that I use 2 different cursors that run exactly the same query, but against a different table (dba_data_files and dba_temp_files).
    CURSOR permanentTBSStorageInfo (tablespaceName VARCHAR2) IS
    SELECT file_name, bytes, autoextensible, maxbytes, increment_by
    FROM dba_data_files
    WHERE tablespace_name = tablespaceName;
    CURSOR temporaryTBSStorageInfo (tablespaceName VARCHAR2) IS
    SELECT file_name, bytes, autoextensible, maxbytes, increment_by
    FROM dba_temp_files
    WHERE tablespace_name = tablespaceName;
    First, I'm bothered that I have to use 2 cursors to execute the same query against 2 different tables. Is there no other way around this?
    Then I fetch the results of these cursors in 2 different loops, because I didn't find a way to call the cursors dynamically. I am looking for the best practice here, knowing that I will do the same parsing on the results of the 2 cursors.
    Thank you,

    Hi
    Check whether the query below is helpful:
    SELECT fs.tablespace_name "Tablespace",
           fs.tempspace       "Temp MB",
           df.totalspace      "Total MB"
    FROM   (SELECT tablespace_name,
                   ROUND(SUM(bytes) / 1048576) totalspace
            FROM   dba_data_files
            GROUP  BY tablespace_name) df,
           (SELECT tablespace_name,
                   ROUND(SUM(bytes) / 1048576) tempspace
            FROM   dba_temp_files
            GROUP  BY tablespace_name) fs
    WHERE  df.tablespace_name = fs.tablespace_name;
    Thanks
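    On the original question of avoiding two cursors: since both views expose the same columns, one common approach is a single cursor over a UNION ALL (a sketch based on the cursors above):
    CURSOR tbsStorageInfo (tablespaceName VARCHAR2) IS
    SELECT 'PERMANENT' AS file_type,
           file_name, bytes, autoextensible, maxbytes, increment_by
    FROM   dba_data_files
    WHERE  tablespace_name = tablespaceName
    UNION ALL
    SELECT 'TEMPORARY',
           file_name, bytes, autoextensible, maxbytes, increment_by
    FROM   dba_temp_files
    WHERE  tablespace_name = tablespaceName;
    A single fetch loop can then do the shared parsing, using file_type where the distinction matters.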

  • Best Practices for SRM Installation !!

    Hi
    Can someone share the best practices for an SRM installation?
    What is the typical timeframe to install SRM on a development server, and on the production server?
    Appreciate the responses.
    Thanks,
    Arvind

    Hi
    I don't know whether this will help you.
    See these links as well:
    http://help.sap.com/bp_epv170/EP_US/HTML/Portals_intro.htm
    http://help.sap.com/bp_scmv150/index.htm
    http://help.sap.com/bp_biv170/index.htm
    http://help.sap.com/bp_crmv250/CRM_DE/index.htm
    Hope this will help.
    Please reward suitable points.
    Regards
    - Atul

  • [XI 3.1] BEST PRACTICE method of Oracle connection for RPTs on Linux

    Business Objects XI (3.1) - SP3.
    Running on Red Hat Enterprise Linux OS.
    7,000+ Crystal Reports 2008 *.rpt objects ONLY (No Universe / No WebI).
    All reports connecting to Oracle 10g databases.
    ==================
    In the past, all of this infrastructure was running on Windows Server OS and providing the database access via a Named ODBC connection (eg. "APP_DATA".)
    This made it easy to manage as all the Report Developers had a standard System DSN called "APP_DATA" which was the same as the System DSN name on all of our DEV, TEST/UAT, and PROD servers for Business Objects.
    When we wanted to move/promote a *.rpt file from DEV to PROD, we did not have to change any "Database Connection" info, as it was all taken care of by pointing the System DSN called "APP_DATA" at a different physical Oracle server at the ODBC level.
    Now, that hardware is moving from Windows OS to Red Hat Linux and we are trying to determine the Best Practices (and Pros/Cons) of using one of the three methods below to access the Oracle database for our *.rpts....
    1.) Oracle Native connection
    2.) ODBC connection
    3.) JDBC connection
    Here's what we have determined so far -
    1a.) Oracle Native connection should be the most efficient method of passing SQL-query to the DB with the fewest issues and best speed [PRO]
    1b.) Oracle Native connection may not be supported on Linux - http://www.forumtopics.com/busobj/viewtopic.php?t=118770&view=previous&sid=9cca754b468fc67888ab2553c0fbe448 [CON]
    1c.) Using Oracle Native would require special-handling on the *.rpts at either the source-file or the CMC level to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    2a.) A 3rd-Party Linux ODBC option may be available from EasySoft - http://www.easysoft.com/products/data_access/odbc_oracle_driver/index.html - which would allow us to use a similar Developer / Admin overhead to what we are used to. [PRO]
    2b.) Adding a 3rd-party vendor into the mix may lead to support issues if we have problems with the results or speed of our queries. [CON]
    3a.) JDBC appears to be the "de facto standard" when running Oracle SQL queries from Linux. [PRO]
    3b.) There may be issues with results or speeds of our queries when using JDBC. [CON]
    3c.) Using JDBC requires the explicit-IP of the Oracle server to be defined for each connection. This would require special-handling on the *.rpts at either the source-file (and NOT the CMC level) to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    ==================
    We would appreciate some advice from anyone who has been down this road before.
    What were your Best Practices?
    What can you add to the Pros and Cons listed above?
    How do we find the "sweet spot" between quality/performance/speed of reports and easy-overhead for the Admins and Developers?
    As always, thanks in advance for your comments.

    Hi,
    I just saw this article and I would like to add some info.
    First, you can quite easily reproduce the same way of working as with the ODBC entries by playing with the Oracle name resolution on the server. By changing some files (sqlnet.ora, tnsnames.ora, ...) you can define a different Oracle server behind a specific name that stays the same across all environments.
    The database name will then be resolved differently depending on the environment, and will therefore reach a different database.
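    For illustration, the alias the reports use stays fixed while each environment's tnsnames.ora points it at its own server (host and service names below are placeholders):
    APP_DATA =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dev-db.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = APPDATA))
      )
    With a different HOST on the DEV, TEST/UAT, and PROD servers, promoting an *.rpt needs no connection change - the same behaviour you had with the "APP_DATA" System DSN.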
    The second option is to change the connection in the .rpt files in an automated way, for example with the Schedule Manager. This tool is an additional web application to deploy that can change the connection settings of thousands of rpt reports in a few clicks. You can find it here:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80af7965-8bdf-2b10-fa94-bb21833f3db8
    The last option is to do it with a small SDK script; for this purpose, a few lines of code can change all the reports in a row.
    After some implementations on Linux against Oracle databases, I would also prefer the native connection. ODBC and JDBC are deprecated ways of connecting to the database. You can use DataDirect connectors, which are quite good, but at volume you will see the difference.

  • External System Authentication Credentials Best practice

    We are in the process of an 5.0 upgrade.
    We are using NTLM as our authentication source to get the users and the groups and to authenticate against the source. So currently we only have the NT userid and group info (the NT domain password is not stored).
    We need to get user credentials for other systems/applications so that we can pass them to the specific applications when we search/crawl or integrate with those apps/systems.
    We were thinking of getting the credentials (app userid and password) for the other applications by developing a custom Profile Web Service to gather the information specific to these users. However, we don't know whether the external application password is secure when retrieved from the external repository via a PWS and stored in the portal database.
    Is this the best approach to take to gather the above information? If not, please recommend the best practice to follow.
    Alternatively, we could have the users enter the external system credentials by editing their user profiles. However, this approach is not preferred.
    If we can't store the user credentials for the external apps, we won't be able to enhance the user experience when doing a search or click-through to the other applications.
    Any insight would be appreciated.
    Thanks.
    Vanita

    Hi Vanita,
    So your solution sounds fine - however, it might be easier to use an SSO token or the Plumtree user ID in your external applications as a definitive authentication token.
    For example, if you have some external application that requires a username and password, then in a portlet view of that application, the application should be able to take the userid Plumtree sends it and authenticate that it is the correct user. You should limit this sort of password bypass to traffic being gatewayed by the portal (i.e. coming from the portal server only).
    If you want to write a Profile Web Service, the data that gets stored in the Plumtree database is exactly what the Profile Web Service sends it as the value for a particular attribute. For example, if your PWS tells Plumtree that the APP1UserName and APP1Password for user My Domain\Akash are Akash and password, then that is what we save. If your PWS encrypts the password using some 2-way encryption beforehand, then that is what we save. These properties are simply attached to the user, and can be sent to different portlets.
    Hope this helps,
    -aki-
