Migration Best Practice When Using an Auth Source

Hi,
I'm looking for some advice on migration best practices, specifically how to choose whether to import/export groups and users or to let the auth source sync bring users and groups into each environment.
One of our customers is using an LDAP auth source to synchronize users and groups. I'm trying to help them do a migration from a development environment to a test environment. I'd like to export/import security on each object as I migrate it, but does this mean I have to export/import the groups on each object's ACLs before I export/import each object? What about users? I'd like to leave users and groups out of the PTE files and just export/import the auth source and let it run in each environment. But I'm afraid the UUIDs for the newly created groups will be different and they won't match up with object ACLs any more, causing all the objects to lose their security settings.
If anyone has done this before, any suggestions about best practices and gotchas when using the migration wizard in conjunction with an auth source would be much appreciated.
Thanks,
Chris Bucchere
Bucchere Development Group
[email protected]
http://www.bucchere.com

The best practice here would be to migrate only the auth source through the migration wizard, and then do an LDAP sync on the new system to pull in the users and groups. The migration wizard will then just "do the right thing" in matching up the users and groups on the ACLs of objects between the two systems.
Users and groups are actually a special case during migration -- they are resolved first by UUID, but if that is not found, then a user with the same auth source UUID and unique auth name is also treated as a match. Since you are importing from the same LDAP auth source, the unique auth name for the user/group should be the same on both systems. The auth source's UUID will also match on the two systems, since you just migrated that over using the migration wizard.
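To make that fallback concrete, here is a rough sketch of the resolution order described above (Java; the types and method names are hypothetical, not the actual migration wizard API):

    // Hypothetical sketch of the import-time matching described above.
    interface Principal {
        String getUuid();
        String getAuthSourceUuid();
        String getUniqueAuthName();
    }

    interface Directory {
        Principal findByUuid(String uuid);
        Principal findByAuthSourceAndName(String authSourceUuid, String uniqueAuthName);
    }

    class PrincipalResolver {
        // 1) Match the imported user/group by its own UUID.
        // 2) Fall back to (auth source UUID + unique auth name), which line up on both
        //    systems because the auth source was migrated first and both systems sync
        //    from the same LDAP directory.
        static Principal resolve(Principal imported, Directory target) {
            Principal byUuid = target.findByUuid(imported.getUuid());
            if (byUuid != null) {
                return byUuid;
            }
            return target.findByAuthSourceAndName(
                    imported.getAuthSourceUuid(),
                    imported.getUniqueAuthName());
        }
    }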

Similar Messages

  • Best practice when using Tangosol with an app server

    Hi,
    I'm wondering what is the best practice when using Tangosol with an app server (Websphere 6.1 in this case). I've been able to set it up using the resource adapter, tried using distributed transactions and it appears to work as expected - I've also been able to see cache data from another app server instance.
    However, it appears that cache data vanishes after a while. I've not yet been able to put my finger on when, but garbage collection is a possibility I've come to suspect.
    Data in the cache survives the removal of the EJB, but somewhere further down the line it appears to vanish. I'm not aware of any expiry settings for the cache that would explain this (to the best of my understanding the default is "no expiry"), so GC came to mind. Would this be the explanation?
    If that would be the explanation, what would be a better way to keep the cache from being subject to GC - to have a "startup class" in the app server that holds on to the cache object, or would there be other ways? Currently the EJB calls getCacheAdapter, so I guess Bad Things may happen when the EJB is removed...
    Best regards,
    /Per

    Hi Gene,
    I found the configuration file embedded in coherence.jar. Am I supposed to replace it and re-package coherence.jar?
    If I put it elsewhere (in the "classpath") - is there a way I can be sure that it has been found by Coherence (like a message in the standard output stream)? My experience with Websphere is that "classpath" is a rather vague concept; we use the J2CA adapter, which most probably has a different class loader than the EAR that contains the EJB, and I would rather avoid doing a lot of trial-and-error corrections to a file just to find that it's not actually being used.
    Anyway, at this stage my tests are still focused on distributed transactions/2PC/commit/rollback/recovery, and we're nowhere near 10,000 objects. As a matter of fact, we haven't had more than 1024 objects in these app servers. In the typical scenario where I've seen objects "fade away", there have been only one or two objects in the test data. And they both disappear...
    Still confused,
    /Per
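    Two things may help here, sketched under the assumption that the standard Coherence API and system properties are available: point Coherence at your cache configuration explicitly rather than relying on the app server classpath, and keep the NamedCache reference in a long-lived startup object instead of only in the removable EJB.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    // Hedged sketch: obtain the cache once at startup and keep a static reference to it.
    // The configuration file can be named explicitly with the JVM argument
    //   -Dtangosol.coherence.cacheconfig=my-cache-config.xml   (file name is a placeholder)
    // so there is no guessing which copy on the classpath is used; Coherence typically
    // reports the configuration it loaded in its startup log output.
    public class CacheHolder {

        private static NamedCache cache;

        // Call from a startup class or context listener, not from the EJB itself.
        public static synchronized NamedCache getCache() {
            if (cache == null) {
                cache = CacheFactory.getCache("my-cache"); // "my-cache" is a placeholder name
            }
            return cache;
        }
    }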

  • Need advice for best practice when using Toplink with external transaction

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to external transactions so we can perform database operations and JMS operations within a single transaction.
    Some of our team tried out the TopLink support for external transactions and came up with the following initial recommendations.
    Since we are not familiar with using external transactions, I would like members of this forum, and experts, to comment on whether these recommendations are indeed valid and in line with best practice. For folks who have done this in their projects, what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the following changes (shown in blue in the original post), the findSomeObject method will now read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As it generally is with this sort of question, there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(). Other than that, the semantics of the calls and when you use a UnitOfWork are still dependent on the requirements of your application. For instance, I noticed that the original findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached: this is only required if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. In order to use it, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
    --Gordon

  • Best practice when using auto complete in view layer

    Hello
    I have a question regarding the best way to store/cache data when using an auto complete function in the view layer.
    I know that a lot of visitors will use this function and I don't want to kill the application server, so I need some advice.
    It's about 6000 words that should be searchable... my first thought was to create a Singleton bean that stores the current items and that I will iterate over, but that's a lot of "waste" done that way.
    I would be very glad if anyone could advise me on the best way to do this, and whether there is any de facto standard for auto completion in the "view layer".
    Thanks!
    Best Regards/D_S

    I don't know what your design is, but here are some ideas:
    To me, autocomplete means you have some user-specific data that the user entered previously, such as their home address, and some generic data that is not specific to any particular user. I would store all of that in a database, and for the user-specific data I would store the userID along with the data. Then, when populating a JSP page, I would call up just the data specific to that user plus the generic data from the database, and store it as an array of some type in JavaScript client-side. When the user clicks the autopopulate button, that button calls a JavaScript function that retrieves the data from the JavaScript array and populates the various textfields. All of this is done client-side, so the form does not have to be re-drawn. I question why you have 6000 items; normally autocomplete has at most a few dozen items. If you really need 6000 items, I suggest adding a textfield to the form to filter the data down to a manageable amount. For example, rather than getting all names from a telephone book, put a textfield on the form that allows the end user to enter a letter from a to z, such as 'b', and only fetch last names from the phone book that begin with 'b'.
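    If the word list does end up being served from the application server, here is a small sketch of the prefix-filtering idea above (Java; the class name is hypothetical): keep the roughly 6000 terms in a sorted set and return only the entries that start with what the user has typed, so the full list never has to be shipped or iterated per keystroke.

    import java.util.NavigableSet;
    import java.util.SortedSet;
    import java.util.TreeSet;

    // Hypothetical helper: load the searchable words once, then serve only prefix matches.
    public class AutoCompleteIndex {

        private final NavigableSet<String> words = new TreeSet<String>();

        public AutoCompleteIndex(Iterable<String> allWords) {
            for (String w : allWords) {
                words.add(w.toLowerCase());
            }
        }

        // Returns all stored words that start with the given prefix.
        public SortedSet<String> matches(String prefix) {
            String lower = prefix.toLowerCase();
            // Every string with this prefix sorts within [prefix, prefix + '\uffff').
            return words.subSet(lower, lower + Character.MAX_VALUE);
        }
    }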

  • Best practices when using OEM with Siebel

    Hello,
    I support numerous Oracle databases and have also taken on the task of supporting Enterprise Manager (Grid Control). Currently we have installed the agent (10.2.0.3) on our Oracle database servers, so most of our targets are hosts, databases and listeners. Our company is also using Siebel 7.8, which is supported by the Siebel Ops team. They are looking into purchasing the Siebel plugin for OEM. The question I have is: is there a general guide or best practice for managing the Siebel plugin? I understand that agents will be installed on each of the servers that have Siebel components, but what I have not seen documented is who is responsible for installing them. Does the DBA team need an account on the Siebel servers to do the install, or can the Siebel Ops team do the install and have permissions set on the agent so that it can communicate with Grid Control? Also, they will want access to Grid Control to see the performance of their components; how do we limit their access to only the Siebel targets, including what is available under the Siebel Services tab? Any help would be appreciated.
    Thanks.

    There is a Getting Started Guide, which explains the installation:
    http://download.oracle.com/docs/cd/B16240_01/doc/em.102/b32394/toc.htm
    -- I presume there are two teams in your organization: the DBA team, which is responsible for installing the agent and owns Grid Control, and the Siebel ops team, which is responsible for monitoring the Siebel deployment.
    The following is my opinion based on the above assumption:
    -- The DBA team installs the agent as a monitoring user.
    -- The Siebel ops team gives that user execute permission for the server manager (srvrmgr.exe) utility and read permission to all files under the Siebel installation directory.
    -- The DBA team provisions a new administrator for the Siebel ops team and restricts the permissions for this user.
    -- The Siebel ops team configures the Siebel pack in Grid Control (discovery, configuration, etc.).
    -- With the above setup, the Siebel ops team can view only the Siebel-specific targets.
    Thanks

  • Mobile App Best Practice When Using SQLite Database

    Hello,
    I have a mobile app that has several views.
    Each view calls a different method of a Database custom class that basically returns the array from a synchronous execute call.
    So, each view has a creationComplete handler in which I have something like this:
    var db:Database=new Database();
    var connectResponse:Object=db.connect('path-to-database');
    if (connectResponse.allOK) { // allOK is true if the connection was successful
        // Do stuff with data
    } else {
        // Present error notice
    }
    However, this seems redundant. Is it OK to do this once (connect to the database) in the main application file,
    and then do something like FlexGlobals.topLevelApplication.db?
    And generally speaking, can constants and other things that I need throughout the app be placed in the main app? As a best practice, that is, not technically, since technically it is possible.
    Thank you.

    No, I only connect once.
    I figured I wanted several views to use it, so I made it static and a singleton, since I only have one database.
    I actually use synchronous calls, but there is a sync-with-remote-MySQL-database function, hence the EventDispatcher.
    ... although I am thinking it might be better to use async calls, dispatch a custom event, and have the relevant views subscribe.
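    The app in question is Flex/ActionScript, but the static singleton arrangement described above looks roughly like this sketch (Java used for illustration, names hypothetical): one shared wrapper that opens the connection once and is reused by every view.

    // Hypothetical sketch of a single shared database wrapper.
    public final class SharedDatabase {

        private static SharedDatabase instance;

        private SharedDatabase(String path) {
            // open the underlying connection to the database file at 'path' once
        }

        // Lazily create the one shared instance on first use; every view calls this
        // instead of opening its own connection in creationComplete.
        public static synchronized SharedDatabase open(String path) {
            if (instance == null) {
                instance = new SharedDatabase(path);
            }
            return instance;
        }
    }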

  • Best Practices when using MVMC to improve speed of vmdk download

    I've converted a number of machines already from ESXi 4.1 to Hyper-V 2012 successfully and learnt pretty much all the gotchas and potential issues to avoid along the way, but I'm still stuck with extremely slow downloading of the source vmdk files to the host I'm using for the MVMC. This is not so much an issue for my smaller VMs, but it will be once I hit the monster-sized ones.
    To give you an idea, on a 1Gb network it took me 3 hours to download an 80GB VM. Monitoring the network card on the Hyper-V host I have MVMC running on shows that I'm at best getting 30-40Mb/s download, and there are large patches where that falls right down to around 20Kb/s before ramping back up to the Mb/s range again. There are no physical network issues that should be causing this as far as I can see.
    Is there some undocumented trick to get this working at an acceptable speed?
    Copying large files from a Windows guest VM on the ESX infrastructure to the Hyper-V host does not have this issue, and I get the full consistent bandwidth.

    It's VMware in general, that's why... Ever since I can remember (which was ESX 3.5), if you copy from the datastore using the web service, the speeds are terrible. Back in the 3.5 days the max speed was about 10Mbps; FastSCP came around and threaded it to make it fast.
    Backup software like Veeam goes faster only if you have a backup proxy that has access to all datastores running in VMware. It will then utilize the backend VMware pipe and VM network to move the machines, which is much faster.
    That being said, in theory, if you nested a Hyper-V server in a VMware VM just for conversions, it would be fast, provided the VM server has access to all the datastores.
    Oh, and if you look at MAT and MVMC, the reason it's fast is that NetApp does some SAN offloading to get around VMware and make the copy array-based. So then it's crazy fast.
    As a side note, that has always been one thing that pissed me off about VMware.

  • Best practices when using collections in LR 2

    How are you using collections and collection sets? I realize that each photographer uses them differently, but I'm looking for some inspiration because I have used them only very little.
    In LR 1 I used collections a bit like virtual folders, but it doesn't work as naturally anymore in LR 2.
    In LR 1 it was like:
    Travel (1000 images, all Berlin images)
    Travel / Berlin trip (400 images, all Berlin trip images)
    Travel / Berlin trip / web (200 images)
    Travel / Berlin trip / print (100 images)
    In LR 2 it could be done like this, but this somehow feels unnatural.
    Travel (Collection Set)
    Travel / Berlin trip (CS)
    Travel / Berlin trip / All (collection, 400 images)
    Travel / Berlin trip / web (collection, 200 images)
    Travel / Berlin trip / print (collection, 100 images)
    Or is this kind of use stupid, because the same could be done with keywords and smart collections, and that would be more transferable?
    Also, how heavily are you using collections? I'm kind of on the fence now (should I start using them heavily or not), because I just lost all my collections when I had to rebuild my library from scratch because of weird library/database problems.

    Basically, I suggest not using collections to replicate the physical folder structure, but rather to collect images, independent of physical storage, to serve a particular purpose. The folder structure is already available as a selection means.
    Collections are used to keep a user-defined selection of images as a persistent collection that can be easily accessed.
    Smart collections are based on criteria that are understood by the application and automatically kept up to date, again as a persistent collection for easy access. If this is based on a single simple criterion, it can also, perhaps even more easily, be done with keywords. If, however, it is a set of criteria with AND/OR combinations, or includes any other metadata field, the smart collection is the better way to do it. So keywords and collections in Lightroom are complementary to each other.
    I use (smart) collections extensively; check my website www.fromklicktokick.com where I have published a paper and an essay on the use of (smart) collections to add controls to the workflow.
    Jan R.

  • Best practice to use Tortoise SVN with LV

    Can anyone recommend the best practice for using and structuring a project with TSVN and LV? I have seen the JKI tool and have also read about some linkage issues when using TSVN with LV, as posted on the forum here. I suppose these linkage issues still exist? Other than Perforce, are there any suggestions for source control that integrates well with LV?
    TIA
    CLD,CTD

    We use Tortoise SVN with LV and it works very well. It's not integrated, in that I cannot check things in and out from within LV; I have to do that in Explorer. That's not a problem for me.
    SVN is a very good source and version control system regardless.
    One small issue with external handling is if you want to change an already used and active filename. In LV you can save to another filename and references will update, but of course SVN doesn't pick up on that automagically. There are two solutions to this:
    1. When you check in, you'll get one added and one deleted file; select both, right-click and choose "Repair move".
    2. After changing the filename in LV, change it back in Explorer, then right-click the file for an SVN rename and rename it to the new name.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • Best practice for using common VIs

    Hi,
    I have some projects which use some common VIs (like open/close file format and similar). What is the best practice for using these common VIs? Right now I copy them into the project folders, but that leaves me with multiple copies of some VIs, which is difficult to maintain if I need to modify something in these common files.
    What should I use? Keep copying, link the common folder into my project, use a packed project, or something else?
    Thanks

    I don't know if it is the best practice, but with every new project I create a new virtual folder in my project (right-click on My Computer -> New -> Virtual Folder) and link that virtual folder to the folder that contains all my common code (VI/CTL/class) by right-clicking the newly created virtual folder and choosing "Convert to auto-populating folder".
    One downside is that if you modify one of those VIs for your new application, it might break it for one of your older projects.
    I also use source code control (svn / git / mercurial / etc..) so if that happens, I can always go back to a previous working version.

  • Best practice for use of spatial operators

    Hi All,
    I'm trying to build a .NET toolkit to interact with Oracle's spatial operators. The most common use of this toolkit will be to find results which fall within a given geometry - for example, selecting parish boundaries within a county.
    Our boundary data is highly detailed, commonly containing upwards of 50,000 vertices for a county-sized polygon.
    I've currently been experimenting with queries such as:
    select
    from
    uk_ward a,
    uk_county b
    where
    UPPER(b.name) = 'DORSET COUNTY' and
    sdo_relate(a.geoloc, b.geoloc, 'mask=coveredby+inside') = 'TRUE';
    However the speed is unacceptable, especially as most of the implementations of the toolkit will be web based. The query above takes around a minute to return.
    Any comments or thoughts on the best practice for use of Oracle spatial in this way will be warmly welcomed. I'm looking for a solution which is as quick and efficient as possible.

    Thanks again for the reply... the query currently takes just under 90 seconds to return. Here are the results from the execution plan run in SQL*Plus:
    Elapsed: 00:01:24.81
    Execution Plan
    Plan hash value: 598052089
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 156 | 46956 | 76 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 156 | 46956 | 76 (0)| 00:00:01 |
    |* 2 | TABLE ACCESS FULL | UK_COUNTY | 2 | 262 | 5 (0)| 00:00:01 |
    | 3 | TABLE ACCESS BY INDEX ROWID| UK_WARD | 75 | 12750 | 76 (0)| 00:00:01 |
    |* 4 | DOMAIN INDEX | UK_WARD_SX | | | | |
    Predicate Information (identified by operation id):
    2 - filter(UPPER("B"."NAME")='DORSET COUNTY')
    4 - access("MDSYS"."SDO_INT2_RELATE"("A"."GEOLOC","B"."GEOLOC",'mask=coveredby+i
    nside')='TRUE')
    Statistics
    20431 recursive calls
    60 db block gets
    22432 consistent gets
    1156 physical reads
    0 redo size
    2998369 bytes sent via SQL*Net to client
    1158 bytes received via SQL*Net from client
    17 SQL*Net roundtrips to/from client
    452 sorts (memory)
    0 sorts (disk)
    125 rows processed
    The wards table has 7545 rows, the county table has 207.
    We are currently on release 10.2.0.3.
    All I want to do with this is generate results which fall in a particular geometry. Most of my testing has been successful; I just seem to run into issues when querying against a county-sized polygon - I guess due to the number of vertices.
    Also looking through the forums now for tuning topics...

  • Best practices on using EVALUATE functions

    Hi experts,
    I want to know the best practices for using EVALUATE functions in OBIEE (calling Oracle user-defined functions).
    I found that if I use EVALUATE functions in Answers,
    OBIEE will construct a SQL statement behind the scenes and then execute it.
    Sometimes OBIEE constructs some unexpected SQL and returns errors.
    So, is it better to use EVALUATE functions in logical columns?
    thanks

    EVALUATE('DB_Function(%1)' as returntype, {Comma separated Expression})
    Even when used in logical columns, it is going to fire the same SQL.

  • Best practice to use PXE on 802.1X network ?

    Hello,
    We use Cisco ISE 1.2.0.899 on our network (we plan to upgrade to 1.3 in a few months).
    Our network includes Cisco 2960S (and some 2960T) switches for wired access and 2602I APs (with WiSM2) for wireless.
    We have to allow PXE boot on one (or more) VLANs.
    Do you know the best practice for using PXE on an 802.1X network?
    Can ISE and/or the switch recognize PXE requests?
    Do we have to configure settings/rules in ISE or on the switch?
    Is the easiest way to allow PXE on the WebAuth VLAN?
    Regards,
    Chris

    I am in a similar position.
    We would prefer to keep all switch ports common, even those used for imaging from scratch.
    For PXE as far as I can see we need to allow the port to quickly fail 802.1X and MAB to a remediation VLAN.
    Using ISE we can apply an ACL that allows PXE bootp and dhcp requests and responses along with any other traffic we want in that network i.e. access to internet proxy server, anti-virus updates for posturing etc.
    I haven't configured this yet, so I'm not sure what issues we'll face with timing. We currently use an auth pattern of 802.1X first, then MAB, then fail open to the static VLAN. With ISE 1.3 this is supposedly the suggested method, instead of a hard "closed" mode.
     switchport access vlan XX
     switchport mode access
     network-policy VV
     ip access-group ACL-ALLOW in
     authentication event fail action next-method
     authentication event server dead action reinitialize vlan XX
     authentication event server dead action authorize voice
     authentication host-mode multi-domain
     authentication open
     authentication order dot1x mab
     authentication priority dot1x mab
     authentication port-control auto
     authentication periodic
     authentication violation restrict
     mab
     dot1x pae authenticator
     dot1x timeout tx-period 10

  • Best practice when modifying SAP Standard Development Component

    Hello Experts,
    What is best practice when modifying an SAP Standard Development Component (Java Web Dynpro)? I'm looking for the best method to modify a SAP standard DC so that my changes will be kept (or need low maintenance) after a new service package (or EHP) is applied.
    Thanks,
    Kevin

    Hi,
      'How to Use Business Packages in Enterprise Portal 6.0' is available at this link:
    http://help.sap.com/bp_epv260/EP_EN/documentation/How-to_Guides/misc/Using_Business_Packages.pdf
    Check out for the best practices.
    Regards,
    Harini S

  • Best practices for using the knowledge directory

    Anyone know when it is best to store docs in the Knowledge Directory versus Collab? They are both searchable, but I guess you can publish from the Publisher to the KD. Anyone have any best practices for using the KD or setting up taxonomies in the KD?

    Hi Richard,
    If you need to configure dynamic pricing that may vary by tenant and/or if you want to set up cost drivers that are service item attributes, you should configure Billing Tables in the Demand Management module in 10.0. 
    The cost detail functionality in 9.4 will likely be merged with the new pricing feature in 10.0. The current plan is not to bring cost detail into the Service Catalog module.
