GRC AACG/TCG and CCG control migration best practice.

Are there any best practice documents that illustrate the step-by-step migration of AACG/TCG and CCG controls from the development instance to production? Also, how should one take a backup of the same?
Thanks,
Arka

There are no automated out-of-the-box tools to migrate anything from CCG. In AACG/TCG you can export and import Access Models (which include the Entitlements) and Global Conditions. You will have to manually set up roles, users, path conditions, etc.
You can't clone AACG/TCG or CCG.
Regards,
Roger Drolet
OIC

Similar Messages

  • When I share a file to YouTube, where does the output file live? I also want to make a DVD. And is this a best practice, or is there a better way?

    I also want to make a DVD, but I can't see where the .mov files are.
    And is this a best practice, or is there a better way to do this, such as with a master file?
    thanks,
    /john

    I would export to a file saved on your drive as H.264, same frame size, then upload that to YouTube.
    I have never used FCP X to make a DVD, but I assume it will build the needed VOB/MPEG-2 source material for the disc.
    I used to use Toast & iDVD. Toast is great.
    To "see" the files created by FCP 10.1.1 for YouTube, right-click (Control-click) the Library icon in your Movies folder, choose Show Package Contents, and open "project"/share.

  • Migration Best Practice When Using an Auth Source

    Hi,
    I'm looking for some advice on migration best practices or more specifically, how to choose whether to import/export groups and users or to let the auth source do a sync to bring users and groups into each environment.
    One of our customers is using an LDAP auth source to synchronize users and groups. I'm trying to help them do a migration from a development environment to a test environment. I'd like to export/import security on each object as I migrate it, but does this mean I have to export/import the groups on each object's ACLs before I export/import each object? What about users? I'd like to leave users and groups out of the PTE files and just export/import the auth source and let it run in each environment. But I'm afraid the UUIDs for the newly created groups will be different and they won't match up with object ACLs any more, causing all the objects to lose their security settings.
    If anyone has done this before, any suggestions about best practices and gotchas when using the migration wizard in conjunction with an auth source would be much appreciated.
    Thanks,
    Chris Bucchere
    Bucchere Development Group
    [email protected]
    http://www.bucchere.com

    The best practice here would be to migrate only the auth source through the migration wizard, and then do an LDAP sync on the new system to pull in the users and groups. The migration wizard will then just "do the right thing" in matching up the users and groups on the ACLs of objects between the two systems.
    Users and groups are actually a special case during migration -- they are resolved first by UUID, but if that is not found, then a user with the same auth source UUID and unique auth name is also treated as a match. Since you are importing from the same LDAP auth source, the unique auth name for the user/group should be the same on both systems. The auth source's UUID will also match on the two systems, since you just migrated that over using the migration wizard.
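    To make that matching rule concrete, here is a minimal sketch (Java; the type and field names are hypothetical, not the actual portal API) of the resolution order described above:

        import java.util.Map;

        public class PrincipalResolver {
            // Hypothetical stand-in for an imported user or group.
            record Principal(String uuid, String authSourceUuid, String uniqueAuthName) {}

            static Principal resolve(Principal imported,
                                     Map<String, Principal> byUuid,
                                     Map<String, Principal> byAuthKey) {
                // 1) An exact UUID match wins.
                Principal match = byUuid.get(imported.uuid());
                if (match != null) {
                    return match;
                }
                // 2) Otherwise, the same auth source UUID plus the same unique
                //    auth name is also treated as a match.
                String key = imported.authSourceUuid() + "|" + imported.uniqueAuthName();
                return byAuthKey.get(key);
            }
        }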

  • Grid Control deployment best practices

    Looking for this document; interested to know more about Grid Control deployment best practices for monitoring and managing 300+ databases.

    Hi,
    Have a search for the following document:
    MAA_WP_10gR2_EnterpriseManagerBestPractices.pdf
    Regards,
    Alan

  • Grid Control and SOA suite monitoring best practice

    Hi there,
    I'm trying to monitor a SOA implementation with Grid Control.
    Are there any best practices for this?
    Thanks,     
    Nisti

    If they use it to access and monitor the database without making any other changes, then it should be fine. But if they start scheduling stuff like oradba mentioned above, that is where they will clash.
    You do not want a situation where different jobs are running on the same database from different setups by different teams (cron, DB Control, dbms_job, Grid Control).
    Just remember there will be additional resource usage on the database/server when both are running, and the Grid Control repository cannot be in the same database as the DB Console repository.
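    As a quick sanity check before layering another scheduler on top, a sketch like this (plain JDBC; the connection details are hypothetical, and the Oracle JDBC driver must be on the classpath) can list what is already scheduled inside the database:

        import java.sql.*;

        public class ListDbJobs {
            public static void main(String[] args) throws SQLException {
                // Hypothetical connection details; adjust for your environment.
                try (Connection con = DriverManager.getConnection(
                             "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "password");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT owner, job_name, enabled FROM dba_scheduler_jobs")) {
                    while (rs.next()) {
                        System.out.printf("%s.%s enabled=%s%n",
                                rs.getString(1), rs.getString(2), rs.getString(3));
                    }
                }
            }
        }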

  • Native Toplink to EclipseLink to JPA - Migration Best Practice

    I am currently looking at the future technical stack of our developments and would appreciate any advice concerning best practice migration paths.
    Our current platform is as follows:
    Oracle 10g AS -> Toplink 10g -> Spring 2.5.x
    We have (approx.) 100 separate Toplink Mapping Workbench projects (in effect, one per DDD Aggregate object) and therefore 100 Repositories (or DAOs), some using Toplink code (e.g. Expression Builder, Class Extractors, etc.) on top of the mappings to handle the Object-to-RDB (legacy) mismatch.
    Future platform is:
    Oracle 11g AS -> EclipseLink -> Spring 3.x
    Migration issues are as follows:
    Spring 3.x does not provide any Native Toplink ORM support
    Spring 2.5.x requires Toplink 10g to provide Native Toplink ORM support
    My current plan is as follows:
    1. Migrate Code and Mappings to use EclipseLink (as per Link:[http://wiki.eclipse.org/EclipseLink/Examples/MigratingFromOracleTopLink])
    2. Temporarily re-implement the Spring 2.5.x -> Toplink 10g support code to use EclipseLink (e.g. TopLinkDaoSupport, etc.) to enable testing of this step.
    3. Refactor all Repositories/DAOs and Support code to use JPA engine (i.e. Entity Manager etc.)
    4. Move to Spring 3.x
    5. Move to 11g (when available!)
    Step 2 is only required to enable testing of the mapping changes, without changing to use the JPA engine.
    Step 3 will only work if my understanding of the following statement is correct (i.e. I can use the JPA engine to run native Toplink mappings and associated code):
    Quote:"Deployment XML files from Oracle TopLink 10.1.3 and above can be read by EclipseLink."
    Specific questions:
    Is my understanding correct regarding the above?
    Is there any other path to achieve the goal of using 11g, EclipseLink (and Spring 3.x)?
    Is this achievable without refactoring all XML mappings from Native -> JPA?
    Many thanks for any assistance.
    Marc

    Yes, your understanding is correct: the native/MW TopLink/EclipseLink deployment XML files can be used with JPA in EclipseLink. You just need to pass a persistence property giving your sessions.xml file location. The native API is also still supported in EclipseLink.
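    For example, a minimal JPA bootstrap might look like this (assuming the standard eclipselink.sessions-xml property in a javax.persistence environment; the persistence unit name and sessions.xml path are placeholders, and depending on your configuration you may also need to name the session, so treat this as a sketch rather than a complete configuration):

        import java.util.HashMap;
        import java.util.Map;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class NativeSessionsBootstrap {
            public static void main(String[] args) {
                Map<String, String> props = new HashMap<>();
                // Point EclipseLink's JPA engine at the native deployment XML
                // instead of annotation/orm.xml mappings.
                props.put("eclipselink.sessions-xml", "META-INF/sessions.xml");
                EntityManagerFactory emf =
                        Persistence.createEntityManagerFactory("my-unit", props);
                // ... obtain EntityManagers and work as usual ...
                emf.close();
            }
        }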
    James : http://www.eclipselink.org

  • New white paper: Character Set Migration Best Practices

    This paper can be found on the Globalization Home Page at:
    http://technet.oracle.com/tech/globalization/pdf/mwp.pdf
    This paper outlines the best practices for database character set migration that have been used successfully on behalf of hundreds of customers. Following these methods will help you determine which strategies are best suited to your environment and will help minimize risk and downtime. The paper also highlights migration to Unicode; many customers today are finding Unicode essential to supporting their global businesses.
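    As a first step before any migration, it helps to confirm the current database character set. A quick sketch (JDBC; connection details hypothetical) that queries NLS_DATABASE_PARAMETERS:

        import java.sql.*;

        public class CheckCharset {
            public static void main(String[] args) throws SQLException {
                // Hypothetical connection details; adjust for your environment.
                try (Connection con = DriverManager.getConnection(
                             "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "password");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT value FROM nls_database_parameters "
                           + "WHERE parameter = 'NLS_CHARACTERSET'")) {
                    if (rs.next()) {
                        System.out.println("Database character set: " + rs.getString(1));
                    }
                }
            }
        }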

    Sorry about that. I posted that too soon. It should become available today (Monday Aug 22nd).
    Doug

  • Win xp to Win 7 migration -- best practice

    Hi , 
    What are the best practices to follow when we migrate from XP to Windows 7 using Configuration Manager?
    - e.g. computer name: should we rename it during migration or keep it as is?
    - USMT: what should be migrated?
    Pls. share any pointers/suggestions. Thanks in advance.
    Regards,

    First determine your needs... do you really need to capture the user data or not? Perhaps the users can back up their own data before the OS upgrade, so you don't need to worry about it. If you can make that kind of political decision, then you don't need USMT. The same goes for the computer name: you can keep the old names if you please. It's a political decision; you should use a unique name for each of your computers. Some prefer PC serial numbers, some prefer something else;
    it's really up to you to decide.
    Some technical pointers to consider:
    Clients should have the ConfigMgr client installed on them before the migration (so that they appear in the console and can be instructed to do things, like run a task sequence...).
    If the clients use static IP addresses, you need to configure your task sequence to capture those settings and reuse them during the upgrade process...

  • OVM Repository and VM Guest Backups - Best Practice?

    Hey all,
    Does anybody out there have any tips/best practices on backing up the OVM Repository as well as (of course) the VMs? We are using NFS exclusively and have the ability to take snapshots at the storage level.
    Some of the main points we'd like to do ( without using a backup agent within each VM ):
    backup/recovery of the entire VM Guest
    single file restore of a file within a VM Guest
    backup/recovery of the entire repository.
    The single file restore is probably the most difficult/manual part. The rest can be done manually from the .snapshot directories, but when we're talking about hundreds and hundreds of guests within OVM... this isn't overly appealing to me.
    OVM has this lovely manner of naming its underlying VM directories after some ambiguous number which has nothing to do with the name of the VM (I've been told this is changing in an upcoming release).
    Brent

    Please find below the response from the Oracle support on that.
    In short :
    - First, "manual" copies of files into the repository is not recommend nor supported.
    - Second we have to go back and forth through templates and http (or ftp) server.
    Note that when creating a template or creating a new VM from a template, we're tlaking about full copies. No "fast-clone" (snapshots) are involved.
    This is ridiculous.
    How to Back up a VM:1) Create a template from the OVM Manager console
    Note: Creating a template requires the VM to be stopped (this is required because the if the copy of the virtual disk is done with the running will corrupt data) and the process to create the template make changes to the vm.cfg
    2) Enable Storage Repository Back Ups using the step above:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-storage-repo-config.html#vmusg-repo-backup
    2) Mount the NFS export created above on another server
    3) Them create a compress file (tgz) using the the relevant files (cfg + img) from the Repository NFS mount:
    Here is an example of the template:
    $ tar tf OVM_EL5U2_X86_64_PVHVM_4GB.tgz
    OVM_EL5U2_X86_64_PVHVM_4GB/
    OVM_EL5U2_X86_64_PVHVM_4GB/vm.cfg
    OVM_EL5U2_X86_64_PVHVM_4GB/System.img
    OVM_EL5U2_X86_64_PVHVM_4GB/README
    How to restore a VM: 1) Upload the compressed file (tgz) to an HTTP, HTTPS, or FTP server.
    2) Import it into OVM Manager using the following instructions:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-repo.html#vmusg-repo-template-import
    3) Clone the Virtual machine from the template imported above using the following instructions:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-vm-clone.html#vmusg-vm-clone-image

  • UDDI and deployed Web Services Best Practice

    Which would be considered a best practice?
    1. To run the UDDI Registry in its own OC4J container, with Web Services deployed in another container
    2. To run the UDDI Registry in the same OC4J container as the deployed Web Services

    The reason you don't see your services in the drop-down is that CE does lazy initialization of EJB components (which gives you a faster startup time for the server itself), but your services are still available to you. You do not need to redeploy each time you start the server. One thing you could do is create a logical destination (in NWA) for each service and use the "search by logical destination" button. You should always see your logical names in that drop-down, and you can use them to invoke your services. Hope it helps.
    Rao

  • Data Migration Best Practice

    Is there a clear-cut best practice procedure for conducting data migration from one company to a new one?

    I don't think there is a clear cut for that. Best practice is always relative; it varies dramatically depending on many factors. There is no magic bullet here.
    One exception to the above: you should always use tab-delimited text format. It is the DTW-friendly format.
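    As an illustration, a minimal sketch of producing such a tab-delimited file (Java; the column names are hypothetical, shown only to illustrate the format: a header row plus data rows, fields separated by tabs):

        import java.io.IOException;
        import java.nio.file.*;
        import java.util.List;

        public class DtwExport {
            public static void main(String[] args) throws IOException {
                // Hypothetical columns; a header row followed by data rows,
                // with fields separated by tab characters.
                List<String> rows = List.of(
                        String.join("\t", "CardCode", "CardName", "CardType"),
                        String.join("\t", "C0001", "Acme Corp", "C"));
                Files.write(Paths.get("BusinessPartners.txt"), rows);
            }
        }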
    Thanks,
    Gordon

  • What is the Account and Contact workflow or best practice?

    I'm just learning to use the web services. I have written something to upload my customers into accounts using the web services. I now need to include a contact for each account. I'm trying to understand the workflow. It looks like I need to first call the web service to create the account, then call a separate web service to create the contact, including the account's ID with the contact so that they are linked. Is this correct?
    Is there a place I can go to find the "best practices" for work flows?
    Can I automatically create the contact within my call to create the account in the web service?
    Thanks,

    Probably a poor choice of words. Sorry.
    So basically, I have gotten further, but I just noticed a related problem.
    I'm using the Web Services (WS) 1.0. I insert an account, then, in a separate WS call, I insert my contacts for the account. I include the AccountID and a user-defined key from the Account when creating the Contact.
    When I look at my Contact on the CRMOD web page, it shows the appropriate links back to the Account. But when I look at my Account on the CRMOD web page, it shows no Contacts.
    So when I say workflow or best practice, I was hoping for guidance on how to properly write my code to accomplish all of the necessary steps, i.e. how to insert an account with its contact(s) and update the appropriate IDs so that everything shows up properly on the CRMOD web pages.
    Based on the above, it looks like the next step is to take the ContactID and update the Account with it so that there is a bi-directional link.
    I'm thinking there is a better way of doing this.
    Here is my pseudocode:
    NewAcctRec = AccountInsert()            // create the account and capture its ID
    NewContRec = ContactInsert(NewAcctRec)  // create the contact, linked to the account
    AccountUpdate(NewAcctRec, NewContRec)   // update the account so it references the contact
    Thanks,

  • Oracle SLA Metrics and System Level Metrics Best Practices

    I hope this is the right forum...
    Hey everyone,
    This is what I am looking for: we have several SLAs set up, and we have defined many business metrics that we are trying to map to system-level metrics. One key area for us is Oracle. I was wondering whether there is a best practice guide out there for SLAs when dealing with Oracle or, even better, for system-level metric best practices.
    Any help would be ideal please.

    Hi
    Can you also include the following in the FAQ?
    1) ODP.NET, if installed prior to this beta version: what is the best practice? De-install it before installing this, etc.?
    2) As multiple Oracle Homes have become the norm these days, this being a client-only install should probably be non-intrusive and non-invasive. Hope that is being addressed.
    3) Is this a precursor to future happenings, like some of the App Server evolving to support .NET natively and so on?
    4) Where is BPEL in this scheme of things? Is that being added too, so that Eclipse and .NET VS 2003 developers can use some common web service framework?
    Regards
    Sundar
    It was interesting to see the options for suggested spellings of "Webservice" [the first one was WEBSTER].

  • Bring CRM to clients and partners, Architecture DMZ best practices

    Hi, we need to provide access to CRM from the internet for clients and partners,
    so we need to know the best practices for the architecture design.
    We have many doubts about these aspects:
    - We will use SAP Portal, SAP Gateways, and Web Dispatchers with a DMZ:
           do you have examples of this kind of architecture?
    - The new users will be added in 3 steps: 1000, 10000, and 50000:
           how can we regulate the load on the internal system? Is that possible?
    - The system can't show any problems to the clients:
           we need a 24x7 system, because the clients are big clients.
    - At the moment we have 1000 internal users.
    thanks

    I use the Panel Close? filter event, discard it, and use the event to signal to my other loops/modules that my software should shut down. I normally do this either via user events or, if I'm using a queued state machine (which I generally do for each of my modules), by enqueuing a 'shutdown' message; each VI then closes its references (e.g. hardware/file) and stops its loop.
    If it's just a simple VI, I can sometimes be lazy and use local variables to tell a simple loop to exit.
    Finally, once all of the modules have finished, use the FP.Close method to close the top level VI and the application should leave memory (once everything else has finished running).
    This *seems* to be the most recommended way of doing things but I'm sure others will pipe up with other suggestions!
    The main thing is discarding the Panel Close? event and using it to signal the rest of your application to shut down. You can keep your global for 'stopping' the other loops (just write a True to it inside the Panel Close? event), but a better method is to use some sort of communication mechanism (queue/event) to tell the rest of your application to shut down.
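    The same queued-shutdown idea, sketched outside LabVIEW purely for illustration (a hypothetical Java analogue of one module's message loop):

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class ShutdownDemo {
            public static void main(String[] args) throws InterruptedException {
                BlockingQueue<String> commands = new LinkedBlockingQueue<>();
                Thread module = new Thread(() -> {
                    try {
                        while (true) {
                            String msg = commands.take();
                            if ("shutdown".equals(msg)) {
                                // Close hardware/file references here, then exit.
                                break;
                            }
                            // ... handle other messages ...
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                module.start();
                // This is what the Panel Close? event handler would enqueue.
                commands.put("shutdown");
                module.join();
            }
        }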
    Certified LabVIEW Architect, Certified TestStand Developer
    NI Days (and A&DF): 2010, 2011, 2013, 2014
    NI Week: 2012, 2014
    Knowledgeable in all things Giant Tetris and WebSockets

  • Populating users and groups - design considerations/best practice

    We are currently running a 4.5 Portal in production. We are doing requirements/design for the 5.0 upgrade.
    We currently have a stored procedure that assigns users to the appropriate groups based on the domain info and role info from an ERP database after they are imported and synched up by the authentication source.
    We need to migrate this functionality to the 5.0 portal. We are debating whether to provide it via a custom Profile Web Service. It was recommended during ADC and other presentations that we stay away from using the database security/membership tables directly and use the EDK/PRC instead.
    Please advise on the best way to approach this issue (with details). We need to finalize the approach ASAP.
    Thanks.
    Vanita

    So the best way to do this is to write a custom Authentication Web Service. Database customizations can do much more damage, and the EDK/PRC/API are designed to prevent inconsistencies and problems.
    Along those lines, they also make it really easy to rationalize data from multiple backend systems into the organization you'd like for your portal. For example, you could write a custom authentication source that connects to your NT domain and gets all the users and groups, then connects to your ERP system and does the same work your stored procedure would do. It can then present this information to the portal in the way the portal expects and let the portal maintain its own database and information store.
    Another solution is to write an External Operation that encapsulates the logic of your stored procedure but uses the PRC/Server API to manipulate users and group memberships. I suggest you use the PRC interface, since the Server API may change in subtle ways from release to release and is not as well documented.
    Either of these solutions would be easier in the long term to maintain than a database stored procedure.
    Hope this helps,
    -Akash
