Data Migration and Consolidation Best Practices?

Hi guys
Do you know what the best practice is for data migration to FCSvr? We're trying to consolidate all media on various FireWire/internal drives onto a centralised RAID directly attached to a dedicated server. We've found that dragging and dropping an FCP project file uploads its associated media. The problem is that if several versions or separate projects link to the same media, the associated media is re-uploaded every time, so it ends up duplicated several times over. The issue appears to be that FCSvr creates a subfolder for every uploaded project file, containing all of that project's media.
This behaviour is not consistent with what happens when caching assets, checking out a project file, making changes and checking it back in: in that workflow, FCSvr is quite happy for a project file to link to media at the root level of the media device.
We are of course running the latest version of everything. Hope you can help as we’re pulling our hair out here!
Regards
Gavin

Hi,
Do you really need an ETL tool for these loading processes? Have you considered doing it in SQL/PL/SQL? If application performance is one of the main priorities, I would definitely consider doing it in SQL/PL/SQL.
Because of the huge amount of data, and because your source and target systems are Oracle DBs, I wouldn't recommend using Informatica.
Also, because source and target are Oracle DBs and the load should be near real time, you should have a look at Oracle Streams.
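For what it's worth, a set-based SQL load is often the simplest form of this. A minimal sketch, assuming a database link to the source system; all table, column and link names here are purely illustrative:

    -- Hypothetical direct-path load from the source over a database link.
    INSERT /*+ APPEND */ INTO target_orders (order_id, customer_id, amount)
    SELECT order_id, customer_id, amount
    FROM   source_orders@source_db
    WHERE  load_date >= TRUNC(SYSDATE);
    COMMIT;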
Regards
Maurice

Similar Messages

  • Low-Cost data migration and ETL tools

    Hi, I'm building a database for my company. We are a rather small book company with a lot of references, and still growing.
    We have a MySQL database here and are trying to find some good tools to use it at its best. Basically we are just starting up the database after dealing with Excel: we had a size problem… So I'm trying to find programs that will let us do two different things: migrate our data from the old system to the new one, and perform ETL (Extract, Transform and Load) on our database with a specialized tool.
    About the price of the tools: a year ago the accounting department would have been pretty relaxed about this, but today we have budget restrictions and therefore need a low-cost tool. So could you give me some advice on a good low-cost data migration and ETL tool?
    Thanks for your help.

    Hi,
    Some companies have budget problems and are having to reduce their cost of operation.
    Have you ever heard of open source tools? They can do the job easily and at far lower cost than proprietary solutions.
    You can have a look at a good open source ETL program called Talend Open Studio: it is user-friendly but also has advanced features intended for technical users (Java debugger, code injection…). It can perform both data migration and ETL, as you described in your first post.
    You can download the open source program [here|http://www.talend.com/solutions-data-integration/data-migration.php]. They have a forum and documentation you can read. Tell us what you think about the software.
    For an ETL benchmark: [ETL Benchmark|http://blogs.sun.com/aja/entry/talend_s_new_data_processing]
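    For a one-off Excel-to-MySQL migration, plain SQL can also go a long way before you commit to a tool. A minimal sketch, assuming the spreadsheet is exported to CSV first; the file, table and column names are hypothetical:

      -- Load a CSV exported from Excel into an existing MySQL table.
      LOAD DATA LOCAL INFILE 'books.csv'
      INTO TABLE books
      FIELDS TERMINATED BY ',' ENCLOSED BY '"'
      IGNORE 1 LINES
      (isbn, title, author, price);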

  • HTML and CSS Best Practices for Eloqua?

    Hello Topliners Community,
    My name is Ben and I am a Web Designer. I am currently looking for any guidance on HTML and CSS best practices when working with Eloqua. I am interested in the best practices for e-mail and landing pages.
    Thank you,
    Ben

    Personally I like to upload my custom-created HTML/CSS into Eloqua instead of using the WYSIWYG editor.
    But if you must use it, then right-clicking on text boxes and clicking Edit Source is the way to go.
    There was a good discussion on editing your forms with CSS:
    Energize Your Eloqua10 Forms with CSS
    CSS can be used to heavily customize the layout of forms in Eloqua10. In this article we will cover some common formatting use cases for forms on Eloqua10 landing pages. Further details about uses of CSS in Eloqua10 form templates can be found here: EE12 - Do It - Eloqua - Energize E10 Forms
    Eloqua10 Forms HTML Structure
    Below is an outline of the structure of the HTML generated by Eloqua when a form is added to a landing page.  By targeting the HTML classes highlighted below, we can control the layout of any form on your landing page.
    For the rest of the article: http://topliners.eloqua.com/docs/DOC-3015

  • Data Migrator and LSMW

    Hi,
    I want to know where I can find material on Data Migrator and LSMW. How do they work? Are there any examples?
    Thanks,
    have a nice day

    Hi Balan,
    Check this link for a few tips on the same: http://www.sap-img.com/sap-data-migration.htm
    Also: http://www.sap-img.com/basis/setting-up-lsmw-on-mini-basis.htm
    Also check this step-by-step example (Doc on LSMW): http://www.sap-img.com/basis/setting-up-lsmw-on-mini-basis.htm
    regards
    satesh

  • Data Migration and Reconcile

    Hi All,
    What elements need to be considered for data migration, and how do we reconcile the historic data?
    Regards
    Raghavender K

    Hi
    For project foundation you will need to migrate the following -
    projects and tasks
    For project costing you will need to migrate the following -
    Project expenditures and CDLs. Usually companies migrate historic costs in a summarized way: consider a single expenditure for the total cost per project, task, expenditure organization and type. You may also consider summarizing by period, and maybe by employee. The right level of summarization depends on the reporting needs for the historic data in the new system; see the sketch at the end of this reply.
    For project billing you will need to migrate -
    Agreements and funding allocations to projects.
    Revenue and invoice amounts. Here too you may choose between loading the accumulated amounts or keeping each invoice / revenue as a separate transaction.
    If you have budgets and forecasts, those should be converted as well.
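    A minimal sketch of the summarization described above, with hypothetical legacy table and column names:

      -- One summarized cost row per project / task / expenditure org / type.
      SELECT project_id, task_id, expenditure_org, expenditure_type,
             SUM(raw_cost) AS total_raw_cost
      FROM   legacy_expenditures
      GROUP  BY project_id, task_id, expenditure_org, expenditure_type;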
    Dina

  • Table Owners and Users Best Practice for Data Marts

    2 Questions:
    (1) We are developing multiple data marts that share the same instance. We want to deny access to the users while tables are being updated. We have one generic user (BI_USER) with read access through one of the popular BI tools. For the current (first) data mart we denied access by revoking the privilege from BI_USER; however, going forward the other data marts' tables will be updated on different schedules, and we do not want to deny access to all the data marts at once. What is the best approach?
    (2) What is the best methodology for ownership of tables that are shared across data marts? Can we create one generic ETL_USER to update tables with different owners?
    Thanx,
    Jim Masterson

    If you have to go with generic logins, I would at least have separate generic logins for each data mart.
    Ideally, data loads should be transactional (or nearly transactional), so you don't have to revoke access ever. One of the easier tricks to accomplish this is to load data into a shadow table and then rename the existing table and the shadow table. If you can move the data from the shadow table to the real table in a single transaction, though, that's even better from an availability standpoint.
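    A minimal sketch of the shadow-table trick, with illustrative table names:

      -- Option 1: swap via rename; very fast, but a brief DDL lock.
      ALTER TABLE sales RENAME TO sales_old;
      ALTER TABLE sales_shadow RENAME TO sales;

      -- Option 2: refresh in one transaction; readers never see an empty table.
      DELETE FROM sales;
      INSERT INTO sales SELECT * FROM sales_shadow;
      COMMIT;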
    If you do have to revoke table access, you would generally want to revoke SELECT access to the particular object from a role while the object is being modified. If this role is then assigned to all the Oracle user accounts, everyone will be prevented from viewing the table. Of course, in this scenario, you would have to teach your users that "table not found" means that the table is being refreshed, which is why the zero downtime approach makes sense.
    You can have generic users that have UPDATE access on a large variety of tables. I would suggest, though, that you give users individual logins to the database and use roles to grant whatever ad-hoc privileges they need. I would then create one account per data mart (with perhaps one additional account for the truly generic tables) that owns that data mart's objects. Those owner accounts would grant database privileges to different roles, and you would then grant those roles to different users. That way, Sue in accounting can have SELECT access to portions of one data mart and UPDATE access to another without being granted every privilege under the sun. My hunch is that most users should not be logging in to, let alone modifying, all the data marts, so their privileges should reflect that.
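    A rough sketch of that layout; all names are made up for illustration:

      -- One owner account per mart; roles carry the privileges.
      CREATE USER sales_mart_owner IDENTIFIED BY changeme;
      CREATE ROLE sales_mart_read;
      CREATE ROLE sales_mart_load;
      GRANT SELECT ON sales_mart_owner.fact_orders TO sales_mart_read;
      GRANT INSERT, UPDATE ON sales_mart_owner.fact_orders TO sales_mart_load;

      -- Individual logins get only the roles they need.
      GRANT sales_mart_read TO sue;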
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Informatica and Essbase Best Practice questions

    We now have the Informatica adapter for Essbase installed and working. We have been able to get Informatica to upload data successfully. Now I have a few questions that I have not been able to find answers to in any documentation or forums for Informatica or Essbase. I have submitted these same questions to the Informatica Support but thought I would also post the questions here to see if many folks are using Informatica against Essbase.
    We are using:
    Informatica 8.6.1 (Linux)
    Essbase 11.1.1.3 (Windows 2003)
    1) I can see in Informatica that when we load data to Essbase (target) it gives me the option to run a calc script AFTER it loads the data. However, if I need to run a calc script BEFORE the load to Essbase (target), what is the best practice? The workaround I have found is to add the same session twice and, for the first instance, select the option to 'ONLY RUN THE CALC SCRIPT' on the mapping tab. The problem is that the log shows it still runs the query against the source tables. This will hurt run times and double the querying against the source database. What is the best practice and proper way to build the workflow to run a calc script BEFORE the load?
    2) Since you do not see the list of calc scripts for Essbase in Informatica (you have to type the calc name manually), what is the syntax to run the 'Default' calc script? I tried 'Default' but it didn't seem to work.
    3) I have other tasks in Essbase I want to do before actually having Informatica load the data. I would like to run the MAXL commands via a Command task. What is the best practice for doing this, and what is the syntax to run MAXL commands in a Command task in Informatica? I previously had shell scripts on the Informatica server that were kicked off from within Informatica, but we are trying to move away from shell scripts and instead keep the scripting inside the workflows/sessions, to make it easier to review the code and follow the logic rather than having to find and open each script.
    I would GREATLY appreciate any assistance with getting the two products working together!
    Robert

    As I know, addUser(User user) { ... } is much more useful, for several reasons:
    1. It's object-oriented.
    2. It's easy to write, because if an object has many parameters it is very painful to write a method with comma-separated parameters.

  • Import data from Excel file - best practice in CQ?

    Hi,
    I have a question about importing data from an Excel file and creating a table from that data on a CQ page. Is there an OOTB component in CQ that provides this kind of functionality? Has somebody implemented something like this, or is there a best practice for it?
    Thanks in advance for any answer,
    Regards
    kasq

    You can check a working example package [1] (use your Adobe ID to log in).
    After installing it, go to [2] for an immediate example.
    Unfortunately it only supports the old OLE-2 Excel format (.xls, not .xlsx).
    [1] - http://dev.day.com/content/packageshare/packages/public/day/cq540/demo/xlstable.html
    [2] - http://localhost:4502/cf#/content/geometrixx/en/company/news/pressreleases/my_personal_bests.html

  • Users And Security Best Practice

    Dear Experts
    I am designing an application with almost fifty users scattered across different places. Each user should access tables according to his/her criteria. For example, salessam and salesjug can see only the sales-related tables; purchasedon should access only purchase-related tables. I have the following questions:
    Is it best practice to create 50 users in the DB, i.e. 50 schemas? Where are these users normally created?
    Or is it better to maintain a table of users and their passwords in my own design and regulate access through the front end? That seems risky and cumbersome.
    Please advise.
    thanks
    Manish Sawjiani

    "You would normally create a single schema to own the objects and 50 users to use them. You would use roles and object privileges to control access." Well, this is the classic 'Oracle' approach. I might say it depends a bit on what you want to achieve. Let's call this approach A.
    The other option would be to have your own user/pwd table. You could create your own custom authentication, but I would go for the built-in Application Express Users authentication scheme. You can manage the users via the frontend (Application Builder > Manage Application Express Users). There you can manage the groups and end users, which you can then leverage in your Apex app. You can even use the APIs to create the users programmatically. It is all done for you. Let's call this approach B.
    Some things to consider:
    1) You want to create a web application plus other applications that access the data stored in Oracle (PHP, Oracle Forms, Perl), or allow access via SQL*Plus. Then you should use approach A. This way you don't need to reimplement security for each of these access paths.
    2) You want to create one (or multiple) Apex applications only, and this will be the only mechanism by which the users access your data. Then I would go for approach B.
    3) When using approach A, some users didn't like that all users have access to their workspace, including the SQL command line, the capability of building applications, and possibly the ability to change data they can reach through the Oracle roles. Locking this capability down is possible, but it takes some effort and requires an Apache as a proxy.
    4) When using approach A, you will need DBA privileges to manage the users and assign the roles. This might not always be possible or desired; it depends on who will manage the Oracle XE instance.
    5) Moving the application including the end users to another machine is a bit easier using approach B since they are exported via the application export mechanism. Using approach A you would have to do it yourself. Be aware that the passwords are lost when you install the users into a different Oracle XE instance.
    6) If you design the application using approach B, you will have to design security in a way that doesn't rely on the Oracle roles/grants mechanisms. This makes it easier to change the authentication scheme later, for example to an LDAP directory, a different custom authentication scheme, or even SSO (not available out of the box, but feasible). Using approach A, you would instead have to recode the security mechanisms (which user is allowed to update/delete which data).
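    A quick sketch of approach A, with made-up names:

      -- One schema owns the objects; end users get access through a role.
      CREATE USER app_owner IDENTIFIED BY changeme;
      CREATE USER salessam  IDENTIFIED BY changeme;
      CREATE ROLE sales_role;
      GRANT SELECT, INSERT, UPDATE ON app_owner.sales_orders TO sales_role;
      GRANT sales_role TO salessam;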
    Hope that clarifies your options a bit.
    ~Dietmar.

  • Configuring AD Sites and Services best practice for multiple office sites?

    Hi People,
    Can anyone here please suggest, or share a link describing, the best practice for configuring AD Sites and Services for a single AD domain with multiple office sites?
    I'd like to know more about the number and direction of the connections between Domain Controllers in one site and the Data Center, and vice versa.
    Thanks.
    /* Server Support Specialist */

    This series can be useful:
    Active Directory Structure Guidelines – Part 1

  • BI BPC server strategy and best practice

    Hi  Everyone
    I would like to know couple of questions:
    1) Is there any white paper or documentation on the pros and cons of having BPC NW installed as an add-on to the BW system (planning and reporting on the same BW system) versus BPC as a separate system used primarily for planning and consolidation only?
    2) Is there a best practice document from SAP, including performance considerations, for BPC development?
    Any answers appreciated.
    Regards
    AK

    Hi AK,
    Both scenarios work well, but for the first scenario (BPC on top of existing BW reporting) you need to pay special attention to sizing. BPC requires additional capacity, so you need to plan for it.
    Also, if you have the SEM component on your BW system, you need to check SAP Note 1326576 – SAP NW system with SAP ERP software components.
    And before you install BPC, it is recommended to run a quick test of the existing BW reporting process once you have upgraded to NW EhP1 (a prerequisite for BPC).
    regards,
    Sreedhar

  • Export and Deployment - Best Practices for RAR and CUP

    Hi Experts,
    I wanted to know what, in your opinion, is the best practice for deploying GRC in a 3-system landscape.
    We have a development landscape which connects to all our environments: Dev, QA and Prod.
    Is it recommended to have just the production client connected to the production boxes only, and use Dev/QA for the other environments, or is it a good idea to keep Prod and QA in sync?
    In my opinion it looks like a good idea to have QA match PROD, as it would make export easier. Maybe I am wrong.
    What, according to you all, is a good recommended practice here?
    Thanks,
    Chinmaya

    Hi Chinmaya,
    It depends how many clusters you have in your landscape.
    If it is something like 5 DEV boxes connecting to 5 QAS boxes, and so on, then best practice will be to have separate DEV - QAS - PRD boxes for GRC, if money (h/w) is no constraint for the organization.
    Rather than later asking SAP for deletion scripts to delete sandbox or dev connectors, it is best to have separate boxes for each.
    Also, whenever you make rule changes in RAR and config changes in CUP in the future, it is best to test in QAS first, as CUP will become very critical for your organization post go-live.
    The good part is that the management report will then reflect true data for PRD only.
    regards,
    Surpreet

  • Sessions and Controllers best-practice in JSF2

    Hi,
    I've not done web development work since last using Apache Struts for its MVC framework (about 6 years ago now), so bear with me if my questions do not make sense:
    SESSIONS
    1) Reading through the JSF2 spec PDF, it mentions state-saving via the StateManager. I presume this is the same StateManager that is used to store managed beans that are @SessionScoped?
    2) In relation to session-scoped managed beans, when does a JSF implementation start a new session? That is, when does an implementation such as Mojarra call ExternalContext.getSession( true ), and when does it simply use an existing session (calling ExternalContext.getSession( false ))?
    3) In relation to session-scoped managed beans, when does a JSF implementation invalidate a session ? That is, when does the implementation call ExternalContext.invalidateSession() ?
    4) Does ExternalContext.getSession( true ) or ExternalContext.invalidateSession() even make sense if the state-saving mechanism is client ? ( javax.faces.STATE_SAVING_METHOD = client ) Will the JSF implementation ever call these methods if the state-saving mechanism is client ?
    CONTROLLERS
    Most of the JSF2 tutorials that I have been reading online use the same backing bean when performing an action on the form (when doing a POST or a GET or a post-back to the same page).
    Is this best practice? It looks like mixing what should be a simple POJO with additional logic that should really be in a separate class.
    What have others done ?

    gimbal2 wrote:
    > jmsjr wrote:
    > > EJP wrote:
    > > > It's better because it ensures the bean gets instantiated, stuck in the session (which gets instantiated itself), initialised, resource-injected, etc. Your way goes behind the scenes and hopes for the best, and raises complicated questions that don't really need answers.
    > > Thanks. 1) But if I only want to check that the bean is in the session, and I do NOT want to create an instance of the bean itself if it does not exist, then I presume I should still use ExternalApplication.getSessionMap.get(<beanName>).
    > I can't think of a single reason why you would ever need to do that. Checking if a property of a bean in the session is populated, however, is far more reasonable to me.
    In my case, there is an external application (e.g. a workflow system from a vendor) that will open a page in the JSF webapp.
    The user is already authenticated in the workflow system, and the external system from the vendor sends along the username and password and some parameters that define what the request is about ( e.g. whether to start a new case, or open an existing case ). There will be no login page in the JSF webapp as the authentication was already done externally by the workflow system.
    Basically, I was think of implementing a PhaseListener that would:
    1) Parse the request from the external system, and store the relevant username / password and other information into a bean which I store into the session.
    2) If the request parameter does not exist, then I go look for a bean in the session to see if the actual request came from within the JSF webapp itself ( e.g. if it was not triggered from the external workflow system ).
    3) If this bean does not exist at all (e.g. the request was triggered by something other than the external workflow system I was expecting), then I would prefer to skip the whole JSF lifecycle for the current request and immediately redirect to a different page (be it static HTML or another JSF page).
    4) If the bean exist, then proceed with the normal JSF lifecycle.
    I could also, between [1] and [2], do a quick check to verify that the username and password are indeed valid on the external system (they have a Java API for that), and if the credentials are not valid, I would likewise skip the JSF lifecycle for the current request and redirect to a different page.

  • What is the guideline and/or best practice for EMC setup on ASM?

    We are going to use an EMC CX4-480 for ASM storage on RAC. What are the guidelines and best practices for EMC setup on ASM?
    Thanks for the advice!
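    One commonly cited guideline: when the array already provides RAID protection, ASM disk groups are usually created with external redundancy. A minimal sketch, with hypothetical device paths:

      -- The array handles mirroring; ASM just stripes across the LUNs.
      CREATE DISKGROUP data EXTERNAL REDUNDANCY
        DISK '/dev/emcpowera1', '/dev/emcpowerb1';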

    Probably a poor choice of words. Sorry.
    So basically, I have gotten further, but I just noticed a related problem.
    I'm using the Web Services (WS) 1.0 API. I insert an account, then, in a separate WS call, I insert my contacts for the account. I include the AccountID and a user-defined key from the account when creating the contact.
    When I look at my Contact on the CRMOD web page, it shows the appropriate links back to the Account. But when I look at my Account on the CRMOD web page, it shows no Contacts.
    So when I say workflow or best practice, I was hoping for guidance on how to properly write my code to accomplish all of the necessary steps: how to insert an account with its contact(s) so that the appropriate IDs are updated and everything shows up properly on the CRMOD web pages.
    Based on the above, it looks like the next step is to take the ContactID and update the account with it so that there is a bi-directional link.
    I'm thinking there is a better way of doing this.
    Here is my pseudocode:
    AccountID = AccountInsert()            // create the account, keep the new id
    ContactID = ContactInsert(AccountID)   // create the contact, linked to the account
    AccountUpdate(ContactID)               // write the contact id back to the account
    Thanks,

  • Product and SWCV best practice

    We have a 3rd-party product that tends to change product versions frequently, as often as once every 3-5 months.
    As the SAP software logistics mechanism is based on the hierarchy
    Product -> Product Version -> SWCU -> SWCV,
    my question is:
    what is the best way to maintain this product versioning in the SLD and IR, to allow best-practice software logistics in XI and maintenance?
    Please share from your knowledge and experience only.
    Nimrod

    Structuring Integration Repository Content - Part 1: Software Component Versions
    Have a look at that weblog, and also search for the following parts of the same series. That should give you a good idea.
