Staging question

Hi
I have a target table in an Oracle schema/server with very limited access. When I created my interface, I selected a different schema as the staging area, one where I have enough privileges.
But ODI is still trying to create the work table in the target schema, and the process is failing.
When creating the logical schema connection for the target, I selected the same schema as my work schema, because I don't have access to any other schema on that server.
I thought that by choosing a different staging area, ODI would create the tables there. Any help is appreciated.
-app

ODI creates the temporary tables in the work schema specified in the topology.
So if you have a main schema (which holds the actual tables), create a work schema that has SELECT privilege on those tables and set it as your work schema in the topology. All temporary tables will then be created in that work schema.
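For example, a minimal sketch (with hypothetical schema names MAIN_SCHEMA and WORK_SCHEMA, and the USERS tablespace assumed) of the privileges that work schema typically needs:
-- let the work schema read the data it has to stage
GRANT SELECT ON MAIN_SCHEMA.TARGET_TABLE TO WORK_SCHEMA;
-- let it create ODI's temporary (C$/I$/E$) work tables
GRANT CREATE TABLE TO WORK_SCHEMA;
ALTER USER WORK_SCHEMA QUOTA UNLIMITED ON USERS;
In the topology, set WORK_SCHEMA as the work schema of the physical schema; ODI will then create its temporary tables there instead of in the restricted target schema.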
Thanks,
Sutirtha

Similar Messages

  • Questions on staging: how can we check that staging completed successfully?

    Hi
    I have below questions
    1. Can we set up control cycles in such a way that some materials are staged manually [indicator 4] and others with the pick option [indicator 1]?
    2. How do we ensure that staging completed successfully and all material reached the production bin? It is possible, for example, that a TO is not confirmed.
    Thanks gurus

    Understood, SAP doesn't provide any help in planning when staging indicator 4 is used.

  • MDT OEM pre-staging setup question/error

    See below error...
    Do you have any suggestions for where to start troubleshooting?
    I made a selection profile, set up the media, ran the OEM prestage task and dumped it to the drive. On boot, the machine can't find the deploy folder, even though it is there on the disk; I can see it in a command prompt.

    Dan:
    Yes this is what I am doing.
    The media I create is a 67GB ISO, which I burn to a USB drive with iso-to-usb.
    It has a selection profile with the driver for the machine it is targeted to, the OS, the apps and the TSs associated with the process.
    I update the media multiple times, at least every time I make a change during troubleshooting, and then dump the ISO to USB (or, for quick testing, I use a VM and mount it).
    Then I insert the USB drive or mount the ISO, boot from it, and run the OEM staging TS. It says success, click OK to shut the machine down, or it will shut down automatically in 20 minutes.
    I assumed this TS dumped the DS to the drive.
    I then disconnect the USB drive and go on to simulate the delivery of a new machine, and on first boot I get the screen above. Examining the drive, I see none of my deployment share on it.
    Am I not understanding the purpose of the OEM staging?
    If I put the USB drive in, it will connect to the share on the drive... but that is useless.
    I must be setting something up in the wrong manner.
    dave

  • Provisioning from HCM Staging Area to SAP Master question from a newbie

    Hi
    I have built a sandbox idM 7.2 PL3 (MS-SQL 2008/Windows 2003).
    I've had a few stumbles on the way and my latest issue is that I have extracted the HCM data from our ERP (EHP5) HCM instance via LDAP to the VDS.  The transfer completes successfully and the statistics on the HCM_Staging_Area indicate that the records have been added to this Identity Store.
    The event handling on the entity type MX_HCM_EMPLOYEE has the task 189/Write HCM Employee To SAP Master configured. This job (and its tasks) is enabled. I can see the records (cn = "<SID666SAP 0000001>") with an objectclass of MX_HCM_EMPLOYEE.
    However, nothing is logged in the System Log or Job Log to indicate that this task has been executed by the load. The SAP_Master is not updated.
    I'm using the idM for SAP Landscape Configuration Guide (October 2011)  for this installation.
    Am I missing something?  Is there a log file or trace that I can look at to try and figure out what is going on?
    Thanks
    Doug

    Thanks for the reply.
    This is my first attempt at setting up the idM so I've added what I've checked to answer your questions.  I would be glad to hear of any additional checks I could perform.
    Achim:
    - Dispatcher is started:  Yes, Windows Service is running (started).
    - Dispatcher is configured to run standard and provisioning jobs:  Yes, all jobs in Job tab of dispatcher are selected
    - are there any entries within the provisioning queue: I believe so. On the statistics tab of the HCM_Staging_Area Identity Store, the provisioning queue size shows 513 entries.
    - Jobs and tasks are enabled, assigned to a dispatcher and configured to run as provisioning jobs: Yes, the tasks are enabled and the schedule rule is set to provisioning for both tasks.
    Bernd:
    The exported entries are reaching the Identity Store.  I can query the data and the statistics indicate this as well.  The provisioning queue size increases as well.  I have set the SAP_MASTER_IDS_ID to the appropriate id for SAP_Master  (4) and HR_STAGING_AREA_IDS_ID to HCM_Staging_Area Identity Store (5).
    Thanks again.
    Doug

  • SQL stored procedure Staging.GroomDwStagingData stuck in infinite loop, consuming excessive CPU

    Hello
    I'm hoping that someone here might be able to help or point me in the right direction. Apologies for the long post.
    Just to set the scene, I am a SQL Server DBA and have very limited experience with System Centre so please go easy on me.
    At the company where I am currently working, they are complaining about very poor performance when running reports (any of them).
    A quick look at the database server showed CPU utilisation at a constant 90-95%, so you don't have to be Sherlock Holmes to realise there is a problem. The instance consuming the majority of the CPU is the one hosting the data warehouse, and in particular
    a stored procedure in the DWStagingAndConfig database called Staging.GroomDwStagingData.
    This stored procedure executes continually for 2 hours, performing 500,000,000 reads per execution before "timing out". It is then executed again for another 2 hours, and so on.
    After a bit of diagnosis, it seems that the issue is either a bug, or that there is something wrong with our data such that the stored procedure is stuck in an infinite loop.
    System Center 2012 SP1 CU2 (5.0.7804.1300)
    Diagnosis details
    SQL connection details
    program name = SC DAL--GroomingWriteModule
    set quoted_identifier on
    set arithabort off
    set numeric_roundabort off
    set ansi_warnings on
    set ansi_padding on
    set ansi_nulls on
    set concat_null_yields_null on
    set cursor_close_on_commit off
    set implicit_transactions off
    set language us_english
    set dateformat mdy
    set datefirst 7
    set transaction isolation level read committed
    Stored procedures executed
    1. dbo.p_GetDwStagingGroomingConfig (executes immediately)
    2. Staging.GroomDwStagingData (this is the procedure that executes for 2 hours before being cancelled)
    The 1st stored procedure seems to return a table with the "xml" / required parameters to execute Staging.GroomDwStagingData
    Sample xml below (cut right down)
    <Config>
      <Target>
        <ModuleName>TransformActivityDim</ModuleName>
        <WarehouseEntityName>ActivityDim</WarehouseEntityName>
        <RequiredWarehouseEntityName>MTV_System$WorkItem$Activity</RequiredWarehouseEntityName>
        <Watermark>2015-01-30T08:59:14.397</Watermark>
      </Target>
      <Target>
        <ModuleName>TransformActivityDim</ModuleName>
        <WarehouseEntityName>ActivityDim</WarehouseEntityName>
        <RequiredWarehouseEntityName>MTV_System$WorkItem$Activity</RequiredWarehouseEntityName>
        <ManagedTypeViewName>MTV_Microsoft$SystemCenter$Orchestrator$RunbookAutomationActivity</ManagedTypeViewName>
        <Watermark>2015-01-30T08:59:14.397</Watermark>
      </Target>
    </Config>
    If you look carefully you will see that the first <Target> is missing the ManagedTypeViewName, which, when "shredded" by Staging.GroomDwStagingData, returns the following result set.
    Example
    DECLARE @Config xml
    DECLARE @GroomingCriteria NVARCHAR(MAX)
    SET @GroomingCriteria = '<Config><Target><ModuleName>TransformActivityDim</ModuleName><WarehouseEntityName>ActivityDim</WarehouseEntityName><RequiredWarehouseEntityName>MTV_System$WorkItem$Activity</RequiredWarehouseEntityName><Watermark>2015-01-30T08:59:14.397</Watermark></Target><Target><ModuleName>TransformActivityDim</ModuleName><WarehouseEntityName>ActivityDim</WarehouseEntityName><RequiredWarehouseEntityName>MTV_System$WorkItem$Activity</RequiredWarehouseEntityName><ManagedTypeViewName>MTV_Microsoft$SystemCenter$Orchestrator$RunbookAutomationActivity</ManagedTypeViewName><Watermark>2015-01-30T08:59:14.397</Watermark></Target></Config>'
    SET @Config = CONVERT(xml, @GroomingCriteria)
    SELECT
    ModuleName = p.value(N'child::ModuleName[1]', N'nvarchar(255)')
    ,WarehouseEntityName = p.value(N'child::WarehouseEntityName[1]', N'nvarchar(255)')
    ,RequiredWarehouseEntityName =p.value(N'child::RequiredWarehouseEntityName[1]', N'nvarchar(255)')
    ,ManagedTypeViewName = p.value(N'child::ManagedTypeViewName[1]', N'nvarchar(255)')
    ,Watermark = p.value(N'child::Watermark[1]', N'datetime')
    FROM @Config.nodes(N'/Config/*') Elem(p)
    /* RESULTS - NOTE THE NULL VALUE FOR ManagedTypeViewName
    ModuleName            WarehouseEntityName  RequiredWarehouseEntityName   ManagedTypeViewName                                                  Watermark
    TransformActivityDim  ActivityDim          MTV_System$WorkItem$Activity  NULL                                                                 2015-01-30 08:59:14.397
    TransformActivityDim  ActivityDim          MTV_System$WorkItem$Activity  MTV_Microsoft$SystemCenter$Orchestrator$RunbookAutomationActivity    2015-01-30 08:59:14.397
    */
    When the procedure enters the loop to build its dynamic SQL to delete the relevant rows from the inbound schema tables, it concatenates various options / variables into an executable string. However, when you add a NULL value to a string, the entire string becomes
    NULL, which then gets executed.
    Whilst executing "EXEC(NULL)" would cause SQL to throw an error that could be caught, executing the following doesn't:
    DECLARE @null_string VARCHAR(100)
    SET @null_string = 'hello world ' + NULL
    EXEC(@null_string)
    SELECT @null_string
    So, as it hasn't caused an error, the next part of the procedure moves on to the next record, and this is why it is caught in an infinite loop:
    DELETE @items WHERE ManagedTypeViewName = @View
    The value of the variable @View is the ManagedTypeViewName, which is NULL. As ANSI_NULLS is set to ON in the connection and not overridden in the procedure, the above statement won't delete anything, because NULL values need to be handled differently (IS NULL),
    so we are now stuck in an infinite loop executing NULL for 2 hours until cancelled.
    I amended the stored procedure and added the following line before the loop statement, which had the desired effect and "fixed" the performance issue for the time being:
    DELETE @items WHERE ManagedTypeViewName IS NULL
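    An alternative guard (just a minimal sketch with hypothetical object and variable names, not the actual procedure code) would be to validate the dynamic string itself before executing it:
    DECLARE @DeleteSql NVARCHAR(MAX)
    -- concatenating a NULL ManagedTypeViewName collapses the whole string to NULL
    SET @DeleteSql = N'DELETE FROM inbound.SomeStagingTable WHERE ViewName = ''' + NULL + N''''
    IF @DeleteSql IS NULL
        RAISERROR (N'Grooming statement is NULL - missing ManagedTypeViewName?', 16, 1)
    ELSE
        EXEC (@DeleteSql)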
    I also noticed that the following line in dbo.p_GetDwStagingGroomingConfig is commented out (no idea why, as there are no notes in the procedure):
    --AND COALESCE(i.ManagedTypeViewName, j.RelationshipTypeViewName) IS NOT NULL
    There are obviously other ways to mitigate the dynamic SQL string being NULL - there's more than one way to skin a cat, and that's not why I am asking this question - but what I am concerned about is whether there is a reason that the XML / @GroomingCriteria is incomplete
    and/or that the procedures don't handle potential NULL values.
    I can't find any documentation, KBs or forum posts of anyone else having this issue, which somewhat surprises me.
    I would be grateful for any help / advice that anyone can provide, or if someone could look at these two stored procedures on a later version to see if this has already been fixed. Or is it simply that we have orphaned data? This is the bit that concerns me most, as I don't
    really want to be deleting / updating data when I have no idea what the knock-on effect might be.
    Many many thanks
    Andy

    The first thing I would do is upgrade to 2012 R2 UR5. If you are running non-US dates, you need the UR5 hotfix as well.
    Rob Ford scsmnz.net
    Cireson www.cireson.com

  • Why does it take longer to execute on Production than on Staging?

    Hi Experts,
    Any help is appreciated on the issue below.
    I have one anonymous block that updates around 1 million records by joining 9 tables.
    This is promoted to production through the following environments, and all environments have exactly the same volume of data:
    Development -> Testing -> Staging -> Production.
    The odd thing is that while it takes 5 minutes to execute in all the other environments, it takes 30 minutes on production.
    Why did this happen, and what can be the action points for the future?
    Thanks
    -J
    ==============
    If the performance is that different in the different environments, one or more statements must have different query plans in the different environments. The first step would be to get the query plans and compare them to figure out which statement(s) is/are running slowly.
    If there are different query plans, that implies that something is different between the environments. That could be any of
    - Oracle version
    - initialization parameters
    - data
    - object statistics
    - system statistics
    If you guarantee that the data is the same, I would tend to expect that the object statistics are different. How have you gathered statistics in the various environments? Can you move statistics from an environment where performance is acceptable to the environment where performance is unacceptable?
    I would also recommend following the advice others have given you. You don't want to commit in a loop and you want to do as much processing in SQL as possible.
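    For example, this is a minimal sketch (with a placeholder statement; substitute one of the actual statements from your anonymous block) of pulling the plan in each environment so that they can be compared:
    EXPLAIN PLAN FOR
      UPDATE target_table t
         SET t.some_column = 'X'
       WHERE t.some_key IN (SELECT s.some_key FROM source_table s);
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);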
    Justin
    ===============
    Thanks, Steve, for your inputs.
    My investigation resulted in the following 2 points.
    There are 2 main reasons why some scripts might take longer in live than on Staging:
    1: Weekend backups were running on the live server, slowing the server down a lot.
    2: The tables are re-orged when they are imported into staging/Dev, so the table and index layout is optimal; on live, the tables and indexes are not necessarily contiguous, so in order to do the same work the server will need to do many more I/O operations.
    Can we have some action points to address these issues?
    I think that if the data can be contiguous, it may help.
    Best Regards
    -J
    ===============
    But before that, can you raise this in a separate thread, as there is a different issue going on in this thread?
    Cheers
    Sarma.
    ===========
    Hey Sarma,
    Extreme apologies, but I don't know how to raise a new thread.
    Thanks in advance for your help.
    -J
    Hi User 527345,
    Please follow these steps to raise a request in this forum:
    1. Register yourself.
    2. Go to the forum home and select the technology area where you want to raise the request,
    e.g. if it is related to Oracle Database general topics, select Oracle Database General.
    3. Click on 'Post new thread'.
    4. Give a summary of your issue.
    5. Then submit the issue.
    please let me know if you need more information.
    Thank you

    Jayashree Mohanty wrote:
    My investigation resulted in the following 2 points.
    There are 2 main reasons why some scripts might take longer in live than on Staging:
    1: Weekend backups were running on the live server, slowing the server down a lot.
    2: The tables are re-orged when they are imported into staging/Dev, so the table and index layout is optimal; on live, the tables and indexes are not necessarily contiguous, so in order to do the same work the server will need to do many more I/O operations.
    Can we have some action points to address these issues?
    I think that if the data can be contiguous, it may help.
    First, I didn't understand at all what that was when you copied part of some other post (I don't know which) into your actual question. Please read this; it will help you post a proper question and get a proper answer:
    http://www.catb.org/~esr/faqs/smart-questions.html
    Now, how did you come to the conclusion that the backups are actually making your query slower? What is the benchmark that led you to this conclusion? And what is the meaning of the 2nd point - can you please explain it?
    As others have also mentioned, please post the plan of the query from both staging and production; only that can tell us what's going on.
    HTH
    Aman....

  • How to move data from a staging table to three entity tables #2

    Environment: SQL Server 2008 R2
    I have a few questions:
    How would I prevent duplicate records when/if the SSIS package is executed many times?
    How would I know that the whole huge volume of data has been loaded into the entity tables?
    In reference to "how to move data from a staging table to three entity tables", since I am loading a large volume of data while using a lookup transformation:
    which of the merge components is best suited?
    How do I configure the merge component correctly? (A screenshot is preferred.)
    Please refer to the following link
    http://social.msdn.microsoft.com/Forums/en-US/5f2128c8-3ddd-4455-9076-05fa1902a62a/how-to-move-data-from-a-staging-table-to-three-entity-tables?forum=sqlintegrationservices

    You can use a Row Count transformation in the path where you want to capture record details. Inside the Row Count transformation, pass an integer variable to capture the count value.
    The event handler can then be configured as below:
    inside an Execute SQL Task, add an INSERT statement to write the row count to your audit table.
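    For example, a minimal sketch with a hypothetical audit table (the two ? markers map to the package name and the row-count variable in the Execute SQL Task's parameter mapping):
    INSERT INTO dbo.LoadAudit (PackageName, RowsLoaded, LoadDate)
    VALUES (?, ?, GETDATE())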
    Can you also show me how to check against the destination table using key columns inside a lookup task and insert only non-matched records (No Match output)?
    This is explained clearly in the link below, which Arthur posted:
    http://www.sqlis.com/sqlis/post/Get-all-from-Table-A-that-isnt-in-Table-B.aspx
    For large data volumes I would prefer doing this in T-SQL. So what you could do is dump the data into a staging table and then apply
    a T-SQL MERGE between the tables (or even a combination of INSERT/UPDATE statements).
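    As a minimal sketch of that MERGE (hypothetical table and column names; use your own keys and columns):
    MERGE dbo.CustomerEntity AS tgt
    USING stg.Customer AS src
        ON tgt.CustomerKey = src.CustomerKey
    WHEN MATCHED THEN
        UPDATE SET tgt.CustomerName = src.CustomerName,
                   tgt.City         = src.City
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerKey, CustomerName, City)
        VALUES (src.CustomerKey, src.CustomerName, src.City);
    Because the MERGE matches on the key columns, re-running the load updates existing rows instead of inserting duplicates, which also covers the duplicate-prevention question above.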
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • Material staging indicator not populating in prod order WM pick list item

    Hello,
    I have an issue with material staging in a production order:
    1) PP-WM interface is activated
    2) Control cycle for material is created
    3) Production storage location is created for material
    4) storage type is 100 for production
    5) There is one discontinued material and also the follow up material
    6) Stock of the discontinued material is zero and requirements are passed to the follow-up material
    When we confirm the order, the staging indicator for both the follow-up material and the discontinued material is automatically populated with zero (not relevant to pick list items), whereas it should be one (1 - for pick list items).
    One more issue: the user has manually inserted the discontinued material as well as the follow-up material in production order change mode.
    In the BOM of the main material, both the discontinued and the follow-up material are present as components with some quantity.
    For the same work center, control cycle and production storage location the indicator is populating.
    These two materials (discontinued as well as follow-up) appear twice on the WM pick list screen; the first two line items are OK and populate indicator "1", but on the last and second-to-last line items the indicator is not there.
    My question is why the staging indicator is not automatically populated in the production order WM pick list screen in front of the components.

    Unfortunately, WM material staging via production orders is not possible
    from the pull list.  Please see the long text of message RMPU 311
    (WM material staging for production order reservation not possible):
    "You cannot carry out a WM material provision for pick parts from
    production order reservations in the pull list". The reasons for this
    are clearly explained in the SAP online documentation via the
    following path:
      Logistics -> Logistics Execution -> Warehouse Management Guide ->
      Goods Issue -> Goods Issue for Production Supply ->
      Material Staging for Repetitive Manufacturing
    See the following under the Selection heading :
    The choice of the selection type influences which types of WM material
    staging are supported in the pull list. However, the pick parts can be
    staged via RS headers/planned orders but not with the current BOM
    explosion. The release order parts, on the other hand, can also be
    staged if the current BOM is used for calculating the dependent
    requirements.
    WM material staging via production orders is not possible from the pull
    list.
    I think you may try CO02 or COR2 for the production order or process order.

  • Creation of staging table - quickest way.

    Hi,
    I need to create a staging table with roughly 370 fields. The source specs are not clear, and neither are some of the staging table's fields. So, effectively, first of all I need to understand the staging fields and the source fields. The end user has been given a target date based on some generic guesswork (not by me).
    I would like to know the best approach to complete the activity quicker - or at least to document all the staging fields and source fields. What I have now is one Excel file with four columns: Staging Field, Source Table and Field, Staging Field Type and Staging Field Length. Out of the total 370 staging fields, I have only completed 50%, and the target date is approaching fast - I have not touched the Oracle package development yet.
    This could be a generic question, but in order to meet the deadline, could you please throw some light on the quickest way/tips to complete the mapping documentation (or to create the staging table)?
    Thanks in advance,
    Manoj.

    MDixit wrote:
    Hi,
    I need to create a staging table with roughly 370 fields. The sources' specs are not clear and some of the staging table's fields, too. So effectively, first of all I need to understand the staging fields and the source fields. The end user has been given some target date with some generic guess-work (not by me).
    I would like to know the best approach to complete the activity quicker - at least, document all the staging fields and source fields. What I have now is one Excel file with four columns - Staging Field, Source table and field, Staging Field Type and Staging Field Length. Out of the total 370 staging fields, I have just completed 50% and the target date is approaching faster - I have not touched developing Oracle package yet.
    This could be a generic question, but in order to meet deadline, could you please throw some light on the quickest way/tips to complete the mapping documentation (or creating the staging table)?
    Thanks in advance,
    Manoj.
    The first thing that comes to mind is the unclear specifications. (Actually, the first thing that came to my mind was that tables have columns, not fields, but anyway...)
    You need to clarify what is coming from the source system before you will be able to intelligently map it to your target system. You may have to push back on your client and tell them that the deadline can't be met unless they provide more information about their source system.
    How have they chosen those 370 columns? How did they know these needed to be part of whatever process you are completing? If they know that these need to be moved to a target system, then they should be able to tell you why.

  • Best Practice Question

    I have 3 areas for my DWH.
    The first area is Staging, then Validation and Core.
    Staging is just to load data from the source systems.
    Validation is to validate the data (every city has to have a country, ...).
    Core is my DWH schema.
    The first step in the ETL is to load the data from Core to Validation; let's say my GEO_DIM dimension maps to Countries, Cities and Regions in Core. Additionally, I build a CRC sum when I download from Core to Validation and store the CRC checksum in a staging table.
    The second step is to load data from the source systems to Staging, but only those records whose CRC checksum differs from the previously stored one, so only changed or new data goes to Staging (see the sketch after these steps).
    The third step is to load that new/changed data from Staging to Core and check some dependencies. That is the validation part.
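    As a rough sketch of that delta check (hypothetical table and column names, with ORA_HASH standing in for whatever CRC function is actually used):
    INSERT INTO STG_CITIES (city_id, city_name, region_id, row_crc)
    SELECT s.city_id, s.city_name, s.region_id,
           ORA_HASH(s.city_name || '|' || s.region_id) AS row_crc
    FROM   SRC_CITIES s
    LEFT JOIN STG_CRC c ON c.city_id = s.city_id
    WHERE  c.row_crc IS NULL
       OR  c.row_crc <> ORA_HASH(s.city_name || '|' || s.region_id);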
    My question is: what is the best practice to bring the three tables (Countries, Cities and Regions) into one dimension?
    thanks and regards
    Andreas

    Andreas,
    I guess the correct answer is: it depends... Joking aside, are you planning to use a flat star table for this dimension? If that is the case, you would be joining the sources together and loading the result into the table, for example as sketched below.
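    A minimal sketch of that join (hypothetical column names):
    INSERT INTO GEO_DIM (city_id, city_name, region_name, country_name)
    SELECT ci.city_id,
           ci.city_name,
           r.region_name,
           co.country_name
    FROM   CITIES    ci
    JOIN   REGIONS   r  ON r.region_id  = ci.region_id
    JOIN   COUNTRIES co ON co.country_id = r.country_id;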
    Now, this sounds way too simple, so I guess there is something more to the question...
    Jean-Pierre

  • How to print staging areas and doors on the pick doc. ?

    Dear experts!
    Thank you for your attention!
    In our system, we use Lean WM.
    We know that staging areas and doors can also be printed on the pick doc. How do we do that?
    Best regard!
    Tangdark

    I'm not a WM expert, but I am pretty sure nothing special is required for that. If the fields where this information is stored are used by the output processing program, and the data is passed to the form and then printed by it, there should be no problem. If the data is there but not printed, then you might need to adjust the output program and/or the form.
    You might at least want to try it first and use Search whenever possible, then come back with more specific questions if necessary.

  • Staging area for Management Agent files...?

    Sorry for another question so quickly - I am reading the documentation first!
    Management agents upload files to the OMS server, and these are parsed and loaded as data into the MR database, right?
    Where is the staging area for these files (uploaded from all the management agents)? Is it possible to change it? From what I can gather it's
    https://oms_server:1159/em/upload - but where is this in the real world? I've searched the OMS $ORACLE_HOME but can't find any uploaded files.
    The reason for these questions is that we have a requirement to place these files in a shared area so that both our OMSes can access them - has anyone come across a configuration like this before? Does it work?
    Many thanks (again!)
    D

    The upload area on the OMS would normally be <OMS_HOME>/sysman/recv.
    You might still find no data files here, as they may be being uploaded to the OMS at that moment.
    Yes, making this directory a shared location will work when running multiple OMSes.
    Regards
    Rob

  • Staging Area with OWB 10.2 - necessary or not?

    Hi to all,
    I have read so much about staging areas and OWB 10.2 that I am totally confused: some documents and presentations on the web say you do not need one, others say you do. The thing is, I am planning a DWH and now I am not sure whether a staging area is necessary or not, because the mappings do the ETL jobs internally, so I am not sure about the staging area. Most of my data sources are tables/views/MViews in a database.
    Thank you very much for any help concerning this question!
    Regards
    Thomas

    Would you prefer the answer that you MAY need one? Then again, you may just WANT one!
    For example, if you are building against a high-transaction-volume, busy 24/7 OLTP system, then you may find that you need a local snapshot in order to do a complete build with a consistent set of source data, so that all your numbers are consistent.
    Then again, you may also find that bringing only delta data over into a local snapshot makes for a much more efficient load, rather than running against huge full remote tables if they are not well partitioned and/or indexed.
    Then again, complex joins against a remote system may run more efficiently if you bring the data across with simple table dumps into a staging area that you can index to optimize your queries, rather than having to deal with the poor performance of complex joins over a dblink, especially if you need to perform complex joins across more than one db link to multiple source systems. How big a Cartesian product do you want bouncing around the network in that sort of scenario? Sure, maybe you can do it - but how much are you going to impact performance across the board by doing things like that?
    Is the source system already stressed to the max and sitting on a vintage piece of equipment, while your shiny new DW environment is blessed with tons of resources that will make the ETL run faster by several factors if you first copy the data over locally?
    So, do you need a staging area?
    Fact is that there is no generic correct answer to this question.
    You have to look at the specifics of your data requirements and your environment to answer that question. There are costs and benefits to having a staging area, and you have to determine which way the cost/benefit analysis comes out for your specific project.
    Mike

  • Staging Area Indexing

    We are currently planning to build our staging area from the source database (an Oracle 11g database).
    We decided to do a 1-to-1 mapping for each table, but some tables contain huge amounts of data (millions of records), and for these tables we decided to create indexes to speed up the extraction with Data Services to our staging database. At the same time, I think this will affect the loading into the staging area.
    So my question is: are indexes recommended in the staging area?

    Hi Mohamed Anas,
    Usually, creating indexes on the columns will give better performance; the index does not take much space in memory, thereby resulting in faster SELECTs. I am not enough of an expert to say whether it is advisable in a staging area or not, but loading data into any target table that has indexed columns will affect the performance of the job.
    In general, while loading data into a target, we drop the index in a pre-load SQL and recreate it with a post-load SQL in the target properties, because that gives better performance.
    So, even if you have created indexes on the columns, it is better to drop them before loading into the target and recreate them afterwards; that will give you better performance.
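    As a minimal sketch of that pattern (hypothetical table and index names):
    -- pre-load SQL: drop the index so the bulk load is not slowed down
    DROP INDEX STG_ORDERS_CUST_IDX;
    -- ... the Data Services job loads STG_ORDERS here ...
    -- post-load SQL: recreate the index so extraction SELECTs stay fast
    CREATE INDEX STG_ORDERS_CUST_IDX ON STG_ORDERS (CUSTOMER_ID);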
    Hope this gives you some idea.
    Thanks,
    subbu CH.

  • Ridiculously simple audio editing question for Garage Band

    OK, so this is really a fundamental question. It concerns editing technique.
    In analog mixing, the editor put the sounds to be saved in the right places on each of several tracks: narration, sound effects, lip sync, music, etc. The tracks were then all locked together and the mixer adjusted the volumes to either save or reject audio information. A blank master received the information, and that was your final product, in sync, sprocket by sprocket, with the video.
    My question, then, is: how do you do it in digital multitrack such as Garage Band? Are you aiming to have 4 or 5 tracks all level-controlled, or do you simply erase unwanted sound, or what exactly do you do in post-production? If you are going to do a voice-over, then you have to do a mix at some point, not just a cut and paste. What's the philosophy, if you don't mind my asking?
    Thanks,
    Ed

    Wow... you gave me flashbacks to a movie soundtrack I scored, with separate reels of tape all synced up and whirling away for final mixdown!
    To answer your question as best I can in a paragraph or two, digital mixing is unlimited compared to analog. Whereas tape has a physical limit, you can always add another digital track (there are limitations to digital, but they are becoming harder to reach as systems become increasingly powerful). So no need to erase. Editing is also a breeze. Digital tracks are endlessly malleable, with virtually every parameter available for automation.
    Sometimes you might mix down to stems or mix everything down to a stereo master (or alternative mixes) for post production. Similarly, in post, they can deal with huge numbers of digital tracks from different sources.
    In terms of a professional workflow, Logic is the more typical choice over Garageband, and includes a number of features designed for scoring to picture: variable sample rates, frame rates, etc.
    Gain staging in digital recording is different than in analog, and it requires a different approach to achieve excellent sounding results. But the digital workflow is so much more flexible, forgiving, and economical, that it has been the death-knell for the analog workflow. Infinite bussing, routing, mixing, comping, storing, stemming... infinite possibilities, really!
