Source tables in the RPD

Can we import source database tables directly into OBIEE instead of target warehouse tables?
That would mean skipping the ETL part entirely and reporting directly off the source tables. Is this possible?
This is one of the requirements I have.
Thanks,
Swetha

You can import any database table as a source into OBIEE with its own connection pool, so the answer is yes. However, this is not a replacement for the ETL process and the data warehouse model. You will not get any of the benefits of a dimensional DW model, which include performance and the ability to slice and dice by various dimensions and facts; it will be the same as reporting directly off an ERP system. So you can skip the process, but you will also skip all the advantages of the process.

Similar Messages

  • How to get Materialized View to ignore unused columns in source table

    When updating a column in a source table, records are generated in the corresponding materialized view log table. This happens even if the column being updated is not used in any MV that references the source table. That could be OK, so long as those updates are ignored. However, they are not ignored, so when the MV is fast refreshed, I find it can take over a minute, even though no changes are required or made. Is there some way of configuring the materialized view log so that the materialized view refresh ignores these updates?
    So for example if I have table TEST:
    CREATE TABLE test (
    d_id NUMBER(10) PRIMARY KEY,
    d_name VARCHAR2(100),
    d_desc VARCHAR2(256)
    );
    This has an MV log MLOG$_TEST:
    CREATE MATERIALIZED VIEW LOG ON test WITH ROWID, SEQUENCE, PRIMARY KEY;
    CREATE MATERIALIZED VIEW test_mv
    REFRESH FAST ON DEMAND
    AS
    SELECT d_id, d_name
    FROM test;
    -- insert 200,000 records, then:
    exec dbms_mview.refresh('TEST_MV','f');
    update test set d_desc = upper(d_desc);
    exec dbms_mview.refresh('TEST_MV','f'); -- This takes 37 seconds, yet no changes are required.
    Oracle 10g/11g

    I would love to hear a positive answer to this question - I have the exact same issue :-)
    In the "old" days (version 8 I think it was) populating the materialized view logs was done by Oracle auto-creating triggers on the base table. A "trick" could then make that trigger become "FOR UPDATE OF <used_column_list>". Now-a-days it has been internalized so such "triggers" are not visible and modifiable by us mere mortals.
    I have not found a way to explicitly tell Oracle "only populate the MV log for updates of these columns." I think the underlying reason is that the MV log could potentially be used for several different materialized views at possibly several different target databases. So, to be safe that the MV log can be used for any MV created in the future, Oracle always populates the MV log on any update (I think).
    One way around the problem is to migrate to STREAMS replication rather than materialized views - but it seems to me like swatting a fly with a bowling ball...
    One thing to be aware of: once the MV log has been "bloated" with a lot of unnecessary logging, you may see that all your FAST REFRESHes afterwards become slow - even after the one that checked all the 200,000 unnecessary updates. We have seen that Oracle can decide to full table scan the MV log when it does a fast refresh - which usually makes sense. But after a "bloat" has happened, the high water mark of the MV log is unnaturally high, which can make the full table scan slow by scanning a lot of empty blocks.
    We have a nightly job that checks each MV log if it is empty. If it is empty, it locks the MV log and locks the base table, checks for emptiness again, and truncates the MV log if it is still empty, before finally unlocking the tables. That way if an update during the day has happened to bloat the MV log, all the empty space in the MV log will be reclaimed at night.
    But I hope someone can answer both you and me with a better solution ;-)
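    For reference, the nightly cleanup described above could look roughly like the following PL/SQL (a sketch only, using the TEST / MLOG$_TEST names from the example; note that the TRUNCATE is DDL and issues an implicit commit, which releases the explicit locks just before it runs):
    DECLARE
      v_cnt NUMBER;
    BEGIN
      SELECT COUNT(*) INTO v_cnt FROM mlog$_test;
      IF v_cnt = 0 THEN
        -- lock the base table and the MV log, then re-check emptiness under the lock
        LOCK TABLE test, mlog$_test IN EXCLUSIVE MODE;
        SELECT COUNT(*) INTO v_cnt FROM mlog$_test;
        IF v_cnt = 0 THEN
          EXECUTE IMMEDIATE 'TRUNCATE TABLE mlog$_test';
        END IF;
      END IF;
      COMMIT; -- releases the locks if nothing was truncated
    END;
    /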

  • Data Federator- Merging two source tables

    Hi,
    I have two identically structured source tables and I need to merge them into a target table.
    For Example:-
    Source Table1=  { (A,10), (B,11), (C,12) }
    +
    Source Table2=  { (D,15), (E,16), (F,17) }
    Target Table=  { (A,10), (B,11), (C,12), (D,15), (E,16), (F,17)}
    Can you please help on how to achieve this in Data Federator.
    Thanks in advance.

    For Merge - create a target table with your desired end columns A - E. Create two separate mappings and map the respective columns from each source.
    For Join - create a target table with your desired end columns A - E. Create one mapping, map the respective columns from each source, and also create the necessary relationships between the tables.
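    Conceptually, the Merge option gives the same result as a SQL UNION ALL over the two sources (a sketch only, with hypothetical column names key_col and val_col):
    SELECT key_col, val_col FROM source_table1
    UNION ALL
    SELECT key_col, val_col FROM source_table2;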

  • How to delete the source table rows once loaded in Destination Table in SSIS?

    Database = kssdata
    Table = Userdetails, having 1000 rows
    Using SSIS:
    A Data Flow Task with: OLE DB Source -----------------> OLE DB Destination
    I am taking 200 rows from the source table and loading them into the destination table at a time.
    Constraint: once those 200 rows have been exported to the destination table, the same 200 rows must be deleted from the source table.
    Then I take the next 200 rows from the source table and load them into the destination table, and the task repeats until all the records from the source table have been loaded into the destination table.

    Provided you have a sequential primary key or an audit timestamp (datetime/date) column in the table, you can use an approach like this:
    1. Add an Execute SQL Task connecting to the source DB with the statement below, and store the result in a variable:
    SELECT COUNT(*) FROM table
    2. Have another variable and set it to the expression below, with EvaluateAsExpression set to true. Here CountVariable is the variable created in the previous step:
    (@[User::CountVariable] / 200) + (@[User::CountVariable] % 200 > 0 ? 1 : 0)
    3. Have a For Loop container with the settings below:
    InitExpression
    @NewVariable = @CounterVariable
    EvalExpression
    @NewVariable > 0
    AssignExpression
    @NewVariable = @NewVariable - 1
    4. Inside the loop, add a Data Flow Task with an OLE DB Source and an OLE DB Destination.
    5. Use a source query like the one below, ordering by the PK or the audit column, whichever is sequential:
    SELECT TOP 200 columns...
    FROM Table
    ORDER BY [PK | AuditColumn]
    6. After the Data Flow Task, add an Execute SQL Task with the statement below; this makes sure the same 200 records get deleted each time:
    DELETE t
    FROM (SELECT ROW_NUMBER() OVER (ORDER BY PK) AS Rn
          FROM Table) t
    WHERE Rn <= 200
    Visakh
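    As an aside, if the source and destination tables live on the same SQL Server instance, a purely set-based alternative is a batched DELETE with an OUTPUT ... INTO clause (a sketch only; it assumes identical schemas, no identity column, and that dbo.Destination has no triggers or foreign keys, since an OUTPUT INTO target must not have them):
    WHILE 1 = 1
    BEGIN
        -- move at most 200 rows per iteration: delete from the source and
        -- write the deleted rows straight into the destination
        DELETE TOP (200) FROM dbo.Userdetails
        OUTPUT deleted.* INTO dbo.Destination;
        IF @@ROWCOUNT = 0 BREAK;
    END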

  • How do I create a target table with the same PK as the source table?

    I am trying to create a target table in a mapping that will end up with the same primary key as the source table.
    It is a simple map that simply uses a subset of the columns of the source table in the target table. I wanted to create and bind a new table by dragging the columns I want from the source onto an initially blank target table operator, change the column names, and create a primary key to match the source table.
    I can't seem to be able to create a constraint on the table in the mapping. I can create the constraint after the table is created and bound to the database object, but the PK doesn't carry back into the mapping.
    I need it in the mapping so I can use the UPDATE/INSERT operation with the 'All Constraints' implementation. The mapping won't let me validate the object without the PK on it in the map.
    Believe it or not folks, I am getting better at this.
    Thanks very much for the guidance.
    Gary

    Hi Gary
    You are close, you are really close... :-))
    You need to do exactly as you propose plus one extra step. Build the map as you describe, binding the new table to the target. Then you edit the table definition to add the primary key and any other constraints you need. After this is the step that you are missing.
    You need to do the following:
    1. Go back and re-edit the map
    2. Right click on the table
    3. From the pop up menu, select Reconcile Inbound
    4. Set any operators that you need for the UPDATE/INSERT
    5. Save the map
    6. Commit your changes
    The first three steps above make the map read in the indexes and constraints that you set on the table. Finally, you need to deploy the table and then deploy the map.
    Hope this helps
    Regards
    Michael

  • How to get source table inside Template Mapping code template

    Hi guys,
    I have the following scenario: I have a table from an external database and want to map it to an Oracle table. This is done with template mapping, and I selected a Load code template on the execution unit that holds only the external table; this Load code template will read row by row from the source table and make the inserts into the flow table. I know that Oracle uses odiRef.getFrom() in order to construct the select statement from the external table. Because I need to do something custom, I need to have a list of the source tables inside the Load code template.
    Is this possible?
    P.S. I use owb 11gr2.
    Regards,
    Cipi

    Hi Suraj,
    Thx for your answer!
    After posting the message I found other odiRef functions in the ODI documentation, and this is what I'm trying now to see if it works; I will let you know my results...
    I implemented a custom iterator that retrieves the data from an external source and passes it to INSERT commands executed against the flow table. In order for this iterator to work, I need the source table name of the current execution unit. The iterator then uses that name to get the data from the external entity and return it as an array of Objects, and this array of objects is inserted into the flow table.
    Regards,
    Cipi

  • How to get source table name according to target table

    hi all
    Another question:
    Once a map has been created and deployed, the corresponding information is stored in the design repository and the runtime repository. My question is how to find the source table name for a given target table, and in which tables these records are recorded.
    Can somebody help me please?
    thanks a lot!

    This is a query that will get you the operators in a mapping. To get source and targets you will need some additional information but this should get you started:
    set pages 999
    col PROJECT format a20
    col MODULE format a20
    col MAPPING format a25
    col OPERATOR format a20
    col OP_TYPE format a15
    select mod.project_name PROJECT
    , map.information_system_name MODULE
    , map.map_name MAPPING
    , cmp.map_component_name OPERATOR
    , cmp.operator_type OP_TYPE
    from all_iv_xform_maps map
    , all_iv_modules mod
    , all_iv_xform_map_components cmp
    where mod.information_system_id = map.information_system_id
    and map.map_id = cmp.map_id
    and mod.project_name = '&Project'
    order by 1,2,3;
    Jean-Pierre
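    To narrow the result down to just the table operators (typically the sources and targets), you can filter on cmp.operator_type; the exact literals vary by OWB version, so check which values your repository uses first:
    select distinct cmp.operator_type
    from all_iv_xform_map_components cmp;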

  • How to get source table name

    Hi,
    I need to know how to get a source table name. I need to get the source table name and do some transformation. I am adding a step in the IKM to do this and therefore need the source table name there.
    <%=odiRef.getSrcTablesList("","[RES_NAME]","","")%> gives the work table name (C$_0XXXXX) whereas the actual source table name is expected.
    Could someone please help?
    Thanks.

    Hi,
    May i add a point?
    At IKM level, if you use this API it will always return the C$ table name, since for the IKM the C$ table is the source. You need to capture and use this API at LKM level.
    In the LKM, add a step in "Command on Target" with the technology set to Jython, and try the code below.
    mySourceTable = '<%=odiRef.getSrcTablesList("", "[RES_NAME]", ", ", "")%>'
    And later in the IKM use this variable for your transformation.
    Thanks,
    Guru

  • SSIS 2012 is intermittently failing with below "Invalid date format" while importing data from a source table into a Destination table with same exact schema.

    We migrated packages from SSIS 2008 to 2012. The package is working fine in all environments except one.
    SSIS 2012 is intermittently failing with the below error while importing data from a source table into a destination table with the exact same schema.
    Error: 2014-01-28 15:52:05.19
       Code: 0x80004005
       Source: xxxxxxxx SSIS.Pipeline
       Description: Unspecified error
    End Error
    Error: 2014-01-28 15:52:05.19
       Code: 0xC0202009
       Source: Process xxxxxx Load TableName [48]
       Description: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80004005.
    An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 11.0"  Hresult: 0x80004005  Description: "Invalid date format".
    End Error
    Error: 2014-01-28 15:52:05.19
       Code: 0xC020901C
       Source: Process xxxxxxxx Load TableName [48]
       Description: There was an error with Load TableName.Inputs[OLE DB Destination Input].Columns[Updated] on Load TableName.Inputs[OLE DB Destination Input]. The column status returned was: "Conversion failed because the data value overflowed
    the specified type.".
    End Error
    But when we reorder the "Updated" column in the destination table, the package imports the data successfully.
    This looks like a bug to me. Any suggestions?

    Hi Mohideen,
    Based on my research, the issue might be related to one of the following factors:
    Memory pressure. Check whether there is memory pressure when the issue occurs. In addition, if the package runs in the 32-bit runtime on that specific server, use the 64-bit runtime instead.
    A known issue with SQL Server Native Client. As a workaround, use the .NET data provider instead of SNAC.
    Hope this helps.
    Regards,
    Mike Yin
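    One way to confirm whether the Updated column really contains values that overflow the destination type is a quick check against the source (a diagnostic sketch only; dbo.SourceTable and Updated stand in for the real names, and it assumes the destination column is DATETIME while the source is DATETIME2 or similar):
    SELECT *
    FROM dbo.SourceTable
    WHERE [Updated] IS NOT NULL
      AND TRY_CONVERT(datetime, [Updated]) IS NULL;
    -- TRY_CONVERT returns NULL when the value cannot be represented as DATETIME,
    -- for example dates earlier than 1753-01-01.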

  • Error in updating the source table in mapping

    Hi All,
    I have a mapping in which I am fetching records from a table A, performing some expression logic, and then using a Splitter operator to update my target table B as well as update one of the columns of my source table A (to indicate that the record has been processed). When I execute the mapping, the update of target table B works, but the update of my source table A does not work, and the job completes successfully.
    Can somebody please help me to debug this?
    Thanks

    Hi,
    Please check the splitter conditions for the two groups that you have set. This might be a case where the conditions in the splitter send all the record sets to the target table and none to the source table, so the source table is not getting updated by any record.
    Regards
    -AP

  • Best approach to delete records that are not in the source table anymore.

    I have a situation where I need to remove records from dimensions that are no longer in the source data. Right now we are not maintaining history, i.e. not using SCDs, but we are planning to in the next release; if we did, it would be easy to identify the latest records. The load is nightly, and records are updated and new ones added.
    The approach I am considering is to join the dimension tables to the sources on the keys and delete whatever doesn't join. However, is there perhaps some function in OWB that would allow this to be done automatically on import, so it is also in place for the future?
    Thanks!

    Bear in mind that deleting dimension records becomes problematic if you have facts attached to them. Just because a record is no longer in the active set doesn't mean that it wasn't used historically, and so it may have foreign key constraints on it in your database. If this is the case, a short-term solution would be to add an expiry_date field to the dimension and update the load to set this value when the record disappears, rather than delete it.
    To do that, use the target dimension as a source table and outer join it to the actual source table on the natural key; on all records where the outer join fails, your update sets expiry_date = nvl(expiry_date, sysdate), i.e. sets it to sysdate only if the record has not already been expired.
    Further consideration: what do you do if the record is re-inserted into the source table? create a new dimension key? Or remove the expiry date?
    But I will say that I am not a fan of deleting records in most circumstances. What do you do if you discover a calculation error and need to fix that and republish historical cubes? Without the historical data, you lose the ability to do things like that.
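    In SQL terms, the expiry-date update described above might look like this (a sketch only, with hypothetical tables dim_customer / src_customer and natural key cust_nk):
    UPDATE dim_customer d
       SET d.expiry_date = NVL(d.expiry_date, SYSDATE)
     WHERE NOT EXISTS (SELECT 1
                         FROM src_customer s
                        WHERE s.cust_nk = d.cust_nk);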

  • Any way to know the source table of a Form at runtime

    Hi,
    Is there any way to know the source table/view of a form at run time?
    Thanks in advance

    Hi,
    But Get_Block_Property(..., DML_DATA_TARGET_NAME) has to be coded; I mean, is there any way to see it at runtime, the way we get the errors from Help -> Display Errors?
    If you are asking whether there is a way to get this information from the default menu, then the answer is no. However, you can create your own custom menu, or a button in your form, that displays that information using Get_Block_Property when you click on your menu item or button.

  • Custom delta extractor: All data deleted in source table in R/3

    Hi everyone,
    I have made a custom delta extractor from R/3 to a BW system. The setup is the following:
    The source table in R/3 holds a timestamp, which is used for the delta. The data is afterwards loaded into a DSO in the BW system. The extractor works as expected with delta capability. Furthermore, if I delete a record in the source table, this is not transmitted to the DSO, which is also as expected.
    The issue is this, however: if we delete all data in the source table, then on the next load there is a request showing 1 record transferred to the DSO. This request does not show up in the PSA, however, and afterwards all data fields in the DSO are set to initial.
    Does anyone know why this happens?
    Thank you in advance.
    Philip R. Jarnhus

    Hi Philip,
    As you have used a generic extractor I am not sure how ROCANCEL will work, but you can check the documentation on 0RECORDMODE for more information.
    Regards,
    Durgesh.

  • Join two source tables and replicate into a target table with BLOB

    Hi,
    I am working on an integration to source transaction data from legacy application to ESB using GG.
    What I need to do is join two source tables (to de-normalize the area_id) to form the transaction detail, then transform it by concatenating the transaction detail fields into a values-only CSV, and replicate it into the BLOB content field of the target ESB IN_DATA table.
    Based on what I have researched, a lookup that joins two source tables requires SQLEXEC, which doesn't support BLOB.
    What alternatives are there and what GG recommend in such use case?
    Any helpful advice is much appreciated.
    thanks,
    Xiaocun

    Xiaocun,
    Not sure what your data looks like, but it's possible that the comma separated value (CSV) requirement could be solved by something like this in your MAP statement:
    COLMAP (USEDEFAULTS,
      my_blob = @STRCAT (col02, ",", col03, ",", col04));
    Since this is not 1:1 you'll be using a sourcedefs file, which is nice because it will do the datatype conversion for you under the covers (also a nice trick when migrating long raws to blobs). So col02 can be varchar2, col03 a number, and col04 a clob and they'll convert in real-time.
    Mapping two tables to one is simple enough with two MAP statements; the harder challenge is joining operations from separate transactions, because OGG is operation based and doesn't work on aggregates. It's possible you could end up using a combination of built-in parameters and functions with SQLEXEC and SQL/PL/SQL for more complicated scenarios, all depending on the design of the target table. But you have several scenarios to address.
    For example, is the target table really a history table or are you actually going to delete from it? If just the child is deleted but you don't want to delete the whole row yet, you may want to use NOCOMPRESSDELETES & UPDATEDELETES and COLMAP a new flag column to denote it was deleted. It's likely that the insert on the child may really mean an update to the target (see UPDATEINSERTS).
    If you need to update the LOB by appending or prepending new data then that's going to require some custom work, staging tables and a looping script, or a user exit.
    Some parameters you may want to become familiar with if not already:
    COLS | COLSEXCEPT
    COLMAP
    OVERRIDEDUPS
    INSERTDELETES
    INSERTMISSINGUPDATES
    INSERTUPDATES
    GETDELETES | IGNOREDELETES
    GETINSERTS | IGNOREINSERTS
    GETUPDATES | IGNOREUPDATES
    Good luck,
    -joe

  • Mapping Issue with 7 source tables and one target table in one step

    Hi,
    Request for a small help in OWB.
    I am trying to map 7 source tables to a single target table in one step. These source tables are in an Oracle 10g database but don't have PK and FK relationships.
    I am able to link one table to the target by mapping some of the columns. But when I come to the second table it gives this error message:
    AP18003: Connection target attribute group is already connected to an incompatible data source. Use a joiner or set operator to join the upstream data first before connecting it to this operator.
    As per the error message I used a Joiner operator and tried to map the second table to the target, but it still gives me the same error message. Could somebody give me a hand to get out of this step?
    Thanks for your help in advance.
    Cheers,
    Krishna.

    Hi,
    like this:
    Ingroup1
    - id -> Number(9,0)
    - name -> VARCHAR2(500)
    Ingroup2
    - my_id -> Number(9,0)
    - name -> VARCHAR2(500)
    Outgroup
    - id -> point to target_table.id
    - name -> point to target_table.name
    Not:
    Ingroup1
    - id -> Number(9,0)
    - name -> VARCHAR2(500)
    Ingroup2
    - name -> VARCHAR2(500)
    - my_id -> VARCHAR2(9)
    Regards
    Detlef
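    In SQL terms, what the Joiner feeds to the target is roughly the following (a sketch only, with hypothetical source tables; the join columns must have compatible datatypes, which is exactly what the "Not:" example above violates):
    INSERT INTO target_table (id, name)
    SELECT g1.id, g2.name
      FROM source_table1 g1
      JOIN source_table2 g2
        ON g2.my_id = g1.id;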
