Mapping with only Outputs, no Target table?

Is it technically possible in OWB to have a mapping whose output is streamed to an OUTPUT PARAMETER object instead of a table?
I would love to have something like that so that I could convert functions into mappings with input and output parameters and thus reduce coding even further.
If it is not available, any idea whether it will be in the near future?

Hi Sridhar,
This is not there yet, but we have indeed added it to the next release. You will be able to end a mapping in an API (a nice name for it...) which does exactly what you want.
Maybe for now you can write a dummy insert (select 1 from dual into x)?
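The dummy step would boil down to something like this (a minimal sketch; x is a hypothetical one-column scratch table):
insert into x select 1 from dual;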
Jean-Pierre

Similar Messages

  • I have a source table with 10 records and a target table with 15 records. My question is: using the Table Comparison transform, how do I delete unmatched records from the target table?

    I have a source table with 10 records and a target table with 15 records. My question is about using the Table Comparison transform: I want to delete unmatched records from the target table. How is it done?

    Hi Kishore,
    First, identify deleted records by selecting the "Detect deleted rows from comparison table" option in the Table Comparison transform.
    Then use a Map Operation with input row type "delete" and output row type "delete" to delete those records from the target table.

  • How to find the interface from the target table name

    Hi friends,
    Given a target table name, is it possible to find the name of the interface that loads that target table in ODI 11.1.1.7?
    Thanks in advance.
    Regards,
    Saro

    Highlight the datastore in the navigator and expand its subtree; under the 'Populated by' node you will find all datastores that are used as sources for the current datastore.
    Then expand 'Used by' -> Interfaces; it will show all interfaces that include the current datastore (though the datastore may appear there as a source).
    That's all you can get from the UI; otherwise I suppose you can use the SDK.
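    Alternatively, you could query the work repository directly. A rough sketch (the repository tables SNP_POP and SNP_TABLE and their join column I_TABLE are assumptions from memory; verify them against your repository version):
    select p.pop_name
    from snp_pop p
    join snp_table t on t.i_table = p.i_table
    where t.table_name = 'MY_TARGET_TABLE';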

  • How to design a mapping with a pre-mapping operator

    Hi,
    I want to design a mapping with only one operator: a pre-mapping operator that uses a procedure with no parameters (so there are no input/output groups). When I validate this mapping I get error VLD-1009: Mapping lines have not been created.
    Question 1: how do I design a mapping if all I want to do is to execute a procedure and there are no target operators involved?
    Question 2: how are pre-mapping process operators intended to be used? (apparently not the way I use one here)
    Jaap.

    One way of doing this is to just add a constant and a target table and insert a value into a dummy target (e.g. sysdate into dummy_table). This way you can execute the procedure using a pre-mapping operator; see the sketch below.
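    The generated code for such a mapping would amount to something like this (a minimal sketch; dummy_table and its load_date column are hypothetical):
    insert into dummy_table (load_date) select sysdate from dual;
    The pre-mapping operator then fires your procedure before this trivial insert runs.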

  • Aggregator transformation without any condition showing zero records in target table

    Hi everyone, I have a source table with 100 records, and I am passing all 100 records to an Aggregator transformation. In the Aggregator transformation I am not specifying any condition; I just mapped from the Aggregator directly to the target table. After running this mapping I found 0 records in the target table. Why is this happening? Can anyone explain?

    I have a source table like this:
    Magazinweg 7
    Taucherstraße 10
    Taucherstraße 10
    Av. Copacabana, 267
    Strada Provinciale 124
    Fauntleroy Circus
    Av. dos Lusíadas, 23
    Rua da Panificadora, 12
    Av. Inês de Castro, 414
    Avda. Azteca 123
    I want to strip out the characters and sum up the numbers. I can remove the characters with the reg_replace() function, but I am not able to add the numbers because they are not of fixed length. My output should be 71115705396.

  • OMB Plus script needed to change target table load hint

    Greetings
    Environment:
    Metadata repository: 10.1.0.4 currently, but moving to 10.2.0.3 soon; client running on Windows XP Pro and the repository on AIX 5.3.
    I need an OMB Plus script to find all the maps where the target table(s) have a Loading Type of either INSERT/UPDATE or UPDATE/INSERT and I want to remove the APPEND loading hint.
    I have suffered with attempts at using OMB Plus long enough.
    Also, is there a way in either 10.1 or 10.2 to set the default action of that attribute to NOT include the APPEND hint?
    Thanks very much in advance.
    -gary

    Hi Gary,
    you can get the loading type with the command
    OMBRETRIEVE MAPPING 'MAP_NAME' OPERATOR 'TABLE_OP' GET PROPERTIES (LOADING_TYPE)
    and the loading hint with the command
    OMBRETRIEVE MAPPING 'MAP_NAME' OPERATOR 'TABLE_OP' GET PROPERTIES (LOADING_HINT)
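    Putting these together, a rough OMB Plus sketch of the whole sweep might look like this (untested; the exact LOADING_TYPE values and the empty-string reset of the hint are assumptions, so check them against your release):
    foreach mapName [OMBLIST MAPPINGS] {
      foreach tabOp [OMBRETRIEVE MAPPING '$mapName' GET TABLE OPERATORS] {
        set ltype [OMBRETRIEVE MAPPING '$mapName' OPERATOR '$tabOp' GET PROPERTIES (LOADING_TYPE)]
        if { $ltype == "INSERT/UPDATE" || $ltype == "UPDATE/INSERT" } {
          # clear the APPEND hint by setting LOADING_HINT to an empty string
          OMBALTER MAPPING '$mapName' MODIFY OPERATOR '$tabOp' SET PROPERTIES (LOADING_HINT) VALUES ('')
        }
      }
    }
    OMBCOMMIT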
    In this thread you can find a Tcl script for identifying target or source tables/views in a mapping:
    Re: Q : OMBPLUS, find TARGET TABLE
    Regards,
    Oleg

  • Mapping Issue with 7 source tables and one target table in one step

    Hi,
    Request for a small help in OWB.
    I am trying to map 7 source tables to a single target table in one step. These source tables are in an Oracle 10g database but don't have PK/FK relationships.
    I am able to link one table to the target by connecting some of the columns. But when I come to the second table it gives an error message:
    API8003: Connection target attribute group is already connected to an incompatible data source. Use a joiner or set operator to join the upstream data first before connecting it to this operator.
    As per the error message I used a Joiner operator and tried to map the second table to the target, but it still gives me the same error message. Could somebody give me a hand getting out of this situation?
    Thanks for your help in advance.
    Cheers,
    Krishna.

    Hi,
    the Joiner's input and output groups should look like this (only the Joiner's output group connects to the target, and joined attributes must have matching datatypes):
    Ingroup1
    - id -> Number(9,0)
    - name -> VARCHAR2(500)
    Ingroup2
    - my_id -> Number(9,0)
    - name -> VARCHAR2(500)
    Outgroup
    - id -> point to target_table.id
    - name -> point to target_table.name
    Not like this (here my_id's datatype doesn't match):
    Ingroup1
    - id -> Number(9,0)
    - name -> VARCHAR2(500)
    Ingroup2
    - name -> VARCHAR2(500)
    - my_id -> VARCHAR2(9)
    Regards
    Detlef

  • Updating target table with date in OWB using mapping

    I want to update the effective begin date (month begin date) when we load the target table. I have a field in the target table called Eff_updated_dt, defined as DATE.
    If I use the Data Generator operator followed by an Expression operator (trunc(sysdate,'mm')), it gives an error saying the Data Generator operator must be connected to a flat file.
    What other operators can I use to populate the column with the month begin date?
    Also, which operator do I use to put a sequence key in the table?
    Thanks

    you can always use a constant for the date field.
    Just create a constant of type DATE, give it the value you want, trunc(sysdate,'mm'), and connect it to your target column Eff_updated_dt.
    There is also a Sequence operator that you can link to your "sequence key" as well.
    Borkur

  • URGENT: Problem in a mapping with 8 tables in a JOIN and using the DEDUP operator

    I have an urgent problem with a mapping: I must load data from 8 source tables (joined together) into a target table.
    Some data from 1 of the 8 tables has to be deduplicated, so I created a sort of staging table into which I inserted the "cleaned", i.e. deduplicated, data.
    I did this to make the whole process faster.
    Then I joined the 8 tables, writing the join conditions in the operator properties, and connected the outputs to the fields of the target table.
    But it does not work, because this kind of mapping creates a Cartesian product.
    Then I tried again with another mapping built up in this way: after the Joiner operator I used the Match-Merge operator and loaded all data into a staging table exactly like the target one, except for the PK (because it is a sequence). Then I load the data from this staging table into the target one and, of course, I also connect the sequence (the primary key) to the target table. The first load works fine (and loads all the data as I expected).
    For subsequent loads I scheduled a pre-mapping process that truncates the staging table and reloads the new data.
    But it does not work. I mean, it doesn't update the previously loaded data (or insert the new rows); it only inserts the data, without considering the PK.
    So, my questions are as follows:
    1) Why doesn't loading the data directly from the Joiner operator into the fact table work? Why does it generate a Cartesian product?
    2) Is the workaround of using the Match-Merge operator correct? I have to admit that I didn't understand the behaviour of this operator very well...
    3) And, most of all, HOW CAN I LOAD MY DATA? I cannot find a way out...

    First of all, thanks for the answer!
    Yes, I inserted the proper join condition, and in fact I saw that when OWB generates the script it does honour my join but, instead of using the fields in a single select statement, it builds up many sub-selects. Furthermore, it seems it doesn't properly evaluate the fields coming from the source table into which I inserted the deduplicated data... I mean, the problem seems not to be the join condition, but the data not being correctly deduplicated.

  • Join two source tables and replicate into a target table with BLOB

    Hi,
    I am working on an integration to source transaction data from legacy application to ESB using GG.
    What I need to do is join two source tables (to de-normalize the area_id) to form the transaction detail, then transform it by concatenating the transaction detail fields into a values-only CSV, and replicate that into the BLOB content field of the target ESB IN_DATA table.
    Based on what I have researched, a lookup joining two source tables requires SQLEXEC, which doesn't support BLOB.
    What alternatives are there, and what does GG recommend in such a use case?
    Any helpful advice is much appreciated.
    thanks,
    Xiaocun

    Xiaocun,
    Not sure what your data looks like, but it's possible that the comma-separated value (CSV) requirement may be solved by something like this in your MAP statement:
    colmap (usedefaults,
    my_blob = @STRCAT (col02, ",", col03, ",", col04))
    Since this is not 1:1 you'll be using a sourcedefs file, which is nice because it will do the datatype conversion for you under the covers (also a nice trick when migrating long raws to BLOBs). So col02 can be a VARCHAR2, col03 a NUMBER, and col04 a CLOB, and they'll convert in real time.
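    In context, the full MAP entry might look like this (a sketch only; the schema and table names legacy.trans_detail and esb.in_data are hypothetical):
    MAP legacy.trans_detail, TARGET esb.in_data,
    COLMAP (USEDEFAULTS,
    my_blob = @STRCAT (col02, ",", col03, ",", col04));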
    Mapping two tables to one is simple enough with two MAP statements; the harder challenge is joining operations from separate transactions, because OGG is operation-based and doesn't work on aggregates. It's possible you could end up using a combination of built-in parameters and functions with SQLEXEC and SQL/PL/SQL for more complicated scenarios, all depending on the design of the target table. But you have several scenarios to address.
    For example, is the target table really a history table, or are you actually going to delete from it? If just the child is deleted but you don't want to delete the whole row yet, you may want to use NOCOMPRESSDELETES & UPDATEDELETES and COLMAP a new flag column to denote that it was deleted. It's likely that an insert on the child may really mean an update to the target (see UPDATEINSERTS).
    If you need to update the LOB by appending or prepending new data then that's going to require some custom work, staging tables and a looping script, or a user exit.
    Some parameters you may want to become familiar with if not already:
    COLS | COLSEXCEPT
    COLMAP
    OVERRIDEDUPS
    INSERTDELETES
    INSERTMISSINGUPDATES
    INSERTUPDATES
    GETDELETES | IGNOREDELETES
    GETINSERTS | IGNOREINSERTS
    GETUPDATES | IGNOREUPDATES
    Good luck,
    -joe

  • "Economizing" in a list of maps with identical keys (like a database table)

    Hi there!:
    I've been checking this forum for information about something like what I state in the title of this message, but haven't found such a specific case. I'm commenting my doubt below.
    I'm working with a list of maps whose keys are exactly the same in all of them (they're of type String). Indeed, it could be considered an extrapolation of a database table: the list contains maps which act as rows, and every map contains keys and values which represent column names and values.
    However, this means repeating the same key values in every map, which wastes memory. Granted, maybe it's not such a big expense, but since the list can contain thousands of maps, I think it would be better to choose a more "economical" way to achieve the same result.
    I had thought about building a class which stored everything as a list of lists and, internally, mapped those String keys to the corresponding Integer indexes of every list. But then I realized that maybe I was re-inventing the wheel, because it's very probable that someone has already done that. Is there perhaps a class in the core API which allows that?
    Thank you very much for your help.

    Well, after re-reading the Java tutorial on the Sun website I've come to a conclusion which I should have reached before, when I thought about using StringBuffers as keys of the maps instead of Strings.
    I'm so used to building Strings from literals instead of the "new String ()" constructor (just like everyone) that I had forgotten that, as with any kind of object but unlike the primitive data types, Strings are not copied when passed to a method: the reference to the object is what gets passed. The fact that they are immutable made me think they were passed by value.
    Apart from that, my problem was also that I thought that by using literals I was creating a different String object every time, even though their content was equal (making 400 different keys called "name", for example).
    In other words, I was doing something like this:
    // It makes a list of maps which will contain maps of boy's personal data (as if they were "rows" in a table).
    List <Map <String, Object>> listData = new ArrayList <Map <String, Object>> (listBoy.size ());
    // It loops over a list of Boy objects, obtained using EJB.
    for (Boy boy : listBoy) {
         // It makes a new map containing only the information which I'm interested on from the Boy object.
         Map <String, Object> map = new HashMap <String, Object> (2);
         map.put ("name", boy.getName ());
         map.put ("surname", boy.getSurname ());
         // It adds the map to the list of data.
         listData.add (map);
    }Well, the "problem" here (being too demanding, but I'm :P ) is that I was adding all the time new Strings objects as keys in every map. The key "name" in the first map was different from "name" in the second one and so on.
    I guess that my knowledge got messed at certain point and thought that it was impossible to use exactly the same String object in different maps (the reference, not the same value!). Thus, my idea of using StringBuffers instead.
    But thinking about it carefully, why not do this?
    List <Map <String, Object>> listData = new ArrayList <Map <String, Object>> (listBoy.size ());
    // It makes the necessary String keys previously, instead of using literals on every loop later.
    String name = "name";
    String surname = "surname";
    for (Boy boy : listBoy) {
         // It uses references (pointers) to the same String keys, instead of new ones every time.
         Map <String, Object> map = new HashMap <String, Object> (2);
         map.put (name, boy.getName ());
         map.put (surname, boy.getSurname ());
         listData.add (map);
    }
    Unfortunately, the hashCode method on String is overridden and, instead of returning the typical hash code based on the object's identity in memory, it returns one based on its content. That way I can't use it to make sure that the "name" key in one map refers to the same object in memory as the one in another map. I know, I know: common sense and the Java documentation confirm it, but I would have loved an empiric way to demonstrate it.
    I guess that using "javap" and disassembling the generated bytecode is the only way to make sure of it.
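    For what it's worth, there is a quicker empiric check than javap: == compares object references directly, and System.identityHashCode ignores String's content-based hashCode. A small sketch (which also shows that identical literals are interned, i.e. they already share one object):
    public class InternDemo {
        public static void main(String[] args) {
            String a = "name";              // literal: taken from the intern pool
            String b = "name";              // same interned object as a
            String c = new String("name");  // explicitly forces a distinct object
            System.out.println(a == b);     // true: same reference
            System.out.println(a == c);     // false: different objects, equal content
            System.out.println(System.identityHashCode(a) == System.identityHashCode(b)); // true
        }
    }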
    I believe it's solved now :) (unless someone tells me otherwise). I'm still mad at myself for thinking that Strings were copied when passed. Thinking about it now, it makes no sense!
    dannyyates: It's curious, because re-reading every answer I think you may have been pointing to this solution already. But the sentence "you put the +same+ string" was a little ambiguous for me, and I thought you meant putting the same String as in "the same text" (which I was already doing), not the same object reference (in other words, using a common variable). I wish we could have discussed it further in depth. Thanks a lot for your help anyway :) .

  • How do I create a target table with the same PK as the source table?

    I am trying to create a target table in a mapping that ends up with the same primary key as the source table.
    It is a simple map that uses a subset of the columns of the source table in the target table. I wanted to create and bind a new table by dragging the columns I want from the source to the initially blank target table operator, change the column names, and create a primary key to match the source table.
    I can't seem to create a constraint on the table in the mapping. I can create the constraint after the table is created and bound to the database object, but the PK doesn't carry back into the mapping.
    I need it in the mapping so I can use the UPDATE/INSERT operation with the 'All Constraints' implementation. The mapping won't let me validate the object without the PK on it in the map.
    Believe it or not folks, I am getting better at this.
    Thanks very much for the guidance.
    Gary

    Hi Gary
    You are close, you are really close... :-))
    You need to do exactly as you propose, plus one extra step. Build the map as you describe, binding the new table to the target. Then edit the table definition to add the primary key and any other constraints you need. After this comes the step you are missing.
    You need to do the following:
    1. Go back and re-edit the map
    2. Right click on the table
    3. From the pop up menu, select Reconcile Inbound
    4. Set any operators that you need for the UPDATE/INSERT
    5. Save the map
    6. Commit your changes
    The first three steps above make the map read in the indexes and constraints that you set on the table. Finally, you need to deploy the table and then deploy the map.
    Hope this helps
    Regards
    Michael

  • Load data from flat file to target table with scheduling in loader

    Hi All,
    I have requirement as follows.
    I need to load data into the target table every Saturday. My source file consists of data for several states, and each week I have to load one particular state's data into the target table.
    If in the first week I loaded AP data, then in the second week, on Saturday, Karnataka, and so on.
    Can you please also provide code showing how I can schedule the data load for every Saturday with different state column values automatically?
    Thanks & Regards,
    Sekhar

    The best solution would be:
    get the flat file onto the database server
    define an external table pointing to that flat file
    insert into the destination table(s) with insert into dest_table select * from external_table
    Loading only a single state's data each Saturday might mean trouble, but assuming there are valid reasons to do so, you could (see the sketch below):
    create a job with a p_state parameter executing insert into dest_table select * from external_table where state = p_state
    create a Scheduler chain where each member runs on the next Saturday, executing the same job with a different p_state parameter
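    A minimal sketch of the external-table approach (the directory path, file name, and column layout are assumptions):
    create or replace directory flat_dir as '/data/in';

    create table states_ext (
      state     varchar2(30),
      some_col  varchar2(100)
    )
    organization external (
      type oracle_loader
      default directory flat_dir
      access parameters (
        records delimited by newline
        fields terminated by ','
      )
      location ('states.csv')
    );

    -- the weekly job would then run something like:
    insert into target_table (state, some_col)
      select state, some_col from states_ext where state = :p_state;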
    Managing Tables
    Oracle Scheduler Concepts
    Regards
    Etbin

  • How to create a mapping with text file as my target

    I need to create a mapping with a table as source and a text file as target.
    I am using OWB 10gR2 with database Oracle 10g.
    Can anyone help me create a mapping with a text file as the target?

    Hi,
    just create a file location and a file definition, and use the resulting file operator as the target object inside your mapping.
    Regards jwehner

  • Select event not triggered in table with only one row

    Hi all,
    I am building a BI VC application where query data is displayed in a table. When the user clicks on a table row, another query is triggered and its output shown in a second table. The output of table 1 is linked to the input of query2/table2 with a select event.
    The problem I am facing is that if there is only one row in table 1, the select event is never triggered. If, however, there are two or more rows in the table, the select event is triggered and query 2 is executed. I have searched the forums, but all I could find on select event problems was how to avoid the initial select event.
    Has anyone else experienced this issue, and what is the workaround? Or is this a bug in Visual Composer? We are on VC 7.0 SP19.
    Cheers,
    Astein Meland

    Thanks Chittya,
    Yes, we have considered this option as well. But as we have more than one table linked together, we would like to avoid having to manually click several buttons.
    In the end I found Note 1364334 describing bugfixes released in VC 7.0 SP20:
    "Normally, when a Visual Composer table is populated from a data service, the first row is selected by default. However, we have found that if only one data row is returned from the data service, this row is not selected by default and cannot be manually selected by clicking on it either."
    So I think we will just have to upgrade our Portal to the latest support packs to solve this problem.
    Thanks,
    Astein
