OWB Expert - Insert Operator in a mapping - To Share

Hi All,
I have designed an expert that inserts one of the following operators (aggregator, set_operation, deduplicator, expression, joiner, splitter, sorter, filter) anywhere in a mapping, if logically possible. The expert retains all existing connections when the operator is inserted.
The expert's MDL file, a detailed demo and its limitations are documented and available in the zip file, which can be downloaded from here.
Please go through the document in the zip file before using it.
The expert does not come with any WARRANTY!!! Use at your own risk!!! Comments and suggestions are always welcome.
Thanks,
Sam.

Greetings SAM,
1. I cannot connect to the location you indicated for the download. Other replies indicate that your expert tool(s) are accurate, so I am very interested in obtaining the download. Please help me by verifying that the share is, in fact, still available.
2. I am following the Oracle tutorial on OWB, but I wanted a downloadable tutorial, or something similar, so I can study offline. Do you have any suggestions? I have the OWB manuals, but wanted an offline tutorial (to study while on the plane :) ).
3. Do you have a set of scripts which call the OMB Plus commands? I am looking for examples of calling the OMB Plus commands, where the examples show common sequences of calls that complete a function in OWB. For example, as I use the GUI in the OWB Design Center, I can see that I could also be using the OMB Plus commands to get the same thing done. If I build the scripts (or sets of commands), then I can both use them to get the job done and use them to teach others the main functional calls for working with OMB. All ideas welcome.
PS: I am new to this forum, so please give me hints on protocol if I make a mistake here.
Thanks kindly SAM.
Paul
user638704
-----------------

Similar Messages

  • Using delete/insert mode operator target in mapping

    Hello Guys,
    Can you please help me resolve my problem? It's very urgent.
    I use OWB 10gR2 and I have created a mapping to load data from table to table.
    I have a source table A that I want to integrate into a table B.
    I want to put my target table B in delete/insert mode, so that rows are deleted from B where A.annee = B.annee
    before the data from A is inserted.
    How do I configure this mapping?
    Thanks in advance.
    Regards.
    fanfita.

    You don't have to do anything in particular:
    In the target table properties, set the load type to Delete/Insert and specify the column that should be checked when deleting. You can do this by clicking on the column of the target table in your mapping and setting properties such as "load while insert/delete/update"; you have to check the delete option on the columns you want matched during the delete.
    If your target table has a sequence, then delete and insert will generate new sequence IDs, and I am not sure whether there is a foreign key that needs to be considered here, so it might be a better option to use Update/Insert.
    If there are no dependencies at all, then you can go for the Delete/Insert option. A rough SQL sketch of what this load type effectively does is shown below.
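    For illustration only (this is not the code OWB generates): with the Delete/Insert load type and the ANNEE column marked for matching on delete, the load logically behaves roughly like the following SQL. The column list is made up for the example.

    -- Hypothetical sketch of a Delete/Insert load on target B, matched on ANNEE
    DELETE FROM b
     WHERE b.annee IN (SELECT a.annee FROM a);   -- remove rows for the incoming years

    INSERT INTO b (annee, col1, col2)            -- column list is an assumption
    SELECT a.annee, a.col1, a.col2
      FROM a;

    COMMIT;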
    Edited by: Darthvader-647181 on Feb 5, 2009 1:46 AM

  • Using two premap operator in a map.

    Hi,
    Can I use two premap operators in a mapping in OWB 10g Release 1?
    If not, is it possible in OWB 10g Release 2?
    Please reply.
    Thanks in advance.

    From the Oracle® Warehouse Builder User's Guide 10g Release 2 (10.2.0.2):
    "A mapping can only contain one Pre-Mapping Process operator. Only constants, mapping input parameters, and output from a Pre-Mapping Process can be mapped into a Post-Mapping Process operator."
    Thanks,
    Sutirtha

  • Invalid Location for operator while batch mapping

    Hi,
    I am using OWB version 10.2 and, while deploying the batch mapping, I am getting errors like:
    "VLD-1134: Invalid location for operator DATE_REF_SEQ_0.
    Configured location for operator does not exist or is not a valid location under the referenced module."
    DATE_REF is one of the tables in the database. Similarly, I am getting this error for many operators when I try to deploy the batch mapping. Could you please help us change the location of these operators in the Design Center / Control Center?
    Thanks,
    Vipul

    Hi,
    The VLD-1134 error occurs when the location in the configuration properties is not set or is pointing to a different location.
    This generally happens when you migrate OWB code from one server to another or from one version to another.
    Go to your OWB 10.2 Designer.
    Towards the right side you will find the Connection Explorer.
    Create a new location pointing to your database (if a location is already available, note down its name).
    Now, on your source module, set the metadata location and data location. You can do this by double-clicking the module name.
    Do the same for your target module.
    You also need to set the location for the Streams Administrator and the Location in the module Configuration.
    Thanks,
    Sutirtha

  • What causes BUFFER GETS and PHYSICAL READS in INSERT operation to be high?

    Hi All,
    I am performing a huge number of INSERTs into a newly installed Oracle XE 10.2.0.1.0 on Windows. There is no SELECT statement running, just 550,000 INSERTs one after the other. When I monitor the session I/O from Home > Administration > Database Monitor > Sessions, I see the following stats:
    BUFFER GETS = 1,550,560
    CONSISTENT GETS = 512,036
    PHYSICAL READS = 3,834
    BLOCK CHANGES = 1,034,232
    The presence of two of these stats confuses me. Since the only operation in this session is INSERT, why should there be BUFFER GETS of this magnitude, and why should there be PHYSICAL READS? Aren't these counters for read operations? The BLOCK CHANGES value is clear, as there are huge writes and the writes change that many blocks. Can any kind soul explain to me what causes these counters to show such high values?
    The total columns in the display table are as follows (from the link mentioned above)
    1. Status
    2. SID
    3. Database Users
    4. Command
    5. Time
    6. Block Gets
    7. Consistent Gets
    8. Physical Reads
    9. Block Changes
    10. Consistent Changes
    What do CONSISTENT GETS and CONSISTENT CHANGES mean in a typical INSERT operation? And does anyone know which underlying tables or views these values come from?
    Thanks,
    ...

    Flake wrote:
    Hans, thanks.
    The table has just 2 columns, both of which are VARCHAR2(500). No constraints, no indexes, and no foreign key references are in place. The total RAM in the system is 1 GB, and yes, there are other GUI applications running, such as the Firefox browser, Notepad, and command terminals.
    But what do these other applications have to do with Oracle BUFFER GETS, PHYSICAL READS, etc.? Awaiting your reply.
    Total RAM is 1 GB. If you let XE decide how much RAM is to be allocated to buffers, on startup that needs to be shared with any/all other applications. Let's say that leaves us with, say, 400 MB for the SGA + PGA.
    PGA is used for internal work such as sorting, which also comes into play when maintaining secondary structures such as indexes and uniqueness checks. Total PGA usage varies in size based on the number of connections and the required operations.
    And then there's the SGA. That needs to cover the space requirement for the data dictionary, any/all stored procedures and SQL statements being run, user security and so on, as well as the buffer blocks which represent the tablespace of the database. Since it is rare that the entire tablespace will fit into memory, blocks need to be swapped in and out.
    So, put too much space pressure on the poor operating system before starting the database, and the SGA may be squeezed. Put that space pressure on the system and you may end up with swapping or paging.
    This is one of the reasons Oracle professionals will argue for dedicated machines to handle Oracle software.
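    If you want to look at these counters outside the XE web console, here is a minimal sketch using the standard dynamic performance views (substitute the SID shown on the Sessions page):

    -- Per-session I/O statistics from V$SESSTAT / V$STATNAME
    SELECT sn.name, ss.value
      FROM v$sesstat  ss
      JOIN v$statname sn ON sn.statistic# = ss.statistic#
     WHERE ss.sid = :sid
       AND sn.name IN ('db block gets', 'consistent gets',
                       'physical reads', 'db block changes',
                       'consistent changes')
     ORDER BY sn.name;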

  • How to use the mirrored and log shipped secondary database for update or insert operations

    Hi,
    I am doing a DR test where I need to test the mirrored and log-shipped secondary database, but without stopping the mirroring or log shipping procedures. Is there a way to get the data out of the mirrored and log-shipped database into another database for update
    or insert operations?
    A database snapshot can be used only for the mirrored database, but updates cannot be done. Also, the secondary database of log shipping cannot be used for a database snapshot. Any ideas on how this can be implemented?
    Thanks,
    Preetha

    Hmm, in this case I think you need Merge Replication; otherwise it defeats the purpose of DR.
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Database Adapter insert operation with return value

    Hi All,
    I have a table with an auto-generated primary key in a DB2 database. I need an insert operation on this table that returns the current value of the primary key after the insert.
    For this, I have created an insert operation in the DB Adapter, but this insert operation is a one-way operation.
    Is there any way I can create an insert operation in the DB2 adapter which returns the primary key value?
    Thanks
    --Sree

    Hi Sree,
    With the insert operation it is not possible. You may use a stored procedure/function to perform the insert and return the required value, and call this SP/function using the DB Adapter.
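    As a rough illustration of this approach (the table and column names here are invented, and you should check the exact DB2 SQL PL syntax for your version), a procedure can do the insert and hand the generated key back through an OUT parameter, which the DB Adapter can then call as a two-way operation:

    -- Hypothetical DB2 SQL procedure: insert a row and return its generated key
    CREATE PROCEDURE insert_item (IN  p_descr VARCHAR(100),
                                  OUT p_id    INTEGER)
    LANGUAGE SQL
    BEGIN
      INSERT INTO items (descr) VALUES (p_descr);       -- ITEM_ID is an identity column
      SET p_id = INTEGER(IDENTITY_VAL_LOCAL());         -- last identity value generated in this session
    END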
    Regards,
    Anuj

  • Insert operation takes looooooooooong time

    One of our ETL procedures does an insert operation on a table using records selected from a couple of tables across a DB link.
    While the SELECT query takes about 6 seconds to retrieve nearly 4,200,000 records, the insert of those records into the table takes about 45 minutes.
    In fact, I've altered the table to NOLOGGING mode and the /*+ append */ hint (for the insert) is in place to reduce redo generation. The destination table has no indexes and no constraints either.
    Is there any other way I can reduce the time of the insert operation?
    Thanks,
    Bhagat

    >While the SELECT query takes about 6 seconds to retrieve nearly 4,200,000 records
    Is this in TOAD? If so, TOAD actually returns rows in sections and may not be returning the full set. You would have to actually scroll to the bottom of the grid and wait for the data to finish loading. Caution: if you did not select the option to execute queries in threads in TOAD, you will not have the ability to cancel the query.
    >the insert of those records to a table takes about 45 minutes
    Have you performed a CREATE TABLE AS SELECT using the query? This will give you a good benchmark for performance during a direct-path load. You can then look at the USER_SEGMENTS.BYTES column for that table after the load and, with your timings, check the data transfer rate with your network support. A minimal sketch is below.
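    The following is only a sketch, assuming the source is reachable over a DB link named SRC_LINK and using placeholder table names:

    -- Benchmark the transfer with a direct-path CREATE TABLE AS SELECT
    CREATE TABLE insert_benchmark NOLOGGING AS
    SELECT *
      FROM source_table@src_link;        -- substitute your actual SELECT here

    -- Compare with the direct-path insert into the real target
    INSERT /*+ APPEND */ INTO target_table
    SELECT *
      FROM source_table@src_link;
    COMMIT;

    -- See how much data actually moved
    SELECT bytes
      FROM user_segments
     WHERE segment_name = 'INSERT_BENCHMARK';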

  • DIServer insert operations for sales orders with error

    When inserting a sales order through the DI Server, I get an error about DocDueDate even though DocDueDate is included in the insert.
    I have tried supplying the value in the formats 'yyyy-mm-dd', 'yyyy/mm/dd', 'mm-dd-yyyy' and 'mm/dd/yyyy', and all of them report the error.
    I have also entered ShipDate together with DocDueDate, but I still get the error.
    The error message is 'env: Receiver-10Enter due date [ORDR.DocDueDate] 171AddObject2EEE7D98-AB71-464A-93AB-933F0AD3D4DC'.
    A purchase order entered with the same values works normally, so perhaps something is missing or wrong in the XML.
    Please suggest any possibilities that could resolve this.
    This is the XML used:
    "<BOM>" +
    "<BO>" +
    "<AdmInfo>" +
    "<Object>oOrders</Object>" +
    "</AdmInfo>" +
    "<QueryParams>" +
    "<DocEntry />" +
    "</QueryParams>" +
    "<Documents>" +
            "<row>" +
            "<DocType>I</DocType>" +
            "<DocDate>2012-01-11</DocDate>" +
            "<DocDueDate>2012-01-11</DocDueDate>" +
            "<CardCode>CD00001</CardCode>" +
            "<Address>Anymode</Address>" +
            "<DocCurrency>KRW</DocCurrency>" +
            "<Comments>[sales orders] LGU TEST</Comments>" +
            "<TaxDate>2012-01-11</TaxDate>" +
            "<JournalMemo>JournalMemo</JournalMemo>" +
            "<Address2>Addr</Address2>" +
            "<BPL_IDAssignedToInvoice>1</BPL_IDAssignedToInvoice>" +
            "</row>" +
    "</Documents>" +
    "<Document_Lines>" +
            "<row>" +
            "<ItemCode>ACDT0100ET</ItemCode>" +
            "<Quantity>1</Quantity>" +
            "<Price>5000</Price>" +
            "<DiscountPercent>10</DiscountPercent>" +
            "<WarehouseCode>A100</WarehouseCode>" +
            "<VatGroup>A2</VatGroup>" +
            "</row>" +
    "</Document_Lines>" +
    "</BO>" +
    "</BOM>";

    I had the same error. Change the date to the format yyyymmdd, and the problem is solved.

  • ORA-022887 error during insert operation how to handle properly ...

    Hello everyone, I got an error during an insert operation. How do I handle it properly?
    The SQL statement is given below.
    INSERT INTO PERSONEL.TRANSLATIONS (TID,SCRIPT_NAME,TAG,TR,EN,LOCAL)
    VALUES ((SELECT PERSONEL.SQX_TID.NEXTVAL AS TID FROM DUAL),'TEST_TEST','TEST','TR','EN','LOCAL');
    thank you

    I could not find an error like ORA-022887. What is the exact error you are getting? Do a cut and paste of the error here.
    OK, the proper error is ORA-02287: sequence number not allowed here.
    As already said, just remove the SELECT and use the sequence directly in the INSERT, as sketched below.
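    A minimal sketch of the corrected statement, using the names from the original post:

    -- ORA-02287 fix: reference the sequence directly instead of selecting it from DUAL
    INSERT INTO personel.translations (tid, script_name, tag, tr, en, local)
    VALUES (personel.sqx_tid.NEXTVAL, 'TEST_TEST', 'TEST', 'TR', 'EN', 'LOCAL');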
    Edited by: Karthick_Arp on Sep 11, 2009 1:04 AM

  • Which kind of cache group is suitable for the intensive insertion operation

    Hi Chris, sorry for calling on you directly, but you have given me many good answers to my many newbie questions these days :)
    You told me that a dynamic cache group is not suitable for intensive insert operations,
    because each INSERT into a child table has to perform an existence check against Oracle, even if I load the cache group into RAM manually (please correct me if I am wrong).
    Here I have many log tables that only have a primary key and no foreign references, and they are basically used to reflect changes to the related main tables.
    Every insert/update/delete on a main table will insert a log record into the related logging table (no direct foreign references).
    In order to cache these log tables, I have to create an independent cache group for each one, right?
    I do not want to load the log data into RAM, because my application does not use it there and it would clearly waste my RAM.
    So here comes my question: which kind of cache group should I use to gain the best performance without loading them into RAM?
    As I understand it, a dynamic cache group loads data on demand, while a regular cache group needs to load all the data into RAM first and won't load data from Oracle after that?
    Thanks in advance
    SuoNayi

    Let me be more specific. Consider this cache group:
    CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH CACHE GROUP CG_SWT
    FROM
    TPARENT (
        PPK   NUMBER(8,0) NOT NULL PRIMARY KEY,
        PCOL1 VARCHAR2(100)
    ),
    TCHILD (
        CPK   NUMBER(6,0) NOT NULL PRIMARY KEY,
        CFK   NUMBER(8,0) NOT NULL,
        CCOL1 VARCHAR2(20),
        FOREIGN KEY ( CFK ) REFERENCES TPARENT ( PPK )
    );
    INSERTs into TPARENT will not do any existence check in Oracle. An INSERT into TCHILD has to verify that the corresponding parent row exists. If the parent row exists in TimesTen, then no check is done in Oracle. If the parent row does not exist in TimesTen, then we have to check whether it exists in Oracle, and if it does we will load it into TimesTen from Oracle (along with any other child rows) before completing the INSERT in TimesTen. So in the case where the parent already exists in TimesTen there is no overhead, but in the other case there is a lot of overhead.
    If your log table is truly not related to the main table (not in TT and not in Oracle either), then they should go into separate cache groups. If each insert into the log table has a unique key and there is no possibility of duplicates, then you do not need to load anything into RAM. You can start with an empty table and just insert into it (since each insert is unique). Of course, if you just keep inserting you will eventually fill up the memory in TimesTen. So, you need a mechanism to 'purge' rows that are no longer needed from TimesTen (they will still exist in Oracle, of course). There are really two options: investigate TimesTen automatic aging (see the documentation) - this may be adequate if the insert rate is not too high - or implement a custom purge mechanism using UNLOAD CACHE GROUP (see the documentation).
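    A rough sketch of such a purge, assuming the log cache group is called CG_LOGS and the log rows carry a timestamp column LOG_TS (both names are placeholders; check the UNLOAD CACHE GROUP syntax for your TimesTen version):

    -- Unload (not delete) log rows older than 7 days from TimesTen;
    -- the rows remain in Oracle.
    UNLOAD CACHE GROUP cg_logs
     WHERE log_ts < SYSDATE - 7
    COMMIT EVERY 1000 ROWS;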
    Chris

  • Degrading performance when running consecutive insert operations

    Hi,
    I'm using DB-XML (2.5.16) as a backend storage for a web application that works on top of a TBX (TermBase eXchange) document. The application is using the Python bindings and development is being done on GNU/Linux with Python 2.6.
    The document is stored in a node-storage container, autoindexing is off at the time of container creation and transactions are enabled.
    After a set of indexes are set, queries work quite fast.
    On the other hand, when users input new data (terms) or perform edits on existing data, insert and replace operations have instant effect.
    The application has also a feature to insert lots of new terms in a single click, resulting in a new insert operation for each term. If the amount of terms to be inserted is relatively small (let's say ~10), the operation is quickly performed and the user receives a response almost instantly.
    Anyway, the problem arises when there are lots of new terms to be inserted. It starts out fast, but performance quickly degrades badly, with each insert operation taking more and more seconds. Python's CPU usage also seems to go up to 100% when doing the actual insert.
    I understand this is not the best-working scenario for DB-XML (a single large document), but I don't think this performance is normal or acceptable.
    I have tried increasing Berkeley DB's cache size to 64MB with no success.
    Any hints about what I should be looking at? Any more recommendations?
    These are the defined indexes:
    dbxml> listindexes
    Index: node-element-equality-string for node {}:admin
    Index: node-element-equality-string for node {}:descrip
    Index: node-attribute-equality-string edge-attribute-equality-string for node {}:id
    Index: node-attribute-equality-string for node {http://www.w3.org/1999/xhtml}:lang
    Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
    Index: node-element-equality-string for node {}:ref
    Index: node-element-equality-string node-element-substring-string for node {}:term
    Index: node-element-equality-string for node {}:termNote
    Index: node-attribute-equality-string edge-attribute-equality-string for node {}:type
    9 indexes found.
    Container information:
    dbxml> info
    Version: Oracle: Berkeley DB XML 2.5.16: (December 22, 2009)
             Berkeley DB 4.8.30: (2010-12-09)
    Default container name: cont.dbxml
    Type of default container: NodeContainer
    Index Nodes: on
    Auto-indexing: off
    Shell and XmlManager state:
         Transactional, no active transaction
         Verbose: on
         Query context state: LiveValues,Eager

    As you both have mentioned, I have tried increasing the cache size to 512 MB or even to 1 GB (I recreated the entire DBs after setting the cache sizes), but I don't see any significant improvements.
    I have also tried to tune my insert queries, and now I think they're in better shape than before. I would say the initial inserts feel slightly faster, but this only happens when the DB is empty (just bootstrapped). Then, once the DB has some term entries and grows in size, it starts to degrade and inserting becomes expensive by orders of magnitude.
    Each insert operation is performed in a separate transaction. And yes, I'm using transactions all over the application.
    Vyacheslav, I'll send you a couple of containers along with insert queries created by the application so you can play with.

  • Hello to all. I'm not an expert in operating systems, and I pose this question: Can I upgrade from 10.6.8 to Mountain Lion on a Mac Book Pro

    Hello to all. I'm not an expert in operating systems, and I pose this question: can I upgrade from 10.6.8 to Mountain Lion on a MacBook Pro?

    That depends on the MacBook Pro. The requirements for Mountain Lion are listed here:
    http://www.apple.com/osx/specs/
    It is available from the Mac App Store (in Applications).
    You should be aware that PPC programs (such as AppleWorks) will not run on Lion or above; and some other applications may not be compatible - there is a useful compatibility checklist at http://roaringapps.com/apps:table

  • INSERT operation - Doubt

    I would like to know what UNDO information is generated in the case of an INSERT operation, since there is no before image of a block
    in the case of an INSERT statement. How does Oracle ensure read consistency?
    Regards,
    Gufran

    >With insert, only the rowid changes get logged
    Huh? The "get logged" wording is very misleading and throwaway.
    I think you mean to say that the Undo captures only the ROWIDs -- those that need to be deleted should the transaction be rolled back. However, the INSERT operation itself -- all the rows and columns going into the table -- is logged.
    For every "normal" DML, redo captures
    a. the Undo for the DML
    b. the DML itself
    In an INSERT operation, the Undo is very small. So the Redo for the Undo is also very small. But the Redo for the DML is not necessarily small. You can see this for yourself with the sketch below.
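    A minimal way to observe this (assuming a scratch table T with a NUMBER column and a VARCHAR2 column; the statistic names are standard Oracle names):

    -- Snapshot the redo and undo generated by the current session
    SELECT sn.name, ms.value
      FROM v$mystat   ms
      JOIN v$statname sn ON sn.statistic# = ms.statistic#
     WHERE sn.name IN ('redo size', 'undo change vector size');

    -- Run an INSERT, then re-run the query above: the 'undo change vector size'
    -- delta stays small, while 'redo size' grows roughly with the inserted data.
    INSERT INTO t
    SELECT level, RPAD('x', 100, 'x')
      FROM dual
    CONNECT BY level <= 10000;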

  • Is it possible to insert a PDF directly(ie from Share Point) to email (Outlook) without saving it?

    Is it possible to insert a PDF directly (i.e. from SharePoint) into an email (Outlook) without saving it?
    I open several docs on SharePoint that I need to send out immediately, and I waste a lot of time saving them to my computer, then opening an email and attaching them, and then deleting the file because I don't need it - it's already on SharePoint where I can access it.

    I have the exact same problem. I used to be able to do it without saving, until I upgraded to a new version. I cannot figure out how to send it without saving.
