Writing transformations for OWB/OWF to implement bespoke error handling

I have implemented mappings that perform a lookup on a translation table; if the lookup is not found, a sentinel value (e.g. 'ERROR') is written to the output column of an intermediate table, e.g. xxx_temp. The intermediate table is then split into two streams and output to two tables: records with errors go to xxx_errors and valid records go to xxx_out. What I want to know is how to write a transformation that counts the number of errors in xxx_errors and returns 'success' if the count is 0 and 'error' if it is greater than 0 - because OWB Process Flows only seem to handle three-state events. Will the example function below work, or are there other parameters I must include in the function before Oracle Workflow can process it correctly?
FUNCTION etl_md_errors
RETURN NUMBER IS
  lv_status NUMBER(22) := 0;
  lv_count  NUMBER(5)  := 0;
  -- Cursor to count the number of errors in the mapping runs
  -- for the Operating Unit Master Data.
  CURSOR lcur_count_errors IS
    SELECT c1.err_cnt + c2.err_cnt + c3.err_cnt
    FROM (SELECT COUNT(ROWID) err_cnt
            FROM t_mdo_ce_errors) c1,
         (SELECT COUNT(ROWID) err_cnt
            FROM t_mdo_cstobj_errors) c2,
         (SELECT COUNT(ROWID) err_cnt
            FROM t_mdo_vntr_errors) c3;
BEGIN
  OPEN lcur_count_errors;
  FETCH lcur_count_errors INTO lv_count;
  CLOSE lcur_count_errors;
  IF lv_count <= 0 THEN
    lv_status := 1;   -- no errors: success
  ELSE
    lv_status := 3;   -- errors found: error
  END IF;
  RETURN lv_status;
EXCEPTION
  WHEN OTHERS THEN
    RETURN 3;
END etl_md_errors;
I cannot test the Process Flow deployment, as Oracle Workflow 2.6.0 has been installed on an Oracle 8i schema but the Location registration version pulldown only offers a single entry, 2.6.2.
Cheers,
Phil Thomson

Hi,
You seem to have missed the point of the posting.
I am asking how to write a transformation which can be used in a Process Flow to determine whether any lookup validation errors have occurred, e.g. to decide which e-mail to send to the system administrator. I was not asking whether the method of processing/validating the transactions needed revamping; it is what the client asked for and it is what they have tested and approved. I'll give an explanation of the background to the processing.
a) we are reading in transactions from Country Data Warehouses (SAP BW) which are placed as tables in country data source schemas.
b) we want to consolidate the transactions so that they can be loaded into a European wide Data Warehouse (SAP BW) and these are placed as tables in the target country schemas ... which are consolidated in a global schema using views and dblinks.
c) as the Country Data Warehouses use their own sets of reference/lookup values, the transactions have the country data warehouse reference codes translated to equivalent European reference codes. Therefore in each country target schema we have sets of mapping/xref tables which translate a country column value to the equivalent European column value, e.g. area_ctry and area_eur. Area_ctry is the area reference code in the Country Data Warehouse and area_eur is the area reference code to be used in the European Data Warehouse.
d) when a transaction record has a reference value that does not have an equivalent European reference value we want to flag that column and record as an error. As there are several column values to be translated, you do not want to flag only the first validation error encountered; you want to validate the entire record, and you also want to validate the entire set of transaction records. Users get a bit miffed if you fail the entire batch of transactions on the first column validation or first invalid record; they correct it, then find there are other records with errors, and have to repeat until there are no error records.
e) that is the reason we use the Key Lookup operator on the mapping/xref table, with the transaction column's country value in the input group and as the value for the Lookup Condition. On the Key Lookup operator's output group we set the default value for the looked-up column to 'ERROR', so if no lookup match is found, 'ERROR' will be output as the looked-up value.
f) the Split operator is then used to identify error records, e.g. area_eur = 'ERROR' OR region_eur = 'ERROR' etc., and valid records, e.g. area_eur != 'ERROR' AND region_eur != 'ERROR' etc. The error records are output to their own table and the valid records are output to their own table.
g) if there are true SQL errors, e.g. tablespace exceeded or referenced procedure/function state changed, these will be handled by OWB and should be viewed via the OWB audit browser.
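For clarity, the lookup-with-default plus split logic in e) and f) boils down to SQL along these lines (table names other than the area/region columns are illustrative, not our actual schema):

```sql
-- Step e): outer-join lookup against the xref tables; a missing
-- translation yields the 'ERROR' sentinel via the default value.
INSERT INTO xxx_temp (tx_id, area_eur, region_eur)
SELECT t.tx_id,
       NVL(a.area_eur,   'ERROR'),
       NVL(r.region_eur, 'ERROR')
FROM   src_transactions t,
       xref_area        a,
       xref_region      r
WHERE  t.area_ctry   = a.area_ctry (+)
AND    t.region_ctry = r.region_ctry (+);

-- Step f): split into the error stream and the valid stream.
INSERT INTO xxx_errors
  SELECT * FROM xxx_temp
   WHERE area_eur = 'ERROR' OR region_eur = 'ERROR';

INSERT INTO xxx_out
  SELECT * FROM xxx_temp
   WHERE area_eur != 'ERROR' AND region_eur != 'ERROR';
```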
As stated in my post, what is the template for PL/SQL functions that can be used as Transformations in Process Flows with their 'SUCCESS', 'WARNING' and 'ERROR' transition conditions?
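To make the question concrete, this is the shape of template I have in mind, assuming the function's numeric return value is mapped onto the three transitions as 1 = SUCCESS, 2 = WARNING, 3 = ERROR (that numeric mapping is exactly the assumption I am asking to have confirmed):

```sql
-- Sketch of a generic status function for a Process Flow transition.
-- Assumed mapping: 1 = SUCCESS, 2 = WARNING, 3 = ERROR.
FUNCTION pf_status (p_err_cnt  IN NUMBER,
                    p_warn_cnt IN NUMBER DEFAULT 0)
RETURN NUMBER IS
BEGIN
  IF p_err_cnt > 0 THEN
    RETURN 3;        -- take the ERROR transition
  ELSIF p_warn_cnt > 0 THEN
    RETURN 2;        -- take the WARNING transition
  ELSE
    RETURN 1;        -- take the SUCCESS transition
  END IF;
END pf_status;
```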
Hope the above explanation helps,
Cheers,
Phil Thomson

Similar Messages

  • How to implement general error handler in labview projects


    Hello,
    You may also find these links useful:
    Custom Error Handling In LabVIEW
    http://zone.ni.com/devzone/conceptd.nsf/webmain/de4f036f22c4b9f286256fee0010b6fd
    LabVIEW Introduction Course - Six Hours (has a section on error handling)
    http://zone.ni.com/devzone/learningcenter.nsf/03f7c60f17aad210862567a90054a26c/55974411828f779086256...
    Hope this helps!
    Charlie S.
    Visit ni.com/gettingstarted for step-by-step help in setting up your system

  • LabVIEW for Mapping of Network Drives with Error Handling

    Is there a way to do the following easily in LabVIEW?
    - List drive letters and network paths of existing mapped drives on host PC
    - Check if a desired mapped drive is connected and if its disconnected reconnect to it
    - Receive related errors that may occur during attempting a connection to a mapped drive (i.e. path cannot be found, etc...)
    I know how to do most of this in VBScript but I'd like to avoid having ANY code for the work I'm doing be "non-G"!
    Thanks for your time!

    Hello Ray,
    You may want to take a look at the following Knowledgbase articles on NI.com
    How Does LabVIEW Find Which Disk Drives Are in My Computer?
    http://digital.ni.com/public.nsf/websearch/9DFF8F1788A7171A86256D10003776C0?OpenDocument
    How Can I Programmatically Map a Network Drive in LabVIEW on a Windows 2000/XP Machine?
    http://digital.ni.com/public.nsf/websearch/313C5E597B48652F86256D2700674EDB?OpenDocument
    Best Regards,
    Chris J

  • LV7.1 Strange behavior with Automatic Error Handling occuring when it shouldn't [LV 7.1 Pro on WinXP for Tablet PC's]

    [LV 7.1 Pro on WinXP for Tablet PC's]
    I recently let a rather large LV app of mine run in the development environment while I was out for a couple of days. Upon returning I found that the app had hung for ~22 hours waiting for an answer to an Automatic Error Handling (AEH) dialog proclaiming an Error 7 in New File without any indication of the VI hierarchy that called New File.  I set about ensuring that AEH dialogs would not pop up and have not been able to discover how I could have possibly received one in the first place.
    Subsequent investigation revealed:
    Neither AEH option in Options>Block Diagrams were checked.
    Network problems had occurred around the time that the app had hung.  All file paths are network paths when running in the development environment, so the cause of the error was most likely valid, even if the AEH dialog appearance wasn't.
    My app has only one instance where the New File primitive is used by me. That subVI and all others above it in the hierarchy DO NOT have the AEH property enabled.  The error out cluster of New File in my subvi is wired.
    My app has three instances where New File is called from a vi.lib vi (Open/Create/Replace File.vi, Open Config Data.vi, and Prompt Web Browser Path.vi), none of which have the AEH property enabled.  Nor does any of their calling VI's.  All three instances also have their error out cluster wired.
    A utility to examine the AEH property of all VI's (with all top level and dynamic VI's loaded) in memory reported that only 1 of 308 vi's ( RGT Does File Exists.vi from the Report Generation Toolkit) had that property true.  That vi has no subVI's other than the File/Directory Info primitive and no calling VI's in common with any of the vi's that call New File, except a top level VI.
    As long as 'Enable automatic error handling dialogs' remains unselected in options>block diagram, I am unable to get an AEH dialog for either the New File or File/Directory Info primitives in a test VI with AEH property enabled and their error out clusters unwired no matter what invalid path I pass to the functions.  As soon as the options>block diagram>Enable AEH dialogs' is selected, both primitives fire AEH dialogs with no error out wired and don't when wired. i.e. works as advertised.
    In other words I can find no reason why I should have gotten the problem AEH dialog...
    I cannot afford for this app to hang because of a network problem, other portions of the app that were running concurrently correctly handled the error and, had the AEH dialog not appeared, the app would have made corrections or shutdown in an orderly fashion.
    Any ideas?

    Very good.
    Write Characters to File.vi>Open/Create/Replace File.vi>New File
    New File throws the error.  Open/Create/Replace strips the hierarchy from the source of the error.  Write Characters passes it to the General Error Handler.  I never looked above O/C/R file in the hierarchy except for enable automatic error handling property.  The tip-off should have been to realize that O/C/R file was stripping the hierarchy from the error and look above that. 
    The real irony is that Write Characters was being used to log error cluster data to an error log file...
    Save as... Copy without updating... the OEM 'Write Characters to File' is gone from this app.
    Thanx (a bunch)

  • Error handling for distributed cache synchronization

    Hello,
    Can somebody explain to me how the error handling works for the distributed cache synchronization ?
    Say I have four nodes of a weblogic cluster and 4 different sessions on each one of those nodes.
    On Node A an update happens on object B. This update is going to be propagated to all the other nodes B, C, D. But for some reason the connection between node A and node B is lost.
    In the following xml
    <cache-synchronization-manager>
    <clustering-service>...</clustering-service>
    <should-remove-connection-on-error>true</should-remove-connection-on-error>
    If I set this to true, does this mean that TopLink will stop sending updates from node A to node B? I presume all of this is transparent, and that in order to handle any errors I do not have to write any code to capture this kind of error.
    Is that correct?
    Aswin.

    This "should-remove-connection-on-error" option mainly applies to RMI or RMI_IIOP cache synchronization. If you use JMS for cache synchronization, then connectivity and error handling is provided by the JMS service.
    For RMI, when this is set to true (which is the default) if a communication exception occurs in sending the cache synchronization to a server, that server will be removed and no longer synchronized with. The assumption is that the server has gone down, and when it comes back up it will rejoin the cluster and reconnect to this server and resume synchronization. Since it will have an empty cache when it starts back up, it will not have missed anything.
    You do not have to perform any error handling, however if you wish to handle cache synchronization errors you can use a TopLink Session ExceptionHandler. Any cache synchronization errors will be sent to the session's exception handler and allow it to handle the error or be notified of the error. Any errors will also be logged to the TopLink session's log.

  • Error handling for master data with direct update

    Hi guys,
    For master data with flexible update, error handling can be defined in the InfoPackage, and if the load is performed via the PSA there are several options - clear so far. But what about direct update?
    My specific question is: if an erroneous record (e.g. invalid characters) occurs in a master data load using direct update, this will set the request to red. But what does this mean for the other records of the request (which are correct)? Are they written to the master data tables, so that they are present once the master data is activated, or is nothing written to the master data tables if a single record is erroneous?
    Many thanks,
    / Christian

    Hi Christian -
    The difference between flexible upload and direct upload is that direct upload does not have update rules; direct upload will have the PSA as usual and you can do testing in the PSA.
    For the second part, when you load master data: if an error occurs, all the records for that request number will have error status, so activation will have no impact, i.e. no new records from the failed load will be available.
    hope it helps
    regards
    Vikash

  • Error handler for event based messaging framework

    I've been very interested in using the event based messaging framework (described here http://forums.ni.com/t5/LabVIEW/Community-Nugget-2009-03-13-An-Event-based-messageing-framework/td-p...) for my next large application.
    My main concern is the fact that it seems like typos would be very difficult to debug since you need to ignore unknown commands to make this system work.
    To solve this problem I've been considering the idea of having a single message error handler VI which will store all valid commands and check all sent commands to see if they are valid.  Each VI would send out a register message on startup with their name and all messages they can send and receive.  The message error handler would store these and then check all future messages to be sure it is a valid message, throwing an error if it is not.
    My basic problem is this: for this to work the message error handler VI would have to be started before any messages are sent so that it can capture all the register events.  If this is a VI that will be continuously running the entire application how can I ensure it starts first since I cannot wait for it to complete? (I.e. the usual method of running an error out wire or using a sequence structure will not work since everything will then wait for it to complete which will not happen until the program is ready to shut down)
    I'm assuming the answer might be to use an asynchronous call but I'm not very familiar with this method.  
    Any help is appreciated.  Thanks. 

    Could you just use the error handler as a subVI inside a case structure that is only called when you have new message to be checked? I'm not sure I understood the exact functionality you are looking for, so sorry if this does not apply.
    Zach P.
    Product Support Engineer | LabVIEW R&D | National Instruments

  • Getting Error In the Routine - While writing Code for the Cross Reference.

    Hi,
    Getting Error In the Start Routine - While writing Code for the Cross Reference from the Text table ( /BIC/TZMDES with Fields /BIC/ZMDES(Key),TXTSH ) Getting Error as [ E:Field "ZMDES" unknown ].
    Transformation : IOBJ ZPRJ3(Source) -> IOBJ ZPRJC ( Target ).
    The Source  Fields are: 0logsys(Key),zprj3(Key),ZDOM3.
    The Target Fields are : 0logsys(Key),zprjc(Key),ZDOM3, UID.
    Here i am trying to Update the target Field UID by Comparing the Source Field [ zprj3(Key)] with the Text table ( /BIC/TZMDES ) and update the UID.
    The Code is as below:
    Global Declarations in the Start Routine:
    Types: begin of itabtype,
            ZMDES type /BIC/TZMDES-/BIC/ZMDES,
            TXT type /BIC/TZMDES-TXTSH,
             end of itabtype.
    data : itab type standard table of itabtype
    with key ZMDES,
    wa_itab like line of itab.
    Routine Code :
    select * from /BIC/TZMDES into corresponding fields of table itab for
    all entries in SOURCE_PACKAGE
    where ZMDES = SOURCE_PACKAGE-/BIC/ZPRJ3.
    READ TABLE itab INTO wa_itab
    WITH KEY ZMDES = SOURCE_PACKAGE-/BIC/ZPRJ3
    BINARY SEARCH.
    IF SY-SUBRC = 0.
    RESULT = wa_itab.
    CLEAR wa_itab.
    The tys_SC_1 structure is :
    BEGIN OF tys_SC_1,
         InfoObject: 0LOGSYS.
            LOGSYS           TYPE RSDLOGSYS,
         InfoObject: ZPRJ3.
            /BIC/ZPRJ3           TYPE /BIC/OIZPRJ3,
         InfoObject: ZDOM3.
            /BIC/ZDOM3           TYPE /BIC/OIZDOM3,
         Field: RECORD.
            RECORD           TYPE RSARECORD,
          END   OF tys_SC_1.
        TYPES:
          tyt_SC_1        TYPE STANDARD TABLE OF tys_SC_1
                            WITH NON-UNIQUE DEFAULT KEY.
    Please suggest with your valuable inputs.
    Thanks in Advance

    I have split the code in two: one part for the start routine, the other for a field routine. Hope this helps.
    Types: begin of itabtype,
    ZMDES type /BIC/TZMDES-/BIC/ZMDES,
    TXT type /BIC/TZMDES-TXTSH,
    end of itabtype.
    data : itab type standard table of itabtype
    with key ZMDES,
    wa_itab like line of itab.
    Start routine
    * The database field is /BIC/ZMDES, not ZMDES - hence the "Field ZMDES unknown" error.
    SELECT /BIC/ZMDES AS zmdes TXTSH AS txt
      FROM /BIC/TZMDES
      INTO CORRESPONDING FIELDS OF TABLE itab
      FOR ALL ENTRIES IN SOURCE_PACKAGE
      WHERE /BIC/ZMDES = SOURCE_PACKAGE-/BIC/ZPRJ3.
    SORT itab BY zmdes.
    field routine
    CLEAR wa_itab.
    READ TABLE itab INTO wa_itab
      WITH KEY zmdes = SOURCE_FIELD-/BIC/ZPRJ3
      BINARY SEARCH.
    IF sy-subrc = 0.
      RESULT = wa_itab-<field name>.   " e.g. wa_itab-txt
    ENDIF.

  • Transformations for 2LIS_13_VDHDR,2LIS_12_VCHDR/ITM,2LIS_11_VAHDR/ITM

    Hello all,
    I am implementing the standard Sales & Distribution module in BI 7.0.
    I am getting a transformation only for 2LIS_13_VDITM.
    All the other datasources 2LIS_13_VDHDR, 2LIS_12_VCHDR, 2LIS_12_VCITM, 2LIS_11_VAHDR and 2LIS_11_VAITM are in 3.x.
    I am not getting any transformations for them in BI Content.
    So what can I do now?
    Can I migrate all of these to BI 7? If I migrate them, will I get their transformations, mappings etc. after migration?
    If I have to build the transformations manually one by one, where can I get the mapping details for these datasources?
    If I migrate these and then create transformations for them, will the mapping happen automatically or do I need to do it manually?
    Please help.
    Thanks,
    Sadanand

    Hi,
    We don't have the TFs for those datasources.
    Activate the update rules and migrate them to TFs. Even after migrating from the update rules, you still need to do the coding of the start routines, etc.
    Regards,
    Anil Kumar Sharma .P

  • Media Foundation transforms for BackgroundMediaPlayer

    So I want to apply some media foundation effect to the backgoundmediaplayer samples before they get rendered. While this seems an obvious mission with the media stream source class, I do not really have the time to implement media stream sources for all
    formats supported by windows phone just to get to the raw PCM inside them, and then do the things I want to do with them.
    The media foundation transforms do something similar to what I want to do. Is there any way I can use a transform for BackgroundMediaPlayer, intercept PCM samples delivered by built-in system codecs and modify them accordingly? Or do I need to rely on media stream sources?

    Hello,
    I'm sorry if I caused any confusion. Let me try to offer some clarification. The MediaStreamSource was designed specifically to address
    the need to ingest and parse 3rd party streaming protocols such as HLS. In this scenario you connect to, parse and pass the encoded video samples downstream and allow the hardware to decode the samples.
    While it is certainly possible to implement a stream source and codec in the same MediaStreamSource context you may not
    get the performance needed to present a good user experience. Again we just didn't intend the MediaStreamSource to be used in this way.
    While using C++ to implement the decoder may improve performance there is still a need to marshal the data between managed and unmanaged
    code. Once in managed code the GC may run and cause sample delivery to stop unexpectedly. As I'm sure you are aware actively managing the buffer size can help to reduce this effect at the expense of latency. This is an extremely advanced topic.
    That said, your business requirements may allow for dropouts to occur. This is up to you to decide. I just want to make sure that you are
    aware of the repercussions of choosing this architecture and that it is not a scenario that we intended or tested.
    So... Because of this I would never recommend this approach. I guess if it works great but be aware of the risk.
    Okay, let’s talk about the BackgroundMediaPlayer architecture a bit. Windows Phone 8 (WP8) was a three process architecture. Windows Phone 8.1 (WP8.1) is a two process architecture.
    I won’t go into how WP8 works. In WP8.1 you have a UI process and a background player process. The background player process is registered as a singleton instance OOPServer.
    This means that the background player will only be created if one does not already exist in the current app container.
    The background player process contains your background audio code. Just like Windows 8 this process is given a PLM exemption that allows it to continue to run in the background
    while your UI process can be suspended. Once the exemption is registered, management of the background player process is handed off to the media manger. The media manager coordinates closely with the PLM to ensure there is only one active background player
    process at a given time. This means only one app can be playing audio in the background at any time.
    Okay so if you have gotten this far you get some potentially good news and an apology. I humbly admit that I was incorrect. I spent most of the afternoon going over
    specs and the phone source code. I found that it should theoretically be possible to add additional components to the underlying background player MF topology.
    From what I have been able to decipher, you can register a custom MFT in the background player code and have it automatically join the topology when it is created. You can do this via the MediaExtensionManager. Sounds good, right? Keep in mind this is theoretical. I haven't tried this and there may be some caveats, explained below.
    Here are what I can see as caveats: the MediaExtensionManager only allows you to register certain types of MFTs, in particular encoders, decoders and stream handlers. Also you can't override any of the MF components that are handled by a known format. In other words, you can't override our MP4 decoder. So it's not as easy as just saying "stick my MFT here".
    After much thought it still might be possible to get a MFT to join the topology after the decoder. In the case of audio I’m thinking that we need to create an MFT that takes
    uncompressed audio in and outputs uncompressed audio. The question here is: “How does the topology manager evaluate components available to participate in the topology?”
    The topology we want looks like this: S->D->C->R where C is our custom MFT. The decoder (D) outputs PCM. The renderer (R) takes PCM as the input. Our C takes PCM as
    an input and output. When the topology loader builds the topology it connects the nodes from left to right. The question is: "Will our C be evaluated before the R?" If our C is evaluated before the R then we are money. However, if the R is evaluated first, our C will be left out of the topology. I honestly don't know enough about the intricacies of the topology loader to say with any certainty whether this will work (and what about format conversions?)
    Again we are trying to do things that the original developers never intended and didn’t test. If it works great! Please let me know. If it doesn’t I will certainly request
    this feature for the next version of the Phone.
    I hope this helps,
    James
    Windows SDK Technologies - Microsoft Developer Services - http://blogs.msdn.com/mediasdkstuff/

  • Transformations for 2lis_02_itm, 2lis_02_scl

    Dear Team,
    I am currently implementing the Purchasing module using the datasources 2lis_02_itm and 2lis_02_scl. I have installed the 0pur_c01 cube from Business Content, but Business Content does not have 7.0 transformations from the datasource to the cube; only the update rules are fully available.
    How can we get activated complete transformations flow in system.
    Transformations for 2lis_02_itm data source is available. but the complete rules are not mapped.
    Please let me know the solution if some have faced the same situation.
    Current our system is in bi content 704 and SAP Netweaver 7.01.
    Regards,
    Don

    Hi,
    The simple way is to install the 3.x flow from BI Content and then migrate it to the 7.0 flow; we have done that most of the time when there is no BI Content available for the new dataflow.
    --> how-to doc for reference
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/00d94ca8-9538-2c10-9b99-859afde1fa4a?QuickLink=index&overridelayout=true
    Regards,
    Sathya

  • Predefined transformations for drop/create index?

    Hi,
    I want to add pre/postmap transformations for dropping/recreating indexes in the target table, for performance reasons.
    I can't find any predefined transformations for this (such as WB_DISABLE/ENABLE_ALL_CONSTRAINTS for constraints). It seems I have to program this myself (dynamic PL/SQL for reuse with other tables seems the logical choice), but it seems such a dumb oversight - has Oracle really not supplied predefined transformations for this? It must be an extremely common need in ETL?
    Regards,
    Kai Quale
    University of Oslo

    On further thought, one needs more information than the table name to (re)create an index. (That's actually one of my pet peeves against Oracle - that you can't enable/disable indexes in the same way as constraints.)
    Offhand, the only solution I see is to store index metainformation in a separate table, write custom procedures for disabling FKs/dropping indexes and enabling FKs/creating indexes, using the metatable.
    It is a pretty common task in data warehousing - disable FK/drop index => run mapping => enable FK/create index. How do people do this in OWB?
    Regards,
    Kai
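    A sketch of the dynamic PL/SQL approach, driven off the data dictionary rather than a hand-maintained metatable (the procedure name and the UNUSABLE/REBUILD strategy are just one way to do it; combine with ALTER SESSION SET skip_unusable_indexes = TRUE during the mapping run so DML ignores the marked indexes):

    ```sql
    -- Mark all indexes on a table unusable before the load, rebuild after.
    CREATE OR REPLACE PROCEDURE set_indexes (p_table  IN VARCHAR2,
                                             p_action IN VARCHAR2)  -- 'DROP' or 'BUILD'
    IS
    BEGIN
      FOR r IN (SELECT index_name
                  FROM user_indexes
                 WHERE table_name = UPPER(p_table)) LOOP
        IF p_action = 'DROP' THEN
          EXECUTE IMMEDIATE 'ALTER INDEX ' || r.index_name || ' UNUSABLE';
        ELSE
          EXECUTE IMMEDIATE 'ALTER INDEX ' || r.index_name || ' REBUILD';
        END IF;
      END LOOP;
    END set_indexes;
    ```

    Called as a premap (set_indexes('XXX_OUT', 'DROP')) and postmap (set_indexes('XXX_OUT', 'BUILD')) transformation around the mapping.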

  • How do i use MIB-2 for writing MIB for my application?

    I want to use the existing MIB-2 to write MIB variables for my application. The application is energy metering. So how do I use MIB-2 for writing MIBs for my application?
    Santosh Chavan
    IIT MADRAS.

    For others who do not know what an MIB is, here is link that shows some more information:
    http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/snmp.htm .   A quick Google of MIB did not turn up any useful information, but looking for SNMP quickly found this link. 
    It turns out that I am not the person to help, but I always want to know more as I may have a similar problem later. I know from personal experience that it is easy to use a term commonly in a company or industry but have no one outside of that industry understand. I was asking if that was such a term (hopefully with a little humor). I'm sorry if my joke fell flat.
    I hope that you get an answer,
    Bob Young
    Bob Young - Test Engineer - Lapsed Certified LabVIEW Developer
    DISTek Integration, Inc. - NI Alliance Member
    mailto:[email protected]

  • New licensing for OWB 10g R2 (Paris)

    Hi,
    Does anyone know how much the new licenses (per DB server CPU) for OWB 10g R2 (Paris) cost?
    Is it correct that the features for the modeling of SCD 1 and 2 are not included in the basic "Core ETL Features"?
    If I'm using the the SCD 2 features to develop the OWB mappings do I also need to license the "Enterprise ETL" for the production server?
    Related to the previous question: for the production DB where I only need the runtime part of the repository is it required to license any options?
    Regards
    Maurice
    PS: 2 links related to these questions:
    http://www.rittman.net/archives/2006/05/owb_10g_release_2_now_availabl.html
    http://www.oracle.com/technology/products/warehouse/htdocs/owb_10gr2_faq.htm#HowisOWBPackaged

    Maurice, not sure what to say. If you've been using OWB, then you don't lose any functionality going to the new version, and it will be "free". You simply won't get any of the new functionality (all of the old functionality is included in the "core" features)
    However, as I said in the other post, I hope Oracle reconsiders the SCD 1 / 2 / 3 licensing. To me, SCD type 2 is not "enterprise level" functionality - that is base-level functionality for ANY data mart or data warehouse. I have no problem with paying for options the other ETL vendors are charging for (data quality comes to mind...), but if we deploy a large DW on a 32-processor box, paying for the Oracle licenses AND an additional $300,000+ for OWB functionality just to simplify SCD type 2 seems WAY overkill. Actually, in our project, we had pretty much settled on Oracle as the RDBMS for our DW, but if OWB is up in the air, we will probably open this up for an RFQ.
    Scott

  • Error while activating transformation for a infocube.

    hi experts,
    This is Lalitha.I am new to BI7.
    I am trying to load data from a flat file into an InfoCube. I have loaded the master data successfully. While creating the datasource for transactional data I get a warning: External length specification will be ignored.
    Even then I was able to load data into the PSA. Now I am trying to create a transformation for the InfoCube, but I am getting an error: Conversion type missing,
    and: Field /BIC/IO_PRC9 must be assigned to an InfoObject.
    I have mentioned the infoObjects for datasource in the fields tab.
    Any suggestions will be appreciated.
    Thanks in advance.
    regards,
    Lalitha

    Hi,
    Sometimes we have to assign the rule manually, as in some cases the system does not automatically create the 1:1 rule.
    Click on the = sign in the transformation and you will get a window; under 'assign objects' you can assign the required source field to the target field. This will clear your second error.
    For the first error -
    The conversion type is missing for the Price object. Did you create the Price key figure with a fixed currency or with 0CURRENCY? If it is 0CURRENCY, the source system should have one extra field carrying the unit for the Price field. As far as I'm concerned, if it is a fixed currency it should activate - delete that particular transformation and recreate it.
    That's what I used to do, and I got the result.
    Have a fun
    Cheers,
    Shrinu
