Journalizing data changes in relational source

Is there any standard way to allow users to add comments to data changes?
And then to show the data changes for a specified fact upon request?
I am using a relational data source, not Essbase…

Did you already have a look at the "write-back" functionality in OBIEE?
regards
John
http://obiee101.blogspot.com/
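
If write-back alone is not enough, one common relational pattern is a comment/audit table maintained alongside the fact table and exposed as its own source in the repository. The following is a minimal sketch only, with purely illustrative names, not a feature of OBIEE itself:

-- Illustrative change-log table: one row per commented data change.
CREATE TABLE fact_change_log (
  change_id    NUMBER          PRIMARY KEY,
  fact_id      NUMBER          NOT NULL,       -- key of the changed fact row
  column_name  VARCHAR2(30),
  old_value    VARCHAR2(4000),
  new_value    VARCHAR2(4000),
  changed_by   VARCHAR2(30),
  changed_at   DATE            DEFAULT SYSDATE,
  user_comment VARCHAR2(4000)                  -- the user's annotation
);

Joined back to the fact on fact_id, a table of this shape lets a report show the change history and comments for the specified fact on request.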

Similar Messages

  • Error: Data load from relational source (SQL Server 2005) to Essbase Cube

    Hi All,
    I am looking for help from you. I am trying to load data from a SQL Server 2005 table to an Essbase cube using the IKM SQL to Hyperion Essbase (Metadata) module.
    I am getting the error below. Let me know if I am missing something.
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 61, in ?
    com.hyperion.odi.essbase.ODIEssbaseException: Invalid value specified [RULES_FILE] for Load option [null]
         at com.hyperion.odi.essbase.ODIEssbaseMetaWriter.validateLoadOptions(Unknown Source)
         at com.hyperion.odi.essbase.AbstractEssbaseWriter.beginLoad(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java)
         at org.python.core.PyMethod.__call__(PyMethod.java)
         at org.python.core.PyObject.__call__(PyObject.java)
         at org.python.core.PyInstance.invoke(PyInstance.java)
         at org.python.pycode._pyx3.f$0(<string>:61)
         at org.python.pycode._pyx3.call_function(<string>)
         at org.python.core.PyTableCode.call(PyTableCode.java)
         at org.python.core.PyCode.call(PyCode.java)
         at org.python.core.Py.runCode(Py.java)
         at org.python.core.Py.exec(Py.java)
         at org.python.util.PythonInterpreter.exec(PythonInterpreter.java)
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:144)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    com.hyperion.odi.essbase.ODIEssbaseException: com.hyperion.odi.essbase.ODIEssbaseException: Invalid value specified [RULES_FILE] for Load option [null]
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)

    ODI Step: Prepare for Loading Step:
    from java.util import HashMap
    from java.lang import Boolean
    from java.lang import Integer
    from com.hyperion.odi.common import ODIConstants
    from com.hyperion.odi.connection import HypAppConnectionFactory
    # Target planning connection properties
    serverName = "HCDCD-HYPDB01"
    userName = "admin"
    password = "<@=snpRef.getInfo("DEST_PASS") @>"
    application = "BUDGET01"
    database = "PLAN1"
    portStr = "1423"
    srvportParts = serverName.split(':',2)
    srvStr = srvportParts[0]
    if(len(srvportParts) > 1):
        portStr = srvportParts[1]
    # Put the connection properties and initialize the essbase loader
    targetProps = HashMap()
    targetProps.put(ODIConstants.SERVER,srvStr)
    targetProps.put(ODIConstants.PORT,portStr)
    targetProps.put(ODIConstants.USER,userName)
    targetProps.put(ODIConstants.PASSWORD,password)
    targetProps.put(ODIConstants.APPLICATION_NAME,application)
    targetProps.put(ODIConstants.DATABASE_NAME,database)
    targetProps.put(ODIConstants.WRITER_TYPE,ODIConstants.DATA_WRITER)
    print "Initalizing the essbase wrapper and connecting"
    pWriter = HypAppConnectionFactory.getAppWriter(HypAppConnectionFactory.APP_ESSBASE, targetProps);
    tableName = "BUDGET01_PLAN1"
    rulesFile = r"ActLd"
    ruleSeparator = "Tab"
    clearDatabase = "None"
    calcScript = r""
    maxErrors = 1
    logErrors = 1
    errFileName = r"E:\OraHome_ODI\Error\Budget01Plan1.err"
    logEnabled = 1
    logFileName = r"E:\OraHome_ODI\Error\Budget01Plan1.log"
    errColDelimiter = r","
    errRowDelimiter = r"\r\n"
    errTextDelimiter = r"'"
    logHeader = 1
    commitInterval = 1000
    calcOnly = 0
    preMaxlScript = r""
    postMaxlScript = r""
    abortOnPreMaxlError = 1
    # set the load options
    loadOptions = HashMap()
    loadOptions.put(ODIConstants.CLEAR_DATABASE, clearDatabase)
    loadOptions.put(ODIConstants.CALCULATION_SCRIPT, calcScript)
    loadOptions.put(ODIConstants.RULES_FILE, rulesFile)
    loadOptions.put(ODIConstants.LOG_ENABLED, Boolean(logEnabled))
    loadOptions.put(ODIConstants.LOG_FILE_NAME, logFileName)
    loadOptions.put(ODIConstants.MAXIMUM_ERRORS_ALLOWED, Integer(maxErrors))
    loadOptions.put(ODIConstants.LOG_ERRORS, Boolean(logErrors))
    loadOptions.put(ODIConstants.ERROR_LOG_FILENAME, errFileName)
    loadOptions.put(ODIConstants.RULE_SEPARATOR, ruleSeparator)
    loadOptions.put(ODIConstants.ERR_COL_DELIMITER, errColDelimiter)
    loadOptions.put(ODIConstants.ERR_ROW_DELIMITER, errRowDelimiter)
    loadOptions.put(ODIConstants.ERR_TEXT_DELIMITER, errTextDelimiter)
    loadOptions.put(ODIConstants.ERR_LOG_HEADER_ROW, Boolean(logHeader))
    loadOptions.put(ODIConstants.COMMIT_INTERVAL, Integer(commitInterval))
    loadOptions.put(ODIConstants.RUN_CALC_SCRIPT_ONLY,Boolean(calcOnly))
    loadOptions.put(ODIConstants.PRE_LOAD_MAXL_SCRIPT,preMaxlScript)
    loadOptions.put(ODIConstants.POST_LOAD_MAXL_SCRIPT,postMaxlScript)
    loadOptions.put(ODIConstants.ABORT_ON_PRE_MAXL_ERROR,Boolean(abortOnPreMaxlError))
    #call begin load
    pWriter.beginLoad(loadOptions)
    Execution step from Operator:
    Read rows: 0
    Insert/Delete/Update rows: 0
    ODI Step: Load Data Into Essbase
    from com.hyperion.odi.common import ODIConstants
    from com.hyperion.odi.connection import HypAppConnectionFactory
    from java.lang import Class
    from java.lang import Boolean
    from java.sql import *
    from java.util import HashMap
    # Get the select statement on the staging area:
    sql= """select C1_ACCOUNT "Account",C3_TIMEPERIOD "TimePeriod",C4_LOBS "LOBs",C5_TREATY "Treaty",C6_SCENARIO "Scenario",C7_VERSION "Version",C8_CURRENCY "Currency",C9_YEAR "Year",C10_DEPARTMENT "Department",C11_ENTITY "Entity",C2_DIVLOC "DivLoc",C12_DATA "Data" from OdiMapping.dbo.C$_0BUDGET01_PLAN1Data where      (1=1) """
    srcCx = odiRef.getJDBCConnection("SRC")
    stmt = srcCx.createStatement()
    srcFetchSize=30
    stmt.setFetchSize(srcFetchSize)
    rs = stmt.executeQuery(sql)
    #load the data
    stats = pWriter.loadData(rs)
    #close the database result set, connection
    rs.close()
    stmt.close()
    ODI Step: Report Statistics
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 2, in ?
    Essbase Writer Load Summary:
         Number of rows successfully processed: 1
         Number of rows rejected: 0
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.h.y(h.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    Where am I going wrong with the data load into Essbase? How will I find the correct information to load data into Essbase?

  • How does Sybase Replication Server capture data changes?

    Hello,
    as far as I know, Sybase Replication Server is a central component in a HANA-based environment when it comes to replicating data towards the HANA engine.
    I scanned briefly through a Sybase white paper, but it gives no technical description of how the Sybase Replication Server captures data changes on the source database.
    Can someone give an explanation here?
    All the best,
    Guido

    Hello Marc,
    thanks for the fair and honest answer!
    I'm currently involved in an SAP-based project where we migrated the business on the DB level, which is a kind of open-heart operation. I personally have huge respect for this approach.
    Capturing the changes on the DB level is one approach, but I personally think that you need to capture events on the business object level. For example, the good old SAP workflow knows events for business objects. Also, in an ESA-driven application you should have some kind of eventing for business objects.
    For the time being the current approach might work for a kind of proof of concept, but on a mid- and long-term basis you need to RETHINK!
    All the best & Merry Christmas & Happy New Year
    Guido

  • Can't view changed data in journal data

    Hi,
    I have implemented JKM Oracle 10g Consistent Logminer on Oracle 10g with the following option.
    - Asynchronous_mode : yes
    - Auto_configuration : yes
    1. Change Data Capture -> Add to CDC, 2.Subscriber->subscribe (sunopsis),
    3. Start Journal
    The journal has been started correctly without errors. The journalized table always shows the green clock symbol. Everything is working.
    I then inserted 1 record into the source table, but I can't view the changed data in the journal data. I can't understand why no journal data was generated.
    There are no errors.
    Help me !!!

    Was your Designer set to the right context?
    Look at the list box at the top right of the Designer interface.
    It must be the same context as the one where you defined your journalization.

  • Best practice "changing several related objects via BDT" (Business Data Toolset)

    Hello,
    I want to start a discussion to find a best-practice method for changing several related master data objects via BDT. At the moment we are faced with miscellaneous requirements where we have a master data object that uses the BDT framework for maintenance (in our case an insured object). While changing or creating the insured object, several related objects, e.g. a Business Partner, should also be changed or created. So I am searching for a best-practice approach for implementing such a solution.
    One idea was to call a report via SUBMIT AND RETURN in event DSAVC or DSAVE. Unfortunately this implementation method has only poor options for handling errors. Second, it is also hard to keep the LUW together.
    Another idea is to call an additional BDT instance in the DCHCK event via FM BDT_INSTANCE_SELECT and the parameters iv_xpush_classic = 'X' and iv_xpop_classic = 'X'. So far we didn't get this solution working correctly, because there is always something missing (e.g. global memory is not transferred correctly between the two BDT instances).
    So hopefully you can report on your implementations so we can find a best-practice approach for such requirements.
    Hello,
    I would like to start a discussion here to find a best-practice approach describing a BDT implementation/extension in which several dependent BDT objects are changed. At the moment we face several requirements in which changes to one BDT object should be inherited by another BDT object. That is, further objects (for example a Business Partner) should be changed when an object (in our case an insurance contract) is created or changed.
    The first of our ideas was to call a report via SUBMIT AND RETURN at event DSAVC or DSAVE, which would then carry out the dependent changes. However, there are problems with error handling here, since it has to happen asynchronously. Furthermore, it is also hard to guarantee the consistency of the LUW.
    Another approach we pursued was to create a new BDT instance at event DCHCK via FM BDT_INSTANCE_SELECT and the parameters iv_xpush_classic = 'X' and iv_xpop_classic = 'X'. Unfortunately we could not get this solution to work completely, because there were always problems transferring the global memory between the individual BDT instances.
    I hope you can briefly describe your implementations here so that we can find a best-practice approach for this topic.
    BR/VG
    Dominik

  • Start scenario when source data change

    Hi,
    is it possible to start a scenario when data in a table changes on the source DB?
    For example, an application changes a timestamp on a table in the source DB, and this change triggers the start of the ODI scenario.
    Many thanks,
    Olivier

    Olivier,
    You can create a database trigger on that table that populates a row in another table, indicating that the timestamp on the table has changed.
    Then you can use the OdiWaitForData tool from the toolbox in the package as a starting step, and connect its OK output to the OdiStartScen tool, which in turn invokes the scenario that you want to execute.
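    For illustration, here is a minimal sketch of such a trigger and signal table; all object and column names below are assumptions, not taken from the post:

    -- Signal table polled by OdiWaitForData (illustrative names).
    CREATE TABLE odi_change_signal (
      table_name VARCHAR2(30),
      changed_at DATE
    );

    -- Fires when the application touches the timestamp column.
    CREATE OR REPLACE TRIGGER trg_src_ts_changed
    AFTER UPDATE OF last_modified ON src_table
    FOR EACH ROW
    BEGIN
      INSERT INTO odi_change_signal (table_name, changed_at)
      VALUES ('SRC_TABLE', SYSDATE);
    END;
    /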

  • How source data changing synchronize to destination after exp from source?

    In Streams creation, how do source data changes synchronize to the destination after the data has been exported from the source?
    Time of Stream creation:
    a. exp/imp
    exp erpdev/erpdev owner=erpdev object_consistent=y file=erpdev.dmp grants=y rows=y indexes=y statistics=none
    imp erpdev/erpdev file=erpdev.dmp ignore=y commit=y log=import.log streams_instantiation=y
    b. confirming source and dest database SCN
    BEGIN
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    SOURCE_SCHEMA_NAME=> 'erpdev',
    source_database_name=> 'ERPDB',
    instantiation_scn=> &iscn ,
    recursive =>TRUE);
    END;
    c. 'START_APPLY' of dest database
    EXEC DBMS_APPLY_ADM.START_APPLY('STRMADMIN_APPLY');
    Questions:
    1. Streams starts to work at "c. START_APPLY of dest database". How do source data changes synchronize to the destination after the data has been exported from the source?
    2. What was "b. confirming source and dest database SCN" for?

    You missed a step: the prepare table and its role. Preparing a table for capture returns an SCN which tells Streams from which SCN it must start capturing DML. At this stage the apply may be configured or not, but if the source table is modified, an LCR will be created when you start the capture, and this LCR will be sent to the apply site when the apply site is finally up.
    While transferring the content of the source site to the target site, there is always room for a desync if the source site is active. Two cases exist and must be examined:
    - Prepare table first, then export the source: any DML after the prepared SCN and before the start of the export will be sent as an LCR while it is also in the export, giving a duplicate DML.
    - Export first, then prepare table: a DML that occurred AFTER the start of the export but before the SCN returned by the prepare will be missing from the export (and at the target site), and not yet captured either, since it occurred before the prepare table.
    So which is best? The answer is the first. You resolve the duplicate issue with the instantiation SCN held in the export: for each table, the SCN at the start of the export. In this scenario, you prepare first, export the source next, import into the target, and set the SCN of each table (at the apply site) using the one found in the export (hence the parameter STREAMS_INSTANTIATION).
    Now, even though all changes captured on the source after the prepare table but before the start of the export will effectively be sent to the apply site, they will be IGNORED by the apply process, as they carry an SCN dating from before the start of the export. The SCN of the tables in the export file becomes the instantiation SCN of the apply process; any change with a lower SCN will be ignored by the apply process.
    A last word on a DML that occurred AFTER the start of the export and BEFORE the end of the export on the source site (during the export). Since it is after the prepare table, it is captured; and since it is after the start of the export, it is not in the export (we assume the export is consistent, which is not always true). So you need this LCR, as the mutation it describes occurred after the start of the export and is not in the export: everything is OK.
    Having said that, the best way to set up a target site is still to avoid any transaction on the source site during the setup.
    I hope you understand this well, for it is the way it works.
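    A minimal sketch of the prepare-first order described above, using the schema from the question; the call is the standard Streams package, but the table name is illustrative and exact parameters may differ per version:

    -- 1. Prepare the table for instantiation BEFORE starting the export;
    --    capture will consider changes from the returned SCN onwards.
    BEGIN
      DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
        table_name => 'erpdev.some_table');  -- illustrative table name
    END;
    /
    -- 2. exp with object_consistent=y, imp with streams_instantiation=y
    --    (as shown above): the dump carries each table's instantiation SCN.
    -- 3. The apply process then ignores any LCR with an SCN at or below the
    --    instantiation SCN, so the prepare-to-export overlap yields no duplicates.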

  • Using journalized data in an interface with aggragate function

    Hi
    I am trying to use the journalized data of a source table in one of my interfaces in ODI. The trouble is that one of the mappings on the target columns involves an aggregate function (SUM). When I run the interface I get an error saying "not a group by expression". I checked the code and found that the JRN_SUBSCRIBER, JRN_FLAG and JRN_DATE columns are included in the SELECT statement but not in the GROUP BY statement (the GROUP BY statement only contains the remaining two columns of the target table).
    Is there a way around this? Do I have to manually modify the KM? If so, how would I go about doing it?
    Also, I am using the Oracle GoldenGate JKM (Oracle to Oracle OGG).
    Thanks, and I really appreciate the help
    Ajay

    'ORA-00979' When Using The ODI CDC (Journalization) Feature With Knowledge Modules Including SQL Aggregate Functions [ID 424344.1]
         Modified 11-MAR-2009 Type PROBLEM Status MODERATED      
    In this Document
    Symptoms
    Cause
    Solution
    Alternatives :
    This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process, and therefore has not been subject to an independent technical review.
    Applies to:
    Oracle Data Integrator - Version: 3.2.03.01
    This problem can occur on any platform.
    Symptoms
    After having successfully tested an ODI Integration Interface using an aggregate function such as MIN, MAX, SUM, it is necessary to set up Changed Data Capture operations by using Journalized tables.
    However, during execution of the Integration Interface to retrieve only the Journalized records, problems arise at the Load Data step of the Loading Knowledge Module and the following message is displayed in ODI Log:
    ORA-00979: not a GROUP BY expression
    Cause
    Using both CDC - Journalization and aggregate functions gives rise to complex issues.
    Solution
    Technically there is a work around for this problem (see below).
    WARNING : Oracle engineers issue a severe warning that such a type of set up may give results that are not what may be expected. This is related to the way in which ODI Journalization is implemented as specific Journalization tables. In this case, the aggregate function will only operate on the subset which is stored (referenced) in the Journalization table and NOT over the entire Source table.
    We recommend to avoid such types of Integration Interface set ups.
    Alternatives :
    1. The problem is due to the missing JRN_* columns in the generated SQL "Group By" clause.
    The workaround (illustrated in the sketch after this list) is to duplicate the Loading Knowledge Module (LKM), and in the clone, alter the "Load Data" step by editing the "Command on Source" tab and replacing the following instruction:
    <%=odiRef.getGrpBy()%>
    with
    <%=odiRef.getGrpBy()%>
    <%if ((odiRef.getGrpBy().length() > 0) && (odiRef.getPop("HAS_JRN").equals("1"))) {%>
    ,JRN_FLAG,JRN_SUBSCRIBER,JRN_DATE
    <%}%>
    2. It is possible to develop two alternative solutions:
    (a) Develop two separate and distinct Integration Interfaces:
    * The first Integration Interface loads data into a temporary Table and specify the aggregate functions to be used in this initial Integration Interface.
    * The second Integration Interfaces uses the temporary Table as a Source. Note that if you create the Table in the Interface, it is necessary to drag and drop the Integration Interface into the Source panel.
    (b) Define two connections to the Database so that the Integration Interface references two distinct and separate Data Server Sources (one for the Journal, one for the other Tables). In this case, the aggregate function will be executed on the Source Schema.
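    To make alternative 1 above concrete, here is the shape of SQL the patched "Load Data" step would then generate; the table and column names below are illustrative assumptions, not taken from the note:

    -- Illustrative only: a journalized source aggregated with SUM, with the
    -- JRN_* columns appended to the GROUP BY so ORA-00979 no longer occurs.
    select SUM(S.AMOUNT) C1_AMOUNT,
           S.REGION      C2_REGION,
           JRN_FLAG, JRN_SUBSCRIBER, JRN_DATE
    from   ODI_WORK.JV$SALES S
    where  (1=1)
    and    JRN_SUBSCRIBER = 'SUNOPSIS'
    group by S.REGION,
             JRN_FLAG, JRN_SUBSCRIBER, JRN_DATE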
    Please find above the content from OTN.
    It should show you this if you search this ID in the Search Knowledge Base
    Cheers
    Sachin

  • Custom renderer in datagrid, needs to know if data changed

    Hi,
    I am hoping someone can help me out, please.
    I have a datagrid that uses a custom renderer subclassed from a TextInput. When the user changes the data in a cell, I need to color the cell, so that on a potentially big grid the user can easily see which cells he has made changes to.
    It is easy enough to set the color of the item renderer by using setStyle() inside the overridden set data() method of the custom renderer, but that is only a fraction of the solution. Since Flex instantiates and destroys custom renderers at will, depending on whether a cell is scrolled into view by the datagrid, keeping the state of whether a value has changed inside the custom renderer is not an option.
    So the only choice I have is to call back from the custom renderer into the container that hosts the datagrid. As the itemEditEnd event is handled in that container, a list of cells that have had their data changed can be stored there. The custom renderer then needs to call back into the container with its cell coordinates and ask whether the data has changed, and if it has, set its color.
    How can a custom renderer know its cell position as x,y coordinates? I see that TextInput implements the IListItemRenderer interface and has properties x and y, but putting a trace on these gives me nonsensical numbers that seem to have no relation to the cell coordinates.
    The other thing I need to do is have the custom renderer call back into the container that hosts the datagrid to know whether data has changed. Is outerDocument the reference? Or is there a proper way of doing this?
    Thanks for the help you can give.

    Here is a simplified version of how we track the cell-by-cell changes in a datagrid. It does require you to identify the fields that will be editable and to maintain the original data for each column in the data source.
    Our production version is much more complex than this, but it will give you the idea.
    <?xml version="1.0" encoding="utf-8"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
        layout="absolute"
        creationComplete="initApp()"
        viewSourceURL="srcview/index.html">
    <mx:Script>
    <![CDATA[
    import mx.binding.utils.BindingUtils;
    import mx.collections.ArrayCollection;
    import mx.core.Application;
    import flash.events.*;
    import mx.events.DataGridEvent;
    import mx.controls.TextInput;
    [Bindable] public var editAC : ArrayCollection = new ArrayCollection();
    [Bindable]
    public var somedata:XML = <datum><item>
    <col0>0</col0>
    <col1></col1>
    <col2></col2>
    <col3>2</col3>
    <col4></col4>
    <col5></col5>
    <col6></col6>
    <col0Orig>0</col0Orig>
    <col1Orig></col1Orig>
    <col2Orig></col2Orig>
    <col3Orig>2</col3Orig>
    <col4Orig></col4Orig>
    <col5Orig></col5Orig>
    <col6Orig></col6Orig>
    </item>
    <item>
    <col0></col0>
    <col1></col1>
    <col2></col2>
    <col3></col3>
    <col4></col4>
    <col5></col5>
    <col6></col6>
    <col0Orig></col0Orig>
    <col1Orig></col1Orig>
    <col2Orig></col2Orig>
    <col3Orig></col3Orig>
    <col4Orig></col4Orig>
    <col5Orig></col5Orig>
    <col6Orig></col6Orig>
    </item>
    <item>
    <col0></col0>
    <col1></col1>
    <col2></col2>
    <col3></col3>
    <col4></col4>
    <col5></col5>
    <col6></col6>
    <col0Orig></col0Orig>
    <col1Orig></col1Orig>
    <col2Orig></col2Orig>
    <col3Orig></col3Orig>
    <col4Orig></col4Orig>
    <col5Orig></col5Orig>
    <col6Orig></col6Orig>
    </item>
    <item>
    <col0></col0>
    <col1></col1>
    <col2></col2>
    <col3></col3>
    <col4></col4>
    <col5></col5>
    <col6></col6>
    <col0Orig></col0Orig>
    <col1Orig></col1Orig>
    <col2Orig></col2Orig>
    <col3Orig></col3Orig>
    <col4Orig></col4Orig>
    <col5Orig></col5Orig>
    <col6Orig></col6Orig>
    </item>
    <item>
    <col0></col0>
    <col1></col1>
    <col2></col2>
    <col3></col3>
    <col4></col4>
    <col5></col5>
    <col6></col6>
    <col0Orig></col0Orig>
    <col1Orig></col1Orig>
    <col2Orig></col2Orig>
    <col3Orig></col3Orig>
    <col4Orig></col4Orig>
    <col5Orig></col5Orig>
    <col6Orig></col6Orig>
    </item>
    <item>
    <col0></col0>
    <col1></col1>
    <col2></col2>
    <col3></col3>
    <col4></col4>
    <col5></col5>
    <col6></col6>
    <col0Orig></col0Orig>
    <col1Orig></col1Orig>
    <col2Orig></col2Orig>
    <col3Orig></col3Orig>
    <col4Orig></col4Orig>
    <col5Orig></col5Orig>
    <col6Orig></col6Orig>
    </item>
    </datum>
    private function initApp():void {
        var dgcols:Array = grid1.columns;
        for (var i:int = 1; i < 12; i++) {
            var dgc:DataGridColumn = new DataGridColumn();
            if (i < 6) {
                // visible, editable columns with the custom renderer
                dgc.width = 60;
                dgc.editable = true;
                dgc.dataField = "col" + i.toString();
                dgc.rendererIsEditor = true;
                var ir:DGItemRenderer = new DGItemRenderer();
                dgc.itemRenderer = ir;
            } else {
                // hidden columns holding the original ("Orig") values
                dgc.width = 0;
                dgc.visible = false;
                dgc.dataField = "col" + i.toString() + "Orig";
            }
            dgcols.push(dgc);
        }
        grid1.columns = dgcols;
    }
    ]]>
    </mx:Script>
    <mx:DataGrid x="31" y="27" width="200" height="150"
    id="grid1" dataProvider="{somedata.children()}"
    horizontalScrollPolicy="on" rowHeight="25"
    editable="true">
    <mx:columns>
    <mx:DataGridColumn headerText="Column 0"
    dataField="col0" width="30" editable="false"/>
    </mx:columns>
    </mx:DataGrid>
    </mx:Application>
    <?xml version="1.0" encoding="utf-8"?>
    <mx:TextInput xmlns:mx="http://www.adobe.com/2006/mxml"
        implements="mx.controls.listClasses.IDropInListItemRenderer, mx.core.IFactory">
        <mx:Script>
        <![CDATA[
        import mx.controls.dataGridClasses.DataGridListData;

        public function newInstance():* {
            var ir:DGItemRenderer = new DGItemRenderer();
            return ir;
        }

        // Color the cell whenever the current value differs from the
        // original value kept in the matching "<column>Orig" field.
        override public function set data(value:Object):void {
            super.data = value;
            if (value != null) {
                var colName:String = DataGridListData(listData).dataField;
                var valOrig:String = data[colName + "Orig"];
                var val:String = data[colName];
                if (valOrig != val)
                    this.setStyle("backgroundColor", 0xA7FF3F);
                else
                    this.setStyle("backgroundColor", "#ffffff");
            }
        }
        ]]>
        </mx:Script>
    </mx:TextInput>

  • Is change request related to ticket?

    Is a change request related to a ticket? If so, how? Please tell me.
    Edited by: kumar_2011 on Dec 16, 2011 6:49 AM

    Hi,
    BW developers will work mostly in workbench requests.
    A workbench request is related to the development of BW objects (i.e. InfoObjects, data targets, DataSources).
    A customizing request is related to customization objects, where you make changes to SAP standard objects like the factory calendar. The customizing and workbench requests are called transport requests. To see the transport requests in your ID, go to SE09.
    A change request is related to the development team who handle the changes in the BW system; this will be linked to the ticketing tool.
    Please go through the below thread for more details.
    difference between workbench request and customizing request
    Hope it helps you.
    Thanks
    Riyez

  • Journal data --- for multiple interfaces

    hi,
    I have a source table.
    I want to capture the journal data on this table and use the data stored in the "JV$ tables" in multiple interfaces.
    The multiple interfaces have the same source table and are linked in a package.
    Is it possible to achieve this?

    Yes.
    Just have one Extend Window / Lock Subscriber step at the start of the package, run all the interfaces you want on that changed data, then finish the package with an Unlock Subscriber / Purge Journal step.

  • CDC, journal data from Data Store won't load

    Hi, I was having problems yesterday with CDC and setting up a package to loop and load journal data, so today I decided to patch ODI with the 11g log miner and start again. I dropped the source schema and set up anew following the Rittman guide ... http://www.oracle.com/technology/pub/articles/rittman-odi.html
    I have run through starting the journal and it all executes without errors, I can add a record to my source table and "Extend Window" and then right click on my data store -> Changed Data Capture -> Journal Data and I can see my new record in the Designer.
    I have an interface with this data store as the source and I have the checkbox with "Journalized Data Only" checked ... but it just doesn't pick up on the new records in that data store's journal.
    Is there something I am missing? I'm sure once I can figure this out it will probably solve my problem building the package to loop through and do this repeatedly as well.
    Cheers
    Damian

    Hi, a bit more investigation and a bit more info, Arif, I wonder if you have any idea why it's doing this...
    My source table is called S1SPK_DET
    When I extend the window in the Designer and right-click on the data store, it presents my new record, and the SQL that it runs looks at a view called JV$DS1SPK_DET (with a D following the $) with this query: select * from ODI_WORK.JV$DS1SPK_DET
    But when I run my interface that is supposed to get the journalized data from this table, in step four (4 - Loading - SS_0 - Load data) the loading query looks at another view, JV$S1SPK_DET (without the D following the $) ... and this view is empty. I will paste the loading query below.
    Does anyone know why I have these different views and why the load step looks at the empty one?
    select
        S1SPK_DET.SPK_NO      C1_COURSE_ID,
        S1SPK_DET.SPK_VER_NO  C2_COURSE_VERSION,
        JRN_SUBSCRIBER        JRN_SUBSCRIBER,
        JRN_FLAG              JRN_FLAG,
        JRN_DATE              JRN_DATE
    from ODI_WORK.JV$S1SPK_DET S1SPK_DET
    where (1=1)
    AND JRN_SUBSCRIBER = 'SUNOPSIS' /* AND JRN_DATE < sysdate */

  • ODI CDC - Getting Duplicate Records in Journal Data

    Dear Gurus,
    I am getting the following issues in CDC using Oracle LogMiner:
    01) All operations on the source data (insert, update) are shown only as inserts in Model -> CDC -> Journal Data.
    02) The records in Model -> CDC -> Journal Data are doubled.
    03) These are not travelling to the destination table; I want to load the destination table.
    04) Is it possible to have the old value and the new value both available on the same screen as output in ODI? I want to see what data changed before actually populating the tables.

    Hi Andreas, Mayank,
    Thanks for your reply.
    I created my own DSO, but it gives an error. I tried with the standard DSO too; it still gives the same "could not activate" error.
    The error gives the name of the function module RSB1_OLTPSOURCE_GENERATE.
    I searched in R3 but could not find it.
    I even tried creating a DSO on a trial basis; it gives the same problem.
    I think it's a problem on the BASIS side.
    Please help if you have any idea.
    Thanks.

  • Address data changed after invoice is created

    Hi,
    I have a problem to solve and it's related to address data changing after the invoice is created.
    The scenario is the following:
    1. Create a complete and standard sales process (order => delivery => invoice) with the standard partner scheme and without editing the address data for any kind of partner.
    2. After the invoice is created, change the address data in the Client Master Data for the same client used in the previous process.
    3. Go to transaction VF03 and look at the partner data at header level. Here I can see that the changes to the Client Master Data are carried into the invoice document, which was already created and printed when I made the changes.
    I think this could be a program error because, once the document is created, you should only be able to change texts and accounts.
    And I can't edit this kind of data at invoice creation because it must be done at order level.
    So I don't understand why it happens, but it happens on more than one client.
    I hope that anyone can help me to solve this issue.
    Kind regards,
    Nuno Rodrigues

    Hi Nuno,
    the addresses of all clients are stored in table ADRC. If there are no changes in the order, the system takes the standard address of the client. This is done so there is not an extra address for each order.
    If you change the address, the system will create a new address number (999........ - see VBPA).
    If you have different address numbers in your orders, you are not able to collect several orders into one delivery note, because the address number is normally a split criterion.
    If you want an extra address for each order, change the address, for example in a user exit.
    But if you have different address numbers, the delivery and the invoice will split the different orders, unless you do something against it in a user exit.
    Hans

  • Where to get BW Metadata: owner, creation date, change date of a query / WS

    Hello,
    I need a report on the existing queries / worksheets with the owner, creation date, change date of each query, etc.
    You see some of this information when you go to the query properties in the Query Designer, but you see only the information of one (the opened) query, and you have to do this for every query ...
    My idea is to go via the BW metadata in the technical content.
    There is the cube BW Metadata 0BWTC_C08.
    (The InfoCube BW Statistics - Metadata contains metadata from the Metadata Repository.)
    Is this the way to do it? Or any other suggestions ...
    Can I get info about used structures, etc. this way?
    Thanks Markus

    I had to work on another subject, but now the source of information is clear:
    RSRREPDIR - index of all queries
    RSZELTDIR - index of all query components
    RSRWORKBOOK - where-used list for reports in workbooks
    RSRWBINDEX - list of binary large objects (Excel workbooks)
    RSRWBINDEXT - titles of binary objects (Excel workbooks) in the InfoCatalog
    The tables are joined via:
    RSRREPDIR.COMPUID = RSZELTDIR.ELTUID
    RSZELTDIR.TEXTLG contains the description
    RSRWORKBOOK.GENUID = RSRREPDIR.GENUID
    RSRWBINDEXT and RSRWBINDEX are connected via WORKBOOKID
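    A sketch of that join as a single statement; the field names are taken verbatim from the list above and should be verified (e.g. in SE11) before use:

    select d.COMPUID,
           e.TEXTLG,        -- query description
           t.WORKBOOKID     -- workbook titles via RSRWBINDEXT
    from   RSRREPDIR   d
    join   RSZELTDIR   e on e.ELTUID     = d.COMPUID
    join   RSRWORKBOOK w on w.GENUID     = d.GENUID
    join   RSRWBINDEXT t on t.WORKBOOKID = w.WORKBOOKID;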
    I'd like to put the information from all of these tables in a cube and define a query on it.
    I have to define a new DataSource and InfoSource to get the data into the cube.
    Right?
    Now I see some existing DataSource objects in the technical content:
    0TCTQUERID, 0TCTQUERID_TEXT, 0TCTQUERY, 0TCTQUERY_TEXT
    I can't open them to look inside, but they might be helpful. Has anybody used them?
    Markus
