SEM - CPM Data Table missing functionality

Hi,
We have recently upgraded to SEM - CPM 6.0 from 3.x. We are using the Management Cockpits.
In the older version, we were able to customize the properties of the data table and also print it.
But after the upgrade, the customization and printing functionality of the data table is missing. Has anyone encountered this issue, and how was it resolved?
Thanks
Hemant

Hi,
Two ideas.
- Also assign the Objective to the Strategy and the Strategy to the Scorecard starting at 001.2005. Otherwise the subelements won't be displayed.
- If this doesn't work, try to assign an element with the period 001.2005 to 012.2005. This might add a new entry in the underlying table instead of changing a current entry.
Regards,
Beat

Similar Messages

  • (CPM-MC ) LEGEND ICON missing in data table

    Hello,
I have a question about the CPM Management Cockpit: when I add a data table to a column graph, some of the legend icons don't appear in the table. What might be the reason, and how can I fix it?

I tried changing the refresh from "never" to "ifNeeded" and "deferred", but the problem persists. Here is the part of the page definition with the iterator binding:
    <?xml version='1.0' encoding='UTF-8'?>
    <pageDefinition xmlns="http://xmlns.oracle.com/adfm/uimodel" version="11.1.1.50.65" id="d_AG_NT_MR_IL_MCDEF_IL_PAGE_AL_EFF_EGO_ITEM_DL_EGO_ITEM_EFF_10010_PrivateFragPageDef" Package="pageDefs.oracle.apps.scm.productCatalogManagement.advancedItems.flex.egoItemEff.item.ui">
    <parameters>
    <parameter id="CONTEXTCODE" value="AG_NT_MR_IL_MCDEF"/>
    </parameters>
    <executables>
    <iterator Binds="Root.j_AlEffPrivateAM.j_ItemAlEffPrivateVO" DataControl="ItemAMDataControl" RangeSize="25" id="j_ItemAlEffPrivateVOIterator" Refresh="never"/>
    <accessorIterator MasterBinding="j_ItemAlEffPrivateVOIterator" Binds="ItemEFFBAG_5FNT_5FMR_5FIL_5FMCDEFPrivateVO" RangeSize="25" DataControl="ItemAMDataControl" BeanClass="oracle.apps.scm.productCatalogManagement.advancedItems.flex.egoItemEff.item.contexts.view.ItemEFFBAG_5FNT_5FMR_5FIL_5FMCDEFPrivateVO" id="ItemEFFBAG_5FNT_5FMR_5FIL_5FMCDEFPrivateVOIterator"/>
    </executables>
    <bindings>
    <listOfValues StaticList="false" IterBinding="ItemEFFBAG_5FNT_5FMR_5FIL_5FMCDEFPrivateVOIterator" id="_target_ctxt" Uses="LOV__target_ctxt">
    <AttrNames>
    <Item Value="_target_ctxt"/>
    </AttrNames>
    </listOfValues>
    <tree IterBinding="ItemEFFBAG_5FNT_5FMR_5FIL_5FMCDEFPrivateVOIterator" id="ItemEFFBAG_5FNT_5FMR_5FIL_5FMCDEFPrivateVO">
    <nodeDefinition DefName="oracle.apps.scm.productCatalogManagement.advancedItems.flex.egoItemEff.item.contexts.view.ItemEFFBAG_5FNT_5FMR_5FIL_5FMCDEFPrivateVO" Name="ItemEFFBAG_5FNT_5FMR_5FIL_5FMCDEFPrivateVO0">

  • Packaging data is not saved in document draft (missing functionality)

    Dear all,
Packaging data, usually entered with a delivery note, is not saved together with the Delivery Note draft (no matter whether the draft is saved manually or automatically by an approval procedure).
I have tested on SBO 2005B latest patch, 2007B latest patch, and version 8.8; none of them save packaging data in the draft.
The draft-related tables for packaging (DRF7, DRF8) already exist in SBO, so I encourage SAP to bring this missing functionality back.
My customers badly need this function for a better operation flow.
It would be appreciated if this request could be implemented in a near-future patch.
    Thanks in advance

We have also just discovered that this functionality is missing, although we were going to use it when saving an AR Invoice as a draft. This is a major hole and is going to cause a big issue for a current implementation, where this method was going to be used to create 'split deliveries' without making the account postings, and to allow despatch paperwork for the shipping company to be printed. The solution was going to be extremely simple, but now it will potentially result in a substantial SDK development.
It would be useful to know whether there is any likelihood that it will be resolved in the near future, i.e. in a patch for 2007A SP1 or 8.8.

  • DSO activation: Data in new data table but missing in active data table?

We use an end routine in a transformation to populate a data field in a DSO.
It works: we can see the value populated in the new data table.
But when we activate the DSO, the value is deleted from this data field.
Why?
This data field is defined as a characteristic InfoObject in the data field portion of the DSO.
We are using 2004s; I forgot to check the SP level.
    George

Good job, thanks for the update, and good luck.
I saw this one late; otherwise I never miss OSS notes.
    Chetan

  • Missing Functionality: Dunning Wizard - We need table CRD1

    Hello,
Missing functionality: Dunning Wizard - we need table CRD1 in the dunning PLD templates!
This workaround is impossible.
As a workaround, I suggest storing this information in UDFs that you create on the OCRD table, which is exposed in the dunning PLD templates.

No answer is also an answer.

  • How to insert Data from a function on server A into a table on Server B.

    Hi,
    I have a function which is like this
    DECLARE @oldmax bigint, @newmax bigint
    SELECT @oldmax = max(exportTimestamp) FROM EXPORT_TIMESTAMPS
    IF @oldmax IS NULL
    SET @oldmax = 0
    SELECT * FROM ServerA.TableA.fnExportTrafficTS(@oldmax) ORDER BY storeID, TrafDate
    SELECT @newmax = max(timestamp) FROM TRAFFIC t
    IF @newmax > @oldmax
    INSERT INTO EXPORT_TIMESTAMPS
    VALUES(@newmax, GETDATE())
And now I need to insert the data coming out of this function into Table B on Server B.
The column names and everything coming out of the function are the same as the columns in Table B, so no need to worry there.
I have never worked with inserting data from a function through a linked server.
Can someone please help me with this?
    Thanks,
    Sujith

Well, first of all, your table structure doesn't match the structure of the table returned by your table-valued function.
    This is what table valued function returns:
    [storeID] varchar(32) default('noRefID'),
    [TrafDate] [datetime] default(NULL),
    [QtyTraffic] [float] default(0)
    And this is what your table structure is:
    [tUTL_JobLogging_Key] [int] NOT NULL,
    [StoreId] [varchar](10) NULL,
    [TrafDate] [datetime] NULL,
    [Visits] [numeric](8, 2) NULL
Apart from the different size for StoreID, the use of float vs. numeric, and a different name for the last column, there is one extra column in your table.
So this is the first problem you need to correct.
Also, please post exactly how you're calling your procedure. E.g. the procedure has 2 parameters, but their values are not used, as you're calculating them in the code. So I assume they should not be parameters of the procedure.
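As a sketch of the insert itself, once the column lists are reconciled: SQL Server does not allow calling a remote table-valued function through a four-part name, so the usual pattern is to run the statement on the server that owns the function and write to the other side through the linked server. TargetDB, the target column names, and the identity assumption below are illustrative, not taken from the original post.
DECLARE @oldmax bigint;
SELECT @oldmax = MAX(exportTimestamp) FROM EXPORT_TIMESTAMPS;
IF @oldmax IS NULL SET @oldmax = 0;
-- Explicit column lists make the mapping visible and handle the
-- float -> numeric conversion called out above. Assumes the extra
-- tUTL_JobLogging_Key column is an identity (or has a default);
-- otherwise it must be supplied too.
INSERT INTO ServerB.TargetDB.dbo.TableB (StoreId, TrafDate, Visits)
SELECT storeID, TrafDate, CAST(QtyTraffic AS numeric(8, 2))
FROM dbo.fnExportTrafficTS(@oldmax);
If the function has to stay remote, wrapping the call in OPENQUERY(ServerA, '...') works instead, at the cost of embedding the parameter value in the query string.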
    For every expert, there is an equal and opposite expert. - Becker's Law

  • Customer master data table KNVV is missing

    Hi All
I am trying to cancel a billing document using transaction VF11. The system shows the error "Customer XYZ: customer master data table KNVV is missing".
Please advise how to proceed.
    Thanks and regards
    satyaprasad

    Hi,
Go to transaction SE11/SE16.
Enter the table name "VBRK".
Enter your invoice number and execute.
Check the values of "VKORG (Sales Organisation)", "VTWEG (Distribution Channel)" and "SPART (Division)".
You can also check the same on the VF03 screen.
Make a note of the above three fields.
Go to SE11/SE16 again.
Enter the table name "KNVV". Enter your customer number in "KUNNR" and the values of sales organisation, distribution channel and division into their respective fields. Execute.
Check whether there are any entries. The result should be no entries; only in that case would you receive this error message.
If you had created the customer without a sales area, you would already get an error message while creating the order: "No customer master record exists for sold-to party XXXXXXX, Message no. VP199",
as the partners are defined in the sales area data only.
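In plain SQL terms, the check above boils down to the following sketch (VBRK and KNVV are the standard tables; the bind variables stand in for your invoice and customer numbers):
-- Sales area of the billing document to be cancelled
SELECT VKORG, VTWEG, SPART
FROM VBRK
WHERE VBELN = :invoice_number;
-- Is the customer extended to that sales area?
-- Zero rows back is exactly the condition behind the VF11 error.
SELECT KUNNR, VKORG, VTWEG, SPART
FROM KNVV
WHERE KUNNR = :customer_number
AND VKORG = :vkorg AND VTWEG = :vtweg AND SPART = :spart;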
    Regards,
    Krishna.

  • Powershell Function Pipeline a Data Table

I've got a function which outputs a data table, and I would like to make it usable in a pipeline as well. I can read into the table to return just the MPName, but I would like to know if it's possible to have both options. So, for example,
if the command is run on its own it outputs the data table, but if it's run in a pipeline it outputs just the string?
    Any advice or suggestions greatly appreciated :)! Been scratching my head on this for a while...
function Get-MPPropertiesTest {
    [CmdletBinding(DefaultParameterSetName="None")]
    param (
        [Parameter(Position=0,Mandatory=$false,
            HelpMessage='Select Management Pack',
            ParameterSetName="Two")]
        [String]$ManagementPack
    )
    begin {
        switch ($psCmdlet.ParameterSetName) {
            "None" {
                $ManagementPack = 'Microsoft.Unix.Library'
                $SqlQuery = "Select MPName, MPFriendlyName, MPVersion, MPXML
                    FROM [dbo].[ManagementPack]
                    Where MPName = '$ManagementPack'"
            }
            "Two" {
                $SqlQuery = "Select MPName, MPFriendlyName, MPVersion, MPXML
                    FROM [dbo].[ManagementPack]
                    Where MPName = '$ManagementPack'"
            }
        }
    }
    process {
        Write-Verbose "Beginning process"
        Write-Verbose "Processing $ManagementPack"
        # Assumes $SCOMSQLServer and $SCOMDBName are defined elsewhere in the session
        $SqlConnection = New-Object System.Data.SqlClient.SqlConnection
        $SqlConnection.ConnectionString = "Server = $SCOMSQLServer; Database = $SCOMDBName; Integrated Security = True"
        $SqlConnection.Open()
        $SqlCmd = New-Object System.Data.SqlClient.SqlCommand
        $SqlCmd.CommandTimeout = 600000
        $SqlCmd.Connection = $SqlConnection
        $SqlCmd.CommandText = $SqlQuery
        $result = $SqlCmd.ExecuteReader()
        $resulttable = New-Object "System.Data.DataTable"
        $resulttable.Load($result)
        $SqlConnection.Close()
    }
    end {
        return $resulttable
    }
}
    Raf Delgado

    A "Get-Process Notepad" returns the single option in the table which is also fine. What I'm trying to achieve is effectively the same as doing a get-process notepad | stop-process, how does the stop-process know to read the table for the variable
    to use?
    On my test function I can successfully do a pipe to an export-csv so it is piping, the return however shows the first line as "#TYPE System.Data.DataRow" which when piping to my other function. 
    Raf Delgado
    Get-Process notepad does not return a "table of text". It returns an object.
    Get-Process notepad
    Handles NPM(K) PM(K) WS(K) VM(M) CPU(s) Id ProcessName
    88 7 1344 8008 92 0.03 5928 notepad
    You may see a table of text on the console. But that's not all that has been returned. 
    In this example, it's specifically a ComponentModel.Component object
    $SomeVariable = Get-Process notepad
    $SomeVariable.GetType()
    IsPublic IsSerial Name BaseType
    True False Process System.ComponentModel.Component
    An object has properties, methods, and events:
$SomeVariable | Get-Member
TypeName: System.Diagnostics.Process
    Name MemberType Definition
    Handles AliasProperty Handles = Handlecount
    Name AliasProperty Name = ProcessName
    NPM AliasProperty NPM = NonpagedSystemMemorySize
    PM AliasProperty PM = PagedMemorySize
    VM AliasProperty VM = VirtualMemorySize
    WS AliasProperty WS = WorkingSet
    Disposed Event System.EventHandler Disposed(System.Object, System.EventArgs)
    ErrorDataReceived Event System.Diagnostics.DataReceivedEventHandler ErrorDataReceived(System.Object, System.Diagnostics.DataReceivedEventArgs)
    Exited Event System.EventHandler Exited(System.Object, System.EventArgs)
    OutputDataReceived Event System.Diagnostics.DataReceivedEventHandler OutputDataReceived(System.Object, System.Diagnostics.DataReceivedEventArgs)
    BeginErrorReadLine Method void BeginErrorReadLine()
    BeginOutputReadLine Method void BeginOutputReadLine()
    CancelErrorRead Method void CancelErrorRead()
    CancelOutputRead Method void CancelOutputRead()
    Close Method void Close()
    CloseMainWindow Method bool CloseMainWindow()
    CreateObjRef Method System.Runtime.Remoting.ObjRef CreateObjRef(type requestedType)
    Dispose Method void Dispose(), void IDisposable.Dispose()
    Equals Method bool Equals(System.Object obj)
    GetHashCode Method int GetHashCode()
    GetLifetimeService Method System.Object GetLifetimeService()
    GetType Method type GetType()
    InitializeLifetimeService Method System.Object InitializeLifetimeService()
    Kill Method void Kill()
    Refresh Method void Refresh()
    Start Method bool Start()
    ToString Method string ToString()
    WaitForExit Method bool WaitForExit(int milliseconds), void WaitForExit()
    WaitForInputIdle Method bool WaitForInputIdle(int milliseconds), bool WaitForInputIdle()
    __NounName NoteProperty System.String __NounName=Process
    BasePriority Property int BasePriority {get;}
    Container Property System.ComponentModel.IContainer Container {get;}
    EnableRaisingEvents Property bool EnableRaisingEvents {get;set;}
    ExitCode Property int ExitCode {get;}
    ExitTime Property datetime ExitTime {get;}
    Handle Property System.IntPtr Handle {get;}
    HandleCount Property int HandleCount {get;}
    HasExited Property bool HasExited {get;}
    Id Property int Id {get;}
    MachineName Property string MachineName {get;}
    MainModule Property System.Diagnostics.ProcessModule MainModule {get;}
    MainWindowHandle Property System.IntPtr MainWindowHandle {get;}
    MainWindowTitle Property string MainWindowTitle {get;}
    MaxWorkingSet Property System.IntPtr MaxWorkingSet {get;set;}
    MinWorkingSet Property System.IntPtr MinWorkingSet {get;set;}
    Modules Property System.Diagnostics.ProcessModuleCollection Modules {get;}
    NonpagedSystemMemorySize Property int NonpagedSystemMemorySize {get;}
    NonpagedSystemMemorySize64 Property long NonpagedSystemMemorySize64 {get;}
    PagedMemorySize Property int PagedMemorySize {get;}
    PagedMemorySize64 Property long PagedMemorySize64 {get;}
    PagedSystemMemorySize Property int PagedSystemMemorySize {get;}
    PagedSystemMemorySize64 Property long PagedSystemMemorySize64 {get;}
    PeakPagedMemorySize Property int PeakPagedMemorySize {get;}
    PeakPagedMemorySize64 Property long PeakPagedMemorySize64 {get;}
    PeakVirtualMemorySize Property int PeakVirtualMemorySize {get;}
    PeakVirtualMemorySize64 Property long PeakVirtualMemorySize64 {get;}
    PeakWorkingSet Property int PeakWorkingSet {get;}
    PeakWorkingSet64 Property long PeakWorkingSet64 {get;}
    PriorityBoostEnabled Property bool PriorityBoostEnabled {get;set;}
    PriorityClass Property System.Diagnostics.ProcessPriorityClass PriorityClass {get;set;}
    PrivateMemorySize Property int PrivateMemorySize {get;}
    PrivateMemorySize64 Property long PrivateMemorySize64 {get;}
    PrivilegedProcessorTime Property timespan PrivilegedProcessorTime {get;}
    ProcessName Property string ProcessName {get;}
    ProcessorAffinity Property System.IntPtr ProcessorAffinity {get;set;}
    Responding Property bool Responding {get;}
    SessionId Property int SessionId {get;}
    Site Property System.ComponentModel.ISite Site {get;set;}
    StandardError Property System.IO.StreamReader StandardError {get;}
    StandardInput Property System.IO.StreamWriter StandardInput {get;}
    StandardOutput Property System.IO.StreamReader StandardOutput {get;}
    StartInfo Property System.Diagnostics.ProcessStartInfo StartInfo {get;set;}
    StartTime Property datetime StartTime {get;}
    SynchronizingObject Property System.ComponentModel.ISynchronizeInvoke SynchronizingObject {get;set;}
    Threads Property System.Diagnostics.ProcessThreadCollection Threads {get;}
    TotalProcessorTime Property timespan TotalProcessorTime {get;}
    UserProcessorTime Property timespan UserProcessorTime {get;}
    VirtualMemorySize Property int VirtualMemorySize {get;}
    VirtualMemorySize64 Property long VirtualMemorySize64 {get;}
    WorkingSet Property int WorkingSet {get;}
    WorkingSet64 Property long WorkingSet64 {get;}
    PSConfiguration PropertySet PSConfiguration {Name, Id, PriorityClass, FileVersion}
    PSResources PropertySet PSResources {Name, Id, Handlecount, WorkingSet, NonPagedMemorySize, PagedMemorySize, PrivateMemorySize, VirtualMemorySi...
    Company ScriptProperty System.Object Company {get=$this.Mainmodule.FileVersionInfo.CompanyName;}
    CPU ScriptProperty System.Object CPU {get=$this.TotalProcessorTime.TotalSeconds;}
    Description ScriptProperty System.Object Description {get=$this.Mainmodule.FileVersionInfo.FileDescription;}
    FileVersion ScriptProperty System.Object FileVersion {get=$this.Mainmodule.FileVersionInfo.FileVersion;}
    Path ScriptProperty System.Object Path {get=$this.Mainmodule.FileName;}
    Product ScriptProperty System.Object Product {get=$this.Mainmodule.FileVersionInfo.ProductName;}
    ProductVersion ScriptProperty System.Object ProductVersion {get=$this.Mainmodule.FileVersionInfo.ProductVersion;}
    In this example, you can close notepad by using the Kill method:
    $SomeVariable.Kill()
    When writing a function to use an object as input, you specify the input variable as an object type instead of [string]. For example:
function Kill-Process {
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$true,
            ValueFromPipeLine=$true,
            ValueFromPipeLineByPropertyName=$true,
            Position=0)]
        [System.Diagnostics.Process]$Process
    )
    process {
        $Process.Kill()
    }
}
    So, now I can do this:
    Get-Process notepad | Kill-Process
    and that will close notepad.
    How did that work?
    In the Kill-Process function, I used
    CmdletBinding - see this link for more details
    I used
    ValueFromPipeLineByPropertyName and ValueFromPipeLine - see this link for more details
    I specified the object type I'm accepting as input.
This is exactly the same object type that's the output of the Get-Process cmdlet. This is how a pipeline works in Powershell. It's not that a cmdlet parses text from a prior cmdlet; it looks for its required input from the pipeline if 'ValueFromPipeLine' is specified.
Sam Boutros, Senior Consultant, Software Logic, KOP, PA http://superwidgets.wordpress.com (Please take a moment to Vote as Helpful and/or Mark as Answer, where applicable)
Powershell: Learn it before it's an emergency http://technet.microsoft.com/en-us/scriptcenter/powershell.aspx http://technet.microsoft.com/en-us/scriptcenter/dd793612.aspx
    Thank you Sam! This is Perfect! Makes a lot more sense now.
So if I wanted to parse to a DataTable, could I do something like the below?
function Get-ManagementPack {
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$true,
            ValueFromPipeLine=$true,
            ValueFromPipeLineByPropertyName=$true,
            Position=0)]
        [System.Data.DataTable]$ManagementPack
    )
    process {
        $ManagementPack
    }
}
    Raf Delgado

  • Table for Function modules - Changed date & Changed by.

    Hi Experts,
Help me get a function module's changed date and changed by.
For programs, we can get the mentioned information from table TADIR.

Function module change history can be retrieved from table TRDIR.
A function module is stored as a program include, which is recorded in table TRDIR.
The details of the function module can be found on the Attributes --> General Data tab in the function module.
You need to pass the include which corresponds to the FM.
That way you can find the change log.
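As a sketch, once you have the include name from the Attributes tab, the lookup is a single keyed read. CNAM/CDAT are TRDIR's created-by/created-on fields and UNAM/UDAT the last-changed-by/changed-on fields; the include name in the comment is only the common naming pattern, so verify it in the FM itself:
SELECT NAME, CNAM, CDAT, UNAM, UDAT
FROM TRDIR
WHERE NAME = :include_name;  -- e.g. 'L<function-group>U01'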
    Hope this helps.
    Regards
    Vinayak

  • HTML Data Table with cell-level inline messages?

    Hi all.
I'm not surprised by this, but I am wondering if anyone has figured out a slick/elegant way around the "problem".
    Basically, the problem I want to solve is:
how to get the corresponding validation error messages to show up in the data table cell that contains the (editable) UI component representing the "offending" value?
The naive first approach would be simply to place an inline message component just before or after the corresponding data table column's row value presentation component.
But obviously this won't work, primarily because the inline message component id will not be unique (it does not pick up the currentRow to make it unique), as well as due to the issue of binding the inlineMessage component's "for" attribute to the correct row (as well as column).
    So, the naive first approach results in a thrown exception:
    javax.servlet.ServletException: Duplicate component id: 'form1:personDataTable:firstNameValidationInlineMessage'
Now, I know I could add "virtual" columns to my data table to hold validation errors so that they correspond to the correct row/columns, and I know I can bind the rendered attribute, style, etc. for corresponding presentation elements, but then I've basically side-stepped the whole faces-message functionality built into the JSF framework.
    Also, if I have to use "virtual" columns, then I've complicated life for myself, since I ordinarily want to bind the data table to a list of bean instances (i.e. we are using hibernate persistence) or even a row set (difficulty setting a column value that is not updateable in data source, i.e. corresponds to no table/column and maybe involves dynamically adding "virtual" column to data cache?).
    Now, I know I can place a message list on the page instead (this is currently what I am doing), and I know I can change the application so that I provide a separate *single row" edit page, both of which provide a semi-adequate alternative.
    I like the first alternative (the one I'm using), because in-place editing of data table rows is so much more compact and elegant, both in terms of presentation and coding.
Using this first alternative, I am currently adding the currentRow attribute to the validation error messages so that users can see it and trace back to the corresponding row in the data table to correct the data entry.
    However, everyone, users, project drivers and developers (including myself), are grumbling a bit over such a "hackish" approach.
    Any thoughts? Am I missing something obvious?

    Well I'll be damned!
    What's done in the sample app is, of course, the intuitively obvious choice and is exactly what I started out trying to do the first time round.
    But as I mentioned before, when I first tried it (yes: I'm pretty sure I set the "for" attribute on the inlineMessage components), I got the non-unique component id exception for the inlineMessage component(s) after adding the second (but not the first) row to the page.
    Anyway, after dragging my inlineMessages to the exact same position as those in the AppModel example, now things work fine (except I think I'd like a line break before the message and to change the row/column styles so the values and messages line up properly...the look is rather ugly in the AppModel example when there are validation messages displayed).
    Not sure, but I'm thinking maybe there was an issue with where the inline message markup was placed, my first time through, relative to the data table value bound component and the column header facet?
    I was using the application view drag 'n drop feature the first time round and dragged the inlineMessage component to the spot just below the data table value bound component and hence just above the column header facet. In the AppModel example, OTOH, the inline messages are placed just after the column header facet.
    Of course, I will now try to duplicate the original exception.
    If I can (and it is an issue with placement), I will post back some sort of bug report or RFE. Otherwise, I'll post back declaring what a bone head I've been...;-)
    Anyway, thanks for the quick response, v.
    Campbell

  • OWB bugs, missing functionality and the future of OWB

    I'm working with OWB for some time now and there are a lot of rough edges to discover. Functionality and stability leave a lot to be desired. Here's a small and incomplete list of things that annoy me:
    Some annoying OWB bugs (OWB 10g 10.1.0.2.0):
    - The debugger doesn't display the output parameters of procedures called in pre-mapping processes (displays nothing, treats values as NULL). The mapping itself works fine though.
- When calling self-made functions within an expression, OWB precedes the function call with a constant "Functions." prefix, which prevents the function from being executed and results in an error message
    - Occasionally OWB cannot open mappings and displays an error message (null pointer exception). In this case the mapping cannot be opened anymore.
    - Occasionally when executing mappings OWB doesn't remember changes in mappings even when the changes were committed and deployed
    - When using aggregators in mappings OWB scrambles the order of the output attributes
    - The deployment of mappings sometimes doesn't work. After n retries it works without having changed anything in the mapping
    - When recreating an external table directly after dropping the table OWB recreates the external table but always displays both an error message and a success message.
    - In Key Lookups the screen always gets garbled when selecting an attribute as a join condition
    - Usage of constants results in aborts in the debugger
    - When you reconcile a table used in a key lookup the lookup condition sometimes changes. OWB seems to remember only the position of the lookup condition attribute but not the name.
- In the process of validating a mapping, changes in the mapping often get lost and errors occur, like 'Internal Errors' or 'Null Pointer Exceptions'.
    - When you save the definition of external tables OWB always adds 2 whitespace columns to the beginning of all the lines following 'ORGANISATION EXTERNAL'. If you save a lot of external table definitions you get files with hundreds of leading whitespaces.
    Poor or missing functionality:
    - No logging on the level of single records possible. I'd like the possibility to see the status of each single record in each operator like using 'verbose data' in PowerCenter
- The order of the attributes cannot be changed. This really pisses me off, especially when operators like the aggregator scramble the order of attributes.
    - No variables in expressions possible
    - Almost unusable lookup functionality (no cascading lookups, no lookup overrides, no unconnected lookups, only equal condition in key lookups)
- No SQL overrides in sources possible
- No mapplets, shared containers or any kind of reusable transformations
    - No overview functionality for mappings. Often it's very hard to find a leftover operator in a big mapping.
    - No copy function for attributes
    - Printing functionality is completely useless
    - No documentation functionality for mappings (reports)
    - Debugger itself needs debugging
    - It's very difficult to mark connections between attributes of different operations. It's almost impossible to mark a group of connections without marking connections you don't want to mark.
I really wonder which of the above bugs and missing functionality 'Paris' will address. From what I read about 'Paris', not many, if any. If Oracle really wants to be a competitor (with regard to functionality) to Informatica, IBM/Ascential etc., they have a whole lot of work to do, or they should purchase Informatica or another of the leading ETL tool vendors.
What do you think about OWB? Will it be a competitor for the leading ETL tools, or just a cheap database add-on that becomes widely used like SAP BW, not for reasons of technology or functionality but because it's cheap?
    Looking forward to your opinions.
    Jörg Menker

Thanks to you two for entertaining my thoughts so far. Let me respond to your latest comments.
Okay, let's not argue which one is better... when a tool is there, then there are some reasons for it to be there... But the points raised by Jorg and me are really very annoying. Overall I agree with both your and Jorg's points (and I did not think it was an argument... merely sharing our observations with each other (;^)
    The OWB tool is not as mature as Informatica. However, Informatica has no foothold in the database engine itself and as I mentioned earlier, is still "on the outside looking in..." The efficiency and power of set-based activity versus row-based activity is substantial.
    Looking at it from another way lets take a look at Microstrategy as a way of observing a technical strategy for product development. Microstrategy focused on the internals (the engine) and developed it into the "heavy-lifting" tool in the industry. It did this primarily by leveraging the power of the backend...the database and the hosting server. For sheer brute force, it was champion of the day. It was less concerned with the pretty presentation and more concerned with getting the data out of the back-end so the user didn't have to sit there for a day and wait. Now they have begun to focus on the presentation part.
    Likewise this seems to be the strategy that Oracle has used for OWB. It is designed around the database engine and leverages the power of the database to do its work. Informatica (probably because it needs to be all things to all people) has tended to view the technical offerings of the database engine as a secondary consideration in its architectural approach and has probably been forced to do so more now that Oracle has put themselves in direct competition with Informatica. To do otherwise would make their product too complex to maintain and more vendor-specific.
I am into the third data warehousing/data migration project and my previous two have been on Informatica (3 years on it). I respect your experience and your opinions... you are not a first timer. The tasks we have both had to solve and how we solved them with these tools are not necessarily the same. Could be similar in instances; could be quite different.
So the general tendency is to evaluate the tool and try to see how things that needed to be done in my previous projects can be done with this tool. I am afraid to say I am still not sure how these can be implemented in OWB. The points raised by us are probably the fallout of this deficiency. One observation that I would make is that in my experience, calls to the procedural language in the database engine have tended to perform very poorly with Informatica. Informatica's scripting language is weak. Therefore, if you do not have direct usability of a good, strong procedural language to tackle some complicated tasks, then you will be in a pickle when the solution is not well suited to a relational-based approach. Informatica wants you to do most things outside of the database (in the map primarily). It is how you implement the transformation logic. OWB is built entirely around the relational, procedural, and ETL components in the Oracle database engine. That is what the tool is all about.
If cost is the major factor for deciding a tool then OWB stands far ahead... Depends entirely on the client and the situation. I have implemented solutions for large companies and small companies. I don't use a table saw to cut cake and I don't use a penknife to fell trees. Right tool for the right job.
...that's what most managers do, without even looking at how, by selecting such a tool, they in turn make life tough for the developers. Been there many times. Few non-technical managers understand the process of tool evaluation and selection and the value a good process adds to the project. Nor do they understand the implications of making a bad choice (cost, productivity, maintainability).
The functionality of OWB stands way below Informatica. If you are primarily a GUI-based implementer, that is true. However, I have often found, when I have been brought in to fix performance problems with Informatica implementations, that the primary problem is usually the way the developer implemented it. Too often I have found that the developer understands how to implement logic in the GUI component (the Designer/Maps and Sessions) with a complete lack of understanding of how all this activity will impact load performance (they don't understand how the database engine works). For example, a strong feature in Informatica is the ability to override the default SQL statement generated by Informatica. This was a smart design decision on Informatica's part. I have frequently had to go into the "code" and fix bad joins, split up complex operations, and rip out convoluted logic to get the maps to perform within a reasonable load window. Too often these developers are only viewing the problem through the "window" of the tool. They are not stepping back and looking at the problem in the context of the overall architecture. In part Informatica forces them to do this. Another possible factor is they probably don't know better.
    "One tool...one solution"
Microstrategy until recently had been suffering from that same condition (of not allowing the developer to create the actual query). OWB engineers need to rethink their strategy on overriding the SQL.
The functionality of OWB stands way below Informatica. In some ways, yes. If you do a head-to-head comparison of the GUI, then yes. In other ways OWB is better (Informatica does not measure up when you compare it with all of the architectural features that the Oracle database engine offers). They need to fix the bugs and annoyances though.
.. but even the GUI of Informatica is better than OWB and gives the developer some satisfaction of working in it. Believe me, I feel your pain. On the other hand, I have suffered from Informatica bugs. Ever do a port from one database engine to another just to have it convert everything into multi-byte? Ever have it re-define your maps to parallel processing threads when you didn't ask it to?
    Looking at the technical side of things I can give you one fine example ... there is no function in Oracle doing to_integer (to_number is there) but Informatica does that ... Hmm-m-m...sorry, I don't get the point.
The style of ETL approach of Informatica is far more appealing. I find it unnecessarily over-engineered.
    OWB has two advantages : It is basically free of cost and it has a big brother in Oracle.
    It is basically free of cost...When you are another "Microsoft", you can throw your weight around. The message for Informatica is "don't bite the hand that feeds you." Bad decisions at the top.
    Regards,
    Dan Phillips

  • Actual Goods Movement Date is missing in 2LIS_11_V_SSL

    Hi
Daily we load the order delivery data from the 2LIS_11_V_SSL DataSource into the global-level DSO (ZTEST), which is a snapshot of the ECC production system. That means only the latest data is available in the ZTEST DSO.
Now, for some of the sales orders, the actual goods movement date is missing in the ZTEST DSO, so I need to investigate why the actual goods movement date is missing there.
I want to test the DataSource (2LIS_11_V_SSL) to see whether it brings the actual goods movement date or not.
How can I test DataSource 2LIS_11_V_SSL? Do I need to populate the set-up table with a particular sales order number?
What are the steps for that? Please let me know.
    Regards
    Mohammed

    Hi,
Please check the PSA data first to see whether the field is empty, because if you have not added the correct key fields in the DSO, you may not get the changes. So check the PSA first. If it is empty there, check the data in the ECC system. If you use the standard field for the actual goods movement date, please check with the functional consultant whether the logic below (extracted from help.sap.com) is followed for those sales orders missing the date.
    MCEX_I_WADAT
    Actual goods movement date of a delivery
    Definition
    This field contains the actual goods movement date of a delivery of an item order.
    Use
    The field is only used in the extract structure MC11V_0SSL (DataSource 2LIS_11_V_SSL).
    There, this date is determined for one delivery at a time, and is assigned to all data records for this delivery.
    In these data records, you are dealing with confirmed quantities of order schedule lines, which are assigned a corresponding delivery item as delivery.
    The date is only assigned if it was entered in the associated delivery.
If the logic is correct for the sales order, then you need to reload the data. Fill the set-up table with the missing sales orders, and run RSA3 with the same selections after filling the set-up table. Check the data again here, and if you find date values, you can run a repair full request to load the data.
If you still miss the date, as I said earlier, please check with the functional people whether any business logic is missing.
    Thanks.

  • Generic Data Source with Function Module data mismatch in BI

    Hi All,
I'm using a generic DataSource with a function module. When I execute the function module (which I created), I get 16,000 records, but when I run the extractor (in RSA3) I get a different number of records (in fact, more).
When I run the InfoPackage in BI, I get even more records than I got executing the function module,
and a single record is divided into 2 records on the BI side (not all of the records). How can that be possible?
Is there anything missing from my explanation of the issue?
If it is clear, please help me out.
    Thanks n Regards,
    ravi.

The DataSource framework calls the function module several times:
1. the initialization call,
2. then several more times, until you "raise no_more_data".
Check your coding: have you refreshed the necessary internal tables?
    Sven

  • Data Load - Data / Table Mapping Columns Not Showing

    Hi,
Using the new 4.1 functionality I have created a data load wizard.
Everything seems to have been created OK; however, when I run the wizard and get to the Data / Table Mapping stage (2nd page), the column names LOV contains no list of values.
The table I am trying to load into is in another schema, but all the grants are there so that schema can view it.
Any idea what I need to do, or what I have missed, so the columns can be viewed?
    Thanks in advance.

    Hi,
You have to log in as admin, and there should be an option to add schemas; once the schema is added, it should appear in the LOV.
This link should help:
Parsing Schema

  • Derive found flag in SQL with where clause using TABLE(CAST function

    Dear All,
    Stored procedure listEmployees
    ==========================
CREATE OR REPLACE TYPE STRING_ARRAY AS VARRAY(8000) OF VARCHAR2(15);
empIdList STRING_ARRAY;
countriesList STRING_ARRAY;
SELECT EMP_ID, EMP_COUNTRY, EMP_NAME, FOUND_FLAG_
FROM EMPLOYEE
WHERE EMP_ID IN
(SELECT * FROM TABLE(CAST(empIdList AS STRING_ARRAY)))
AND EMP_COUNTRY IN
(SELECT * FROM TABLE(CAST(countriesList AS STRING_ARRAY)));
    =================
    I have a stored procedure which lists the employees using above simple query.
Here I am using the TABLE CAST function to find the list of employees in one go
instead of looping through each and every employee.
Everything was fine until requirements forced me to get the FOUND_FLAG as well.
I wanted to derive the FOUND_FLAG using the rownum, rowid, or decode functions,
but I was not successful.
Can you please suggest whether there is any intelligent way to say whether a
row is found for the given parameters in the where clause?
If not, I may have to loop through each set of empIdList, countriesList
and find the values individually just to set a flag. In that approach I can't use
the TABLE CAST function, which is efficient, I suppose.
Note that STRING_ARRAY is a VARRAY. It is very big, and this procedure
is supposed to handle large sets of data.
    Thanks In advance
    Regards
    Charan
Edited by: kmcharan on 03-Dec-2009 09:55

If your query returns results, you have found them... so your "FOUND" flag might be a constant.
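If the requirement is to also report which requested combinations were not found, one sketch is to drive from the arrays and outer-join the table. This assumes every empIdList x countriesList pair is of interest, which matches the two independent IN-lists of the original query; COLUMN_VALUE is Oracle's pseudo-column for rows of a scalar collection:
SELECT ids.COLUMN_VALUE AS emp_id,
       ctry.COLUMN_VALUE AS emp_country,
       e.EMP_NAME,
       CASE WHEN e.EMP_ID IS NULL THEN 'N' ELSE 'Y' END AS found_flag
FROM TABLE(CAST(empIdList AS STRING_ARRAY)) ids
CROSS JOIN TABLE(CAST(countriesList AS STRING_ARRAY)) ctry
LEFT JOIN EMPLOYEE e
  ON e.EMP_ID = ids.COLUMN_VALUE
 AND e.EMP_COUNTRY = ctry.COLUMN_VALUE;
Rows with found_flag = 'N' are the parameter pairs with no matching employee, so no per-row looping is needed.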
