Optimized SQL generation?

Hi,
I am working with Crystal Reports 2008 SP3 together with an Oracle database. I am using several tables and views; my reports then have several formula fields where I make some calculations and restrictions.
When I look at the SQL statement from the database menu I see that CR constructs the SQL statement the way I made the joins in the Database Expert.
Does CR then send an optimized SQL statement to the database, including my restrictions and conditions from my formula fields, or does it simply fetch the data from the database and do the filtering afterwards?
Thanks!

IanWaterman wrote:
> If you add
>
> = {@yourformula}
> Provided your formula is not print-time (i.e. using variables or summaries) it will resolve and pass to the database. If not, all data will be brought back to Crystal for filtering.
I have e.g. formulas like this:
@restriction1
{mytable1.col1} = 8 and {mytable1.col3} = 7
Are such formulas pushed to the database server, or does CR evaluate them locally, retrieving more data from the database than necessary?
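A rough way to picture the difference between server-side and client-side filtering (a minimal sketch in Python with SQLite; the table and values are made up and only illustrate the idea, not Crystal Reports internals):

```python
import sqlite3

# Hypothetical stand-in for {mytable1}; names and values are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable1 (col1 INTEGER, col3 INTEGER)")
conn.executemany("INSERT INTO mytable1 VALUES (?, ?)",
                 [(8, 7), (8, 9), (5, 7), (8, 7)])

# Server-side filtering: the restriction travels in the WHERE clause,
# so only matching rows cross the wire.
pushed = conn.execute(
    "SELECT col1, col3 FROM mytable1 WHERE col1 = 8 AND col3 = 7").fetchall()

# Client-side filtering: every row is fetched, then filtered locally --
# this is what happens when a formula cannot be translated to SQL.
all_rows = conn.execute("SELECT col1, col3 FROM mytable1").fetchall()
local = [r for r in all_rows if r[0] == 8 and r[1] == 7]

print(len(all_rows), len(pushed))  # 4 rows fetched vs 2 rows fetched
```

Either way the final result is the same; the difference is how many rows have to travel from the database to the client.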

Similar Messages

  • Using the Unpivot Operator when in Oracle8i PL/SQL Generation Mode

    Hi,
    when you validate a mapping which contains an unpivot operator and the PL/SQL Generation Mode is set to Oracle8i (because you're using an Oracle8i 8.1.7 target database), the following error is raised:
    VLD-3127: Cannot generate code for UNPIVOT because the unpivot operator is only supported starting with the Oracle9i version of the database. To resolve this, set the PL/SQL Generation Mode to Oracle9i in the configuration of the Oracle module that contains this mapping.
    When you set the PL/SQL Generation Mode to Oracle9i and after successfully validating the mapping generate the code you can see that within the generated code case-statements are used for the unpivot translation.
    like:
    MIN(CASE WHEN "AGG_YEAR_MONTH" = 200301 THEN "NO_CALLS" ELSE NULL END) "JAN2003_CALLS"
    And as Oracle8i doesn't support CASE statements, the validation error is understandable. But the generated code can easily be modified to use DECODE calls instead of the CASE statements, which
    results in:
    MIN(DECODE("AGG_YEAR_MONTH", '200301', "NO_CALLS", NULL)) "JAN2003_CALLS"
    And the generated code works fine in the Oracle8i 8.1.7 target database.
    But now my question:
    Does someone know whether it is possible to create a custom unpivot operator in OWB 9.2.0 which will generate the code using DECODE instead of CASE statements? And if so, how can I create such a custom operator?
    Many thanks in advance!
    Remco

    Hi,
    The reason why OWB does not generate DECODE statements is that the generated code needs to support both set-based and row-based operation modes. DECODE is valid in SQL (set-based), but not in PL/SQL (row-based).
    Oracle 8i (8.1.7) does in fact support CASE, but only in SQL statements, not in PL/SQL.
    Have you considered creating a view to perform the unpivot operation using DECODE? It should also be possible to solve your problem using a function.
    Regards,
    Roald
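    For reference, the CASE translation OWB generates behaves like this sketch (SQLite in Python as a stand-in; the table and values are invented, and only the CASE form is shown since DECODE is Oracle-specific):

```python
import sqlite3

# Invented monthly call counts; AGG_YEAR_MONTH/NO_CALLS mirror the names above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calls (AGG_YEAR_MONTH INTEGER, NO_CALLS INTEGER)")
conn.executemany("INSERT INTO calls VALUES (?, ?)",
                 [(200301, 120), (200302, 95), (200303, 130)])

# The generated pivot column: non-matching rows yield NULL, which MIN ignores.
case_sql = """
    SELECT MIN(CASE WHEN AGG_YEAR_MONTH = 200301
                    THEN NO_CALLS ELSE NULL END) AS JAN2003_CALLS
    FROM calls
"""
jan_calls = conn.execute(case_sql).fetchone()[0]
print(jan_calls)  # 120
```

    The Oracle8i-compatible MIN(DECODE(...)) form computes exactly the same value; only the conditional syntax differs.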

  • Optimized sql not properly generated when using SAP tables

    Hi Experts
    We are using BODS 4.1 sp2.
    We have a simple dataflow where we pull data from SAP r3 using direct download and push it in our database.
    Basically its like this:
    SAP Table-->Query Transform-->Oracle Database
    Inside the query transform we have applied some conditions in the where clause, which are combinations of 'or' and 'and' operators.
    However, when we generate the optimized SQL, we observe that the conditions in the where clause are not included in the optimized query.
    But if we replace all conditions with only 'or' or only 'and' conditions, the optimized SQL query with the condition is generated.
    Somehow the optimized SQL with conditions is not generated if the where clause has a mix of 'and' and 'or' conditions.
    The issue is only observed for SAP tables.
    However, the same dataflow in BODS 4.0 generates an optimized SQL containing the mixed 'and' and 'or' conditions in the where clause.
    Please help me on this.

    Dear Shiva
    Let me explain this in detail to you:
    We have a simple dataflow where we push data from the sap table directly into our oracle database.
    The dataflow has a query transform where we have some conditions to filter the data.
    The method used to pull data from R3 is direct download.
    System Used:
    BODS 4.2 SP1
    SAP R3 620 release(4.7)
    The where clause has mixed 'and' and 'or' operators. The problem is that this where clause is not getting pushed into the optimized SQL.
    E.g.: Table1.field1 = some value and Table1.field2 = some value or Table1.field3 = some value.
    If this is the condition in the where clause, the where clause does not appear in the optimized SQL.
    Also, the conditions in the where clause are all from the same table.
    However, if the conditions in the where clause are either all 'and' or all 'or', the condition gets pushed down and appears in the optimized SQL.
    But if there is a mix of 'and' and 'or', it fails.
    The same was not the case when we used BODS 4.0 with R3 620 (4.7).
    We investigated further and used R3 730 with BODS 4.2. With R3 720, the where condition was getting pushed into the optimized SQL for mixed 'and'/'or' predicates.
    SAP support confirmed the same; they were able to reproduce the issue.
    We will use an ABAP dataflow and see if it resolves the issue.
    Regards
    Ankit
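    For what it's worth, one reason mixed predicates are trickier for a pushdown engine is SQL operator precedence: AND binds tighter than OR, so the generated where clause must be parenthesized correctly. A minimal illustration (SQLite in Python; the table and values are made up and unrelated to BODS itself):

```python
import sqlite3

# Made-up rows; f1/f2/f3 echo the Table1.field1/2/3 example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (f1 INTEGER, f2 INTEGER, f3 INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, 2, 3), (1, 9, 3), (9, 9, 3)])

def count(where):
    return conn.execute("SELECT COUNT(*) FROM t WHERE " + where).fetchone()[0]

# AND binds tighter than OR, so these two are equivalent...
mixed    = count("f1 = 1 AND f2 = 2 OR f3 = 3")
explicit = count("(f1 = 1 AND f2 = 2) OR f3 = 3")
# ...while grouping the OR first gives a different result set.
grouped  = count("f1 = 1 AND (f2 = 2 OR f3 = 3)")

print(mixed, explicit, grouped)  # 3 3 2
```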

  • Internal memory error during SQL generation. (QP0002)

    Post Author: Rajesh Kumar
    CA Forum: WebIntelligence Reporting
    Hi,
    I developed one report in BO 5.1 (report size: 13 MB) and migrated it to BO XI R2.
    After I migrated this report to BO XI R2, it worked perfectly in DESKI and also in WEBI.
    But for the past few days (nearly 1 week) this report is not working in WEBI, though it works perfectly in DESKI. In WEBI it shows the error message "Internal memory error during SQL generation. (QP0002)".
    I have a PDF documentation of BO error message listings, and in that documentation I found the following:
    Internal memory error during SQL generation. (QP0002)
    Cause This error occurs when there is no longer enough memory to generate the SQL.
    Action You should close other applications and then rerun the query.
    I tried this also...
    I closed all other applications and refreshed the report, but the same error keeps coming in WEBI.
    The report works in DESKI but not in WEBI; I don't know how to rectify this problem.
    Can anyone help me rectify this, please?
    Thanks in advance
    Rajesh Kumar

    Hi,
    I investigated further and if the previous solution doesn't help you to resolve the issue please test the below mentioned solution.
    When several contexts are possible for a query, the system tests whether they produce the same set of tables. If they are identical, it is not necessary to prompt the user; this is the default behavior. But for some particular universes, the designer defines different contexts with the same tables but with a different set of joins. In that case the contexts are compared including their joins, and InfoView fails with this error.
    Resolution
    1. Import the universe.
    2. Modify the following parameter:
    COMPARE_CONTEXTS_WITH_JOINS = No
    3. Export the universe.
    4. Open the Desktop Intelligence report in InfoView and refresh it.
    It will refresh successfully.
    Regards,
    Sarbhjeet Kaur

  • PL/SQL Generation Mode by default

    Hello
    We are switching to OWB 11.1, so I have a simple question I can't find a simple answer to. I have to check all properties of modules and mappings. My question is: at module level -> Configure... -> 'PL/SQL Generation Mode', besides the various Oracle DB versions I can also choose 'Default'. What does 'Default' mean in that case? Is this 'Default' related to the default Location set in the module's Location (Identification)? When creating a new Location you can set the DB version behind it, so the module could get that DB version from the associated location(?).
    We are also switching the DB to 11.1, and the version set in the Location is then 11.1, which makes sense. If I associate this Location inside the module's configuration and set the PL/SQL Generation Mode in the module configuration to 'Default', how will the PL/SQL Generation Mode act?
    Any hint is appreciated.
    Tyger

    The Default value will use the version defined for the module's configured location. It impacts code generation in some scenarios.
    Leaving it at the default is fine in general.
    Cheers
    David

  • SQL generation failed see your business objects administrator (Error:WIS 00

    I have a user who refreshes reports frequently in WEBI.
    After 5-10 minutes of inactivity it will toss out this error:
    SQL generation failed. See your business objects administrator. Error: WIS 00013 Error: INF

    Hi Rohit,
    could you please test the following settings to resolve the issue.
    1. CORBA
    -requestTimeout
    This is the default CORBA timeout, which can be set on any BOE service through command lines in CCM. This switch is in milliseconds.
    Registry CORBA Timeout
    This is a default hard-coded 10-minute timeout for CORBA. WebI and DeskI are integrated into this regkey:
    (1) Go to HKEY_LOCAL_MACHINE -> SOFTWARE -> Business Objects -> Suite 11.5 -> CER.
    (2) Modify ConnectionTimeout from 600000 to a higher value.
    (3) Restart all services in CCM.
    2. InfoView Java Application (for example, Tomcat) Session Timeout. If a user idles in InfoView for longer than this timeout, the session will be killed automatically. The default session timeout is 20 minutes. To change the default session timeout for InfoView:
    (1) Go to "…\Tomcat\webapps\businessobjects\enterprise115\desktoplaunch\WEB-INF\web.xml".
    (2) Modify web.xml. Scroll to the following section:
    <session-config>
    <session-timeout>20</session-timeout>
    </session-config>
    (3) Edit the <session-timeout> value to the desired value. Save the web.xml file.
    (4) Restart Tomcat.
    If you find that sessions are not getting released when users log out, here are two things to check in web.xml:
    (1) If the below configuration is set to true, users will have another session allocated to them once they begin to move around in InfoView after a timeout. If it is
    set to false, the user has to re-logon after timing out.
    <context-param>
    <param-name>logontoken.enabled</param-name>
    <param-value>true</param-value>
    </context-param>
    (2) All sessions will be cleaned up after they time out if the below configuration is uncommented. If you are having issues with sessions staying in the CMS, try uncommenting the below line and
    restart tomcat.
    <listener>
    <listener-class>com.businessobjects.sdk.ceutils.SessionCleanupListener
    </listener-class>
    </listener>
    3. Tomcat
    ConnectionTimeout
    The default value is 20000 milliseconds. To disable it, set the value to -1 as below:
    (1) Go to "…\Tomcat\conf\server.xml".
    (2) Find the line "Connector on port 8080".
    (3) Modify connectionTimeout="20000" to connectionTimeout="-1".
    4. Universes
    Execution Timeout
    (1) Go to Universe Designer -> File -> Parameters -> Controls.
    (2) Check the value of "Limit execution time to".
    5. WebI
    Connection Timeout
    The number of minutes before an idle connection to the WebI Report Server will be closed. When using the Java Report Panel, it is controlled by this timeout switch:
    (1) Log into CMC
    (2) Go to Servers -> WebI Report Server
    (3) Set Connection Timeout
    WebI Report Timeout
    WebI designers have the ability to set a limit on how long a query can be run on a database before the query is stopped:
    (1) Edit/Create a new WebI report using Java Report Panel.
    (2) Choose Edit Query -> Properties.
    (3) Now deselect/increase the "max retrieval time" value.
    Swap Timeout
    If you use the Java Report Panel and leave it idle for more than 5 minutes (default), you will notice that you can no longer save your WebI report into the CMS, or your report will take longer
    to generate. This is because of the SwapTimeOut setting, which controls the flushing interval of WebI temp files:
    (1) HKEY_LOCAL_MACHINE\SOFTWARE\Business Objects\Suite 11.5\default\WebIntelligence\Server\Admin\SwapTimeOut
    Regards,
    Sarbhjeet Kaur

  • Optimal SQL

    hi,
    when I read about performance tuning on orafaq.com, they mention the following with respect to SQL:
    Application Tuning:
    Experience shows that approximately 80% of all Oracle system performance problems are resolved by coding optimal SQL.
    I would like to know: is there any document with guidelines (dos and don'ts) on writing optimal SQL?
    regards

    Hi,
    Here are a few things I try to keep in mind when writing SQL, and some common mistakes I've noticed.
    Optimal SQL starts before you even write a query; it starts with a good table design.
    Normalize your tables.
    Use the right datatype. A common mistake is to use a VARCHAR2 or NUMBER column when a DATE is appropriate.
    Use SQL instead of PL/SQL, especially instead of PL/SQL that does DML one row at a time. MERGE is a very powerful tool in pure SQL.
    Help the optimizer.
    Write comparisons so that an indexed column is alone on one side of the comparison operator. For example, if you're looking for orders that are more than 60 days old, don't say
    WHERE   SYSDATE - order_date > 60    -- *** INEFFICIENT ***
    Write it this way instead:
    WHERE   order_date < SYSDATE - 60
    Some tools are inherently slow. These include:
    - SELECT DISTINCT
    - UNION
    - Regular expressions
    - CONNECT BY
    All of these are wonderful, useful tools, but they have a price, and you can often get the exact results you need faster with some weaker tool, even if it requires a little more code.
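    The "indexed column alone on one side" advice can be sketched with SQLite's EXPLAIN QUERY PLAN (an illustrative stand-in; Oracle's optimizer behaves analogously when a function or expression wraps an indexed column):

```python
import sqlite3

# Hypothetical orders table with an index on order_date.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, order_date TEXT)")
conn.execute("CREATE INDEX idx_od ON orders(order_date)")

def plan(sql):
    # Concatenate the plan's detail column for easy inspection.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Arithmetic on the column hides it from the index -> full table scan.
slow = plan("SELECT * FROM orders "
            "WHERE julianday('now') - julianday(order_date) > 60")

# Column left bare on one side -> the optimizer can range-scan the index.
fast = plan("SELECT * FROM orders "
            "WHERE order_date < date('now', '-60 days')")

print("USING INDEX" in slow, "USING INDEX" in fast)  # False True
```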

  • Internal Memory Error During SQL Generation

    Hi,
    Please find below the details:
    Issue: I am using BO XI R3 DeskI.
    When I try to refresh/edit the Dataprovider of a report, it says "Internal Memory Error During SQL Generation"
    I am sure that there is no case of multiple contexts with this report, and it's a very simple one too.
    I get this error in WebI as well
    Workaround: restarting the Desktop Intelligence Processing Server solves the issue for some time, but then it reoccurs, for some other report, at a later time.
    Could anyone please explain the root cause of this and is there a permanent solution to this problem.
    Thanks
    Prabhat Jha

    Hi,
    I investigated further and if the previous solution doesn't help you to resolve the issue please test the below mentioned solution.
    When several contexts are possible for a query, the system tests whether they produce the same set of tables. If they are identical, it is not necessary to prompt the user; this is the default behavior. But for some particular universes, the designer defines different contexts with the same tables but with a different set of joins. In that case the contexts are compared including their joins, and InfoView fails with this error.
    Resolution
    1. Import the universe.
    2. Modify the following parameter:
    COMPARE_CONTEXTS_WITH_JOINS = No
    3. Export the universe.
    4. Open the Desktop Intelligence report in InfoView and refresh it.
    It will refresh successfully.
    Regards,
    Sarbhjeet Kaur

  • Sub data flow (Optimized SQL) execution order ?

    I am looking for a solution in Designer of Data Services XI 3.2.
    Is there a way to specify in a Data Flow
    (without using the 'Transaction control' options or an Embedded Data Flow)
    the order in which sub data flows (Optimized SQL) are executed?
    Thank you in advance.
    Georg

    First, if you are using MDX to calculate the value of C - don’t.  MDX script logic can be extremely inefficient from a processing and memory utilization standpoint vs. SQL logic even if the syntax is shorter.
    Logic executes in the order you place the code in the script.  You have three commit blocks and they would execute in that order.  I notice you don't have a commit after the calculation for C.  You should always put a commit statement after each calculation section or you can get uncommitted data even though there is an implied commit after the last line of code executes.  Don't get in the habit of relying on this.
    You can see the logic logs from the temp folders on the file server as suggested above but they will mainly give you the SQL queries generated which can be helpful in debugging scoping issues but they can be hard to sift through.
    I recommend trying putting a commit statement after your calculation of C and that will probably resolve the issue.  I also strongly suggest you switch the calculation to SQL logic to avoid performance issues when you start having these calculations run under high concurrency or on larger volumes of data than what you're probably testing with.

  • Guidelines to create an optimized SQL queries

    Dear all,
    what is the basic strategy to create an optimized query? The SQL and PL/SQL FAQ shows how to post a question about a query that needs to be optimized, but it doesn't tell how to go about optimizing it. The Performance Tuning Guide for 11g Release 2 shows how access paths work and how to read an explain plan, but I cannot really find basic guidelines for optimizing queries in general.
    Say I have a complex query that I need to optimize. I gather all the info that the thread advises, like optimizer info, explain plan, tkprof dump, and so on. I determine the nature of the data being queried and their indexes. At this point, what else do I need to do? I cannot just post my query to OTN every time I bump into a slow query.
    Best regards,
    Val

    I think that some of the most important guidelines are:
    1. ALWAYS create documentation that explains
    a. what the query is supposed to do - 'selects all records for employees that have stock options that are due to expire within 60 days'
    b. any special constructs/tricks used in the query. For example, if you added one or more hints to the query, explain why. If the query has
    a complex/compound CASE or DECODE expression, add a one-line comment explaining what it does.
    2. Gather as much information about the context the query runs in as possible.
    a. how much source data are we dealing with? Large tables or small?
    b. how large is the result set expected to be? A few records, thousands, millions?
    c. are they regular tables or partitioned tables?
    d. are the tables local or on remote servers?
    e. how often is the query executed? once in a blue moon or concurrently by large numbers of users?
    3. For existing queries always confirm that it is the query that actually needs to be tuned before trying to tune it. Too many times I have seen people trying to tune a query when it is actually another part of the process that needs to be tuned.
    4. Always test queries using realistic data - data and environment as close as possible to that which will be used in production.
    5. Always run an explain plan before, during and after you test the query. Save the final plan in a repository so that it is available for comparison if a problem later occurs that you suspect might be related to the query. It is easier to diagnose a possible degradation of performance if you have a previous execution plan to compare the current one to.
    6. Use common sense when writing/evaluating your query - if it looks too complicated, it probably is. If you have trouble understanding or testing it, the next person that comes along will probably have even more trouble.
    Hope that adds some to what the docs provide.

  • Help Needing in optimizing SQL

    I need help optimizing the following SQL.
    Following are the schema elements -
    cto_xref_job_comp - Contains the Job Data
    cto_mast_component - Contains component
    cto_mast_product - Contains product info
    cto_mast_model - Contains model info
    cto_xref_mod_prod - Contains model and product assoiciation
    cto_mast_model_scan - Contain the Scan order for each model family (the mod_id(should be renamed) refers to mod_fam_id on the cto_mast_model table).
    Here is what I am trying to achieve
    I want all the ATP components whose travel card order (on cto_mast_model_scan) is > 0, and in addition I need the phantoms that do not have any ATP components under them, if they do not appear on the ATP component list.
    SELECT f.travelcard_order,
           b.job_number,
           b.product_code,
           b.line_number,
           b.component_type,
           b.component_item_number,
           b.parent_phantom,
           b.item_type,
           a.comp_id,
           a.comp_type_id,
           a.comp_desc_short,
           b.batch_id,
           b.quantity_per_unit,
           a.comp_notes,
           b.COMPONENT_ITEM_DESCRPTION
    FROM   cto_xref_job_comp b,
           cto_mast_component a,
           cto_mast_product c,
           cto_xref_mod_prod d,
           cto_mast_model e,
           cto_mast_model_scan f
    WHERE  b.job_number = 'CTO2499814001'
    AND    b.batch_id = 21
    AND    b.item_type = 'ATP'
    AND    b.parent_phantom = a.comp_desc_short
    AND    b.product_code = c.prod_desc_short
    AND    c.prod_id = d.prod_id
    AND    d.mod_id = e.mod_id
    AND    e.mod_fam_id = f.mod_id
    AND    a.comp_type_id = f.comp_type_id
    AND    f.travelcard_order > 0
    UNION
    SELECT DISTINCT
           f.travelcard_order,
           b.job_number,
           b.product_code,
           b.line_number,
           b.component_type,
           NULL,        -- placeholder for component_item_number (missing in the original; added so the UNION column counts match)
           b.parent_phantom,
           b.item_type,
           a.comp_id,
           a.comp_type_id,
           a.comp_desc_short,
           b.batch_id,
           1,           -- quantity_per_unit
           a.comp_notes,
           NULL         -- placeholder for COMPONENT_ITEM_DESCRPTION (same reason)
    FROM   cto_xref_job_comp b,
           cto_mast_component a,
           cto_mast_product c,
           cto_xref_mod_prod d,
           cto_mast_model e,
           cto_mast_model_scan f
    WHERE  b.job_number = 'CTO2499814001'
    AND    b.batch_id = 21
    AND    b.item_type IS NULL
    AND    SUBSTR(b.parent_phantom, 1, 1) = 'C'
    AND    b.parent_phantom = a.comp_desc_short
    AND    b.parent_phantom NOT IN (SELECT parent_phantom
                                    FROM   cto_xref_job_comp
                                    WHERE  job_number = b.job_number
                                    AND    batch_id = b.batch_id
                                    AND    item_type = 'ATP')
    AND    b.product_code = c.prod_desc_short
    AND    c.prod_id = d.prod_id
    AND    d.mod_id = e.mod_id
    AND    e.mod_fam_id = f.mod_id
    AND    a.comp_type_id = f.comp_type_id
    AND    f.travelcard_order > 0

    Here is a small example.
    DECLARE
         my_table VARCHAR2(10) := 'DUAL';
         my_value INTEGER;
    BEGIN
         EXECUTE IMMEDIATE 'SELECT 1 FROM ' || my_table INTO my_value;
         DBMS_OUTPUT.PUT_LINE(my_value);
    END;
    You can use your cursor value in place of my_table.
    Thanks,
    Karthick.
    http://www.karthickarp.blogspot.com/

  • Dynamic sql generation using meta data

    I am trying to generate a SQL statement which selects all the columns of a table (around 120 columns). I don't want to write select * from tablename,
    so I am trying to write it like this, but I am having a hard time:
    select 'select '||column_name||',' from user_tab_columns where table_name='MBR_RPT_CONT'
    I would like the output: select col1,col2,col3,col5,col6 from MBR_RPT_CONT;
    Can you help me?

    In 11gR2, e.g.:
    SQL> select 'select ' || listagg(column_name, ',') within group (order by 1)
                || ' from ' || table_name sel_stm
         from cols
         where table_name = 'DEPT'
         group by table_name;
    SEL_STM
    ---------------------------------
    select DEPTNO,DNAME,LOC from DEPT
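    The same metadata-driven idea, sketched against SQLite's catalog (purely illustrative; a small demo table stands in for MBR_RPT_CONT, and PRAGMA table_info plays the role of user_tab_columns):

```python
import sqlite3

# Demo table standing in for MBR_RPT_CONT.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (deptno INTEGER, dname TEXT, loc TEXT)")

# PRAGMA table_info yields (cid, name, type, ...) in column order.
cols = [row[1] for row in conn.execute("PRAGMA table_info(dept)")]
sql = "select " + ",".join(cols) + " from dept"
print(sql)  # select deptno,dname,loc from dept

conn.execute(sql)  # the generated statement is valid
```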

  • SQL Generation Error after converting eFashion.unv to eFashion.unx

    One of the first things I tried to do with SAP BusinessObjects Business Intelligence 4.0 was convert the built-in eFashion universe.  Unfortunately, the UNX generates unresolvable outer joins, even though the data foundation layer does not contain any.  I am using BI 4.0 SP02 Fix 4.  Any ideas?
    Here is what a query on the original eFashion.unv looks like for Year, State, Store name, and Revenue.
    SELECT
    Calendar_year_lookup.Yr,
    Outlet_Lookup.State,
    Outlet_Lookup.Shop_name,
    sum(Shop_facts.Amount_sold)
    FROM
    Calendar_year_lookup,
    Outlet_Lookup,
    Shop_facts
    WHERE
    ( Outlet_Lookup.Shop_id=Shop_facts.Shop_id )
    AND
    ( Shop_facts.Week_id=Calendar_year_lookup.Week_id )
    GROUP BY
    Calendar_year_lookup.Yr,
    Outlet_Lookup.State,
    Outlet_Lookup.Shop_name
    And here's the SQL generated by the converted eFashion.UNX.  Notice the outer joins in the FROM clause even though the universe doesn't contain outer joins.
    SELECT
    Calendar_year_lookup.Yr,
    Outlet_Lookup.State,
    Outlet_Lookup.Shop_name,
    sum(Shop_facts.Amount_sold)
    FROM Calendar_year_lookup,
    Outlet_Lookup,
    Shop_facts,
    { oj Outlet_Lookup LEFT OUTER JOIN Shop_facts ON Outlet_Lookup.Shop_id=Shop_facts.Shop_id },
    { oj Shop_facts LEFT OUTER JOIN Calendar_year_lookup ON Shop_facts.Week_id=Calendar_year_lookup.Week_id }
    GROUP BY Calendar_year_lookup.Yr, Outlet_Lookup.State, Outlet_Lookup.Shop_name
    How should I resolve the issue so correct SQL is generated by the Information Design Tool 4.0?

    Correct,
    Miguel has opened a dialog with the Sample Report Team and I'm not sure what the answer was. Currently there are no samples shipped with BOE 4.0, so technically there is no issue...
    All I can suggest is that you use your own universe, or try to fix it yourself if that's possible. I don't think they are planning on shipping samples with the GA release. They may eventually, but not sure at this time.
    Don

  • The database sql generation parameters file could not be loaded. (MS Analysis Services 2012, OLEDB, OLAP)

    I am facing this error while creating a report in BI launch pad. It works fine in Rich Client. I am using BO 4.1 and MS SQL Server 2012.
    Thanks
    Riaz

    Hi Amit,
    Do I need to create a DSN for the cube or for the database?
    I am working on a SQL Server cube.
    In UDT I have selected OLE DB to create the connection.
    While creating the DSN, what do I have to select in the list?

  • Dynamic SQL generation

    Hi All,
    As part of the ODI transforms, is it possible to create dynamic SQL at run time, dependent on the incoming data, which in turn would look at some kind of mapping table and create the SQL dynamically?
    Is this feature possible in ODI?
    Thanks in advance for your input.

    Hi Cezar Santos,
    Thanks for the reply. The scenario is given below:
    Logic to dynamically map chartfield to segment.
    A user would have access to a GUI domain value map that would allow them to map BU/Chartfield to Ledger/Segment.
    Edge App 1 (PSFT)                           AIA Common Key   Edge App 2 (Retail)
    Business Unit (from SETID)  Chartfield      Key              Segment    Ledger
    US001                       ACCOUNT         1                SEGMENT1   Ledger1
    US001                       DEPARTMENT      2                SEGMENT2   Ledger1
    US001                       PRODUCT         3                SEGMENT3   Ledger1
    US001                       OPERATING_UNIT  4                SEGMENT4   Ledger1
    US002                       OPERATING_UNIT  5                SEGMENT2   Ledger2
    US002                       ACCOUNT         6                SEGMENT3   Ledger2
    US002                       DEPARTMENT      7                SEGMENT4   Ledger2
    US003                       OPERATING_UNIT  8                SEGMENT1   Ledger3
    The transformation happens inside the XFORM view by using an alias to select the columns from RETAIL_STG. Once the aliases are applied, the view can be mapped one to one with the PSFT_STG table.
    Use ODI to select unique Ledgers from RETAIL_STG.
    Execute transformation and transportation of data.
    Move to next Ledger and repeat.
    For US001/Ledger1:
    SELECT LEDGER AS BUSINESS_UNIT, SEGMENT1 AS ACCOUNT, SEGMENT2 AS DEPARTMENT, SEGMENT3 AS PRODUCT, SEGMENT4 AS OPERATING_UNIT FROM RETAIL_STG WHERE LEDGER = 'Ledger1'
    For US002/Ledger2:
    SELECT LEDGER AS BUSINESS_UNIT, SEGMENT3 AS ACCOUNT, SEGMENT4 AS DEPARTMENT, ' ', SEGMENT2 AS OPERATING_UNIT FROM RETAIL_STG WHERE LEDGER = 'Ledger2'
    For US003/Ledger3:
    SELECT LEDGER AS BUSINESS_UNIT, ' ', ' ', ' ', SEGMENT1 AS OPERATING_UNIT FROM RETAIL_STG WHERE LEDGER = 'Ledger3'
    Kindly provide your thoughts. If you have questions, please let me know.
    Thanks.
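    A hypothetical sketch of generating those per-ledger statements from a mapping structure (Python; the dict mirrors the mapping table above, and the fallback to a blank literal follows the US003 example):

```python
# Mapping mirrored from the BU/Chartfield-to-Segment table in the post.
mapping = {
    "Ledger1": {"ACCOUNT": "SEGMENT1", "DEPARTMENT": "SEGMENT2",
                "PRODUCT": "SEGMENT3", "OPERATING_UNIT": "SEGMENT4"},
    "Ledger2": {"ACCOUNT": "SEGMENT3", "DEPARTMENT": "SEGMENT4",
                "OPERATING_UNIT": "SEGMENT2"},
    "Ledger3": {"OPERATING_UNIT": "SEGMENT1"},
}

CHARTFIELDS = ["ACCOUNT", "DEPARTMENT", "PRODUCT", "OPERATING_UNIT"]

def build_select(ledger):
    # Unmapped chartfields fall back to a blank literal, as in the examples.
    parts = ["LEDGER AS BUSINESS_UNIT"]
    for cf in CHARTFIELDS:
        seg = mapping[ledger].get(cf)
        parts.append(f"{seg} AS {cf}" if seg else "' '")
    return ("SELECT " + ", ".join(parts)
            + f" FROM RETAIL_STG WHERE LEDGER = '{ledger}'")

print(build_select("Ledger1"))
```

    In ODI this kind of generation would typically live in a procedure or knowledge-module step; the Python above only demonstrates the mapping-driven string construction.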
