Adding a filter to an SQL Join table in OBIEE

Hello,
I have created a query in OBIEE Answers to left outer join two tables. This all works fine, but I would like to set this up as a dashboard and create a prompt. The problem is that, because this is not a subject area, how can I create a prompt? I would like the prompt on B.saw_2.
My query is below:
SELECT
A.saw_0,
B.saw_2
FROM
(SELECT
Employee."Employee Reference" saw_0
FROM
"Applications and Awards (Employee)"
WHERE (Employee."Employee Reference" LIKE 'MR/L%')) A
LEFT OUTER JOIN
(SELECT
Employee."Employee Reference" saw_0,
LEFT(Classification."Classification Id",10) saw_1,
Classification."Classification Name" saw_2
FROM
"Applications and Awards (Employee)"
WHERE (LEFT(Classification."Classification Id",10) LIKE 'CSMRBS%')) B ON A.saw_0 = B.saw_0
Thank you,
John

Hi John,
For a direct database request:
Put a bind-variable-style placeholder, e.g. ':xyz', in the SQL at the position of the column whose value you want prompted.
To create the prompt, choose any column from any subject area, remove the formula and put a dual statement like select xyz from dual. This is the value which you are passing there.
For your logical query: you can use a presentation variable instead, e.g. '@{xyz}123'.
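For example (a sketch only - 'classname' is a hypothetical presentation variable set by a dashboard prompt, and '%' is an assumed default so the report still returns rows when no prompt value is set), the B block of the logical SQL could pick up the prompt like this:
(SELECT
Employee."Employee Reference" saw_0,
LEFT(Classification."Classification Id",10) saw_1,
Classification."Classification Name" saw_2
FROM
"Applications and Awards (Employee)"
WHERE (LEFT(Classification."Classification Id",10) LIKE 'CSMRBS%')
AND (Classification."Classification Name" LIKE '@{classname}{%}')) B ON A.saw_0 = B.saw_0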
Thanks,
Amol
(Please mark this answer as correct if it helped.)

Similar Messages

  • SQL - Joining tables

    Hi, I'm having problems joining all my tables.
    Everything's OK until I add the bold lines in the SQL.
    code:
    SELECT distinct RTRIM(
    decode(c.ADDRESS1, NULL, NULL,c.ADDRESS1|| ' , ') ||
    decode(c.ADDRESS2, NULL, NULL,c.ADDRESS2|| ' , ') ||
    decode(c.ADDRESS3, NULL, NULL,c.ADDRESS3|| ' , ') ||
    decode(c.ADDRESS4, NULL, NULL,c.ADDRESS4|| ' , ') ||
    decode(c.CITY, NULL, NULL,c.CITY|| ' , ') ||
    --decode(c.POSTAL_CODE, NULL, NULL,c.POSTAL_CODE|| ' , ') ||
    --c.CITY,' , ') "Address" ,
    c.COUNTRY, ' , ') "Address" ,
    rtrim(decode(p.person_pre_name_adjunct, null, null, p.person_pre_name_adjunct||' ')||
    decode(p.person_first_name, null, null,p.person_first_name||' , ')||
    p.person_last_name, ' , ') "Name",
    P.PARTY_ID,
    p.person_number,
    c.contract_number "Subscriber num",
    c.item_description,
    c.start_date,
    c.end_date,
    a.customer_class_meaning "Subscriber Type",
    n.NOTES "Delivery Point"
    from AMS_P_PERSON_V P, xxokb_k_details_v C, AR_CUSTOMERS_V A, AST_NOTES_DETAILS_VL N
    WHERE c.PARTY_ID = p.party_id
    and p.party_id = a.party_id
    and c.party_id = n.source_object_id
    and n.NOTE_TYPE_MEANING ='Delivery Instruction'
    AND c.contract_status ='ACTIVE'

    My guess is that some deliveries don't have delivery instructions, so nothing is retrieved for those rows. In which case an outer join is called for.
    The other possibility is that n.source_object_id is not unique so you're getting multiple rows.
    But maybe the query crashes your database due to a corrupted block. Who knows?
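    If it is the missing-notes case, a minimal sketch of the outer-join variant using the old Oracle (+) syntax (untested against your views) would change only the last part of the statement:
    from AMS_P_PERSON_V P, xxokb_k_details_v C, AR_CUSTOMERS_V A, AST_NOTES_DETAILS_VL N
    WHERE c.PARTY_ID = p.party_id
    and p.party_id = a.party_id
    and c.party_id = n.source_object_id(+)
    and n.NOTE_TYPE_MEANING(+) = 'Delivery Instruction'  -- the (+) must also appear on this condition, otherwise the outer join is cancelled
    AND c.contract_status = 'ACTIVE'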

  • Having trouble adding information to a join table

    I am building a Proof-of-concept HR application in Apex using the hosted site and having trouble entering information into a join table. There are three tables in my schema: Applicants, Postings, and ApplPostings
    "Applicants" fields: Applicant_ID (PK), Name, Address, City, State, Zip, Phone, Email
    "Postings" fields: Posting_ID(PK), Posting_Number, Item_Number, Posting_Date, Title, Description
    "ApplPostings" fields: Appp_ID (PK), Apply_Date, appl_appl_id (FK), post_post_id (FK)
    I have 5 pages in my application
    1. A Report that shows all Applicants
    2. A Form that shows Applicant details and allows editing
    Contains 3 regions: Applicant Detail, Associated Postings, Available Postings
    3. A Report that shows all Postings
    4. A Form that shows Posting details and allows editing
    5. A Form that allows creating an entry in ApplPostings
    On page #2 (from the list above, it's p3 in my app), the "Associated Postings" region shows data from the Postings table for selected applicant per the ApplPostings table. This is the SQL:
    select jp.posting_number,
    jp.posting_date, jp.title
    from applicants ja,
    applpostings jap,
    postings jp
    where ja.applicant_id = jap.appl_appl_id
    and jp.posting_id = jap.post_post_id
    and ja.applicant_id = (:p3_applicant_ID)
    The "Available Postings" region uses this SQL to show information on the remaining postings:
    select jp2.posting_id, jp2.posting_number, jp2.posting_date, jp2.title
    from postings jp2 where JP2.POSTING_ID
    not in
    (select jp.posting_id
    from applicants ja,
    applpostings jap,
    postings jp
    where ja.applicant_id = jap.appl_appl_id
    and jp.posting_id = jap.post_post_id
    and ja.applicant_id = :p3_applicant_id )
    The Posting ID column in "Available Postings" is set up as link to page #5 (from list above, p7 in my app)
    This page shows three fields: Apply_Date, appl_appl_id, post_post_id
    Source information for appl_appl_id:
    Source used: Always
    Source type: Item
    Source value or expression: p3_applicant_id
    Source information for post_post_id:
    Source used: Always
    Source type: Item
    Source value or expression: p3_posting_id
    Both ID fields populate with inherited values correctly.
    The problem is when I try to use the Apex-generated "Apply Changes" button.
    Nothing happens; no data is added to that table.
    I get the "Action Processed" message, but the row is not created in the ApplPostings table.
    Any suggestions on what to do next will be helpful.
    Thanks,
    Jody

    Hi,
    On the form, any items that represent database columns must use "Database Column" as the Source Type and the column names as the Source Value or Expression. The "Fetch Row..." and "Process Row..." processes refer to these to determine where to write the data. So your first step is to change the two page items to use these settings. Without these settings, there is nothing to write into the table - the "Action performed" message will be displayed because there is no SQL error.
    Now, on your report, your link should pass in the two values needed. In the Column Link settings for the column you are linking on, you have three pairs of fields headed Item 1, Item 2 and Item 3. Each item has a Name and Value setting. Into the Name setting enter the page 7 item names. Into the corresponding Value setting enter the column names that contain the values these items need to receive, surrounded by # symbols (for example, #POSTING_ID#).
    Then, when you click on the item in the report, you are passed to page 7 and the fields are correctly populated. As the form correctly identifies these fields as database columns, when you click Apply Changes, the insert can take place.
    Andy
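    For reference, once the items are set to Database Column, the built-in "Process Row..." process effectively performs an insert along these lines (the P7_ item names and the sequence-populated primary key are assumptions, not taken from your app):
    insert into applpostings (apply_date, appl_appl_id, post_post_id)
    values (:P7_APPLY_DATE, :P7_APPL_APPL_ID, :P7_POST_POST_ID);
    -- APPP_ID is assumed to be filled in by a trigger/sequence on the table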

  • Joining tables with SQL in Crystal XI

    I am new to Crystal Reports. I want to join 2 tables using a formula, which I am trying to do in SQL. I created a simple test report and I can get it to work if I don't put any fields on the report from the joined table, i.e. if I just use fields from "sal-rep" the report works. As soon as I add a field from "freight" (my joined table) the report will not display anything (but I don't get any error messages, just a blank report). Here is my SQL:
    SELECT "sal-rep"."full-name","sal-rep"."invoice-nbr","freight"."misc-charge-ammount"  FROM   "PUB"."sal-rep" INNER JOIN "PUB"."FREIGHT" ON ("sal-rep"."invoice-nbr"="freight"."invoice-nbr")  WHERE  "sal-rep"."invoice-nbr"='0000189957'

    I got it to work!!!   This is the SQL I used -
    SELECT "sal_rep"."full-name", "sal_rep"."invoice-nbr", "freight"."misc-charge-ammount", "freight"."invoice-date"
    FROM   "PUB"."freight" "freight" INNER JOIN "PUB"."sal-rep" "sal_rep" ON ("freight"."invoice-nbr"="sal_rep"."invoice-nbr") AND ("freight"."invoice-date"="sal_rep"."inv-date")
    WHERE  "sal_rep"."invoice-nbr"='0000189957'
    Now when I look at the Table used in Database expert it lists -  Command and Freight.
    Before it was listing Command and Sal-Rep. 
    Not sure I understand why, but at least it's working.
    thanks for your help - really appreciated!

  • How to provide joins between oracle tables and sql server tables

    Hi,
    I have a requirement to generate a report from two different databases, i.e. Oracle and SQL Server.
    How can I provide joins between Oracle tables and SQL Server tables? Any help on this?
    Regards,
    Malli

    user10675696 wrote:
    I have a requirement that i need to generate a report form two different data base. i.e Oracle and Sql Server.
    Bad idea most times. Heterogeneous joins do not exactly scale, and performance can be severely degraded by network speed and bandwidth availability. And there is nothing you can do in the application and database layers to address a performance issue at the network level in this case - your code's performance is simply at the mercy of network performance. With a single glaring fact - network performance is continually degrading. All the time. Always. Until it is upgraded. When the performance degradation starts all over again.
    If the tables are not small (few 1000 rows each) and row volumes static, I would not consider doing a heterogeneous join. Instead I would rather go for a materialised view on the Oracle side, use a proper table and index structure, and do a local database join.
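    A rough sketch of that approach on the Oracle side, assuming a database link to SQL Server (here called mssql_link, set up through a gateway such as DG4ODBC) and invented table names (dbo.orders remote, customers local):
    CREATE MATERIALIZED VIEW mv_sql_orders
    BUILD IMMEDIATE
    REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 1/24  -- refresh hourly; pick an acceptable staleness
    AS
    SELECT * FROM "dbo"."orders"@mssql_link;
    -- the report query then becomes a purely local join
    SELECT o.order_id, c.customer_name
    FROM mv_sql_orders o
    JOIN customers c ON c.customer_id = o.customer_id;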

  • Sql statement for join tables

    Hi there, I need your help.
    Here are my tables and the column names for each table:
    a) rep_arrngt
    Name
    REP_ARRNGT_ID
    REP_ARRNGT_DESC
    REP_ARRNGT_TYPE
    ACCT_CAT
    DEF_INDUSTRY_CODE
    MEDIA_ID
    LANGUAGE_ID
    CURRENCY
    PAGE_ID
    b) bci_rep_arrng
    Name
    REP_ARRNGT_ID
    BCI
    SUB_SUM_CODE
    DP_SUB_SUM_CODE
    c) acct_rep_arrngt_link
    Name
    ACCT_NO
    REP_ARRNGT_ID
    DOC_TYPE
    LAST_BILL_DATE
    BILL_FREQUENCY
    SCALING
    EFF_FROM_DATE
    EFF_TO_DATE
    MEDIA_ID
    Actually, I want to get the unique value for sub_sum_code according to the bci.
    I already join the tables, and here is my SQL statement:
    SELECT T1.SUB_SUM_CODE,T1.BCI,T1.REP_ARRNGT_ID,T2.REP_ARRNGT_ID,T3.REP_ARRNGT_ID FROM BCI_REP_ARRNG T1, REP_ARRNGT T2, ACCT_REP_ARRNGT_LINK T3 WHERE T1.REP_ARRNGT_ID=T2.REP_ARRNGT_ID AND T2.REP_ARRNGT_ID=T3.REP_ARRNGT_ID AND BCI='TTA00F06'
    and my results are:
    SUB_SUM_CODE BCI REP_ARRNGT_ID
    TBGSR TTA00F06 R1
    TBGSR TTA00F06 R1
    TBGSR TTA00F06 R1
    TBGSR TTA00F06 R1
    I get repeated results for sub_sum_code.
    So, what do I need to do if I want only 1 row, like this:
    SUB_SUM_CODE BCI REP_ARRNGT_ID
    TBGSR TTA00F06 R1
    I tried to use GROUP BY, but I get an error. Please help me.

    If you only want "to get the unique value for sub_sum_code according to the bci" then why are you joining the tables in the first place? Not knowing PKs etc you could just use DISTINCT in your select statement.
    SELECT DISTINCT T1.SUB_SUM_CODE,
                    T1.BCI
    FROM BCI_REP_ARRNG T1
    WHERE BCI='TTA00F06'
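    If you do want to keep the three-way join and use GROUP BY instead (which is probably where your error came from), every selected column has to appear in the GROUP BY list, for example:
    SELECT T1.SUB_SUM_CODE, T1.BCI, T1.REP_ARRNGT_ID
    FROM BCI_REP_ARRNG T1, REP_ARRNGT T2, ACCT_REP_ARRNGT_LINK T3
    WHERE T1.REP_ARRNGT_ID = T2.REP_ARRNGT_ID
    AND T2.REP_ARRNGT_ID = T3.REP_ARRNGT_ID
    AND T1.BCI = 'TTA00F06'
    GROUP BY T1.SUB_SUM_CODE, T1.BCI, T1.REP_ARRNGT_ID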

  • How to use a MAP whithout join table

    Hello
    I am still evaluating KODO ;-)
    I am using kodo 3.1.2 with an evaluation version
    linux (kernel 2.6)
    SUN JDK 1.4.2_04
    MYSQL MAX 4.0.18 -Max
    IDEA 4.0.3 and ANT 1.5.4 (to be exhaustive)
    I am wondering how to configure the following mapping involving a Map:
    import java.util.HashMap;
    import java.util.Map;
    public class Translation {
        private String locale;
        private String txt;
        public String getLocale() { return locale; }
    }
    public class TranslatableDescription {
        /** Map of key = locale (String) to value = {@link Translation} */
        private Map translations = new HashMap();   // <== the field in question
        public void addATranslation(Translation t) {
            translations.put(t.getLocale(), t);
        }
    }
    file package.jdo :
    <?xml version="1.0"?>
    <jdo>
    <package name="data">
    <class name="Translation"/>
    <class name="TranslatableDescription">
    <field name="translations">
    <map key-type="java.lang.String"
    value-type="tutorial.data.Translation"/>
    <extension vendor-name="kodo" key="jdbc-key-size" value="10"/>
    </field>
    </class>
    </package>
    </jdo>
    The default mapping generates a join table, TRANS_TRANSLATION, which works fine, but I would like to remove this table by adding a column in the "TRANSLATION" table containing the JDOID of the TRANSLATABLEDESCRIPTION owner of this TRANSLATION.
    I have made some attempts like this one in the mapping file:
    <class name="TranslatableDescription">
    <field name="translations">
    <jdbc-field-map type="n-many-map" key-column="LOCALE"
    ref-column.JDOID="OWNERJDOID" table="TRANSLATION0"
    value-column.JDOID="JDOID"/>
    </field>
    The schema generated in my DB is correct but when I try to persist some
    objects I have the following Exception :
    727 INFO [main] kodo.jdbc.JDBC - Using dictionary class
    "kodo.jdbc.sql.MySQLDictionary" (MySQL 4.0.18'-Max' ,MySQL-AB JDBC Driver
    mysql-connector-java-3.0.10-stable ( $Date: 2004/01/13 21:56:18 $,
    $Revision: 1.27.2.33 $ )).
    Exception in thread "main" kodo.util.FatalDataStoreException: Invalid
    argument value, message from server: "Duplicate entry '2' for key 1"
    {prepstmnt 8549963 INSERT INTO TRANSLATION0 (JDOID, LOCALESTR, OWNERJDOID)
    VALUES (?, ?, ?) [reused=0]} [code=1062, state=S1009]
    NestedThrowables:
    com.solarmetric.jdbc.ReportingSQLException: Invalid argument value,
    message from server: "Duplicate entry '2' for key 1" {prepstmnt 8549963
    INSERT INTO TRANSLATION0 (JDOID, LOCALESTR, OWNERJDOID) VALUES (?, ?, ?)
    [reused=0]} [code=1062, state=S1009]
    java.sql.BatchUpdateException: Invalid argument value, message from
    server: "Duplicate entry '2' for key 1"
         at kodo.jdbc.sql.SQLExceptions.getFatalDataStore(SQLExceptions.java:42)
         at kodo.jdbc.sql.SQLExceptions.getFatalDataStore(SQLExceptions.java:24)
         at kodo.jdbc.runtime.JDBCStoreManager.flush(JDBCStoreManager.java:594)
         at
    kodo.runtime.DelegatingStoreManager.flush(DelegatingStoreManager.java:152)
         at
    kodo.runtime.PersistenceManagerImpl.flushInternal(PersistenceManagerImpl.java:964)
         at
    kodo.runtime.PersistenceManagerImpl.beforeCompletion(PersistenceManagerImpl.java:813)
         at kodo.runtime.LocalManagedRuntime.commit(LocalManagedRuntime.java:69)
         at
    kodo.runtime.PersistenceManagerImpl.commit(PersistenceManagerImpl.java:542)
         at
    tutorial.CreateTranslatableDescription.main(CreateTranslatableDescription.java:48)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at com.intellij.rt.execution.application.AppMain.main(Unknown Source)
    I have the feeling that KODO tries to make different rows in the table "TRANSLATION0" for the TRANSLATION and the MAP.
    So, is this kind of mapping supported by KODO, and if yes, how can I configure it?
    Thank you for your help
    Nicolas

    So, is this kind of mapping supported by KODO and if yes, how can I configure it?
    It is not directly supported. You can fake it using a one-many mapping (see thread: Externalizing Map to Collection of PC), but unless you must map to an existing schema, it's probably not worth the effort. (Note: there is also a bug that actually prevents this from working with 3.1.2; the bug is fixed for 3.1.3, but it hasn't been released yet.)

  • Conditions on Outer Joined Tables

    I'm trying to understand how Discoverer handles conditions created on outer joined tables. We're using Discoverer Plus 11.1.1.3.
    I have a query using two tables connected by an outer join. While I want all records back from the first table, I don't want all records back from the outer joined table. So, if I typed the SQL directly in a SQL Editor, I would put the '(+)' in each condition. However, I find that in Discoverer, sometimes it does this, and sometimes it doesn't. If it does not put the '(+)' on all conditions as well as the join, that essentially nullifies the outer join and makes it a standard join.
    For example, I expanded an item on the outer joined table in the Items dialog to see its list of values, and selected one, to add the column to the worksheet and create a condition. No (+). But when I deleted this and created the condition using the Condition dialog, I got a (+). Then I created another condition using the condition dialog. I used the IN operator and pasted in a list (this item had no item class to choose from)--no (+).
    When I tried manually adding (+) after the item name in the condition dialog, it put quotes around the whole thing and treated it as a string.
    I can find nothing in the documentation that discusses this. Is it possible to control whether or not it uses the (+)?
    Thank you, Scott Newman

    Dear Michael,
    Last night I had a call from user who was experiencing a very strange behaviour of her Discoverer workbook.
    I replicated the issue on my machine and could not believe my eyes. A condition on an item was being ignored. I then analysed the workbook and realised it was due to a condition on an item from an outer-joined folder. I did not have the strength after a long day to deal with it and was looking forward to having some fun the next day. First, I tried a few tricks such as NVL and LENGTH functions in a test query in Toad. I hate to give up, but I did, and searched threads on this forum and opened a few. The very first thread I read was this one, and I laughed when I read your advice - right up until the moment I tested it in the troubled workbook. It worked like a charm. I take my hat off to you, Michael. I have rarely experienced such satisfaction when solving a tricky problem during my 9-year-long Oracle Discoverer journey. This goes straight into my little text file of interesting problems and solutions.
    Thank you very much. I owe you a beer or two.
    It is great to have an expert like you, always ready to share his knowledge with his colleagues.
    P.S.
    My apologies for this massive post, I could not resist expressing my joy and gratitude.
    Jozef Hlavaty

  • How can I load a .xlsx File into a SQL Server Table using a Foreach Loop Container in SSIS?

    I know I've REALLY struggled with this before. I just don't understand why this has to be soooooo difficult.
    I can very easily do a straight Data Pump of a .xlsX File into a SQL Server Table using a normal Excel Connection and a normal Excel Source...simply converting Unicode to DT_STR and then using an OLE DB Destination of the SQL Server Table.
    If I want to make the SSIS Package a little more flexible by allowing multiple .xlsX spreadsheets to be pumped in by using a Foreach Loop Container, the whole SSIS Package seems to go to hell in a hand basket. I simply do the following...
    Put the Data Flow Task within the Foreach Loop Container
    Add the Variable Mapping variable User::FilePath, which I defined as a string variable, within the Foreach Loop Container
    I change the Excel Connection and its Expression to be ExcelFilePath ==> @[User::FilePath]
    I then try and change the Excel Source and its Data Access Mode to Table Name or view name variable and provide the Variable Name User::FilePath
    And that's when I run into trouble...
    Exception from HRESULT: 0xC02020E8
    Error at Data Flow Task [Excel Source [56]]:SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occured. Error code: 0x80004005.
    Error at Data Flow Task [Excel Source [56]]: Opening a rowset for "...(the EXACT Path and .xlsx File Name)...". Check that the object exists in the database. (And I know it's there!!!)
    I don't understand why adding a Foreach Loop Container to try and make this as efficient as possible has caused such an error, unless I'm overlooking something. I have even tried delaying my validations and that doesn't seem to help.
    I have looked hard in Google and even YouTube to try and find a solution for this but for the life of me I cannot seem to find anything on pumping a .xlsX file into SQL Server using a Foreach Loop Container.
    Can ANYONE please help me out here? I'm at the end of my rope trying to get this to work. I think the last time I was in this quandary, trying to pump a .xlsx file into a SQL Server table using a Foreach Loop Container in SSIS, I actually wrote a C# script to write the contents of the .xlsx file into a .csv file and then used the .csv file to pump the data into a SQL Server table.
    Thanks for your review and am hoping and praying for a reply and solution.

    Hi ITBobbyP,
    If I understand correctly, you want to load data from multiple sheets in an .xlsx file into a SQL Server table.
    If so, please refer to the following tips:
    The Foreach Loop container should be configured as shown below:
    Enumerator: Foreach ADO.NET Schema Rowset Enumerator
    Connection String: The OLE DB Connection String for the excel file.
    Schema: Tables.
    In the Variable Mapping, map the variable to Sheet_Name, and change the Index from 0 to 2.
    The connection string for Excel Connection Manager is the original one, we needn’t make any change.
    Change Table Name or View name to the variable Sheet_Name.
    If you want to load data from multiple sheets in multiple .xlsx files into a SQL Server table, please refer to following thread:
    http://stackoverflow.com/questions/7411741/how-to-loop-through-excel-files-and-load-them-into-a-database-using-ssis-package
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • What is the better way to join table for ASE 12.5?

    To join tables with T-SQL, there are 2 options:
    select * from tab1 a join tab2 b on a.id = b.id where x=y
    select * from tab1 a, tab2 b where a.id = b.id and x=y
    Is this only a syntax difference, or is there a performance difference? Which one is better for performance?

    The first query (using the join clause) is the ANSI standard way of writing joins and is usually supported by all mainstream RDBMSs.
    The second query is the T-SQL method of writing joins and may have limited re-usability with other RDBMSs.
    In most cases which one you use is usually one of preference, ie, which ever one you're comfortable with.
    From a technical perspective there are some join constructs that you cannot build with T-SQL joins, but can build with ANSI joins (eg, you may have problems in T-SQL with a table that you want to be both a) an inner table of an outer join and b) part of a equi/inner join).
    Soooo, ANSI joins provide you a lot more flexibility in coding as well as portability (between other RDBMSs).  But if you're going to work with ASE you'll still need to understand how T-SQL joins work as you'll see quite a lot of T-SQL join-based queries.
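    A small sketch of that last point, with invented table names: t2 is the inner table of an outer join to t1 and is also inner-joined to t3, something the old T-SQL *= operator cannot express reliably:
    select t1.id, t2.val, t3.descr
    from t1
    left outer join (t2 inner join t3 on t3.t2_key = t2.t2_key)
        on t2.t1_id = t1.id
    -- every t1 row is kept; the t2/t3 columns come back null when no matching pair exists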

  • How can I pass multiple condition in where clause with the join table?

    Hi:
    I need to collect several inputs at run time, and query the records according to the input.
    How can I pass multiple conditions in the WHERE clause with joined tables?
    Thanks in advance for any help.
    Regards,
    TD

    If you are using SQL*Plus or Reports you can use lexical parameters like:
    SELECT * FROM emp &condition;
    When you run the query it will ask for the value of condition and you can enter whatever you want. Here is a really fun query:
    SELECT &columns FROM &tables &condition;
    But if you are using Forms, then you have to change the condition with SET_BLOCK_PROPERTY.
    Best of luck!
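    To tie this back to the joined-tables part of the question, a quick sketch using the usual emp/dept demo tables (the text typed at the substitution prompt is shown as a comment):
    SELECT e.ename, d.dname, e.sal
    FROM emp e, dept d
    WHERE e.deptno = d.deptno
    &extra_conditions;
    -- at the "Enter value for extra_conditions:" prompt you might type:
    -- AND e.sal > 2000 AND d.loc = 'DALLAS'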

  • MVC: Create a view which populates two (or more) joined tables in a single view table

    I am beginner in MVC and I would like to create a view which populates two (or more) joined tables in a single view table as described below.
    I have two tables:
    1 - Bloggers: - bloggerID (int);
                  - bloggerName (string)
    2 - Blogs:    - blogID (int);
                  - bloggerID (int);
                  - blogTitle (string);
                  - blogImage (string)
    A blogger can have one or more blogs while one blog must be related to only one blogger.
    I would like to have a view table on my webpage as the following:
    Blogger Name | Blog Title  | Blog Image
    Noris Gang   | Virus       | virus.jpg
    Noris Gang   | Desktops    | desktop.jpg
    Gauthier     | Books       | books.png
    John Leon    | NNNMHJhjdhj | Nmbj.jpg
    I'm using MVC 4 (or at least 3).
    Thanks for your help.

    Hello,
    From your description, it is not very clear what you mean by the view. If it means the view concept in a database such as SQL Server, your required view would be something like this:
    Create view BloggerBlogs
    as
    Select Bloggers.bloggerName, Blogs.blogTitle, Blogs.blogImage
    From Bloggers join Blogs on Bloggers.bloggerID = Blogs.bloggerID
    If it means the UI view in MVC concept, I suggest you could ask it on the MVC forum:
    http://forums.asp.net/1146.aspx
    Regards.

  • What are the Pros and Cons while joining tables at DF and Universe level

    Hi Experts,
    I am new to Data federator designer. I need help on the below.
    Could you please let me know the pros and cons of joining the source tables in Data Federator
    versus joining the DF target tables in Universe Designer?
    Regards,
    Gana

    Hi,
    1. I have created target tables based on source tables with one to one mapping and then join all target tables in universe.
    Ex: Source tables: Infocube text tables, fact tables and 3rd party data base table
    Target tables: Target tables are the same as the source tables
    --- Yes, this is the way to go: create target tables and join them in the universe. These target tables give you flexibility; for example, if in future you need to add one more object based on a calculation that is not possible in the universe, you can add one more column to the target table and do the calculation there. If you use the source tables directly, you cannot do anything like that.
    2. Created single target table with all objects of source tables and merged all sources tables data.
    Ex: Source tables: Infocube text tables, fact tables and 3rd party data base table
    Target table: Single table.
    --- It is a more complex structure to merge all the tables' data into one target table; it takes more effort. Basically this type of target table is useful when you are merging data from multiple sources into a single table, or creating one target table based on a union of tables by adding two or more mapping rules, in which case you cannot join the tables in the universe.
    The better approach is the first one: create target tables based on the source tables and join them in the universe.
    Thanks,
    Amit

  • Adding a new field to a Z table

    We have a Z table on which we have created a datasource. We have created a virtual cube on this datasource and queries are built on this cube. This setup is present on Dev, Quality and Prod environment.
    Now we have added a new field to the Z table in Dev. We will be creating a new Infoobject for this new field and will be adding it in the Virtual Infoprovider and also in the queries.
    1. Do we need to replicate the datasource so that the newly added field is visible to the datasource? If not, is there another way?
    2. When I have to transport this to Quality and Prod, do I have to capture all the components and objects (Table, Datasource, Virtual Cube, Queries)? If not, what should I capture in the transport request?

    Hi,
    First, you have to regenerate the datasource in order to add the field:
    1) Log on to source system (where Z table is stored) and go to transaction RSA6.
    2) Select the root node and click Expand (icon with '+' sign).
    3) Search for your datasource (you can use the binoculars icon).
    4) Click on Change.
    5) Verify that the field appears on the list.  Make sure that "Hide field" and "Field only known on client exit" options are not marked for this field.
    6) Click on Save.
    7) Select "Datasource" -> "Generate" (CTRL+S) menu option.
    8) Use RSA3 transaction in order to verify the new field appears on test extractions.
    Then, to replicate the datasource in BI:
    1) Log on to SAP BI and go to RSA1 transaction.
    2) Go to "Modeling" -> "DataSources" section.
    3) Search for your datasource and right click on it.  Select "Replicate metadata" from context menu.
    4) Add the new InfoObject to the VirtualProvider.
    5) Update the transformation which joins the DataSource to the VirtualProvider.
    As far as transport requests are concerned, even though it requires more work, it is a good practice to group objects in different transport requests according to their type:
    In the source System (R3 for instance):
    1) One transport request with the table.
    2) One transport request with the datasource and its structure.
    In BI:
    1) One transport order with the datasource replica.
    2) One transport order with the new InfoObject(s).
    3) One transport order with the modified VirtualProvider.
    4) One transport order with the modified transformations from the datasource towards the VirtualProvider.
    5) One transport order with the Queries and their elements.
    I hope this helps you.
    Regards,
    Maximiliano

  • SSIS write custom logging information to sql 2012 table

    I have a package whose logging information I want to put into a SQL table.
    I can do this with a Script Task, but each time the package runs it will open a SQL connection, write the log information and then close the connection. It will do this 10 times every time the package executes, and I could have 50 of the packages running at the
    same time, so I am looking for a better way to do this.
    My thought was to create an in-memory table object at the start of the package, insert the log records into it, and then do a bulk insert at the end of the package.
    My problem is I don't know if this can be done and, if it can, how I would create the in-memory table, insert the records and then bulk load it into the SQL Server table.

    While I do not see any justification for what you want, "My thought was to create a in memory table object at the start of the package and then insert the log records in to it and at the end do a bulk insert at the end of the package" is doable
    by creating an ADO table, adding records to it and then writing to a destination:
    https://support.microsoft.com/en-us/kb/195082?wa=wsignin1.0 has all the needed code it seems. Just wrap into the Script Task.
    Arthur
