Join table with additional data

Hi,
I'm new to JPA (TopLink Essentials) and I'd like to know how to handle this situation:
I have an ORDER table, an ITEMS table, and a "join table" ORDER_ITEMS which holds references to the ordered items together with the quantity of each ordered item.
Many thanks

I did the mapping as shown in the listing below. It works for reading, but for writing JPA returns an error. Can somebody help?
Thanks
[Microsoft][SQLServer 2000 Driver for JDBC][SQLServer]Column name 'idexpedice' appears more than once in the result column list.
Error Code: 264
Call: INSERT INTO vklad_expedice (IDEXPEDICE, IDVKLAD, VAHA, idexpedice, idvklad) VALUES (?, ?, ?, ?, ?)
bind => [null, null, 0, 4411, 2]
@Entity
public class Expedice implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer idexpedice;

    @Temporal(TemporalType.DATE)
    private Date datum;

    @OneToMany(fetch = FetchType.EAGER, mappedBy = "expedice", cascade = CascadeType.ALL)
    private List<ExpediceVklad> vklady;
}

@Entity
public class Vklad implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer idvklad;

    private String nazev;
}

@Entity
@Table(name = "vklad_expedice")
@IdClass(ExpediceVkladId.class)
public class ExpediceVklad implements Serializable {
    @Id
    private Integer idexpedice;

    @Id
    private Integer idvklad;

    private Integer vaha;

    @ManyToOne
    @JoinColumn(name = "idexpedice")
    private Expedice expedice;

    @ManyToOne
    @JoinColumn(name = "idvklad")
    private Vklad vklad;
}

public class ExpediceVkladId {
    @Id
    private Integer idexpedice;

    @Id
    private Integer idvklad;
}
SQL tables
EXPEDICE
idexpedice
datum
VKLAD
idvklad
nazev
VKLAD_EXPEDICE (this is the join table with the additional column)
idexpedice
idvklad
vaha
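
For what it's worth, the INSERT fails because the same columns are mapped for writing twice: once through the basic @Id fields (idexpedice, idvklad) and once through the @ManyToOne join columns, so TopLink puts both IDEXPEDICE and idexpedice into the statement. A common JPA 1.0 workaround is to let only one of the two mappings write each column. The following is a minimal sketch (not the original listing; the getter names are assumptions) that keeps the @Id fields writable and marks the relationship join columns read-only:

import java.io.Serializable;
import javax.persistence.*;

@Entity
@Table(name = "vklad_expedice")
@IdClass(ExpediceVkladId.class)
public class ExpediceVklad implements Serializable {

    @Id
    @Column(name = "idexpedice")
    private Integer idexpedice;

    @Id
    @Column(name = "idvklad")
    private Integer idvklad;

    private Integer vaha;

    // Read-only mappings: the key columns are written through the @Id fields above,
    // so they must not appear a second time in the generated INSERT.
    @ManyToOne
    @JoinColumn(name = "idexpedice", insertable = false, updatable = false)
    private Expedice expedice;

    @ManyToOne
    @JoinColumn(name = "idvklad", insertable = false, updatable = false)
    private Vklad vklad;

    // getters and setters omitted
}

With this mapping the application fills the key fields itself, e.g. persist the Expedice and Vklad first (and flush, so their IDENTITY values exist), then copy expedice.getIdexpedice() and vklad.getIdvklad() into the link entity before persisting it. The id class ExpediceVkladId should keep plain idexpedice/idvklad fields of the same types and implement equals() and hashCode().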

Similar Messages

  • Join table with additional state

    I have two entities:
    A: @Id String acronym;
    B: @Id String name;
    and I need to create a relationship between these tables adding a new attribute:
    @Entity
    @SecondaryTables( {
              @SecondaryTable(name = "A", pkJoinColumns = @PrimaryKeyJoinColumn(name = "A_ACRONYM", referencedColumnName = "acronym")),
              @SecondaryTable(name = "B", pkJoinColumns = @PrimaryKeyJoinColumn(name = "B_NAME", referencedColumnName = "name")) })
    public class C implements Serializable {
         private static final long serialVersionUID = PujAbstractEntity.serialVersionUID;
         @Id
         @Column(name="acronym")
         private String acronym;
         @Id
         @Column(name="name")
         private String name;
    ERROR:
    [exec] Exception Description: An incomplete @PrimaryKeyJoinColumns was specified on the annotated element [class com.kenai.puj.arena.model.entity.PujInstitutionRoles]. When specifying @PrimaryKeyJoinColumns for an entity that has a composite primary key, a @PrimaryKeyJoinColumn must be specified for each primary key join column using the @PrimaryKeyJoinColumns. Both the name and the referencedColumnName elements must be specified in each such @PrimaryKeyJoinColumn.
    Any tip?

    Thanks but I am still facing problems...
    --------- PujInstitutionEntity:
    @Entity
    public class PujInstitutionEntity extends PujAbstractRootEntity {
         @Id
         @Column(length = 20)
         private String acronym;
    --------- PujCompetitionEntity:
    @Entity
    public class PujCompetitionEntity extends PujAbstractRootEntity {
         @Id
         @Column(length = 12)
         private String name;
    --------- PujInstitutionRoles (the mapping):
    @Entity
    @IdClass(PujInstitution_Roles_ID.class)
    public class PujInstitutionRoles extends PujAbstractEntity {
         @ManyToOne
         @PrimaryKeyJoinColumn(name = "INSTITUTION_ACRONYM", referencedColumnName = "acronym")
         private PujInstitutionEntity institution;
         @ManyToOne
         @PrimaryKeyJoinColumn(name = "COMPETITION_NAME", referencedColumnName = "name")
         private PujCompetitionEntity competition;
    ---------- The deployment failure:
    [exec] Exception Description: Predeployment of PersistenceUnit [arenapuj] failed.
    [exec] Internal Exception: Exception [EclipseLink-7150] (Eclipse Persistence Services - 2.0.0.v20091009-r5515): org.eclipse.persistence.exceptions.ValidationException
    [exec] Exception Description: Invalid composite primary key specification. The names of the primary key fields or properties in the primary key class [com.kenai.puj.arena.model.entity.PujInstitution_Roles_ID] and those of the entity bean class [class com.kenai.puj.arena.model.entity.PujInstitutionRoles] must correspond and their types must be the same. Also, ensure that you have specified ID elements for the corresponding attributes in XML and/or an @Id on the corresponding fields or properties of the entity class.
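    A minimal sketch of the pattern EclipseLink is asking for (JPA 1.0 style; the field names institutionAcronym/competitionName and the Serializable base are assumptions, not the original code): give the entity basic @Id fields whose names and types match the id class, and map the relationships over the same columns read-only.
    import java.io.Serializable;
    import javax.persistence.*;
    @Entity
    @IdClass(PujInstitution_Roles_ID.class)
    public class PujInstitutionRoles implements Serializable {
        @Id
        @Column(name = "INSTITUTION_ACRONYM", length = 20)
        private String institutionAcronym;
        @Id
        @Column(name = "COMPETITION_NAME", length = 12)
        private String competitionName;
        // Read-only relationship mappings over the same key columns.
        @ManyToOne
        @JoinColumn(name = "INSTITUTION_ACRONYM", referencedColumnName = "acronym",
                    insertable = false, updatable = false)
        private PujInstitutionEntity institution;
        @ManyToOne
        @JoinColumn(name = "COMPETITION_NAME", referencedColumnName = "name",
                    insertable = false, updatable = false)
        private PujCompetitionEntity competition;
    }
    // The id class must declare fields with the same names and types as the entity's
    // @Id fields and must implement equals() and hashCode().
    public class PujInstitution_Roles_ID implements Serializable {
        private String institutionAcronym;
        private String competitionName;
        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof PujInstitution_Roles_ID)) return false;
            PujInstitution_Roles_ID other = (PujInstitution_Roles_ID) o;
            return institutionAcronym.equals(other.institutionAcronym)
                    && competitionName.equals(other.competitionName);
        }
        @Override
        public int hashCode() {
            return 31 * institutionAcronym.hashCode() + competitionName.hashCode();
        }
    }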

  • How to compare two rows from two tables with different data

    How do I compare two rows from two tables with different data?
    e.g.
    Table 1
    ID   DESC
    1     aaa
    2     bbb
    3     ccc
    Table 2
    ID   DESC
    1     aaa
    2     xxx
    3     ccc
    Result
    2

    Create table tab1(ID int, DE char(10))
    Create table tab2(ID int, DE char(10))
    Insert into tab1 Values (1,'aaa')
    Insert into tab1 Values (2,'bbb')
    Insert into tab1 Values (3,'ccc')
    Insert into tab1 Values (4,'dfe')
    Insert into tab2 Values (1,'aaa')
    Insert into tab2 Values (2,'xx')
    Insert into tab2 Values (3,'ccc')
    Insert into tab2 Values (6,'wdr')
    SELECT tab1.ID, tab2.ID As T2
    FROM tab1
    FULL JOIN tab2 ON tab1.ID = tab2.ID
    WHERE BINARY_CHECKSUM(tab1.ID, tab1.DE) <> BINARY_CHECKSUM(tab2.ID, tab2.DE)
       OR tab1.ID IS NULL
       OR tab2.ID IS NULL
    The ID column is considered the primary key.
    Apart from the differing records, the above query also returns the records missing from either table.
    Result Set
    ID ID 
    2  2
    4 NULL
    NULL 6
    ganeshk

  • How to fill internal table with no data in debugging mode

    Hi all,
    I modified an existing program and now I want to test it, but I was not given any test data. In the middle of debugging I found an internal table with no data. My problem is how to fill that internal table with a few records while in debug mode, just as we can change contents in debug mode. To proceed further, that internal table must have some records.
    I don't know how to create test data, so I am trying to create values temporarily in debug mode only.
    Thanks,
    Balaji

    Hi,
    In the debugger, do the following:
    Click the Table button.
    Double-click on the internal table name.
    Then at the bottom of the screen you will get buttons like CHANGE, INSERT, APPEND, DELETE.
    Use the APPEND button to insert records into the internal table.
    Thanks,
    Naren

  • Query on a table with indexed date field

    I have a table with a date column which is indexed. If I run a query like "select column1 where date_field = '20-JAN-04'", for example, it is fast and uses the index.
    If I run "select column1 where date_field < '20-JAN-04'" it is slow and doesn't use the index. I logged a TAR and Oracle told me that this is to be expected, as not using the index in this case is the most efficient way of doing the query.
    Now, my concept of an index is like the index of the Yellow Pages (telephone directory), for example. In this example, if I look for a name that is, say, "Halfords" or below, I can see all entries for Halfords and all the way to ZZZ in one block.
    I just can't see, in a common-sense way, why Oracle won't use the index in this type of query.
    George

    Using the concept of a telephone directory is wrong. In a telephone directory you have all the information ordered by name. However, in your table (if it is not an IOT) you don't have all the information/rows ordered by your date_field. Rather, think of the document "Oracle9i Database Concepts" and its index.
    Let's say you want to find all indexed words larger than "ISO SQL standard" (OK, that doesn't make sense, but it is just an example). Would it be faster to read the whole document, or to look up each word in the index and then read the entire page (Oracle block) to find the word?
    It's not always easy to know in advance whether the query will be faster over the index or over a full table scan. What you need to do is analyze (dbms_stats) the table and its indexes well; in most cases Oracle then chooses the right way. You may also use the hint /*+ index(table_name index_name) */ and see whether it would be faster over the index or not.
    A good document about that subject is:
    http://www.ioug.org/tech/IOUGinDefense.pdf
    HTH
    Maurice

  • How To Create Table with Static Data

    JDEV 10.1.3
    ADF BC
    ADF Faces
    I am trying to make some simple screen/screenflow diagrams to help flesh out some requirements. To do that, I need to make a table with static data that is not hooked up to a data source (because the data model has not yet been clearly defined, and I'm using the diagrams to help iterate the requirements).
    Is it possible to create a table that shows static data (i.e. a set of rows that does not come from a model data source, but rather is hardcoded.)
    If not, how does one create mock ups without actually implementing the data model?
    Thank you.

    Deepak, what specifically in those 2 links from Amis are useful? Those 2 posts are about bind variables, not static list of values?
    In response to the original poster, I'll attempt to help a little more.
    In the 11g release you can create VOs based on a static list of values. However in your case on 10.1.3, the best method I've found is to create a VO based on a SELECT <columns> FROM DUAL statement. The columns then include your dummy data. If you need more than one row, simply UNION ALL a number of SELECT statements together.
    What I haven't checked, is when you eventually transform the VO based on the SELECT DUAL statement into a VO based on an EO drawing real data from the database, is it an easy process? I recommend you try this out before committing to the approach above. Let us know how you go.
    Regards,
    CM.

  • Sample report for filling the database table with test data .

    Hi ,
    Can anyone provide me with a sample report for filling a database table with test data?
    Thanks ,
    Abhi.

    hi
    The code:
    data: itab type table of Z6731_DEPTDETAIL,
          wa   type Z6731_DEPTDETAIL.
    wa-DEPT_ID = 'z897hkjh'.
    wa-DESCRIPTION = 'computer'.
    append wa to itab.
    wa-DEPT_ID = 'z897hkjhd'.
    wa-DESCRIPTION = 'computer'.
    append wa to itab.
    loop at itab into wa.
      insert z6731_DEPTDETAIL from wa.
    endloop.

  • Web Analysis : populate the same table with multiple data sources

    Hi folks,
    I would like to know if it is possible to populate a table with multiple data sources.
    For instance, I'd like to create a table with 3 columns : Entity, Customer and AvgCostPerCust.
    Entity and Customer come from one Essbase, AvgCostPerCust comes from HFM.
    The objective is to get a calculated member which is Customer * AvgCostPerCust.
    Any ideas ?
    Once again, thanks for your help.

    I would like to have the following output:
    File 1 - Store 2 - Query A + Store 2 - Query B
    File 2 - Store 4 - Query A + Store 4 - Query B
    File 3 - Store 5 - Query A + Store 5 - Query B
    The bursting level should be given at
    File 1 - Store 2 - Query A + Store 2 - Query B
    so the tag in the XML has to be split by something common to these rows.
    Since the data is coming from different queries, it is not going to sit under a single tag, so you cannot burst it using a concatenated data source.
    But you can do it using a data template: link the queries and get the data for each file under a single query,
    select distinct store_name from all_stores
    select * from query1 where store_name = :store_name === 1st query
    select * from query2 where store_name = :store_name === 2nd query
    Define the data structure the way you want it; the XML will then contain something like this:
    <stores>
    <store> </store> - for store 2
    <store> </store> - for store 3
    <store> </store> - for store 4
    <store> </store> - for store 5
    </stores>
    Now you can burst it at store level.

  • Logical fact table with fragmented data sources with different dimensions

    Hello.
    I have a logical fact table with four logical table sources. Three of the LTSs share the same dimensions, but the fourth LTS has one dimension less (called Dim_A). In the physical layer the dimension Dim_A is joined to the first three physical fact tables, but not to the fourth fact table (since it doesn't have that dimensionality). In the BMM layer the logical fact table is joined to the logical dimension Dim_A.
    When I run an analysis on this RPD, the measures from the logical fact table are aggregated correctly (a union of all four table sources) as long as I don't include Dim_A, but as soon as I include dimension Dim_A I get the error message:
    +State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 14052] Internal Error: Logical column Dim_A.Column_X has no physical sources that can be joined to the physical fact table source [Logical table sources (Priority=0): Fact_B.Fact_Y]. (HY000)+
    I would like a solution where the analysis returns correctly aggregated measures also for the LTS with the "missing" dimension, but with a NULL dimension value for that LTS, or something like this.
    Is there a way to set this up in the RPD?
    Thanks,
    Henning Eriksen

    The SQL could look something like this.
    SELECT dim_a.col_1, SUM(fact_a.measure_1)
    FROM db.dim_a
    JOIN db.fact_a ON fact_a.col_2 = dim_a.col_2
    WHERE fact_a.date = '28-nov-2012'
    GROUP BY dim_a.col_1
    UNION ALL
    SELECT dim_a.col_1, SUM(fact_b.measure_1)
    FROM db.dim_a
    JOIN db.fact_b ON fact_b.col_2 = dim_a.col_2
    WHERE fact_b.date = '28-nov-2012'
    GROUP BY dim_a.col_1
    UNION ALL
    SELECT dim_a.col_1, SUM(fact_c.measure_1)
    FROM db.dim_a
    JOIN db.fact_c ON fact_c.col_2 = dim_a.col_2
    WHERE fact_c.date = '28-nov-2012'
    GROUP BY dim_a.col_1
    UNION ALL
    SELECT NULL, SUM(fact_d.measure_1)
    FROM db.fact_d
    WHERE fact_d.date = '28-nov-2012'
    I would appreciate if you could give me some hints for the RPD.
    Thanks,
    Henning

  • How do you select data from two tables with similar data and merge the output together?

    I have two tables containing sales data. I want to read the first table, sort by date, and accumulate dollars by order date. Then I want to read the second table, accumulate those dollar amounts by date, and then merge the records together so that I have one row
    with amounts for type A and amounts for type B.
    Here are the tables I am looking at.
    Select Cast(J.Order_Date As Varchar(11)) As [Order Date]
          ,Cast(Sum(Case when Sales_Code like '%Comm%' then (J.Order_Quantity * J.Unit_Price) Else 0 end) As Decimal(11,2)) As [Job Comm]
          ,Cast(Sum(Case when Sales_Code = '5-Day' then (J.Order_Quantity * J.Unit_Price) Else 0 end) As Decimal(11,2)) As [Job Auto]
          ,Cast(Sum(Case when Sales_Code like '%Auto%' then (J.Order_Quantity * J.Unit_Price) Else 0 end) As Decimal(11,2)) As [Job Auto]
          ,Cast(Sum(Case when Sales_Code = '' then (J.Order_Quantity * J.Unit_Price) Else 0 end) As Decimal(11,2)) As [Job Fixed]
          ,Cast(Sum(Case when Sales_Code = 'XX' then (J.Order_Quantity * J.Unit_Price) Else 0 end) As Decimal(11,2)) As [SO Comm]
          ,Cast(Sum(Case when Sales_Code = 'YY' then (J.Order_Quantity * J.Unit_Price) Else 0 end) As Decimal(11,2)) As [SO Auto]
          ,Cast(Sum(Case when Sales_Code = 'ZZ' then (J.Order_Quantity * J.Unit_Price) Else 0 end) As Decimal(11,2)) As [SO Fixed]
    from [PRODUCTION].dbo.Job As J
    union all
    Select Cast(SH.Order_Date As Varchar(11)) As [Order Date]
          ,Cast(Sum(Case when Sales_Code like '%Comm%' then SD.Ext_Amt Else 0 end) As Decimal(11,2)) As [SO Comm]
          ,Cast(Sum(Case when Sales_Code = '5-Day' then SD.Ext_Amt Else 0 end) As Decimal(11,2)) As [SO Auto]
          ,Cast(Sum(Case when Sales_Code like '%Auto%' then SD.Ext_Amt Else 0 end) As Decimal(11,2)) As [SO Auto]
          ,Cast(Sum(Case when Sales_Code = '' then SD.Ext_Amt Else 0 end) As Decimal(11,2)) As [SO Fixed]
          ,Cast(Sum(Case when Sales_Code = 'XX' then SD.Ext_Amt Else 0 end) As Decimal(11,2)) As [Job Comm]
          ,Cast(Sum(Case when Sales_Code = 'YY' then SD.Ext_Amt Else 0 end) As Decimal(11,2)) As [Job Auto]
          ,Cast(Sum(Case when Sales_Code = 'ZZ' then SD.Ext_Amt Else 0 end) As Decimal(11,2)) As [Job Fixed]
    from [PRODUCTION].dbo.SO_Detail As SD
    Inner Join [PRODUCTION].dbo.SO_Header As SH
        on SD.Sales_Order = SH.Sales_Order
    Group by J.Order_Date
    Order by J.Order_Date Desc
    Looking for output like
    Order Date   Job Comm   Job AUto   Job Fixed    SO Comm  SO AUto  SO Fixed
    Mar-11-2014    100.00     250.00       50.00     200.00   300.00    400.00
    Mar-10-2014    500.00     340.00        0.00     110.00   400.00    500.00
    Mar-09-2014    600.00     333.00       56.00     210.00   500.00    300.00
    Thanks for your help
    SWProduction

    Seeing the output, it looks like what you need is this:
    select COALESCE(p.[Order Date], q.[Order Date]) AS [Order Date],
           COALESCE([Job Comm], 0)  AS [Job Comm],
           COALESCE([Job AUto], 0)  AS [Job AUto],
           COALESCE([Job Fixed], 0) AS [Job Fixed],
           COALESCE([SO Comm], 0)   AS [SO Comm],
           COALESCE([SO AUto], 0)   AS [SO AUto],
           COALESCE([SO Fixed], 0)  AS [SO Fixed]
    from
    (
    Select Cast(J.Order_Date As Varchar(11)) As [Order Date]
          ,Cast(Sum(Case when Sales_Code like '%Comm%' then (J.Order_Quantity * J.Unit_Price) Else 0 end) As Decimal(11,2)) As [Job Comm]
          ,Cast(Sum(Case when Sales_Code = '5-Day' then (J.Order_Quantity * J.Unit_Price) Else 0 end) As Decimal(11,2)) As [Job Auto]
          ,Cast(Sum(Case when Sales_Code like '%Auto%' then (J.Order_Quantity * J.Unit_Price) Else 0 end) As Decimal(11,2)) As [Job Auto]
          ,Cast(Sum(Case when Sales_Code = '' then (J.Order_Quantity * J.Unit_Price) Else 0 end) As Decimal(11,2)) As [Job Fixed]
          ,Cast(Sum(Case when Sales_Code = 'XX' then (J.Order_Quantity * J.Unit_Price) Else 0 end) As Decimal(11,2)) As [SO Comm]
          ,Cast(Sum(Case when Sales_Code = 'YY' then (J.Order_Quantity * J.Unit_Price) Else 0 end) As Decimal(11,2)) As [SO Auto]
          ,Cast(Sum(Case when Sales_Code = 'ZZ' then (J.Order_Quantity * J.Unit_Price) Else 0 end) As Decimal(11,2)) As [SO Fixed]
    from [PRODUCTION].dbo.Job As J
    Group by J.Order_Date
    ) p
    full join
    (
    Select Cast(SH.Order_Date As Varchar(11)) As [Order Date]
          ,Cast(Sum(Case when Sales_Code like '%Comm%' then SD.Ext_Amt Else 0 end) As Decimal(11,2)) As [SO Comm]
          ,Cast(Sum(Case when Sales_Code = '5-Day' then SD.Ext_Amt Else 0 end) As Decimal(11,2)) As [SO Auto]
          ,Cast(Sum(Case when Sales_Code like '%Auto%' then SD.Ext_Amt Else 0 end) As Decimal(11,2)) As [SO Auto]
          ,Cast(Sum(Case when Sales_Code = '' then SD.Ext_Amt Else 0 end) As Decimal(11,2)) As [SO Fixed]
          ,Cast(Sum(Case when Sales_Code = 'XX' then SD.Ext_Amt Else 0 end) As Decimal(11,2)) As [Job Comm]
          ,Cast(Sum(Case when Sales_Code = 'YY' then SD.Ext_Amt Else 0 end) As Decimal(11,2)) As [Job Auto]
          ,Cast(Sum(Case when Sales_Code = 'ZZ' then SD.Ext_Amt Else 0 end) As Decimal(11,2)) As [Job Fixed]
    from [PRODUCTION].dbo.SO_Detail As SD
    Inner Join [PRODUCTION].dbo.SO_Header As SH
        on SD.Sales_Order = SH.Sales_Order
    Group by SH.Order_Date
    ) q
    on p.[Order Date] = q.[Order Date]
    Order by COALESCE(p.[Order Date], q.[Order Date]) Desc

  • Fact table with different dates

    Hello,
    In my fact table I have several date columns (order date, payment date, ....) and I have only one time table in my physical model.
    For example this model
    Fact table
    order day
    payment day
    Timetable
    day - pk
    I want to create two FKs in order to do time analysis with both order date and payment date:
    FK1: fact_table.order_day and timetable.day
    FK2: fact_table.payment_day and timetable.day
    When validating the model, OBI tells me that I can't have multiple joins between two tables.
    Does anyone know how to solve this? (I think it is quite a common problem.)
    Thanks in advance

    Create an alias for your dimension Timetable and then join that with your fact table. So basically you would have 2 dimensions, one joining with order day the other joining with payment day.
    Thanks,
    Venkat
    http://oraclebizint.wordpress.com

  • Can we join a table with a structure

    Hi
    I have taken fields from the PLAF table and some fields from structures; now I want to join them. How can I do the join?
    Is there any option?
    regards

    Hi,
    Structures don't have any database table associated with them, so they don't hold any data.
    That's why we cannot join a structure with a table, but we can include a structure within any table.
    Syntax to include a structure in a Z table:
    fieldname     data element
    .INCLUDE      struname
    Hope this answers your query.
    Thanks
    Rajesh Kumar

  • Data and Cleansing export TO SQL table with Melissa Data appended fails

    I am using Data Quality Services with Melissa Data Address Check as reference data. Everything works fine until I take the option to export Data and Cleansing Info, which should give me my cleansed data plus additional data points such as geocodes from Melissa. When I do, it fails with the error below.
    (Failed to create a new table geocode in database DQS_STAGING_DATA. Check whether the table already exists and have the database administrator make sure the DQS Service has CREATE TABLE rights in the destination database and can INSERT to the destination table.)
    This error makes no sense, as the table does not exist and I do have the proper rights. I can export Data and Cleansing data if Melissa Data is not involved; when I dig further, it seems to be complaining about column header lengths.
    The identifier that starts with 'Address Validation_Melissa Data Corporation - Address Check - Verify, Correct, Geocode US and Canadian Addresses_CBSADivisionCod' is too long. Maximum length is 128.;
    The identifier that starts with 'Address Validation_Melissa Data Corporation - Address Check - Verify, Correct, Geocode US and Canadian Addresses_DeliveryPointCo' is too long. Maximum length is 128.;
    The identifier that starts with 'Address Validation_Melissa Data Corporation - Address Check - Verify, Correct, Geocode US and Canadian Addresses_ResponseRecordI' is too long. Maximum length is 128.;
    The identifier that starts with 'Address Validation_Melissa Data Corporation - Address Check - Verify, Correct, Geocode US and Canadian Addresses_DeliveryPointCh' is too long. Maximum length is 128.;
    The identifier that starts with 'Address Validation_Melissa Data Corporation - Address Check - Verify, Correct, Geocode US and Canadian Addresses_CBSADivisionLev' is too long. Maximum length is 128.;
    The identifier that starts with 'Address Validation_Melissa Data Corporation - Address Check - Verify, Correct, Geocode US and Canadian Addresses_CongressionalDi' is too long. Maximum length is 128.;
    The identifier that starts with 'Address Validation_Melissa Data Corporation - Address Check - Verify, Correct, Geocode US and Canadian Addresses_CBSADivisionTit' is too long. Maximum length is 128.;
    I can see no option to control these column headers in DQS. Has anyone else experienced this? Does anyone know of a workaround?
    I have already reported it to Melissa Data and they agreed the problem was the column header length, but said they also had no control over that.

    Hello,
    You can create an SR with a based outbound filter. All object that match the filter will be provisioning to CS SQL (if you do not define filter, all objects will be provisioning).
    Or you can create an MVextension rules
    Regards,
    Sylvain

  • Crosstab two joined tables multiplies the data in each column ....

    Below is code that crosstabs monthly sales numbers and totals for each month. What I need to do is add a column that totals last year's sales (the table will be called [2013]) and a column called 2013 YTD. I also need a calculated column that computes growth from one year to the next. So I join the 2013 table and add one last line that sums 2013 sales, but I need only Jan-Mar at this point; in other words, I just need to see the total 2013 Jan-Mar sales for each dealer while still viewing Jan-Dec for the current year. So, looking at the end product, I need to see the Dealer Info, Jan, Feb, Mar, Apr, ... and at the end 2013 Sales and Growth. Thus the WHERE clause. Unfortunately, each month's sales are multiplied by 3, with or without the 2013 sum line. As soon as I join the tables the numbers get multiplied.
    What am I missing?
    SELECT substring([2014].Dealer,18,50) AS [Dealer Name],
           substring([2014].Dealer,9,1)   AS [District],
           substring([2014].Dealer,11,6)  AS [Dealer Code],
           SUM(CASE WHEN [2014].[Transaction Sales Month] = 1  THEN [2014].[Sales Count] ELSE 0 END) AS Jan,
           SUM(CASE WHEN [2014].[Transaction Sales Month] = 2  THEN [2014].[Sales Count] ELSE 0 END) AS Feb,
           SUM(CASE WHEN [2014].[Transaction Sales Month] = 3  THEN [2014].[Sales Count] ELSE 0 END) AS Mar,
           SUM(CASE WHEN [2014].[Transaction Sales Month] = 4  THEN [2014].[Sales Count] ELSE 0 END) AS Apr,
           SUM(CASE WHEN [2014].[Transaction Sales Month] = 5  THEN [2014].[Sales Count] ELSE 0 END) AS May,
           SUM(CASE WHEN [2014].[Transaction Sales Month] = 6  THEN [2014].[Sales Count] ELSE 0 END) AS Jun,
           SUM(CASE WHEN [2014].[Transaction Sales Month] = 7  THEN [2014].[Sales Count] ELSE 0 END) AS Jul,
           SUM(CASE WHEN [2014].[Transaction Sales Month] = 8  THEN [2014].[Sales Count] ELSE 0 END) AS Aug,
           SUM(CASE WHEN [2014].[Transaction Sales Month] = 9  THEN [2014].[Sales Count] ELSE 0 END) AS Sep,
           SUM(CASE WHEN [2014].[Transaction Sales Month] = 10 THEN [2014].[Sales Count] ELSE 0 END) AS Oct,
           SUM(CASE WHEN [2014].[Transaction Sales Month] = 11 THEN [2014].[Sales Count] ELSE 0 END) AS Nov,
           SUM(CASE WHEN [2014].[Transaction Sales Month] = 12 THEN [2014].[Sales Count] ELSE 0 END) AS Dec,
           SUM([2014].[Sales Count]) AS [2014 Total]
    FROM dbo.[2014]
    GROUP BY substring([2014].Dealer,18,50), substring([2014].Dealer,9,1), substring([2014].Dealer,11,6)

    This needs to be moved to the Transact-SQL forum. You may want to post the DDL of your tables, some input data (as INSERT statements), and the desired output.
    In the meantime I think you'll find this blog post helpful:
    Aggregates with multiple tables

  • Error while accessing BSAD Table with dunning date

    Hi ,
    I developed a report for the FI module accessing the BSAD table with default customer ranges and specific dunning dates. It ran for a very long time and timed out (I know this is due to the huge volume of data).
    Is there any way to access the BSAD table efficiently by dunning date (other than creating an index on it)?
    Or is there any standard function module available?
    Regards
    Rajesh.

    Hi
    Try the below tables for the dunning data details:
    MHND      Dunning data
    MHNDO     Dunning data version before the next change
    MHNK      Dunning data (account entries)
    MHNKA     Version administration of dunning changes
    MHNKO     Dunning data (acct entries) version before the next change
    SKS
