Issue with using Multiple Tables in Join

Post Author: dwessell
CA Forum: .NET
Hi, I'm using the Basic version of Crystal Reports that shipped with Visual Studio 2005. I'm building a report over three tables: CQ_HEADER, SO_HEADER and SALESPERSON. Both CQ_HEADER and SO_HEADER link to the SALESPERSON table via an SPN_AUTO_KEY field. However, I always receive duplicates in my result set because of the joins. Here's the query that's produced by CR:

  SELECT "CQ_HEADER"."CQ_NUMBER", "CQ_HEADER"."ENTRY_DATE", "CQ_HEADER"."TOTAL_PRICE",
         "SALESPERSON"."SALESPERSON_NAME", "SO_HEADER"."ENTRY_DATE", "SO_HEADER"."TOTAL_PRICE"
  FROM   "CQ_HEADER" "CQ_HEADER"
         INNER JOIN ("SO_HEADER" "SO_HEADER"
                     INNER JOIN "SALESPERSON" "SALESPERSON"
                             ON "SO_HEADER"."SPN_AUTO_KEY" = "SALESPERSON"."SPN_AUTO_KEY")
                 ON "CQ_HEADER"."SPN_AUTO_KEY" = "SALESPERSON"."SPN_AUTO_KEY"
  WHERE  ("CQ_HEADER"."ENTRY_DATE" >= {ts '2007-12-01 00:00:00'} AND "CQ_HEADER"."ENTRY_DATE" < {ts '2007-12-18 00:00:00'})
    AND  ("SO_HEADER"."ENTRY_DATE" >= {ts '2007-12-01 00:00:00'} AND "SO_HEADER"."ENTRY_DATE" < {ts '2007-12-18 00:00:00'})
  ORDER BY "SALESPERSON"."SALESPERSON_NAME"

There is no link between SO_HEADER and CQ_HEADER. Can anyone suggest how I could structure this so that it doesn't return duplicate values?
Thanks,
David

Post Author: dwessell
CA Forum: .NET
Addendum: I've partially been able to solve it. The information I'm after is:

  Salesperson | # of Quotes | # of Sales | Total Sales (Sum(TOTAL_PRICE)) | Total Quoted

Salesperson is what I group on, and I was able to get correct numbers for # of Quotes and # of Sales by changing Count to Distinct Count. However, for the summations I'm still getting duplicate rows.
Thanks,
David

Similar Messages

  • Are there any known issues with using multiple (2) Apple TVs in the same household?

    I'm thinking of getting a second Apple TV. Are there any known issues with using multiple (2) Apple TVs in the same household? Thank you in advance.

    No, you shouldn't have any problems.

  • Left outer join using multiple table

    Hi,
    I am trying to use a left outer join with multiple tables; the join condition is on PERNR, with BEGDA and ENDDA ranges for each infotype coming from the selection screen.
      SELECT pa00~pernr pa00~begda pa00~endda pa00~massn pa00~massg pa00~stat2 pa00~aedtm pa00~uname
             pa01~begda pa01~endda pa01~bukrs pa01~persg pa01~persk pa01~mstbr pa01~ename pa01~aedtm pa01~uname
             pa02~begda pa02~endda pa02~nachn pa02~vorna pa02~midnm pa02~aedtm pa02~uname
             pa016~begda pa016~endda pa016~cttyp pa016~aedtm pa016~uname
        INTO CORRESPONDING FIELDS OF TABLE i_pall
        FROM ( ( pa0000 AS pa00
               LEFT OUTER JOIN pa0001 AS pa01 ON pa00~pernr = pa01~pernr )
               LEFT OUTER JOIN pa0002 AS pa02 ON pa00~pernr = pa02~pernr )
               LEFT OUTER JOIN pa0016 AS pa016 ON pa00~pernr = pa016~pernr
        WHERE pa00~pernr IN s_pernr
          AND pa00~begda IN s_bg0000
          AND pa00~endda IN s_nd0000.
    but this fails to fetch the BEGDA and ENDDA values from each of PA0000, PA0001, PA0002 and PA0016.
    Please help!
    Monirul

    Why don't you use the standard logical database PNPCE and the PROVIDE statement?

  • Performance issue with using buffering in a APPL0 or APPL1 Table

    Hi,
    Can anyone please tell me whether there is any serious performance issue with activating buffering on a master or transaction table? I'm asking because when I run the Code Inspector on my transparent table I get these information messages:
    Message Code 0011 ==> Buffering is activated but Delivery Class is "A", and Message Code 0014 ==> Buffering is activated but Data Class is "APPL1".
    So what other options are there for improving performance?
    Thanks,
    Mahesh M.S.

    Hi,
    have you read the documentation?
    Let me paste it here for you:
    Buffering is switched on for the examined table and it has data class 'APPL0' or 'APPL1'.
    Tables with data class 'APPL0' or 'APPL1' should contain master or transaction data, so these tables either contain a large amount of data or their content changes frequently. Therefore buffering the table is unfavourable. Very large tables crowd other tables out of the buffer memory and hence slow down access to them. Transaction data should not be buffered because the synchronization of the changes on the various application servers is very time-consuming.
    In exceptional cases, small master data tables ('APPL0', size category 0) can be buffered.
    The solution depends on the table content. If it is master or transaction data, the table should not be buffered. If the table content does not consist of master or transaction data, the data type should be corrected accordingly.
    This should answer your questions...
    Kind regards,
    Hermann

  • When using Table Comparison, can we use multiple tables as the source?

    When using Table Comparison, can we use multiple tables as the source?
    Thank you very much for the helpful info.

    Table Comparison
    1) Input Data coming in
    2) Comparison table (table to which the data is compared)
    3) Output (input rows with respective opcodes based on the comparison result of input dataset with the comparison table)
    If your question is whether Table Comparison can accept a union/join of multiple table sources: yes, you can achieve that by combining the sources with Merge or Query transforms and then feeding the result into Table Comparison. Just be careful about choosing the primary keys inside Table Comparison.

  • Issues with using relative links in Captivate 8

    Is anyone else having issues with using relative links in Captivate 8? These links all used to work in the previous version of Captivate, and I could have sworn this was fixed once already in Captivate 8, but it's popping up again for us. Here is the situation: we have courses made up of multiple lessons, which are separate Captivate files. Within those lessons are buttons that link to external documents (which live in a shared document folder), demonstrations, etc. We use relative links because we post these to our Amazon servers and we also sell them to clients, who can post them on their own web servers or in their LMS. So we can't put in full paths for the links or we'd have to change them constantly. For example, the link for a button might be "../Document/nameofdoc.pdf", pointing to a user guide posted in the "Document" folder that lives at the same level as the lesson's folder. But now, all of a sudden, none of our bazillion links is working. I've tried buttons, hyperlinks, and even the old click box; nothing works with relative links. And I did check the permissions on every file and folder on our Amazon server to verify nothing changed there as well. Any suggestions?

    I have the same issue with relative links in Captivate 8. I am trying to load Captivate modules into an LMS using relative links to document files within the LMS. The links work fine during a site page test, so it's not an issue in the LMS, but from the Captivate module they aren't working....
    Help?

  • Issues with using the output redirection character with newer NXOS versions?

    Has anyone seen any issues with using the output redirection character with newer NXOS versions?
    Am receiving "Error 0x40870004 while copying."
    Simply copying a file from bootflash to tftp is ok.
    This occurs for both 3CDaemon and Tftpd32 softwares.
    Have tried it on multiple switches - same issue.
    Any known bugs?
    thanks!
    The following is an example of the bad (NX-OS 4.1.1b) and good (SAN-OS 3.2.1a) behavior:
    MDS2# sho ver | inc system
      system:    version 4.1(1b)
      system image file is:    bootflash:///m9200-s2ek9-mz.4.1.1b.bin
      system compile time:     10/7/2008 13:00:00 [10/11/2008 09:52:55]
    MDS2# sh int br > tftp://10.73.54.194
    Trying to connect to tftp server......
    Connection to server Established. Copying Started.....
    TFTP put operation failed:Access violation
    Error 0x40870004 while copying tftp://10.73.54.194/
    MDS2# copy bootflash:cpu_logfile tftp://10.73.54.194
    Trying to connect to tftp server......
    Connection to server Established. Copying Started.....
    |
    TFTP put operation was successful
    MDS2#
    ck-ci9216-001# sho ver | inc system
      system:    version 3.2(1a)
      system image file is:    bootflash:/m9200-ek9-mz.3.2.1a.bin
      system compile time:     9/25/2007 18:00:00 [10/06/2007 06:46:51]
    ck-ci9216-001# sh int br > tftp://10.73.54.194
    Trying to connect to tftp server......
    |
    TFTP put operation was successful

    Please check with the new version of the Tftpd32 server. The error may be due to an older version of the TFTP server; the new version available solved this error, and files get uploaded with no issues.
    1. Download tftpd32b.zip from:
    http://tftpd32.jounin.net/tftpd32_download.html
    2. Copy the tftpd32b.zip file into an empty directory and extract it.
    3. Copy the file you want to transfer into the directory containing tftpd32.exe.
    4. Run tftpd32.exe from that directory. The "Base Directory" field should show the path to the directory containing the file you want to transfer.
    At this point, the tftpserver is ready to begin serving files. As devices request files, the main tftpd32 window will log the requests.
    Best Regards...

  • How to use multiple tables in a single control file?

    Hi,
    How can I use multiple tables and data files in a single control file? I have four tables and four CSV data files for them. I am running a concurrent program to load the data from the CSV files into custom tables; based on the input data file name, the right table has to be picked automatically from the one control file.
    Can anyone share how I can achieve this?
    Thanks

    Hi,
    Can't we achieve it like below? I don't think this is exactly correct.
    OPTIONS (SKIP=1)
    LOAD DATA
    INFILE << file name 1 >>
    INFILE << file name 2 >>
    APPEND
    INTO TABLE XXCZ_VA_SAMPLE1
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
    ( PARENT_ITEM "TRIM(BOTH FROM :PARENT_ITEM)" )
    INTO TABLE XXCZ_VA_SAMPLE2
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
    ( ITEM_NUMBER "TRIM(BOTH FROM :ITEM_NUMBER)" )
    Edited by: orasuriya on Sep 14, 2009 3:03 AM

  • Writeback using Multiple tables

    Hello Everyone,
    1. Can we create a report with writeback capabilities using multiple tables (say, one fact and several dimensions)?
    All the examples I'm seeing in blogs or forums with screenshots explain writeback reports created with just one table.
    2. If writeback can work with reports using multiple tables, how do we need to change the XML templates in the custom messages folder (insert and update)?
    Any example?
    Your suggestions are appreciated.
    Thanks

    You can only send one statement. So if you want to update many tables, you have to call a procedure or function from your SQL statement, for instance a table function:
    http://gerardnico.com/wiki/database/oracle/table_function
    But I would advise you to do it the simplest way possible: one writeback report for one table to update.
    Success
    Nico

  • Export Issues with Compressed, Partitioned Tables?

    We recently partitioned and compressed some large tables. It appears (though I'm not sure yet) that this is causing the export to run extremely slowly. The database is at 10.2.0.2 and we are using the exp utility, not Data Pump. Does anyone know of any known issues with using exp to export compressed, partitioned tables?

    Can you give more details of the table structure (with DBMS_METADATA if possible), and how you are taking the export, please?
    Did you try to take a SQL trace of the export process to see what is going on behind the scenes? This is an introduction, if you need it:
    http://tonguc.wordpress.com/2006/12/30/introduction-to-oracle-trace-utulity-and-understanding-the-fundamental-performance-equation/

  • CVC creation - Strange issue with Master data table of 9AMATNR

    Hi Experts,
    We have encountered a strange issue with the master data table (/BI0/9APMATNR) of InfoObject 9AMATNR.
    We have a BAdI implemented for checking that the characteristic is valid before a CVC is created with transaction /SAPAPO/MC62. This BAdI runs a SELECT on the master data table /BI0/9APMATNR and returns no value, although the material actually exists in the table (checked through SE16).
    If we then go into InfoObject 9AMATNR, open the Master Data tab, drill into the master data table
    /BI0/9APMATNR and activate it, the SELECT statement inside the BAdI suddenly finds the record (strange) and the CVC can be created.
    Ideally it should not even allow us to activate the SAP standard table /BI0/9APMATNR. I noticed that in the technical settings of this table, single-record buffering is switched on. (But as far as I know, the buffer gets refreshed every 2 to 4 minutes, not every 2 days or so.)
    Your expert comment is valuable to us. Thanks.
    Best Regards,
    Chandan Dubey

    Hi Chandan,
                 Try a WAIT statement of 5 seconds (WAIT UP TO 5 SECONDS) before your SELECT statement.
    I'm not sure whether this will work. Anyway, check it and let me know the result.
    Regards,
    Siva.

  • Any issues with using LDAP on LINUX for GRC 5.2 UME?

    Our company is converting our LDAP servers from AIX to LINUX.  The DNS name used in our UME connection should not change.  Are there any issues with using LDAP on LINUX?  We are currently on GRC 5.2 SP9 (in the middle of upgrading to SP12).
    Also, I have been trying to connect our test UME system to a test LDAP box that has already been converted to LINUX, but I keep getting a 'connection failed' error when I try to test it.
    Do you have to reboot the server to test a change of LDAP connection? I've been trying it by going into UME, pulling up the LDAP tab, hitting the Modify button, entering the new user ID and password for the test LDAP, and hitting the Test Connection button. I've verified that this user ID and password are correct for the test LDAP.
    Is there a way to get more information about why the connection failed?
    Thanks.

    I've been told by our LDAP Support group that none of the other configuration settings should have to be changed.  I should only have to change the id and password to connect to a test version of LDAP instead of our regular connection to the production LDAP.
    Can you test a connection with a different user ID/password without having to reboot/restart the server? Or do I need to change these two settings, save them, reboot/restart, and then use the Test Connection button?
    Thanks.

  • HR ABAP: Issue with using 'nocommit' parameter on FM HR_INFOTYPE_OPERATION

    Issue with using the 'nocommit' parameter on FM HR_INFOTYPE_OPERATION:
    My client has a requirement to create the following 4 infotypes in sequence in one LUW, i.e. either all are created or none is.
    9045   (custom infotype)
    0045
    0078
    0015
    I tried to use the nocommit parameter on FM HR_INFOTYPE_OPERATION to insert the 4 infotypes
    in nocommit mode, and at the end I issued
    COMMIT WORK, but to my surprise only infotype 0015 is created in the database; the first three (9045, 0045 and 0078) did not make it to the database.
    I searched many threads on SDN but could not find a solution.
    Please let me know if there is any way to implement this as a LUW.
    Your inputs will be appreciated.

    Hi,
    I think you can also try FM HR_MAINTAIN_MASTERDATA; see its documentation.
    Nocommit works like a simulation mode. What you can do is:
    call the FM for all infotypes with nocommit set and collect any error messages; then, if there are none, call the FM for all infotypes again without nocommit (i.e. passing space).
    Regards,
    Prabhu

  • Issue with Update of Table VARINUM

    Hi,
    I am getting wait issues with updates of table VARINUM. Has anybody faced such an issue?
    I have a lot of jobs running in the background, which I submit through a report. What can the issue be?
    Regards,
    Abhishek jolly

    This is quite old but not answered properly yet, so here you go:
    SAP generates a new job and a temporary variant on report RSDBSPJS for each HTTP call, which creates database locks on table VARINUM.
    This causes any heavyweight BSP application to hang and give timeout errors.
    The problem is fixed by applying OSS note 1791958, which is not included in any support package.

  • Issue with Data Dictionary - table maintenance generator

    Hi all,
    I have an issue with the Data Dictionary table maintenance generator. I entered some records in a custom table (ZBCSECROLETOGRP) and changed the delivery class from C to A. When I create the table maintenance generator, I encounter the following errors:
    1)Field ZBCSECROLETOGRP-PORTALGROUP shortened (new visible length: 000032)
    2)0012 could not be generated
    3)In TCTRL_ZBCSECROLETOGRP field LENGTH has the invalid value 01
    My main goal is to create the table maintenance generator and transport it to the downstream systems.
    Please help.
    Thanks in advance,
    Vishal..

    Hi,
    Regenerate the table maintenance by selecting the "Modified field structure" checkbox (new entry) and then saving.
    Also ensure that the new changes do not affect old data because of the data type changes. If they do, delete the old records, regenerate the table maintenance, and re-enter the records you deleted.
    Thanks,
    Best regards,
    Prashant
