JOIN/CASE

Hello,
I have two JOIN queries and need help merging the 2nd query into the 1st.
In the final result I would like to see all transactions of type 'PP' with their customer_name, and all other (non-'PP') transactions with 'N/A' as the customer_name.
Thank you for your help!
1ST QUERY
SELECT
b.en_ent,
e.sa_sub,
c.cc_cstctr,
d.acct_acc,
b.EN_ENTITY_LNG,
e.SA_SUB_LNG,
c.COST_CTR_LNG,
d.ACCT_ACC_LNG,
f.FISCAL_MONTH,
f.FISCAL_YEAR_LNG,
d.ACCT_TYPE,
SUM(a.gl_amt)
FROM
F_ACCT_TRX_HIST_STG1 a,
D_ENTITY_STG2 b,
D_COSTCTR_STG2 c,
D_ACCTS_STG2 d,
D_SUBACCTS_STG2 e,
D_PERIOD_STG1 f
WHERE
a.PP_ENT = b.EN_ENT AND
c.CC_CSTCTR = UPPER(a.PP_CC) AND
d.acct_acc = a.pl_acc AND
e.sa_sub = a.pl_sa AND
a.pl_eff_dt = f.calendar_date
GROUP BY b.EN_ENT,
e.sa_sub,
c.cc_cstctr,
d.acct_acc,
b.EN_ENTITY_LNG,
e.SA_SUB_LNG,
c.COST_CTR_LNG,
d.ACCT_ACC_LNG,
f.FISCAL_MONTH,
f.FISCAL_YEAR_LNG,
d.ACCT_TYPE
2ND QUERY
SELECT a.pl_ent,
     a.pl_sa,
     a.pl_cc,
     a.pl_acc,
     b.customer_name,
     SUM(a.gl_amt)
FROM F_ACCT_TRX_HIST_STG1 a,
D_CUSTOMER b,
     f_sales_invoice c
WHERE a.pl_tr_type = 'PP' AND
a.pl_doc= c.inv_nbr AND
     c.inv_cust_bill_to_nbr = b.cust_nbr
GROUP BY a.pl_ent,
     a.pl_sa,
     a.pl_cc,
     a.pl_acc,
     b.customer_name

I tried to write what I need (it's not working); maybe this will help make it clearer!
Thank you
SELECT *
FROM (SELECT
b.en_ent,
e.sa_sub,
c.cc_cstctr,
d.acct_acc,
g.cust_name,
b. EN_ENTITY_LNG,
e. SA_SUB_LNG,
c. COST_CTR_LNG,
d. ACCT_ACC_LNG ,
f. FISCAL_MONTH,
f. FISCAL_YEAR_LNG,
d. ACCT_TYPE,
SUM(a.gl_amt),
CASE WHEN gl_tr_type = 'SO'
THEN
          a.gl_doc = h.inv_nbr AND h.inv_cust_bill_to_nbr = g.cust_nbr
          END
CASE WHEN gl_tr_type = 'IC'
THEN
a.gl_doc = h.tr_trnbr AND h.tr_nbr = g.channel_code
          END
FROM
F_ACCT_TRX_HIST_STG1 a,
D_ENTITY_STG2 b,
D_COSTCTR_STG2 c,
D_ACCTS_STG2 d,
D_SUBACCTS_STG2 e,
D_PERIOD_STG1 f,
FINMART.D_CUSTOMER g,
DSSMART.F_SALES_INVOICE h,
WHERE
a.GL_ENT = b.EN_ENT AND
c.CC_CSTCTR= UPPER (a.GL_CC) AND
d.acct_acc = a.gl_acc AND
e.sa_sub = a.gl_sa AND
a.gl_eff_dt = f.calendar_date
GROUP BY b.EN_ENT,
e.sa_sub,
c.cc_cstctr,
d.acct_acc,
b. EN_ENTITY_LNG,
e. SA_SUB_LNG,
c. COST_CTR_LNG,
d. ACCT_ACC_LNG ,
f. FISCAL_MONTH,
f. FISCAL_YEAR_LNG,
d. ACCT_TYPE
g.cust_name)
WHERE g.cust_name IS NULL THEN INSERT 'N/A'
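One common way to get this result is a single query that makes the customer lookup optional via outer joins, then maps missing names to 'N/A' with NVL. The following is a hedged sketch, not a tested statement: table and column names are taken verbatim from the two queries above, and the 'PP' filter moves into the join condition so that non-'PP' rows survive with a NULL customer instead of being filtered out:

```sql
SELECT b.en_ent,
       e.sa_sub,
       c.cc_cstctr,
       d.acct_acc,
       NVL(g.customer_name, 'N/A') AS customer_name,
       b.en_entity_lng,
       e.sa_sub_lng,
       c.cost_ctr_lng,
       d.acct_acc_lng,
       f.fiscal_month,
       f.fiscal_year_lng,
       d.acct_type,
       SUM(a.gl_amt) AS gl_amt
  FROM f_acct_trx_hist_stg1 a
       INNER JOIN d_entity_stg2   b ON a.pp_ent = b.en_ent
       INNER JOIN d_costctr_stg2  c ON c.cc_cstctr = UPPER(a.pp_cc)
       INNER JOIN d_accts_stg2    d ON d.acct_acc = a.pl_acc
       INNER JOIN d_subaccts_stg2 e ON e.sa_sub = a.pl_sa
       INNER JOIN d_period_stg1   f ON a.pl_eff_dt = f.calendar_date
       -- only 'PP' transactions look up an invoice; all other rows get NULLs
       LEFT OUTER JOIN f_sales_invoice h
                    ON a.pl_doc = h.inv_nbr
                   AND a.pl_tr_type = 'PP'
       LEFT OUTER JOIN d_customer g
                    ON h.inv_cust_bill_to_nbr = g.cust_nbr
 GROUP BY b.en_ent,
          e.sa_sub,
          c.cc_cstctr,
          d.acct_acc,
          g.customer_name,
          b.en_entity_lng,
          e.sa_sub_lng,
          c.cost_ctr_lng,
          d.acct_acc_lng,
          f.fiscal_month,
          f.fiscal_year_lng,
          d.acct_type
```

Note the division of labor your attempt was missing: join conditions belong in the ON clauses, while CASE/NVL expressions belong in the SELECT list. If you also need the 'IC' channel lookup from your attempt, add a second pair of LEFT OUTER JOINs with their own ON conditions in the same way.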

Similar Messages

  • Left joins : Case or if statement

    Hi
I want to know if one can build a CASE or IF statement into a left join. I want to multiply a value by -1 if the condition is met.

I think it is not possible in the SELECT statement... but you can do the following to resolve it.
Add a field for FKART to your internal table.
    TYPES: BEGIN OF tab ,
           kunag LIKE vbrk-kunag,
           fkdat like vbrk-fkdat,
           matnr LIKE vbrp-matnr,
           werks LIKE vbrp-werks,
           fkart like vbrk-fkart,
           volum LIKE vbrp-volum,
           END OF tab.
    DATA: wa_outtab TYPE table of tab WITH HEADER LINE.
SELECT a~kunag a~fkdat
       b~matnr b~werks a~fkart
       SUM( b~volum )
   INTO TABLE wa_outtab
FROM vbrk AS a
INNER JOIN vbrp AS b
      ON  a~vbeln = b~vbeln
WHERE
a~vbeln = b~vbeln
AND ( a~fkart LIKE 'F2' OR a~fkart LIKE 'RE' )
GROUP BY a~kunag a~fkdat b~matnr b~werks a~fkart.
    sort wa_outtab.
    Loop at wa_outtab.
      if wa_outtab-fkart = 'RE'.
        lpos = lpos +  wa_outtab-volum.
      else.
        lneg = lneg +  wa_outtab-volum.
      endif.
      at end of werks.
lvalue = lpos - lneg.
        lpos = lneg = 0.
      endat.
    endloop.

  • Mid month Relieving and joining

    Hi All,
Relieving and joining in the middle of the month are not getting prorated.
The system is able to prorate the salary for any unpaid absence, but not for mid-month relieving and joining cases.
What am I missing? Please advise.
    Regards
    Astha

Hi,
Relieving in the middle of the month:
Check the Payroll Status infotype (0003) of the relieved employee and enter the relieving date in the "Run Payroll up to" field.
Joining in the middle of the month:
Check the Date Specifications infotype (0041) of the employee who joined in the middle of the month and record the hiring date there.
After completing the above process, run the payroll.
Let's wait and see what other experts say on this.
    Regards,
    Shadeesh.G

  • How can we make an outer join (+) between 2 Queries

In the data model, I have 2 queries, Q_master and Q_detail.
I want to make a data link between these two queries and also make an outer join between them (i.e. to display all the master records, whether they have details or not).
Please reply: is it possible? If yes, then how?

    Hello,
    Left outer join behavior is what you get by default with a link between two queries in Reports.
    If you want a full outer join behavior, you'll need to create a third query that selects the detail records that have no corresponding master, and also create an extra layout region to display them in as a default group left or group above won't pick up these extra records.
    If you want right outer join behavior, you'll need to put in a summary in the master query that counts the rows in the detail, and then put in a format trigger in the master repeating frame that suppresses printing when there are no detail records. And you'll also need the third query and layout section as in the full outer join case.
    Regards,
The Oracle Reports Team --skw

  • Replacing a inner join with for all entries

Hi Team,
In an already-developed program I am replacing an inner join with a SELECT followed by FOR ALL ENTRIES, passing the data to a final internal table. The results should be the same in both cases; only then is my replacement correct. But the number of records differs. When I select from the first database table I get 32 lines; after FOR ALL ENTRIES (which removes the duplicate entries) there are 4 records. For the final internal table I loop over the first internal table, so the final internal table has 32 records, but the inner join query returns 16. Please let me know how to resolve this issue.
Thanks and regards
Deepa

Hi Thomas,
Thanks for your suggestion.
I solved it as below.
In the select query I did not change anything; the way I had written the code was correct.
I think many of us know how to write it to make the performance better that way.
I made the change when I transferred the data to the final internal table.
The original inner join code:
select a~field1 a~field2 a~field3 b~field2 b~field3 b~field4
  from dbtab1 as a inner join dbtab2 as b
  on a~field1 = b~field1
  into table it_final
  where a~field1 in s_field1. "field1 is a key field in both tables
Before code:
sort itab1 by key fields.
sort itab2 by key fields.
loop at itab1 into wa1.
  move: wa1-field1 to wa_final-field1,
        wa1-field2 to wa_final-field2,
        wa1-field3 to wa_final-field3.
  read table itab2 into wa2 with key field1 = wa1-field1 binary search.
  if sy-subrc = 0.
    move: wa2-field2 to wa_final-field4,
          wa2-field3 to wa_final-field5,
          wa2-field4 to wa_final-field6.
    append wa_final to it_final.
  endif.
  clear: wa1, wa2, wa_final.
endloop.
In this case, even if a key field value from the first internal table is not present in the second internal table, the row would still be built, with the second table's fields left initial. That does not happen with an inner join: only rows where the key field matches in both tables are fetched.
Changed code:
loop at itab1 into wa1.
  read table itab2 into wa2 with key field1 = wa1-field1 binary search.
  if sy-subrc = 0.
    move: wa1-field1 to wa_final-field1,
          wa1-field2 to wa_final-field2,
          wa1-field3 to wa_final-field3.
    move: wa2-field2 to wa_final-field4,
          wa2-field3 to wa_final-field5,
          wa2-field4 to wa_final-field6.
    append wa_final to it_final.
  endif.
  clear: wa1, wa2, wa_final.
endloop.
In this case a row reaches the final internal table only if the key field matches in both tables.
    With Regards
    Deepa

  • Join With Date Dimension

    Hi,
I have a dimension called CaseHeader and a Date dimension. In the Date dimension the date key is in yyyymmdd format. In my fact table I get CaseCreationDate from the CaseHeader dimension; it contains a time component as well, because there is a requirement to build some reports based on timing.
When I join CaseCreationDate to the date column in the data source view, I get nothing back, and I get an error in cube processing as well, because the time portion keeps it from matching the date key. What is the best approach to resolve this? Do I need to add a time dimension, or have a time field in the date dimension?
Could anyone suggest, please?
    MH

Thanks Christian, I have separated date and time, and kept the original datetime field as well in case somebody wants it for reporting purposes. Thank you so much for all your help and suggestions. Much appreciated. Used this code:
convert(date, ch.CaseCreationDate) AS CaseCreationDate,
convert(varchar(8), convert(time, ch.CaseCreationDate)) AS [CaseCreationTime]
    Regards,
    Mustafa
    MH
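For the relational join itself (when populating or querying the fact with T-SQL), the usual fix is to derive the yyyymmdd integer key from the datetime on the fly, or better, persist it as a column in the fact table. A hedged sketch; DimDate, DateKey, CaseHeader, and CaseID are assumed names for illustration, not names from the post:

```sql
SELECT ch.CaseID,
       d.DateKey
  FROM CaseHeader ch
       -- style 112 renders a datetime as 'yyyymmdd', matching the integer date key
       JOIN DimDate d
         ON d.DateKey = CONVERT(int, CONVERT(varchar(8), ch.CaseCreationDate, 112));
```

Persisting the derived key (plus a separate time-of-day key, if a time dimension is added) avoids recomputing the conversion at every join and keeps the cube's dimension usage simple.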

  • Loaded dump from 12.5.4 to 15.7, Error 622 & Tables/Views not found

    Hi all,
    I am getting desperate trying to fix that issue. I had to import a dump from an ASE 12.5.4 into a new database on an ASE 15.7. I therefore enabled the compatibility mode server-wide and then proceeded to the load which seems to have performed correctly. The database seems well imported, and I can access most of the tables/views/procedures (from my C# application), but a procedure is giving me troubles.
    I am getting this error while executing a given procedure :
    Error: 622, Severity: 20, State: 1
    Opentable was passed a varno of 52. Object 'temp worktable' in database 'tempdb' already has that session descriptor in use. This occured while opening object id -1245236 in database id 2.
I realised this was caused by a view used in this procedure, but what is weird is that the error is not thrown when running the view's SQL itself, outside the procedure context.
I am using Toad for Sybase as a client, and realised I cannot access the data tab of the concerned view; Toad gives me the same error. So I cannot execute this view, while it performs without any trouble on the original server (the production server running ASE 12.5.4).
I also realised, when trying to execute some pieces of the query individually, that I get warnings about tables not being found (and that I should use sp_help to fix the problem), although the tables actually exist and I can access their data.
I have tried so many things and still cannot fix this problem. I've used 'upgrade_object' and 'sp_recompile' on all tables, and 'dbcc reindex' on all tables.
Would you have any idea where the problem could come from? Thank you very much.

Thank you for your reply.
Good to know that enabling compatibility mode before or after loading the dump doesn't have any effect.
I didn't drop/recreate all the SPs and views, but I can tell you I recreated a similar view and it causes the same issue.
I had a look at your link, yes. I have run 'dbcc upgrade_object' several times already, on the view and on the procedures calling the view, and it doesn't solve anything.
I've seen a similar post about the problem I have when executing the view's code itself in a "normal" ad-hoc query (errors saying the tables are not found): SAP Sybase Forums - ASE - Backup and Recovery - Problem after restore - Object does not exist.
Could it be the same problem for me?
I've tried splitting the SQL query and executing the parts individually, and it works without any trouble; the problem appears when the 2 parts of the view are put together. It may be something that was possible with ASE 12.5.4 but not with 15.7, and that compatibility mode doesn't solve. My view consists of a union of two selects, where both have left joins, CASE WHEN THEN ELSE, NOT IN, and a final GROUP BY (adding an ORDER BY doesn't help, as I tried).
PS: I don't have access to the production server where the dump is actually from (but I could if it is really necessary).

  • Parent - child table issue wrt to count - SQL question

    I have a scenario:
There are 2 tables (parent and child): say, a case summary table and a task-level dimension table.
For every case id in the case summary table there are multiple tasks in the task dimension table, with a flag indicator set to 1 for all tasks.
When counting the number of active cases with flag indicator 1 (joining the case summary table with the task dimension table), only one task instance per case id should be counted: a case is considered active if the flag indicator of any of its tasks is set to 1. But when joining and counting case ids with flag indicator 1, every task row of a case is counted, which is logically incorrect. How do I discard the remaining child records of a case in the child table (the task dimension table)?
I am not sure how to achieve this in a SQL query.
    Kindly help!
Case summary table:
case_id, business_unit, agent_name
1001, admin, Ram
1002, Finance, Sam
Task table:
case_id, task_id, task_name, flag_indicator
1001, 1, 'New', 1
1001, 2, 'Open', 1
1001, 3, 'In progress', 1
1002, 4, 'New', 1
(In fact task_id is not a big deal; you can even assume task_id doesn't exist, only task_name.)
Now my question: my query should get the count of currently active cases (ind = 1); as per the above it should give 2, but it gives me 4. You know the reason why, but how do I get the correct count?
    Thanks!

Maybe you need just this:
select count(distinct case_id) from task
where flag_indicator = 1;
If this is not what you are looking for, please elaborate and tell us the expected output and the rest of the details, as mentioned in the FAQ: Re: 2. How do I ask a question on the forums?
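If columns from the parent table are needed as well, the same idea can be expressed as a join that still counts each case only once. A hedged sketch; case_summary is an assumed table name based on the listing above, and business_unit an assumed spelling of that column:

```sql
SELECT cs.business_unit,
       COUNT(DISTINCT t.case_id) AS active_cases
  FROM case_summary cs
       JOIN task t
         ON t.case_id = cs.case_id
 WHERE t.flag_indicator = 1
 GROUP BY cs.business_unit;
```

COUNT(DISTINCT t.case_id) is what discards the extra task rows per case, so the multiplicity of the child table no longer inflates the count.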

  • Identifying Primary Keys for table(s) which have no natural PK

    Hello,
At my organization, someone used the loader to bring in 4 tables which have no natural PKs. The user is using BI Studio to demo to upper management. He wants to be able to get down to a single row returned, but the tables don't appear to have anything unique; the data pulled from the warehouse has no timestamps, only dates. Since there can be multiple occurrences of case names, supervisor names, worker names, case numbers, dates, etc., I have no idea how to get down to a unique single row, and this table has no other table in relation to it:
    Name Null Type
    APPROVAL_PERIOD VARCHAR2(50)
    PAGE_NUMBER VARCHAR2(50)
    SERVICING_AGENCY VARCHAR2(50)
    ELIGIBILITY_OVERRIDES_1 VARCHAR2(50)
    SERVICE_AUTH_OVERRIDES_1 VARCHAR2(50)
    SUPERVISOR VARCHAR2(50)
    ELIGIBILITY_OVERRIDES_2 VARCHAR2(50)
    SERVICE_AUTH_OVERRIDES_2 VARCHAR2(50)
    WORKER VARCHAR2(50)
    ELIGIBILITY_OVERRIDES_3 VARCHAR2(50)
    SERVICE_AUTH_OVERRIDES_3 VARCHAR2(50)
    TYPE_OF_OVERRIDE VARCHAR2(50)
    CASE_NUMBER VARCHAR2(50)
    CASE_NAME VARCHAR2(50)
    CC_PROGRAM VARCHAR2(50)
    PERIOD_BEGIN DATE
    OVERRIDE_REASON VARCHAR2(50)
    OVERRIDE_VERSION VARCHAR2(2)
    APPROVED_DATE DATE
    If anyone has some advice on how to proceed with this data I would appreciate it.

Data model normalization is usually different for data warehouses.
Data is often denormalized purposely.
To get "unique" rows you have to aggregate the data one way or another.
Work with OLAP/BI is oriented toward aggregations of facts by dimensions, rather than looking at individual rows in the fact table.
Aggregations are usually done by BI Studio automatically when the user slices and dices; a fact column will be summarized, max'ed, or aggregated otherwise.
Getting down "to the row" may make sense when the user wants to drill through to the original data that came from the OLTP database.
Usually that is required for investigations and the like.
Normally OLAP/BI applications do not go that deep, and manipulate aggregated data at a higher level.
"He wants to be able to get down to a single row returned"
What is the reason for that? What will change if he sees that the same worker had multiple interactions for a case on one day?
Worker   Case     Date
Joe Doe  SC12345  2013-02-07
Joe Doe  SC12345  2013-02-07
Joe Doe  SC12345  2013-02-07
Joe Doe  SC12345  2013-02-07
instead of the normally aggregated result
Worker   Case     Date        Count
Joe Doe  SC12345  2013-02-07  4
If the user still wants to drill through, a drill-through query needs to be specified and sent to the OLTP database where this data originated.
For the above it should be something like:
select w.WorkerName, c.Case#, i.Timestamp, ... other details
from Workers@OLTP w
join Cases@OLTP c on c.worker_id = w.worker_id
join Interactions@OLTP i on i.case# = c.case#
where w.WorkerName = 'Joe Doe' and c.Case# = 'SC12345'
and TRUNC(i.Timestamp) = TO_DATE('2013-02-07');

  • Alternative query options

I am assisting a co-worker in trying to improve an ugly query, and I can't come up with good suggestions given all the business rules that have been applied.
The main rule is to convert everything from the production system into the data warehouse and, if it cannot be converted, create a log entry for it.
Here is a short test case.
    CREATE TABLE ATTY (atty_id number, nme varchar2(20));
    INSERT INTO atty VALUES (1, 'Bob');
    INSERT INTO atty VALUES (2, 'Perry');
    INSERT INTO atty VALUES (3, 'Ben');
    CREATE TABLE PHN (phn_id number, atty_id number, phn_no varchar2(15));
    INSERT INTO phn VALUES (10,1,'800-555-1234');
    INSERT INTO phn VALUES (11,2,'800-555-7890');
    INSERT INTO phn VALUES (12,3,'800-555-6541');
    CREATE TABLE ATTY_CASE_XREF (atty_id number, pers_id number, case_id number);
-- The attorney representing a person on a case.  Could be multiple people and/or attorneys per case.
    INSERT INTO atty_case_xref VALUES (2, 30, 40);
    INSERT INTO atty_case_xref VALUES (2, 31, 40);
    INSERT INTO atty_case_xref VALUES (3, 32, 41);
    INSERT INTO atty_case_xref VALUES (2, 33, 41);
    INSERT INTO atty_case_xref VALUES (1, 34, 42);  -- yes this points to nothing (aka bad data to deal with)
    CREATE TABLE case (case_id number, docket varchar2(20));
    -- Due to bad data, may not always have case for each reference above.
    INSERT INTO case VALUES (40, '2013-abcdefg');
    INSERT INTO case VALUES (41, '2013-pouyfbkjo');
    COMMIT;
    The test case query (a simplified version of the real one but captures the concepts)
    SELECT ap.atty_id,
           ap.nme,
           p.phn_no,
           c.docket
      FROM atty ap
           INNER JOIN phn p
              ON ap.atty_id = p.atty_id
           INNER JOIN atty_case_xref acx
              ON ap.atty_id = acx.atty_id
           LEFT OUTER JOIN case c
             ON acx.case_id = c.case_id
    UNION
    SELECT ap.atty_id,
           ap.nme,
           p.phn_no,
           NULL
      FROM atty ap
           INNER JOIN phn p
              ON ap.atty_id = p.atty_id
           INNER JOIN atty_case_xref acx
              ON ap.atty_id = acx.atty_id;
The desired output (which is also what the above SQL statement produces; order does not matter):
ATTY_ID  NME    PHN_NO        DOCKET
1        Bob    800-555-1234
2        Perry  800-555-7890  2013-abcdefg
2        Perry  800-555-7890  2013-pouyfbkjo
2        Perry  800-555-7890
3        Ben    800-555-6541  2013-pouyfbkjo
3        Ben    800-555-6541
Basically, the first part (above the UNION) creates a record for each attorney on a case, even if a case record is not loaded.
The second part creates just a generic non-case record for each unique attorney.
If attorney 1 is listed on 10 cases, we need to create 11 records: one for each case and one with no case information.
The data warehouse table the data is loaded into serves two purposes. It is a normal data table, but also acts as a code table. That way, if someone wants a list of all attorneys, they don't have to do a DISTINCT, but can just query for a certain type of record (don't ask why).
The cost of the real version of this query is 248 billion, and it was killed after several days of running as it was still building the result set. The explain plan image I have shows the following row counts: ATTY = 96,255, PHN = 3,284,444, ATTY_CASE_XREF = 11,553,888, CASE = 14,421,772. We are looking for options that turn this into a one-pass run (or just reduce the cost) so it would complete in a reasonable time.
Any suggestions need to work on 10.2. This seems like a good place for a MODEL clause, but I still struggle with that syntax.
    And the cleanup
    DROP TABLE atty purge;
    DROP TABLE phn purge;
    DROP TABLE atty_case_xref purge;
    DROP TABLE case purge;

    Glad you asked because I found out a couple of assumptions were no longer valid.  The code was developed on 10.2.0.4, and a copy still exists there, but apparently the client had upgraded their DB since I helped out many months ago.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL> select count(*) from dms_attyphn;
      COUNT(*)
      158612
    SQL> select count(*) from dms_phn;
      COUNT(*)
      3284444
    SQL> select count(*) from dms_atty;
      COUNT(*)
      96255
    SQL> select count(*) from dms_ptyatty;
      COUNT(*)
      11553888
    SQL> select count(*) from dms_case;
      COUNT(*)
      14421772
    SQL> select count(*) from dm_phone;
      COUNT(*)
      2971525
I had also thought that DM_PHONE was empty, but an earlier part of the procedure that this query runs in populates a different set of phone data into the table; the stats show it as empty, but in reality it had nearly 3 million rows.
    The actual query
    SELECT DISTINCT p.cnv_phn_id,
                          p.cnv_phn_nbr,
                          pa.cnv_case_id,
                          ap.cnv_atty_id,
                          f_remove_site(ap.cnv_phn_cd) cnv_phn_cd,
                          ap.cnv_phn_cd_dscr,
                          a.cnv_idnt_id,
                          c.cnv_site_id
            FROM dms_attyphn ap
                 INNER JOIN dms_phn p
                    ON ap.cnv_phn_id = p.cnv_phn_id
                 INNER JOIN dms_atty a
                    ON a.cnv_atty_id = ap.cnv_atty_id
                 INNER JOIN dms_ptyatty pa
                    ON pa.cnv_atty_id = a.cnv_atty_id
                 LEFT OUTER JOIN dms_case c
                   ON pa.cnv_case_id = c.cnv_case_id
           WHERE NOT EXISTS (SELECT 1
                               FROM dm_phone dp
                              WHERE nvl(dp.pers_id_seq,'-1') = nvl(a.cnv_idnt_id,'-1')
                                AND nvl(dp.case_id_seq,'-1') = nvl(pa.cnv_case_id,'-1')
                                AND nvl(dp.phone_id,'-1')    = nvl(p.cnv_phn_id,'-1')
                                AND nvl(dp.type_cd,'-1')     = nvl(f_remove_site(ap.cnv_phn_cd),'-1'))
          UNION
          SELECT DISTINCT p.cnv_phn_id,
                          p.cnv_phn_nbr,
                          NULL,
                          ap.cnv_atty_id,
                          f_remove_site(ap.cnv_phn_cd) cnv_phn_cd,
                          ap.cnv_phn_cd_dscr,
                          a.cnv_idnt_id,
                          NULL
            FROM dms_attyphn ap
                 INNER JOIN dms_phn p
                    ON ap.cnv_phn_id = p.cnv_phn_id
                 INNER JOIN dms_atty a
                    ON a.cnv_atty_id = ap.cnv_atty_id
           WHERE NOT EXISTS (SELECT 1
                               FROM dm_phone dp
                              WHERE nvl(dp.pers_id_seq,'-1') = nvl(a.cnv_idnt_id,'-1')
                                AND dp.case_id_seq IS NULL
                                AND nvl(dp.phone_id,'-1')    = nvl(p.cnv_phn_id,'-1')
                                AND nvl(dp.type_cd,'-1')     = nvl(f_remove_site(ap.cnv_phn_cd),'-1'))
The NOT EXISTS clauses are there to prevent these records from being loaded twice into the data warehouse. On the initial load this will not be an issue, and we are discussing removing them from the initial load run. For the function f_remove_site(), I have come up with a few lines of SQL that will replace what it does, in order to remove the SQL -> PL/SQL -> SQL context switches.
We do not have access via SQL*Plus to do trace/explain plan, so the following is taken from PL/SQL Developer (the format looked good in the pre block):
    Description                     Object name          Cost            Cardinality     Bytes
    SELECT STATEMENT, GOAL = ALL_ROWS               248006382352     19366615     1950133571
    SORT UNIQUE                              248006382352     19366615     1950133571
      UNION-ALL                         
       FILTER                         
        HASH JOIN RIGHT OUTER                    260203          19207303     1939937603
         TABLE ACCESS FULL          DMS_CASE          111541          14421772     302857212
         HASH JOIN                              42334          19207303     1536584240
          HASH JOIN                              9204          159312          10195968
           HASH JOIN                         884          159312          7169040
            TABLE ACCESS FULL     DMS_ATTY          137          96255          1540080
            TABLE ACCESS FULL     DMS_ATTYPHN          309          158612          4599748
           TABLE ACCESS FULL     DMS_PHN               3043          3284444          62404436
          INDEX FAST FULL SCAN     DMS_PTYATTY_ID_IDX     17114          11553888     184862208
        TABLE ACCESS FULL          PHONE               12810          1          33
       FILTER                         
        HASH JOIN                              9204          159312          10195968
         HASH JOIN                              884          159312          7169040
          TABLE ACCESS FULL          DMS_ATTY          137          96255          1540080
          TABLE ACCESS FULL          DMS_ATTYPHN          309          158612          4599748
         TABLE ACCESS FULL          DMS_PHN               3043          3284444          62404436
        TABLE ACCESS FULL          PHONE               12735          1          33
As stated, the big question is: can we avoid the two full table scans on DMS_ATTY, DMS_ATTYPHN, and DMS_PHN? That was my reason for creating the simple example based on this larger, complex query.
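One way to touch those tables only once is to do the join a single time and cross join a two-row generator, blanking the case columns on the second copy of each row; a single DISTINCT then collapses duplicates the same way the UNION's sort-unique did. A hedged sketch on the simplified test case above (not tested on 10.2; in the real query the CASE wrapping would also cover cnv_case_id and cnv_site_id, and the two NOT EXISTS probes would fold into one against the NULLed-out key):

```sql
SELECT DISTINCT ap.atty_id,
       ap.nme,
       p.phn_no,
       -- the second copy of each row gets a NULL docket: the "generic" record
       CASE WHEN gen.dup = 1 THEN c.docket END AS docket
  FROM atty ap
       INNER JOIN phn p              ON ap.atty_id = p.atty_id
       INNER JOIN atty_case_xref acx ON ap.atty_id = acx.atty_id
       LEFT OUTER JOIN case c        ON acx.case_id = c.case_id
       CROSS JOIN (SELECT 1 AS dup FROM dual
                   UNION ALL
                   SELECT 2 AS dup FROM dual) gen;
```

For an attorney whose only xref row has no matching case (attorney 1 here), both copies come out with a NULL docket and DISTINCT keeps just one, which matches the UNION's behavior in the original query.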

  • Help with Event Structure!.

Hello,
I am trying to configure an event structure so that I can change the range of the sampled graph to analyze it.
I have 3 graphs. The upper one is the signal input. In the middle is the signal fitted with some calculations, and at the bottom is the graph where I fix the sample width to analyze with other values.
I put in an event structure so that I read from the file and later update the data array to force execution of the "array 2" case.
This is only an example, but I would like to change the "start" and "length" values of the sample signal, and I can't get the data to show again. I tried different options, like putting another event source inside the "array 2" event case.
I could join cases 1 and 2, but the problem is with the "start" and "length" controls; I can't get the display to update when I change the values.
I created an example from part of my program to simulate what I need.
Thanks in advance, Fred.
    Attachments:
    example-eventStructure.vi ‏56 KB
    006_RR.txt ‏7 KB

    altenbach wrote:
    billko wrote:
    crossrulz wrote:
    You need to wire up the value of Array 2 to the output tunnel of the event case.  Currently, that output tunnel is set to "Use Default if Unwired".  So since you didn't wire up that tunnel, the value going into the shift register will become an empty array.
    That's one thing I am totally paranoid about.  I hate that the tunnels in an event structure are "Use Default if Unwired" by default, yet they don't show that little dot that suggests that it is.
    Of course they do! What makes you say they don't?
    I actually like the current behavior, especially for the boolean going to the stop button. I would not want to wire it in all the other cases where the loop should not stop.
    Maybe the behavior could be a bit fine-tuned. Automatic "Use default if unwired" should only apply to scalars. Tunnels for arrays and clusters, etc. should require a manual "use default if unwired" setting.
    Ha - it's because of my own paranoia about the "Default if Unwired" that led me to believe this.  I guess I always have something wired to every single case so I haven't seen it in ages.  I created one just now and specifically looked for the dot, and when I added a second case, there it was.  Of course if I wire something to that second case, it disappears.
    Bill
    (Mid-Level minion.)
    My support system ensures that I don't look totally incompetent.
    Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.

  • Poor performance reading MBEWH table

    Hi,
    I'm getting serious performance problems when reading the MBEWH table directly.
    I did the following tests:
      GET RUN TIME FIELD t1.
      SELECT mara~matnr
        FROM mara
        INNER JOIN mbewh ON mbewh~matnr = mara~matnr
        INTO TABLE gt_mbewh
        WHERE mbewh~lfgja = '2009'.
      GET RUN TIME FIELD t2.
      GET RUN TIME FIELD t3.
      SELECT mbewh~matnr
        FROM mbewh
        INTO TABLE gt_mbewh
        WHERE mbewh~lfgja = '2009'.
      GET RUN TIME FIELD t4.
    t2 = t2 - t1.
    t4 = t4 - t3.
    write: 'With join: ', t2.
    write /.
    write: 'Without join: ', t4.
    And as result I got:
    With join:      27.166
    Without join:  103970.297
    All MATNR values in MBEWH exist in MARA.
    MBEWH has 71.745 records and MARA has 705 records.
    I created an index on the LFGJA field in MBEWH.
    Why do I get better performance using the inner join?
    In the production client, MBEW has 68 million records, so any selection takes too long.
    Thanks in advance,
    Frisoni

    Guilherme, Hermann, Siegfried,
    I have just seen this thread and read it from top to bottom, and I would say now is a good time to make a summary.
    This is what I got from Guilherme's comments:
    1) MBEWH has 71.745 records
    2) There are two hidden clients on the same server with 50 million records.
    3) Count Distinct mandt = 6
    4) In production client, MBEW has 68 million records
    First measurement
    With join               : 27.166
    Without join            :  103970.297
    Second measurement
    With join               : 96.217
    Without join            : 93.781            << now with hint
    The original question was to understand why using the JOIN made the query much faster.
    So the conclusions are:
    1) Execution times are now much better (comparing only the no-join case, which is the one we are working on), and the original "mystery" is gone
    2) In this client, MANDT is actually much more selective than the optimizer thinks it is (and it's because of this uneven distribution, as Hermann mentioned, that forcing the index made such a difference)
    3) The bad news is that this solution worked because of the special case of your development system, and will probably not help in the production system
    4) I suppose the index that Hermann suggested is the best possible thing to do (the table won't be read, assuming you really only want MATNR from MBEWH, and that it wasn't a simplification for illustration purposes); anyway, no one can really expect that getting all entries from MBEWH for a given year will be fast...
    Rui Dantas
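    The index idea in point 4) can be demonstrated outside SAP. Below is a small sketch using SQLite (not the SAP database, and with made-up rows) showing that an index whose columns cover both the filter (LFGJA) and the selected column (MATNR) lets the query be answered from the index alone, without touching the table:

    ```python
    # Sketch, assuming a composite "covering" index on (lfgja, matnr):
    # the query plan can then satisfy the SELECT from the index only.
    # Table and column names mimic the thread; data is invented.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE mbewh (matnr TEXT, lfgja TEXT, bwkey TEXT, other TEXT)")
    conn.executemany(
        "INSERT INTO mbewh VALUES (?,?,?,?)",
        [(f"MAT{i:04d}", str(2008 + i % 3), "0001", "x") for i in range(300)],
    )

    # Covering index: filter column first, selected column second.
    conn.execute("CREATE INDEX mbewh_cov ON mbewh (lfgja, matnr)")

    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT matnr FROM mbewh WHERE lfgja = '2009'"
    ).fetchall()
    detail = " ".join(row[-1] for row in plan)
    print(detail)  # the plan should mention a COVERING INDEX scan
    ```

    The same principle is what makes the suggested MBEWH index attractive: no table rows need to be fetched at all, only index pages.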

  • EIM tools on HANA

    Hi All,
    With SP9 we have the EIM bringing in SDI and SDQ with many BODS like features for ETL.
    Can someone please shed some light on whether these are capable of replacing the logic we apply in our regular views (attribute, analytic, and calculation views)?
    Are these flowgraphs the next best practice for data modeling in HANA?
    If not yet, what are their limitations in terms of development and transport?
    Please let me know.
    Thanks,
    Shyam

    Hi Shyam,
    To my knowledge, we can apply most of the transformations using flowgraphs: filters, joins, case, merge, validation, and data-validation transforms too.
    However, you are going to persist the result data when you execute the flowgraphs in HANA. That's not the case with the actual modeling objects, which apply the transformations at run time.
    Regards,
    Venkat N.

  • HOW to use new Oracle 9i Features in Pro*C

    Hi All,
    I am a Pro*C developer. I tried using Oracle 9i features like RIGHT OUTER JOIN, FULL OUTER JOIN, CASE, etc. inside a Pro*C program, but I am getting an error while compiling it.
    My sample code:
    EXEC SQL DECLARE CARD_CUR CURSOR FOR
    SELECT A.EMP_NBR
    FROM EMP A FULL OUTER JOIN DEPT B
    ON A.DEPT_NO = B.DEPT_NO;
    EXEC SQL OPEN CARD_CUR;
    Pre Compilier Error:
    Pro*C/C++: Release 9.2.0.5.0 - Production on Wed Dec 14 02:34:35 2005
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    System default option values taken from: /oracle/oracle/9.2.0.5//precomp/admin/pcscfg.cfg
    Syntax error at line 476, column 50, file br_afs_cardxrf.pc:
    Error at line 476, column 50 in file br_afs_cardxrf.pc
    FROM EMP A FULL OUTER JOIN
    ......................................1
    PCC-S-02201, Encountered the symbol "FULL" when expecting one of the following:
    ; , for, union, connect, group, having, intersect, minus,
    order, start, where, with,
    Even when I use a CASE statement, I get a similar error.
    Can anyone guide me on this?
    Regards,
    ghu

    You will probably need to use dynamic SQL.
    See the discussion at:
    http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:53140567326263
    (while the question deals with SQLX, the answer is the same - Pro*C can't parse all SQL, but dynamic SQL is not parsed by Pro*C)
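    A minimal sketch of that approach (Oracle dynamic SQL "Method 3"), reusing the table and cursor names from the question: the statement is passed as a string, so only the server, not the Pro*C precompiler, ever parses the ANSI join syntax. Host-variable declarations and error handling are omitted, and this fragment is not compiled against a real schema.

    ```c
    /* Sketch only: dynamic SQL keeps the ANSI join out of the Pro*C parser. */
    char stmt[] = "SELECT A.EMP_NBR, B.DEPT_NO "
                  "FROM EMP A FULL OUTER JOIN DEPT B "
                  "ON A.DEPT_NO = B.DEPT_NO";

    EXEC SQL PREPARE S FROM :stmt;            /* server parses the text      */
    EXEC SQL DECLARE CARD_CUR CURSOR FOR S;   /* cursor over prepared stmt   */
    EXEC SQL OPEN CARD_CUR;
    /* ...then FETCH CARD_CUR INTO :host_vars in a loop, as usual */
    ```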

  • Using Case and Joins in update statement

    Hi all,
    I need to update one column in my table, but I need to use CASE and joins. I wrote the query below and it is not working.
    I am getting an error message saying the SQL command is not ended properly.
    I am not that good at SQL, so please help.
    update t1 a
    set a.name2=
    (case
    when b.msg2 in ('bingo') then '1'
    when b.msg2 in ('andrew') then '2'
    when b.msg2 in ('sam') then '3'
    else '4'
    end )
    from t1 a left outer join t2 b
    on a.name1 = b.msg1 ;
    Waiting for help on this...!
    Thanks in Advance... :)

    Another approach is to update an inline view defining the join:
    update
    ( select a.name2, b.msg2
      from   t1 a
      join   t2 b on b.msg1 = a.name1 ) q
    set q.name2 =
        case
          when q.msg2 = 'bingo' then '1'
          when q.msg2 = 'andrew' then '2'
          when q.msg2 = 'sam' then '3'
          else '4'
        end;
    which could also be rewritten as
    update
    ( select a.name2
           , case b.msg2
                when 'bingo'  then '1'
                when 'andrew' then '2'
                when 'sam'    then '3'
                else '4'
             end as new_name
      from   t1 a
      join   t2 b on b.msg1 = a.name1 ) q
    set name2 = new_name;
    The restriction is that the lookup column (in this case, t2.msg1) has to be declared unique, via either a primary or unique key or a unique index.
    (You don't strictly need to give the view an alias, but I used 'q' in case you tried 'a' or 'b' and wondered why they weren't recognised outside the view.)
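    When the lookup column cannot be declared unique, a correlated-subquery UPDATE is a common alternative that Oracle also accepts. A small sketch below, run against SQLite for convenience, with invented sample rows; the WHERE EXISTS clause keeps unmatched rows untouched instead of setting them to NULL:

    ```python
    # Sketch: the same CASE mapping done with a correlated-subquery UPDATE,
    # which doesn't require t2.msg1 to be unique. Sample data is made up.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE t1 (name1 TEXT, name2 TEXT);
    CREATE TABLE t2 (msg1 TEXT, msg2 TEXT);
    INSERT INTO t1 VALUES ('a', NULL), ('b', NULL), ('c', NULL);
    INSERT INTO t2 VALUES ('a', 'bingo'), ('b', 'sam');
    """)

    conn.execute("""
    UPDATE t1
    SET name2 = (SELECT CASE b.msg2
                          WHEN 'bingo'  THEN '1'
                          WHEN 'andrew' THEN '2'
                          WHEN 'sam'    THEN '3'
                          ELSE '4'
                        END
                 FROM t2 b
                 WHERE b.msg1 = t1.name1)
    WHERE EXISTS (SELECT 1 FROM t2 b WHERE b.msg1 = t1.name1)
    """)

    print(conn.execute("SELECT name1, name2 FROM t1 ORDER BY name1").fetchall())
    # -> [('a', '1'), ('b', '3'), ('c', None)]
    ```

    Note that without the WHERE EXISTS filter, rows with no match in t2 would be overwritten with NULL (the subquery returns no row), which is usually not what you want.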
