Need Help with Details of A/R Query "0FIAR_C03_Q0005"

Hello Gurus:
Can anyone please explain the logic behind the calculation of the buckets in the Business Content query
"0FIAR_C03_Q0005"? I want to add a "Number of Days" column to the report and am not sure of the logic I can
use to calculate this. Can someone please suggest an approach?
Thanks.....PBSW

I believe that they take 0DEB_CRE_LC (Debit/Credit Amount) and restrict by value range for 0NETDUEDATE. They then take variable 0DAT and offset accordingly for each of the time buckets.
e.g. 0NETDUEDATE restricted to the value range 0DAT-1 to 0DAT-30 for one of the buckets.
Also, crucially, you must set 0FI_DOCSTAT to characteristics restriction and filter to include only "O".
Edited by: Khaled McGonnell on Jan 29, 2010 2:51 PM
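One common way (not stated in this thread, and the technical names below are only illustrative) to get a "Number of Days" (days in arrears) column is a BEx formula built from two formula variables with replacement path, e.g.
ZF_DAYS_ARREARS = ZV_KEYDATE - ZV_NETDUEDATE
where ZV_KEYDATE is replaced from the key date (0DAT) and ZV_NETDUEDATE from 0NETDUEDATE, both processed by replacement path with replacement dimension "Date"; subtracting two such date variables in a formula returns the difference in days.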

Similar Messages

  • Need help with detail by hour in SQL query

    Hello all,
    I am using the following query to track the usage on a circuit and I have the detail by day, but now they are asking for hourly usage from 0900 to 1200 on these days. Any ideas how I can extend this to include the hour in my detail?
    select 'Report Name Here'  as Circuit,'Usage'  as Measurement,
     MONTH(interfacetraffic.datetime) as month, year(interfacetraffic.datetime) as year, day(interfacetraffic.datetime) as day,
     '' as Mo_yr,
     interfaces.inbandwidth as bandwidth,
      '' as adjustedbandwidth,
     max (interfacetraffic.in_maxbps )  as max_in,
    max (interfacetraffic.out_maxbps)  as max_out,
    avg(interfacetraffic.in_maxbps )  as avg_in,
    avg(interfacetraffic.out_maxbps)  as avg_out,
    max(case (interfacetraffic.in_maxbps ) when 0 then 0  else(interfacetraffic.in_maxbps  )/interfaces.inbandwidth *100 end) as 'max_in_%',
    max(case ( interfacetraffic.out_maxbps) when 0 then 0  else( interfacetraffic.out_maxbps )/interfaces.outbandwidth *100 end) as 'max_out_%',
    avg(case (interfacetraffic.in_maxbps ) when 0 then 0  else(interfacetraffic.in_maxbps  )/interfaces.inbandwidth *100 end) as 'avg_in_%',
    avg(case ( interfacetraffic.out_maxbps) when 0 then 0  else( interfacetraffic.out_maxbps )/interfaces.outbandwidth *100 end) as 'avg_out_%',
    nodes.location as location,nodes.sysname as sysname,nodes.timezone,interfaces.interfaceid as interfaceid,nodes.nodeid as nodeid,interfaces.fullname as fullname
     FROM 
    (Nodes INNER JOIN Interfaces ON (Nodes.NodeID = Interfaces.NodeID))  
    INNER JOIN InterfaceTraffic ON (Interfaces.InterfaceID = InterfaceTraffic.InterfaceID AND InterfaceTraffic.NodeID = Nodes.NodeID)
    where InterfaceTraffic.DateTime > GETDATE() -180
    and interfaces.InterfaceID = '31072'
    and month(interfacetraffic.datetime) = 1
    and year(interfacetraffic.datetime) = 2015
    --and  DATEPART(hh,interfacetraffic.datetime)  in ('09','10','11','12','13','14','15','16','17','18','19')
    group by interfaces.inbandwidth, year(interfacetraffic.datetime), 
    MONTH(interfacetraffic.datetime) ,nodes.location ,nodes.sysname,interfaces.inbandwidth,nodes.timezone ,interfaces.interfaceid,nodes.nodeid,interfaces.fullname, day(interfacetraffic.datetime) 
    --DAY(InterfaceTraffic.DateTime)

    select 'Report name here' as Circuit, 'Usage' as Measurement,
    month(...) as month, year(...) as year, day(...) as day, datepart(hour, ...) as hour,
    ...
    from ...
    group by month(...), year(...), day(...), datepart(hour, ...), ...
    Note - have you considered just having a single column for the date as opposed to 3 separate columns? And for efficiency, change your where clause from
    where InterfaceTraffic.DateTime > GETDATE() -180
    and interfaces.InterfaceID = '31072'
    and month(interfacetraffic.datetime) = 1
    and year(interfacetraffic.datetime) = 2015
    to
    where interfaces.InterfaceID = '31072'
    and interfacetraffic.datetime >= '20150101' and interfacetraffic.datetime < '20150201'
    and datepart(hour, ...) between 9 and 19
    That first part involving "getdate() - 180" does nothing useful when you only want values from January of this year.
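    Putting the two suggestions together, a rough sketch (not from the thread; it reuses the table and column names from the original post and the 0900-1200 window asked about) might look like this:
    select 'Report Name Here' as Circuit, 'Usage' as Measurement,
           year(it.datetime)  as [year],
           month(it.datetime) as [month],
           day(it.datetime)   as [day],
           datepart(hour, it.datetime) as [hour],
           i.inbandwidth      as bandwidth,
           max(it.in_maxbps)  as max_in,
           max(it.out_maxbps) as max_out,
           avg(it.in_maxbps)  as avg_in,
           avg(it.out_maxbps) as avg_out
    from Nodes n
    inner join Interfaces i on n.NodeID = i.NodeID
    inner join InterfaceTraffic it on i.InterfaceID = it.InterfaceID and it.NodeID = n.NodeID
    where i.InterfaceID = '31072'
      and it.datetime >= '20150101' and it.datetime < '20150201'
      and datepart(hour, it.datetime) between 9 and 12
    group by year(it.datetime), month(it.datetime), day(it.datetime),
             datepart(hour, it.datetime), i.inbandwidth
    order by [year], [month], [day], [hour];
    The max_in_% / avg_out_% expressions from the original query can be added back unchanged, since they are aggregates and do not affect the GROUP BY.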

  • Need Help with instr/Regexp for the query

    Hi Oracle Folks
    I am using Oracle
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    I have some student responses; the valid values are +/-/O (the letter O)/P, with spaces allowed at the end of the string only, not in the middle.
    As per my requirement, record numbers 2, 3 and 4 should be listed by the query, but I am getting only one (record 3).
    Can we use REGEXP?
    Please help.
    Thanks in advance.
    Rajesh
    with x as (
    SELECT '+-+-POPPPPPP   ' STUDENT_RESPONSE, 1 record_number FROM DUAL union all
    SELECT '+--AOPPPPPP++' STUDENT_RESPONSE, 2 record_number FROM DUAL union all
    SELECT '+-+-  OPPPPPP--' STUDENT_RESPONSE, 3 record_number FROM DUAL union all
    SELECT '+-+-9OPPPPPP   ' STUDENT_RESPONSE, 4 record_number FROM DUAL )
    (SELECT RECORD_NUMBER,
    TRIM(STUDENT_RESPONSE) FROM X
    WHERE
    ((INSTR (UPPER(TRIM(STUDENT_RESPONSE)),'-') =0)
    OR (INSTR (UPPER(TRIM(STUDENT_RESPONSE)),'+') =0)
    OR (INSTR (UPPER(TRIM(STUDENT_RESPONSE)),'O') =0)
    OR (INSTR (UPPER(TRIM(STUDENT_RESPONSE)),'P') =0)
    OR (INSTR (UPPER(TRIM(STUDENT_RESPONSE)),' ') !=0)
    )

    Hi, Rajesh,
    Rb2000rb65 wrote:
    Hi Oracle Folks
    I am using Oracle
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Thanks for posting this (and the WITH clause for the sample data). That's very helpful.
    I have some student responses and the valid values are +/-/O(alphabet)/P and spaces at the end of the string only, not in the middle.
    Are you combining the responses to several questions in one VARCHAR2 column? It might be better to have a separate row for each question.
    As per my requirement the record number 2, 3, 4 should be listed from the query but I am getting only one (record 3).
    What exactly is your requirement? Are you trying to find the rows where student_response contains any of the forbidden characters, or where it contains a space anywhere but at the end of the string?
    Can we use REG_EXP
    Yes, but it's easy enough, and probably more efficient, to not use regular expressions in this case:
    Here's one way:
    SELECT     record_number
    ,     student_response
    FROM     x
    WHERE     TRANSLATE ( UPPER ( RTRIM (student_response, ' '))
                , 'X+-OP'
                , 'X'
                )     IS NOT NULL
    ;
    That is, once you remove trailing spaces and all occurrences of '+', '-', 'O' or 'P', then only the forbidden characters are left, and you want to select the row if there are any of those.
    If you really, really want to use a regular expression:
    SELECT     record_number
    ,     student_response
    FROM     x
    WHERE     REGEXP_LIKE ( RTRIM (student_response)
                  , '[^-+OP]'          -- or '[^+OP-]', but not '[^+-OP]'.  Discuss amongst yourselves
                  , 'i'
                  )
    ;
    but, I repeat, this will probably be slower than the first solution, using TRANSLATE.
    Edited by: Frank Kulash on Oct 17, 2011 1:05 PM
    Edited by: Frank Kulash on Oct 17, 2011 1:41 PM
    The following is slightly simpler than TRANSLATE:
    SELECT     record_number
    ,     student_response
    FROM     x
    WHERE     RTRIM ( UPPER ( RTRIM (student_response, ' '))
               , '+-OP'
               )          IS NOT NULL
    ;

  • Need Help With ACS LDAP setup to Query AD

    I have 2 Win 2003 ADs; one of them is configured and working under the Windows Database (using remote agent) configuration. I am trying to set up the second AD with the Generic LDAP setup. I want to know what exactly I should use in the fields UserObjectType and Class, and GroupObjectType and Class, for Windows 2003 AD. All Cisco documents give examples of Netscape LDAP syntax. I was told by our server admin what to put under Admin DN: CN=myid,OU=mygroup,OU=myorg,DC=mydomain,DC=com
    I have both user & group directory subtree fields filled with DC=mydomain,DC=com.
    I am using the ip address for Primary LDAP server, and port is 389, LDAP version 3 is checked.
    Is any of these DC, OU, etc. case sensitive?
    With all entries that I have tried, when I go to map a group, I am getting error "LDAP server NOT reachable. Please check the configuration". My ACS can ping the domain controller's IP address fine.
    Please help. Thank you in advance,
    Murali

    Murali,
    These references may help...
    http://download.microsoft.com/download/3/d/3/3d32b0cd-581c-4574-8a27-67e89c206a54/uldap.doc
    http://www.microsoft.com/technet/archive/winntas/plan/dda/ddach02.mspx?mfr=true
    http://technet.microsoft.com/en-us/library/aa996205.aspx
    Regards,
    Richard
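    As a starting point (not from this thread, so please verify against your own directory), the values usually quoted for Windows 2003 AD in a generic LDAP configuration are:
    UserObjectType:   sAMAccountName
    UserObjectClass:  user
    GroupObjectType:  cn
    GroupObjectClass: group
    The DN components (CN=, OU=, DC=) and attribute names are not case sensitive in AD. The "LDAP server NOT reachable" message typically points at connectivity or bind problems (server address, port 389, or the Admin DN/password) rather than at the object type/class fields.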

  • Need help with an XML-based SQL query

    Hello, everyone:
    I get 'no rows selected' from the following queries:
    SELECT extractValue((p.entity_types_xml), '/entities/test[a=fagioli]')
    FROM entity_types p
    WHERE
    existsNode(p.entity_types_xml,
    '/entities/test/text()') = 1;
    SELECT extractValue((p.entity_types_xml), '/entities/test/text()')
    FROM entity_types p
    WHERE
    existsNode(p.entity_types_xml, '/entities/test/text()') = 1;
    SELECT
    extractValue(p.entity_types_xml,'/entities/ARFinance-elem/entity-type')
    FROM entity_types p
    WHERE
    existsNode(p.entity_types_xml,'/entities/ARFinance-elem/ARFinanceTBD/text()') = 1;
    where the entity_types table looks like this:
    SQL> desc entity_Types;
    Name Null? Type
    ENTITY_TYPES_ID NOT NULL VARCHAR2(36)
    ENTITY_TYPES_XML NOT NULL XMLTYPE
    CLIENT_ID VARCHAR2(30)
    and the XML fragment looks like this:
    +++++++++++++++++++++++++++++++++++++
    <?xml version="1.0" encoding="UTF-8"?>
    <entities
    xmlns="http://www.thirdpillar.com/xml/namespace/aom/entity"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.thirdpillar.com/xml/namespace/aom/entity ../generated/denormalized-schema.0.xsd">
    <test a="fagioli">pasta</test>
    <ARFinance-elem>
    <uid b="pasta">123</uid>
    <entity-type><![CDATA[ARFinance]]></entity-type>
    <ARFinanceTBD>tobedetermined</ARFinanceTBD>
    </ARFinance-elem>
    ++++++++++++++++++++++++++++++++++++++++++++
    As you can see, there is data in /entities/test[@a] as well as /entities/test/text(),
    so why aren't any of the queries returning the preceding XML fragment? Suggestions welcome and appreciated....
    Regards,
    Oswald
    ps: I'm sure I've overlooked something simple.

    Oswald,
    I think you need to add the namespace as a third parameter, e.g.
    select extractValue(....., 'xmlns="http://www.thirdpillar.com/xml/namespace/aom/entity"')
    from entity_types p
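    For example (a rough, untested sketch against the entity_types table from the post), both existsNode and extractValue accept the namespace mapping as a third argument; without it the XPath cannot match anything, because the document declares a default namespace:
    SELECT extractValue(p.entity_types_xml,
                        '/entities/test/text()',
                        'xmlns="http://www.thirdpillar.com/xml/namespace/aom/entity"')
    FROM entity_types p
    WHERE existsNode(p.entity_types_xml,
                     '/entities/test/text()',
                     'xmlns="http://www.thirdpillar.com/xml/namespace/aom/entity"') = 1;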

  • Need help with conditional query

    Guys, this is just an extension of this post that Frank was helping me with. I'm reposting because my requirements have changed slightly and I'm having a hell of a time trying to modify the query.
    here is the previous post.
    need help with query that can look data back please help.
    CREATE TABLE "FGL"
        (
        "FGL_GRNT_CODE" VARCHAR2(60),
        "FGL_FUND_CODE" VARCHAR2(60),
        "FGL_ACCT_CODE" VARCHAR2(60),
        "FGL_ORGN_CODE" VARCHAR2(60),
        "FGL_PROG_CODE" VARCHAR2(60),
        "FGL_GRNT_YEAR" VARCHAR2(60),
        "FGL_PERIOD"    VARCHAR2(60),
        "FGL_BUDGET"    VARCHAR2(60)
        );
    data
    Insert into FGL (FGL_GRNT_CODE,FGL_FUND_CODE,FGL_ACCT_CODE,FGL_ORGN_CODE,FGL_PROG_CODE,FGL_GRNT_YEAR,FGL_PERIOD,FGL_BUDGET) values ('240055','240055','7600','4730','02','11','00','400');
    Insert into FGL (FGL_GRNT_CODE,FGL_FUND_CODE,FGL_ACCT_CODE,FGL_ORGN_CODE,FGL_PROG_CODE,FGL_GRNT_YEAR,FGL_PERIOD,FGL_BUDGET) values ('240055','240055','7240','4730','02','10','1','100');
    Insert into FGL (FGL_GRNT_CODE,FGL_FUND_CODE,FGL_ACCT_CODE,FGL_ORGN_CODE,FGL_PROG_CODE,FGL_GRNT_YEAR,FGL_PERIOD,FGL_BUDGET) values ('240055','240055','7240','4730','02','10','1','0');
    Insert into FGL (FGL_GRNT_CODE,FGL_FUND_CODE,FGL_ACCT_CODE,FGL_ORGN_CODE,FGL_PROG_CODE,FGL_GRNT_YEAR,FGL_PERIOD,FGL_BUDGET) values ('240055','240055','7600','4730','02','11','1','400');
    Insert into FGL (FGL_GRNT_CODE,FGL_FUND_CODE,FGL_ACCT_CODE,FGL_ORGN_CODE,FGL_PROG_CODE,FGL_GRNT_YEAR,FGL_PERIOD,FGL_BUDGET) values ('360055','360055','7200','4730','02','10','1','400');
    Insert into FGL (FGL_GRNT_CODE,FGL_FUND_CODE,FGL_ACCT_CODE,FGL_ORGN_CODE,FGL_PROG_CODE,FGL_GRNT_YEAR,FGL_PERIOD,FGL_BUDGET) values ('360055','360055','7600','4730','02','10','1','400');
    Insert into FGL (FGL_GRNT_CODE,FGL_FUND_CODE,FGL_ACCT_CODE,FGL_ORGN_CODE,FGL_PROG_CODE,FGL_GRNT_YEAR,FGL_PERIOD,FGL_BUDGET) values ('240055','240055','7240','4730','02','10','14','200');
    Insert into FGL (FGL_GRNT_CODE,FGL_FUND_CODE,FGL_ACCT_CODE,FGL_ORGN_CODE,FGL_PROG_CODE,FGL_GRNT_YEAR,FGL_PERIOD,FGL_BUDGET) values ('240055','240055','7600','4730','02','10','14','100');
    Insert into FGL (FGL_GRNT_CODE,FGL_FUND_CODE,FGL_ACCT_CODE,FGL_ORGN_CODE,FGL_PROG_CODE,FGL_GRNT_YEAR,FGL_PERIOD,FGL_BUDGET) values ('240055','240055','7240','4730','02','10','14','200');
    Insert into FGL (FGL_GRNT_CODE,FGL_FUND_CODE,FGL_ACCT_CODE,FGL_ORGN_CODE,FGL_PROG_CODE,FGL_GRNT_YEAR,FGL_PERIOD,FGL_BUDGET) values ('240055','240055','7240','4730','02','10','2','100');
    Insert into FGL (FGL_GRNT_CODE,FGL_FUND_CODE,FGL_ACCT_CODE,FGL_ORGN_CODE,FGL_PROG_CODE,FGL_GRNT_YEAR,FGL_PERIOD,FGL_BUDGET) values ('240055','240055','7240','4730','02','11','2','600');
    I need to find the greatest grant year for the grant by a period parameter.
    Once I find the greatest year, I need to check the value of period 14 for that grant for the previous year and add it to the budget amount for that grant. However, if there is an entry in the greatest year for period 00, then I need to ignore period 14 of the previous year and do this calculation: current period + (current period - greatest year 00).
    Hope that makes sense. So, in other words, with the new data above, if I was querying period 2 of grant year 11, I would end up with $800,
    because the greatest year is 11 and it contains a period 0 with an amount of $400, so my total should be:
    period 2 amount $600
    period 2 amount of $600 - period 0 amount of $400 = $200
    600 + 200 = $800
    If I query period 1 of grant 360055, I would just end up with 800 of grant year 10.
    I have tried to modify the query you supplied to me with no luck. I have tried for several days but, I'm embarrassed to say, I just can't get it to do what I'm trying to do.
    Can you please help me out?
    Here is the query supplied by Frank Kulash, who graciously put this together for me.
    WITH     got_greatest_year     AS
    (
         SELECT     fgl.*     -- or whatever columns are needed
         ,     MAX ( CASE
                         WHEN  fgl_period = :given_period
                         THEN  fgl_grnt_year
                     END
                   ) OVER ()     AS greatest_year
         FROM     fgl
    )
    SELECT     SUM (fgl_budget)     AS total_budget     -- or SELECT *
    FROM     got_greatest_year
    WHERE     (     fgl_grnt_year     = greatest_year
              AND   fgl_period        = :given_period
              )
    OR        (     fgl_grnt_year     = greatest_year - 1
              AND   fgl_period        = 14
              )
    ;
    Miguel

    Hi, Miguel,
    Are you saying that, when the greatest year that has :given_period also has period='00' (or '0', or whatever you want to use), then you want to double the budget from the given_period (as well as subtract the budget from the '00', and not count the previous year's '14')? If so, add another condition to the CASE expression which decides what you're SUMming:
    WITH     got_greatest_year     AS
    (
         SELECT    TO_NUMBER (fgl_grnt_year)     AS grnt_year
         ,         fgl_period
         ,         TO_NUMBER (fgl_budget)        AS budget
         ,         MAX ( CASE
                             WHEN  fgl_period = :given_period
                             THEN  TO_NUMBER (fgl_grnt_year)
                         END
                       ) OVER ()     AS greatest_year
         FROM      fgl
    )
    ,     got_cnt_00     AS
    (
         SELECT    grnt_year
         ,         fgl_period
         ,         budget
         ,         greatest_year
         ,         COUNT ( CASE
                               WHEN  grnt_year  = greatest_year
                               AND   fgl_period = '00'
                               THEN  1
                           END
                         ) OVER ()     AS cnt_00
         FROM      got_greatest_year
    )
    SELECT    SUM ( CASE
                        WHEN  grnt_year  = greatest_year                     -- New
                        AND   fgl_period = :given_period                     -- New
                        AND   cnt_00     > 0              THEN  budget * 2   -- New
                        WHEN  grnt_year  = greatest_year
                        AND   fgl_period = :given_period  THEN  budget
                        WHEN  grnt_year  = greatest_year
                        AND   fgl_period = '00'           THEN -budget
                        WHEN  grnt_year  = greatest_year - 1
                        AND   fgl_period = '14'
                        AND   cnt_00     = 0              THEN  budget
                    END
                  )          AS total_budget
    FROM      got_cnt_00
    ;
    You'll notice this is the same as the previous query I posted, except for the 3 lines marked "New".

  • Need help with a query

    Hi,
    I need help with the following query. I want the balance (bal) with the latest exchange rate available.
    Sample table & data
    with
    FX_RATE as (
    select 11 as id_date, 1 as id_curr, 47 as EXCH_rate from dual union
    select 12, 1, 48 from dual union
    select 13, 2, 54 from dual union
    select 14, 2, 55 from dual union
    select 15, 3, 56 from dual union
    select 15, 2, 49 from dual),
    TBL_NM as (
    select 13 as p_date, 2 as p_curr, 200 as bal from dual union
    select 14, 2, 200 from dual union
    select 15, 2, 200 from dual union
    select 16, 2, 200 from dual union
    select 17, 2, 200 from dual union
    select 11, 5, 100 from dual)
    select p_date, p_curr, bal * nvl(exch_rate,1) from TBL_NM T LEFT OUTER JOIN FX_RATE F1 on (id_curr = p_curr and F1.id_date = T.p_Date)
    In the above query, for p_date 16 & 17 and p_curr 2, it returns just the balance multiplied by the "default" exchange rate of 1. But I want the balance to use the latest available exchange rate, which is the one for id_date 15.
    I tried this but it returns the error ORA-01799: a column may not be outer joined to a subquery:
    with
    FX_RATE as (
    select 11 as id_date, 1 as id_curr, 47 as EXCH_rate from dual union
    select 12, 1, 48 from dual union
    select 13, 2, 54 from dual union
    select 14, 2, 55 from dual union
    select 15, 3, 56 from dual union
    select 15, 2, 49 from dual),
    TBL_NM as (
    select 13 as p_date, 2 as p_curr, 200 as bal from dual union
    select 14, 2, 200 from dual union
    select 15, 2, 200 from dual union
    select 16, 2, 200 from dual union
    select 17, 2, 200 from dual union
    select 11, 5, 100 from dual)
    select p_date, p_curr, bal * nvl(exch_rate,1) from TBL_NM T LEFT OUTER JOIN FX_RATE F1
    on (id_curr = p_curr and F1.id_date = (select max(F2.id_date) from FX_RATE F2 where F2.id_curr = T.p_curr and F2.id_Date <=  T.p_date))
    Please advise on how I can achieve this.

    The entire query would be like this; I have to incorporate it in here.
    CREATE MATERIALIZED VIEW MV_DUMMY
    BUILD IMMEDIATE
    REFRESH FORCE ON DEMAND
    AS
        SELECT T.ID_TSACTION_RELEASED                                                                                
        BAL.ID_CONTRACT_BALANCE                                                                                                                                                                                                                                                                           AS ID_CONTRACT_BALANCE,   
        T.N_REFERENCE_NUMBER                                                                                          
        T.INSTRUMENT_N_REFERENCE                                                                                      
        T.ITEM_NUMBER                                                                                                   
        T.EXTERNAL_SYSTEM_ID                                                                                            
        T.SEQUENCE_NUMBER                                                                                               
        T.ID_RELEASED_DATE                                                                                              
       ROUND(BAL.LC_AVAILABLE_BALANCE * NVL(FX1.EXCHANGE_RATE,1) / NVL(FX2.EXCHANGE_RATE,1) , 4)                           
        BAL.LIABILITY_BALANCE                                                                                              
        BAL.LIABILITY_BALANCE * NVL(FX3.EXCHANGE_RATE,1)                                                                   
        BAL.LIABILITY_CHANGE_USD                                                                                           
        BAL.MEMO_LIABILITY_BALANCE                                                                                         
        BAL.MEMO_LIABILITY_BALANCE * NVL(FX3.EXCHANGE_RATE,1)                                                              
        BAL.MEMO_LIABILITY_CHANGE_USD                                                                                      
        BAL.ORIGINAL_FACE_AMOUNT                                                                                           
        decode(T.TENOR_CODE,'Time','T','Sight','S','Split Sight Time','SST','Split Multiple Time','SMT',T.TENOR_CODE)
        CASE
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ILC','IB','AIR','STG','NLC','NP')
          THEN T.ID_LIABILITY_CIF
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ELC','EB','XLC','XP','EC','LN','SCF-AR')
          THEN T.Id_Beneficiary
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('TLC','IC','OA','SCF-AP')
          THEN T.ID_Applicant
        END PRIMARY_CUSTOMER_ID,
        CASE
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ILC','IB','AIR','STG','NLC','NP')
          THEN plbcif.EXTERNAL_SYSTEM_ID
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ELC','EB','XLC','XP','EC','LN','SCF-AR')
          THEN PBCIF.EXTERNAL_SYSTEM_ID
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('TLC','IC','OA','SCF-AP')
          THEN pappcif.EXTERNAL_SYSTEM_ID
        END PRIMARY_CUSTOMER_EXT_SYS_ID,
        CASE
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ILC','IB','AIR','STG','NLC','NP')
          THEN plbcif.CIF_NAME
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ELC','EB','XLC','XP','EC','LN','SCF-AR')
          THEN PBCIF.CIF_NAME
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('TLC','IC','OA','SCF-AP')
          THEN pappcif.CIF_NAME
        END PRIMARY_CUSTOMER_NAME,
        CASE
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ILC','IB','AIR','STG','NLC','NP')
          THEN plbbac.BAC
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ELC','EB','XLC','XP','EC','LN','SCF-AR')
          THEN pbbac.BAC
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('TLC','IC','OA','SCF-AP')
          THEN pappbac.BAC
        END PRIMARY_CUST_BAC_CODE,
        CASE
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ILC','IB','AIR','STG','NLC','NP')
          THEN nvl(plbmg.MARKET,'NOT APPLICABLE')
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ELC','EB','XLC','XP','EC','LN','SCF-AR')
          THEN nvl(pbmg.MARKET,'NOT APPLICABLE')
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('TLC','IC','OA','SCF-AP')
          THEN nvl(pappmg.MARKET,'NOT APPLICABLE')
        END PRIMARY_CUST_MARKET,
        CASE
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ILC','IB','AIR','STG','NLC','NP')
          THEN nvl(plbmg.SUB_MARKET,'NOT APPLICABLE')
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ELC','EB','XLC','XP','EC','LN','SCF-AR')
          THEN nvl(pbmg.SUB_MARKET,'NOT APPLICABLE')
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('TLC','IC','OA','SCF-AP')
          THEN nvl(pappmg.SUB_MARKET,'NOT APPLICABLE')
        END PRIMARY_CUST_SUB_MARKET
      FROM F_TSACTION_RELEASED T
      LEFT OUTER JOIN D_BAC_CODE BAC
      ON (T.BAC_CODE_LIABILITY = BAC.BAC_CODE)
      LEFT OUTER JOIN REF_BAC_SORT_CODE REF_BAC
      ON (T.BAC_CODE_LIABILITY = REF_BAC.BAC)
      LEFT OUTER JOIN F_CONTRACT_BALANCE BAL
      ON (T.ID_TSACTION_RELEASED = BAL.ID_TSACTION_RELEASED)
      LEFT OUTER JOIN D_MARKET_SEGMENT MG
      ON (T.ID_MARKET_SEGMENT = MG.ID_MARKET_SEGMENT)
      LEFT OUTER JOIN D_DATE DT
      ON (DT.ID_DATE = T.ID_RELEASED_DATE)
      LEFT OUTER JOIN D_DATE DB
      ON (DB.ID_DATE = BAL.ID_RELEASED_DATE)
      LEFT OUTER JOIN D_PROCESSING_UNIT PU
      ON (PU.ID_PROCESSING_UNIT = T.ID_PROCESSING_UNIT)
      LEFT OUTER JOIN D_BIR_PRODUCT BIRPROD
      ON (BIRPROD.ID_BIR_PRODUCT=T.ID_BIR_PRODUCT)
      LEFT OUTER JOIN D_GTS_PRODUCT_TYPE GTSPROD
      ON (GTSPROD.ID_GTS_PRODUCT_TYPE= T.ID_GTS_PRODUCT_TYPE)
      LEFT OUTER JOIN D_GTS_TSACTION_TYPE GTST
      ON (GTST.ID_GTS_TSACTION_TYPE = T.ID_GTS_TSACTION_TYPE)
      LEFT OUTER JOIN D_CURRENCY CCYT
      ON (CCYT.ID_CURRENCY = T.ID_TSACTION_CURRENCY)
      LEFT OUTER JOIN d_cif lcif
      ON (lcif.id_cif = T.id_liability_cif)
      LEFT OUTER JOIN d_cif lbcif
      ON (lbcif.id_cif = bal.id_liability_cif)
      LEFT OUTER JOIN d_cif bcif
      ON (bcif.id_cif = T.id_BENEFICIARY)
      LEFT OUTER JOIN d_cif icif
      ON (icif.id_cif = T.id_ISSUING_BANK)
      LEFT OUTER JOIN d_cif acif
      ON (acif.id_cif = T.id_ADVISING_BANK)
      LEFT OUTER JOIN d_cif appcif
      ON (appcif.id_cif = T.id_applicant)
      LEFT OUTER JOIN d_state astate
      ON (astate.id_state = acif.id_state)
      LEFT OUTER JOIN d_state bstate
      ON (bstate.id_state = bcif.id_state)
      LEFT OUTER JOIN d_state lstate
      ON (lstate.id_state = lcif.id_state)
      LEFT OUTER JOIN d_state lbstate
      ON (lbstate.id_state = lbcif.id_state)
      LEFT OUTER JOIN d_state istate
      ON (istate.id_state = icif.id_state)
      LEFT OUTER JOIN d_state appstate
      ON (appstate.id_state = appcif.id_state)
      LEFT OUTER JOIN D_TSACTION_SOURCE TSrc
      ON (T.ID_TSACTION_SOURCE = TSrc.ID_TSACTION_SOURCE)
      LEFT OUTER JOIN D_COUNTRY LCTRY
      ON (LCTRY.ID_COUNTRY = lcif.ID_COUNTRY)
      LEFT OUTER JOIN D_COUNTRY LBCTRY
      ON (LBCTRY.ID_COUNTRY = lbcif.ID_COUNTRY)
      LEFT OUTER JOIN D_COUNTRY BCTRY
      ON (BCTRY.ID_COUNTRY = bcif.ID_COUNTRY)
      LEFT OUTER JOIN D_COUNTRY ICTRY
      ON (ICTRY.ID_COUNTRY = icif.ID_COUNTRY)
      LEFT OUTER JOIN D_COUNTRY ACTRY
      ON (ACTRY.ID_COUNTRY = acif.ID_COUNTRY)
      LEFT OUTER JOIN D_COUNTRY APPCTRY
      ON (APPCTRY.ID_COUNTRY = appcif.ID_COUNTRY)
      LEFT OUTER JOIN D_COUNTRY PCTRY
      ON (PCTRY.ID_COUNTRY = T.ID_PRESENTER_COUNTRY)
      LEFT OUTER JOIN D_LOCATION LOC
      ON (LOC.ID_LOCATION = T.ID_PROCESSING_LOCATION)
      LEFT OUTER JOIN D_CURRENCY BCCYT
      ON (BCCYT.ID_CURRENCY = BAL.ID_LIABILITY_CURRENCY)
      LEFT OUTER JOIN D_CURRENCY BALCYT
      ON (BALCYT.ID_CURRENCY = BAL.ID_BALANCE_CURRENCY)
      LEFT OUTER JOIN d_liability_type li
      ON (li.id_liability_type = BAL.id_liability_type)
      LEFT OUTER JOIN d_cif plbcif
      ON (plbcif.id_cif = T.id_liability_cif)
      LEFT OUTER JOIN REF_BAC_SORT_CODE plbbac
      ON (plbcif.bac_code=plbbac.bac)
      LEFT OUTER JOIN D_MARKET_SEGMENT plbmg
      ON (plbbac.SORT_CODE=plbmg.MARKET_SEGMENT)
      LEFT OUTER JOIN d_cif pbcif
      ON (pbcif.id_cif = T.id_BENEFICIARY)
      LEFT OUTER JOIN REF_BAC_SORT_CODE pbbac
      ON (pbcif.bac_code=pbbac.bac)
      LEFT OUTER JOIN D_MARKET_SEGMENT pbmg
      ON (pbbac.SORT_CODE=pbmg.MARKET_SEGMENT)
      LEFT OUTER JOIN d_cif pappcif
      ON (pappcif.id_cif = T.id_applicant)
      LEFT OUTER JOIN REF_BAC_SORT_CODE pappbac
      ON (pappcif.bac_code=pappbac.bac)
      LEFT OUTER JOIN D_MARKET_SEGMENT pappmg
      ON (pappbac.SORT_CODE=pappmg.MARKET_SEGMENT)
      LEFT OUTER JOIN D_CURRENCY LOCALCYT
      ON (LOCALCYT.alpha_code = PU.local_ccy)
      LEFT OUTER JOIN D_BRANCH Branch              
      ON (T.ID_BRANCH  = Branch.ID_BRANCH )
      LEFT OUTER JOIN F_USD_FX_RATE_HISTORY FX1
      ON (BAL.ID_BALANCE_CURRENCY = FX1.ID_CURRENCY and FX1.ID_DATE = (select max(FX11.ID_DATE) from F_USD_FX_RATE_HISTORY FX11 where BAL.ID_BALANCE_CURRENCY = FX11.ID_CURRENCY and FX11.ID_DATE <= BAL.id_released_date))
      LEFT OUTER JOIN F_USD_FX_RATE_HISTORY FX2
      ON (LOCALCYT.ID_CURRENCY = FX2.ID_CURRENCY and FX2.ID_DATE = (select max(FX22.ID_DATE) from F_USD_FX_RATE_HISTORY FX22 where LOCALCYT.ID_CURRENCY = FX22.ID_CURRENCY and FX22.ID_DATE <= BAL.id_released_date))
      LEFT OUTER JOIN F_USD_FX_RATE_HISTORY FX3
      ON (BAL.ID_LIABILITY_CURRENCY = FX3.ID_CURRENCY and FX3.ID_DATE = (select max(FX33.ID_DATE) from F_USD_FX_RATE_HISTORY FX33 where BAL.ID_LIABILITY_CURRENCY = FX33.ID_CURRENCY and FX33.ID_DATE <= BAL.id_released_date))
    Note the lines
    ROUND(BAL.MN_AVAILABLE_BALANCE * NVL(FX1.EXCHANGE_RATE,1) / NVL(FX2.EXCHANGE_RATE,1) , 4)                           
    BAL.LIABILITY_BALANCE * NVL(FX3.EXCHANGE_RATE,1)                                                                   
    BAL.MEMO_LIABILITY_BALANCE * NVL(FX3.EXCHANGE_RATE,1)                 
    And
    LEFT OUTER JOIN F_USD_FX_RATE_HISTORY FX1
      ON (BAL.ID_BALANCE_CURRENCY = FX1.ID_CURRENCY and FX1.ID_DATE = (select max(FX11.ID_DATE) from F_USD_FX_RATE_HISTORY FX11 where BAL.ID_BALANCE_CURRENCY = FX11.ID_CURRENCY and FX11.ID_DATE <= BAL.id_released_date))
      LEFT OUTER JOIN F_USD_FX_RATE_HISTORY FX2
      ON (LOCAMNYT.ID_CURRENCY = FX2.ID_CURRENCY and FX2.ID_DATE = (select max(FX22.ID_DATE) from F_USD_FX_RATE_HISTORY FX22 where LOCAMNYT.ID_CURRENCY = FX22.ID_CURRENCY and FX22.ID_DATE <= BAL.id_released_date))
      LEFT OUTER JOIN F_USD_FX_RATE_HISTORY FX3
      ON (BAL.ID_LIABILITY_CURRENCY = FX3.ID_CURRENCY and FX3.ID_DATE = (select max(FX33.ID_DATE) from F_USD_FX_RATE_HISTORY FX33 where BAL.ID_LIABILITY_CURRENCY = FX33.ID_CURRENCY and FX33.ID_DATE <= BAL.id_released_date))
    This is where I need to incorporate the change.
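    For what it's worth, a common way around ORA-01799 (sketched here against the small FX_RATE/TBL_NM example from the first post, not against the full materialized view) is to rank the candidate rates in an inline view and outer-join to that, so the correlated subquery never appears in the join condition:
    with
    FX_RATE as (
    select 11 as id_date, 1 as id_curr, 47 as exch_rate from dual union all
    select 12, 1, 48 from dual union all
    select 13, 2, 54 from dual union all
    select 14, 2, 55 from dual union all
    select 15, 3, 56 from dual union all
    select 15, 2, 49 from dual),
    TBL_NM as (
    select 13 as p_date, 2 as p_curr, 200 as bal from dual union all
    select 14, 2, 200 from dual union all
    select 15, 2, 200 from dual union all
    select 16, 2, 200 from dual union all
    select 17, 2, 200 from dual union all
    select 11, 5, 100 from dual)
    select p_date, p_curr, bal * nvl(exch_rate, 1) as converted_bal
    from (
          select t.p_date, t.p_curr, t.bal, f.exch_rate,
                 row_number() over (partition by t.p_date, t.p_curr
                                    order by f.id_date desc) as rn
          from TBL_NM t
          left outer join FX_RATE f
            on f.id_curr = t.p_curr
           and f.id_date <= t.p_date
         )
    where rn = 1;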

  • I need help with a SELECT query - help!

    Hello, I need help with a select statement.
    I have a table with 2 fields as shown below
    Name | Type
    John | 1
    John | 2
    John | 3
    Paul | 1
    Paul | 2
    Paul | 3
    Mark | 1
    Mark | 2
    I need a query that returns everything where the name has type 1 or 2 but not type 3. So in the example above the query should bring back all the "Mark" records.
    Thanks,
    Ian

    Or, if the types are sequential from 1 upwards you could simply do:-
    SQL> create table t as
      2  select 'John' as name, 1 as type from dual union
      3  select 'John',2 from dual union
      4  select 'John',3 from dual union
      5  select 'Paul',1 from dual union
      6  select 'Paul',2 from dual union
      7  select 'Paul',3 from dual union
      8  select 'Paul',4 from dual union
      9  select 'Mark',1 from dual union
    10  select 'Mark',2 from dual;
    Table created.
    SQL> select name
      2  from t
      3  group by name
      4  having count(*) <= 2;
    NAME
    Mark
    SQL>
    Or another alternative if they aren't sequential:
    SQL> ed
    Wrote file afiedt.buf
      1  select name from (
      2    select name, max(type) t
      3    from t
      4    group by name
      5    )
      6* where t < 3
    SQL> /
    NAME
    Mark
    SQL>
    Message was edited by:
    blushadow
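    If the requirement is taken literally (names that have type 1 or 2 but never type 3, whatever other types exist), a conditional count is another option; a small sketch against the table t created above:
    select name
    from   t
    group  by name
    having count(case when type in (1, 2) then 1 end) > 0
    and    count(case when type = 3 then 1 end) = 0;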

  • Need help with SQL Query with Inline View + Group by

    Hello Gurus,
    I would really appreciate your time and effort regarding this query. I have the following data set.
    Reference_No---Check_Number---Check_Date--------Description-------------------------------Invoice_Number----------Invoice_Type---Paid_Amount-----Vendor_Number
    1234567----------11223-------------- 7/5/2008----------paid for cleaning----------------------44345563------------------I-----------------*20.00*-------------19
    1234567----------11223--------------7/5/2008-----------Adjustment for bad quality---------44345563------------------A-----------------10.00------------19
    7654321----------11223--------------7/5/2008-----------Adjustment from last billing cycle-----23543556-------------------A--------------------50.00--------------19
    4653456----------11223--------------7/5/2008-----------paid for cleaning------------------------35654765--------------------I---------------------30.00-------------19
    Please Ignore '----', added it for clarity
    I am trying to write a query to aggregate paid_amount based on Reference_No, Check_Number, Payment_Date, Invoice_Number, Invoice_Type, Vendor_Number and display description with Invoice_type 'I' when there are multiple records with the same Reference_No, Check_Number, Payment_Date, Invoice_Number, Invoice_Type, Vendor_Number. When there are no multiple records I want to display the respective Description.
    The query should return the following data set
    Reference_No---Check_Number---Check_Date--------Description-------------------------------Invoice_Number----------Invoice_Type---Paid_Amount-----Vendor_Number
    1234567----------11223-------------- 7/5/2008----------paid for cleaning----------------------44345563------------------I-----------------*10.00*------------19
    7654321----------11223--------------7/5/2008-----------Adjustment from last billing cycle-----23543556-------------------A--------------------50.00--------------19
    4653456----------11223--------------7/5/2008-----------paid for cleaning------------------------35654765-------------------I---------------------30.00--------------19
    The following is my query. I am kind of lost.
    select B.Description, A.sequence_id,A.check_date, A.check_number, A.invoice_number, A.amount, A.vendor_number
    from (
    select sequence_id,check_date, check_number, invoice_number, sum(paid_amount) amount, vendor_number
    from INVOICE
    group by sequence_id,check_date, check_number, invoice_number, vendor_number
    ) A, INVOICE B
    where A.sequence_id = B.sequence_id
    Thanks,
    Nick

    It looks like it is a duplicate thread - correct me if I'm wrong in this case ->
    Need help with SQL Query with Inline View + Group by
    Regards.
    Satyaki De.

  • Need help with writing a query with dynamic FROM clause

    Hi Folks,
    I need help with a query that should generate the "FROM" clause dynamically.
    My main query is as follows
    select DT_SKEY, count(*)
    from *???*
    where DT_SKEY between 20110601 and 20110719
    group by DT_SKEY
    having count(*) = 0
    order by 1;
    The "from" clause of the above query should be generated as below:
    select 'Schema_Name'||'.'||TABLE_NAME
    from dba_tables
    where OWNER = 'Schema_Name'
    Simply sticking the latter query in the first query does not work.
    Any pointers will be appreciated.
    Thanks
    rogers42

    Hi,
    rogers42 wrote:
    Hi Folks,
    I need help with a query that should generate the "FROM" clause dynamically.
    My main query is as follows
    select DT_SKEY, count(*)
    from *???*
    where DT_SKEY between 20110601 and 20110719
    group by DT_SKEY
    having count(*) = 0
    order by 1;
    The "from" clause of the above query should be generated as below:
    select 'Schema_Name'||'.'||TABLE_NAME
    from dba_tables
    where OWNER = 'Schema_Name'
    Remember that anything inside quotes is case-sensitive. Is the owner really "Schema_Name" with a capital S and a capital N, and 8 lower-case letters?
    Simply sticking the latter query in the first query does not work.
    Right; the table name must be given when you compile the query. It's not an expression that you can generate in the query itself.
    Any pointers will be appreciated.
    In SQL*Plus, you can do something like the query below.
    Say you want to count the rows in scott.emp, but you're not certain that the name is emp; it could be emp_2011 or emp_august, or anything else that starts with e. (And the name could change every day, so you can't just look it up now and hard-code it in a query that you want to run in the future.)
    Typically, how dynamic SQL works is that some code (such as a preliminary query) gets some of the information you need to write the query first, and you use that information in a SQL statement that is compiled and run after that. For example:
    -- Preliminary Query:
    COLUMN     my_table_name_col     NEW_VALUE my_table_name
    SELECT     table_name     AS my_table_name_col
    FROM     all_tables
    WHERE     owner          = 'SCOTT'
    AND     table_name     LIKE 'E%';
    -- Main Query:
    SELECT     COUNT (*)     AS cnt
    FROM     scott.&my_table_name
    ;
    This assumes that the preliminary query will find exactly one row; that is, it assumes that SCOTT has exactly one table whose name starts with E. Could you have 0 tables in the schema, or more than 1? If so, what results would you want? Give a concrete example, preferably using commonly available tables (like those in the SCOTT schema) so that the people who want to help you can re-create the problem and test their ideas.
    Edited by: Frank Kulash on Aug 11, 2011 2:30 PM
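    If the table really has to be picked at run time inside the database (rather than with a SQL*Plus substitution variable), a rough PL/SQL sketch using native dynamic SQL could look like the one below; it assumes every table in the schema actually has a DT_SKEY column:
    DECLARE
        v_cnt   NUMBER;
    BEGIN
        FOR t IN (SELECT owner, table_name
                  FROM   dba_tables
                  WHERE  owner = 'SCHEMA_NAME')
        LOOP
            EXECUTE IMMEDIATE
                'SELECT COUNT(*) FROM ' || t.owner || '.' || t.table_name ||
                ' WHERE dt_skey BETWEEN 20110601 AND 20110719'
                INTO v_cnt;
            DBMS_OUTPUT.PUT_LINE (t.owner || '.' || t.table_name || ': ' || v_cnt);
        END LOOP;
    END;
    /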

  • Please, need help with a query

    Hi !
    Please need help with this query:
    It needs to show (in cases of more than 1 loan offer) the latest create_date only once.
    Meaning, in cases where the USER_ID, LOAN_ID and CREATE_DATE are the same, I need to show only the latest. Thanks!!!
    select distinct a.id,
    create_date,
    a.loanid,
    a.rate,
    a.pays,
    a.gracetime,
    a.emailtosend,
    d.first_name,
    d.last_name,
    a.user_id
    from CLAL_LOANCALC_DET a,
    loan_Calculator b,
    bv_user_profile c,
    bv_mr_user_profile d
    where b.loanid = a.loanid
    and c.NET_USER_NO = a.resp_id
    and d.user_id = c.user_id
    and a.is_partner is null
    and a.create_date between
    TO_DATE('6/3/2008 01:00:00', 'DD/MM/YY HH24:MI:SS') and
    TO_DATE('27/3/2008 23:59:00', 'DD/MM/YY HH24:MI:SS')
    order by a.create_date

    Take a look at the syntax:
    max(...) keep (dense_rank last order by ...)
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions056.htm#i1000901
    Nicolas.
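    Applied to the posted query, an untested sketch (assuming one row per USER_ID and LOANID is wanted) could look like this:
    select a.user_id,
           a.loanid,
           max(a.create_date)                                               as create_date,
           max(a.rate)        keep (dense_rank last order by a.create_date) as rate,
           max(a.emailtosend) keep (dense_rank last order by a.create_date) as emailtosend,
           max(d.first_name)  keep (dense_rank last order by a.create_date) as first_name,
           max(d.last_name)   keep (dense_rank last order by a.create_date) as last_name
    from   CLAL_LOANCALC_DET a,
           loan_Calculator b,
           bv_user_profile c,
           bv_mr_user_profile d
    where  b.loanid      = a.loanid
    and    c.NET_USER_NO = a.resp_id
    and    d.user_id     = c.user_id
    and    a.is_partner  is null
    and    a.create_date between TO_DATE('06/03/2008 01:00:00', 'DD/MM/YYYY HH24:MI:SS')
                              and TO_DATE('27/03/2008 23:59:00', 'DD/MM/YYYY HH24:MI:SS')
    group  by a.user_id, a.loanid
    order  by create_date;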

  • Please need help with this query

    Hi !
    Please need help with this query:
    It needs to show (in cases of more than 1 loan offer) the latest create_date only once.
    Meaning, in cases where the USER_ID, LOAN_ID and CREATE_DATE are the same, I need to show only the latest. Thanks!!!
    select distinct a.id,
    create_date,
    a.loanid,
    a.rate,
    a.pays,
    a.gracetime,
    a.emailtosend,
    d.first_name,
    d.last_name,
    a.user_id
    from CLAL_LOANCALC_DET a,
    loan_Calculator b,
    bv_user_profile c,
    bv_mr_user_profile d
    where b.loanid = a.loanid
    and c.NET_USER_NO = a.resp_id
    and d.user_id = c.user_id
    and a.is_partner is null
    and a.create_date between
    TO_DATE('6/3/2008 01:00:00', 'DD/MM/YY HH24:MI:SS') and
    TO_DATE('27/3/2008 23:59:00', 'DD/MM/YY HH24:MI:SS')
    order by a.create_date

    Perhaps something like this...
    select id, create_date, loanid, rate, pays, gracetime, emailtosend, first_name, last_name, user_id
    from (
          select distinct a.id,
                          create_date,
                          a.loanid,
                          a.rate,
                          a.pays,
                          a.gracetime,
                          a.emailtosend,
                          d.first_name,
                          d.last_name,
                          a.user_id,
                          max(create_date) over (partition by a.user_id, a.loanid) as max_create_date
          from CLAL_LOANCALC_DET a,
               loan_Calculator b,
               bv_user_profile c,
               bv_mr_user_profile d
          where b.loanid = a.loanid
          and   c.NET_USER_NO = a.resp_id
          and   d.user_id = c.user_id
          and   a.is_partner is null
          and   a.create_date between
                TO_DATE('6/3/2008 01:00:00', 'DD/MM/YY HH24:MI:SS') and
                TO_DATE('27/3/2008 23:59:00', 'DD/MM/YY HH24:MI:SS')
          )
    where create_date = max_create_date
    order by create_date

  • Hi.. need help with a query

    hello guys :)
    I need help with this exercise:
    Find the most common cooking method in recipes that contain tomatoes.
    the 4 tables:
    t_recipes
    --recipe_no
    --recipe_name
    t_products
    --product_no
    --product_name [tomatoes, cucumbers, onions]
    t_integration
    --product_no
    --recipe_no
    t_cooking_mode
    --recipe_no
    --mode [Frying, baking]
    I am really lost :S
    thank you

    Hi,
    851072 wrote:
    Thank you for your comment :)
    But... I didn't understand the first part
    Sorry, what part is that? You probably understood "Welcome to the forum!" It looks like you understood most of what I said, perhaps before I said it. You seem to be on the right track.
    Unlike talking to a co-worker in the next cube, exchanging messages with someone on this forum takes a fair amount of time, so it's worth the time it takes to explain things very clearly, more than you would in conversation. Post all the relevant information, and say exactly what the problem is.
    amm i tried to do this:
    select t_recipes.recipe_name, count(t_cooking_mode.cooking_mode)
    from (t_cooking_mode INNER JOIN t_integration ON t_cooking_mode.recipe_no = t_integration.recipe_no )
    INNER JOIN t_products ON t_integration.product_no = t_products.product_no)
    WHERE t_products.product_name = 'tomatoes'
    GROUP BY t_recipes.recipe_name
    help?
    Please help by posting whatever you know about the problem. For example, if there's an error message, post the complete error message, including the line number.
    Help the people who want to help you by formatting your code and making it easy to read and understand. This will help you, too.
    For example, here's exactly what you posted, with only the white-space changed to make parallel items, such as parentheses, line up nicely:
    select        t_recipes.recipe_name
    ,       count(t_cooking_mode.cooking_mode)
    from             (          t_cooking_mode
                INNER JOIN      t_integration      ON t_cooking_mode.recipe_no = t_integration.recipe_no
    INNER JOIN      t_products      ON t_integration.product_no = t_products.product_no
    WHERE       t_products.product_name      = 'tomatoes'
    GROUP BY  t_recipes.recipe_name
    When you format your code like this, it can be very easy to spot errors like unbalanced parentheses. In this case, the ')' right before the WHERE clause has no matching '('. You don't need to use any parentheses at all in the FROM clause. You can simply say:
    FROM     t_cooking_mode
    JOIN      t_integration      ON t_cooking_mode.recipe_no = t_integration.recipe_no
    JOIN      t_products      ON t_integration.product_no = t_products.product_no
    WHERE     ...
    INNER JOIN is the default kind of join, which makes sense, since the majority of all joins are inner joins. I usually say JOIN (instead of INNER JOIN) because it makes the code more compact and easier to read (at least for me), but I won't be maintaining your code, so do what's best for you.
    What are you trying to find in this problem? Is it a recipe name or a cooking mode? If it's a cooking mode, then why do you have t_recipes.recipe_name in the SELECT (and GROUP BY) clause? Shouldn't you be using some other column, from some other table?
    If I understand the problem correctly, the t_recipes table is not needed in this problem. You won't necessarily use every table in every query.
    When you post formatted text on this site, type these 6 characters:
    {code}
    (small letters only, inside curly brackets) before and after each section of formatted text, to preserve spacing.
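    For completeness, here is a rough, untested sketch of the full query (assuming the cooking-mode column is called cooking_mode, as in your attempt):
    select cooking_mode
    from  (
           select cm.cooking_mode,
                  rank() over (order by count(*) desc) as rnk
           from   t_cooking_mode cm
           join   t_integration  i  on i.recipe_no  = cm.recipe_no
           join   t_products     p  on p.product_no = i.product_no
           where  p.product_name = 'tomatoes'
           group  by cm.cooking_mode
          )
    where  rnk = 1;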

  • Need help with Berkeley XML DB Performance

    We need help with maximizing the performance of our use of Berkeley XML DB. I am filling in most of the 29-part questionnaire as listed by Oracle's BDB team.
    Berkeley DB XML Performance Questionnaire
    1. Describe the Performance area that you are measuring? What is the
    current performance? What are your performance goals you hope to
    achieve?
    We are measuring the performance while loading a document during
    web application startup. It is currently taking 10-12 seconds when
    only one user is on the system. We are trying to do some testing to
    get the load time when several users are on the system.
    We would like the load time to be 5 seconds or less.
    2. What Berkeley DB XML Version? Any optional configuration flags
    specified? Are you running with any special patches? Please specify?
    dbxml 2.4.13. No special patches.
    3. What Berkeley DB Version? Any optional configuration flags
    specified? Are you running with any special patches? Please Specify.
    bdb 4.6.21. No special patches.
    4. Processor name, speed and chipset?
    Intel Xeon CPU 5150 2.66GHz
    5. Operating System and Version?
    Red Hat Enterprise Linux Release 4 Update 6
    6. Disk Drive Type and speed?
    Don't have that information
    7. File System Type? (such as EXT2, NTFS, Reiser)
    EXT3
    8. Physical Memory Available?
    4GB
    9. Are you using Replication (HA) with Berkeley DB XML? If so, please
    describe the network you are using, and the number of Replicas.
    No
    10. Are you using a Remote Filesystem (NFS) ? If so, for which
    Berkeley DB XML/DB files?
    No
    11. What type of mutexes do you have configured? Did you specify
    –with-mutex=? Specify what you find in your config.log, search
    for db_cv_mutex?
    None. Did not specify -with-mutex during bdb compilation
    12. Which API are you using (C++, Java, Perl, PHP, Python, other) ?
    Which compiler and version?
    Java 1.5
    13. If you are using an Application Server or Web Server, please
    provide the name and version?
    Oracle Application Server 10.1.3.4.0
    14. Please provide your exact Environment Configuration Flags (include
    anything specified in you DB_CONFIG file)
    Default.
    15. Please provide your Container Configuration Flags?
    final EnvironmentConfig envConf = new EnvironmentConfig();
    envConf.setAllowCreate(true); // If the environment does not
    // exist, create it.
    envConf.setInitializeCache(true); // Turn on the shared memory
    // region.
    envConf.setInitializeLocking(true); // Turn on the locking subsystem.
    envConf.setInitializeLogging(true); // Turn on the logging subsystem.
    envConf.setTransactional(true); // Turn on the transactional
    // subsystem.
    envConf.setLockDetectMode(LockDetectMode.MINWRITE);
    envConf.setThreaded(true);
    envConf.setErrorStream(System.err);
    envConf.setCacheSize(1024*1024*64);
    envConf.setMaxLockers(2000);
    envConf.setMaxLocks(2000);
    envConf.setMaxLockObjects(2000);
    envConf.setTxnMaxActive(200);
    envConf.setTxnWriteNoSync(true);
    envConf.setMaxMutexes(40000);
    16. How many XML Containers do you have? For each one please specify:
    One.
    1. The Container Configuration Flags
              XmlContainerConfig xmlContainerConfig = new XmlContainerConfig();
              xmlContainerConfig.setTransactional(true);
    xmlContainerConfig.setIndexNodes(true);
    xmlContainerConfig.setReadUncommitted(true);
    2. How many documents?
    Everytime the user logs in, the current xml document is loaded from
    a oracle database table and put it in the Berkeley XML DB.
    The documents get deleted from XML DB when the Oracle application
    server container is stopped.
    The number of documents should start with zero initially and it
    will grow with every login.
    3. What type (node or wholedoc)?
    Node
    4. Please indicate the minimum, maximum and average size of
    documents?
    The minimum is about 2MB and the maximum could be 20MB. The average is
    mostly about 5MB.
    5. Are you using document data? If so please describe how?
    We are using document data only to save changes made
    to the application data in a web application. The final save goes
    to the relational database. Berkeley XML DB is just used to store
    temporary data since going to the relational database for each change
    will cause severe performance issues.
    17. Please describe the shape of one of your typical documents? Please
    do this by sending us a skeleton XML document.
    Due to the sensitive nature of the data, I can provide XML schema instead.
    18. What is the rate of document insertion/update required or
    expected? Are you doing partial node updates (via XmlModify) or
    replacing the document?
    The document is inserted during user login. Any change made to the application
    data grid or other data components gets saved in Berkeley DB. We also have
    an automatic save every two minutes. The final save from the application
    gets saved in a relational database.
    19. What is the query rate required/expected?
    Users will not be entering data rapidly. There will be lot of think time
    before the users enter/modify data in the web application. This is a pilot
    project but when we go live with this application, we will expect 25 users
    at the same time.
    20. XQuery -- supply some sample queries
    1. Please provide the Query Plan
    2. Are you using DBXML_INDEX_NODES?
    Yes.
    3. Display the indices you have defined for the specific query.
         XmlIndexSpecification spec = container.getIndexSpecification();
         // ids
         spec.addIndex("", "id", XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         spec.addIndex("", "idref", XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         // index to cover AttributeValue/Description
         spec.addIndex("", "Description", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ELEMENT | XmlIndexSpecification.KEY_SUBSTRING, XmlValue.STRING);
         // cover AttributeValue/@value
         spec.addIndex("", "value", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         // item attribute values
         spec.addIndex("", "type", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         // default index
         spec.addDefaultIndex(XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ELEMENT | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         spec.addDefaultIndex(XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         // save the spec to the container
         XmlUpdateContext uc = xmlManager.createUpdateContext();
         container.setIndexSpecification(spec, uc);
    4. If this is a large query, please consider sending a smaller
    query (and query plan) that demonstrates the problem.
    21. Are you running with Transactions? If so please provide any
    transactions flags you specify with any API calls.
    Yes. READ_UNCOMMITTED in some and READ_COMMITTED in other transactions.
    22. If your application is transactional, are your log files stored on
    the same disk as your containers/databases?
    Yes.
    23. Do you use AUTO_COMMIT?
         No.
    24. Please list any non-transactional operations performed?
    No.
    25. How many threads of control are running? How many threads in read
    only mode? How many threads are updating?
    We use Berkeley XML DB within the context of a struts web application.
    Each user logged into the web application will be running a bdb transaction
    within the context of a struts action thread.
    26. Please include a paragraph describing the performance measurements
    you have made. Please specifically list any Berkeley DB operations
    where the performance is currently insufficient.
    We are clocking 10-12 seconds to load a document from bdb when
    five users are on the system.
    getContainer().getDocument(documentName);
    27. What performance level do you hope to achieve?
    We would like to get less than 5 seconds when 25 users are on the system.
    28. Please send us the output of the following db_stat utility commands
    after your application has been running under "normal" load for some
    period of time:
    % db_stat -h database environment -c
    % db_stat -h database environment -l
    % db_stat -h database environment -m
    % db_stat -h database environment -r
    % db_stat -h database environment -t
    (These commands require the db_stat utility access a shared database
    environment. If your application has a private environment, please
    remove the DB_PRIVATE flag used when the environment is created, so
    you can obtain these measurements. If removing the DB_PRIVATE flag
    is not possible, let us know and we can discuss alternatives with
    you.)
    If your application has periods of "good" and "bad" performance,
    please run the above list of commands several times, during both
    good and bad periods, and additionally specify the -Z flags (so
    the output of each command isn't cumulative).
    When possible, please run basic system performance reporting tools
    during the time you are measuring the application's performance.
    For example, on UNIX systems, the vmstat and iostat utilities are
    good choices.
    Will give this information soon.
    29. Are there any other significant applications running on this
    system? Are you using Berkeley DB outside of Berkeley DB XML?
    Please describe the application?
    No to the first two questions.
    The web application is an online review of test questions. The users
    login and then review the items one by one. The relational database
    holds the data in xml. During application load, the application
    retrieves the xml and then saves it to bdb. While the user
    is making changes to the data in the application, it writes those
    changes to bdb. Finally when the user hits the SAVE button, the data
    gets saved to the relational database. We also have an automatic save
    every two minutes, which saves the bdb xml data to the relational
    database.
    Thanks,
    Madhav
    [email protected]

    Could it be that you simply do not have set up indexes to support your query? If so, you could do some basic testing using the dbxml shell:
    milu@colinux:~/xpg > dbxml -h ~/dbenv
    Joined existing environment
    dbxml> setverbose 7 2
    dbxml> open tv.dbxml
    dbxml> listIndexes
    dbxml> query     { collection()[//@date-tip]/*[@chID = ('ard','zdf')] (: example :) }
    dbxml> queryplan { collection()[//@date-tip]/*[@chID = ('ard','zdf')] (: example :) }
    Verbosity will make the engine display some (rather cryptic) information on index usage. I can't remember where the output is explained; my feeling is that "V(...)" means the index is being used (which is good), but that observation may not be accurate. Note that some details in the setVerbose command could differ, as I'm using 2.4.16 while you're using 2.4.13.
    Also, take a look at the query plan. You can post it here and some people will be able to diagnose it.
    Michael Ludwig

  • Need help with the session state value items.

    I need help with the session state value items.
    A trigger is created (on after delete / insert) on table A.
    When at least one row is inserted into table B, the trigger updates the value to 'Y'
    in table A.
    When all rows are deleted from table B, the trigger updates the value to 'N'
    in table A.
    In the detail report the changes are visible, but the value set by the trigger is not reflected in the session state item.
    How can I implement this?

    You'll have to create a process which runs after your database update process that does a query and loads the result into your page item.
    For example
    SELECT YN_COLUMN
    INTO My_Page_Item
    FROM My_TABLE
    WHERE Key_value = My_Page_Item_Holding_Key_Value
    The DML process will only return key values after updating, such as an ID primary key updated by a sequence in a trigger.
    If the value is showing in a report, make sure the report refreshes on reload of the page.
    Edited by: Bob37 on Dec 6, 2011 10:36 AM
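    For example, a rough sketch of such an after-submit process (the item and object names here are placeholders, not taken from your application):
    BEGIN
        SELECT yn_column
        INTO   :P1_FLAG_ITEM
        FROM   my_table
        WHERE  key_value = :P1_KEY_ITEM;
    END;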
