Use of the PREVIOUS command to eliminate duplicate records in a counter formula

I'm trying to create a counter formula to count the number of documents paid over 30 days. To do this I subtract the InvDate from the PayDate and then create a counter based on that value: if {days to pay} is greater than 30 then 1 else 0.
Then I sum that counter for each group; the groups are company, month, and supplier.
Because invoices can have multiple payments and payments can have multiple invoices, there is no way around having duplicate records for the field.
So my counter is distorted by the duplicate records, and my percentage-of-payments-over-30-days formula will not be accurate due to these duplicates.
I've tried a Distinct Count based on the formula if {days to pay} is greater than 30 then ..., and it works, except that it counts the default 0.00 as a distinct record, so my total is off by 1 for summaries that have a record where {days to pay} is less than or equal to 30.
If I subtract 1 from the formula, it will be inaccurate for summaries with no records over 30 days.
So I've come to this:
if Previous({RPDOC ID}) <> {RPDOC ID}    // first detail row for this document; {RPDOC ID} is the document field, per Eric's reply below
then
    (if {days to pay} > 30 then 1 else 0)
else
    0
but it doesn't work. I've sorted the detail section by ...
Does anyone have any knowledge of, or success with, using the PREVIOUS command in a report?
Edited by: Fred Ebbett on Feb 11, 2010 5:41 PM

So, you have to include all data and not just use the selection criteria 'PayDate-InvDate>30'?
You will need to create a running total on the RPDOC ID, one for each section you need to show a count for, evaluating for your >30 day formula. 
I don't understand why you're telling the formula to return 0.00 in your if statement.
In order to get percentages you'll need to use the distinct count (possibly running totals again but this time no formula). Then in each section you'd need a formula that divides the two running totals.
I may not have my head around the concept, since you stated "invoices can have multiple payments and payments can have multiple invoices". So invoice A can have payments 1, 2 and 3, and payment 4 can be associated with invoices B and C? Ugh. Still, you're evaluating every row of data. If your focus is the invoices that took longer than 30 days to be paid, I'd group on the invoice number, put the "if 'PayDate-InvDate>30' then 1 else 0" formula in the detail, do a sum on it in the group footer, and base my running total on the sum being >0 to do a distinct count of invoices.
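For intuition, here is the same dodge in SQL terms, as a sketch (the table and column names are assumptions, not from the report): return the document id only when the condition holds, and NULL otherwise, so the under-30 rows stay out of the distinct count entirely instead of contributing a 0.00 value.
SELECT COUNT(DISTINCT CASE WHEN pay_date - inv_date > 30 THEN rpdoc_id END) AS docs_over_30,
       COUNT(DISTINCT rpdoc_id) AS docs_total
  FROM payments  -- hypothetical table; COUNT(DISTINCT ...) ignores the NULLs the CASE produces
The percentage over 30 days is then just the first count divided by the second.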
Hope this points you in the right direction.
Eric

Similar Messages

  • Replaced Table with Command, now have duplicate records

    Hi all,
    I'm not sure if there is a problem with my command, or if I'm not doing something correctly within Crystal itself.
    I replaced a table with a command. The command is filtering the dataset as I wanted but for some reason I'm getting duplicate records in the report. I tried using the Distinct keyword, but that doesn't seem to help.
    Any suggestions?
    SELECT DISTINCT BI_CLOSE_DT, BI_OPEN_DT, BI_SO_COM, BI_SO_NBR, BI_ACCT, BI_SO_DET_KEY,
           BI_SO_STAT_CD, BI_SO_TO_ACCT, BI_SO_TO_CUST_NBR, BI_SO_TYPE_CD, BI_SRV_LOC_NBR
      FROM BI_SO_DET
     WHERE BI_SO_STAT_CD = 'X'
       AND BI_SO_TYPE_CD IN ('NEW', 'NCBM', 'NEW-WF')
       AND BI_SO_TO_ACCT IS NOT NULL

    A coworker helped me realize the data was actually correct; I was getting the complete dataset, duplicates and all, as I should, and just needed to filter out the duplicates by using the grouping function on the unique id. All good now.
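    For reference, the dedup could also be pushed into the command itself. A sketch, assuming BI_SO_DET_KEY is the unique id in question (MAX() is just an arbitrary pick among duplicate rows):
    SELECT BI_SO_DET_KEY,
           MAX(BI_CLOSE_DT) AS BI_CLOSE_DT,
           MAX(BI_OPEN_DT) AS BI_OPEN_DT,
           MAX(BI_SO_COM) AS BI_SO_COM,
           MAX(BI_SO_NBR) AS BI_SO_NBR,
           MAX(BI_ACCT) AS BI_ACCT,
           MAX(BI_SO_STAT_CD) AS BI_SO_STAT_CD,
           MAX(BI_SO_TO_ACCT) AS BI_SO_TO_ACCT,
           MAX(BI_SO_TO_CUST_NBR) AS BI_SO_TO_CUST_NBR,
           MAX(BI_SO_TYPE_CD) AS BI_SO_TYPE_CD,
           MAX(BI_SRV_LOC_NBR) AS BI_SRV_LOC_NBR
      FROM BI_SO_DET
     WHERE BI_SO_STAT_CD = 'X'
       AND BI_SO_TYPE_CD IN ('NEW', 'NCBM', 'NEW-WF')
       AND BI_SO_TO_ACCT IS NOT NULL
     GROUP BY BI_SO_DET_KEY  -- one row per key, so the report never sees duplicates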

  • Eliminate duplicates while fetching data from source

    Hi All,
    CUSTOMER TRANSACTION
    CUST_LOC     CUT_ID     TRANSACTION_DATE     TRANSACTION_TYPE
    100          12345      01-jan-2009          CREDIT
    100          23456      15-jan-2000          CREDIT
    100          12345      01-jan-2010          DEBIT
    100          12345      01-jan-2000          DEBIT
    Now, as per my requirement, I need to fetch data from the CUSTOMER_TRANSACTION table for those customers which have a transaction in the last 10 years. In my data above, customer 12345 has transactions in the last 10 years, whereas customer 23456 does not, so we will eliminate it.
    Now, the CUSTOMER_TRANSACTION table has approximately 100 million records, so we are fetching the data in batches. Batching is divided into months, 120 months in total. Below is my query.
    select *
      from CUSTOMER_TRANSACTION CT left outer join
           (select distinct CUST_LOC, CUT_ID
              from CUSTOMER_TRANSACTION
             where TRANSACTION_DATE >= ADD_MONTHS(SYSDATE, -120)
               and TRANSACTION_DATE < ADD_MONTHS(SYSDATE, -119)) CUST
        on CT.CUST_LOC = CUST.CUST_LOC and CT.CUT_ID = CUST.CUT_ID
    Through a shell script, the month offsets will change: -120:-119, -119:-118, ..., -1:0.
    Now the problem is duplication of records.
    While fetching data for jan-2009, it will get cust_id 12345 and will fetch all 3 of its records and load them into the target.
    While fetching data for jan-2010, it will again get cust_id 12345 and will again fetch all 3 records and load them into the target.
    So instead of having only 3 records for customer 12345, the target will have 6. Can someone help me with how to keep duplicate records from getting in?
    As of now I have 2 ways in mind:
    1. Fetch all records at once. This is impossible, as it will give space issues.
    2. After each batch, run a procedure which deletes duplicate records based on cust_loc, cut_id and transaction_date (see the sketch below). But again, it will have performance problems.
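    For reference, the delete in approach 2 is usually the classic ROWID dedup. A sketch, where target_table is a placeholder and the keys are the columns named above:
    delete from target_table t
     where t.rowid > (select min(t2.rowid)
                        from target_table t2
                       where t2.cust_loc = t.cust_loc
                         and t2.cut_id = t.cut_id
                         and t2.transaction_date = t.transaction_date);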
    I want to eliminate the duplicates while fetching the data from the source.
    Edited by: ace_friends22 on Apr 6, 2011 10:16 AM

    You can do it this way:
    SELECT DISTINCT cust_loc,
                    cut_id
      FROM customer_transaction
     WHERE transaction_date >= ADD_MONTHS(SYSDATE, -120)
       AND transaction_date < ADD_MONTHS(SYSDATE, -119)
    However, please note that if you want to get the transactions in a given month, like jan-2009 and jan-2010 and so on, you might need to use TRUNC.
    Your date comparison could be like this (in this example I am checking whether the transaction date is in the month of jan-2009):
    AND transaction_date BETWEEN ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27) AND LAST_DAY(ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27))
    Your modified SQL:
    SELECT *
      FROM customer_transaction
     WHERE transaction_date BETWEEN ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27) AND LAST_DAY(ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27))
    Testing...
    --Sample Data
    CREATE TABLE customer_transaction (
      cust_loc number,
      cut_id number,
      transaction_date date,
      transaction_type varchar2(20)
    );
    INSERT INTO customer_transaction VALUES (100,12345,TO_DATE('01-JAN-2009','dd-MON-yyyy'),'CREDIT');
    INSERT INTO customer_transaction VALUES (100,23456,TO_DATE('15-JAN-2000','dd-MON-yyyy'),'CREDIT');
    INSERT INTO customer_transaction VALUES (100,12345,TO_DATE('01-JAN-2010','dd-MON-yyyy'),'DEBIT');
    INSERT INTO customer_transaction VALUES (100,12345,TO_DATE('01-JAN-2000','dd-MON-yyyy'),'DEBIT');
    --To have three records in the month of jan-2009
    UPDATE customer_transaction
       SET transaction_date = TO_DATE('02-JAN-2009','dd-MON-yyyy')
    WHERE cut_id = 12345
       AND transaction_date = TO_DATE('01-JAN-2010','dd-MON-yyyy');
    UPDATE customer_transaction
       SET transaction_date = TO_DATE('03-JAN-2009','dd-MON-yyyy')
    WHERE cut_id = 12345
       AND transaction_date = TO_DATE('01-JAN-2000','dd-MON-yyyy');
    commit;
    --End of sample data
    SELECT *
      FROM customer_transaction
     WHERE transaction_date BETWEEN ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27) AND LAST_DAY(ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27));
    Results:
    CUST_LOC     CUT_ID TRANSACTI TRANSACTION_TYPE
          100      12345 01-JAN-09 CREDIT
          100      12345 02-JAN-09 DEBIT
          100      12345 03-JAN-09 DEBIT
    As you can see, there are only 3 records for 12345.
    Regards,
    Rakesh
    Edited by: Rakesh on Apr 6, 2011 11:48 AM
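    For completeness, here is a sketch of a third way that avoids the duplication at fetch time entirely (same table and columns as above; the shell script would substitute the batch offset, -27 here). It assigns each customer to exactly one batch month, the month of their earliest transaction inside the 10-year window, so each customer's full set of rows is fetched exactly once.
    SELECT ct.*
      FROM customer_transaction ct
      JOIN (SELECT cust_loc, cut_id,
                   TRUNC(MIN(transaction_date), 'MONTH') AS anchor_month
              FROM customer_transaction
             WHERE transaction_date >= ADD_MONTHS(SYSDATE, -120)
             GROUP BY cust_loc, cut_id) c
        ON ct.cust_loc = c.cust_loc
       AND ct.cut_id = c.cut_id
     WHERE c.anchor_month = ADD_MONTHS(TRUNC(SYSDATE, 'MONTH'), -27);  -- this batch's month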

  • 36 duplicate record found -- error while loading master data

    Hello BW Experts,
    Error while loading master data:
    (green light) Update PSA (50000 records posted): no errors
    (green light) Transfer rules (50000 → 50000 records): no errors
    (green light) Update rules (50000 → 50000 records): no errors
    (green light) Update (0 new / 50000 changed): no errors
    (red light) Processing end: errors occurred
        Processing 2 finished
        36 duplicate record found. 15718 recordings used in table /BIC/PZMATNUM
        36 duplicate record found. 15718 recordings used in table /BIC/XZMATNUM
    This error repeats with all the data packets.
    What could be the reason for the error, and how can it be corrected?
    Any suggestions appreciated.
    Thanks,
    BWer

    BWer,
    We have exactly the same issue when loading the infoobject 0PM_ORDER, the datasource is 0PM_ORDER_ATTR.
    The workaround we have been using is a 'manual push' from the details tab of the monitor. Weirdly, we don't have this issue in our test and dev systems, and even in production it doesn't occur some days. We have all the necessary settings enabled in the InfoPackage.
    Did you figure out a solution for your issue? If so, please let me know, and likewise I will.
    -Venkat

  • Check duplicate records

    Right now I'm doing a migration project. I need to import data from Excel and text files into an Oracle database. My question is how to do the duplicate-data checking and the validation of identical attributes and data types, to make sure I import the data correctly and accurately. Does anyone have any suggestions? Your ideas and comments are highly appreciated. Thanks in advance.

    Hi,
    I'm new to this forum, so my answer is a little bit late ...
    Export the data from all documents in an identical format, for example |name|adress|and|so|on|, and merge all the files into one big one. Then eliminate duplicate records with cat file | sort -u and you'll have unique rows. Next, you have to check whether different keys mean different attribute values: truncate the key values from the file and do a unique sort again, then diff -c both files and you get a list of the duplicate keys. The last step is to decide which ones are correct; I'm afraid this will be the hardest work.
    After that you can import all the data without any constraint violation.
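    The same check can also be done inside Oracle once the data is in a staging table. A sketch, where staging_table and its columns are placeholders: load everything into a constraint-free staging table, then let the database list the keys that occur more than once.
    SELECT name, adress, COUNT(*) AS cnt
      FROM staging_table           -- hypothetical staging table with no constraints
     GROUP BY name, adress
    HAVING COUNT(*) > 1;           -- every row returned is a duplicate key to resolve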
    Best regards
    Andreas

  • Duplicate Records in DTP's Temporary Storage Area

    I am getting duplicate records (36 records) in the DTP's temporary storage while loading data to 0MAT_SALES (text). In the error DTP I am also not getting the error stack. What could be the reason?
    Ram Mohan

    As I mentioned in the previous thread, I have duplicate records.
    In PSA there are no duplicates, but when I execute the DTP I get duplicate records in the DTP's temporary storage area...
    Ram

  • Duplicate record found short dump, if routed through PSA

    Hi Experts,
    I am getting these errors extracting the master data for 0ACTIVITY
    1 duplicate record found. 1991 recordings used in table /BI0/XACTIVITY
    1 duplicate record found. 1991 recordings used in table /BI0/PACTIVITY
    2 duplicate record found. 2100 recordings used in table /BI0/XACTIVITY
    2 duplicate record found. 2100 recordings used in table /BI0/PACTIVITY.
    If I extract via PSA with the option to ignore duplicate records, I get a short dump:
    ShrtText
        An SQL error occurred when accessing a table.
    How to correct the error
        Database error text: "ORA-14400: inserted partition key does not map to any partition"
    What is causing the errors in my extraction?
    thanks
    D Bret

    Go to RSRV, then All Elementary Tests --> PSA Tables --> Consistency Between PSA Partitions and SAP Administration Information, and correct the error.

  • How to remove the duplicate record in DART Extract

    Hi Guys,
    We are getting a duplicate record when we validate the DART extract file through data views for FI General Ledger Account Balances. If anyone has experience with this, please help us.
    Following are the steps we took to validate the DART extract file for FI General Ledger Account Balances:
    1. We ran the DART extract program to extract the data from the tables into a directory file, period-wise, in transaction FTW1A.
    2. When we validate the data from the DART extract file through the data view for FI General Ledger Account Balances in transaction FTWH, we get a duplicate record.
    We are unable to find out where the duplicate records are coming from. It would be great if anyone could help us quickly.
    Thanks & Regards,
    Boobalan, V

    If the duplicate records are actually in the DART view rather than the DART extract, you could try OSS Note 1139619, "DART: Eliminate duplicate records from DART view".
    Additional note: 1332571, "FTWH/FTWY - Performance for 'Eliminate duplicate records'".
    Colleen
    Edited by: Colleen Geraghty on May 28, 2009 6:07 PM

  • Select query using Union All displays duplicate records

    Hello All Gurus-
    I am using Oracle 9i.
    When I use the following query to fetch records based on BUILDNUMBERNAME and ASSIGNED_BUILD, I get duplicate records:
    select T1.ID FROM Defect T1, statedef T2, repoproject T3
    WHERE T1.STATE=T2.ID AND T1.repoproject = T3.dbid AND T3.name Like 'ABC' AND T1.ASSIGNED_BUILD like '1.4.5.6'
    Union All
    select T1.ID FROM Defect T1, statedef T2, repoproject T3
    WHERE T1.STATE=T2.ID AND T1.repoproject = T3.dbid AND T3.name Like 'ABC' AND T1.BUILDNUMBERNAME like '1.4.5.6'
    How can I use ORDER BY on T1.ID? When I add ORDER BY T1.ID it throws an error.
    Kindly help me in this :(
    Thanks in advance.

    Sorry for not providing all of the details.
    I am using the Toad tool to run the query.
    1. When I use the following query:
    select T1.ID FROM Defect T1, statedef T2, repoproject T3
    WHERE T1.STATE=T2.ID AND T1.repoproject = T3.dbid AND T3.name Like 'ABC' AND T1.ASSIGNED_BUILD like '1.4.5.6' order by T1.ID
    Union All
    select T1.ID FROM Defect T1, statedef T2, repoproject T3
    WHERE T1.STATE=T2.ID AND T1.repoproject = T3.dbid AND T3.name Like 'ABC' AND T1.BUILDNUMBERNAME like '1.4.5.6' order by T1.ID
    I get: ORA-00933: SQL command not properly ended.
    2. If I leave out the ORDER BY on T1.ID and run the following query:
    select T1.ID FROM Defect T1, statedef T2, repoproject T3
    WHERE T1.STATE=T2.ID AND T1.repoproject = T3.dbid AND T3.name Like 'ABC' AND T1.ASSIGNED_BUILD like '1.4.5.6'
    Union All
    select T1.ID FROM Defect T1, statedef T2, repoproject T3
    WHERE T1.STATE=T2.ID AND T1.repoproject = T3.dbid AND T3.name Like 'ABC' AND T1.BUILDNUMBERNAME like '1.4.5.6'
    then it runs fine, but it displays duplicate values like:
    00089646
    00087780
    00089148
    00090118
    00090410
    00088503
    00080985
    00084526
    00087108
    00087109
    00087117
    00088778
    00086714
    00079518
    00087780
    00089148
    00090392
    00090393
    00090395
    00090398
    00090401
    00090402
    00090403
    00090406
    00090408
    00088503
    00080985
    00084526
    00087108
    00087109
    00087117
    00088778
    00086714
    00079518
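    For reference, the usual fix here is twofold, sketched below against the same tables: UNION (without ALL) removes the duplicates that arise when a row matches both branches, and the single ORDER BY goes once, after the last branch, referencing the result column rather than T1.ID.
    select T1.ID FROM Defect T1, statedef T2, repoproject T3
    WHERE T1.STATE=T2.ID AND T1.repoproject = T3.dbid AND T3.name Like 'ABC' AND T1.ASSIGNED_BUILD like '1.4.5.6'
    Union
    select T1.ID FROM Defect T1, statedef T2, repoproject T3
    WHERE T1.STATE=T2.ID AND T1.repoproject = T3.dbid AND T3.name Like 'ABC' AND T1.BUILDNUMBERNAME like '1.4.5.6'
    order by ID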

  • Using Rownum and ROwid returns duplicate records

    Hi All,
    We have implemented pagination as below, using ROWID and ROWNUM:
    SELECT id
      FROM emp
     WHERE ROWID IN (SELECT RID
                       FROM (SELECT ROWID RID, ROWNUM RNUM
                               FROM (SELECT id
                                       FROM emp
                                      WHERE T_ID IN (200005,200229,200230,200249,200250,200049)
                                        AND dte >= SYSDATE - 90
                                        AND LOWER(DESC) = LOWER('A')
                                        AND LOWER(NVL(FLAG,'0')) != LOWER('3')
                                        AND LOWER(MODDE) LIKE LOWER('%210%')
                                      ORDER BY dte ASC)
                              WHERE ROWNUM < 11)
                      WHERE RNUM >= 1)
     ORDER BY dte ASC
    But we find that the query returns duplicate records on consecutive pages. For example, if a, b, c, d, e is returned for the first iteration, then for the next iteration f, g, h, a, y is returned.
    Is it because the ORDER BY clause doesn't have a unique key column?
    Please help, or suggest how to efficiently implement pagination without a performance hit.

    Try DISTINCT; since you are selecting only one column, it will eliminate any duplicates.
    SELECT DISTINCT id
      FROM emp
     WHERE ROWID IN (SELECT RID
                       FROM (SELECT ROWID RID, ROWNUM RNUM
                               FROM (SELECT id
                                       FROM emp
                                      WHERE T_ID IN (200005,200229,200230,200249,200250,200049)
                                        AND dte >= SYSDATE - 90
                                        AND LOWER(DESC) = LOWER('A')
                                        AND LOWER(NVL(FLAG,'0')) != LOWER('3')
                                        AND LOWER(MODDE) LIKE LOWER('%210%')
                                      ORDER BY dte ASC)
                              WHERE ROWNUM < 11)
                      WHERE RNUM >= 1)
     ORDER BY id  -- with SELECT DISTINCT, Oracle requires the ORDER BY column to be in the select list
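    To the original question: yes, ordering by a non-unique column makes the row order, and therefore the page boundaries, non-deterministic, so consecutive pages can overlap. A sketch of the usual fix, with the other predicates omitted for brevity: add a unique column (id, assumed unique here) to the ORDER BY as a tiebreaker.
    SELECT id
      FROM (SELECT id, ROWNUM RNUM
              FROM (SELECT id
                      FROM emp
                     WHERE dte >= SYSDATE - 90
                     ORDER BY dte ASC, id ASC)  -- unique tiebreaker makes the order stable
             WHERE ROWNUM < 11)
     WHERE RNUM >= 1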

  • Duplicate records in flat file extracted using openhub

    Hi folks
    I am extracting data from a cube through open hub into a flat file, and I see duplicate records in the file.
    I am doing a full load to a flat file.
    I cannot have a technical key because I am using a flat file.
    Poonam

    I am using aggregates (in the DTP there is an option to use aggregates), the aggregates are compressed, and I am still facing this issue.
    Poonam

  • How to delete Duplicate records from the informatica target using mapping?

    Hi, I have a scenario like below: in my mapping I have a source, which may contain unique records or duplicate records. Source and target are different tables. I have a target in my mapping which contains duplicate records. Using the mapping, I have to delete the duplicate records in the target table. The target table does not contain any surrogate key. We can use the target table as a lookup table, but the target table cannot be used as a source in the mapping. We cannot use Post SQL.

    Hi All, I have multiple flat files which I need to load into a single table. I did that using the indirect option at session level. But I need to work out how to populate a substring of the header into the Name column in the target table. I have two columns, Id and Name. In every input file I have only one column, 'id', with a header like H|ABCD|Date. I need to populate the target like the example below.
    File 1:           File 2:
    H|ABCD|Date       H|EFGH|Date
    1                 4
    2                 5
    3                 6
    Target table:
    Id    Name
    1     ABCD
    2     ABCD
    3     ABCD
    4     EFGH
    5     EFGH
    6     EFGH
    Can anyone help with what the logic should be to get this data into a table in Informatica?

  • What is the use of ignore duplicate records?

    Hi guru's
    What is the use of 'ignore duplicate records'? Will it disallow duplicate records when you are loading master data?
    Actually, master data will not have duplicate records, so why do we use this option? And without selecting the duplicate-records checkbox, what will happen?
    Does it support only flat files, or R/3 systems as well? If it supports flat files, please tell me the procedure for both.
    Thanks
    Reddy

    Hi,
    If you check 'Ignore Duplicate Records', the system will tolerate duplicate records in the load instead of failing.
    Master data normally does not have duplicate records; this option is useful only in certain rare scenarios.
    regards
    SR

  • Job Fail:9 duplicate record found. 73 recordings used in table /BIC/XZ_DEAL

    Dear All
    This job loads from a BW ODS to a BW master data InfoObject and another ODS. The job always fails with the same message: 9 duplicate record found. 73 recordings used in table /BIC/XZ_DEAL (Z_DEAL is the InfoObject).
    When I rerun it, the job finishes with no errors.
    Please help me solve this problem.
    thanks
    Phet

    Hi,
    What is the info object name.
    Regards,
    Goutam

  • How to handle duplicate records with the bcp command

    Hi All,
    I'm using BCP to import ASCII text data into a table that already has many records. BCP failed because of a 'duplicate primary key' error.
    Now, is there any way, using BCP, to know precisely which record's primary key caused that 'violation of inserting duplicate key'?
    I already used the -O option to output errors to an error.log, but it doesn't help much, because that error log contains the same error message mentioned above without telling me exactly which record it was, so that I could pull that duplicate record out of my import data file.
    TIA, and have a great day.

    The only way I know of to figure out which PKs conflicted is to load the data into a different table and then run an INNER JOIN select between the two.
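    A sketch of that approach, with placeholder table and key names: bulk-load the file into a staging table that has no primary key, then join it to the target to list the rows whose keys already exist.
    -- after bcp loads the file into import_staging:
    SELECT s.*
      FROM import_staging s
     INNER JOIN target_table t
        ON t.pk_col = s.pk_col;  -- these rows are the ones that would violate the PK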
    BCP.exe is not technically part of SSIS; I don't know why you are posting in this section of the forum.
    Arthur My Blog
