Duplicate records in report
Hello guys,
He was asking: I have duplicate records in the report, how do we rectify them?
Why and how do duplicate records appear in reporting? How is it possible?
Please explain how this can happen.
Thanks & regards
Hi,
It may be that your data target is reading data from a DSO, for example.
If this DSO has Account as a key field but not Center, then records for the same account with different centers can accumulate into what looks like duplicate data.
This case may occur with a flat file load, and the records need to be corrected in that case. The flat file load also works directly when both Account and Center are key fields of that particular DSO.
This is one scenario; others beyond the above can happen as well.
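The effect of the key-field choice can be sketched in Python (the field names and values below are made up for illustration; a DSO load in overwrite mode behaves roughly like the dictionary here):

```python
# Hypothetical illustration: a DSO in overwrite mode keeps one row
# per key-field combination, so the key fields decide what survives.
records = [
    {"account": "4711", "center": "C1", "amount": 100},
    {"account": "4711", "center": "C2", "amount": 250},
]

def load_to_dso(rows, key_fields):
    """Overwrite-mode load: the last row per key combination wins."""
    dso = {}
    for row in rows:
        key = tuple(row[f] for f in key_fields)
        dso[key] = row
    return list(dso.values())

# Key = account only: the two centers collapse into one row.
print(len(load_to_dso(records, ["account"])))            # 1 row
# Key = account + center: both rows survive as intended.
print(len(load_to_dso(records, ["account", "center"])))  # 2 rows
```

The same records loaded under different key definitions give different results, which is why the key-field setup is the first thing to check when a report shows unexpected duplicates or merges.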
Best Regards,
Arpit
Similar Messages
-
Duplicate records in report but not in Database
I am new to Crystal Reports and took an online training course for it this week. I am pulling the data from an Access database, and there is no duplication of records in the tables.
I can create and run the same report from a different workstation and it works fine, but when I create it on my laptop it creates duplicate entries.
I am pulling information from 3 separate tables within the same database. It will pull the correct "order id" from the order table, but it won't pull the correct "unit price" or "quantity" from the order details table.
The instructor was able to look at the report and even tried to recreate it, with the same results. He said there was something wrong with Crystal Reports and then something about the SQL, but he talked quickly and moved on, and I wasn't able to ask questions.
I'm not sure what kind of information you need from me. Any assistance you can provide would be appreciated. I tried to search the threads but couldn't find any similar problems.
Thanks!
Hi Angela,
Please check the following things:
1. The database used while creating the report on the workstation and on the laptop is the same.
2. The version of Crystal Reports on both machines is the same (Help >> About Crystal Reports).
3. Could you compare this situation against some other database?
Please keep the thread updated so that we can discuss further.
Thanks -
Prevent Duplicate records in Report
I have the following 2 records...
PART_EIPN -------|NOMEN--|FMC | PDT | PDT_DESC---------------|LOG_NO
70107-28400-043 | BIFILAR | 16 ++| A23 | BIFILAR ASSEMBLY |+ 18
70107-28400-043 | BIFILAR | 16 ++| A23 | BIFILAR ASSEMBLY |+ 23
How can I have the report print only one record and, under log_no, print 18,23?
Is there any way I can prevent the 2nd record from printing but still display the 2 log_no values (18,23)? The (+) marks are not part of the record; they are just for formatting on this forum.
Sorry, I missed something.
Please confirm that this data:
PART_EIPN -------|NOMEN--|FMC | PDT | PDT_DESC---------------|LOG_NO
70107-28400-043 | BIFILAR | 16 ++| A23 | BIFILAR ASSEMBLY |+ 18
70107-28400-043 | BIFILAR | 16 ++| A23 | BIFILAR ASSEMBLY |+ 23
is produced by this query?
Select f.model,f.part_eipn,f.nomen,e.FMC,s.PDT,p.PDT_DESC,e.log_no
from GENFRAC_r_part_EIpn f, event_log e,event_status s,r_pdt p
where s.pdt = :pdt
and s.pdt(+) = p.PDT_CODE
and p.model = 'PBLH60'
and f.PART_EIPN = e.EIPN
and e.log_no = s.log_no
and f.model = 'PBLH60'
and e.model = 'PBLH60'
and s.model = 'PBLH60'
order by fmc
(e.log_no is the last column.)
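Once one query produces those rows, collapsing the duplicates and joining the log_no values can be sketched like this (Python with made-up rows, purely for illustration; in Oracle SQL the same effect can be had with an aggregate such as LISTAGG over a GROUP BY of the remaining columns):

```python
from collections import OrderedDict

# Hypothetical rows mirroring the report output above.
rows = [
    ("70107-28400-043", "BIFILAR", 16, "A23", "BIFILAR ASSEMBLY", 18),
    ("70107-28400-043", "BIFILAR", 16, "A23", "BIFILAR ASSEMBLY", 23),
]

def collapse_log_no(rows):
    """Group on everything except log_no; join the log_no values."""
    grouped = OrderedDict()
    for *key, log_no in rows:
        grouped.setdefault(tuple(key), []).append(log_no)
    return [key + (",".join(str(n) for n in sorted(nums)),)
            for key, nums in grouped.items()]

print(collapse_log_no(rows))
# One row, with "18,23" in the log_no column.
```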
If this is not true, then what I need in order to help you is one query (maybe you could create a view) that produces the data currently in your report; then getting the log_no's onto one line is not a problem. -
Avoiding duplicate records in report
Hi All,
I have a scenario where
A delivery document gets created in R/3, say on 7/1, with Act GI date "#" and all KFs "0". This gets loaded into BI.
On 7/5 this is PGI'd, and the status in R/3 changes to Act GI date "7/5" and a qty of 100. When loaded into BI, this gets published as duplicate records, i.e.
Del doc Created date Act GI Del. Ind Qty
12345 1-Jul # # 0
12345 1-Jul 5-Jul # 100
Please note that the data is getting loaded from DSO into Infocube and DSO is in overwrite mode.
Any suggestions to overcome this problem?
Is ACT GI date a key field in the DSO?
If yes, data will not be overwritten and two records will be loaded into the Cube.
Make ACT GI date a data field; that will result in only one record (12345 1-Jul 5-Jul # 100), since the key field values are then the same.
But first make sure this is right for all business scenarios. -
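A small Python sketch of the overwrite behaviour (document numbers and dates taken from the example above; the dictionaries are only an illustration of DSO key handling, not the actual load mechanism):

```python
# Two loads for the same delivery document, in arrival order.
deltas = [
    {"del_doc": "12345", "created": "1-Jul", "act_gi": "#",     "qty": 0},
    {"del_doc": "12345", "created": "1-Jul", "act_gi": "5-Jul", "qty": 100},
]

# ACT GI date as a *data* field: key is just the document number,
# so the second load overwrites the first -> one final record.
dso = {}
for row in deltas:
    dso[row["del_doc"]] = row

# ACT GI date as a *key* field: both rows survive -> "duplicates".
dso_keyed = {}
for row in deltas:
    dso_keyed[(row["del_doc"], row["act_gi"])] = row

print(len(dso))        # 1 record, qty 100
print(len(dso_keyed))  # 2 records
```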
How to delete duplicate record in Query report
Hi Experts,
I had created an infoset and query in my SAP system, but I want to delete some duplicate records before the list output. Can we add some code in the Extras coding section to delete duplicates, and how is it done? Would you please give me a simple brief.
Joe
Hi,
You can try to restrict, in the filter area of the query designer, the values of the characteristic that gives the correct result.
But I would still suggest not keeping the duplicate records in the cube, as they are not part of your requirement and are giving you wrong results.
So reload the correct records into the cube in order to avoid such problems in the future as well.
Regards,
Amit -
USE of PREVIOUS command to eliminate duplicate records in counter formula
I'm trying to create a counter formula to count the number of documents paid over 30 days. To do this I subtract the InvDate from the PayDate and then create a counter based on this value: if {days to pay} is greater than 30 then 1 else 0.
Then I sum that counter for each group. The groups are company, month, and supplier.
Because invoices can have multiple payments and payments can cover multiple invoices, there is no way around having duplicate records for the field.
So my counter is distorted by the duplicate records, and my percentage-of-payments-over-30-days formula will not be accurate due to these duplicates.
I've tried a Distinct Count based on this formula (if {days to pay} is greater than 30 then ), and it works except that it counts 0.00 as a distinct record, so my total is off by 1 for summaries that include a record where {days to pay} is less than or equal to 30.
If I subtract 1 from the formula, then it will be inaccurate for summaries with no records over 30 days.
So I've come to this:
if Previous() does not equal
then
if {days to pay} is greater than 30
then 1
else 0.00
else 0.00
But it doesn't work. I've sorted the detail section by
Does anyone have any knowledge of, or success with, using the PREVIOUS command in a report?
Edited by: Fred Ebbett on Feb 11, 2010 5:41 PM
So, you have to include all data and not just use the selection criteria 'PayDate-InvDate>30'?
You will need to create a running total on the RPDOC ID, one for each section you need to show a count for, evaluating for your >30 day formula.
I don't understand why you're telling the formula to return 0.00 in your if statement.
In order to get percentages you'll need to use the distinct count (possibly running totals again but this time no formula). Then in each section you'd need a formula that divides the two running totals.
I may not have my head around the concept, since you stated "invoices can have multiple payments and payments can have multiple invoices". So invoice A can have payments 1, 2 and 3, and payment 4 can be associated with invoices B and C? Ugh. Still, you're evaluating every row of data. If your focus is the invoices that took longer than 30 days to be paid, I'd group on the invoice number, put the "if PayDate-InvDate>30 then 1 else 0" formula in the detail, do a sum on it in the group footer, and base my running total on the sum being >0 to do a distinct count of invoices.
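That distinct-count idea can be sketched like this (Python with made-up payment rows; "days_to_pay" stands in for the PayDate-InvDate formula):

```python
# Hypothetical payment rows; an invoice can appear on several rows.
rows = [
    {"invoice": "A", "days_to_pay": 45},
    {"invoice": "A", "days_to_pay": 45},   # duplicate row for invoice A
    {"invoice": "B", "days_to_pay": 12},
    {"invoice": "C", "days_to_pay": 31},
]

# Distinct invoices paid late vs. distinct invoices overall:
# sets ignore the duplicate rows automatically.
late  = {r["invoice"] for r in rows if r["days_to_pay"] > 30}
total = {r["invoice"] for r in rows}

print(len(late))               # 2 (A and C), duplicates not double-counted
print(len(late) / len(total))  # share of invoices paid over 30 days
```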
Hope this points you in the right direction.
Eric -
Hi everyone,
I'm having a little difficulty resolving a problem with a repeating field causing duplication of data in a report I'm working on, and was hoping someone on here could suggest something to help!
My report is designed to detail library issues during a particular period, categorised by the language of the item issued. My problem is that on the SQL database that our library management system uses, it is possible for an item to have more than one language listed against it (some books will be in more than one language). When I list the loan records excluding the language data field, I get a list of distinct loan records. Bringing the language data into the report causes the loan record to repeat for each language associated with it, so if a book is in both English and French, the loan record appears like this:
LOAN RECORD NO. LANGUAGE CODE
123456 ENG
123456 FRE
So, although the loan only occurred once, I have two instances of it in my report.
I am only interested in the language that appears first and I can exclude duplicated records from the report page. I can also count only the distinct records to get an accurate overall total. My problem is that when I group the loan records by language code (I really need to do this as there are millions of loan records held in the database) the distinct count stops being a solution, as when placed at this group level it only excludes duplicates in the respective group level it's placed in. So my report would display something like this:
ENG 1
FRE 1
A distinct count of the whole report would give the correct total of 1, but a cumulative total of the figures calculated at the language code group level would total 2, and be incorrect. I've encountered similar results when using Running Totals evaluating on a formula that excludes repeated loan record no.s from the count, but again when I group on the language code this goes out of the window.
I need to find a way of grouping the loan records by language with a total count of loan records alongside each grouping that accurately reflects how many loans of that language took place.
Is this possible using a calculation formula when there are repeating fields, or do I need to find a way of merging the repeating language fields into one field so that the report would appear like:
LOAN RECORD LANGUAGE CODE
123456 ENG, FRE
Any suggestions would be greatly appreciated, as aside from this repeating language data there are quite a few other repeating database fields on the system that it would be nice to report on!
Thanks!
If you create a group by loan,
then create a group by language,
and place the values in the group (loan id in the loan group header),
you should only see the loan id once.
Place the language in the language group and you should see it only one time;
a group header returns the 1st value of a unique id.
Then, in order to calculate while avoiding the duplicates,
use manual running totals.
Create a set for each summary you want, and make sure each set has a different variable name.
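The idea behind these manual running totals - add to the variable only for a loan id that has not been counted yet - can be sketched in Python (made-up loan rows, purely for illustration):

```python
# Hypothetical loan rows; a loan repeats once per language code.
rows = [
    {"loan": 123456, "lang": "ENG"},
    {"loan": 123456, "lang": "FRE"},   # repeat of the same loan
    {"loan": 789012, "lang": "ENG"},
]

# Count each loan only under its first language, mimicking a manual
# running total that increments only for loan ids not yet seen.
counts, seen = {}, set()
for r in rows:
    if r["loan"] not in seen:
        seen.add(r["loan"])
        counts[r["lang"]] = counts.get(r["lang"], 0) + 1

print(counts)                # {'ENG': 2}; the FRE repeat is not counted
print(sum(counts.values()))  # 2, matching the distinct loan count
```

The per-language totals now also sum to the distinct number of loans, which is exactly the property the plain distinct count loses once it is placed inside the language group.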
MANUAL RUNNING TOTALS
RESET
The reset formula is placed in a group header or report header to reset the summary to zero for each unique record it groups by:
whileprintingrecords;
Numbervar X := 0;
CALCULATION
The calculation is placed adjacent to the field or formula that is being calculated.
(If there are duplicate values, create a group on the field being calculated. If there are no duplicate records, the detail section is used.)
whileprintingrecords;
Numbervar X := x + ; (or a formula)
DISPLAY
The display is the sum of what is being calculated. This is placed in a group, page, or report footer (generally the footer of the group whose header holds the reset):
whileprintingrecords;
Numbervar X;
X -
How to delete Duplicate records in IT2006
Dear Experts
We have a situation where we have duplicate records with the same start and end dates in IT2006. This is because of incorrect configuration, which we have now corrected, but we need to do a clean-up of the existing duplicate records. Any idea on how to clean them up? I ran report RPTKOK00 to find these duplicates, but I could not delete the duplicate/inconsistent records using report RPTBPC10 or HNZUPTC0; I could only delete the deductions posted on the record.
Is there any standard report/any other means of deleting the duplicate records created in IT2006?
Thanks in advance for all your help.
Regards
Vignesh.
You could probably use SE16N to identify the duplicates and create the list of quotas to delete, and you could probably use transaction LSMW to write a script to delete them, but be aware that you can't delete a quota if it's been deducted from.
You'd have to delete the Absence/Attendance first, then delete the Quota, then recreate the Absence/Attendance. -
How to delete duplicate records in cube
Hi,
Can you help me with how to delete the duplicate records in my cube,
and tell me some predefined cubes and data sources for the MM and SD modules?
Hi Anne,
About "duplicate records": could you be more precise?
There must be at least one different characteristic to distinguish one record from the other (at least the Request ID). In order to delete data from InfoCubes (selectively), use ABAP report RSDRD_DELETE_FACTS (be careful: it does not request any confirmation as in RSA1 ...).
About MM and SD cubes, see RSA1 -> Business Content -> InfoProvider by InfoAreas. See also the Metadata Repository for the same InfoProviders.
About DataSources, just execute transaction LBWE in your source system: there you see all the LO-Cockpit extractors.
Hope it helps (and if so, remember reward points).
GFV -
Hi gurus
We created a text datasource in R/3 and replicated it into BW 7.0
An infopackage (loading to PSA) and DataTransferProcess was created and included in a process chain.
The job failed because of duplicate records.
We now discovered that the "Delivery of Duplicate Records" setting for this DataSource in BW is set to "Undefined".
When creating the datasource in R/3, there were no settings for the "Delivery of duplicate records".
In BW, I've tried to change the "Delivery of Duplicate data records" setting to NONE, but when I go into change mode, the "Delivery of duplicate" setting is not changeable.
Does anyone have any suggestion on how to solve this problem?
Thanks,
@nne Therese
Hi Muraly,
I have the same issue. I am loading texts from R/3 to the PSA using an InfoPackage with full update. From the PSA I am using a DTP with delta, with the option "valid records update, no reporting (request red)".
It was running fine for the last few weeks: the transferred and added records were the same as in the PSA request every day.
Suddenly the load to the InfoObject failed. I deleted the request from the InfoObject and reloaded using the DTP, and it failed again. I tried loading with full update, as it is texts, and it failed again. When I analysed the error, it said duplicate records. So I changed the DTP by checking the option to handle duplicate records and loaded with full update. It worked fine: the transferred records were more than 50000, and the added records matched the exact number in the PSA request.
I reset the DTP back to delta and loaded today, but the transferred records are 14000 and the added records (3000) are the same as the PSA request. Normally, if you look at the load history, the number of records transferred and added to the InfoObject and the number of records in the PSA request are the same every day.
Why this difference now? In Production I have no issues. Since I changed the DTP, will transporting it to Production make any difference? This is my first time doing BI 7.0.
Please advise, and correct me if I am wrong.
Thanks,
Sudha.. -
Duplicate Records in Details for ECC data source. Help.
Hello. First post on SDN. I have been searching prior posts but have come up empty. I am in the middle of creating a report linking directly into 4 tables in ECC 6.0. I am having trouble getting either the table links set up correctly or filtering out the duplicate record sets that are being repeated in the details section of my report. 119 records are displayed when the parameter values should only yield 7. The details section repeats the 7 records 17 times (there are 17 matching records for the parameter choices in one of the other tables, which I think is the cause).
I think this is due to the other table links for my parameter values. But I need to keep the links the way they are for other aspects of the report (header information). The tables in question use an Inner Join, Enforced Both, =. I tried the other link options with no luck.
I am unable to use the "Select Distinct Records" option in the Database menu, since this is not supported when connecting to ECC.
Any ideas would be greatly appreciated.
Thanks,
Barret
PS. I come from more of a functional background, so development is sort of new to me. Take it easy on the newbie.
If you can't establish links that bring back unique data, then use a group to display the data.
Group the report by a field which is the lowest common denominator.
Move all fields into the group footer and suppress the group header and details.
You will not be able to use normal summaries, as they will count/sum all the duplicated data; use Running Totals instead and select evaluate on change of the introduced group.
Ian -
How to avoid duplicate measures in reports due to case functions?
Hi,
If I create a report using a dimension called insert_source_type, where the next level in the dimension hierarchy is insert_source, and I do not apply any formula, I get a report where I can drill down on insert_source_type and see the insert_source values.
If I use a function like (CASE "Ins Source"."Ins Source Type" WHEN 'OWS' THEN 'WEB' ELSE "Ins Source"."Ins Source Type" END) and change the label of insert_source_type to Channel Group instead, then when
I drill down on Channel Group, it goes to insert_source_type, and from there I can drill down to insert_source.
There is one insert_source_type level too many!
How can this be avoided?
Thanks and Regards
Giuliano
Hi Mahesh,
Write a module processor to check for the duplicate records in the file adapter,
or eliminate the duplicate records with a Java/ABAP mapping.
Also check these links:
Re: How to Handle this "Duplicate Records"
Duplicate records
Ignoring Duplicate Records--urgent
Re: Duplicate records frequently occurred
Re: Reg ODS JUNK DATA
http://help.sap.com/saphelp_nw2004s/helpdata/en/d0/538f3b294a7f2de10000000a11402f/frameset.htm
regards
srinivas -
Need to include duplicate records in sub query
Hi All,
I am using the following query, and I am getting an error message that the subquery returns duplicate records. Actually, I need these duplicate records for my report. I want to get records for
the whole year, like
JAN FEB MARCH ...
Any idea how I can achieve this task? My query is as follows:
select pmnum
,SITEID,
(select description from locations where pm.location = locations.location and pm.siteid=locations.siteid) as site,
(select description from commodities where commodities.commodity= pm.commoditygroup) as workcategory,
description, (select wonum from workorder where workorder.pmnum = pm.pmnum
and targstartdate < '2013-02-01') as jan,
(select wonum from workorder where workorder.pmnum = pm.pmnum and
workorder.status<>'CAN' and targstartdate >= '2013-02-01' and
targstartdate < '2013-03-01') as feb,
(select wonum from workorder where workorder.pmnum = pm.pmnum and
workorder.status<>'CAN' and targstartdate >= '2013-03-01' and
targstartdate < '2013-04-01') as mar,
(select name from companies where companies.company = pm.vendor) as contractor
from pm where ((PM.siteid = 'AAA'))
Subqueries in the SELECT column list must return a scalar value (single row, single column). If you need multiple rows returned, use a join instead. But you need to consider what will happen when more than one row is returned by more than one
of the joins because these are correlated with the pm table row but not each other. For example, let's say you have a single row returned from "pm" matching 5 sites and 3 workcategories. This will result in 15 rows being returned for the single
pm row.
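A quick Python sketch of that multiplication effect (made-up rows: 5 matching site rows and 3 matching category rows against one pm row):

```python
# Hypothetical tables: one pm row, 5 matching sites, 3 matching categories.
pm    = [{"pmnum": 1}]
sites = [{"pmnum": 1, "site": s} for s in range(5)]
cats  = [{"pmnum": 1, "cat": c}  for c in range(3)]

# Each join is correlated with pm but not with the other join,
# so the matches combine as a cross product: 1 * 5 * 3 rows.
joined = [(p, s, c)
          for p in pm
          for s in sites if s["pmnum"] == p["pmnum"]
          for c in cats  if c["pmnum"] == p["pmnum"]]

print(len(joined))  # 15 rows from a single pm row
```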
Below is an untested example.
SELECT
pmnum
,SITEID
,locations.description as site
,commodities.description as workcategory
,pm.description
,workorder_jan.wonum AS jan
,workorder_feb.wonum AS feb
,workorder_mar.wonum AS mar
,companies.name AS contractor
FROM dbo.pm
LEFT JOIN dbo.locations ON pm.location = locations.location
AND pm.siteid=locations.siteid
LEFT JOIN dbo.commodities ON commodities.commodity = pm.commoditygroup
LEFT JOIN dbo.workorder AS workorder_jan ON workorder_jan.pmnum = pm.pmnum
AND workorder_jan.targstartdate < '2013-02-01'
LEFT JOIN dbo.workorder AS workorder_feb ON workorder_feb.pmnum = pm.pmnum
AND workorder_feb.status <> 'CAN'
AND workorder_feb.targstartdate >= '2013-02-01'
AND workorder_feb.targstartdate < '2013-03-01'
LEFT JOIN dbo.workorder AS workorder_mar ON workorder_mar.pmnum = pm.pmnum
AND workorder_mar.status <> 'CAN'
AND workorder_mar.targstartdate >= '2013-03-01'
AND workorder_mar.targstartdate < '2013-04-01'
LEFT JOIN dbo.companies ON companies.company = pm.vendor
WHERE pm.siteid = 'AAA';
Dan Guzman, SQL Server MVP, http://www.dbdelta.com -
How to remove duplicate records from output?
How do I remove duplicate records from the output? I used DELETE ADJACENT DUPLICATES, but duplicate records keep coming back. Please suggest.
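A likely cause: DELETE ADJACENT DUPLICATES only removes rows that sit next to each other, so each internal table must first be sorted by the same fields listed under COMPARING. A minimal Python sketch of the behaviour, with made-up data:

```python
# Mimics ABAP's DELETE ADJACENT DUPLICATES: a row is dropped only
# when it equals the row immediately before it.
def delete_adjacent_duplicates(rows):
    out = []
    for r in rows:
        if not out or out[-1] != r:
            out.append(r)
    return out

data = [1, 2, 1, 1, 3]
print(delete_adjacent_duplicates(data))          # [1, 2, 1, 3] - the 1 still repeats
print(delete_adjacent_duplicates(sorted(data)))  # [1, 2, 3]    - sorted first, fully deduplicated
```

In the ABAP below, several DELETE ADJACENT DUPLICATES statements run on tables that are either unsorted or sorted by fields other than those in COMPARING, which is why the duplicates keep coming back.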
Hi Shruthi,
Thanks for your answer, but the duplicate records are still coming.
Here is my code; please check it out:
*& Report ZCRM_TROUBLE_TICKET
REPORT zcrm_trouble_ticket.
TYPES : BEGIN OF ty_qmih,
qmnum TYPE qmnum,
equnr TYPE equnr,
iloan TYPE iloan,
ausvn TYPE ausvn,
ausbs TYPE ausbs,
auztv TYPE auztv,
auztb TYPE auztb,
iwerk TYPE iwerk,
END OF ty_qmih,
BEGIN OF ty_qmel,
qmnum TYPE qmnum,
qmtxt TYPE qmtxt,
indtx TYPE indltx,
priok TYPE priok,
strmn TYPE strmn,
strur TYPE strur,
ltrmn TYPE ltrmn,
ltrur TYPE ltrur,
objnr TYPE qmobjnr,
arbpl TYPE lgwid,
vkorg TYPE vkorg,
vtweg TYPE vtweg,
spart TYPE spart,
END OF ty_qmel,
BEGIN OF ty_ihpa,
parnr TYPE i_parnr,
parvw TYPE parvw,
objnr TYPE qmobjnr,
END OF ty_ihpa,
BEGIN OF ty_crhd,
arbpl TYPE arbpl,
objid TYPE cr_objid,
END OF ty_crhd,
BEGIN OF ty_crtx,
ktext TYPE cr_ktext,
objid TYPE cr_objid,
END OF ty_crtx,
BEGIN OF ty_qmfe,
fecod TYPE fecod,
fegrp TYPE fegrp,
qmnum TYPE qmnum,
END OF ty_qmfe,
BEGIN OF ty_qmur,
urcod TYPE urcod,
urgrp TYPE urgrp,
urtxt TYPE urstx,
qmnum TYPE qmnum,
END OF ty_qmur,
BEGIN OF ty_iloa,
tplnr TYPE tplnr,
iloan TYPE iloan,
END OF ty_iloa,
BEGIN OF ty_output,
qmnum TYPE qmnum,
equnr TYPE equnr,
iloan TYPE iloan,
ausvn TYPE ausvn,
ausbs TYPE ausbs,
auztv TYPE auztv,
auztb TYPE auztb,
iwerk TYPE iwerk,
qmtxt TYPE qmtxt,
indtx TYPE indltx,
priok TYPE priok,
strmn TYPE strmn,
strur TYPE strur,
ltrmn TYPE ltrmn,
ltrur TYPE ltrur,
objnr TYPE qmobjnr,
arbpl TYPE lgwid,
vkorg TYPE vkorg,
vtweg TYPE vtweg,
spart TYPE spart,
parnr TYPE i_parnr,
parvw TYPE parvw,
arbpl TYPE arbpl,
objid TYPE cr_objid,
arbpl1 TYPE arbpl,
ktext TYPE cr_ktext,
fecod TYPE fecod,
fegrp TYPE fegrp,
urcod TYPE urcod,
urgrp TYPE urgrp,
urtxt TYPE urstx,
tplnr TYPE tplnr,
END OF ty_output.
DATA : it_qmih TYPE STANDARD TABLE OF ty_qmih,
it_qmel TYPE STANDARD TABLE OF ty_qmel,
it_ihpa TYPE STANDARD TABLE OF ty_ihpa,
it_crhd TYPE STANDARD TABLE OF ty_crhd,
it_crtx TYPE STANDARD TABLE OF ty_crtx,
it_qmfe TYPE STANDARD TABLE OF ty_qmfe,
it_qmur TYPE STANDARD TABLE OF ty_qmur,
it_iloa TYPE STANDARD TABLE OF ty_iloa,
it_output TYPE STANDARD TABLE OF ty_output,
wa_qmih TYPE ty_qmih,
wa_qmel TYPE ty_qmel,
wa_ihpa TYPE ty_ihpa,
wa_crhd TYPE ty_crhd,
wa_crtx TYPE ty_crtx,
wa_qmfe TYPE ty_qmfe,
wa_qmur TYPE ty_qmur,
wa_iloa TYPE ty_iloa,
wa_output TYPE ty_output.
INITIALIZATION.
REFRESH : it_qmih,
it_qmel,
it_ihpa,
it_crhd,
it_crtx,
it_qmfe,
it_qmur,
it_iloa,
it_output.
CLEAR: wa_qmih,
wa_qmel,
wa_ihpa,
wa_crhd,
wa_crtx,
wa_qmfe,
wa_qmur,
wa_iloa,
wa_output.
start-of-selection.
SELECT qmnum
equnr
iloan
ausvn
ausbs
auztv
auztb
iwerk
FROM qmih
INTO TABLE it_qmih.
SORT it_qmih BY qmnum .
DELETE ADJACENT DUPLICATES FROM it_qmih COMPARING qmnum equnr iloan ausvn ausbs auztv auztb iwerk.
SELECT qmnum
qmtxt
indtx
priok
strmn
strur
ltrmn
ltrur
objnr
arbpl
vkorg
vtweg
spart
FROM qmel
INTO TABLE it_qmel
FOR ALL ENTRIES IN it_qmih
WHERE qmnum = it_qmih-qmnum.
SORT it_qmel BY qmnum.
DELETE ADJACENT DUPLICATES FROM it_qmel COMPARING qmnum
qmtxt
indtx
strmn
strur
ltrmn
ltrur
objnr
arbpl
vkorg
vtweg
spart.
IF it_qmel IS NOT INITIAL.
SELECT parnr
parvw
objnr
FROM ihpa
INTO TABLE it_ihpa
FOR ALL ENTRIES IN it_qmel
WHERE objnr = it_qmel-objnr.
ENDIF.
SORT it_ihpa BY parnr parvw objnr.
DELETE ADJACENT DUPLICATES FROM it_ihpa COMPARING parnr
parvw
objnr.
IF it_qmel IS NOT INITIAL.
SELECT arbpl
objid
FROM crhd
INTO TABLE it_crhd
FOR ALL ENTRIES IN it_qmel
WHERE objid = it_qmel-arbpl.
ENDIF.
SORT it_crhd BY arbpl objid.
DELETE ADJACENT DUPLICATES FROM it_crhd COMPARING arbpl
objid.
IF it_qmel IS NOT INITIAL.
SELECT ktext
objid
FROM crtx
INTO TABLE it_crtx
FOR ALL ENTRIES IN it_crhd
WHERE objid = it_crhd-objid.
ENDIF.
SORT it_crtx BY ktext objid.
DELETE ADJACENT DUPLICATES FROM it_crtx COMPARING ktext
objid.
IF it_qmih IS NOT INITIAL.
SELECT fecod
fegrp
qmnum
FROM qmfe
INTO TABLE it_qmfe
FOR ALL ENTRIES IN it_qmih
WHERE qmnum = it_qmih-qmnum.
ENDIF.
SORT it_qmfe BY qmnum fecod fegrp.
DELETE ADJACENT DUPLICATES FROM it_qmfe COMPARING qmnum
fecod
fegrp.
IF it_qmih IS NOT INITIAL.
SELECT urcod
urgrp
urtxt
qmnum
FROM qmur
INTO TABLE it_qmur
FOR ALL ENTRIES IN it_qmih
WHERE qmnum = it_qmih-qmnum.
ENDIF.
SORT it_qmur BY qmnum urcod urgrp urtxt.
DELETE ADJACENT DUPLICATES FROM it_qmur COMPARING qmnum
urcod
urgrp
urtxt.
IF it_qmih IS NOT INITIAL.
SELECT tplnr
iloan
FROM iloa
INTO TABLE it_iloa
FOR ALL ENTRIES IN it_qmih
WHERE iloan = it_qmih-iloan.
ENDIF.
SORT it_iloa BY tplnr iloan.
DELETE ADJACENT DUPLICATES FROM it_iloa COMPARING tplnr
iloan.
LOOP AT it_qmih INTO wa_qmih.
wa_output-qmnum = wa_qmih-qmnum.
wa_output-equnr = wa_qmih-equnr.
wa_output-iloan = wa_qmih-iloan.
wa_output-ausvn = wa_qmih-ausvn.
wa_output-ausbs = wa_qmih-ausbs.
wa_output-auztv = wa_qmih-auztv.
wa_output-auztb = wa_qmih-auztb.
wa_output-iwerk = wa_qmih-iwerk.
APPEND wa_output TO it_output.
CLEAR wa_output.
READ TABLE it_qmel INTO wa_qmel WITH KEY qmnum = wa_qmih-qmnum.
wa_output-qmtxt = wa_qmel-qmtxt.
wa_output-indtx = wa_qmel-indtx.
wa_output-priok = wa_qmel-priok.
wa_output-strmn = wa_qmel-strmn.
wa_output-strur = wa_qmel-strur.
wa_output-ltrmn = wa_qmel-ltrmn.
wa_output-ltrur = wa_qmel-ltrur.
wa_output-objnr = wa_qmel-objnr.
wa_output-arbpl = wa_qmel-arbpl.
wa_output-vkorg = wa_qmel-vkorg.
wa_output-vtweg = wa_qmel-vtweg.
wa_output-spart = wa_qmel-spart.
APPEND wa_output TO it_output.
CLEAR wa_output.
READ TABLE it_ihpa INTO wa_ihpa WITH KEY objnr = wa_qmel-objnr.
wa_output-parnr = wa_ihpa-parnr.
wa_output-parvw = wa_ihpa-parvw.
APPEND wa_output TO it_output.
CLEAR wa_output.
READ TABLE it_crhd INTO wa_crhd WITH KEY objid = wa_qmel-arbpl.
wa_output-arbpl = wa_crhd-arbpl.
APPEND wa_output TO it_output.
CLEAR wa_output.
READ TABLE it_crtx INTO wa_crtx WITH KEY objid = wa_crhd-objid.
wa_output-ktext = wa_crtx-ktext.
APPEND wa_output TO it_output.
CLEAR wa_output.
READ TABLE it_qmfe INTO wa_qmfe WITH KEY qmnum = wa_qmih-qmnum.
wa_output-fecod = wa_qmfe-fecod.
wa_output-fegrp = wa_qmfe-fegrp.
APPEND wa_output TO it_output.
CLEAR wa_output.
READ TABLE it_qmur INTO wa_qmur WITH KEY qmnum = wa_qmih-qmnum.
wa_output-urcod = wa_qmur-urcod.
wa_output-urgrp = wa_qmur-urgrp.
wa_output-urtxt = wa_qmur-urtxt.
APPEND wa_output TO it_output.
CLEAR wa_output.
READ TABLE it_iloa INTO wa_iloa WITH KEY iloan = wa_qmih-iloan.
wa_output-tplnr = wa_iloa-tplnr.
APPEND wa_output TO it_output.
CLEAR wa_output.
ENDLOOP.
SORT it_output.
DELETE ADJACENT DUPLICATES FROM it_output COMPARING qmnum
equnr
ausvn
ausbs
auztv
auztb
iwerk
qmtxt
indtx
priok
strmn
strur
ltrmn
ltrur
vkorg
vtweg
spart
parnr
parvw
arbpl
ktext
fecod
fegrp
urcod
urgrp
urtxt
tplnr.
*CALL FUNCTION 'STATUS_TEXT_EDIT'
*  EXPORTING
*    CLIENT            = SY-MANDT
*    FLG_USER_STAT     = ' '
*    objnr             =
*    ONLY_ACTIVE       = 'X'
*    spras             = en
*    BYPASS_BUFFER     = ' '
*  IMPORTING
*    ANW_STAT_EXISTING =
*    E_STSMA           =
*    LINE              =
*    USER_LINE         =
*    STONR             =
*  EXCEPTIONS
*    OBJECT_NOT_FOUND  = 1
*    OTHERS            = 2.
*IF sy-subrc <> 0.
*  MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
*          WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
*ENDIF.
*CALL FUNCTION 'READ_TEXT'
*  EXPORTING
*    CLIENT                  = SY-MANDT
*    id                      =
*    language                =
*    name                    =
*    object                  =
*    ARCHIVE_HANDLE          = 0
*    LOCAL_CAT               = ' '
*  IMPORTING
*    HEADER                  =
*  TABLES
*    lines                   =
*  EXCEPTIONS
*    ID                      = 1
*    LANGUAGE                = 2
*    NAME                    = 3
*    NOT_FOUND               = 4
*    OBJECT                  = 5
*    REFERENCE_CHECK         = 6
*    WRONG_ACCESS_TO_ARCHIVE = 7
*    OTHERS                  = 8.
*IF sy-subrc <> 0.
*  MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
*          WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
*ENDIF.
*LOOP AT IT_OUTPUT INTO WA_OUTPUT.
*  WRITE : / WA_OUTPUT-qmnum,
*    WA_OUTPUT-equnr,
*    WA_OUTPUT-iloan,
*    WA_OUTPUT-ausvn,
*    WA_OUTPUT-ausbs,
*    WA_OUTPUT-auztv,
*    WA_OUTPUT-auztb,
*    WA_OUTPUT-qmtxt,
*    WA_OUTPUT-indtx,
*    WA_OUTPUT-strmn,
*    WA_OUTPUT-strur,
*    WA_OUTPUT-ltrmn,
*    WA_OUTPUT-ltrur,
*    WA_OUTPUT-objnr,
*    WA_OUTPUT-arbpl,
*    WA_OUTPUT-parnr,
*    WA_OUTPUT-parvw,
*    WA_OUTPUT-objid,
*    WA_OUTPUT-ktext,
*    WA_OUTPUT-fecod,
*    WA_OUTPUT-fegrp,
*    WA_OUTPUT-urcod,
*    WA_OUTPUT-urgrp,
*    WA_OUTPUT-urtxt,
*    WA_OUTPUT-tplnr.
*ENDLOOP.
CALL FUNCTION 'GUI_DOWNLOAD'
EXPORTING
filename = 'E:\CRM1.TXT'
FILETYPE = 'ASC'
APPEND = ' '
write_field_separator = '|'
HEADER = '00'
TRUNC_TRAILING_BLANKS = ' '
WRITE_LF = 'X'
COL_SELECT = ' '
COL_SELECT_MASK = ' '
DAT_MODE = ' '
CONFIRM_OVERWRITE = ' '
NO_AUTH_CHECK = ' '
CODEPAGE = ' '
IGNORE_CERR = ABAP_TRUE
REPLACEMENT = '#'
WRITE_BOM = ' '
TRUNC_TRAILING_BLANKS_EOL = 'X'
WK1_N_FORMAT = ' '
WK1_N_SIZE = ' '
WK1_T_FORMAT = ' '
WK1_T_SIZE = ' '
WRITE_LF_AFTER_LAST_LINE = ABAP_TRUE
TABLES
data_tab = it_output
EXCEPTIONS
FILE_WRITE_ERROR = 1
NO_BATCH = 2
GUI_REFUSE_FILETRANSFER = 3
INVALID_TYPE = 4
NO_AUTHORITY = 5
UNKNOWN_ERROR = 6
HEADER_NOT_ALLOWED = 7
SEPARATOR_NOT_ALLOWED = 8
FILESIZE_NOT_ALLOWED = 9
HEADER_TOO_LONG = 10
DP_ERROR_CREATE = 11
DP_ERROR_SEND = 12
DP_ERROR_WRITE = 13
UNKNOWN_DP_ERROR = 14
ACCESS_DENIED = 15
DP_OUT_OF_MEMORY = 16
DISK_FULL = 17
DP_TIMEOUT = 18
FILE_NOT_FOUND = 19
DATAPROVIDER_EXCEPTION = 20
CONTROL_FLUSH_ERROR = 21
OTHERS = 22.
IF sy-subrc <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF. -
Remove duplicate records in Live Office, caused by CR Groups
hello all
I have a CR report with groups. All works well until I use the report in Live Office, where it duplicates the group data for each of the detail records.
I have removed the details from the CR report, leaving only the group data, but it still happens.
Does anyone have a workaround?
thanks
g
Hi,
First, select the report name from the left panel and check whether the option appears or not; or try right-clicking on any report cell, then go to Live Office and object properties.
Second, are you getting duplicate records in this particular report or in all reports? And how many highlighting experts are you using in this report?
Thanks,
Amit