Create_record, duplicate records n times.

Hi all,
I have a detail data block that has:
icode    totqty    perbox    boxes_required
prod1    20        6         4
prod2    10        4         3
prod3    15        4         4
- boxes_required is totqty/perbox rounded up, i.e. CEIL(totqty/perbox).
I have a button which, when pressed, should bring up a data block
with records based on the 'boxes_required' column values, something like this:
icode    inbox    box no.
prod1    6        1
prod1    6        2
prod1    6        3
prod1    2        4
prod2    4        5
prod2    4        6
prod2    2        7
prod3    4        8
prod3    4        9
prod3    4        10
prod3    3        11
Any ideas how this can be achieved?
Forms 6i, DB 10g
regards...
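
For reference, the boxes_required figure is just the rounded-up quotient; in Oracle SQL that is CEIL(totqty/perbox). A sketch (detail_items is a hypothetical table name):

SELECT icode, totqty, perbox,
       CEIL(totqty / perbox) AS boxes_required   -- e.g. CEIL(20/6) = 4
  FROM detail_items;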

Something like this (not tested):

DECLARE
  nRemaining NUMBER;
  nBoxNumber NUMBER := 0;  -- running box number, continues across products
BEGIN
  GO_BLOCK('DETAIL');
  FIRST_RECORD;
  LOOP
    EXIT WHEN :DETAIL.TOT_QUANTITY IS NULL;  -- stop at an empty record
    nRemaining := :DETAIL.TOT_QUANTITY;
    GO_BLOCK('WHATEVER');
    LOOP
      -- The first record of an empty block is already NEW and can be filled
      -- directly; only create a new record once the current one is in use.
      IF :SYSTEM.RECORD_STATUS != 'NEW' THEN
        CREATE_RECORD;
      END IF;
      nBoxNumber := nBoxNumber + 1;
      :WHATEVER.PRODUCT := :DETAIL.PRODUCT;
      :WHATEVER.BOX_NUM := nBoxNumber;
      IF nRemaining > :DETAIL.PERBOX THEN
        :WHATEVER.INBOX := :DETAIL.PERBOX;          -- a full box
        nRemaining := nRemaining - :DETAIL.PERBOX;
      ELSE
        :WHATEVER.INBOX := nRemaining;              -- last, possibly partial, box
        EXIT;
      END IF;
    END LOOP;
    GO_BLOCK('DETAIL');
    EXIT WHEN :SYSTEM.LAST_RECORD = 'TRUE';
    NEXT_RECORD;  -- advance to the next detail record
  END LOOP;
END;
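
If you want to sanity-check the splitting logic outside Forms first, here is a standalone PL/SQL sketch with the post's sample data hard-coded (run in SQL*Plus with SET SERVEROUTPUT ON; type and variable names are made up):

DECLARE
  -- plain PL/SQL rehearsal of the box-splitting loop above
  TYPE t_item  IS RECORD (icode VARCHAR2(10), totqty NUMBER, perbox NUMBER);
  TYPE t_items IS TABLE OF t_item INDEX BY PLS_INTEGER;
  v_items     t_items;
  nRemaining  NUMBER;
  nBoxNumber  NUMBER := 0;  -- runs on across products, as in the expected output
BEGIN
  v_items(1).icode := 'prod1'; v_items(1).totqty := 20; v_items(1).perbox := 6;
  v_items(2).icode := 'prod2'; v_items(2).totqty := 10; v_items(2).perbox := 4;
  v_items(3).icode := 'prod3'; v_items(3).totqty := 15; v_items(3).perbox := 4;
  FOR i IN 1 .. v_items.COUNT LOOP
    nRemaining := v_items(i).totqty;
    WHILE nRemaining > 0 LOOP
      nBoxNumber := nBoxNumber + 1;
      -- icode, quantity in this box, running box number
      DBMS_OUTPUT.PUT_LINE(v_items(i).icode || '  ' ||
                           LEAST(nRemaining, v_items(i).perbox) || '  ' ||
                           nBoxNumber);
      nRemaining := nRemaining - v_items(i).perbox;
    END LOOP;
  END LOOP;
END;
/

It prints the eleven rows from the expected output (prod1 6/6/6/2, prod2 4/4/2, prod3 4/4/4/3, box numbers 1 to 11).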

Similar Messages

  • Duplicate records at a particular time

    Hi, I am getting duplicate records at a particular point of time. Could you please help me in this regard?

    The drive on which I installed Informatica ran out of disc space, and I found this in the error log:

    SF_34125 Error in writing storage file [C:\Informatica\9.0.1\server\infa_shared\Storage\pmservice_Domain_ssintr01_INT_SSINTR01_1314615470_0.dat]. System returns error code [errno = 28], error message [No space left on device].

    I then shut down the integration service and freed up some space on the disc, and got the following in the log file:

    LM_36047 Waiting for all running workflows to complete.
    SF_34014 Service [INT_SSINTR01] on node [node01_ssintr01] shut down.

    When I tried to start the integration service again, I got the following error:

    Could not execute action... The Service INT_SSINTR01 could not be enabled due to the following error: [DOM_10079] Unable to start service [INT_SSINTR01] on any node specified for the service.

    After this I could not find any entry in the log file for the integration service, so I went to the domain log for more details and found this sequence (it appears twice, back to back):

    DOM_10126 Request to disable [SERVICE] [INT_SSINTR01] in [COMPLETE] mode.
    DOM_10130 Stop service process for [SERVICE] [INT_SSINTR01] on node [node01_ssintr01].
    LIC_10040 Service [INT_SSINTR01] is stopping on node [node01_ssintr01].
    SPC_10015 Request to stop process for service [INT_SSINTR01] with mode [COMPLETE] on node [node01_ssintr01].
    DOM_10127 Request to disable service [INT_SSINTR01] completed.
    DOM_10126 Request to disable [SERVICE] [Repo_SSINTR01] in [ABORT] mode.
    DOM_10130 Stop service process for [SERVICE] [Repo_SSINTR01] on node [node01_ssintr01].
    LIC_10042 Repository instance [Repo_SSINTR01] is stopping on node [node01_ssintr01].
    SPC_10015 Request to stop process for service [Repo_SSINTR01] with mode [ABORT] on node [node01_ssintr01].
    DOM_10127 Request to disable service [Repo_SSINTR01] completed.
    DOM_10115 Request to enable [service] [Repo_SSINTR01].
    DOM_10117 Starting service process for service [Repo_SSINTR01] on node [node01_ssintr01].
    SPC_10014 Request to start process for service [Repo_SSINTR01] on node [node01_ssintr01].
    SPC_10018 Request to start process for service [Repo_SSINTR01] was successful.
    SPC_10051 Service [Repo_SSINTR01] started on port [6,019] successfully.
    DOM_10118 Service process started for service [Repo_SSINTR01] on node [node01_ssintr01].
    DOM_10121 Selecting a primary service process for service [Repo_SSINTR01].
    DOM_10120 Service process on node [node01_ssintr01] has been set as the primary node of service [Repo_SSINTR01].
    DOM_10122 Request to enable service [Repo_SSINTR01] completed.
    LIC_10041 Repository instance [Repo_SSINTR01] has started on node [node01_ssintr01].
    DOM_10115 Request to enable [service] [INT_SSINTR01].
    DOM_10117 Starting service process for service [INT_SSINTR01] on node [node01_ssintr01].
    SPC_10014 Request to start process for service [INT_SSINTR01] on node [node01_ssintr01].
    DOM_10055 Unable to start service process [INT_SSINTR01] on node [node01_ssintr01].
    DOM_10079 Unable to start service [INT_SSINTR01] on any node specified for the service.

    Then I tried shutting down the domain and restarting the Informatica service again. I got the following error when the integration service was initialized:

    DOM_10115 Request to enable [service] [INT_SSINTR01].
    DOM_10117 Starting service process for service [INT_SSINTR01] on node [node01_ssintr01].
    SPC_10014 Request to start process for service [INT_SSINTR01] on node [node01_ssintr01].
    SPC_10009 Service process [INT_SSINTR01] output [Informatica(r) Integration Service, version [9.0.1], build [184.0604], Windows 32-bit].
    SPC_10009 Service process [INT_SSINTR01] output [Service [INT_SSINTR01] on node [node01_ssintr01] starting up.].
    SPC_10009 Service process [INT_SSINTR01] output [Logging to the Windows Application Event Log with source as [PmServer].].
    SPC_10009 Service process [INT_SSINTR01] output [Please check the log to make sure the service initialized successfully.].
    SPC_10008 Service Process [INT_SSINTR01] output error [ERROR: Unexpected condition at file:[..\utils\pmmetrics.cpp] line:[2118]. Application terminating. Contact Informatica Technical Support for assistance.].
    SPC_10012 Process for service [INT_SSINTR01] terminated unexpectedly.
    DOM_10055 Unable to start service process [INT_SSINTR01] on node [node01_ssintr01].
    DOM_10079 Unable to start service [INT_SSINTR01] on any node specified for the service.

    I tried creating a new integration service and associated it with the same repository; I got the same error. I then tried creating a new repository and a new integration service; even then I got the same error. What might be the workaround to start the integration service?

  • How to suppress duplicate records in rtf templates

    Hi All,
    I am facing issue with payment reason comments in check template.
    We are displaying payment reason comments. The issue is that when making a batch payment we get multiple payment reason comments from multiple invoices with the same name, and it doesn't look good. You can see the payment reason comments under the tail number text field in the template.
    Could you provide any XML syntax to suppress duplicate records so that only distinct payment reason comments are shown?
    Attached screen shot, template and xml file for your reference.
    Thanks,
    Sagar.

    I have CR XI, so the instructions are for this release.
    You can create a formula; I called it Cust_Matches:
    if {your.field} = previous({your.field}) then 'true' else 'false'
    In your GH2 section, right-click the field, select Format Field, and select the Common tab (far left at the top).
    Select the x/2 button to the right of Suppress, and in the formula field type:
    {@Cust_Matches} = 'true'
    Now every time {@Cust_Matches} is 'true', the CustID will be suppressed.
    Do the same with the other fields you wish to hide, i.e. Address, City, etc.

  • USE of PREVIOUS command to eliminate duplicate records in counter formula

    I'm trying to create a counter formula to count the number of documents paid over 30 days. To do this I subtract the InvDate from the PayDate and then create a counter based on this value: if {days to pay} is greater than 30 then 1 else 0.
    Then I sum that counter for each group. Groups are company, month, and supplier.
    Because invoices can have multiple payments and payments can have multiple invoices, there is no way around having duplicate records for the field.
    So my counter is distorted by the duplicate records, and my percentage-of-payments-over-30-days formula will not be accurate due to these duplicates.
    I've tried Distinct Count based on this formula, and it works except that it counts 0.00 as a distinct record, so my total is off by 1 for summaries with a record where {days to pay} is less than or equal to 30.
    If I subtract 1 from the formula then it will be inaccurate for summaries with no records over 30 days.
    So I've come to this (previous() compares the document's RPDOC ID):
    if previous({RPDOC ID}) <> {RPDOC ID}
    then
      (if {days to pay} > 30
       then 1
       else 0.00)
    else 0.00
    but it doesn't work. I've sorted the detail section by
    Does anyone have any knowledge or success using the previous() function in a report?

    So, you have to include all data and not just use the selection criteria 'PayDate-InvDate>30'?
    You will need to create a running total on the RPDOC ID, one for each section you need to show a count for, evaluating on your >30-day formula.
    I don't understand why you're telling the formula to return 0.00 in your if statement.
    In order to get percentages you'll need to use the distinct count (possibly running totals again, but this time no formula). Then in each section you'd need a formula that divides the two running totals.
    I may not have my head around the concept, since you stated "invoices can have multiple payments and payments can have multiple invoices". So invoice A can have payments 1, 2 and 3, and payment 4 can be associated with invoices B and C? Ugh. Still, you're evaluating every row of data. If your focus is the invoices that took longer than 30 days to be paid, I'd group on the invoice number, put the "if PayDate-InvDate>30 then 1 else 0" formula in the detail, sum it in the group footer, and base my running total on the sum being >0 to do a distinct count of invoices.
    Hope this points you in the right direction.
    Eric

  • Duplicate records problem

    Hi everyone,
    I'm having a little difficulty resolving a problem with a repeating field causing duplication of data in a report I'm working on, and was hoping someone on here can suggest something to help!
    My report is designed to detail library issues during a particular period, categorised by the language of the item issued. My problem is that on the SQL database that our library management system uses, it is possible for an item to have more than one language listed against it (some books will be in more than one language). When I list the loan records excluding the language data field, I get a list of distinct loan records. Bringing the language data into the report causes the loan record to repeat for each language associated with it, so if a book is both in English and French, the loan record will appear like this:
    LOAN RECORD NO.     LANGUAGE CODE
      123456                             ENG
      123456                             FRE
    So, although the loan only occurred once I have two instances of it in my report.
    I am only interested in the language that appears first and I can exclude duplicated records from the report page. I can also count only the distinct records to get an accurate overall total. My problem is that when I group the loan records by language code (I really need to do this as there are millions of loan records held in the database) the distinct count stops being a solution, as when placed at this group level it only excludes duplicates in the respective group level it's placed in. So my report would display something like this:
    ENG     1
    FRE      1
    A distinct count of the whole report would give the correct total of 1, but a cumulative total of the figures calculated at the language code group level would total 2, and be incorrect. I've encountered similar results when using Running Totals evaluating on a formula that excludes repeated loan record no.s from the count, but again when I group on the language code this goes out of the window.
    I need to find a way of grouping the loan records by language with a total count of loan records alongside each grouping that accurately reflects how many loans of that language took place.
    Is this possible using a calculation formula when there are repeating fields, or do I need to find a way of merging the repeating language fields into one field so that the report would appear like:
    LOAN RECORD     LANGUAGE CODE
      123456                      ENG, FRE
    Any suggestions would be greatly appreciated, as aside from this repeating language data there are quite a few other repeating database fields on the system that it would be nice to report on!
    Thanks!
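
    If the data can be reshaped on the SQL side instead, the merged layout described above can be produced with LISTAGG. A sketch, assuming Oracle 11gR2 or later and made-up table/column names (older releases would need a collect-based workaround):

    -- one row per loan, all language codes folded into a single column
    SELECT loan_record_no,
           LISTAGG(language_code, ', ')
             WITHIN GROUP (ORDER BY language_code) AS language_codes
      FROM loan_languages
     GROUP BY loan_record_no;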

    If you create a group by loan, then a group by language within it, and place the values in the group headers (loan ID in the loan header), you should only see each loan ID once. Place the language in the language group and you should only see that one time; a group header returns the first value of a unique ID.
    Then, in order to calculate while avoiding the duplicates, use manual running totals.
    Create a set for each summary you want, and make sure each set has a different variable name.
    MANUAL RUNNING TOTALS
    RESET
    The reset formula is placed in a group header (or report header) to reset the summary to zero for each unique record it groups by:
    whileprintingrecords;
    NumberVar X := 0;
    CALCULATION
    The calculation is placed adjacent to the field or formula that is being calculated. (If there are duplicate values, create a group on the field being calculated on; if there are no duplicate records, the detail section is used.)
    whileprintingrecords;
    NumberVar X := X + 1; // or your formula
    DISPLAY
    The display is the sum of what is being calculated. This is placed in a group, page or report footer (generally the group footer of the group whose header holds the reset):
    whileprintingrecords;
    NumberVar X;
    X

  • Duplicate record with same primary key in Fact table

    Hi all,
    Can the fact table have duplicate records with the same primary key? When I checked a cube I could see records with the same primary-key combination but different key figure values. My cube has 6 dimensions (including Time, Unit and DP) and 2 key figures, so 6 fields combine to form the composite primary key of the fact table. When I checked the records in SE16 I could see duplicate records with the same primary key. There is no parallel loading happening for the cube.
    BW system version is 3.1
    Database: Oracle 10.2
    I am not sure how is this possible.
    Regards,
    PM

    Hi Krish,
    I checked the data packet dimension also. Both records have the same dimension ID (141). Except for the key figure value there is no other difference between the fact table records. I know this is against the basic DBMS primary-key rule, but I have records like this in the cube.
    Can this situation arise when the same record is in different data packets of the same request?
    Thx,
    PM
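
    A quick way to confirm how many fact rows share a key combination is a GROUP BY ... HAVING check. A sketch only; the real fact-table and dimension-key column names depend on the cube:

    -- any combination returning a count above 1 violates the composite key
    SELECT key_time, key_unit, key_datapacket, key_dim1, key_dim2, key_dim3,
           COUNT(*) AS row_count
      FROM fact_table
     GROUP BY key_time, key_unit, key_datapacket, key_dim1, key_dim2, key_dim3
    HAVING COUNT(*) > 1;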

  • Duplicate Records & CFTRANSACTION

    Maybe I'm missing the obvious on this, as I've never had a
    problem with this before, but recently I developed a custom tag
    that logs and blocks the IP addresses of all these recent DECLARE
    SQL injection attempts.
    The issue, however, is that my blocked-IP table seems to be getting duplicates here and there. The "datetime" field in the database shows the duplicates are all added to the database the exact same second. What gives?
    Shouldn't CFTRANSACTION be preventing such a thing, even if
    multiple injection attempts come at the same time?

    I've always coded my applications so that my primary key is my database's autonumber field, and instead ensure my coding is solid enough to prevent duplicate records from appearing. Oddly enough it's worked flawlessly until now.
    Would ColdFusion not throw errors if I made the "ip_address" field my primary key and my code allowed a duplicate record to be entered? Am I interpreting the CFTRANSACTION code to do something it doesn't do?
    Also, the duplicates aren't causing problems, so a DISTINCT
    select isn't necessary. The IP address is blocked whether one
    record or fifty exist in my blocked_ip_addresses table. My goal is
    just not to waste database space.
    Any further help you can provide is MUCH appreciated!
    Thanks!
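
    Two things usually close this kind of race regardless of CFTRANSACTION: a unique constraint on the address column, so the database itself rejects the second insert however the requests interleave, and an insert guarded by NOT EXISTS. An Oracle-flavored sketch with assumed table/column names:

    -- let the database enforce uniqueness; concurrent duplicates then fail fast
    ALTER TABLE blocked_ip_addresses
      ADD CONSTRAINT uq_blocked_ip UNIQUE (ip_address);

    -- insert only when the address is not already present
    INSERT INTO blocked_ip_addresses (ip_address, date_added)
    SELECT '10.0.0.1', SYSDATE
      FROM dual
     WHERE NOT EXISTS (SELECT 1
                         FROM blocked_ip_addresses
                        WHERE ip_address = '10.0.0.1');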

  • Duplicate records in input structure of model node

    Hi,
    Following is the way, I am assigning data to a model node:
    //Clearing the model input node
    for (int i = wdContext.nodeInsppointdata().size(); i > 0; i--)
         wdContext.nodeInsppointdata().removeElement(wdContext.nodeInsppointdata().getElementAt(i - 1));
    //Creating element of the input model node
    IPrivateResultsView.IInsppointdataElement eleInspPointData;
    //START A
    Bapi2045L4 objBapi2045L4_1 = new Bapi2045L4(); //Instance of the input structure type
    //Populating data
    eleInspPointData = wdContext.nodeInsppointdata().createInsppointdataElement(objBapi2045L4_1);
    wdContext.nodeInsppointdata().addElement(eleInspPointData);
    eleInspPointData.setInsplot(wdContext.currentContextElement().getInspectionLotNumber());
    eleInspPointData.setInspoper("0101");
    //Inspection_Validate_Input is the model node. Adding instance to main node
    wdContext.currentInspection_Validate_InputElement().modelObject().addInsppointdata(objBapi2045L4_1);
    //STOP A
    //Now executing the RFC
    The above code seems fine and works very well the first time. But when the user clicks the same button a second time, I can see duplicate records getting passed to the RFC [debugged using an external breakpoint]. When I send 4 records, I can see a total of 6 records; the number keeps increasing with every click of the button.
    I am adding multiple records to the input model node using the code from START A to STOP A. Does the code look fine? Why do I see multiple records?
    Thanks,
    Sham

    Issue solved.
    After executing the RFC, I used the following code to clear the input model node:
    try {
        wdContext.current<yourBAPI>_InputElement().modelObject().get<yourinputnode>().clear();
    } catch (Exception e1) {
        // nothing to clear; safe to ignore
    }
  • Duplicate records-problems

    Hi gurus
    We created a text datasource in R/3 and replicated it into BW 7.0
    An infopackage (loading to PSA) and DataTransferProcess was created and included in a process chain.
    The job failed because of duplicate records.
    We now discovered that the setting "Delivery of Duplicate records" for this DataSource in BW is set to "Undefined".
    When creating the DataSource in R/3, there were no settings for the "Delivery of duplicate records".
    In BW, I've tried to change the setting "Delivery of Duplicate data records" to NONE, but when I go into change mode, the "Delivery of duplicate" setting is not changeable.
    Does anyone have any suggestion on how to solve this problem?
    Thanks,
    @nne Therese

    Hi Muraly,
    I do have the same issue. I am loading texts from R/3 to PSA using an InfoPackage with full update. From PSA I am using a DTP with delta, with the option "valid records update, no reporting (request record)".
    It was running fine for the last few weeks: the transferred and added records matched the PSA request every day.
    Suddenly the load to the InfoObject failed. I deleted the request from the InfoObject and reloaded using the DTP; it failed again. I tried loading with full update (as it is texts); it failed again. When I analysed the error it reported duplicate records. So I changed the DTP by checking the option "Handling Duplicate Records" and loaded with full update. It worked fine: it transferred more than 50000 records and the added records exactly matched the PSA request.
    I reset the DTP back to delta and loaded today, but the transferred records are 14000 and the added records (3000) match the PSA request. Normally, looking at the load history, the numbers of transferred and added records in the InfoObject match the number of records in the PSA request every day.
    Why this difference now? In Production I have no issues. Since I changed the DTP, will transporting it to Production make any difference? This is my first time on BI 7.0.
    Please advise, and correct me if I am wrong.
    Thanks,
    Sudha..

  • Duplicate Records generating for infocube.

    hi all,
    When I load the data from the DataSource to the InfoCube I get duplicate records. The data is loaded into the DataSource from a flat file. When I executed it the first time it worked, but after I changed the flat file structure I do not get the modified content in the InfoCube; instead it shows duplicates. The DataSource's 'preview data' option shows the required data (i.e. the modified flat file), but the InfoCube still shows duplicates, even though I made all the necessary changes in the DataSource, InfoCube, InfoPackage and DTP. I even deleted the data in the InfoCube and still get the duplicates. What is the ideal solution for this problem? One way is to create a new DataSource with the modified flat file, but I think that is not ideal. What is the possible solution without creating the DataSource again?

    Finally I got it. I deleted the request IDs in the InfoPackage (right-click the InfoPackage and go to Manage), then executed the transformation and the DTP, and finally got the required output without duplicates.

  • Duplicate Records in Details for ECC data source. Help.

    Hello. First post on SDN. I have been searching prior posts but have come up empty. I am in the middle of creating a report linking directly into 4 tables in ECC 6.0. I am having trouble either getting the table links set up correctly or filtering out duplicate record sets that are being reported in the details section of my report. It appears that I have 119 records displayed when the parameter values should only yield 7. The details section is repeating the 7 records 17 times (there are 17 matching records for the parameter choices in one of the other tables, which I think is the cause).
    I think this is due to the other table links for my parameter values. But I need to keep the links the way they are for other aspects of the report (header information). The tables in question are using an Inner Join, Enforced Both, =. I tried the other link options with no luck.
    I am unable to use the "Select Distinct Records" option in the Database menu since this is not supported when connecting to ECC.
    Any ideas would be greatly appreciated.
    Thanks,
    Barret
    PS. I come from more of a Functional background, so development is sort of new to me. Take it easy on the newbie.

    If you can't establish links that bring back unique data, then use a group to display the data.
    Group the report by a field which is the lowest common denominator.
    Move all fields into the group footer and suppress the group header and details.
    You will not be able to use normal summaries, as they will count/sum all the duplicated data; use Running Totals instead and select evaluate on change of the introduced group.
    Ian

  • Remove Duplicate record

    Dear All,
    I have oracle 10g R2 On windows.
    I have table structure like below...
    ASSIGNED_TO
    USER_ZONE
    CREATED
    MASTER_FOLIO_NUMBER
    NAME
    A_B_BROKER_CODE
    INTERACTION_ID
    INTERACTION_CREATED
    INTERACTION_STATE
    USER_TEAM_BRANCH
    A4_IN_CALL_TYPE
    A5_IN_CALL_SUBTYPE
    DNT_AGING_IN_DAYS
    DNT_PENDING_WITH
    DNT_ESCALATION_STAGE_2
    DT_UPDATE

    I use SQL*Loader to load the data from a .csv file into the Oracle table, and I assign dt_update the value SYSDATE; every time I execute the SQL*Loader control file, dt_update is set to SYSDATE.
    Sometimes a problem occurs while inserting data through SQL*Loader and only half the rows get inserted. After solving the problem I execute SQL*Loader again, and hence these duplicate records get inserted.
    Now I want to remove all the duplicate records among those whose dt_update is the same.
    Please help me to solve the problem
    Regards,
    Chanchal Wankhade.

    Galbarad wrote:
    Hi
    I think you have two ways:
    first, if this was the first import into your table, you can delete all records from the table and run the import one more time;
    second, you can delete only the duplicate records without re-running the import.
    Try this script:
    delete from YOUR_TABLE
    where rowid in (select min(rowid)
    from YOUR_TABLE
    group by ASSIGNED_TO,
    USER_ZONE,
    CREATED,
    MASTER_FOLIO_NUMBER,
    NAME,
    A_B_BROKER_CODE,
    INTERACTION_ID,
    INTERACTION_CREATED,
    INTERACTION_STATE,
    USER_TEAM_BRANCH,
    A4_IN_CALL_TYPE,
    A5_IN_CALL_SUBTYPE,
    DNT_AGING_IN_DAYS,
    DNT_PENDING_WITH,
    DNT_ESCALATION_STAGE_2,
    DT_UPDATE)
    Have you ever tried that script for deleting duplicates? I think not. If you had, you'd find it deletes non-duplicates too. You'd also find that it only deletes the first duplicate where there are duplicates.
    XXXX> CREATE TABLE dt_test_dup
      2  AS
      3  SELECT
      4      mod(rownum,3) id
      5  FROM
      6      dual
      7  CONNECT BY
      8      level <= 9
      9  UNION ALL
    10  SELECT
    11      rownum + 3 id
    12  FROM
    13      dual
    14  CONNECT BY
    15      level <= 3
    16  /
    Table created.
    Elapsed: 00:00:00.10
    XXXX> select * from dt_test_dup;
            ID
             1
             2
             0
             1
             2
             0
             1
             2
             0
             4
             5
             6
    12 rows selected.
    Elapsed: 00:00:00.18
    XXXX> delete
      2  from
      3      dt_test_dup
      4  where
      5      rowid IN ( SELECT
      6                    MIN(rowid)
      7                 FROM
      8                     dt_test_dup
      9                 GROUP BY
    10                     id
    11                )
    12  /
    6 rows deleted.
    Elapsed: 00:00:00.51
    XXXX> select * from dt_test_dup;
            ID
             1
             2
             0
             1
             2
             0
    6 rows selected.
    Elapsed: 00:00:00.00
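
    For the record, the usual corrected form inverts the condition: keep the MIN(rowid) of every group and delete everything else, so unique rows survive and every extra copy goes. A sketch against the poster's table, not tested here:

    DELETE FROM your_table
    WHERE rowid NOT IN (SELECT MIN(rowid)
                          FROM your_table
                         GROUP BY assigned_to, user_zone, created,
                                  master_folio_number, name, a_b_broker_code,
                                  interaction_id, interaction_created,
                                  interaction_state, user_team_branch,
                                  a4_in_call_type, a5_in_call_subtype,
                                  dnt_aging_in_days, dnt_pending_with,
                                  dnt_escalation_stage_2, dt_update);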

  • Duplicate records error?

    hello all
    While extracting master data I am getting a duplicate records error. How do I rectify this?
    On the InfoPackage screen, in the Processing tab, will I get the option "ignore double data records"? When is this option enabled?
    regards

    Hello
    This option is available only for Master Data, not for Transactional Data. You can control duplicate records for transactional data in the ODS; there is an option in the ODS settings.
    From the F1 help:
    Flag: Handling of duplicate data records
    From BW 3.0 you can determine for DataSources for master data attributes and texts whether the extractor transfers more than one data record in a request for a value belonging to time-independent master data.
    Independently of the extractor settings (the extractor potentially delivers duplicate data records) you can use this indicator to tell the BW whether or not you want it to handle any duplicate records.
    This is useful if the setting telling the extractor how to handle duplicate records is not active, but the system is told from another party that duplicate records are being transferred (for example, when data is loaded from flat files).
    Sankar

  • Duplicate record identifier and update

    My records look like 
    Name City Duplicateindicator 
    SAM   NYC   0
    SAM   NYC1 0
    SAM    ORD  0
    TAM   NYC  0
    TAM   NYC1  0 
    DAM   NYC  0  
    For some reason numeric characters were inserted into City, which duplicated my records.
    I need to check for duplicate records by Name (if the name repeats) and then check City; rows whose cities share the same base name (NYC and NYC1 are considered the same city here) count as duplicates. I am OK with doing this for one city at a time.
    SAM has a duplicate record across NYC and NYC1, so the row "SAM   NYC1   0" must be updated to "SAM   NYC1   1".

    Good day tatva
    Since the city names are not exactly the same, you will need to parse the text somehow in order to clean the numbers from the name. This is best done with SQLCLR using a regular expression (if this fits your need, I can post the CLR code for you).
    In this case you use a simple regular-expression replace function.
    On the result of the function you use a simple query with ROW_NUMBER() OVER (PARTITION BY RegularExpressionReplace(ColumnName, '[0-9]') ORDER BY ColumnName).
    In that result, every row with ROW_NUMBER greater than 1 is a duplicate.
    I hope this is useful :-)
      Ronen Ariely
     [Personal Site]    [Blog]    [Facebook]
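
    As an illustration of that approach, here is a sketch in Oracle-flavored SQL, where the built-in REGEXP_REPLACE stands in for the SQLCLR regex function mentioned above (table and column names are assumed from the post):

    -- flag every row after the first within each (name, digit-stripped city) group
    UPDATE records
       SET duplicateindicator = 1
     WHERE rowid IN (
             SELECT rid
               FROM (SELECT rowid AS rid,
                            ROW_NUMBER() OVER (
                              PARTITION BY name,
                                           REGEXP_REPLACE(city, '[0-9]', '')
                              ORDER BY city) AS rn
                       FROM records)
              WHERE rn > 1);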

  • Remove duplicate records in Live Office, caused by CR Groups

    hello all
    I have a CR with groups. All works well until I use the report in Live Office, where it duplicates the group data for each of the detail records.
    I have removed the details from the CR report, leaving only the group data, but it still happens.
    Anyone have a workaround?
    thanks
    g

    Hi,
    First, select the report name from the left panel and check whether the option appears or not; or try right-clicking on any report cell, then go to Live Office and Object Properties.
    Second, are you getting duplicate records in this particular report or in all reports? And how many highlight experts are you using in this report?
    Thanks,
    Amit
