SRM issue with Invoice delta loads

I am loading data into DSO 0SRIV_D3 (Invoices) from SRM, and duplicate data records are being updated into the DSO. The extractor for this DSO, 0SRM_TD_IV, is of type AIMD, which should deliver only after-image delta records on changes.
Can someone advise what needs to be done to avoid the duplication of data records?
In the DataSource settings, 'Delivery of Duplicate Data Records' is set to "Undefined". I have tried to change this setting to "None", but it is greyed out and cannot be changed in BI. Any help?
Thanks.
Ramesh

I have been wondering about the same thing. Some information would be greatly appreciated. Anyone?
Thanks and best regards
debru
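
A hedged workaround sketch (an assumption-laden illustration, not a confirmed fix): if the DataSource flag cannot be changed, duplicates within one request can be dropped in the transformation's start routine before they reach the DSO. The key field GUID below is a placeholder, so substitute the real semantic key of 0SRIV_D3; RECORD is the package record number carried in standard BI 7.x transformation source structures.

    " Hedged sketch for a BI 7.x start routine: keep only the latest
    " record per document key within the incoming package.
    " GUID is a hypothetical key field -- use the real key of 0SRIV_D3.
    SORT SOURCE_PACKAGE BY guid ASCENDING record DESCENDING.
    DELETE ADJACENT DUPLICATES FROM SOURCE_PACKAGE COMPARING guid.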

Similar Messages

  • Issues with 0PP_C03 delta loads

    Hi,
    I loaded the data from 2LIS_04_P_ARBPL & 0PP_WCCP into 0PP_C03. Initialization of the data load went fine. I ran the delta loads to BI and observed that I am not getting any of the key figure data in the cube; all the values show as zero.
    When I looked at the PSA table and at RSA3, I saw that two entries were created for every order, one with +ve values and another with -ve values, so when updating to the cube they nullify each other. Because of this I am not able to view the latest data updated via deltas.
    I am not sure what settings I missed. Could someone please help me fix this issue.
    Thanks & Regards,
    Shanthi.

    Thanks Francisco Milan and Shilpa for the links. They are very useful, but I still wasn't able to find the cause of the issue.
    My DataSource 2LIS_04_P_ARBPL is of ABR type and the update mode for the key figures is Summation. At the DataSource level I am getting the values with before & after image (the same entries, one with +ve and another with -ve values), and as I am using Summation as my update type for the key figures, they get nullified.
    Because of this I cannot get the values in my report. Could anyone please help me on this, as I am approaching the go-live date and need to address this issue immediately. Thanks for all your inputs.
    Regards,
    Shanthi.
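
    To make the netting concrete, here is a minimal illustration (a hypothetical standalone report, not extractor or update rule code) of how an ABR before/after image pair behaves under Summation: the pair nets to the change, so a net of zero means both images carried identical values.

      REPORT zdemo_abr_images.
      " Hypothetical sketch: an ABR change arrives as a before image that
      " negates the old values plus an after image with the new values.
      TYPES: BEGIN OF ty_delta,
               aufnr(12) TYPE c,  " order number
               menge     TYPE i,  " quantity key figure
             END OF ty_delta.
      DATA: lt_delta TYPE STANDARD TABLE OF ty_delta,
            ls_delta TYPE ty_delta,
            lv_net   TYPE i.
      " Quantity changed from 100 to 120 on order 4711
      ls_delta-aufnr = '4711'. ls_delta-menge = -100.  " before image
      APPEND ls_delta TO lt_delta.
      ls_delta-aufnr = '4711'. ls_delta-menge = 120.   " after image
      APPEND ls_delta TO lt_delta.
      " Summation in the update rules adds both images together
      LOOP AT lt_delta INTO ls_delta.
        lv_net = lv_net + ls_delta-menge.
      ENDLOOP.
      " Prints 20, the net change. Zero would mean the images were equal.
      WRITE: / 'Net posted to cube:', lv_net.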

  • Issue with invoice

    Hi Experts
    I am facing an issue with an invoice. When I cancel an invoice, the system does not clear the accounting document for the original invoice, nor does it clear the accounting document for the cancellation invoice.
    Also, if I look at the customer line item of the accounting document for the cancelled invoice, instead of posting key 12 (Reverse Invoice) it takes 11 (Credit Memo), which is quite unusual, and as far as I know this is standard behaviour and not modifiable.
    Kindly let me know if you can suggest any solution or comments on this issue.
    Thanks

    Hi,
    Check the copy controls in VTFF.
    The cancellation invoice should have the original invoice number in its header, and the original invoice should carry the cancellation indicator in its header.
    Check whether those statuses appear in the respective documents.
    Also, the delivery or order with reference to which the invoice was created will become open or 'being processed' again.
    Also try running the standard program for reorganization of status indexes in SE38, program RVV05IVB.
    Regards,
    Amit

  • Issue in the Delta load using RDA

    Hi All,
    I am facing an issue while trying to load a delta using RDA from an R/3 source system.
    Following are the steps I followed:
    1. Created a real-time generic DataSource with a timestamp as the delta-specific field and replicated it to BI.
    2. First I created the InfoPackage (initialization with data transfer) and loaded up to the PSA.
    3. Created a standard DTP to load to the DSO and activated the data.
    4. Then created a real-time delta InfoPackage and assigned it to a daemon.
    5. Converted the standard DTP to a real-time DTP and assigned it to the same daemon.
    6. Started the daemon with an interval of 5 minutes.
    The first time, the initialization with data transfer picks up the records correctly. But when I run the daemon to take the delta records, it takes all the records again, i.e. both the previously loaded historical data and the delta records.
    Also, after the first delta run, the request status in the daemon monitor for both the InfoPackage and the DTP changes to red, and the daemon stops automatically.
    Can anyone please help me solve these issues.
    Thanks & Regards,
    Salini.

    Salini S wrote:
    The first time, the initialization with data transfer picks up the records correctly. But when I run the daemon to take the delta records, it takes all the records again, i.e. both the previously loaded historical data and the delta records.
    If I understand you correctly, you initially did a full load, yes? Well, next you need to do an initialization and after that the delta.
    The reason is that if you select delta initialization without data transfer, the delta queue is initialized now, and the next time you do a delta load it will pick up only the changed records.
    If you select delta initialization with data transfer, the delta queue is initialized and the records are picked up in the same load.
    As you know your targets will receive the changed records from the delta queue.
    Salini S wrote:
    Also, after the first delta run, the request status in the daemon monitor for both the InfoPackage and the DTP changes to red, and the daemon stops automatically.
    I take it the InfoPackage has run successfully? Did you check? If it has and the error is on the DTP, then I suggest the following.
    At runtime, erroneous data records are written to an error stack if error handling is activated for the data transfer process. You use the error stack to update the data to the target once the error is resolved.
    To resolve the error, in the monitor for the data transfer process you can navigate to PSA maintenance by choosing Error Stack in the toolbar, and display and edit the erroneous records in the error stack.
    I suggest you create an error DTP for the active data transfer process on the Update tab page (if the key fields of the error stack for DataStore objects are to overwrite, define the key fields of the error stack on the Extraction tab page under Semantic Groups). The error DTP uses full update mode to extract data from the error stack (in this case, the source of the DTP) and transfer it to the target that you have already defined in the data transfer process. Once the data records have been successfully updated, they are deleted from the error stack. If any data records are still erroneous, they are written to the error stack again in a new error DTP request.
    As I'm sure you know, when a DTP request is deleted, the corresponding data records are also deleted from the error stack.
    I hope the above helps you.

  • Performance issues with Planning data load & Agg in 11.1.2.3.500

    We recently upgraded from 11.1.1.3 to 11.1.2.3. Post-upgrade we are facing performance issues with one of our Planning jobs (e.g. Job E). It takes 3x the time to complete in our new environment (11.1.2.3) compared to the old one (11.1.1.3). This job loads the actual data and then does the aggregation. The pattern we noticed is: if we run a restructure on the application and execute this job immediately, it completes in the same time as on 11.1.1.3. However, in current production (11.1.1.3) the jobs run in the sequence Job A -> Job B -> Job C -> Job D -> Job E and complete on time, but if we run the same test in 11.1.2.3 in the above sequence, Job E takes 3x the time. We don't have a window to restructure the application before running Job E every time in production. The specs of the new environment are much higher than the old one.
    We have Essbase clustering (MS active/passive) in the new environment and the files are stored on a SAN drive. Could this be the cause? Has anyone faced performance issues in a clustered environment?

    Do you have exactly the same Essbase config settings and calculations performing the AGG? Remember, something very small like UPDATECALC ON/OFF can make a BIG difference in timing.

  • Performance issues with data warehouse loads

    We have performance issues with our data warehouse ETL load process. I have run analyze and dbms_stats and checked the database environment. What other things can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
    Scott

    Hi,
    you should analyze the DB after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    make sure your sequences cache values (ALTER SEQUENCE s CACHE 10000);
    drop all unneeded indexes while loading and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct-path load? Or do you already load direct-path?
    Dim

  • I'm using Firefox 3.6.23 on Windows 2000 and lately I've been having issues with web pages loading; I need help figuring out why.

    Some of the issues: '''1'''. A page loads blank, showing the web address in the tab; reloading usually gives no result (the problem I have the most!). '''2'''. The message "problem loading page, server not found" appears (at different times and on different websites). '''3.''' A page appears with the message "oops link broke, DNS error, server not found" and then lists suggested links to fix-it sites, though all the links lead to IGEARED.COM (new issue). I don't know what to do; please help me identify and fix my issue(s) in plain English, I'm kind of a newbie : ). thx

    Try this.
    Type in the address bar about:config.
    Accept the warning.
    In the page that appears, type network.http.max-connections in the Filter box. Change the value to 32 (it is probably set to 256 in your case).
    Close that page. Restart the browser.

  • Performance issue with iSetup for loading fnd_flex_value_norm_hierarchy recs

    Hi
    The customer site where I am currently working has implemented iSetup to load data from Hyperion DRM to Oracle GL. They are currently on the 11i.AZ.F patch level.
    The customer has constantly had problems in two areas with iSetup:
    1. iSetup has a limitation that all existing children records, plus the new ones being added, have to be sent in the XML file provided as input to iSetup. For example, one parent ENTXXX might have 15000 children records, i.e. records in FND_FLEX_VALUE_NORM_HIERARCHY. Now, when 2 children are removed from this parent in Hyperion DRM and these two records are sent to Oracle iSetup to be deleted from the parent, the current implementation stipulates that iSetup requires the remaining 14998 records in the XML input file. That is how it knows to remove the two nodes. This is a huge performance issue.
    They are looking at loading into iSetup only the two deleted nodes, with an action code of DELETE, and letting iSetup handle this, rather than having to provide the entire list of children less the removed nodes.
    2. Can we load the final XML file by calling a Java class/process directly, rather than going through the iSetup Loader concurrent request?
    3. Is there any documentation on how to use the iSetup Java classes?
    Will upgrading to the 11i.AZ.H patch level solve any of the above concerns/issues?
    Regards,
    Richard

    #1. I guess you are referring to the problems you are encountering while loading the GL COA. You may log an SR against GL to learn more about how the API loads data into the instance. DELETE is a specific requirement in your case, and I would suggest you work with the GL team; they may provide some solution or workaround to overcome this performance issue.
    #2. No. The concurrent program does a good amount of pre-processing that you would not get if you called the Java classes directly.
    #3. Not sure what exactly you are looking for. Are you looking for a user guide to write your own iSetup API classes?
    11i.AZ.H has a good amount of performance fixes overall and is the recommended release on top of 11.5.10.2CU. I would suggest you upgrade to 11i.AZ.H.
    Specific to the GL COA issue, I don't think 11i.AZ.H would really help you much. It is very much a functional issue with respect to the API, and you have to work with the GL team to get a workaround/solution. This may involve customizing the API according to your requirement.
    Thanks
    Mugunthan.

  • Issues with ondemand Data loader

    Hello,
    We are facing 2 issues with on demand data loader.
    Issue 1
    While inserting 'Contacts' and 'Assets', if the 'Account' information is wrong, the records are created without accounts even though 'Account' is a required field.
    Issue 2
    While inserting records, the data loader is not checking for duplicates, so duplicate records are getting created.
    Kindly advise if anyone has come across similar issues. Thanks
    Dipu

  • iPad issue with App Store loading...

    Hi All,
    I have a very strange issue using the App Store on my new iPad. On my home network, if I connect to the App Store it shows "loading..." and after a few minutes it says "Cannot connect to iTunes Store". (Safari works fine and can connect to any website, and using iTunes on my laptop I am able to connect to the iTunes Store.) The same problem occurs on my iPod Touch (iOS 6). But if I use any of these Apple devices on any other network outside my home network (even my neighbour's network, which uses the same service provider as mine), it works very well. I use a Beetel ADSL2+ router (450TC1). I even connected a new WiFi router to this modem and tried, but no luck, so I am guessing it has nothing to do with the WiFi router. I have been trying this for the past few weeks. Can anybody help me find the issue? I suspect it could be an issue with the modem, because everything works on all other networks.

    App store frequently asked questions
    http://support.apple.com/kb/HT2001
    Cannot connect to iTunes Store
    http://support.apple.com/kb/TS1368

  • Table with Full / Delta Load information?

    Is there a table I can go to where I can see whether a cube load is a full load or a delta?
    Thanks, I will assign points!
    ~Nathaniel

    Hi,
    check the table ROOSPRMSC in R/3. It gives you the complete details of your init and delta loads.
    Hope this helps.
    Assign points if useful.
    Regards,
    Venkat
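
    For completeness, a hedged sketch of reading that table programmatically (the field names OLTPSOURCE and RLOGSYS are assumptions from memory; verify them in SE11 before relying on this):

      " Hedged sketch: list init/delta bookkeeping rows for one DataSource.
      DATA: lt_prms TYPE STANDARD TABLE OF roosprmsc,
            ls_prms TYPE roosprmsc.
      SELECT * FROM roosprmsc INTO TABLE lt_prms
             WHERE oltpsource = '2LIS_04_P_ARBPL'.  " example DataSource
      LOOP AT lt_prms INTO ls_prms.
        " One row per DataSource / receiving BW system
        WRITE: / ls_prms-oltpsource, ls_prms-rlogsys.
      ENDLOOP.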

  • Issue with flat file loading timing out

    Hello
    I have a scenario where I am loading a flat file with 65k records into a cube. The problem is that in the loading process I have to look up the 0MATERIAL table, which has a million records.
    I do have an internal table in the program, where I select a subset of the material table, i.e. around 20k to 30k records. But my extraction process takes more than 1.5 hours and is failing (timed out).
    How can I address this issue? I tried building indexes on the material table and it is not helping.
    Thanks,
    Srinivas.

    Unfortunately, this is BW 3.5, so there is no end routine option here. And I tried both the .csv and notepad file methods, and both are creating problems for us.
    This is the complete code; do you see any potential issues?
    Start routine (main code):
    REFRESH: i_oldmats, zi_matl.
    DATA: wa_datapak TYPE transfer_structure.
    " Collect all old material numbers from the flat file into i_oldmats
    LOOP AT datapak INTO wa_datapak.
      i_oldmats-/BIC/ZZOLDMATL = wa_datapak-/BIC/ZZOLDMATL.
      COLLECT i_oldmats.
    ENDLOOP.
    SORT i_oldmats.
    " Guard: FOR ALL ENTRIES with an empty driver table would read all
    " of /BI0/PMATERIAL instead of nothing
    IF NOT i_oldmats[] IS INITIAL.
      " zi_matl only gets records where an old material exists
      " (about 300k records out of 1M)
      SELECT /BIC/ZZOLDMATL MATERIAL
             FROM /BI0/PMATERIAL
             INTO zi_matl
             FOR ALL ENTRIES IN i_oldmats
             WHERE /BIC/ZZOLDMATL = i_oldmats-/BIC/ZZOLDMATL.
        COLLECT zi_matl.
      ENDSELECT.
    ENDIF.
    " Sort explicitly by the lookup key so BINARY SEARCH below is safe
    SORT zi_matl BY /BIC/ZZOLDMATL.
    Transfer rule routine (main code):
    IF tran_structure-material = 'NA'.
      READ TABLE zi_matl INTO zw_matl
           WITH KEY /BIC/ZZOLDMATL = tran_structure-/BIC/ZZOLDMATL
           BINARY SEARCH.
      IF sy-subrc = 0.
        result = zw_matl-material.
      ENDIF.
    ELSE.
      result = tran_structure-material.
    ENDIF.
    Regards,
    Srinivas.
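
    A hedged optimization sketch for the lookup above, reusing the same declarations (an illustration, not a tested fix): the row-by-row SELECT ... ENDSELECT with COLLECT is usually the expensive part here, and a single array fetch avoids it. FOR ALL ENTRIES already removes duplicate result rows, so the COLLECT is not needed.

      IF NOT i_oldmats[] IS INITIAL.
        " One array round trip instead of one row at a time
        SELECT /BIC/ZZOLDMATL MATERIAL
               FROM /BI0/PMATERIAL
               INTO TABLE zi_matl
               FOR ALL ENTRIES IN i_oldmats
               WHERE /BIC/ZZOLDMATL = i_oldmats-/BIC/ZZOLDMATL.
      ENDIF.
      SORT zi_matl BY /BIC/ZZOLDMATL.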

  • Issues with WSDL file loading in BPEL process Composite

    hi,
    I am facing a lot of issues in 11g.
    I am using "JDeveloper Studio Edition Version 11.1.1.2.0".
    When I try to open an already-working composite from a different machine on my machine, it starts giving an error: it is unable to build ("Unable to load WSDL file"). I have all the web services on the OSB of a different machine, and my machine targets that machine's OSB by IP address, while in my BPEL process I am using localhost. I am able to see all the methods in the composite and it doesn't show any error, but when I try to compile it, it gives "unable to load WSDL file".
    please advise,
    thanks
    Yatan

    I suppose any unhandled exception in the one-way BPEL process could have caused a rollback of the instance, and hence the instance might not be seen. Add fault handlers (catch/catch-all blocks) in the one-way BPEL process if not done already, and test.

  • Issue with Site Configuration / Load Balancing

    We’re noticing strange behavior with our servers that are configured behind a load balancer. We’ve got two servers with different ports and a load balancer:
    Server1: https://host1:30003/opensso
    Server2: https://host2:30103/opensso
    Load Balancer: https://loadbalancer:30003/opensso
    When we go to the admin console, we can access Server1 without a problem, but the second time we go, the load balancer sends us to Server2 and our browser returns a page-not-found error. We've traced the HTTP traffic and discovered that every other time we go to the admin console (the load balancers are configured round robin), Server2 always returns a bogus HTTP Found (redirect) URL. The response it provides is something like https://loadbalancer:30103/opensso/UI/Login (note the backend port; just an example).
    The issue here is that it is properly directing the end user's browser to the load balancer DNS entry. It is not, however, directing the end user's browser to the proper port; it seems to send its own port value to the browser. Obviously, when the browser tries to access this URL, the load balancer rejects the request because it is not listening on port 30103.
    Can multiple OpenSSO application servers (configured as a site) run behind a load balancer when they are listening on different ports? If so, why does the application server respond to the user request with its own port rather than that of the load balancer, while still providing the load balancer's DNS hostname the whole time?

    Major updates of Muse are targeted to release roughly every quarter. The 1.0 release was in mid-May. The 2.0 release was in mid-August. A fundamental change to image loading would only appear as part of a major update due to the engineering and testing efforts required.
    As provided in your previous thread http://forums.adobe.com/message/4659347#4659347 the only workaround until then is to reduce the number of images in the slideshow.

  • Issue with Invoice, credit memo

    Hi Gurus,
    A sales invoice was created for 100,000, and the value fields in COPA that were hit were:
    Billed quantity
    phl volume
    Quantity 01
    sales quantity
    Non-reporting WXX1
    Other sale ded
    other sale ded 2
    Revenue
    Standard cogs 5
    Standard price
    Customer service needed to pass a credit, but the credit memo was mistakenly created for the full invoice amount of 100,000.
    The credit memo hit the following value fields in COPA
    Non-reporting WXX1
    Other sale ded
    other sale ded 2
    Revenue
    To correct the mistake in the credit memo, a debit memo was created (but with the wrong amount, 50,000), and it also hit the same value fields as the credit memo.
    To correct this, my thought is to reverse the debit memo with the wrong amount and recreate the debit memo with the full amount, bringing the invoice back to its full value, and then re-create the credit memo, which I hope will then hit COGS, fixing and showing the correct values in COPA. Or is there a better way to resolve this issue?
    Please advise.
    Best Regards,
    Yasmeen

    Thanks Kiran for the response.
    Let me clarify the question.
    GR posting detail:
    Reference no. 12345, doc date 06/21/2008, posting date 06/21/2008, amount 1, USD, CC 100.
    Can I post the invoice against the same detail using MRHR?
    (I'm not sure what exactly MRHR verifies for a duplicate entry, because I can post the IR if I change the doc date from 06/21/2008 to any other date.)
    I am able to simulate the IR without any error while using MR01.
    Regards,
    Aditya
