Huge size of ARTMAS IDOCs

Hello gurus,
We are facing an issue transmitting generic articles (MARA-ATTYP = 01) along with all their variants via outbound ARTMAS IDocs (transaction BD10). Some of the articles have around 40-50 variants, and when such generic articles are sent via BD10, the resulting IDoc is too large for our middleware to handle.
These IDocs fail on the middleware side because the generated XML file is too big for it to process.
Is there any standard functionality that would allow us to split the IDoc data into smaller chunks and transmit them separately?
Any pointers to this issue would be a great help!
Thanks in advance.
Regards,
Mohini

Hi
   I think you should split the IDoc into two separate IDocs.
Please refer to this link on how to split IDocs:
Message split to 2 different IDOCs
Thanks
Viquar Iqbal

Similar Messages

  • Huge size of ARTMAS IDOC


    Hi Mohini,
    What exactly does it mean that the IDoc is too big?
    Which segments occur in the largest numbers?
    With a generic article you have
    1 MARA segment for the generic article
    and about 40 MARA segments for the variants.
    Which middleware is not able to handle this size?
    Do you really need all the data in the IDoc? Maybe you can filter out some segments, such as MBEW or MARD, to make it smaller.
    Or you can drop some fields per segment.
    You can set up the filtering in the distribution model, transaction BD64.
    regards
    Björn
    Edited by: Björn Panter on Feb 12, 2009 2:22 PM
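    If the trimming has to happen outside SAP, the same idea can also be applied on the middleware side before the XML reaches its size limit. Below is a minimal, hypothetical Java sketch that strips whole segment types out of an IDoc XML file; the segment names in DROP are placeholders (check WE60 for the names your ARTMAS version actually carries), and note that DOM loads the whole file into memory, so for truly huge IDocs a streaming (StAX) filter would be the equivalent approach.
    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;
    public class DropSegments {
        // Assumed segment names - verify them in WE60 before using.
        private static final String[] DROP = {"E1BPE1MBEWRT", "E1BPE1MARDRT"};
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(new File(args[0]));
            for (String name : DROP) {
                NodeList nodes = doc.getElementsByTagName(name);
                // Walk backwards: the NodeList is live and shrinks on removal.
                for (int i = nodes.getLength() - 1; i >= 0; i--) {
                    nodes.item(i).getParentNode().removeChild(nodes.item(i));
                }
            }
            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(doc), new StreamResult(new File(args[1])));
        }
    }
    Filtering in BD64 is still preferable where possible, since the unwanted data then never leaves SAP at all.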

  • Huge size file processing in PI

    Hi Experts,
    1. I have seen blogs which explain processing huge files with the File and SFTP adapters:
    SFTP Adapter - Handling Large File
    File/FTP Adapter - Large File Transfer (Chunk Mode)
    Here we also have the constraint that we cannot do any mapping, and it has to be EOIO QoS.
    Would it be possible to process a 1 GB file and still do mapping? Which hardware factors decide whether a system is capable of processing large files with mapping?
    Is it the number of CPUs, the application servers (Java and ABAP), the number of server nodes, or the Java heap size?
    If my system is able to process a 10 MB file with mapping, there must be something that determines that capability.
    This kind of huge file processing only fits some scenarios. For example, a proxy-to-SOAP scenario with a 1 GB message exchange does not make sense; I have no idea whether any web service could handle such a huge file.
    2. Assume PI is able to process a 50 MB message with mapping. What options do we have in PI to increase the performance?
    I have come across these two points many times during the design phase of my project and am looking for your suggestions.
    Thanks.

    Hi Ram,
    You have not mentioned what sort of integration it is; you just mentioned FILE, so I presume it is a file-to-file scenario. In that case, on PI 7.11 I was able to process a 100 MB file (more than 1 million records) with mapping (the file is a delta extract from SAP ECC, in AL11). In the sender file adapter I chose Recordsets per Message and processed the file in pieces. Please note this is not the actual standard chunk mode: the initial run of the sender adapter loads the whole 100 MB file into memory, and after that messages are sent to the Integration Engine based on the recordset setting. Beyond roughly 100 MB, PI Java starts bouncing because of memory issues. We later redesigned the interface as an asynchronous proxy-to-file scenario, where the proxy sends the messages to PI in chunks of 5000 messages per run.
    On PI 7.11 I believe there is a memory limitation per cluster node: each cluster node can't use more than 5 GB, and processing also depends on the number of Java application servers. I think this limitation is gone as of PI 7.30, where a cluster node can use 16 GB.
    > This kind of huge file processing only fits some scenarios. For example, a proxy-to-SOAP scenario with a 1 GB message exchange does not make sense; I have no idea whether any web service could handle such a huge file.
    If I understand this correctly: with asynchronous communication, 1 GB of data can definitely be sent to a web service, but the proxy should send the messages to PI in batches. The same idea can work for synchronous communication as well, but then timeouts in the receiver channel become the next issue. Increasing timeouts globally is not best practice; if you are on 7.30 or a later version you can increase timeouts specifically for your scenario.
    To handle a 50 MB file, make sure you have additional Java application servers. I don't remember exactly how many application servers we had in my case to handle the 100 MB file.
    Thanks
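    To make the "recordsets per message" idea concrete: here is a small, hypothetical plain-Java sketch (not PI code) that splits a flat file into parts of 5,000 records each - the same batching the proxy redesign above relies on.
    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    public class ChunkFile {
        public static void main(String[] args) throws Exception {
            final int chunkSize = 5000; // records per message, as described above
            try (BufferedReader in = Files.newBufferedReader(Paths.get(args[0]))) {
                String line;
                int n = 0, part = 0;
                BufferedWriter out = null;
                while ((line = in.readLine()) != null) {
                    // Start a new part file every chunkSize records.
                    if (n % chunkSize == 0) {
                        if (out != null) out.close();
                        out = Files.newBufferedWriter(Paths.get(args[0] + ".part" + part++));
                    }
                    out.write(line);
                    out.newLine();
                    n++;
                }
                if (out != null) out.close();
            }
        }
    }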

  • XSL mapping of ARTMAS IDoc - carriage returns and white space

    Hi all. I have tried all sorts here and am completely stuck. I am mapping an article master ARTMAS IDoc using XSL mapping and an extra carriage return and white space characters are being added to the FIELD1 and FIELD2 fields of the E1BPE1MARAEXTRT segment and I don't know why! These fields contain all our bespoke field values joined together so they are 229 and 250 characters long respectively, which I think is related to the problem. The mapping program looks like this:
    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
    <xsl:output method="xml" encoding="UTF-8" omit-xml-declaration="no" indent="yes"/>
    <xsl:template match="/">
    <ARTMAS04>
    <IDOC>
      <xsl:copy-of select="//ARTMAS04/IDOC/EDI_DC40"/>
      <xsl:copy-of select="//ARTMAS04/IDOC/E1BPE1MATHEAD"/>
      <xsl:copy-of select="//ARTMAS04/IDOC/E1BPE1MARART"/>
      <xsl:copy-of select="//ARTMAS04/IDOC/E1BPE1MARAEXTRT"/>
      <xsl:copy-of select="//ARTMAS04/IDOC/E1BPE1MAKTRT"/>
      <xsl:for-each select="//ARTMAS04/IDOC/E1BPE1MARCRT">
        <xsl:if test="(PLANT='D100') or (PLANT='D200') or (PLANT='D300')">
          <xsl:copy-of select="."/>
        </xsl:if>
      </xsl:for-each>
      <xsl:copy-of select="//ARTMAS04/IDOC/E1BPE1MARMRT"/>
      <xsl:copy-of select="//ARTMAS04/IDOC/E1BPE1MEANRT"/>
    </IDOC>
    </ARTMAS04>
    </xsl:template>
    </xsl:stylesheet>
    I have a test IDoc going through this and the first 3 characters of the FIELD1 are "BIK". If I look at the source of the XML file, the problem area looks like this:
    <E1BPE1MARAEXTRT SEGMENT="1">
          <FUNCTION>004</FUNCTION>
          <MATERIAL>000000000000895649</MATERIAL>
          <FIELD1>
            BIK      0  0.00 T                                    000000001CARRERA FURY 04 20"     CARRERA FURY 04 20" 2005092420040622    X                    00000000   20031104 00200406140000000000                    0.00 0000000
          </FIELD1>
    You can't really see it above, but there is a carriage return and 8 white spaces before the BIK. Where does this carriage return and white space come from?! Any help would be really appreciated - this is driving me crazy!
    Many thanks,
    Stuart Richards

    Stuart,
    I think I know the cause of your problem. I am not an XSLT programmer, but based on searching this forum I found how to control the carriage return at the end of my document. Your xsl statement is as follows:
    <xsl:output method="xml" encoding="UTF-8" omit-xml-declaration="no" indent="yes"/>
    Change the indent value from yes to no. This should remove the carriage return. Hope this works for you.
    I actually need a carriage return and line feed on my document, so I am still stuck; all I have achieved is the carriage return.
    Thanks,
    Jim
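    For what it's worth, the behaviour can be reproduced and controlled outside the mapping runtime as well. The sketch below is a standalone Java example under stated assumptions (the file names artmas.xsl and artmas_in.xml are placeholders): it runs the stylesheet with indenting switched off and then appends the CR+LF Jim needs explicitly. In pure XSLT, the equivalent of that last step would be emitting <xsl:text>&#13;&#10;</xsl:text> at the end of the template.
    import java.io.StringWriter;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    public class NoIndentTransform {
        public static void main(String[] args) throws Exception {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource("artmas.xsl"));
            // Suppress pretty-printing so no CR/LF plus indent spaces are
            // injected around long text nodes such as FIELD1/FIELD2.
            // This overrides indent="yes" in the stylesheet's xsl:output.
            t.setOutputProperty(OutputKeys.INDENT, "no");
            StringWriter out = new StringWriter();
            t.transform(new StreamSource("artmas_in.xml"), new StreamResult(out));
            // Append a single trailing CR+LF explicitly where it is wanted.
            System.out.print(out.toString() + "\r\n");
        }
    }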

  • How to find the size of an IDoc with data

    Hi All,
    There is a business case wherein the partner needs an IDoc of size 200KB - 250KB triggered from SAP.
    Is there any chance of knowing the size of an IDoc, with data, before it gets triggered from SAP?
    Please help.
    Thanks,
    Suma

    Hi Dinil,
    I have provided you with a sample that shows how you can get the size of an ArrayList.
    The sample has a page untitled1 and a bean named Test.
    I have run it on JDeveloper 11.1.2 and it is OK; it will be OK on 11.1.3 as well.
    After running the sample you will see the 2.
    Please remember that you must add the JSTL tag library to the ViewController project:
    just right-click on ViewController, go to the tag libraries and select JSTL.
    page
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">
    <%@ page contentType="text/html;charset=UTF-8"%>
    <%@ taglib uri="http://java.sun.com/jsf/core" prefix="f"%>
    <%@ taglib uri="http://java.sun.com/jsf/html" prefix="h"%>
    <%@ taglib uri="http://java.sun.com/jsp/jstl/functions" prefix="fn"%>
    <f:view>
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
    <title>untitled1</title>
    </head>
    <body>
    <h:form>
    <h:outputText value="#{fn:length(test.a)}"/>
    </h:form>
    </body>
    </html>
    </f:view>
    bean
    import java.util.ArrayList;
    public class Test {
        ArrayList a;
        public Test() {
            a = new ArrayList();
            Object o = new Object();
            // Two entries, so the page below prints 2.
            a.add(o);
            a.add(o);
        }
        public void setA(ArrayList a) {
            this.a = a;
        }
        public ArrayList getA() {
            return a;
        }
    }
    I hope this sample comes in handy.
    regards,

  • Huge size schema export in standard edition

    Hi,
    Source:-
    Oracle 10.2.0.3 standard edition on Linux.
    Destination:-
    Oracle 11gR2 - enterprise edition.
    I have to export a schema of size 250+ GB; it is taking a long time to export since we do not have parallelism in Standard Edition.
    Is there any way I can perform the export & import faster?
    The constraint is that expdp of the schema takes 30+ hours. If I use transportable tablespaces instead, is there any compatibility problem between the source and destination versions and editions?
    And what is the procedure?
    Thanks.

    Hemant K Chitale wrote:
    > Can I use 11gR2 binaries to perform TTS of a 10g Std edition database?
    > You could concurrently run multiple export sessions with table lists --- but you wouldn't get data consistency if the tables are being updated.
    Thanks for your information.
    This question is now beyond TTS; I am talking about expdp/impdp in general.
    I had posted this question in the Export/Import section of the Database forum, but with no quick responses, so I moved my question to Database - General.
    Solomon Yakobson mentioned that we can use 11gR2 binaries to export a schema of a 10g database, in the link below:
    Huge size schema export in standard edition
    Hope this will work. Any more suggestions on this?
    Edited by: Autoconfig on Oct 17, 2011 6:32 AM

  • JDOM parser fails to parse Huge Size Xml Files ??? Unable to Parse ???

    Hi All,
    When I transform or parse XML files of huge size, I get a java.lang.OutOfMemoryError for all of the huge files. It works fine for small files.
    I have 2 GB RAM in my system, and I have also set the heap size for the JVM:
    -Xms512M -Xmx1800M (or) -Xms512M -Xmx1500M
    I would like to know: what are the drawbacks of the JDOM parser?
    What is the maximum size of XML which JDOM can parse?
    I read a long time ago that parsers have certain limitations. Can anybody tell me the limitations of the parsers?
    Please help. I'm currently using jdom.jar to parse the XML files.
    Thanks,
    J.Kathir

    > When I transformed or parsed many XML files of huge size... I am getting java.lang.OutOfMemory error for all the huge xml files. It is working fine for files which is of small size.
    Of course it is.
    > I've 2GB ram in my system. I have also set heapsize for the JVM. -Xms512M -Xmx1800M (or) -Xms512M -Xmx1500M
    You can't always get what you want. Your JVM is competing with all the other processes on your machine, including the OS, for RAM.
    > I like to know what are the drawbacks of JDOM parser?
    You just discovered it: all DOM parsers create an in-memory representation of the entire document. A SAX parser does not.
    > What is the maximum size of the Xml which JDOM can parse?
    The max amount of memory that you have available to you.
    > I've read a long time before that parsers have certain limitations. Can anybody tell me the limitations of the parsers?
    See above.
    > Please help. i'm currently using jdom.jar to parse the xml files.
    You can try a SAX parser or a streaming parser. Maybe those can help.
    %
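    To illustrate the streaming suggestion: here is a minimal SAX sketch (it only counts elements, which is enough to show the shape). Because SAX pushes events while it reads and never builds a tree, memory use stays flat no matter how large the document is; a StAX pull parser (javax.xml.stream) has the same memory profile with a more convenient API.
    import java.io.File;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;
    public class CountElements {
        public static void main(String[] args) throws Exception {
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            final int[] count = {0};
            parser.parse(new File(args[0]), new DefaultHandler() {
                @Override
                public void startElement(String uri, String localName,
                                         String qName, Attributes attrs) {
                    count[0]++; // called per element as the stream goes by
                }
            });
            System.out.println("Elements: " + count[0]);
        }
    }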

  • Best way of partitioning the huge size table in oracle 11gr2

    hi,
    OS: linux
    DB: oracle 11gR2
    The table size is about 2 TB, in a single tablespace with multiple dbf files in ASM.
    Table partitioning with DBMS_REDEFINITION has been running for the past 2 days (range partitioning). It is running, but not that fast.
    Note: exchange partition was too slow, hence not used.
    What is the best way of partitioning a huge table? Please suggest, thanks.

    >
    what is the best way of doing the huge size table partition
    >
    A few questions:
    1. Is that an OLTP or OLAP table?
    2. Is all of the data still needed online, or is some of it archivable?
    3. What type of partitioning are you doing? RANGE? LIST? COMPOSITE?
    4. Why do you say 'exchange partition is too slow' - did you do a test?
    For example, if the data will be partitioned by create date, then one strategy is to initially put all existing data into the root partition of a range-partitioned table. Then you can start using new partitions for new incoming data and take your time to partition the current data.

  • FUNCTION element in ARTMAS IDOC?

    Hi,
    I am looking for the meaning of the FUNCTION element in the ARTMAS IDoc. Is it related to creating/updating data?
    If so, how does this matching work?
    Thanks,
    FF

    hi,
    values for Function element from WE60
    '003' Delete: Message contains objects to be deleted
    '004' Change: Message contains changes
    '005' Replace: This message replaces previous messages
    '009' Original: First message for process
    '023' Wait/Adjust: Data should not be imported
    '018' Resend
    Regards,
    michal
    XI / PI FAQ - Frequently Asked Questions: /people/michal.krawczyk2/blog/2005/06/28/xipi-faq-frequently-asked-questions

  • Problem with ARTMAS idoc

    Hi
    I am working on an IS-Retail implementation and we deal with article master (similar to Material master). The idoc corresponding to this is ARTMAS.
    The scenario is as explained below.
    We have articles classified as 'generic' and 'variant'. One generic article can have many variants. eg. T-shirt can be a generic and its variants could be 'High neck', 'V-neck', 'round neck' etc.
    The issue is: whenever we make article master changes to the generic article and execute transaction BD21 (create IDocs from change pointers) for ARTMAS, the changed article triggers an outbound IDoc which contains a segment E1BPE1MATHEAD with the details of the generic article and a number of E1BPE1AUSPRT segments which contain the details of the variants associated with the generic.
    When I change any field in MM42 for a generic article, the IDoc generated through BD21 contains the segments for the variant articles as well.
    However, when the description field MAKTX is changed for the generic article, the outbound IDoc generated through BD21 does NOT contain the segments for the associated variants.
    I need the outbound IDoc to include the variant segments (E1BPE1AUSPRT) also when the description of the generic article is changed.
    I have checked the table fields associated with ARTMAS, and the field MAKTX is included in BD52.
    Can someone throw some light on this.
    Thanks in advance.

    Hi negender,
    Can you tell me how you solved the issue?
    Thanks,
    Hari

  • Table with a huge size, with big fields... should they be in a separate table?

    Hi.
    Consider the following table
    id_foo num_doc_foo pages_foo pdf_foo .. etc
    Imagine that the table has millions of records and a total size of 4 GB or more.
    I'm collecting data to build a statistical form, but the query takes forever, although on analyzing the query everything checks out.
    My question is: might the size of the table influence query performance?
    Should I create a separate table to collect the values I want, and keep the previous table to store the data (the pdf_foo documents are huge)?
    Summing up what I'm saying:
    one table for the huge data fields (with a big size), and another table that connects to the previous one but holds the smaller data - data for generating reports, etc.
    What do you think? Does it make sense?
    I work with Oracle 9.2.x and can't upgrade.
    Cheers.

    I don't remember the exact details for 9.2, but I believe Oracle stores a LOB separately from the rest of the table if it's over a certain threshold (with ENABLE STORAGE IN ROW, values larger than roughly 4 KB go out of line). So if the PDFs are large enough, they aren't really stored with the table data, and reporting queries that don't touch the LOB column won't be affected. Check the manual for how LOB storage works, and then store the PDFs in a separate table if needed.

  • Lightroom backups folder.  Huge size.

    In "My Pictures" is a folder called Lightroom, where all the catalogues and backups are stored.  A while ago, I had a major crash on the computer and lost everything on my hard disc.  I was able to restore most from my external hard drive.
    Now, in the Lightroom folder I have the following folders:
    Backups - this is 11.7 gb and contains multiple back ups.  Can I safely delete some of the older ones?
    Lightroom 2 catalogue - size greater than 97 gb.  This has masses of backups inside it going back to 2008.  Also inside this folder is a Lightroom previews lrdata file of 7.7gb and a  Lightroom 2 catalogue of 1.44 gb
    Lightroom 2 catalogue previews.lrdata with a number of folders 0-9 and A-F in it.  8.7 gb
    Lightroom 3 catalogue previews.lrdata. Same as above but for LR3.  513 mb
    Lightroom catalogue previews.lrdata This probably relates to LR1 and appears to be empty.
    Lightroom previews.lrdata  Again seems to relate to LR1 and is empty.
    Lightroom settings   That's OK
    Lightroom 3 backups  That's OK too.
    All the above are shown as folders.  the next lot show as single sheets of paper, if that makes sense to you.
    Lightroom 2 catalogue 1.49 gb
    Lightroom 3 catalogue 41mb
    Lightroom 3 catalogue 2  261mb
    Lightroom 3 catalogue 2.lrcat-journal
    When using LR3, I have not integrated the LR2 catalogue.   LR3 has only a small number of photos in it. This was advised somewhere.  I read a post in this forum a few days ago suggesting that the LR files should not be so large as these are.  I have a total of about 12,500 photos in the catalogue.
    Which files can I delete?  All the LR1 files?  Old backups, keeping just the latest of each LR2 and LR3?
    Why do I seem to be getting multiple backups in different folders?
    I am not a computer wizard.  My understanding is limited and I don't want to delete data that should not be deleted but this does seem to be using a huge amount of disc space.
    JW

    Many thanks.  I back up weekly or more often to an external drive, including all the LR files so I feel reasonably happy about them being on the same disc.  I can get rid of masses of old backups, then.  That should release some disc space.
    JW

  • WE20 - Pack size not working/IDOC packaging not working - PI 7.1

    Hi all,
    My requirement is to collect IDocs and send them to PI at one point of time in a day.
    I set Pack.Size = 100 in the partner profile (WE20), and in the PI 7.1 sender communication channel I assigned the same pack size value of 100.
    Now the problem is when I create more than 10 IDocs and trigger them manually via RSEOUT00. For example, when 15 IDocs are pushed to PI by executing RSEOUT00 and I check IDX5 or SXMB_MONI in PI, two message IDs are generated: one with 10 IDocs and the other with 5 IDocs.
    It works fine (creates only one message ID) if the total count of IDocs is 10 or fewer.
    Any ideas/suggestions appreciated.
    Thanks,
    Krishna

    Yes Prateek,
    In the sender communication channel the IDoc package size is assigned as '100', and yet only 10 IDocs are collected per message ID.
    I checked the assignment and it was correct. I changed the description, saved, and activated the sender agreement & sender communication channel.
    No luck.
    Regards,
    Krishna

  • Size of an IDOC

    Hi All,
    I want to know how much size & memory is occupied by an IDoc when we trigger a custom IDoc.
    In detail:
    I am sending a custom IDoc from SAP R/3 to SAP XI, and I want to know how much size or memory will be occupied by the system.
    Also, can we have more than 100 fields in an IDoc, spread across different segments?
    Regards,
    Narendra

    Check table EDIDD, field SDATA: each data record holds up to 1000 characters, so a rough estimate of the IDoc size is the number of data records times 1 KB, plus the control record (EDIDC). And yes, you can spread more than 100 fields across multiple segments; the limit is per segment (a segment's data must fit into the 1000-character SDATA field), not per IDoc.

  • APQD table size is increasing rapidly

    Hi All,
    The file system space usage has been increasing very fast. We analyzed the tables by size and found that the APQD table is relatively very large in the production system. So we are planning to delete old session logs, keeping only the last 5 days, from the APQD table.
    We referred to note 36781 ("Table APQD is very large") and ran the reports RSBDCCKA and RSBDCCKT and then RSBDCREO, but found no significant change at the table level or in the file system (attached is the screenshot for the same).
    We are planning to reorganize the APQD table so that we can reclaim free space. Can you please help me understand whether a reorganization of the APQD table will gain significant space? Our APQD table is 350 GB, so the reorg will generate huge archive log files which need to be backed up. To avoid this, is there another strategy to tackle the problem?
    Quick help is appreciated to sort out this problem in the production environment.
    -Anthony...

    > We referred to note 36781 ("Table APQD is very large") and ran the reports RSBDCCKA and RSBDCCKT and then RSBDCREO, but found no significant change at the table level or in the file system
    As far as I remember, RSBDCREO is client-dependent.
    If you ran the report in client 000, it might not affect the old batch input data in your business client.
    A reorg only makes sense if you did a significant amount of deletes beforehand.
    As for file system space, you should see a reduced number of files in the batch input subdirectories under GLOBAL_DIR.
    Volker
