XML record to Xi from R3

Hi Guys,
If I don't want to use IDocs, proxies, or the application server, what other way can I send data to XI from R3? Also, please let me know what type of connection setup is required for any such alternative.
Thank You!

Hi,
After doing the first 2 steps, you put your port and the function module into the partner profile; then your code will know which RFC destination it has to use.
The first 2 steps are independent of each other, but they are related indirectly, and that combination is maintained in the partner profile (transaction WE20).
I hope I am clear now.
Thanks,
Mahesh.

Similar Messages

  • Need help for SQL SELECT query to fetch XML records from Oracle tables having CLOB field

    Hello,
    I have a scenario where I need to fetch records from several Oracle tables with CLOB fields (each holding XML) and then merge them logically into a hierarchical XML. All these tables are related by PK-FK relationships. The hierarchy has 'OP' as the top-most root node and 'DE' as its bottom-most node, with one-to-many relationships at each level: each OP can have multiple GMs, each GM can have multiple DMs, and so on.
    Table structures are mentioned below:
    OP:
    Name              Null        Type
    OP_NBR            NOT NULL    NUMBER(4)       (Primary Key)
    OP_DESC                       VARCHAR2(50)
    OP_PAYLOD_XML                 CLOB
    GM:
    Name              Null        Type
    GM_NBR            NOT NULL    NUMBER(4)       (Primary Key)
    GM_DESC                       VARCHAR2(40)
    OP_NBR            NOT NULL    NUMBER(4)       (Foreign Key)
    GM_PAYLOD_XML                 CLOB
    DM:
    Name              Null        Type
    DM_NBR            NOT NULL    NUMBER(4)       (Primary Key)
    DM_DESC                       VARCHAR2(40)
    GM_NBR            NOT NULL    NUMBER(4)       (Foreign Key)
    DM_PAYLOD_XML                 CLOB
    DE:
    Name              Null        Type
    DE_NBR            NOT NULL    NUMBER(4)       (Primary Key)
    DE_DESC           NOT NULL    VARCHAR2(40)
    DM_NBR            NOT NULL    NUMBER(4)       (Foreign Key)
    DE_PAYLOD_XML                 CLOB
    +++++++++++++++++++++++++++++++++++++++++++++++++++++
    SELECT
    j.op_nbr||'||'||j.op_desc||'||'||j.op_paylod_xml AS op_paylod_xml,
    i.gm_nbr||'||'||i.gm_desc||'||'||i.gm_paylod_xml AS gm_paylod_xml,
    h.dm_nbr||'||'||h.dm_desc||'||'||h.dm_paylod_xml AS dm_paylod_xml,
    g.de_nbr||'||'||g.de_desc||'||'||g.de_paylod_xml AS de_paylod_xml
    FROM
    DE g, DM h, GM i, OP j
    WHERE
    h.dm_nbr = g.dm_nbr(+) and
    i.gm_nbr = h.gm_nbr(+) and
    j.op_nbr = i.op_nbr(+)
    +++++++++++++++++++++++++++++++++++++++++++++++++++++
    I am using the above SQL SELECT statement to fetch the XML records, and it gives me all related XMLs for each entity in a single record (OP, GM, DM, DE). The output of this SQL query is as below:
    Current O/P:
    <resultSet>
         <Record1>
              <OP_PAYLOD_XML1>
              <GM_PAYLOD_XML1>
              <DM_PAYLOD_XML1>
              <DE_PAYLOD_XML1>
         </Record1>
         <Record2>
              <OP_PAYLOD_XML2>
              <GM_PAYLOD_XML2>
              <DM_PAYLOD_XML2>
              <DE_PAYLOD_XML2>
         </Record2>
         <RecordN>
              <OP_PAYLOD_XMLN>
              <GM_PAYLOD_XMLN>
              <DM_PAYLOD_XMLN>
              <DE_PAYLOD_XMLN>
         </RecordN>
    </resultSet>
    Now I want to change my SQL query so that I get the following output structure:
    <resultSet>
         <Record>
              <OP_PAYLOD_XML1>
              <GM_PAYLOD_XML1>
              <GM_PAYLOD_XML2> .......
              <GM_PAYLOD_XMLN>
              <DM_PAYLOD_XML1>
              <DM_PAYLOD_XML2> .......
              <DM_PAYLOD_XMLN>
              <DE_PAYLOD_XML1>
              <DE_PAYLOD_XML2> .......
              <DE_PAYLOD_XMLN>
         </Record>
         <Record>
              <OP_PAYLOD_XML2>
              <GM_PAYLOD_XML1'>
              <GM_PAYLOD_XML2'> .......
              <GM_PAYLOD_XMLN'>
              <DM_PAYLOD_XML1'>
              <DM_PAYLOD_XML2'> .......
              <DM_PAYLOD_XMLN'>
              <DE_PAYLOD_XML1'>
              <DE_PAYLOD_XML2'> .......
              <DE_PAYLOD_XMLN'>
         </Record>
    </resultSet>
    Appreciate your help in this regard!

    Hi,
    A few questions:
    How is your first query supposed to give you an XML output like the one you show?
    Is there something you're not telling us?
    What's the content of, for example, <OP_PAYLOD_XML1>?
    I don't think it's a good idea to embed the node level in the tag name; it would make more sense to expose that as an attribute.
    What's the DB version, BTW?
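    One possible direction, sketched under the assumption that each *_PAYLOD_XML CLOB holds a single well-formed XML fragment and that the keys are exactly as listed above (not tested against the real data): build one row per OP and gather the child payloads with XMLAgg, e.g.
    {code}
    -- one <Record> per OP row; child payloads gathered via correlated scalar subqueries
    SELECT XMLElement("Record",
             XMLType(j.op_paylod_xml),
             (SELECT XMLAgg(XMLType(i.gm_paylod_xml))
                FROM GM i
               WHERE i.op_nbr = j.op_nbr),
             (SELECT XMLAgg(XMLType(h.dm_paylod_xml))
                FROM DM h JOIN GM i ON i.gm_nbr = h.gm_nbr
               WHERE i.op_nbr = j.op_nbr),
             (SELECT XMLAgg(XMLType(g.de_paylod_xml))
                FROM DE g JOIN DM h ON h.dm_nbr = g.dm_nbr
                          JOIN GM i ON i.gm_nbr = h.gm_nbr
               WHERE i.op_nbr = j.op_nbr)
           ).getClobVal() AS record_xml
      FROM OP j;
    {code}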

  • Slicers Removed from Template: "Removed Records: Slicer Cache from /xl/slicerCaches/slicerCache3.xml part (Slicer Cache)"

    Product: Microsoft Office Professional Plus 2010
    I have created a template file with a data table, 2 pivot tables, and each of those 2 pivot tables has a PivotChart and 2 slicers. I have some code that updates the pivot table ranges. When that occurs, the pivot tables, PivotCharts and slicers all update successfully.
    The problem comes after saving the .xlsm file. Upon opening the file I receive a message saying "Excel found unreadable content in 'myTemplate.xlsm'. Do you want to recover the contents of this workbook? If you trust the source of this workbook, click Yes."
    I then get a list of removed records and the repair that was done.
    Removed Records: Slicer Cache from /xl/slicerCaches/slicerCache3.xml part (Slicer Cache)
    Removed Records: Slicer Cache from /xl/slicerCaches/slicerCache4.xml part (Slicer Cache)
    Removed Records: Slicers from /xl/slicers/slicer2.xml part (Slicer)
    Removed Records: Drawing from /xl/drawings/drawing4.xml part (Drawing shape)
    Repaired Records: Named range from /xl/workbook.xml part (Workbook)
    Most of the file is OK with the exception of my last worksheet.  The pivot table and chart get updated, but the slicers are gone.  Any idea why this is happening?  Can something be done to prevent this?
    Thanks,
    Rich

    Hi Rich,
    Based on your description, my understanding is that the slicers are removed after you get the error message and repair the file. You want to know why the slicers are lost after repairing; it seems the corrupted file causes the issue.
    Please try to update the pivot table ranges manually, without macros. In my test, the slicers are not lost when macros are not involved. Please test this method in your own environment. If it works fine with macros disabled, please check your code.
    If my understanding is incorrect, could you upload a sample via OneDrive and explain your problem a bit more precisely, so that we can provide a more accurate solution? I am glad to help and look forward to your reply.
    Hope it's helpful.
    Regards,

  • Removed Records: PivotTable report from /xl/pivotTables/pivotTable1.xml part (PivotTable view)

    I keep getting an error in Excel 2013 when using PowerPivot. Can someone please explain to me what this error means? I can't see that there is anything wrong with the underlying data, and the list of "removed data" is empty too...
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <recoveryLog xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main"><logFileName>error085600_01.xml</logFileName><summary>Errors were detected in file 'E:\SkyDrive\Documents\Forex\Greenzone_Stats\Greenzone_2013_v2.xlsx'</summary><removedRecords summary="Following is a list of removed records:"><removedRecord>Removed Records: PivotTable report from /xl/pivotTables/pivotTable1.xml part (PivotTable view)</removedRecord></removedRecords></recoveryLog>
    TIA!
    Dennis

    Turns out it's a resource limitation in Excel's chart rendering engine. If I filter the data model so that fewer rows are included in the PivotChart, the error goes away.
    Re
    Dennis

  • Lack of performance in SELECT-ing XML records

    Hello Champs, I am new to the XML world, but as a DBA I am now in the position of having to suggest performance improvements for accessing XML records.
    Problem:
    There is an Informatica batch job fetching records from XML tables (close to 400, one by one) stored in an Oracle database. All 400 tables have just two columns, as described below:
    Name           Null?       Type
    RECID          NOT NULL    VARCHAR2(255)
    XMLRECORD                  XMLTYPE
    Each table has a NORMAL index created only on the VARCHAR2 column:
    CREATE UNIQUE INDEX "username"."Indexname_PK" ON "username"."table_name" ("RECID")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING COMPUTE STATISTICS
      STORAGE (INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
               PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "tbs_idx";
    Each table's CLOB storage is created with the description below:
    CHUNK   PCTVERSION   RETENTION   FREEPOOLS   CACHE   IN_ROW   FORMAT           PAR
    8192                 14400                   YES     YES      ENDIAN NEUTRAL   NO
    Informatica issues the query below against every table, and it fetches only about 400 rows/sec on average. This takes an entire business day to complete the batch job for all 400 tables, which is a big problem in the production environment.
    SELECT <table_name>.<column_name>.getclobval() FROM  <table_name>;
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
    PL/SQL Release 10.2.0.5.0 - Production
    CORE    10.2.0.5.0      Production
    TNS for IBM/AIX RISC System/6000: Version 10.2.0.5.0 - Productio
    NLSRTL Version 10.2.0.5.0 - Production
    Clarification required:
    1. Where in this scenario do you see a problem that blocks performance?
    2. What type of index is normally advisable for this setup?
    Many Thanks for your assistance,

    Not sure if you will like it, but it will improve your performance considerably: upgrade to Oracle 11.2.0.3 or 11.2.0.4 and make sure that all those XMLTYPE (CLOB) stored columns are created/recreated as XMLTYPE (SECUREFILE BINARY XML).
    One way of making sure that XMLTYPE content is stored via Binary XML SecureFile LOB storage is to set "db_securefile" at the database level (see "Using Oracle SecureFiles") to ALWAYS or FORCE. Binary XML SecureFile XMLType content can only be created on ASSM-enabled tablespaces...
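    For reference, a minimal sketch of the storage clause being suggested (the table and column names are just placeholders mirroring the two-column layout above):
    {code}
    -- 11g-style creation with Binary XML SecureFile storage (placeholder names)
    CREATE TABLE xml_tab (
      recid     VARCHAR2(255) NOT NULL,
      xmlrecord XMLTYPE
    )
    XMLTYPE xmlrecord STORE AS SECUREFILE BINARY XML;
    {code}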

  • Error inserting XML records 4000 bytes through Pro*C

    Hi,
    I am seeing the following error while trying to insert XML records > 4000 bytes (Records < 4000 bytes get inserted without any issues). Any help in resolving the issue would be highly appreciated.
    ORA return text: ORA-01461: can bind a LONG value only for insert into a LONG column.
    I am also able to insert records > 4000 bytes using the following query, but I want to insert the records through a C application (using Pro*C) that is not running on the database server.
    INSERT INTO MY_XML_TABLE
    VALUES (XMLType(bfilename('XML_DIR', 'MY_FILE.XML'),
    nls_charset_id('AL32UTF8')));
    Oracle Version
    ===============
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE 11.2.0.2.0 Production
    TNS for Solaris: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    Pro*C/C++ version:
    ====================
    Pro*C/C++ RELEASE 11.2.0.0.0 - PRODUCTION
    Schema registration:
    ====================
    begin
    DBMS_XMLSCHEMA.registerSchema (
    SCHEMAURL => 'MY_XML_SCHEMA.xsd',
    SCHEMADOC => bfilename ('ENG_REPORTS', 'MY_XML_SCHEMA.xsd'),
    GENTYPES => FALSE,
    OPTIONS => DBMS_XMLSCHEMA.REGISTER_BINARYXML,
    CSID =>nls_charset_id ('AL32UTF8'));
    end;
    Table creation
    ===============
    CREATE TABLE MY_XML_TABLE (
    MY_XML_RECORD XmlType )
    XMLTYPE MY_XML_RECORD STORE AS BINARY XML
    XMLSCHEMA "MY_XML_SCHEMA.xsd" ELEMENT "MYXMLTAG" ;
    Record Insertion (Pro*C generated code):
    =========================================
    /* EXEC SQL FOR :l_sizeof_array_togo
    insert INTO MY_XML_TABLE
    (MY_XML_RECORD )
    VALUES( XMLTYPE(:l_XML_ptr INDICATOR :l_XML_indicators )); */
    struct sqlexd sqlstm;
    sqlstm.sqlvsn = 12;
    sqlstm.arrsiz = 1;
    sqlstm.sqladtp = &sqladt;
    sqlstm.sqltdsp = &sqltds;
    sqlstm.stmt = "insert into MY_XML_TABLE (MY_XML_RECORD) values (XMLTYPE(:s1\
    :s2 ))";
    sqlstm.iters = (unsigned int )l_sizeof_array_togo;
    sqlstm.offset = (unsigned int )20;
    sqlstm.cud = sqlcud0;
    sqlstm.sqlest = (unsigned char *)&sqlca;
    sqlstm.sqlety = (unsigned short)4352;
    sqlstm.occurs = (unsigned int )0;
    sqlstm.sqhstv[0] = (unsigned char *)&l_XML_ptr->xml_record;
    sqlstm.sqhstl[0] = (unsigned long )8002;
    sqlstm.sqhsts[0] = ( int )sizeof(struct xml_rec_definition);
    sqlstm.sqindv[0] = ( short *)&l_XML_indicators->XML_record_ind;
    sqlstm.sqinds[0] = ( int )sizeof(struct XML_indicator);
    sqlstm.sqharm[0] = (unsigned long )0;
    sqlstm.sqadto[0] = (unsigned short )0;
    sqlstm.sqtdso[0] = (unsigned short )0;
    sqlstm.sqphsv = sqlstm.sqhstv;
    sqlstm.sqphsl = sqlstm.sqhstl;
    sqlstm.sqphss = sqlstm.sqhsts;
    sqlstm.sqpind = sqlstm.sqindv;
    sqlstm.sqpins = sqlstm.sqinds;
    sqlstm.sqparm = sqlstm.sqharm;
    sqlstm.sqparc = sqlstm.sqharc;
    sqlstm.sqpadto = sqlstm.sqadto;
    sqlstm.sqptdso = sqlstm.sqtdso;
    sqlcxt((void **)0, &sqlctx, &sqlstm, &sqlfpn);
    }

    After selecting data from the xmltab table I just received the first line of the xmldata file, i.e.
    <?xml version="1.0" encoding="WINDOWS-1252"?> <BAROutboundXML xmlns="http://BARO
    That must be a display issue.
    What client tool are you using, and what version?
    If SQL*Plus, you won't see the whole content unless you set some options :
    {code}
    SET LONG <value>
    SET LONGCHUNKSIZE <value>
    {code}
    Could you try the following?
    {code}
    SET LONG 10000
    SELECT t.object_value.getclobval() FROM xmltab t;
    -- to force pretty-printing :
    SELECT extract(t.object_value, '/*').getclobval() FROM xmltab t;
    {code}
    Edited by: odie_63 on Feb 16, 2011 08:58
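    As for the original ORA-01461 error on documents over 4000 bytes: one commonly suggested direction (only a sketch here, with a hypothetical :xml_clob bind) is to bind the oversized document as a CLOB host variable rather than a plain character array, and let XMLType() do the conversion server-side:
    {code}
    -- sketch: bind the >4000-byte document as a CLOB (hypothetical bind :xml_clob)
    INSERT INTO MY_XML_TABLE (MY_XML_RECORD)
    VALUES (XMLType(:xml_clob));
    {code}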

  • ? - Is there a way to validate 1 XML record at a time, using the SAX or oth

    Hello!
    Before running into space problems, I generated an XML file from a 'pipe delimited' file and then processed that XML file through a SAXParser 'validator', and the data was correctly validated, using the RELAXNG schema patterns as the validation criteria!
    But as feared, the XML file was huge (12 billion XML recs. generated from 1 billion 'pipe' recs.), and I am now trying to find a way to process 1 'pipe' record at a time, i.e. read 1 record from the 'pipe delimited' file, convert that rec. to an XML rec., and then send that 1 XML rec. through the SAXParser 'validator', avoiding the build of a huge temporary XML file!
    After testing this approach, it looks like the SAXParser 'validator' (sp.parse) expects only (1) StringBufferInputStream as input, and after opening, reading and closing just (1) of the returned StringBufferInputStream objects, the validator wants to close up operations!
    Where I have the "<<<<<" you can see where I'm calling the object.method that creates the 'pipe>XML' records ('sb.createxml'), which will return many occurrences of the StringBufferInputStream object, where (1) StringBufferInputStream object represents (1) 'pipe>XML' record!
    So what I'm wondering is whether there is a form of 'inputStream' class that can be loaded and processed at the same time, i.e. instead of requiring that the 'inputStream' object be loaded in its entirety before going to validation?
    Or is there another XML 'validator' that can validate 1 XML record at a time, without requiring that the entire XML file be built first?
    1. --------------------------------- class: SX2 ---------------------------------
    import ............
    public class SX2
    {
        public static void main(String[] args) throws Exception
        {
            MyDefaultHandler dh = new MyDefaultHandler();
            SX1 sx = new SX1();
            SAXParser sp = sx.getParser(args[0]);
            stbuf1 sb = new stbuf1();
            sp.parse(sb.createxml(args[1]), dh);   // <<<<<< createxml() - see <<<<<< below
        }
    }
    class MyDefaultHandler extends DefaultHandler {
        public int errcnt;
        // ... (excerpt of SX2.java, 87 lines in total)
    }
    2. ---------------------------------- class: stbuf1, method: createxml ----------------------------------
    public class stbuf1
    {
        public stbuf1 () { }

        public StringBufferInputStream createxml( String inputFile )   // <<<<<< createxml()
        {
            BufferedReader textReader = null;
            if ( (inputFile == null) || (inputFile.length() <= 1) ) {
                throw new NullPointerException("Delimiter Input File does not exist");
            }
            String ele = new String();
            try {
                textReader = new BufferedReader(new FileReader(inputFile));
                String line = null; String SEPARATOR = "\\|"; String sToken = null;
                String hdr1 = ("<?xml version=#1.0# encoding=#UTF-8#?>"); hdr1 = hdr1.replace('#','"');
                String hdr2 = ("<hlp_data>");
                String hdr3 = ("</hlp_data>");
                String hdr4 = ("<" + TABLE_NAME + ">");
                String hdr5 = ("</" + TABLE_NAME + ">");
                while ( (line = textReader.readLine()) != null ) {
                    String[] sa = line.split(SEPARATOR);
                    String elel = new String();
                    for (int i = 0; i < NUM_COLS; i++) {
                        if (i > (sa.length - 1)) { sToken = new String(); } else { sToken = sa[i]; }
                        elel = "<" + _columnNames[i] + ">" + sToken + "</" + _columnNames[i] + ">";
                        if (i == 0) {
                            ele = ele.concat(hdr1); ele = ele.concat(hdr2); ele = ele.concat(hdr4); ele = ele.concat(elel);
                        } else if (i == NUM_COLS - 1) {
                            ele = ele.concat(elel); ele = ele.concat(hdr5); ele = ele.concat(hdr3);
                        } else {
                            ele = ele.concat(elel);
                        }
                    }
                }
                textReader.close();
            } catch (IOException e) {
            }
            return (new StringBufferInputStream(ele));
        }

        public static void main( String args[] ) {
            stbuf1 genxml_obj = new stbuf1 ();
            String ptxt = new String(args[0]);
            genxml_obj.createxml(ptxt);
        }
    }

    Well, I think you can use the streaming API for XML processing provided by WebLogic. It is a pull model, not a push model like SAX. With it, you can select the events you want without having to react to every event, and you can filter events out.
    Sun also provides such a streaming API for XML processing, and I found a very short introduction to it on the Chinese Sun developer site, but I couldn't find any other information about it elsewhere. If you have such materials, please send them to my email: [email protected], and if I find more, I will be sure to post the links here. Hope it helps more or less :)
    @smile@

  • Filtering XML records while bursting - processing only a subset of the data

    Hi
    I'm attempting to filter the records in my XML data while bursting, so that depending on the value of a data element, the entire record is either skipped or included in the burst output.
    I only want a subset of the XML output to be included in the burst output.
    I've tried applying a filter at the select stage -
    <xapi:request select="/AR_INVOICE/LIST_EMAIL_HEAD/EMAIL_HEAD{EMAIL_IND!=''}">
    This keeps only the records where 'EMAIL_IND' is not null - i.e. there is an email address to send to -
    but instead of giving me multiple emails, one for each /AR_INVOICE/LIST_EMAIL_HEAD/EMAIL_HEAD,
    I get just one email that contains all of the data for the records that have email addresses.
    I also tried putting a filter on the template
    <xapi:template type="rtf" locale="" location="xdo://AR.XXARINVEMAILDUMMY.en.00/?getSource=true" translation="" filter="{EMAIL_IND!=''}" />
    i.e. having only one template and filtering as shown, but this has no effect.
    Note: I had to change the square brackets - '[' to curly brackets '{' to get the examples to show.
    Any ideas?
    Thanks in advance.
    Mike

    Hi
    I worked out a way to conditionally use only some of the data records in the bursting process - discarding the others.
    In EBS, I generate a set of data from a data-definition template that contains entries for some people who require email delivery and some who require printed output.
    In the concurrent programme, I specify an output RTF template that has a filter in it to print only the records for those who don't need emails.
    (Don't you just love the way that the designers call both the data definition and the output definition TEMPLATES - not confusing at all.....)
    This step generates a single sorted PDF file that is printed.
    Then came the tricky part: sending emails - using the bursting engine - only to those who need them, while "cleanly" disposing of the records for the people who do not want emails.
    What is not clear in any of the documentation at all is that the XML Publisher bursting engine MUST handle ALL the records that it receives in the XML input.
    If you specify a filter on the output template in the bursting control file that excludes some records (those not needing the email), the bursting engine doesn't know what to do with the remaining records.
    Enter stage left - multiple delivery methods.
    I simply defined another delivery method and template with a filter including only those records for people who do NOT need emails - type filesystem - and routed the output to the /tmp directory - the trash.
    The lesson(s) to be learnt:
    1) The bursting engine needs instructions to handle ALL the XML data that is fed to it.
    2) You can define as many output documents and delivery methods as you like - putting filters on each delivery method as you like - BUT ALL XML RECORDS MUST be provided for.
    3) XML records that are not required in one output / bursting stream can be handled in another - and trashed in /tmp - or another - area.
    The full bursting control file is shown below:
    1) First define the two delivery methods.
    2) Then define the two "documents" - using filters - with the two delivery methods.
    Hope this helps others wanting to do similar things.
    Mike
    <?xml version="1.0" encoding="UTF-8"?>
    <xapi:requestset xmlns:xapi="http://xmlns.oracle.com/oxp/xapi" type="bursting">
    <xapi:request select="/AR_INVOICE/LIST_EMAIL_HEAD/EMAIL_HEAD">
    ----- Define the two delivery methods - first the "email"---------
    <xapi:delivery>
    <xapi:email server="10.1.1.2" port="25"
    from="${EM_DOC_EMAIL_ADDRESS}" reply-to ="${EM_COLLECTOR_EMAIL_ADDRESS}">
    <xapi:message id="email" to="[email protected]"
    attachment="true" subject="${EM_OPERATING_UNIT} invoice for ${EM_CUSTOMER_NAME}">
    Please review the attached invoice - terms are ${EM_TERMS}
    This will be sent to collector email ${EM_COLLECTOR_EMAIL_ADDRESS} and customer email ${EM_DOC_EMAIL_ADDRESS}
    </xapi:message>
    </xapi:email>
    ------ Second - the "null" - type filesystem ----------------
    <xapi:filesystem id="null" output="/tmp/xmlp_null_${EM_CUSTOMER_NAME}" />
    </xapi:delivery>
    ------Then define the first document using the"email" delivery -------------
    <xapi:document output-type="pdf" delivery="email">
    <xapi:template type="rtf" locale=""
    location="xdo://AR.XXARINVEMAILDUMMY.en.00/?getSource=true" translation="" filter=".//EMAIL_HEAD[EM_DOC_EMAIL_ADDRESS!='NO_EMAIL']">
    </xapi:template>
    </xapi:document>
    ------ Then define the other document using the "null" delivery --------------
    <xapi:document output-type="pdf" delivery="null">
         <xapi:template type="rtf" locale=""
         location="xdo://AR.XXARINVEMAILDUMMY.en.00/?getSource=true" translation="" filter=".//EMAIL_HEAD[EM_DOC_EMAIL_ADDRESS='NO_EMAIL']">
         </xapi:template>
    </xapi:document>
    </xapi:request>
    </xapi:requestset>

  • How to import multiple xml records into PDF form

    Our organization needs to print and distribute several hundred PDF forms to our members. We want to populate the names and some other basic info at the top of each form so the member can fill in the rest, sign, and return it. I have the one-page form set up with the fields and it's connected to an XML schema data source. When I import the XML file it reads the first record and that's it - how do I get it to read all of the XML records into a copy of the form for each record in the XML data source file, so we can print and distribute the forms? We are using Acrobat 9 Pro on a Windows 7 32-bit machine.
    Thanks

    Is this a form created with Acrobat or LiveCycle?
    Acrobat would need uniquely named fields for each individual's page or pages if they are all in one file.
    If you are processing one PDF file per individual, I would look at splitting the XML or FDF file into a record for each individual and then populating each individual's PDF form.
    You could even use a tab-delimited file and a template to populate the PDF template and then spawn a new page from the template.
    With LiveCycle I would look at using a database for the variable data.

  • "xml:lang" is getting replaced by "lang" in XML Record converter class

    Hi All,
    The issue I am having is in the record converter class which converts an XML record to a JCA record: the string "xml:lang" is replaced by just "lang", due to which the transaction I am running fails, because the application this XML is posted to rejects it.
    Details:
    I have a custom Resource Adapter (on the JCA 1.5 spec) deployed on Oracle WebLogic Server 10.3. I invoke this adapter from a BPEL process (SOA Suite 11g). I have implemented the XMLRecordConverter interface, which converts the XMLRecord to a JCA record. During this conversion the "xml:lang" string is replaced by "lang", due to which the application this XML is posted to throws an error.
    The code snippet of the XMLRecordConverter implementation is below
    import javax.resource.ResourceException;
    import javax.resource.cci.Record;
    import javax.xml.transform.dom.DOMSource;
    import oracle.tip.adapter.api.record.RecordElement;
    import oracle.tip.adapter.api.record.XMLRecord;
    import oracle.tip.adapter.api.record.XMLRecordConverter;
    import oracle.tip.adapter.fw.record.RecordElementImpl;
    import oracle.tip.adapter.fw.record.XMLRecordImpl;
    import oracle.tip.adapter.fw.util.JCADOMWriter;
    import oracle.xml.parser.v2.DOMParser;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    public Record convertFromXMLRecord (XMLRecord xmlRecord) throws ResourceException
    {
        String xmlString;
        RecordElement payloadRecordElement = xmlRecord.getPayloadRecordElement();
        if (payloadRecordElement != null) {
            xmlString = serialize(payloadRecordElement);
        } else {
            throw new ResourceException("No data in record from EIS!");
        }
        RecordImpl rec = new RecordImpl();
        rec.setStr_Payload(xmlString);
        return rec;
    }

    private String serialize (RecordElement recordElement)
    {
        if (recordElement != null) {
            org.w3c.dom.Element wsifEnvelopeRootElement = recordElement.getDataAsDOMElement();
            if (wsifEnvelopeRootElement != null) {
                JCADOMWriter domSerializer = new JCADOMWriter();
                String xmlString = domSerializer.print(wsifEnvelopeRootElement);
                return xmlString;
            }
        }
        return "<empty>";
    }
    The resulting XML has every occurrence of "xml:lang" replaced with "lang".
    If I add the namespace (e.g. xmlns:xml="http://www.w3.org/XML/1998/namespace") to the root element of the XML coming from the BPEL process, then the replacement does not happen.
    Do any of the Oracle APIs used in the above code use a namespace-aware parser, or is the BPEL engine replacing "xml:lang" with "lang" when it passes the XML to the XMLRecord converter class?
    The above code works perfectly fine in Oracle Fusion 10g (Oracle AS 10g and SOA Suite 10g).
    Any information will be really helpful, especially about the Oracle APIs.
    Thanks,
    Amith

    Hi Wilson,
    Do you know how to move or delete this thread? I didn't find any option to move or delete this thread.

  • Error : idoc xml record in segment attribute instead of SEGMENT

    hi friends
    Can anyone solve my problem? In message mapping I mapped to an IDoc and mapped all the fields, yet I am still getting the error "IDOC XML RECORD IN SEGMENT ATTRIBUTE INSTEAD OF SEGMENT". I don't know what this error means.
    Can anyone please solve this problem? I have been working on this scenario for 5 days. Help me.
    thanks in advance
    Vasu

    Hi Vasudeva,
    Can you please provide a little more detail on the scenario?
    Also, at which point are you getting this error?
    Assuming that you have created a message mapping from some source message to the target IDoc message, here are some suggestions:
    1) Test the message mapping. (Are you getting the error in testing itself?)
    2) Apart from the mandatory fields' mapping, are there any constants to be assigned to some IDoc fields? Or any node to be disabled? Or any other such additional things?
    Regards,

  • IPhone 4s Voice Memo App has 5 second delay when the record button is pressed. When it starts recording, it goes from 0 seconds to 5 or so seconds recorded. This happens randomly and often and sometimes has the delay but starts at zero. Solution Anyone?

    After iOS 7 update, my iPhone 4s Voice Memo App has 5 second delay when the record button is pressed. When it starts recording, it goes from 0 seconds to 5 or so seconds that it shows has recorded. This happens randomly and often, sometimes it will have the 5+ second delay but starts recording at zero seconds. Besides the delay it has been working fine as far as saving and playback is concerned. I have plenty of storage on the phone itself and it NEVER had this problem before I updated to iOS 7. I've reset the phone a couple times by holding down the power and home buttons at the same time. The reason I have an issue with this is that I'm always recording song ideas, melodies, and scratch takes; what I'm saying is when I come up with an idea I need to be able to know that when I hit record it will start right then so I don't forget anything that has just popped in my mind.
    Does anyone have a solution or suggestion?
    Thanks


  • Delta records not updating from DSO to CUBE in BI 7

    Hi Experts,
    Delta records are not updating from DSO to CUBE.
    In the DSO the key figure value shows '0', but in the CUBE the same record shows '-1'.
    I checked the change log table of the DSO; it has 5 records:
    ODSR_4LKIX7QHZX0VQB9IDR9MVQ65M -  -1
    ODSR_4LKIX7QHZX0VQB9IDR9MVQ65M -   0
    ODSR_4LIF02ZV32F1M85DXHUCSH0DL -   0
    ODSR_4LIF02ZV32F1M85DXHUCSH0DL -   1
    ODSR_4LH8CXKUJPW2JDS0LC775N4MH -   0
    but the active data table has one record - 0.
    How do I correct the delta load?
    Regards,
    Jai

    Hi,
    I think initially the value was 0 (ODSR_4LH8CXKUJPW2JDS0LC775N4MH - 0, new image in changelog) and this got loaded to the cube.
    Then the value got changed to 1 (ODSR_4LIF02ZV32F1M85DXHUCSH0DL - 0, before image & ODSR_4LIF02ZV32F1M85DXHUCSH0DL - 1, after image). Now this record updates the cube with value 1. The cube has 2 records, one with 0 value and the other with 1.
    The value got changed again to 0 (ODSR_4LKIX7QHZX0VQB9IDR9MVQ65M - (-1), before image &
    ODSR_4LKIX7QHZX0VQB9IDR9MVQ65M - 0, after image). Now these records get aggregated and update the cube with (-1).
    The cube has 3 records, with 0, 1 and -1 values....the effective total is 0 which is correct.
    Is this not what you see in the cube? Were the earlier requests deleted from the cube?

  • How to delete the all records in Ztable from report program

    Hi Guys,
    Good Day!
    How do I delete all records in a Z-table from a report program (meaning I want to clean out the Z-table's records from a report program)? Please send me the code.
    Thanks & Regards,
    Reddy

    Use this.
    DELETE { {FROM target [WHERE sql_cond]}
           | {target FROM source} }.
    But before deleting the rows, please check whether this Z-table is being used in any other programs or by other users.
    Check the where-used list: in SE11, enter the table name, then choose Utilities -> Where-Used List.
    I hope this helps.
    thanks.

  • Looking for App to record my screen from Ipad 1

    Hi;
    I am looking for an app to record my screen on my iPad 1... the idea is to send a video file to one of my clients showing what we are doing.
    Thanks

    You can take a screen shot of any page on your iPad. Simply hold the power and home buttons for less than a second and you'll hear the camera shutter. The picture will be stored in your camera roll and you can then email it.
