RFC Lookup - Best Approach To Parse Returned Tables

Hi Everyone,
We are doing some RFC lookups at a header node that return tables for all of the items (for performance reasons). I am trying to figure out the best way to extract the values from the table, which most of the time has more than one key column. At the moment I am doing this through DOM, but I have also heard about using arrays, and have even seen an example of using a hashtable with all of the values concatenated together, to be parsed out later using substrings. I'm looking for the best approach to:
1) Store this data as some kind of global object to look up during the header
2) Search and parse from the global object during line items.
As an example, I have the following lines in my table:
Key1,Key2,Value1,Value2,Value3
A,A,1,2,3
A,B,1,2,4
A,C,3,4,2
B,A,2,4,6
And during line item processing I may want to find the value for Key1=A, Key2=C.
Thanks
Peter
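
For what it's worth, here is a minimal sketch of the hashtable idea, assuming the RFC response has already been parsed into a DOM Document and each table row arrives as an <item> element with KEY1/KEY2/VALUE1..VALUE3 children (all element names here are illustrative). The map is built once at the header; each line-item lookup is then a single get on the concatenated key:

import java.util.HashMap;
import java.util.Map;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class LookupTableCache {

    // Composite key "Key1|Key2" -> array of value columns
    private final Map<String, String[]> rows = new HashMap<String, String[]>();

    // Build the cache once at the header node from the parsed RFC response.
    // Element names (item, KEY1, KEY2, VALUE1..VALUE3) are illustrative.
    public void load(Document rfcResponse) {
        NodeList items = rfcResponse.getElementsByTagName("item");
        for (int i = 0; i < items.getLength(); i++) {
            Element item = (Element) items.item(i);
            String key = text(item, "KEY1") + "|" + text(item, "KEY2");
            rows.put(key, new String[] {
                text(item, "VALUE1"), text(item, "VALUE2"), text(item, "VALUE3") });
        }
    }

    // O(1) lookup during line-item processing
    public String[] get(String key1, String key2) {
        return rows.get(key1 + "|" + key2);
    }

    private static String text(Element parent, String tag) {
        NodeList nl = parent.getElementsByTagName(tag);
        return nl.getLength() > 0 ? nl.item(0).getFirstChild().getNodeValue() : "";
    }
}

With the sample table above, get("A", "C") would return {"3", "4", "2"}.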

Hi Peter,
Please take a look at these...
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/xi-code-samples/xi%20mapping%20lookups%20rfc%20api.pdf
/people/siva.maranani/blog/2005/08/23/lookup146s-in-xi-made-simpler
/people/sravya.talanki2/blog/2005/12/21/use-this-crazy-piece-for-any-rfc-mapping-lookups
/people/alessandro.guarneri/blog/2006/03/27/sap-xi-lookup-api-the-killer
cheers,
Prashanth
P.S. Please mark helpful answers

Similar Messages

  • Best approach to implement "rate table"

    My application needs to initialize several "rate tables" (think tax rate tables, etc.) from property and/or XML files and then perform LOTS of lookups into these tables.
    None of the collections seems to provide quite the right functionality. A Map is close, but I also need to be able to look up by key (e.g. income) and return the rate for the correct "bracket". That is, I want to find and return the rate associated with the maximum key less than or equal to my lookup value.
    I'm thinking the best approach is to wrap or extend a Map and provide an additional lookup method.
    I should also point out that these tables tend to be relatively small (from 5 to 20 or 30 "rows"), which may indicate I'd be better off with a brute-force array implementation.
    Any suggestions and/or comments as to how best to proceed would be greatly appreciated.
    R.Parr
    Temporal Arts

    You might try wrapping a SortedMap. That would make it easy to return <= or >= results.
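
    A minimal sketch of that wrapped-SortedMap idea, using TreeMap's floorEntry (Java 6+); the bracket bounds and rates here are made up for illustration:

    import java.util.Map;
    import java.util.TreeMap;

    public class RateTable {
        // Bracket lower bound -> rate; TreeMap keeps keys sorted.
        private final TreeMap<Double, Double> brackets = new TreeMap<Double, Double>();

        public void addBracket(double lowerBound, double rate) {
            brackets.put(lowerBound, rate);
        }

        // Rate of the bracket with the greatest lower bound <= income.
        public double rateFor(double income) {
            Map.Entry<Double, Double> e = brackets.floorEntry(income);
            if (e == null) {
                throw new IllegalArgumentException("income below lowest bracket: " + income);
            }
            return e.getValue();
        }

        public static void main(String[] args) {
            RateTable t = new RateTable();          // illustrative brackets
            t.addBracket(0.0, 0.10);
            t.addBracket(10000.0, 0.20);
            t.addBracket(40000.0, 0.30);
            System.out.println(t.rateFor(25000.0)); // prints 0.2
        }
    }

    On a pre-6 JDK without NavigableMap, a similar <= lookup can be built from SortedMap's headMap plus lastKey.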

  • Best approach to publish new table or new column on existing table on MDW?

    Hi,
    I'm referring to Olite R3 without any patches. I don't use the Java API, I use MDW.
    If I have a new table or a new column on an existing table, what's the best approach to publish it?
    I'm asking this because I've tried lots of approaches and the only solution was, step-by-step:
    1) On MDW, drop the publication item
    2) Add again the publication item
    3) Associate the publication item to the publication
    4) Save everything
    5) File / Deploy (if I don't do it, it does not work)
    6) Tools/Package... (that's where it's a problem: if I don't remove the app and create it again it does not work!)
    7) on the client side, I perform a msync with "force refresh"
    That's the only way I found to publish new items for sure. Any other action does not push the new table or new column to the client's embedded DB.
    Any comments?
    Regards,
    Maurício Américo Vernaschi.

    I do not use MDW, rather a mix of Java and the final publish step you use, but:
    Adding new PIs should be easy: just add them and re-publish (no need to drop anything).
    For changes, if you just have new columns and the SQL statement is 'select * from', then you should just need to make the changes in the base schema objects and run the publish with no changes, and the updates should be picked up. If you are selecting specific columns, then update and re-publish.
    When using MDW, at the end you can save the application as a jar file, and then use this jar file to publish in the Mobile Manager - this is the best way to publish.
    Have a look at this jar file in WinZip and you will find it contains a web.xml file. This is the XML definition of the publication items, and for simple changes it is possible to just edit this file and republish via the Mobile Manager.

  • RFC Lookups codepage

    Hi guys,
    I am using RFC API to perform some lookups from within my mapping program.
    In order to create the request in XML, I am using a DOM XML parser.
    My code works fine when the input data is in English, but when it is in another language, e.g. Greek, the input data is represented wrongly in the XML document.
    The piece of code where I create the XML request is the following:
    Document docReq = null;
    // Building up RFC Request Document
    docReq = builder.newDocument();
    RFCInterfaceLine il = null;
    Element documentElement = docReq.createElement("ns0:" + remoteFunctionName);
    documentElement.setAttributeNS("http://www.w3.org/2000/xmlns/", "xmlns:ns0", "urn:sap-com:document:sap:rfc:functions");
    Node root = docReq.appendChild(documentElement);
    rfcInterfaceParametersLines = rfcInterfaceParameters.getLines();
    for (int i = 0; i < functionParameters.size(); i++) {
        il = (RFCInterfaceLine) rfcInterfaceParametersLines.get(i);
        // Create one element per import ("I") parameter
        if (il.getParameterType().equalsIgnoreCase("I")) {
            root.appendChild(docReq.createElement(il.getParameterName()))
                .appendChild(docReq.createTextNode((String) functionParameters.get(i)));
        }
    }
    RFCInterfaceLine is a class used to represent the parameters (inputs, outputs and exceptions) used by the Remote Function Module in ABAP. "I" stands for Import. So when I have an import parameter, I create an element in XML.
    However, in the final XML document I cannot see the encoding="UTF-8" attribute in the header. Is this what is missing to enable other languages as well?
    Evaggelos

    Hi, refer to these...
    If you want to use DOM, you'll also have to choose a parser, since DOM's specification doesn't specify a parser, just document and element handlers, so there are various implementations of parsers for DOM. I've used Xerces's DOMParser (org.apache.xerces.parsers.DOMParser) on Java Mappings with success (never tried on UDF, though) and it's very simple.
    After you get the output stream from the rfc lookup, just use:
    DOMParser parser = new DOMParser();
    InputSource input = new InputSource(in);
    parser.parse(input);
    Document MyDoc = parser.getDocument();
    to get the Document object. Then you can use regular DOM methods to get the value.
    For example, with
    String value = MyDoc.getDocumentElement().getFirstChild().getNodeValue();
    you'd get the value of the first child of the root element.
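    On the missing encoding declaration itself, a minimal sketch (an assumption, not from this thread) of serializing the request document with an explicit UTF-8 declaration via the standard JAXP Transformer; this both writes the encoding="UTF-8" header and encodes non-Latin characters such as Greek correctly:

    import java.io.ByteArrayOutputStream;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;

    public class DomSerializer {
        // Serialize a DOM document with an explicit UTF-8 declaration so that
        // non-Latin characters survive the round trip to the RFC lookup.
        public static byte[] toUtf8Bytes(Document doc) throws Exception {
            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.ENCODING, "UTF-8");
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            t.transform(new DOMSource(doc), new StreamResult(out));
            return out.toByteArray();
        }
    }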
    Have you gone through the following document on XI Mapping lookups RFC API by Michal?
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/xi-code-samples/xi%20mapping%20lookups%20rfc%20api.pdf
    Mapping lookups - RFC API
    Mapping Lookups (New)
    http://help.sap.com/saphelp_erp2005/helpdata/en/0f/f084429fb4aa1ae10000000a1550b0/frameset.htm
    For Mapping Lookups
    /people/alessandro.guarneri/blog/2006/03/27/sap-xi-lookup-api-the-killer
    /people/sravya.talanki2/blog/2005/12/21/use-this-crazy-piece-for-any-rfc-mapping-lookups
    This blog and article deals with calling your RFC from your JAVA MAPPING / User Defined Function.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/uuid/801376c6-0501-0010-af8c-cb69aa29941c
    Java Mapping
    http://help.sap.com/saphelp_nw04/helpdata/en/e2/e13fcd80fe47768df001a558ed10b6/content.htm
    DOM parser API
    http://java.sun.com/j2se/1.4.2/docs/api/org/w3c/dom/package-frame.html
    On how to create XML docs with SAX and DOM, go through these links:
    http://www.cafeconleche.org/books/xmljava/chapters/ch09.html
    http://www.cafeconleche.org/books/xmljava/chapters/ch06.html
    Also go through these Blogs....
    /people/prasad.ulagappan2/blog/2005/06/08/sax-parser
    /people/prasad.ulagappan2/blog/2005/06/29/java-mapping-part-i
    /people/prasad.ulagappan2/blog/2005/06/29/java-mapping-part-ii
    /people/prasad.ulagappan2/blog/2005/06/29/java-mapping-part-iii
    /people/sravya.talanki2/blog/2005/12/21/use-this-crazy-piece-for-any-rfc-mapping-lookups
    Thanks !!

  • What is the best approach to handle multiple FKs with a single table?

    If two tables are joined with each other in more than one way, for example
    MAIN table is (col1, col2,....coln, person_creator_id, person_modifier_id)
    PERSON table is (person_id, name, address,........ phone) etc
    At database level PERSON_CREATOR_FK and PERSON_MODIFIER_FK are defined.
    Objective is to create a report that shows
    col1, col2...coln, person creator name, person modifier name
    If the above two objects are imported with FKs in an EUL and Discoverer Plus is used to create the above report, then on first inclusion of a person name Discoverer Plus will ask you to pick the join (provided the checkbox to disable this feature is not checked). Once you pick the 'person creator' join it will never allow you to pick the person modifier name.
    One solution is to create a custom folder with a query like
    select col1, col2,...coln,
    pc.name creator_name, pc.address,.... pc.phone,
    pm.name modifier_name, pm.address,.... pm.phone
    from main m,
    person pc,
    person pm
    where m.person_id_creator = pc.person_id
    and m.person_id_modifier = pm.person_id
    The second solution is to import the PERSON folder twice in the EUL (optionally naming one as person_creator and the other as person_modifier) and manually define one join per table, i.e. join MAIN with PERSON_CREATOR on person_creator_fk and join MAIN with the PERSON_MODIFIER table using person_modifier_fk.
    Now discoverer plus will let you drag Name from each person folder without needing to resolve multiple joins.
    Question is, what approach is better OR is there a better way?
    With solution 1 you will not be able to use functions on folder items.
    With solution 2 there is an EUL design overhead of including the same object multiple times and then manually defining all joins (or deleting unwanted ones), and this could be a problem when you have person_modifier and person_creator in nearly all tables. It could be more complicated if the person table is further linked to other tables and users want to see that information too (for instance, if the person address is stored in a LOCATION table joined by location_id and users want to see both the creator address and the modifier address... now you will have to create multiple LOCATION folders).
    A third solution could be to register a function in Discoverer that returns the person name when a person_id is passed. This will work perfectly for the above requirement, but a downside is that the report will run slower if they need filters on person names (then the function will be used in the where clause). Also, this solution is very specific to the above scenario; it will not work if you want to give the report developer the freedom to pick any attribute from the person table (let's say the person table contains 50 attributes; then it's not a good idea to register 50 functions).
    Any comments/suggestion will be appreciated.
    thanks

    Hi
    In a roundabout way you have really answered your own question :-)
    In my opinion, the best approach (although by all means not the only approach - see below) would be to have the object loaded as two folders, with one join going to the first folder and the second join to the other folder. You would of course name the folders appropriately.
    Here's a workflow that I use all of the time and one that I teach when I'm giving Discoverer Administrator training. It might help you:
    1. Bring in the PERSON folder to begin with
    2. Make all necessary adjustments to bring it up to deployment standard. These adjustments would be: folder name (e.g. PERSON_CREATOR), item names, item placement, default positions, default aggregation and so on.
    3. Create or assign the required lists of values
    4. Create any required calculations
    5. Create any required conditions
    6. Create the first join from this folder to MAIN.
    7. Click on the heading for the folder and press CTRL-C.
    8. Click on the heading for the business area and press CTRL-V. A second copy of the folder, complete with all of the adjustments you made earlier will be inserted into the business area.
    Note: joins are not copied, everything else is.
    9. Rename this folder to, say, PERSON_MODIFIER
    10. Rename the items as appropriate
    11. Add a join from this folder to MAIN - you're done
    Other ideas that I have used and that work well would be to use a database view or create a complex folder. Either will work; in both cases you would need to join on some column other than the ones you referred to earlier.
    I hope this helps
    Best wishes
    Michael

  • Best approach to return large data (> 4K) from a stored proc

    We have a stored proc (Oracle 8i) that:
    1) receives some parameters.
    2) performs computations which create a large block of data
    3) returns this data to the caller.
    It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) into a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested".
    I suspect this is taking too much time; any tips about the best approach here? Is there a resource with REAL examples on returning large data?
    If I switch to CLOB, will it speed the process, be compatible with callers, etc.? All references to CLOB I saw use trivial examples.
    Thanks for any help,
    Yoram Ayalon

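    Since the thread got no reply, here is a hedged JDBC sketch of the CLOB route (the procedure name, parameters, and connection details are all hypothetical): a CLOB OUT parameter can be read directly by the caller, avoiding the temp-table detour that the LONG approach forces.

    import java.sql.CallableStatement;
    import java.sql.Clob;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Types;

    public class ClobCaller {
        public static void main(String[] args) throws Exception {
            // Connection details and procedure name are hypothetical
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@host:1521:orcl", "user", "password");
            CallableStatement cs = con.prepareCall("{call get_big_data(?, ?)}");
            cs.setInt(1, 42);                       // some input parameter
            cs.registerOutParameter(2, Types.CLOB); // CLOB OUT instead of LONG
            cs.execute();
            Clob clob = cs.getClob(2);              // read the whole CLOB as a String
            String data = clob.getSubString(1, (int) clob.length());
            System.out.println("received " + data.length() + " chars");
            cs.close();
            con.close();
        }
    }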

  • Parsing in RFC lookup

    Hi All,
    I have written an RFC lookup for mapping the company code, but I am not getting the result when parsing the XML. I have tested the code, and without parsing I am getting the correct value in the XML. Can anybody send code for parsing the XML data using DOM?
    Below is the code which I am using:
    try {
        docResponse = builder.parse(in);
        if (docResponse == null) {
            importanttrace.addWarning("docResponse is null");
        }
        res = docResponse.getElementsByTagName("COMPANYID").item(0).getFirstChild().getNodeValue();
        if (res == null) {
            importanttrace.addWarning("res is null");
        }
    } catch (Exception e) {
        importanttrace.addWarning("Error when parsing RFC Response - " + e.getMessage());
    }
    try {
        // Free resources, close the accessor.
        if (accessor != null) {
            try {
                accessor.close();
            } catch (LookupException e) {
                importanttrace.addWarning("Error while closing accessor " + e.getMessage());
            }
        }
    } catch (Exception e) {
        importanttrace.addWarning("Result value not found in DOM - " + e);
    }
    // return the result obtained above
    return res;
    Thanks,
    Aparna

    Maybe it's really just the typo in your element name and everything works fine when you use:
    res = docResponse.getElementsByTagName("COMPANY_CODE").item(0).getFirstChild().getNodeValue();
    And I think the method getTextContent() should also do the trick, replacing the two calls you're using:
    res = docResponse.getElementsByTagName("COMPANY_CODE").item(0).getTextContent();

  • Best approach for RFC call from Adapter module

    What is the best approach for making an RFC call from a receiver file adapter module?
    1. JCo
    2. Is it possible to make use of the MappingLookupAPI classes to achieve this, or do those run in the mapping runtime environment only?
    3. Any other way?
    Has anybody ever tried this? Any pointers?
    Regards,
    Amol

    Hi,
    The JCo lookup is internally the same as the JCo call, the only difference being that you are not hardcoding the system-related data in the code. So it's easier to maintain during transports.
    Also, the JCo lookup code is more readable.
    Regards
    Vijaya
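
    For option 1, a minimal sketch with the standalone SAP JCo 3 API (the destination name, function module, and parameter names are hypothetical); note that from an adapter module you have to manage the destination configuration yourself, which is exactly what the lookup API hides:

    import com.sap.conn.jco.JCoDestination;
    import com.sap.conn.jco.JCoDestinationManager;
    import com.sap.conn.jco.JCoFunction;

    public class RfcFromModule {
        public String lookup(String input) throws Exception {
            // "MY_BACKEND" must exist as a configured JCo destination (hypothetical name)
            JCoDestination dest = JCoDestinationManager.getDestination("MY_BACKEND");
            // Z_MY_LOOKUP and its parameters are hypothetical
            JCoFunction fn = dest.getRepository().getFunction("Z_MY_LOOKUP");
            fn.getImportParameterList().setValue("INPUT", input);
            fn.execute(dest);
            return fn.getExportParameterList().getString("OUTPUT");
        }
    }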

  • What is the best approach to return Large data from Stored Procedure?

    no answers to my original post, maybe better luck this time, thanks!
    We have a stored proc (Oracle 8i) that:
    1) receives some parameters.
    2) performs computations which create a large block of data
    3) returns this data to the caller.
    It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) into a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested".
    I suspect this is taking too much time; any tips about the best approach here? Is there a resource with REAL examples on returning large data?
    If I switch to CLOB, will it speed the process, be compatible with callers, etc.? All references to CLOB I saw use trivial examples.
    Thanks for any help,
    Yoram Ayalon

    Create a new farm in the secondary Data Center at the same patch level with the desired configuration. Replicate the databases using the method of choice (Mirroring, AlwaysOn, etc.). Create a downtime window during which you can then attach the databases to the new farm's Web Application(s)/Service Application(s).
    Trevor Seward

  • RFC lookup error when table in change mode

    Hi,
    I'm using an RFC lookup from XI to translate an external partner number to an internal one. The input to the FM in R/3 is the external partner number, and the FM should return the internal partner number to XI. The problem is that when a user edits the table that my FM reads from, I get an error.
    Isn't it possible to read an R/3 table while it is in change mode? Is my only way around this to create a Z version of this table?
    Claes

    Hi Claes
    have a look at these links,
    For your RFC lookup,
    /people/alessandro.guarneri/blog/2006/03/27/sap-xi-lookup-api-the-killer
    /people/sravya.talanki2/blog/2005/12/21/use-this-crazy-piece-for-any-rfc-mapping-lookups
    RFC lookups PDF: https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/xi-code-samples/xi%20mapping%20lookups%20rfc%20api.pdf
    /people/alessandro.guarneri/blog/2006/03/27/sap-xi-lookup-api-the-killer
    Michal's blog on RFC mapping lookups: /people/michal.krawczyk2/blog/2005/09/15/xi-rfc-mapping-lookups-from-bc-to-xi
    RFC Lookup.
    RFC lookup return no values
    http://help.sap.com/saphelp_nw2004s/helpdata/en/17/d609b48ea5f748b47c0f32be265935/frameset.htm
    Also check this useful link which also talk about Value Lookup:
    Value lookup
    http://help.sap.com/saphelp_nw04/helpdata/en/2e/96fd3f2d14e869e10000000a155106/content.htm
    /people/prasad.illapani/blog/2006/10/25/how-to-check-jdbc-sql-query-syntax-and-verify-the-query-results-inside-a-user-defined-function-of-the-lookup-api
    Lookups - /people/morten.wittrock/blog/2006/03/30/wrapping-your-mapping-lookup-api-code-in-easy-to-use-java-classes
    Lookups - /people/alessandro.guarneri/blog/2006/03/27/sap-xi-lookup-api-the-killer
    Thanks !!

  • Single RFC Lookup should return multiple values - but returns no values

    I have an RFC lookup in my PID system that I had to change due to a test defect.
    The FM I wrote was working on a single value and returning the correct entry... however, it now needs to return multiple entries and map to 0..unbounded.
    I have made the changes and the FM works in ECD; however, when I call the FM from the mapping, it does not return any values. Now I am asking my Basis team to change PIAPPLUSER to a dialog user so I can set a breakpoint for an external user.
    Has anyone done a single-to-multi-value mapping on a lookup? I am not sure that it is the FM that is incorrect, as it is very simple code:
    DATA: lt_jobtype TYPE zhr_lkupjobtype_t.
      CLEAR     lt_jobtype.
      REFRESH lt_jobtype.
      SELECT * FROM zhr_lkupjobtype
        INTO TABLE lt_jobtype
        WHERE zinterface_id = import-zinterface_id
        AND   zsap_jobtype  = import-zsap_jobtype.
      MOVE lt_jobtype TO export.
    Is there a way of checking the RFC part of a message mapping? I checked the full trace within graphical mapping, but this shows no return.

    I am using a MOVE instead of APPEND.
    According to the keyword help, if the tables are identical you can use MOVE.
    It works when I test in SE37.
    I have tested it both ways and I get the same result each time.
    I even tried it this way:
      SELECT * FROM zhr_lkupjobtype
        INTO TABLE export 
        WHERE zinterface_id = import-zinterface_id
        AND   zsap_jobtype  = import-zsap_jobtype.
    And that works too, getting the target values into the export table! That's what made me think it was not the code, as I have tried three different ways of writing the same code... The function module works perfectly anyway! But when it is called from PI I cannot see whether any values are returned.

  • RFC lookup to return multiple parameters

    Hi,
    I have a File to Idoc scenario involving RFC lookups.
    The RFC in this case has 2 input parameters and returns 5 output parameters at runtime. Can anybody help me with a UDF that can be used to send the 2 input parameters to the RFC, receive the 5 output parameters in the mapping, and post them to the target IDoc structure?
    Your help would be much appreciated !
    Thanks & Regards,
    Sherin Jose P

    Hi,
    Please find below the UDF that I used when I had the same requirement.
    My source structure is:
        MT_Source
            SSN
    My target structure is:
        ZIdoc
            EmpName
            indClientSite
            doj, etc.
    My mapping program is like this:
        ssn ----> findEmpInfo ----> findEmpName ----> EmpName
                  findClientSite ----> clientsite
    UDF code:
    public String findEmpInfo(String ssn, Container container) throws StreamTransformationException {
        // Build the RFC request payload for the lookup
        String inputString = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                + "<ns0:RFC_GETEMPLOYEEDETAIL xmlns:ns0=\"urn:sap-com:document:sap:rfc:functions\">"
                + "<SSN>" + ssn + "</SSN>"
                + "</ns0:RFC_GETEMPLOYEEDETAIL>";
        String targetValue = "";
        AbstractTrace trace = container.getTrace();
        RfcAccessor rAcc = null;
        ByteArrayOutputStream out = null;
        try {
            Channel ch = LookupService.getChannel("BS_CLNT", "CC_Receiver_RFCLookup"); // determine channel
            rAcc = LookupService.getRfcAccessor(ch);                                   // get RfcAccessor
            InputStream iStream = new ByteArrayInputStream(inputString.getBytes());
            XmlPayload payload = LookupService.getXmlPayload(iStream); // XML payload form of the input
            Payload result = rAcc.call(payload);                       // make the lookup call
            InputStream in = result.getContent();
            byte[] bArray = new byte[512];
            out = new ByteArrayOutputStream(512);
            for (int i = in.read(bArray); i > 0; i = in.read(bArray)) {
                out.write(bArray, 0, i);
            }
            targetValue = out.toString();
        } catch (LookupException ex) {
            trace.addDebugMessage("LookupException " + ex.getMessage());
        } catch (IOException ex) {
            trace.addDebugMessage("IOException " + ex.getMessage());
        } finally {
            if (out != null) {
                try {
                    out.close();
                } catch (IOException ex) {
                    trace.addDebugMessage("Error during closing buffer " + ex.getMessage());
                }
            }
            if (rAcc != null) {
                try {
                    rAcc.close();
                } catch (LookupException ex) {
                    trace.addDebugMessage("Error while closing RFCAccessor " + ex.getMessage());
                }
            }
        }
        // Cache the whole response so the other UDFs can reuse it without a second call
        GlobalContainer gContainer = container.getGlobalContainer();
        gContainer.setParameter("RFCResponse", targetValue);
        return targetValue;
    }

    public String findEmpName(String str, Container container) throws StreamTransformationException {
        GlobalContainer gContainer = container.getGlobalContainer();
        Object obj = gContainer.getParameter("RFCResponse");
        str = obj.toString();
        // Undo the XML escaping so the cached response can be parsed as a document
        str = str.replaceAll("&lt;", "<");
        str = str.replaceAll("&quot;", "\"");
        str = str.replaceAll("&gt;", ">");
        String clntSite = "";
        AbstractTrace trace = container.getTrace();
        ByteArrayInputStream in = new ByteArrayInputStream(str.getBytes());
        try {
            DocumentBuilderFactory dbFact = DocumentBuilderFactory.newInstance();
            DocumentBuilder dBuild = dbFact.newDocumentBuilder();
            Document doc = dBuild.parse(in);
            NodeList nList1 = doc.getElementsByTagName("CLNTSITE");
            for (int i = 0; i < nList1.getLength(); i++) {
                Node nFname = nList1.item(i);
                clntSite = nFname.getChildNodes().item(0).getNodeValue();
                trace.addWarning("Client Site : " + clntSite);
            }
        } catch (Exception ex) {
            trace.addWarning("Exception Occurred : " + ex);
        }
        return clntSite;
    }
    Hope this helps.
    Thanks & Regards
    Priyanka

  • Best approach - to create an RTF template having more than 50 tables

    Hi All,
    Need your help. I am new to BI Publisher. Currently we are using BIP 11g.
    I want to develop an .rtf template having lots of layouts and images.
    Data is coming from different tables (for example, pulling from around 40 tables). When I tried to pull data from 5 tables by joining them, it took a long time using a data model in BI Publisher 11g, saved as XML and used in the Word doc.
    Could you please suggest the best approach: whether I need to develop the .rtf template via a data model or via a query to generate the report.
    Please also guide me.
    Regards & Thanks in advance.

    It's a very specific requirement.
    First of all, it relates to the logic behind the report: for example, are the 50 tables related? Or 50 independent tables? Or maybe 5 related and the others independent?
    Based on the relation of the tables you create your SQL statement(s).
    How many SQL statements you will have leads to identifying the way to get the data, for example by package or trigger etc.
    Keep in mind the size of the resulting select statement(s):
    if the size is, say, 1 MB it should be fast to get the report, but 1000 MB can consume a lot of time.
    Also keep in mind that the time is spent not only selecting the data but also merging the data with the template.
    It looks like experimenting, and knowing the full logic of the report, is the only way to get the needed output in terms of data and time.

  • RFC Lookup Table Parameters Reusability

    Dear All
    My requirement is to map fields in an IDoc from the file data; there are some fields in the IDoc for which I need to fetch data from SAP.
    For this I am using an RFC lookup; I am retrieving values from the RFC's table parameters,
    and these table parameters are used multiple times in the mapping.
    Right now I have to execute the same RFC for a set of input data at least 5 or 6 times.
    This is affecting the performance of the interface.
    Is there any concept of reusability for RFC lookups while mapping in PI?
    Can we use the Java initialization section in the mapping to execute the RFC once and use the output table data while mapping the various target fields?
    Hoping for a Positive Reply
    Regards
    Bhasker

    Hi,
    1) If it is PI 7.1 there are mapping enhancements, such as
       a) variables for storing intermediate mapping values,
       b) enhancements for RFC lookup etc.
    You can find the 7.1 PDFs on SDN.
    2) You can have a two-stage mapping, putting the stages sequentially in the Interface Mapping object.
      First step: Source to Target (original target structure).
    Here you can do all the mapping and also the one-time RFC lookup, leaving out the other fields where you need the lookup values again.
      Second step: Target to Target.
    Here the already-mapped target becomes your input; you know which fields have the value and which fields you need to map again, and all other fields are mapped one-to-one.
    Regards
    Vishnu
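
    A sketch of that caching idea (the parameter name and map contents are illustrative; Container, GlobalContainer, and StreamTransformationException come from the PI mapping API, as in the UDF examples elsewhere in this thread): the first UDF call performs the lookup and stores the parsed table in the GlobalContainer, and every later call reads from the cache instead of executing the RFC again.

    // Inside a UDF: execute the RFC lookup only on the first call,
    // then serve every further target field from the cached result.
    public String cachedLookup(String key, Container container)
            throws StreamTransformationException {
        GlobalContainer gc = container.getGlobalContainer();
        java.util.Map cache = (java.util.Map) gc.getParameter("RFC_TABLE_CACHE");
        if (cache == null) {
            cache = new java.util.HashMap();
            // ... perform the RFC lookup once here and fill the map,
            // e.g. cache.put(rowKey, rowValue) for every table row ...
            gc.setParameter("RFC_TABLE_CACHE", cache);
        }
        Object value = cache.get(key);
        return value == null ? "" : value.toString();
    }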

  • Best approach to do Range partitioning on Huge tables.

    Hi All,
    I am working on 11gR2 oracle 3node RAC database. below are the db details.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    In my environment we have 10 big transaction tables (10 billion rows) and they are growing bigger and bigger. Now management is planning to do a range partition based on the created_dt partition key column.
    We tested this partitioning strategy with a few million records in another environment with the steps below.
    1. CREATE TABLE TRANSACTION_N
    PARTITION BY RANGE ("CREATED_DT")
    ( PARTITION DATA1 VALUES LESS THAN (TO_DATE(' 2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART1,
    PARTITION DATA2 VALUES LESS THAN (TO_DATE(' 2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART2,
    PARTITION DATA3 VALUES LESS THAN (TO_DATE(' 2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART3
    as (select * from TRANSACTION where 1=2);
    2. Exchange partition to move the data from the old table into the new partitioned table.
    ALTER TABLE TRANSACTION_N
    EXCHANGE PARTITION DATA1
    WITH TABLE TRANSACTION
    WITHOUT VALIDATION;
    3. create required indexes (took almost 3.5 hrs with parallel 16).
    4. Rename the table names and drop the old tables.
    This took around 8 hrs for one table which has 70 million records, so for billions of records it will take much more than 8 hrs. But the problem is we get only 2 to 3 hrs of downtime in production to implement these changes for all tables.
    Can you please suggest the best approach to copy that much data from the existing table to the newly created partitioned table and create the required indexes?
    Thanks,
    Hari

    >
    In my environment we have 10 big transaction tables (10 billion rows) and they are growing bigger and bigger. Now management is planning to do a range partition based on the created_dt partition key column.
    We tested this partitioning strategy with a few million records in another environment with the steps below.
    1. CREATE TABLE TRANSACTION_N
    PARTITION BY RANGE ("CREATED_DT")
    ( PARTITION DATA1 VALUES LESS THAN (TO_DATE(' 2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART1,
    PARTITION DATA2 VALUES LESS THAN (TO_DATE(' 2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART2,
    PARTITION DATA3 VALUES LESS THAN (TO_DATE(' 2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART3
    as (select * from TRANSACTION where 1=2);
    2. Exchange partition to move the data from the old table into the new partitioned table.
    ALTER TABLE TRANSACTION_N
    EXCHANGE PARTITION DATA1
    WITH TABLE TRANSACTION
    WITHOUT VALIDATION;
    3. create required indexes (took almost 3.5 hrs with parallel 16).
    4. Rename the table names and drop the old tables.
    This took around 8 hrs for one table which has 70 million records, so for billions of records it will take much more than 8 hrs. But the problem is we get only 2 to 3 hrs of downtime in production to implement these changes for all tables.
    Can you please suggest the best approach to copy that much data from the existing table to the newly created partitioned table and create the required indexes?
    >
    Sorry to tell you, but that test and partitioning strategy is essentially useless and won't work for your entire table anyway. One reason is that if you use the WITHOUT VALIDATION clause you must ensure that the data being exchanged actually belongs to the partition you are putting it in. If it doesn't, you won't be able to re-enable or rebuild any primary key or unique constraints that exist on the table.
    See Exchanging Partitions in the VLDB and Partitioning doc
    http://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin002.htm#i1107555
    >
    When you specify WITHOUT VALIDATION for the exchange partition operation, this is normally a fast operation because it involves only data dictionary updates. However, if the table or partitioned table involved in the exchange operation has a primary key or unique constraint enabled, then the exchange operation is performed as if WITH VALIDATION were specified to maintain the integrity of the constraints.
    If you specify WITHOUT VALIDATION, then you must ensure that the data to be exchanged belongs in the partition you exchange.
    >
    Comments below are limited to working with ONE table only.
    ISSUE #1 - ALL data will have to be moved regardless of the approach used. This should be obvious since your current data is all in one segment but each partition of a partitioned table requires its own segment. So the nut of partitioning is splitting the existing data into multiple segments almost as if you were splitting it up and inserting it into multiple tables, one table for each partition.
    ISSUE#2 - You likely cannot move that much data in the 2 to 3 hours window that you have available for down time even if all you had to do was copy the existing datafiles.
    ISSUE#3 - Even if you can avoid issue #2, you likely cannot rebuild ALL of the required indexes in whatever remains of the outage window after moving the data itself.
    ISSUE#4 - Unless you have conducted full volume performance testing in another environment prior to doing this in production you are taking on a tremendous amount of risk.
    ISSUE#5 - Unless you have fully documented the current, actual execution plans for your most critical queries in your existing system you will have great difficulty overcoming issue #4 since you won't have the requisite plan baseline to know if the new partitioning and indexing strategies are giving you the equivalent, or better, performance.
    ISSUE#6 - Things can, and will, go wrong and cause delays no matter which approach you take.
    So assuming you plan to take care of issues #4 and #5 you will probably have three viable alternatives:
    1. use DBMS_REDEFINITION to do the partitioning on-line. See the Oracle docs and this example from oracle-base for more info.
    Redefining Tables Online - http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables007.htm
    Partitioning an Existing Table using DBMS_REDEFINITION
    http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
    2. do the partitioning offline and hope that you don't exceed your outage window. Recover by continuing to use the existing table.
    3. do the partitioning offline but remove the oldest data to minimize the amount of data that has to be worked with.
    You should review all of the tables to see if you can remove older data from the current system. If you can, you could use online redefinition that ignores the older data. Then afterwards you can extract this old data from the old table for archiving.
    If the amount of old data is substantial you can extract the new data to a new partitioned table in parallel and not deal with the old data at all.
