COLLECT vs. APPEND

What does a COLLECT statement mean to a database table?
Why should one not do an APPEND after a row has been added to a database table using COLLECT?

Both statements are used to populate an internal table. When you use APPEND, a new record is simply appended to the internal table. When you use COLLECT, the program checks the key (character) fields, and for rows whose character fields match, it adds up all the numeric fields.
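For illustration, a minimal sketch (assuming a simple structure with one character field, which forms the default key, and one numeric field):

DATA: BEGIN OF wa,
        matnr(10) TYPE c, " character field - part of the default key
        menge     TYPE i, " numeric field - added up by COLLECT
      END OF wa.
DATA itab LIKE STANDARD TABLE OF wa.

wa-matnr = 'M1'. wa-menge = 5.
APPEND wa TO itab. " itab: M1 5
APPEND wa TO itab. " itab: M1 5 / M1 5 - APPEND always adds a new row

REFRESH itab. " start over with an empty table
COLLECT wa INTO itab. " itab: M1 5
COLLECT wa INTO itab. " itab: M1 10 - still one row, menge added up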
APPEND
Syntax
APPEND line_spec TO itab [SORTED BY comp] [result].
Addition:
... SORTED BY comp
Effect
This statement appends one or more rows line_spec to an internal index table itab. If itab is a standard table, you can use SORTED BY to sort the table in a specified way. As of Release 6.10, when appending a single row, you can use result to set a reference to the appended row in the form of a field symbol or a data reference.
For the individual table types, appending is done as follows:
To standard tables, rows are appended directly, without checking the content of the internal table.
To sorted tables, rows are appended only if they match the sort order and do not create duplicate entries with respect to a unique table key. Otherwise, an untreatable exception is raised.
To hashed tables, no rows can be appended.
The APPEND statement sets sy-tabix to the table index of the last appended row.
Addition
... SORTED BY comp
Effect
This addition is allowed only if you specify a work area wa and itab is a standard table; wa must be compatible with the row type of the table. You can specify the component comp as shown in the section Specifying Components; however, you can access only a single component, and you cannot access attributes of classes using the object component selector.
The statement is executed in two steps:
Starting at the last row, the table is searched for a row in which the value of component comp is greater than or equal to the value of component comp in wa. If such a row exists, the work area wa is inserted after that row. If no such row exists, the work area wa is inserted before the first row. The table index of all subsequent rows increases by one.
If the number of rows before the statement is executed is greater than or equal to the number specified with the INITIAL SIZE addition in the definition of the internal table, the resulting last row is deleted.
Note
When an internal table is filled using only the APPEND statement with the SORTED BY addition, this rule yields a table that contains no more rows than specified after INITIAL SIZE in its definition and that is sorted in descending order by component comp (a ranked list).
In most cases, the SORT statement should be used instead of APPEND SORTED BY; see the sketch after the following example.
Example
Creating a ranked list of the three flights of a connection with the most free seats.
PARAMETERS: p_carrid TYPE sflight-carrid,
            p_connid TYPE sflight-connid.

DATA: BEGIN OF seats,
        fldate    TYPE sflight-fldate,
        seatsocc  TYPE sflight-seatsocc,
        seatsmax  TYPE sflight-seatsmax,
        seatsfree TYPE sflight-seatsocc,
      END OF seats.

DATA seats_tab LIKE STANDARD TABLE OF seats
               INITIAL SIZE 3.

SELECT fldate seatsocc seatsmax
       FROM sflight
       INTO seats
       WHERE carrid = p_carrid AND
             connid = p_connid.
  seats-seatsfree = seats-seatsmax - seats-seatsocc.
  APPEND seats TO seats_tab SORTED BY seatsfree.
ENDSELECT.
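For comparison, a minimal sketch (assuming the declarations of the example above) of the same ranked list built with the SORT statement, as recommended in the note:

SELECT fldate seatsocc seatsmax
       FROM sflight
       INTO seats
       WHERE carrid = p_carrid AND
             connid = p_connid.
  seats-seatsfree = seats-seatsmax - seats-seatsocc.
  APPEND seats TO seats_tab.
ENDSELECT.

SORT seats_tab BY seatsfree DESCENDING.
DELETE seats_tab FROM 4. " keep only the top three rows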
COLLECT
Syntax
COLLECT wa INTO itab [result].
Effect
This statement either inserts the contents of a work area wa as a single row into an internal table itab, or adds the values of its numeric components to the corresponding values of an existing row with the same key. As of Release 6.10, you can use result to set a reference to the inserted or changed row in the form of a field symbol or a data reference.
A prerequisite for using this statement is that wa is compatible with the row type of itab and that all components that are not part of the table key have a numeric data type (i, p, f).
In standard tables that are filled using COLLECT only, the target row is determined through a temporarily created hash administration, so the cost is independent of the number of entries in the table. This hash administration is temporary and is generally invalidated as soon as the table is changed by another statement. If further COLLECT statements are executed after such an invalidation, a linear search across all table rows is performed; the cost of this search grows linearly with the number of entries.
In sorted tables, the entry is determined using a binary search. The workload has a logarithmic relationship to the number of entries in the table.
In hashed tables, the entry is determined using the hash administration of the table and is always independent of the number of table entries.
If no row with an identical key is found, a new row is inserted as described below and filled with the content of wa:
In standard tables, the row is appended.
In sorted tables, the new line is inserted in the sort sequence of the internal table according to its key values, and the table index of subsequent rows is increased by 1.
In hashed tables, the new row is inserted into the internal table by the hash administration, according to its key values.
If the internal table already contains one or more rows with an identical key, the values of those components of the work area wa that are not part of the key are added to the corresponding components of the first such row (in the case of index tables, the row with the lowest table index).
The COLLECT statement sets sy-tabix to the table index of the inserted or existing row, in the case of standard tables and sorted tables, and to the value 0 in the case of hashed tables.
Outside of classes, you can omit wa INTO if the internal table has an identically-named header line itab. The statement then implicitly uses the header line as the work area.
COLLECT should only be used if you want to create an internal table that is genuinely unique or compressed. In this case, COLLECT can greatly benefit performance. If uniqueness or compression are not required, or the uniqueness is guaranteed for other reasons, the INSERT statement should be used instead.
The use of COLLECT for standard tables is obsolete. COLLECT should primarily be used with hashed tables, as these have a unique table key and a stable hash administration.
If a standard table is filled using COLLECT, it should not be modified by any other statement, with the exception of MODIFY; if the latter is used with the TRANSPORTING addition, you must ensure that no key fields are changed. Only then is it guaranteed that the table entries remain unique and compressed and that the COLLECT statement works correctly and quickly. The function module ABL_TABLE_HASH_STATE can be used to check whether a standard table is still suitable for processing with COLLECT.
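This is also the answer to the original question. A minimal sketch (reusing the wa/itab declarations from the first sketch in this thread) of how an APPEND after COLLECT breaks the compressed state:

wa-matnr = 'M1'. wa-menge = 10.
COLLECT wa INTO itab. " itab: M1 10
COLLECT wa INTO itab. " itab: M1 20 - values added, still one row

APPEND wa TO itab.    " itab: M1 20 / M1 10 - duplicate key!

" The temporary hash administration is now invalidated; later COLLECT
" statements fall back to a linear search and add to the first matching
" row only, so the table stays non-unique and is no longer compressed.
COLLECT wa INTO itab. " itab: M1 30 / M1 10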
Example
Compressed insertion of data from the database table sflight into the internal table seats_tab. The rows in which the key components carrid and connid are identical are compressed by adding the number of occupied seats to the numeric component seatsocc.
DATA: BEGIN OF seats,
        carrid   TYPE sflight-carrid,
        connid   TYPE sflight-connid,
        seatsocc TYPE sflight-seatsocc,
      END OF seats.

DATA seats_tab LIKE HASHED TABLE OF seats
               WITH UNIQUE KEY carrid connid.

SELECT carrid connid seatsocc
       FROM sflight
       INTO seats.
  COLLECT seats INTO seats_tab.
ENDSELECT.

Similar Messages

  • Configuring Kodo default implementation for field of Collection type

If I am not mistaken, the default implementation for a field of Collection type in Kodo is a LinkedList-based proxy. It would be great if it were possible to configure Kodo to use a proxy of my choosing.
I did some tests, and it seems to me that ArrayList is much more efficient than LinkedList (see below). Is there any specific reason I am not aware of that makes LinkedList better than ArrayList?
In my applications all collections are relatively small (or at least most of my collections are definitely small), and since I use the Collection interface there are no inserts into the middle of my collections - only appends (which ArrayList handles very well).
So my question is: can I make Kodo use an ArrayListProxy for fields of Collection type (except, of course, by using an ArrayList field instead of Collection, which I do not want to do)?
Below are some statistics on collection performance (populating and iterating collections) - the same test against three collection implementations (JDK 1.4.1).
Not only is ArrayList by far the fastest and most memory-friendly, it is also garbage collected much sooner and better. I show the maximum memory consumption here; the last two implementations would not be garbage collected until all memory was in use (old-generation GC), while ArrayList seems to be collected by the young-generation GC, since it was collected very quickly between test cycles whereas the others were collected only when all memory was used.
So please make ArrayList your default collection implementation :-)
Small collection size (40):
            time (ms)  memory (KB)
ArrayList      5,218       62,154
LinkedList    14,125      240,066
HashSet       27,000      311,825
The same test using random inserts - add(index, object) rather than add(object):
ArrayList      8,937       53,591
LinkedList    15,047      240,066
Larger collection size (200):
ArrayList      4,860       47,709
LinkedList    18,468      290,704
HashSet       34,391      422,282
The same test using random inserts:
ArrayList     11,844       47,709
LinkedList    25,766      290,704

    You should be able to accomplish this fairly easily by extending
    SimpleProxyManager:
    http://solarmetric.com/Software/Documentation/2.4.3/docs/javadoc/com/solarmetric/kodo/util/SimpleProxyManager.html
    and overriding the appropriate methods (getCollectionCopy and
    getCollectionProxy).

  • Cannot assign value to a Variable of Complex Type beyond index 1

    Hello:
I have a variable defined as a complex type, as follows. I tried to assign a value to each of the two elements, but it only allows me to assign to element #1.
The statement that tries to assign a value to element #2 does not work; if I assign to the first element with '[1]', it works:
    <copy> <---- THIS WORKS
    <from expression="'John'"/>
    <to variable="My_Variable"
    part="My_Collection"
    query="/ns9:My_Collection/ns9:Collection/ns9:Collection_Item[1]/ns9:pname"/>
    </copy>
    <copy> <---- THIS DOES NOT WORK
    <from expression="'John'"/>
    <to variable="My_Variable"
    part="My_Collection"
    query="/ns9:My_Collection/ns9:Collection/ns9:Collection_Item[2]/ns9:pname"/>
    </copy>
Is there something wrong with my definition below that allows only element #1 to be referenced but not element #2? Am I missing some kind of initialization that is needed to initialize both elements?
    Here are my message and Complex Type definitions:
    <variable name="My_Variable" messageType="ns8:args_out_msg"/>
    <message name="args_out_msg">
    <part name="My_Collection" element="db:My_Collection"/>
    </message>
    <element name="My_Collection">
    <complexType>
    <sequence>
    <element name="Collection" type="db:Collection_Type" db:index="2" db:type="Array" minOccurs="0" nillable="true"/>
    <element name="Ret" type="string" db:index="3" db:type="VARCHAR2" minOccurs="0" nillable="true"/>
    </sequence>
    </complexType>
    </element>
    <complexType name="Collection_Type">
    <sequence>
    <element name="Collection_Item" type="db:Collection_Type_Struct" db:type="Struct" minOccurs="0" maxOccurs="unbounded" nillable="true"/>
    </sequence>
    </complexType>
    <complexType name="Collection_Type_Struct">
    <sequence>
    <element name="pname" db:type="VARCHAR2" minOccurs="0" nillable="true">
    <simpleType>
    <restriction base="string">
    <maxLength value="25"/>
    </restriction>
    </simpleType>
    </element>
    </sequence>
    </complexType>
The error message it gives me is as follows:
[2010/09/04 00:47:59] Error in <assign> expression: <to> value is empty at line "254". The XPath expression: "" returns zero node, when applied to the document shown below:
oracle.xml.parser.v2.XMLElement@1fa7874
[2010/09/04 00:47:59] "{http://schemas.xmlsoap.org/ws/2003/03/business-process/}selectionFailure" has been thrown.
    -<selectionFailure xmlns="http://schemas.xmlsoap.org/ws/2003/03/business-process/">
    -<part name="summary">
    <summary>
    XPath query string returns zero node.
    According to BPEL4WS spec 1.1 section 14.3, The assign activity &lt;to&gt; part query should not return zero node.
    Please check the BPEL source at line number "254" and verify the &lt;to&gt; part xpath query.
    </summary>
    </part>
    </selectionFailure>
    Thanks
    Newbie

    Hello:
Based on the suggestion to use 'append' instead of 'copy', I tried to define a 'singleNode' of type 'Collection_Type_Struct' so I can append this individual 'struct' into my array (i.e. as the 2nd element of my array "/ns9:My_Collection/ns9:Collection/ns9:Collection_Item"), but I am getting an error when defining this variable as:
    <variable name="singleNode" element="Collection_Type_Struct"/> <--- error
Can someone tell me how I should define "singleNode" so I can put a value in it and then append this 'singleNode' into the array:
    <variable name="singleNode" element=" how to define this????"/>
    <assign>
    <copy>
<from expression="'Element2Value'"/>
    <to variable="singleNode"
    part="My_Collection"
    query="/ns9:My_Collection/ns9:Collection/ns9:Collection_Item/ns9:pname"/>
    </copy>
    </assign>
    <bpelx:assign>
    <bpelx:append>
    <from variable="singleNode" query="/ns9:My_Collection/ns9:Collection/ns9:Collection_Item"/>
<to variable="My_Variable"
part="My_Collection"
query="/ns9:My_Collection/ns9:Collection"/>
    </bpelx:append>
    </bpelx:assign>
    Again here is my definition in my .xsd file:
    <element name="My_Collection">
    <complexType>
    <sequence>
    <element name="Collection" type="db:Collection_Type" db:index="2" db:type="Array" minOccurs="0" nillable="true"/>
    <element name="Ret" type="string" db:index="3" db:type="VARCHAR2" minOccurs="0" nillable="true"/>
    </sequence>
    </complexType>
    </element>
    <complexType name="Collection_Type">
    <sequence>
    <element name="Collection_Item" type="db:Collection_Type_Struct" db:type="Struct" minOccurs="0" maxOccurs="unbounded" nillable="true"/>
    </sequence>
    </complexType>
    <complexType name="Collection_Type_Struct">
    <sequence>
    <element name="pname" db:type="VARCHAR2" minOccurs="0" nillable="true">
    <simpleType>
    <restriction base="string">
    <maxLength value="25"/>
    </restriction>
    </simpleType>
    </element>
    </sequence>
    </complexType>
Thanks for any help!

  • Unable to generate the XML file through SQL script. getting error PLS-00306

I am fetching the data from a cursor and generating XML output, and I am getting the error below.
When I checked the cursor query, it is fetching the data into a single column.
    Input truncated to 1 characters
    Enter value for 7: EXEC FND_CONC_STAT.COLLECT;
    DBMS_LOB.append (tmp_file, r.core_xml);
    ERROR at line 95:
    ORA-06550: line 95, column 7:
    PLS-00306: wrong number or types of arguments in call to 'APPEND'
    ORA-06550: line 95, column 7:
    PL/SQL: Statement ignored

Hi Alex,
Thanks for the response. I have fixed the issue: I used XMLAttributes to get the value.
SELECT XMLELEMENT(
         NAME "TranACK",
         XMLATTRIBUTES('1' AS "TranNum",
                       (SELECT DISTINCT TO_CHAR(SYSDATE, 'yyyy-mm-dd')
                        FROM dual) AS "PrcDate"),
         XMLFOREST(a.PAYMENT_ID AS "PmtID"),
         XMLFOREST(a.ACK_TRANSACTION_RECEIVER AS "Name1"),
         XMLFOREST(TO_CHAR(a.VALUE_DATE, 'yyyy-mm-dd') AS "ValueDate"),
         XMLFOREST(a.PAYMENT_AMOUNT AS "CurAmt"),
         XMLFOREST(a.CURRENCY_CODE AS "CurCode")
       ).getclobval() AS line_xml
FROM XXWAP_PAYMENT_LINE_TBL a
WHERE a.PAYMENT_BATCH_ID = P_batch_id;

  • Best Practice for multivalued parameters in *QL

    I have a need to do this:
<query>
  <query-method>
    <method-name>ejbSelectAllCasesInFIPS</method-name>
    <method-params>
      <method-param>java.util.Collection</method-param>
    </method-params>
  </query-method>
  <ejb-ql><![CDATA[
    select object(c)
    from Clients c, in (c.Status) as s
    where s.issue='H' or s.issue='P'
    and c.ClientCaseInfo.fipsCode in $1
  ]]></ejb-ql>
</query>
But based on the spec, this seems verboten.
Am I restricted to building the query by hand, iterating through the collection and appending each new string parameter to another String representing a comma-delimited list of these parameters?
    Many thanks,
    Alexandra

Is Sun considering adding Collections (even strongly typed Collections) to this spec? I'm guessing the reason it's not in the spec is that you can have a Collection of ANYTHING, which could be averted by adding collection types for known types (or specifications for such). I don't see any reason Java can't provide StringCollection, IntCollection, LongCollection, etc. It would be incumbent upon DBMS vendors to implement the mapping (which they already do for non-Collection types), but being a programmer I don't see how difficult that could be, since we programmers are repeatedly faced with this relatively low-level problem.

  • Repeating nodes using FOR loop but when concating XML string then concating only last iteration of FOr loop ??

I am stuck with a problem: I am using a FOR loop to generate repeating nodes.
When I concatenate the generated nodes into the main node, I get only the last iteration of that FOR loop.
Can anybody suggest a way to handle this?
FOR i IN 1 .. pl_phone_tab.COUNT
LOOP
  SELECT xmlelement("Phone"
           , xmlelement("PHONETYPE", xmlattributes('01' AS "dmnADRP_PHONETYPE"), pl_phone_tab(i).p_phtype_tab)
           , xmlelement("PHONENUM", pl_phone_tab(i).p_phnum_tab)
           , xmlelement("PRIMARY_CONTACT", pl_phone_tab(i).p_prcon_tab)
         )
  INTO p_phone_xml
  FROM dual;
END LOOP;
SELECT xmlelement("PhoneInfo", xmlconcat(p_phone_xml))
INTO p_phone_info_xml
FROM dual;
Here I am getting only one node, but there should be two PHONE nodes.

Not that I'm encouraging you, but here are two standalone examples explaining how to do what you want:
1) Loop through the input collection and append each node to its target container:
declare

  type t_emp_tab is table of scott.emp%rowtype;

  emp_tab       t_emp_tab;
  emp_info_xml  xmltype;
  emp_xml       xmltype;

begin

  -- filling emp_tab with data
  select e.*
  bulk collect into emp_tab
  from scott.emp e
  where e.deptno = 10;

  emp_info_xml := xmltype('<EmpInfo/>');

  -- looping through emp collection and appending to EmpInfo element
  for i in 1 .. emp_tab.count loop
    select appendchildxml(
             emp_info_xml
           , '/*'
           , xmlelement("Emp"
             , xmlattributes(emp_tab(i).empno as "id")
             , xmlforest(
                 emp_tab(i).ename as "Name"
               , emp_tab(i).job as "Job"
               )
             )
           )
    into emp_info_xml
    from dual;
  end loop;

  dbms_output.put_line(emp_info_xml.getclobval(1,2));

end;
/
    <EmpInfo>
      <Emp id="7782">
        <Name>CLARK</Name>
        <Job>MANAGER</Job>
      </Emp>
      <Emp id="7839">
        <Name>KING</Name>
        <Job>PRESIDENT</Job>
      </Emp>
      <Emp id="7934">
        <Name>MILLER</Name>
        <Job>CLERK</Job>
      </Emp>
    </EmpInfo>
    PL/SQL procedure successfully completed
2) Build a secondary collection of XML nodes and use XMLAgg to aggregate them in one go:
declare

  type t_emp_tab is table of scott.emp%rowtype;

  emp_tab       t_emp_tab;
  emp_info_xml  xmltype;
  emp_xml_tab   xmlsequencetype := xmlsequencetype();

begin

  -- filling emp_tab with data
  select e.*
  bulk collect into emp_tab
  from scott.emp e
  where e.deptno = 10;

  -- looping through emp collection and appending to the collection of Emp nodes
  for i in 1 .. emp_tab.count loop

    emp_xml_tab.extend;

    select xmlelement("Emp"
           , xmlattributes(emp_tab(i).empno as "id")
           , xmlforest(
               emp_tab(i).ename as "Name"
             , emp_tab(i).job as "Job"
             )
           )
    into emp_xml_tab(i)
    from dual;

  end loop;

  select xmlelement("EmpInfo", xmlagg(t.column_value))
  into emp_info_xml
  from table(emp_xml_tab) t;

  dbms_output.put_line(emp_info_xml.getclobval(1,2));

end;
/
    <EmpInfo>
      <Emp id="7782">
        <Name>CLARK</Name>
        <Job>MANAGER</Job>
      </Emp>
      <Emp id="7839">
        <Name>KING</Name>
        <Job>PRESIDENT</Job>
      </Emp>
      <Emp id="7934">
        <Name>MILLER</Name>
        <Job>CLERK</Job>
      </Emp>
    </EmpInfo>
    PL/SQL procedure successfully completed
    Both solutions give the same output.
    Test them both and see which one fits better into your scenario.

  • Still getting uncaught exception in c++ API running keywords query

When I run a search based on a keyword in my Java application, the first time the query results are most likely returned, but for subsequent keyword searches the application throws the error below...
    com.sleepycat.dbxml.XmlException: Uncaught exception from C++ API, errcode = INTERNAL_ERROR
         at com.sleepycat.dbxml.dbxml_javaJNI.XmlQueryExpression_execute__SWIG_1(Native Method)
         at com.sleepycat.dbxml.XmlQueryExpression.execute(XmlQueryExpression.java:85)
         at epss.utilities.XQueryUtil.getQueryResultsByKeywords(XQueryUtil.java:168)
         at epss.search.XmlContentByKeywords.getDocumentContentByKeywords(XmlContentByKeywords.java:123)
         at com.epss.test.TestApp.main(TestApp.java:83)
I know one of the many things to consider in fixing this problem is to make sure the delete() method is called on all Berkeley DB XML objects (e.g. XmlContainer, XmlManager, XmlResults, XmlQueryExpression, etc.) once they are done with, to free resources. I've been doing all that and am still getting the error. This problem doesn't happen when I run a search based on an id (attribute value).
Note: I'm not explicitly using transactions, since I turned on transactions in the EnvironmentConfig used to create the XmlManager.
This is the method that does the query and returns the results...
/**
 * Gets the query results by keywords.
 *
 * @param keywords the keywords under search
 * @param manager the object used to perform activities such as preparing
 *                XQuery queries
 * @return the query results by keywords
 */
         public static synchronized XmlResults getQueryResultsByKeywords(
                   final String keywords, XmlManager manager) {
              /* Represents a parsed XQuery expression. */
              XmlQueryExpression expr = null;
              /* Encapsulates the results of a query that has been executed. */
              XmlResults results = null;
              /* The query context */
              XmlQueryContext context = null;
              // The value
              XmlValue value = null;
              // Declare string variables
              String query = null;
              // Run logic
              try {
                   /* Do null check */
                   if (manager != null) {
                        // Make XmlValue object
                        value = new XmlValue(keywords);
                        // Get a query context
                        context = manager.createQueryContext();
                        // Bind xquery variable value to its variable name
                        context.setVariableValue(DataConstants.KEYWORD, value);
                        // Build the query string
                        query = QueryStringUtil.xQueryStringByKeywords(
                                  DataConstants.ELEMENTS, DataConstants.KEYWORD);
                        // Compile an XQuery expression into an XmlQueryExpression
                        expr = manager.prepare(query, context);
                        // Evaluates the XQuery expression against the containers
                        results = expr.execute(context);
                        /* Release resources */
                        if (results.size() == 0) {
                             results.delete();
                             results = null;
                        // Free the native resources
                        expr.delete();
                        // Dereference objects
                        expr = null;
                        value = null;
                        context = null;
                        query = null;
                        manager.delete();
                        manager = null;
                        return results;
              } catch (final XmlException e) {
                   // Free the native resources
                   expr.delete();
                   // dereference objects
                   expr = null;
                   value = null;
                   context = null;
                   query = null;
                   // Write to log
                   WriteLog.logExceptionToFile(e);
              return null;
This is the callback method that returns the query string...
/**
 * Returns the keyword query string used to retrieve keywords.
 *
 * @param elementName the particular node under search
 * @param keywords the keywords being searched under the node
 * @return the string used for the query
 */
         public static synchronized String xQueryStringByKeywords(
                   final String elementName, final String keywords) {
              /* Build query string */
              final StringBuffer sb = new StringBuffer();
              sb.append("let $found := false\n");
              sb.append("let $terms := tokenize($");
              sb.append(keywords);
              sb.append(", \",\")\n");
              sb.append("for $element in collection('");
              sb.append(DataConstants.CONTAINER);
              sb.append("')");
              sb.append("/(FUNDOC | JOBDOC)");
              sb.append("//");
              sb.append(elementName);
              sb.append("//");
              sb.append("parent::*[1]");
              sb.append("\nlet $found := for $term in $terms\n");
              sb
                        .append(" return if (contains(lower-case($element), lower-case($term)))");
              sb.append(" \nthen \"true\"");
              sb.append(" else \"false\" \n");
              sb.append(" return if ($found = \"false\") \nthen () else $element");
              return sb.toString();

I am using Berkeley DB XML 2.5.13 on Windows XP. Yes, that's the complete error message. I am going to add my environment class and also part of the keyword-search class that extends the environment, which will give you an idea of how I'm creating and using transactions. I don't explicitly use transactions; I used to, but I thought it was redundant. So when I create the DB environment, I just call envc.setTransactional(true) and pass the EnvironmentConfig object (i.e. envc) to the environment used to create the XmlManager instance, and this is fine. Look below and you will see what I mean. Please let me know if you need more information. Thanks for your help. Appreciate it.
    Tue, 2010-01-19 10:58:27 PM
    com.sleepycat.dbxml.XmlException: Uncaught exception from C++ API, errcode = INTERNAL_ERROR
         at com.sleepycat.dbxml.dbxml_javaJNI.XmlQueryExpression_execute__SWIG_1(Native Method)
         at com.sleepycat.dbxml.XmlQueryExpression.execute(XmlQueryExpression.java:85)
         at epss.utilities.XQueryUtil.getQueryResultsByKeywords(XQueryUtil.java:166)
         at epss.search.XmlContentByKeywords.getDocumentContentByKeywords(XmlContentByKeywords.java:123)
         at com.epss.test.TestApp.main(TestApp.java:66)
    The environment class...
    package epss.core;
    import java.io.File;
    import java.io.FilenameFilter;
    import java.io.IOException;
    import com.sleepycat.db.DatabaseException;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import com.sleepycat.dbxml.XmlContainer;
    import com.sleepycat.dbxml.XmlContainerConfig;
    import com.sleepycat.dbxml.XmlManager;
    import com.sleepycat.dbxml.XmlManagerConfig;
    import epss.utilities.GlobalUtil;
/** Class used to open and close the Berkeley Database environment. */
    public class DatabaseEnvironment {
         /** The db env_. */
         private Environment dbEnv_ = null;
         /** The mgr_. */
         private XmlManager mgr_ = null;
         /** The opened container. */
         private XmlContainer openedContainer = null;
         /** The new container. */
         private XmlContainer newContainer = null;
         /** The path2 db env_. */
         private File path2DbEnv_ = null;
         /** Whether we are creating or opening database environment. */
         private int mode = -1;
         /** Constants for mode opening or mode creation. */
         private static final int OPEN_DB = 0, CREATE_DB = 1;
         * Set the Mode (CREATE_DB = 1, OPEN_DB = 0).
         * @param m
         * the m
         protected synchronized void setDatabaseMode(final int m) {
              if (m == OPEN_DB || m == CREATE_DB)
                   mode = m;
         * Gets the manager.
         * @return the manager
         protected synchronized XmlManager getManager() {
              return mgr_;
         * Gets the opened container.
         * @return the opened container
         protected synchronized XmlContainer getOpenedContainer() {
              return openedContainer;
         * Gets the new container.
         * @return the new container
         protected synchronized XmlContainer getNewContainer() {
              return newContainer;
         * Initialize database environment.
         * @throws Exception
         * the exception
         protected synchronized void doDatabaseSetup(String container)
                   throws Exception {
              switch (mode) {
              case OPEN_DB:
                   // check database home dir exist
                   if (!(isPathToDbExist(new File(DataConstants.DB_HOME)))) {
                        WriteLog.logMessagesToFile(DataConstants.DB_FILE_MISSING);
                        cleanup();
                        throw new IOException(DataConstants.DB_FILE_MISSING);
                   } else {
                        // Configure database environment
                        configureDatabaseEnv();
                        // Configuration settings for an XmlContainer instance
                        XmlContainerConfig config = new XmlContainerConfig();
                        // DB shd open within a transaction
                        config.setTransactional(true);
                        // Opens a container, returning a handle to an XmlContainer obj
                        openedContainer = getManager().openContainer(container, config);
                   break;
              case CREATE_DB:
                   // Set environment home
                   setDatabaseHome();
                   // Validate database home dir exist
                   if (isPathToDbExist(new File(DataConstants.DB_HOME))) {
                        // Configure database environment
                        configureDatabaseEnv();
                        // Configuration settings for an XmlContainer instance
                        XmlContainerConfig config = new XmlContainerConfig();
                        // Sets whether documents are validated
                        config.setAllowValidation(true);
                        // DB shd open within a transaction
                        config.setTransactional(true);
                        // The database container path
                        File file = new File(path2DbEnv_, container);
                        // Creates a container, returning a handle to
                        // an XmlContainer object
                        newContainer = getManager().createContainer(file.getPath(),
                                  config);
                        newContainer.setAutoIndexing(true);
                   break;
              default:
                   throw new IllegalStateException("mode value (" + mode
                             + ") is invalid");
         * Validate path2 db env.
         * @param path2DbEnv
         * the path2 db env
         * @return true, if checks if is path to db env
         private synchronized boolean isPathToDbExist(final File path2DbEnv) {
              boolean returnValue = false;
              if (!(path2DbEnv.isDirectory() || path2DbEnv.exists())) {
                   throw new IllegalArgumentException(DataConstants.DIR_ERROR
                             + path2DbEnv.getAbsolutePath()
                             + DataConstants.DOES_NOT_EXIST);
              } else {
                   path2DbEnv_ = path2DbEnv;
                   // Test whether db home exist when mode is 0
                   if (path2DbEnv_.exists() && mode == OPEN_DB) {
                        // Test whether all db files exist
                             returnValue = true;
                   } else {
                        // Test whether db home exist when mode is 1
                        if (path2DbEnv_.exists() && mode == CREATE_DB) {
                             returnValue = true;
              return returnValue;
         * Set database environment home.
         * @throws IOException
         * Signals that an I/O exception has occurred.
         private synchronized void setDatabaseHome() throws IOException {
              // The base dir
              File homeDir = new File(DataConstants.DB_HOME);
              // If db home delete fails, throw io exception
              if (!GlobalUtil.deleteDir(homeDir) && homeDir.exists()) {
                   WriteLog.logMessagesToFile(DataConstants.ERROR_MSG);
                   throw new IOException(DataConstants.ERROR_MSG);
              } else {
                   // If delete is successful, recreate db home
                   final boolean success = homeDir.mkdir();
                   // if home dir creation is successful
                   if (success) {
                        // Construct file object
                        File logDir = new File(homeDir, DataConstants.LOG_DIR);
                        // File dbHome = new File(homeDir, DataConstants.DB_DIR);
                        // Create log file
                        boolean logCreated = logDir.mkdir();
                        // Create db home
                        // boolean dbHomeCreated = dbHome.mkdir();
                        if (logCreated) {
                             WriteLog.logMessagesToFile(homeDir.getAbsolutePath()
                                       + " successfully created");
                   } else {
                        WriteLog.logMessagesToFile(homeDir.getAbsolutePath()
                                  + " failed to create");
* Sets the environment configuration and its handlers.
         * @throws Exception
         * the exception
         private synchronized void configureDatabaseEnv() throws Exception {
              // Construct a new log file object
              File logDir = new File(path2DbEnv_, DataConstants.LOG_DIR);
              // The environment config
              EnvironmentConfig envc = new EnvironmentConfig();
              // estimate how much space to allocate
              // for various lock-table data structures
              envc.setMaxLockers(10000);
              // estimate how much space to allocate
              // for various lock-table data structures
              envc.setMaxLocks(10000);
              // estimate how much space to allocate
              // for various lock-table data structures
              envc.setMaxLockObjects(10000);
              // automatically remove log files
              // that are no longer needed.
              envc.setLogAutoRemove(true);
              // If environment does not exist create it
              envc.setAllowCreate(true);
              // For multiple threads or processes that are concurrently reading and
              // writing to berkeley db xml
              envc.setInitializeLocking(true);
              // This is used for database recovery from application or system
              // failures.
              envc.setInitializeLogging(true);
              // Provides an in-memory cache that can be shared by all threads and
              // processes
              envc.setInitializeCache(true);
              // Provides atomicity for multiple database access operations.
              envc.setTransactional(true);
              // location of logging files.
              envc.setLogDirectory(logDir);
              // set the size of the shared memory buffer pool
              envc.setCacheSize(500 * 1024 * 1024);
              // turn on the mutexes
              envc.setMaxMutexes(500000);
              // show error messages by BDB XML library
              envc.setErrorStream(System.err);
              // File db_home = new File(path2DbEnv_, "db");
              // Create a database environment
              dbEnv_ = new Environment(path2DbEnv_, envc);
              // Configure an XmlManager instance via its constructors
              XmlManagerConfig mgrConf = new XmlManagerConfig();
              mgrConf.setAllowExternalAccess(true);
              mgrConf.setAllowAutoOpen(true);
              // Create xml manager object
              mgr_ = new XmlManager(dbEnv_, mgrConf);
              mgr_.setDefaultContainerType(XmlContainer.NodeContainer);
* This method is used to close the database environment, freeing any
* allocated resources that may have been held by its handlers and closing
* any underlying subsystems.
         * @throws DatabaseException
         * the database exception
         protected synchronized void cleanup() throws DatabaseException {
              if (path2DbEnv_ != null) {
                   path2DbEnv_ = null;
              if (newContainer != null) {
                   newContainer.delete();
                   newContainer = null;
              if (openedContainer != null) {
                   openedContainer.delete();
                   openedContainer = null;
              if (mgr_ != null) {
                   mgr_.delete();
                   mgr_ = null;
              if (dbEnv_ != null) {
                   dbEnv_.close();
                   dbEnv_ = null;
    // This is the keyword search class...
    public final class XmlContentByKeywords extends DatabaseEnvironment {
         public synchronized Document getDocumentContentByKeywords(String keywords)
                   throws Exception {
              // Encapsulates the results of a query that has been executed.
              XmlResults results = null;
              // The manager
              XmlManager manager = null;
              // Run the logic
              if (keywords != null) {
                   try {
                        // Flag to open db
                        final int OPEN_DB = 0;
                        // The keywords content
                        Document keywordsContent = null;
                        // Open db connection
                        try {
                             // Get database instance
                             setDatabaseMode(OPEN_DB);
                             // Open this container in db environment
                             doDatabaseSetup(DataConstants.CONTAINER);
                        } catch (Exception ex) {
                             // Create error node with error message
                             keywordsContent = Wrapper.createErrorDocument(ex
                                       .getMessage());
                             // Return the error node doc
                             return keywordsContent;
                        // Manager instance
                        // final XmlManager manager = getManager();
                        manager = getManager();
                        // Transaction instance
                        // final XmlTransaction txn_ = getTxn();
                        // The map
                        Map<String, Document> map = null;
                        // The temp map
                        Map<String, Document> tempMap = null;
                        // Return the query results
                        results = XQueryUtil.getQueryResultsByKeywords(keywords, manager);
    // use results here...
    // close results when done
    results.delete();
    results = null;
    manager.delete();
    manager = null;
    }

  • Script to Save Path/File Name/login user information

Hi all,
Could anyone tell me how to solve the problem below?
• I need to write a script which does the following: in Adobe Illustrator CS2, when I run the script, the path of the currently open file, the file name, and the login user name have to be recorded in a text file and saved onto the server (e.g. \\10.99.0.60\filehistory\[today].txt).
• All of this activity should take place by executing the script once.
• Collect the path, file name, and login information and write it to a file; after writing, save it to a location on the server. All of this should occur in one shot.
• For different files opened during the same day's work, the different pieces of information (path/file name/login) will be collected and appended to the same text file and location.
• A different file will be created for the information about each date.
Regards,
Sanat

Try this. (BTW, I am a newbie, learning AppleScript now.)
tell application "Adobe Illustrator"
    set filePath to get file path of current document
    set filePath1 to POSIX path of filePath
    set userName to get short user name of (system info)
    tell application "Finder"
        set theFilePath to (path to desktop as string) & "test.txt" as string
        set theFileReference to open for access theFilePath with write permission
        set theResult to get eof of theFileReference
        write filePath1 & " User Name: " & userName & return & return starting at theResult to theFileReference as string
        set theResult to theResult + 1
        close access theFileReference
    end tell
end tell
    JaiMS

  • Performance concern

    Dear Experts,
The query below is causing a lot of performance concern. Kindly go through it and let me know about suitable modifications I can make.
    select
              vbeln
              fkart
              vkorg
              vtweg
              fkdat
              sum( fkimg ) as fkimg
              matnr
              aubel
              vstel
              ktgrm
              matkl
              prctr
              spart
              SAKN1
              from ZV_BSEG_VBRP_RK
              into table it_sales
              where
              fkdat_i in s_fkdat and
              vtweg in s_vtweg and
              vkorg in s_vkorg and
              fkart in so_fkart and
              spart in s_spart and
              vstel in s_werks and
              matnr in s_matnr and
              vbeln in s_vbeln and
              fkimg ne '0' and
              ktgrm in ('01','02','03','04','05','06','07','08','09','10','11','12') and
              fksto ne 'X'
              group by vbeln fkart vkorg vtweg fkdat matnr aubel vstel ktgrm matkl prctr spart SAKN1.
        sort it_sales by vbeln matnr prctr.
        if it_sales[] is not initial.
    select vbeln fkdat GJAHR VKORG from vbrk into table it_vbeln for all entries in it_sales where vbeln = it_sales-vbeln.
        select
              belnr
              shkzg
              dmbtr
              hkont
              MATNR
              prctr
              from bsEG into table it_fin1
              for all entries in it_vbeln
              where belnr = it_vbeln-vbeln AND
              BUKRS EQ IT_VBELN-VKORG and
              hkont >= '0000400001' and hkont <= '0000400251'.
    *          hkont = it_sales-sakn1.
    *          group by belnr shkzg hkont prctr.
              SORT IT_FIN BY hkont.
    *delete it_fin where hkont >= '0000400001' and hkont <= '0000400251'.
              SORT IT_FIN BY BELNR MATNR PRCTR.
            IF it_fin1[] IS NOT INITIAL.
    loop at it_fin1.
    move-corresponding it_fin1 to it_fin.
    collect it_fin.
    append it_fin.
    endloop.
              loop at it_fin.
                it_data_1-vbeln = it_fin-belnr.
                it_data_1-matnr = it_fin-matnr.
                it_data_1-prctr = it_fin-prctr.
    *read table it_sales transporting no fields with key vbeln = it_data_1-vbeln matnr = it_Data_1-matnr prctr = it_data_1-prctr.
    *if sy-subrc = 0.
    *tabix = sy-tabix.
    *FOR SALES INVOICE
    read table it_sales with key vbeln = it_data_1-vbeln matnr = it_Data_1-matnr prctr = it_data_1-prctr.
                it_data_1-aubel = it_sales-aubel.
                it_data_1-vstel = it_sales-vstel.
                it_data_1-ktgrm = it_sales-ktgrm.
                it_data_1-matkl = it_sales-matkl.
                it_data_1-fkart = it_sales-fkart.
                it_data_1-vkorg = it_sales-vkorg.
                it_data_1-vtweg = it_sales-vtweg.
                it_data_1-fkdat = it_sales-fkdat.
                it_data_1-spart = it_sales-spart.
                if it_data_1-fkart = 'F2' or  it_data_1-fkart = 'IV' or  it_data_1-fkart = 'ZF2' or it_data_1-fkart = 'ZMIS' or  it_data_1-fkart = 'ZSF2' or it_data_1-fkart = 'ZMF2'.
                  it_data_1-fkimg = it_sales-fkimg.
                  it_data_1-shkzg = it_fin-shkzg.
                  if it_data_1-shkzg = 'H'.
                  it_data_1-dmbtr_1 = it_data_1-dmbtr_1 + it_fin-dmbtr.
                  endif.
                  if it_data_1-shkzg = 'S'.
                  it_data_1-dmbtr_2 = it_data_1-dmbtr_2 + it_fin-dmbtr.
                  endif.
                  endif.
    * H - Credit
    * S - Debit
                if it_data_1-fkart = 'G2' or  it_data_1-fkart = 'IG' or  it_data_1-fkart = 'RE' or  it_data_1-fkart = 'ZCRE'.
                  if it_data_1-fkart = 'G2'.
                    it_data_1-cr_qty = '0'.
                  else.
                    it_data_1-cr_qty = it_sales-fkimg.
                  endif.
                  it_data_1-shkzg = it_fin-shkzg.
                  if it_data_1-shkzg = 'H'.
                  it_data_1-dmbtr_cr_1 = it_data_1-dmbtr_cr_1 + it_fin-dmbtr.
                  endif.
                  if it_data_1-shkzg = 'S'.
                  it_data_1-dmbtr_cr_2 = it_data_1-dmbtr_cr_2 + it_fin-dmbtr.
                  endif.
                  endif.
                if it_data_1-fkart = 'L2'.
                  it_data_1-dr_qty = 0.
                  it_data_1-shkzg = it_fin-shkzg.
                  if it_data_1-shkzg = 'H'.
                  it_data_1-dmbtr_dr_1 = it_data_1-dmbtr_dr_1 + it_fin-dmbtr.
                  endif.
                  if it_data_1-shkzg = 'S'.
                  it_data_1-dmbtr_dr_2 = it_data_1-dmbtr_dr_2 + it_fin-dmbtr .
                  endif.
                endif.
                select single vtext into it_data_1-vtext from tvkmt where spras = 'EN' and ktgrm = it_data_1-ktgrm.
                select single txt20 into it_data_1-gltxt from skat where spras = 'EN' and saknr = it_data_1-hkont and ktopl = '1000'.
                select single maktg into it_data_1-maktg from makt where matnr = it_data_1-matnr.
                select single vtext into it_data_1-division from tspat where spart = it_data_1-spart and spras = 'EN'.
                append it_data_1.
                clear it_data_1.
                clear it_sales.
              endloop.
              sort it_data_1 by matnr vbeln.
    if p_check ne 'X'.
              loop at it_data_1.
                concatenate it_data_1-matnr ' ' it_data_1-matkl ' ' it_data_1-ktgrm into it_final_1-count.
                move: it_data_1-matnr to it_final_1-matnr,
                      it_data_1-matkl to it_final_1-matkl,
                      it_data_1-ktgrm to it_final_1-ktgrm,
                      it_data_1-vtext to it_final_1-vtext,
                      it_data_1-sakn1 to it_final_1-sakn1,
                      it_data_1-spart to it_final_1-spart,
                      it_data_1-gltxt to it_final_1-gltxt,
                      it_data_1-division to it_final_1-division,
                      it_data_1-prctr to it_final_1-prctr,
                      it_data_1-maktg to it_final_1-maktg,
                      it_data_1-fkimg to it_final_1-fkimg,
    *                  it_data_1-dmbtr to it_final_1-dmbtr,
    *                  it_data_1-dmbtr_cr to it_final_1-dmbtr_cr,
    *                  it_data_1-dmbtr_dr to it_final_1-dmbtr_dr,
                      it_data_1-dr_qty to it_final_1-dr_qty,
                      it_data_1-cr_qty to it_final_1-cr_qty,
                      it_data_1-dmbtr_1 TO it_final_1-dmbtr_1,
                      it_data_1-dmbtr_2 TO it_final_1-dmbtr_2,
                      it_data_1-dmbtr_dr_1 TO it_final_1-dmbtr_dr_1,
                      it_data_1-dmbtr_dr_2 TO it_final_1-dmbtr_dr_2,
                      it_data_1-dmbtr_cr_1 TO it_final_1-dmbtr_cr_1,
                      it_data_1-dmbtr_cr_2 TO it_final_1-dmbtr_cr_2.
                append it_final_1.
                clear it_data_1.
              endloop.
              data: wa_matnr_1 like mara-matnr,
                    wa_matkl_1 like vbrp-matkl,
                    wa_ktgrm_1 like vbrp-ktgrm,
                    wa_hkont like bsis-hkont,
                    wa_gltxt like skat-txt20,
                    wa_hkont_dr like bsis-hkont,
                    wa_hkont_cr like bsis-hkont,
                    wa_maktg_1 like makt-maktg,
                    wa_vtext_1 like tvkmt-vtext,
                    wa_vtext_2 like tspat-vtext,
                    wa_spart like vbrk-spart,
                    wa_prctr like vbrp-prctr.
              sort it_final_1 by matnr matkl ktgrm division prctr hkont.
              loop at it_final_1.
                wa_matnr_1 = it_final_1-matnr.
                wa_matkl_1 = it_final_1-matkl.
                wa_ktgrm_1 = it_final_1-ktgrm.
                wa_vtext_1 = it_final_1-vtext.
                wa_spart = it_final_1-spart.
                wa_hkont = it_final_1-sakn1.
                wa_gltxt = it_final_1-gltxt.
                wa_vtext_2 = it_final_1-division.
                wa_prctr = it_final_1-prctr.
                wa_maktg_1 = it_final_1-maktg.
    *        wa_hkont_cr = it_final_1-hkont_cr.
                at end of count.
                  sum.
    *          it_gl-hkont_dr = it_final_1-hkont_dr.
    *          it_gl-hkont_cr = it_final_1-hkont_cr.
                  it_gl-fkimg = it_final_1-fkimg.
                  it_gl-dr_qty = it_final_1-dr_qty.
                  it_gl-cr_qty = it_final_1-cr_qty.
                  it_gl-dmbtr_1 = it_final_1-dmbtr_1.
                  it_gl-dmbtr_2 = it_final_1-dmbtr_2.
                  it_gl-dmbtr = it_final_1-dmbtr_1 - it_final_1-dmbtr_2.
    *it_gl-dmbtr = it_final_1-dmbtr.
                  it_gl-dmbtr_dr_1 = it_final_1-dmbtr_dr_1.
                  it_gl-dmbtr_dr_2 = it_final_1-dmbtr_dr_2.
                  it_gl-dmbtr_dr = it_final_1-dmbtr_dr_1 - it_final_1-dmbtr_dr_2.
    *it_gl-dmbtr_dr = it_final_1-dmbtr_dr.
                  it_gl-dmbtr_cr_1 = it_final_1-dmbtr_cr_1.
                  it_gl-dmbtr_cr_2 = it_final_1-dmbtr_cr_2.
                  it_gl-dmbtr_cr = it_final_1-dmbtr_cr_1 - it_final_1-dmbtr_cr_2.
    *it_gl-dmbtr_cr = it_final_1-dmbtr_cr.
                  it_gl-gltxt = wa_gltxt.
                  it_gl-matnr = wa_matnr_1.
                  it_gl-matkl = wa_matkl_1.
                  it_gl-vtext = wa_vtext_1.
                  it_gl-spart = wa_spart.
                  it_gl-division = wa_vtext_2.
                  it_gl-prctr = wa_prctr.
                  it_gl-hkont = wa_hkont.
                  it_gl-maktg = wa_maktg_1.
                  it_gl-netqty = ( it_gl-fkimg + it_gl-dr_qty ) - ( it_gl-cr_qty ).
                  it_gl-netval = ( it_gl-dmbtr + it_gl-dmbtr_dr ) - ( it_gl-dmbtr_cr ).
                  append it_gl.
                  clear wa_matnr_1.
                  clear wa_vtext_1.
                  clear wa_matkl_1.
                  clear wa_ktgrm_1.
                  clear wa_hkont.
                  clear wa_hkont_dr.
                  clear wa_hkont_cr.
                endat.
                clear it_final_1.
                clear it_gl.
              endloop.
            endif.
          ENDIF.
      endif.
    Do provide your valuable suggestions
    Regards,
    Jitesh

    Assuming you are using standard tables instead of sorted or hashed, your problem is likely here:
    loop at it_fin.
      read table it_sales with
        key vbeln = it_data_1-vbeln
        matnr = it_Data_1-matnr
        prctr = it_data_1-prctr.
    endloop.
The READ without the BINARY SEARCH addition is in effect a nested loop, so have a look at:
    [Performance of Nested Loops|/people/rob.burbank/blog/2006/02/07/performance-of-nested-loops]
    Rob
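A minimal sketch of what Rob suggests, using the names from the posted snippet (and assuming it_fin rather than it_data_1 is the intended work area inside the loop, which is itself worth double-checking): sort once before the loop, then let each READ do a binary search.
sort it_sales by vbeln matnr prctr.          " sort once, outside the loop
loop at it_fin.
  read table it_sales with key vbeln = it_fin-vbeln
                               matnr = it_fin-matnr
                               prctr = it_fin-prctr
                               binary search.
  if sy-subrc = 0.
    " matching row found; process it_sales here
  endif.
endloop.
This turns each inner lookup from a linear scan into a logarithmic search; a sorted or hashed table with an appropriate key would achieve the same without the explicit SORT.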

  • Cannot create R3TR OSOA Version from a Generic Datasource

    Hello my friends,
    I need your help, please,
I have created a generic DataSource that extracts data from a view in ECC. When I create the DataSource and save it (transaction RSO2), the system (ECC) only shows me the R3TR OSOD object as available for transport. I need to see the R3TR OSOA object to transport the DataSource from the Dev to the QA system, because when I transport the DataSource, I can't find the active version in the QA system.
I think this could be an authorizations issue, but I am not very sure.
These are the steps that I am executing:
1.- Assign the DataSource to an object directory.
2.- The object generated is of type OSOD (delivery version).
    Thanks for your help!!
    Regards.
    Antonio.

Hi,
Have you enhanced any fields? If so, please collect the append structure as well.
That is not the problem, though; first collect the transport request for the same DataSource in RSA6.
Please find the document below on how to collect a DataSource in ECC:
    ECC Data sources Transportation
    Thanks,
    Phani.

  • BPEL Append for collection output

    Hi,
I need to append the values from an invoke inside a loop. On each iteration, an output collection is returned with a unique ID parameter (e.g. 20, 30, 40, 50), and all these parameters have to be appended and passed to the PL/SQL procedure that deletes the ID values.
Can I use copy and append to gather the values into a single variable?
    Thanks,
    Balaji.

You need to take care of only one thing.
When you are appending output from the input, the mapping from input to output will be one level below the input element.
What I mean is, if you have an element
<employee>
  <name>
</employee>
and name is the appending element, then you will use the append operation and map name from the input to employee in the output.

  • Append the data in file at receiver side

    Hi All,
I want to dump data from SAP tables. The data is huge, so we are sending it in slots from ECC, say 50K records at a time, and to collect it I am using the Append parameter on the file receiver side. The records are getting appended to the file, which is correct.
But I have one doubt:
Say on Monday the records get appended to the file at the receiver side; on Tuesday the records will again get appended to the same file, which will be a problem. For this I used the Create parameter with Overwrite. My question: if I am using Create, then we cannot use Append.
Please suggest.
    Regards

Hi Gangadhar,
Append works like this:
if there is no file, it creates the file, and if the file already exists, it appends the data to the existing file.
In your case, when you create the file with the day in the name, i.e. file_day, the file will get created the first time you send 50K records.
If you send another 50K the same day, it will get appended to the same file.
If you send another 50K set on another day, a new file will get created, as there is no file available with that file name,
i.e. filename_nextday.
Hope this clears all your queries.
    HTH
    Rajesh

  • Unable to collect Product Return History using legacy collection

    Hi,
I am facing an issue collecting product return history using legacy collection: File Upload (User File Upload) and Loader Worker are erroring out as below. As far as I can observe, a space is being inserted before the .ctl, .dis, and .bad extensions in the file paths.
Can someone guide me on how to resolve the issue below?
    Loader Worker
    Argument 1 (CTRL_FILE) = /u02/oracle/xxxxx/inst/apps/rights_apps/logs/appl/conc/out/5913849MSD_DEM_RETURN_HISTORY .ctl
    Argument 2 (DATA_FILE) = /u02/oracle/xxxxx/inst/apps/rights_apps/logs/appl/conc/out/5913849PrdRetHist.dat
    Argument 3 (DISCARD_FILE) = /u02/oracle/xxxxx/inst/apps/rights_apps/logs/appl/conc/out/5913849MSD_DEM_RETURN_HISTORY .dis
    Argument 4 (BAD_FILE) = /u02/oracle/xxxxx/inst/apps/rights_apps/logs/appl/conc/out/5913849MSD_DEM_RETURN_HISTORY .bad
    Argument 5 (LOG_FILE) =
    Argument 6 (NUM_OF_ERRORS) = 1000000
    ===================================================================
    plan_id:0 plan_type:0 planning_engine_type:1
    Creating dummy log file ...
    Parent Program Name: MSCLOADS
    This is NOT as part of a Plan run.
    NLS_LANG original American_America.AL32UTF8 alt American_America.UTF8
    LRM-00112: multiple values not allowed for parameter 'control'
    SQL*Loader: Release 10.1.0.5.0 - Production on Tue Mar 11 19:58:20 2014
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    SQL*Loader-100: Syntax error on command-line
    Program exited with status 1
    APP-FND-01630: Cannot open file /u02/oracle/xxxxx/inst/apps/rights_apps/appltmp/OFq98wrx.t for reading
    Cause: USDINS encountered an error when attempting to open file /u02/oracle/xxxxx/inst/apps/rights_apps/appltmp/OFq98wrx.t for reading.
    Action: Verify that the filename is correct and that the environment variables controlling that filename are correct.
    Action: If the file is opened in read mode, check that the file exists. Check that you have privileges to read the file in the file directory. Contact your system administrator to obtain read privileges.
    Action: If the file is opened in write or append mode, check that you have privileges to create and write files in the file directory. Contact your system administrator to obtain create and write privileges.
    ***** End Of Program - No title available *****
    File Upload (User File Upload)
    Tue Mar 11 19:57:52 RET 2014: Profile 'MRP_DEBUG' Value : N
    Tue Mar 11 19:57:52 RET 2014: ===============================================================
    Tue Mar 11 19:57:52 RET 2014: fileLoaderInit: paramName = pLOAD_ID; paramValue=41563
    Tue Mar 11 19:57:52 RET 2014: ===============================================================
    Tue Mar 11 19:57:52 RET 2014: The control file Path /u02/oracle/xxx/apps/apps_st/appl/msc/12.0.0/patch/115/import/MSD_DEM_RETURN_HISTORY .ctl does not exist. Please contact your System  Administrator
    Regards,
    ML

    Hi,
Log in to the Unix server; I believe the control file is placed in a custom top, say $MSC_TOP, in your environment.
Just try to rename the .ctl file so that it no longer contains the space, i.e. MSD_DEM_RETURN_HISTORY<space>.ctl.
    And try to upload the file once again.
Hope this helps!

  • Appending Xpath Value to a variable in For Each loop

So I am iterating through an XML file collection inside a ForEach container. I have an XML Task of type XPath within this loop.
The XPath expression on each file evaluates to a value, which I want to append as I keep iterating.
How do I go about doing this? I tried something like User::CheckSum =+ User::CheckSum, but it gives an error, and I am not sure which property it should be applied to.
    Please understand that it is not one file with multiple nodes but multiple XML files in my scenario.
    Any help is much appreciated
    SM

I do not think the variable's value gets preserved between iterations; in any case, it does not look like a valid approach.
Use the XML File Source in the ForEach Loop and capture the needed value into a package variable that you can keep appending to, say in a Script Task.
Arthur

  • Logfile Generation utilizing "Excel" (Creating and Appending Report)

    All,
As always, thanks for the help you have given me in the past, especially the vets. I have tried to figure out a solution to my issue from the message board, but no solution seems to fit what I am doing.
Here is my situation: I am using LabVIEW to test my product one unit at a time. I have always used TestStand and report generation from there, but this time it is strictly LabVIEW. This is my first attempt to create a logfile with Excel that appends one .xls file every time a unit is tested.
The way my test is set up now, I test and collect the data in an array for the logfile generation VI. I took several stabs at it and looked at examples, but can't figure out the direction I need to go. Here are the parameters necessary for the logfile (spreadsheet):
- All UUTs will go into one spreadsheet, and the spreadsheet will be appended by adding new data in the next available row.
- Data is imported to the spreadsheet in array format.
- Test data that passes will be green; test data that fails will be red (I can figure this out, but it is why I need to use Excel).
- I want to use Excel so I have more flexibility for graphs and things of that nature in the future.
It seems rather simple, but not for me. If I go to the Report Generation Toolkit, I see "Create Report" and "Append Report", but Append Report still wants the "report input" node wired. What do I wire that to? For example, if I have an Excel spreadsheet called hangover.xls, do I somehow wire hangover.xls to the input? I am having trouble finding answers. I would really appreciate a simple JPG or VI so I can understand the setup for what I want to do.
    Comments and links to threads/help appreciated!
    Ryan

    Hi Evan,
Thanks for the other examples. I thought I was going to be able to manipulate them into what I want, but I ended up spending about six hours playing with it, up until 2 am. I am getting frustrated with this; it is new ground for me, and I have never experimented with logfile creation. I am sorry to keep bothering you with this, but I am ready to pull my hair out. I attached a couple of VIs: Spreadsheet_import.vi is the main VI and report.vi is the sub. I need to give them better names but haven't got there yet.
    First off, that VI you posted that I couldn't open, could you just take a JPG of the block diagram? That would really help.
I need to create a spreadsheet with logfile data in rows. The spreadsheet is to be appended for each unit under test: each unit under test gets one row, and all data is written at the end of the test. If you look at Spreadsheet_import.vi, I am basically taking a bunch of 1D arrays of data to create one long 1D array for one row.
Every month a new spreadsheet is created (so logfile data is divided into months), and that is what report.vi does: it looks to see whether the filename has already been created and, if not, sends a boolean to the Write To Spreadsheet File function to append. I reverted to Write To Spreadsheet because, for the life of me, I cannot figure out how to use the worksheet portion to do this. I would think this should be pretty simple, but I cannot figure it out, and it's not for lack of trying.
If I use Write To Spreadsheet, I am going to run into problems, because I ultimately want to use an Excel template with formulas, but if I can figure it out, this will have to do.
All I really want to do is create a spreadsheet if one doesn't exist or append if it does, combine all my 1D array data, and create one row with this data. The other issue I ran into before is that I can't figure out how to tell Excel where the next row is. This is definitely stressing me out, as I have a deadline, and I will gladly send a case of beer to Norway for the help received.
    Dying Here,
    Ryan
    Attachments:
Spreadsheet_import.vi 14 KB
report.vi 33 KB
