Endeca Record spec creation

Hello everyone,
Currently our pipeline has two input files: 1. ITEM and 2. PRICE. For each item in the ITEM file, there may be multiple price records in the PRICE file. There can also be a situation where a particular item does not have any price at all (expired items). We are doing a LEFT JOIN between ITEM and PRICE.
Now the problem is that we have created a column in the PRICE file called "rec_spec", and this acts as the record spec in our pipeline because it is unique within the PRICE file. But when we left join it with the ITEM records, all items that have a price get a record spec, while expired items do not. This generates a huge number of warnings in the dgidx.log file for the records that have no record spec. Our client wants to get rid of those warning messages in the log file. How can I go about this? Is there a way in which Endeca creates a unique record spec for all the records after the left join? Please help.
Also, we can't create a record spec only in the ITEM file, because after the left join with the PRICE records multiple records may contain the same record spec and baseline_update will throw an error.
Thanks in advance,
Sav

Hi Sav,
As you've identified, the RecSpec has to be unique - and there are two ways you can go about this.
#1 - Allow DGIDX to create it automatically; if you don't specify a property to act as the RecSpec and provide it with values, then it will behave much like a standard database and create its own key.  This is not the recommended method, though, as it prevents you from being able to perform partial updates - if you can't re-create the key used, then how can you update the record?
#2 - Create your own RecSpec, as you're doing right now with the PRICE file in the pipeline.  As you've identified, though, if you have an empty or duplicate RecSpec, an error is generated.
This is a common problem with the scenario you described, since when there isn't a match on the PRICE side of the join, the result is an empty key.  I would suggest using a manipulator component, such as a Record Manipulator, after the join to create the RecSpec. There's no need to use only a single source field; in fact, when using multiple data sets (such as products, how-tos, and articles) we re-create our RecSpecs afterwards.
My suggestion would be to drop a manipulator after the join with the following behaviour:
Retrieve a unique value from the ITEM file, such as Item_ID, Product_ID, etc., as the first part of the key.
If there is a key present from the PRICE file (showing that this is not an expired product), append that additional key (rec_spec) to the ITEM identifier.
Optionally, you could append "_EXPIRED" to the item ID if there isn't a match with PRICE.
That way every item will have a unique RecSpec, which could be recreated if required.
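For illustration only, here is that key-derivation logic sketched in plain Java (the class, method and property names are examples, not fixed Endeca names - adapt it to however you host the logic, e.g. an expression in a Record Manipulator or a Java Manipulator):

public class RecSpecBuilder {

    /**
     * Builds a unique, reproducible RecSpec for a joined ITEM/PRICE record.
     *
     * @param itemId       unique identifier from the ITEM file (e.g. Item_ID)
     * @param priceRecSpec rec_spec from the PRICE file, or null/empty if the
     *                     item had no matching price record (expired item)
     */
    public static String buildRecSpec(String itemId, String priceRecSpec) {
        if (priceRecSpec != null && !priceRecSpec.isEmpty()) {
            // Item has a matching PRICE record: combine the two keys.
            return itemId + "_" + priceRecSpec;
        }
        // No PRICE match: still produce a unique, reproducible key.
        return itemId + "_EXPIRED";
    }

    public static void main(String[] args) {
        System.out.println(buildRecSpec("ITEM123", "PRC456")); // ITEM123_PRC456
        System.out.println(buildRecSpec("ITEM999", null));     // ITEM999_EXPIRED
    }
}

Because the key is built only from values that are present on every baseline run, it can be re-created later, which keeps partial updates possible.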

Similar Messages

  • Can I use multiple values for a rollup key on the same Endeca record?

    We have a business need to aggregate our records using different criteria, based on user navigation. We are thinking of using a rollup key with multiple values to help with the aggregation, but we are running into some issues.
    Here is a made-up example of what we want to do: assume we have a group of products, and these products can be organized into groups using parent-child relationships. We would like to create an aggregate record for each parent, and we want the aggregate record for each parent to include all the children of that parent. To achieve this, we use a field called "parent_rec_spec" that holds the parent record spec, and we set the same value on this field for the parent and its children. When we roll up using parent_rec_spec as the rollup key, we see one aggregate record for each parent (with its children).
    This setup has worked perfectly for us so far. But now we have a business requirement that allows child nodes to be linked to multiple parents at the same time. We were hoping to use another dimension to limit the records based on user roles/characteristics, so that only the applicable parents/children are displayed (for example, we can use "market" as an additional filtering property and decide to show all parents/children for "North America" while hiding the parents/children for other markets).
    This caused odd behavior when children are linked to multiple parents. For example, assume that SKUs A and B are linked to parents in both "North America" and "Europe", and that the user chooses the "North America" market. The navigation state eliminates the parents/children that are not in North America, and causes the parents/children labeled for North America to show up and be aggregated correctly. However, it also leads to the creation of additional aggregate records for A and B, based on the parent_rec_spec values that would have linked them to the Europe parents (even though those parents are hidden).
    Here is an example index file that we used to load the test data:
    Update||1
    Market||North America
    Record Type||Product
    Name||Parent 1
    rec_spec||P1
    parent_rec_spec||P1
    EOR
    Update||1
    Market||Europe
    Record Type||Product
    Name||Parent 2
    rec_spec||P2
    parent_rec_spec||P2
    EOR
    Update||1
    Market||North America
    Record Type||Product
    Name||Child A
    rec_spec||A
    parent_rec_spec||P1
    EOR
    Update||1
    Market||North America
    Market||Europe
    Record Type||Product
    Name||Child B
    rec_spec||B
    parent_rec_spec||P1
    parent_rec_spec||P2
    EOR
    Update||1
    Market||North America
    Market||Europe
    Record Type||Product
    Name||Child C
    rec_spec||C
    parent_rec_spec||P1
    parent_rec_spec||P2
    EOR
    Update||1
    Market||Europe
    Record Type||Product
    Name||Child D
    rec_spec||D
    parent_rec_spec||P2
    EOR
    In this setup, we have parent P1 marked for North America with children A, B and C, and parent P2 marked for Europe with B, C and D as children. When we use North America as a filter in the navigation state, and parent_rec_spec as the rollup key, we see an aggregate record for P1 with A, B and C. But we also see a separate aggregate record containing only B and C (presumably because of the other parent_rec_spec value on those records).
    The actual data that we are testing with is more complicated, but the end result is similar. We also noticed that the additional aggregate records are not always created; it depends on the ordering of the records.
    The question that I need help with is this: is there a way to fine-tune the rollup logic so that it only includes certain records? (In the example above, we could change the parent_rec_spec values from P1 and P2 to P1_North_America and P2_Europe, and then we would be interested in rolling up only on values that end with North_America.)
    By the way, we considered using separate rollup keys for each context (like parent_rec_spec_north_america and parent_rec_spec_europe), but the number of contexts is dynamic and might grow large, so it is not easy for us to create the additional properties on the fly, and we are concerned about the potentially large number of dimensions.
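    For illustration, the market-qualified keys described above might look like the following for Parent 1 and Child B (values are examples only; whether the rollup can then be restricted to keys ending in a given market suffix is exactly the open question):
    Update||1
    Market||North America
    Record Type||Product
    Name||Parent 1
    rec_spec||P1
    parent_rec_spec||P1_North_America
    EOR
    Update||1
    Market||North America
    Market||Europe
    Record Type||Product
    Name||Child B
    rec_spec||B
    parent_rec_spec||P1_North_America
    parent_rec_spec||P2_Europe
    EOR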

    http://www.adobe.com/cfusion/webforums/forum/messageview.cfm?catid=2&threadid=1157850

  • Endeca record ID shows unexpected values

    Hi All,
    I have designed the pipeline, record adapters, dimension adapters and property mapper using the deployment template.
    In this process, I have created custom properties and dimensions. I have marked Product_ID as "use for record spec" and the indexing process completed successfully, as far as the forge log is concerned.
    However, when I test the data in the endeca_jspref reference application, I am surprised to see that the total number of records is displayed correctly as 7 (seven), but the record name (P_Name) is displayed as follows:
    1 Record Endeca.4096
    2 Record Endeca.8192
    3 Record Endeca.16384
    4 Record Endeca.32768
    5 Record Endeca.65536
    6 Record Endeca.131072
    7 Record Endeca.262144
    But it should be displayed as "Acer Laptop 1", "Acer Laptop 2" and so on.
    And of course the other properties are not coming through either.
    Has anyone faced similar issues? Any suggestions/ideas?
    Regards,
    Hoque

    Hoque,
    Do p_product_id and p_name show up correctly in the properties of each record in your endeca_jspref application? If so, they're mapping correctly in your pipeline. The property designated as the title of each record in the endeca_jspref application is controlled in the constants.jsp file under your endeca_jspref webapps directory. By default, this is set to "P_Name", but be careful - it is case-sensitive.
    If you're running the reference application via port 8006 (Workbench), then look in C:\Endeca\Workbench\<version>\server\webapps\endeca_jspref for the root of the endeca_jspref application to find this constants.jsp file. If you're running it via port 8888 (EAC/Platform Services), then look in C:\Endeca\PlatformServices\<version>\tools\server\webapps\endeca_jspref. Note: these paths depend on the version of the product you're running, but they should point you in the right direction.
    HTH,
    Dan
    http://branchbird.com

  • Quality info record mass creation

    Hello Guys,
    Are we able to do mass creation of quality info records through QI06? I want to load a bunch of QIRs as a manual load.
    I tried using QI06 with the add option, but I am getting a warning message that the QIR is not changed.
    Is it only possible to change records with it? Please help.
    Akash

    Hi
    For QI06, read the following:
    You can use this program to make mass changes to quality information records. When adding records, you can enter one or several materials. As a rule, the only materials used for processing are those for which QM in Procurement is active in the material master and a vendor release or quality assurance agreement is required by the control key.
    You can also select the materials using the material class.
    You can enter one or more vendors for whom a quality information record is to be created.
    Using the selection criterion 'Only for purchasing info records', you can create quality information records for all possible material-vendor combinations, provided that purchasing information records already exist for these combinations.
    That means, first of all, you must enter the procurement key in the QM view of the material.
    Second, you can select multiple vendors for a material, or multiple materials with a single vendor.
    If you execute this, you will get the list of all possible vendor-material combinations for which a QI record can be maintained.
    By selecting a line you can save them.
    The quality info records printed in the list have a traffic light indicator which denotes the following:
    Green light:  - All data is O.K.
    Yellow light: - Ordered quantity is greater than released quantity
            - There is no quality assurance agreement
            - Lock function is active
            - Validity period of QM system has expired
    Red light:    - Deletion flag is set
      You can select the "Set deletion flag" function using the menu.
    Regards
    Sujit

  • XSLT stylesheet template for Endeca Records

    Endeca Forge provides a Record Adapter which can load XML data, transformed (if required) into Endeca's XML record format by an XSLT stylesheet. This provides a way of getting XML into Endeca with a minimum of fuss - for data analysis, PoCs, and for modest amounts of XML data in production applications.
    There was an XSLT template posted to Endeca Eden prior to the acquisition. The thread is still available at:
    http://eden.endeca.com/web/eden/forums/message_boards/message/99120
    Does anyone have a copy of the template?
    TG

    Hi TG,
    Template XSLT below. Its main feature is that it attempts to be agnostic about the source XML structure, other than identification of the record-start element/XPath. It could be optimized for specific XML structures, or modified to support more complex XML data.
    Note also the remarks about XSLT and larger XML data - as your XML source data gets larger, XSLT will become less attractive, and you'll want to implement a streaming or pull-parser approach (SAX, StAX).
    Best
    Brett
    <?xml version="1.0" encoding="utf-8"?>
    <!-- Copyright 2012 Oracle, Inc. All Rights Reserved. -->
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:output method="xml" indent="yes" doctype-system="records.dtd" />
         <!--
              ========== About this stylesheet ==========
              This is a generic stylesheet for transforming an XML file into the XML record format accepted by Endeca Forge.
              The stylesheet is designed to be reasonably agnostic about the source XML format: the main configuration required is to identify
              one or more repeating record elements in the source XML.  The stylesheet creates the root RECORDS element, enclosing one RECORD element for each
              repeating element.
              Each RECORD element is then assigned any number of PROP name-value pairs created for each property, with each PROP containing
              a NAME attribute and a PVAL.  The stylesheet effectively "flattens" the XML into a RECORD, recursively fetching sub-elements and creating
              properties named according to the hierarchical path (think XPath).
              ========== Stylesheet configuration ==========
              Add your base source repeating elements as an xpath selector at ADD RECORD SELECTORS HERE below.
          The default example in the template is "item", which suits an RSS 2.0 feed.
              By default, the template will use full path property names.  To change to simple property names, change the variable
              use-simple-prop-names below to true().
              You can also change the path dividers used in full property names - see path-divider and attribute-divider variables below.
              ========== Examples ==========
              The first example used is an RSS 2.0 feed (note that there are other Endeca adapters for this type of data).
              Using full property names, this will produce records with the following properties:
              rss_channel_item_title
              rss_channel_item_description
              rss_channel_item_link
              rss_channel_item_guid_@isPermaLink
              and so on.  With simple prop names, only the last part of the name is used (after the last _ or @).
              The second example (sample.xml and sample_records.xml) shows how nested repeating elements within a record are folded-down to
              a multi-value property.
         -->
         <!-- change select to true() to use simple prop names -->
         <xsl:variable name="use-simple-prop-names" select="false()" />
         <xsl:variable name="path-divider">
              <xsl:text>_</xsl:text>
         </xsl:variable>
         <xsl:variable name="attribute-divider">
              <xsl:text>_@</xsl:text>
         </xsl:variable>
         <xsl:template match="/">
              <xsl:element name="RECORDS">
                   <!-- apply templates for each "repeating element" representing a single record -->
                   <!-- ADD RECORD SELECTORS HERE -->
                   <xsl:apply-templates select="//item" />
              </xsl:element>
         </xsl:template>
         <!-- one template with RECORD element for each template type applied above -->
         <xsl:template match="item">
              <xsl:element name="RECORD">
                   <xsl:call-template name="leaves-auto-subpaths" />
              </xsl:element>
         </xsl:template>
        <!-- calls the prop template for each leaf -->
        <xsl:template name="leaves-auto-subpaths">
            <!-- select all attributes-->
            <xsl:for-each select="@*">
                <xsl:call-template name="prop" />
            </xsl:for-each>
            <!-- select all leaf nodes -->
            <xsl:for-each select="*[not(*) and . != '']">
                <xsl:call-template name="prop" />
            </xsl:for-each>
            <!-- auto recurse subpaths -->
            <xsl:for-each select="*[*]|*[@*]">
                <xsl:call-template name="leaves-auto-subpaths" />
            </xsl:for-each>
        </xsl:template>
        <!-- creates the PROP element and calls prop-name and prop-pval -->
        <xsl:template name="prop">
            <xsl:element name="PROP">
                 <xsl:if test="$use-simple-prop-names">
                      <xsl:call-template name="prop-name-simple" />
                 </xsl:if>
                 <xsl:if test="not($use-simple-prop-names)">
                      <xsl:call-template name="prop-name-full" />
                 </xsl:if>
                <xsl:call-template name="prop-pval" />                 
            </xsl:element>
        </xsl:template>
        <!-- creates the NAME attribute (simple prop name) -->
        <xsl:template name="prop-name-simple">
            <xsl:attribute name="NAME">
                <xsl:value-of select="name()" />
            </xsl:attribute>
        </xsl:template>
        <!-- creates the NAME attribute -->
        <xsl:template name="prop-name-full">
            <xsl:attribute name="NAME">
                 <!-- walk from root to current node, print name and underscore  -->
                <xsl:for-each select="ancestor-or-self::*">
                    <xsl:value-of select="name()" />
                    <xsl:if test="position() != last()">
                        <xsl:value-of select="$path-divider" />
                    </xsl:if>
                </xsl:for-each>
                <!-- pick up attributes -->
                <xsl:if test="not(namespace::*)">
                    <xsl:value-of select="$attribute-divider" />
                    <xsl:value-of select="name()" />
                </xsl:if>
            </xsl:attribute>
        </xsl:template>
        <!-- creates the PVAL element -->  
        <xsl:template name="prop-pval">
            <xsl:element name="PVAL">
                <xsl:value-of select="." />
            </xsl:element>
        </xsl:template>
    </xsl:stylesheet>
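    For reference, here is roughly what the stylesheet above produces with full property names. Given an RSS 2.0 fragment such as:
    <rss version="2.0">
        <channel>
            <item>
                <title>Example product</title>
                <link>http://example.com/p/1</link>
                <guid isPermaLink="true">http://example.com/p/1</guid>
            </item>
        </channel>
    </rss>
    each item becomes one RECORD along these lines:
    <RECORDS>
        <RECORD>
            <PROP NAME="rss_channel_item_title"><PVAL>Example product</PVAL></PROP>
            <PROP NAME="rss_channel_item_link"><PVAL>http://example.com/p/1</PVAL></PROP>
            <PROP NAME="rss_channel_item_guid"><PVAL>http://example.com/p/1</PVAL></PROP>
            <PROP NAME="rss_channel_item_guid_@isPermaLink"><PVAL>true</PVAL></PROP>
        </RECORD>
    </RECORDS>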

  • Info record auto creation

    Hi all,
    When I create a PO with automatic info record creation, the plant information is not updated in the info record that is created. Is there any configuration regarding this? How can I set it to update by plant?
    Thanks in advance.
    Regards,
    Marcelo Buosi.

    Hi,
    Materials Management>Purchasing>Conditions-->Define Condition Control at Plant Level
    check also
    Materials Management>Purchasing>Environment Data>Define Default Values for Buyers>Settings for Default Values>Indicators>Info record update
    You assign default values that you maintained in Customizing to a user by entering the key of the default value in the user master record under the parameter ID "EVO".
    Andrzej

  • How to delete child table records while creating a reconciliation event

    Hi,
    I developed a custom scheduled task to create reconciliation events with child form information:
    if (!reconOperations.ignoreEventAttributeData(objectName, attrs, "Roles", childs)) {
        long eventId = reconOperations.createReconciliationEvent(objectName, attrs, eventAttributes);
        for (int f = 0; f < childs.length; f++) {
            reconOperations.providingAllMultiAttributeData(eventId, "Roles", true);
            reconOperations.addDirectMultiAttributeData(eventId, "Roles", childs[f]);
        }
        reconOperations.finishReconciliationEvent(eventId);
    }
    However, when I delete roles from the target system, the roles are not deleted from OIM.
    How can I configure the recon operations to do this?

    objName - The name of the object for which the reconciliation is taking place.
    parentRecord - A Map containing the field-value pairs for the data received from the target.
    childTableName - The name of the multi-attribute reconciliation field that the data is for.
    childRecords - A List containing Hashtable objects. Each Hashtable holds the field-value pairs for the data record pertaining to that attribute, as received from the target.

  • Info record & PO creation without master data

    Dear experts,
    Please explain the following to me:
    1. Can we create an info record without a material master?
    2. For creating a PO without a material master,
        i) Account assignment and a material text entry are required - apart from this, is any
            configuration required?
        ii) Please explain the use of the settings in IMG --- MM --- Purchasing --- Entry aids for
            items without material master.
    thanks in advance,
    regards,
    Govardhan,V.

    1. Can we create an info record without a material master?
    Yes, you can - create it for a material group.
    2. For creating a PO without a material master,
    i) Account assignment and a material text entry are required - apart from this, is any
    configuration required?
    No.
    ii) Please explain the use of the settings in IMG --- MM --- Purchasing --- Entry aids for
    items without material master.
    If you create a PO item without a material master (and hence without an accounting view), the system takes the valuation class from the material group.
    The assignment of a valuation class to a material group enables the system to determine different accounts for the individual material groups.

  • Oracle endeca studio application creation issue

    I have installed the Oracle Endeca 3.1 solution on my desktop, but I am not able to create an application using Endeca Studio from an Excel upload. It shows the Excel data and fields in the preview, but it gets stuck on the create application page. Please help in resolving this issue.

    Hi,
         I have had this before, and sometimes it can be that the Excel spreadsheet has formulas, a split screen, colours or anything else odd. Please copy and paste values only, remove any split screen or colours, and try again - it usually works for me.
    Regards John

  • Equipment record address creation via function module

    How can I populate an Equipment record's address via a function module? I'm referring to the Address area of the Location tab in transaction IE02. BAPI_EQUI_CREATE works fine for the base data but has no address capabilities.

    Alejiandro
    Yes, it was ITOB410. I tried the lock_old parameter without luck, but I was able to get around that problem with an explicit call to FM EQUIPMENT_LOCK. However, I have a new problem with the FM.
    The LOCK FM got rid of the BLOCKED issue. However, now an internal call from EQUIPMENT_MODIFY to EQUI_GET_SAVE_DATA is causing a short dump in include LITOBBUFEQF70 on:
    120           IF NOT SY-SUBRC IS INITIAL.
    121 *         try to read lock table by matnr/sernr
    122             READ TABLE I_WA-LOCK ASSIGNING <L_WA_LOCK_REC>
    123                  WITH TABLE KEY EQUNR = SPACE
    124                                 MATNR = <L_WA_EQUI_REC_WBUF>-EQUI
    125                                 SERNR = <L_WA_EQUI_REC_WBUF>-EQUI
    126             IF NOT SY-SUBRC IS INITIAL.
    >>>>               MESSAGE X003.
    128             ENDIF.
    129           ENDIF.
    Have you seen this?

  • Regarding vendor master record's creation

    Hi SAP gurus,
    Can anybody tell me how I can check which user ID created a vendor master record (XK01) in SAP? Please, can anybody help me out?

    Hi,
    Use XK03.
    Enter XK03, then enter the vendor code, company code and purchasing org.
    Select Address data and press Enter.
    Then click on the Administrative Data button, which is available at header level behind the Change button.
    You can get your required details from here.
    Regards,
    Mahesh Wagh

  • Restrict Info Record Creation Based on Material Type

    Dear colleagues
    I would like to restrict material info record (ME11) creation based on the given material's material type (MARA - MTART). Is such a control possible? If yes, can you please tell me how?
    Best regards

    Dear AP,
    Unfortunately I wasn't able to find any user exit, BAdI, etc. That's why I needed to post the question to the forum. Before providing any answer, please consider that no question is posted without research, and give the audience specific, to-the-point answers, not irrelevant ones.
    Thank you

  • Error while running forge post Endeca integration with ATG

    Hi All,
    We have integrated an Endeca application with ATG and then tried running the Endeca baseline update script. However, the script failed with the error message below:
    Parsing XML dimensions data with validation turned on
    Parsing project file "C:\apps\ATGSample\data\forge_output\ATGSample.xml" (project="ATGSample")
    XMLParser: Reading dimensions, dvals, and synonyms from file "C:\apps\ATGSample\data\forge_output\\ATGSample.dimensions.xml"
    ERROR 06/07/13 05:15:57.022 UTC (1370582157018) DGIDX {dgidx,baseline} Internal error while decompressing input stream: null
    FATAL 06/07/13 05:15:57.022 UTC (1370582157018) DGIDX {dgidx,baseline} Fatal error at file , line 0, char 0; Message: An exception occurred! Type:RuntimeException, Message:The primary document entity could not be opened. Id=C:\apps\ATGSample\data\forge_output\\ATGSample.dimensions.xml
    WARN 06/07/13 05:15:57.022 UTC (1370582157019) DGIDX {dgidx,baseline} Lexer/OLT log: level=-1: 2013/06/07 10:45:57 | INFO | Disabling log callback
    We checked the physical location of the forge output and found that there is no file called 'ATGSample.dimensions.xml'.
    However, when we manually placed this file in the forge output folder and ran dgidx alone, the baseline update failed with the error below:
    Parsing XML dimensions data with validation turned on
    Parsing project file "C:\apps\ATGSample\data\forge_output\ATGSample.xml" (project="ATGSample")
    XMLParser: Reading dimensions, dvals, and synonyms from file "C:\apps\ATGSample\data\forge_output\\ATGSample.dimensions.xml"
    ERROR 06/07/13 05:15:57.022 UTC (1370582157018) DGIDX {dgidx,baseline} Internal error while decompressing input stream: null
    FATAL 06/07/13 05:15:57.022 UTC (1370582157018) DGIDX {dgidx,baseline} Fatal error at file , line 0, char 0; Message: An exception occurred! Type:RuntimeException, Message:The primary document entity could not be opened. Id=C:\apps\ATGSample\data\forge_output\\ATGSample.dimensions.xml
    WARN 06/07/13 05:15:57.022 UTC (1370582157019) DGIDX {dgidx,baseline} Lexer/OLT log: level=-1: 2013/06/07 10:45:57 | INFO | Disabling log callback
    ============================================================================
    === DGIDX: Version = "6.4.0.692722"
    === Start Time : Fri Jun 07 11:04:15 2013
    === Arguments : "C:\Endeca\MDEX\6.4.0\bin\dgidx.exe -v --compoundDimSearch --lang en --out C:\apps\ATGSample\logs\dgidxs\Dgidx\Dgidx.log --dtddir C:\Endeca\MDEX\6.4.0\conf\dtd --tmpdir C:\apps\ATGSample\data\temp C:\apps\ATGSample\data\forge_output\ATGSample C:\apps\ATGSample\data\dgidx_output\ATGSample"
    === Current Directory : C:\apps\ATGSample
    === Exec Path : C:\Endeca\MDEX\6.4.0\bin\dgidx.exe
    ============================================================================
    Language/collation in use is English (collation=endeca)
    WARN 06/07/13 05:34:15.054 UTC (1370583255046) DGIDX {dgidx,baseline} Lexer/OLT log: level=-1: 2013/06/07 11:04:15 | INFO | Enabling log callback
    No application configuration specified. Using "C:\apps\ATGSample\data\forge_output\ATGSample" as the application configuration prefix.
    ============================================================================
    === DGIDX: Starting phase "Read raw dimensions, properties, and records"
    === Current Time : Fri Jun 07 11:04:15 2013
    === Total Elapsed : 0.1131 seconds
    === User CPU Time : 0.0625 seconds
    === System CPU Time : 0.1250 seconds
    === Memory Usage : 18.44 MB
    ============================================================================
    Parsing XML dimensions data with validation turned on
    Parsing project file "C:\apps\ATGSample\data\forge_output\ATGSample.xml" (project="ATGSample")
    XMLParser: Reading dimensions, dvals, and synonyms from file "C:\apps\ATGSample\data\forge_output\\ATGSample.dimensions.xml"
    In Dval [id=10001] named "clothing-sku.color", the name is non-searchable.
    In Dval [id=10002] named "clothing-sku.size", the name is non-searchable.
    In Dval [id=10003] named "furniture-sku.woodFinish", the name is non-searchable.
    In Dval [id=10093] named "product.brand", the name is non-searchable.
    In Dval [id=10094] named "product.catalogId", the name is non-searchable.
    In Dval [id=10006] named "product.disallowAsRecommendation", the name is non-searchable.
    In Dval [id=10007] named "product.features.displayName", the name is non-searchable.
    In Dval [id=10095] named "product.language", the name is non-searchable.
    In Dval [id=10008] named "product.nonreturnable", the name is non-searchable.
    In Dval [id=10096] named "product.priceListPair", the name is non-searchable.
    In Dval [id=10009] named "product.siteId", the name is non-searchable.
    In Dval [id=10010] named "sku.siteId", the name is non-searchable.
    In Dval [id=10011] named "product.category", the name is non-searchable.
    In Dval [id=10079] named "item.type", the name is non-searchable.
    XMLParser: Done reading dimensions, dvals, and synonyms from "C:\apps\ATGSample\data\forge_output\\ATGSample.dimensions.xml"
    XMLParser: Reading auto propmap file "C:\apps\ATGSample\data\forge_output\\ATGSample.auto_propmap.xml"
    XMLParser: Done reading auto propmap file "C:\apps\ATGSample\data\forge_output\\ATGSample.auto_propmap.xml"
    XMLParser: Reading properties from file "C:\apps\ATGSample\data\forge_output\ATGSample.prop_refs.xml"
    XMLParser: Done reading properties from file "C:\apps\ATGSample\data\forge_output\ATGSample.prop_refs.xml"
    XMLParser: Reading rollup properties and dimensions from file "C:\apps\ATGSample\data\forge_output\ATGSample.rollups.xml"
    XMLParser: Done reading rollup properties and dimensions from file "C:\apps\ATGSample\data\forge_output\ATGSample.rollups.xml"
    XMLParser: Reading record spec property from file "C:\apps\ATGSample\data\forge_output\ATGSample.record_spec.xml"
    XMLParser: Property "common.id" is a record spec property.
    XMLParser: Done reading record specs from "C:\apps\ATGSample\data\forge_output\ATGSample.record_spec.xml"
    XMLParser: Reading record filter properties from file "C:\apps\ATGSample\data\forge_output\ATGSample.record_filter.xml"
    XMLParser: Done reading record filter properties from file "C:\apps\ATGSample\data\forge_output\ATGSample.record_filter.xml"
    XMLParser: Creating dimensions from dvals.
    XMLParser: Reading rollup properties and dimensions from file "C:\apps\ATGSample\data\forge_output\ATGSample.rollups.xml"
    XMLParser: Done reading rollup properties and dimensions from file "C:\apps\ATGSample\data\forge_output\ATGSample.rollups.xml"
    XMLParser: Reading dimensions from file "C:\apps\ATGSample\data\forge_output\ATGSample.dimension_refs.xml"
    XMLParser: Done reading dimensions from file "C:\apps\ATGSample\data\forge_output\ATGSample.dimension_refs.xml"
    XMLParser: Reading dimension groups from file "C:\apps\ATGSample\data\forge_output\ATGSample.dimension_groups.xml"
    XMLParser: Done reading dimension groups from file "C:\apps\ATGSample\data\forge_output\ATGSample.dimension_groups.xml"
    XMLParser: Reading precedence rules from file "C:\apps\ATGSample\data\forge_output\ATGSample.precedence_rules.xml"
    XMLParser: Done reading precedence rules from file "C:\apps\ATGSample\data\forge_output\ATGSample.precedence_rules.xml"
    XMLParser: Reading dval refs from file "C:\apps\ATGSample\data\forge_output\ATGSample.dval_refs.xml"
    XMLParser: Done reading dval refs from file "C:\apps\ATGSample\data\forge_output\ATGSample.dval_refs.xml"
    XMLParser: Reading dval ranks from file "C:\apps\ATGSample\data\forge_output\ATGSample.dval_ranks.xml"
    XMLParser: Done reading dval ranks from file "C:\apps\ATGSample\data\forge_output\ATGSample.dval_ranks.xml"
    ERROR 06/07/13 05:34:15.242 UTC (1370583255242) DGIDX {dgidx,baseline} No dimension_refs entry found for dimension [10079] "item.type"
    ERROR 06/07/13 05:34:15.242 UTC (1370583255242) DGIDX {dgidx,baseline} No dimension_refs entry found for dimension [10094] "product.catalogId"
    ERROR 06/07/13 05:34:15.242 UTC (1370583255242) DGIDX {dgidx,baseline} No dimension_refs entry found for dimension [10095] "product.language"
    ERROR 06/07/13 05:34:15.242 UTC (1370583255242) DGIDX {dgidx,baseline} No dimension_refs entry found for dimension [10009] "product.siteId"
    ERROR 06/07/13 05:34:15.242 UTC (1370583255242) DGIDX {dgidx,baseline} No dimension_refs entry found for dimension [10001] "clothing-sku.color"
    ERROR 06/07/13 05:34:15.242 UTC (1370583255242) DGIDX {dgidx,baseline} No dimension_refs entry found for dimension [10007] "product.features.displayName"
    ERROR 06/07/13 05:34:15.242 UTC (1370583255242) DGIDX {dgidx,baseline} No dimension_refs entry found for dimension [10006] "product.disallowAsRecommendation"
    ERROR 06/07/13 05:34:15.242 UTC (1370583255242) DGIDX {dgidx,baseline} No dimension_refs entry found for dimension [10002] "clothing-sku.size"
    ERROR 06/07/13 05:34:15.242 UTC (1370583255242) DGIDX {dgidx,baseline} No dimension_refs entry found for dimension [10003] "furniture-sku.woodFinish"
    ERROR 06/07/13 05:34:15.242 UTC (1370583255242) DGIDX {dgidx,baseline} No dimension_refs entry found for dimension [10008] "product.nonreturnable"
    ERROR 06/07/13 05:34:15.242 UTC (1370583255242) DGIDX {dgidx,baseline} No dimension_refs entry found for dimension [10093] "product.brand"
    ERROR 06/07/13 05:34:15.242 UTC (1370583255242) DGIDX {dgidx,baseline} No dimension_refs entry found for dimension [10096] "product.priceListPair"
    ERROR 06/07/13 05:34:15.242 UTC (1370583255242) DGIDX {dgidx,baseline} No dimension_refs entry found for dimension [10010] "sku.siteId"
    ERROR 06/07/13 05:34:15.242 UTC (1370583255242) DGIDX {dgidx,baseline} No dimension_refs entry found for dimension [10011] "product.category"
    XMLParser: Reading refinement config from file "C:\apps\ATGSample\data\forge_output\ATGSample.refinement_config.xml"
    XMLParser: Done reading refinement config from file "C:\apps\ATGSample\data\forge_output\ATGSample.refinement_config.xml"
    XMLParser: Reading dimension search index configuration from file "C:\apps\ATGSample\data\forge_output\ATGSample.dimsearch_index.xml"
    XMLParser: Done reading dimension search index configuration from file "C:\apps\ATGSample\data\forge_output\ATGSample.dimsearch_index.xml"
    XMLParser: Reading record search index configuration from file "C:\apps\ATGSample\data\forge_output\ATGSample.recsearch_indexes.xml"
    XMLParser: Done reading record search index configuration from file "C:\apps\ATGSample\data\forge_output\ATGSample.recsearch_indexes.xml"
    XMLParser: Reading record search interface configuration from file "C:\apps\ATGSample\data\forge_output\ATGSample.recsearch_config.xml"
    XMLParser: Done reading record search interface configuration from file "C:\apps\ATGSample\data\forge_output\ATGSample.recsearch_config.xml"
    WARN 06/07/13 05:34:15.288 UTC (1370583255283) DGIDX {dgidx,baseline} Errors while parsing record search interface configuration from file "C:\apps\ATGSample\data\forge_output\ATGSample.recsearch_config.xml": RETURN_RELRANK_SCORE no longer supported. Search interface member "allAncestors.displayName" in interface "All" is not a property or dimension Search interface member "product.displayName" in interface "All" is not a property or dimension Search interface member "sku.displayName" in interface "All" is not a property or dimension Cannot put search interface member "product.features.displayName" into the search interface "All" because it has not been enabled for full-text search Cannot put search interface member "product.brand" into the search interface "All" because it has not been enabled for full-text search Search interface member "product.repositoryId" in interface "All" is not a property or dimension Search interface member "sku.repositoryId" in interface "All" is not a property or dimension Search interface member "product.briefDescription" in interface "All" is not a property or dimension Search interface member "product.description" in interface "All" is not a property or dimension Search interface member "product.longDescription" in interface "All" is not a property or dimension Search interface member "product.keywords" in interface "All" is not a property or dimension Cannot put search interface member "clothing-sku.color" into the search interface "All" because it has not been enabled for full-text search Cannot put search interface member "clothing-sku.size" into the search interface "All" because it has not been enabled for full-text search Cannot put search interface member "furniture-sku.woodFinish" into the search interface "All" because it has not been enabled for full-text search Search interface member "sku.manufacturer_part_number" in interface "All" is not a property or dimension
    XMLParser: Reading search chars from file "C:\apps\ATGSample\data\forge_output\ATGSample.search_chars.xml"
    XMLParser: Done reading search chars from file "C:\apps\ATGSample\data\forge_output\ATGSample.search_chars.xml"
    XMLParser: Reading language stemming settings from file "C:\apps\ATGSample\data\forge_output\ATGSample.stemming.xml"
    XMLParser: Done reading per-language stemming settings from file "C:\apps\ATGSample\data\forge_output\ATGSample.stemming.xml"
    Default language English manually configured to use static word forms.
    XMLParser: Reading word forms from file "C:\Endeca\MDEX\6.4.0\conf\stemming\en_word_forms_collection.xml"
    XMLParser: Done reading word forms from file "C:\Endeca\MDEX\6.4.0\conf\stemming\en_word_forms_collection.xml". There are 50374 word forms.
    XMLParser: Reading language config from file "C:\apps\ATGSample\data\forge_output\ATGSample.languages.xml"
    XMLParser: Done reading language config from file "C:\apps\ATGSample\data\forge_output\ATGSample.languages.xml"
    XMLParser: Reading stop words from file "C:\apps\ATGSample\data\forge_output\ATGSample.stop_words.xml"
    XMLParser: Done reading stop words from file "C:\apps\ATGSample\data\forge_output\ATGSample.stop_words.xml", finished in 0.0039 seconds.
    FATAL 06/07/13 05:34:17.616 UTC (1370583257616) DGIDX {dgidx,baseline} ENE Indexer: Error processing records file.
    WARN 06/07/13 05:34:17.616 UTC (1370583257616) DGIDX {dgidx,baseline} Lexer/OLT log: level=-1: 2013/06/07 11:04:17 | INFO | Disabling log callback

    I've seen this type of error before when Forge is configured to read multiple files with the spec *.xml.  Is that how you've configured your record adapter?  That configuration then collides with the Forge XML config files merged into the same data/processing directory.
    From memory there are a couple of solutions for this - one might be to give the data files a common name prefix, if that's feasible, e.g. _data*.xml.  You could also use a sub-directory to separate the files, but you'd need to modify your copy scripts.

  • InvalidRecordException: Input record does not have a valid Id.

    I am trying to export data from hybris to Endeca. I have set up the index schema, index elements and CAS config, and added properties.
    I have added queries to the index elements, but somehow the record stores are not exporting the data to Endeca. When I checked the record store for data, it returned an empty file. I have checked the record store configuration and it shows record.spec as the ID property. I am not sure why I am getting the InvalidRecordException.
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <recordStoreConfiguration xmlns="http://recordstore.itl.endeca.com/">
    <changePropertyNames/>
    <idPropertyName>record.spec</idPropertyName>
    <jdbmSettings/>
    </recordStoreConfiguration>
    ERROR [hyend2exportcronjob1360880300110::de.hybris.platform.servicelayer.internal.jalo.ServicelayerJob] (hyend2exportcronjob1360880300110) [EndecaRecordStoreService] Exception during writing records to record store
    com.endeca.itl.recordstore.InvalidRecordException: Input record does not have a valid Id.
    at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:141)
    at $Proxy146.writeRecords(Unknown Source)
    at com.endeca.itl.recordstore.RecordStoreWriter.doFlush(RecordStoreWriter.java:170)
    at com.endeca.itl.recordstore.RecordStoreWriter.flush(RecordStoreWriter.java:145)
    at com.endeca.itl.recordstore.RecordStoreWriter.close(RecordStoreWriter.java:163)
    at de.hybris.platform.hyend2.services.endeca.impl.DefaultEndecaRecordStoreService.writeRecordsToRecordStore(DefaultEndecaRecordStoreService.java:125)
    at de.hybris.platform.hyend2.services.endeca.impl.DefaultEndecaRecordStoreService.writeRecords(DefaultEndecaRecordStoreService.java:98)
    at de.hybris.platform.hyend2.services.impl.DefaultExportService.exportSchema(DefaultExportService.java:456)
    at de.hybris.platform.hyend2.services.impl.DefaultExportService.exportSchema(DefaultExportService.java:210)
    at de.hybris.platform.hyend2.services.impl.DefaultExportService.export(DefaultExportService.java:121)
    at de.hybris.platform.hyend2.jobs.ExportJob.perform(ExportJob.java:67)
    at de.hybris.platform.hyend2.jobs.ExportJob.perform(ExportJob.java:1)
    at de.hybris.platform.servicelayer.internal.jalo.ServicelayerJob.performCronJob(ServicelayerJob.java:40)
    at de.hybris.platform.cronjob.jalo.Job.execute(Job.java:1294)
    at de.hybris.platform.cronjob.jalo.Job.performImpl(Job.java:813)
    at de.hybris.platform.cronjob.jalo.Job.access$1(Job.java:772)
    at de.hybris.platform.cronjob.jalo.Job$JobRunable.run(Job.java:677)
    at de.hybris.platform.util.threadpool.PoolableThread.run(PoolableThread.java:198)

    This was due to record stores having been dropped and not getting recreated. Creating the record store for the data from the command line, using the environment parameters (host and port), resolved the issue.

  • Endeca Utility 'backup_log_dir_for_component_AuthoringDgraph' failed

    I am getting an error when executing the baseline update script on Windows. The forge step completes fine, after which the script throws the following error.
    [06.25.14 21:04:38] INFO: Checking definition from AppConfig.xml against existing EAC provisioning.
    [06.25.14 21:04:39] INFO: Definition has not changed.
    [06.25.14 21:04:39] INFO: Starting baseline update script.
    [06.25.14 21:04:39] INFO: Acquired lock 'update_lock'.
    [06.25.14 21:04:39] INFO: [ITLHost] Starting shell utility 'cleanDir_processing'.
    [06.25.14 21:04:40] INFO: [ITLHost] Starting shell utility 'cleanDir_forge-output'.
    [06.25.14 21:04:41] INFO: [ITLHost] Starting shell utility 'cleanDir_dgidx-output'.
    [06.25.14 21:04:42] INFO: [ITLHost] Starting shell utility 'move_-_to_processing'.
    [06.25.14 21:04:44] INFO: [ITLHost] Starting copy utility 'fetch_config_to_input_for_forge_Forge'.
    [06.25.14 21:04:45] INFO: [ITLHost] Starting backup utility 'backup_log_dir_for_component_Forge'.
    [06.25.14 21:04:46] INFO: [ITLHost] Starting component 'Forge'.
    [06.25.14 21:04:55] INFO: [ITLHost] Starting backup utility 'backup_log_dir_for_component_Dgidx'.
    [06.25.14 21:04:56] INFO: [ITLHost] Starting component 'Dgidx'.
    [06.25.14 21:05:11] INFO: [AuthoringMDEXHost] Starting shell utility 'cleanDir_local-dgraph-input'.
    [06.25.14 21:05:12] INFO: [AuthoringMDEXHost] Starting shell utility 'rmdir_dgraph-input-old'.
    [06.25.14 21:05:13] INFO: [AuthoringMDEXHost] Starting copy utility 'copy_index_to_host_AuthoringMDEXHost_AuthoringDgraph'.
    [06.25.14 21:05:14] INFO: Applying index to dgraphs in restart group 'A'.
    [06.25.14 21:05:14] INFO: [AuthoringMDEXHost] Starting shell utility 'mkpath_dgraph-input-new'.
    [06.25.14 21:05:16] INFO: [AuthoringMDEXHost] Starting copy utility 'copy_index_to_temp_new_dgraph_input_dir_for_AuthoringDgraph'.
    [06.25.14 21:05:17] INFO: [AuthoringMDEXHost] Starting shell utility 'move_dgraph-input_to_dgraph-input-old'.
    [06.25.14 21:05:18] INFO: [AuthoringMDEXHost] Starting shell utility 'move_dgraph-input-new_to_dgraph-input'.
    [06.25.14 21:05:19] INFO: [AuthoringMDEXHost] Starting backup utility 'backup_log_dir_for_component_AuthoringDgraph'.
    [06.25.14 21:05:20] SEVERE: Utility 'backup_log_dir_for_component_AuthoringDgraph' failed.
    Occurred while executing line 5 of valid BeanShell script:
    2|
    3|    AuthoringDgraphCluster.cleanDirs();
    4|    AuthoringDgraphCluster.copyIndexToDgraphServers();
    5|    AuthoringDgraphCluster.applyIndex();
    6|
    7|    LiveDgraphCluster.cleanDirs();
    8|    LiveDgraphCluster.copyIndexToDgraphServers();
    [06.25.14 21:05:20] SEVERE: Error executing valid BeanShell script.Occurred while executing line 28 of valid BeanShell script:
    25|        Dgidx.run();
    26|
    27|        // distributed index, update Dgraphs
    28|        DistributeIndexAndApply.run();
    29|
    30|        WorkbenchManager.cleanDirs();
    31|        Forge.getPostForgeDimensions();
    [06.25.14 21:05:20] SEVERE: Caught an exception while invoking method 'run' on object 'BaselineUpdate'. Releasing locks.
    Caused by java.lang.reflect.InvocationTargetException
    sun.reflect.NativeMethodAccessorImpl invoke0 - null
    Caused by com.endeca.soleng.eac.toolkit.exception.AppControlException
    com.endeca.soleng.eac.toolkit.script.Script runBeanShellScript - Error executing
    valid BeanShell script.
    Caused by com.endeca.soleng.eac.toolkit.exception.AppControlException
    com.endeca.soleng.eac.toolkit.script.Script runBeanShellScript - Error executing
    valid BeanShell script.
    Caused by com.endeca.soleng.eac.toolkit.exception.EacComponentControlException
    com.endeca.soleng.eac.toolkit.utility.Utility runInParallel - Utility 'backup_lo
    g_dir_for_component_AuthoringDgraph' failed.
    [06.25.14 21:05:20] INFO: Released lock 'update_lock'.
    In the AuthoringDgraph log it shows the message below.
    FATAL 06/25/14 14:39:40.954 UTC (1403707180954) DGRAPH {dgraph,baseline} Rapid updates are not supported for datasets that do not have a record specification model specified in the record spec file.
    Stemming should be enabled for 1 languages

    Hello,
    I believe there are two different issues occurring here, so let's start with the easier one first: the error message in the AuthoringDgraph log.  The error is triggered because the Dgraph can't find a property within your records that has been flagged as a Record Specifier (aka Record Spec or RecSpec).  In the Endeca world, this is similar to what a primary key on a table accomplishes in a relational database.  Record Specs are used in many ways for record retrieval, but it's not strictly necessary to have one in place.
    If you don't specify a property to be a Record Spec, then Endeca will create one for you.  However - and this is where the error message comes in - you then cannot perform updates to the data, known as Partial Updates.  Since Endeca hasn't been told which property uniquely represents a specific record, it can't receive updates to modify or delete records.  That's where the "Rapid Updates" message comes into play.
    There are two choices here: first, specify a property as a Record Spec, either through Developer Studio or through the IOC/CSV overrides if you're using the product catalog integration method (what ATG uses to send data to Endeca).  The second option is to disable the Rapid Update (partial updates) functionality in your index - you'll need to comment out references to the "partial" directories in the various Dgraph XML configurations in your config/scripts directory.  However, that goes against Endeca best practice, which is to always have a Record Spec.
    And now on to the second issue, the error with the 'backup_log_dir_for_component_AuthoringDgraph' utility.  This isn't an uncommon type of error, and it may be unrelated to the issue above.  The usual culprits are permission errors, an inability to obtain a lock, etc.  These utilities are run by the EAC rather than by the application, so their logs are stored in a different location.
    First, find the installation directory for Platform Services on your system.  Inside it there should be a logging directory; you're looking for workspace/logs/utility/<appName>.backup_log_dir_for_component_AuthoringDgraph.txt.  Open that file and it should help clarify what the issue is - if not, please post its contents here and we can delve deeper.
    Best,
    - Jeff
