Keyword Redirect in Endeca Commerce 3.1.0

We are using MDEX 6.3.0, Platform Services 6.1.3, and Tools and Frameworks 3.1.0 (the Guided Navigation version of Workbench) as part of our Endeca Commerce 3.1.0 installation. We will not be using Page Builder and its cartridges, but we would like to implement keyword redirects for our Endeca application, which deals with search and guided navigation only.
While setting up the Discover app, we found that the Keyword Redirect management option was not available in Workbench. Do we need to make configuration changes to enable it, or has the feature been dropped from this release?
Thanks in advance!

The redirects would be managed by the merchandisers and could be updated daily or weekly, depending on business requirements. Since we will be using Rule Manager only, we would like the feature to be available in Workbench; any suggestions that would make life a little easier for our merchandisers are welcome.
Could we declare these files in our AppConfig.xml as web-studio-managed files so that the merchandisers have more control, as in Workbench 2.x?

Similar Messages

  • Keyword redirection ATG-Endeca

    Hi,
I have integrated ATG 10.1.1 with Endeca Commerce 3.1.0. I configured keyword redirection in Endeca Developer Studio, but I am not sure how to retrieve the URL from the search results returned by Endeca. The Endeca JSP reference application retrieves and displays the URL; however, I don't know how to do the same using the sample JSP pages ATG provides in assemblerSearchResultsSample.war. These JSPs seem to use endeca_assembler-3.1.0. I'd appreciate any help on this.

There is a patch available for 3.0.1 from support that adds keyword redirect management in Workbench and Assembler support for it (with documentation).
I recommend grabbing the patch or upgrading to 3.1.1.

  • Automation of thesaurus and keyword redirects in endeca

I need to fetch data from a text or Excel file and update the current thesaurus and keyword redirects in Endeca. How is this possible? Any ideas?

I have approached this issue in a slightly different way, and I can imagine a couple of scenarios for you:
Pre Tools and Frameworks 3.1.x:
1) Your staff maintain the thesaurus and keyword redirects both in Workbench and in some external system.
In this case, you might consider customizing your baseline update script as follows:
a) fetch your externally managed thesaurus/redirects
b) export the Workbench-managed files into a staging directory
c) remove any other files, such as rule groups, that you don't want to modify
d) call a custom class you develop that merges your external thesaurus/redirects into the thesaurus and redirect XML files
e) import the two files back into Workbench
f) continue with the baseline update script, which then does all the right things: it gets the new thesaurus/redirects out of Workbench, merges them with the pipeline, and runs Forge
2) You don't maintain the thesaurus and redirects in Workbench at all.
a) ensure the app config settings for the ConfigManager component do not include the thesaurus and redirects in the webStudioMaintainedFile list
b) your external process can then simply create thesaurus.xml and redirects.xml per the Endeca DTDs for these files (or you can create a class/XSLT that converts your feed into the appropriate XML)
Post Tools and Frameworks 3.1.x:
In this world, the thesaurus and redirects are managed in Workbench, and I don't believe you have the option not to do this.
I have not had to work through this, so I am unsure exactly how you would update Workbench, but the general idea is to fetch the externally managed thesaurus/redirects, convert them to the proper XML and JSON, and then import them into IFCR using a custom script. This is the part I'm most unsure of, as I have not seen documentation on how to do it, but I believe it can be done, given some other solutions I have heard of.
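As a rough illustration of the merge step 1(d) above, a sketch of folding externally managed redirects into an exported redirects.xml might look like the following. The element and attribute names here (KEYWORD_REDIRECT, REDIRECT_FROM, REDIRECT_TO) are assumptions for illustration only; check them against the redirects DTD shipped with your Endeca version.

```python
import xml.etree.ElementTree as ET

def merge_redirects(redirects_xml, external_pairs):
    """Merge (keyword, url) pairs into an exported redirects file.

    redirects_xml  -- path to the Workbench-exported redirects.xml
    external_pairs -- iterable of (keyword, url) from the external system
    Element/attribute names are assumptions; verify against your DTD.
    Existing entries for the same keyword are overwritten.
    """
    tree = ET.parse(redirects_xml)
    root = tree.getroot()
    # Index existing entries by keyword.
    existing = {e.get("REDIRECT_FROM"): e for e in root.iter("KEYWORD_REDIRECT")}
    for keyword, url in external_pairs:
        if keyword in existing:
            existing[keyword].set("REDIRECT_TO", url)
        else:
            ET.SubElement(root, "KEYWORD_REDIRECT",
                          REDIRECT_FROM=keyword, REDIRECT_TO=url)
    tree.write(redirects_xml, encoding="UTF-8", xml_declaration=True)
```

A custom class like this would run between the export (step b) and re-import (step e) of the baseline update script.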

  • How to use Workbench created Thesaurus and Keyword redirects in Endeca 3.1.2?

    Hi ,
I have been trying to add a few keyword redirects and thesaurus entries in the Endeca Workbench. After I add them, I cannot see either the keyword redirects or the thesaurus entries reflected in the authoring/live dgraphs.
I assumed that Workbench entries would automatically override/merge with the Developer Studio files when I run a baseline update. (There were no keyword redirect or thesaurus entries created through Developer Studio at this point.)
But when I add them via Developer Studio and run a baseline update, I can verify that they work in jspref as well as on the production site. Below are the versions of Endeca that I am using:
    PlatformServices – 6.1.3
    WorkBench – 3.1.2
    MDEX – 6.4.1
    CAS – 3.1.2
I want to know how I can have Workbench-created/managed keyword redirects and thesaurus entries.
I tried creating the keyword redirect and thesaurus entries first (in Workbench) and then running a baseline update, and vice versa; neither approach seemed to work.
I used the OOTB ProductCatalogIntegration setup to deploy this application, and I have not changed anything in AppConfig.xml either.
Any help would be hugely appreciated, as I have been trying to achieve this for a long time without progress.
Please let me know if you have any ideas on how to resolve this.
    Thanks in advance,
    Arjun

I was able to get the thesaurus working just by adding the entries in Workbench. There was a problem with the IFCR utility, which was why the thesaurus entries were not being published to the live dgraphs. Once that was fixed, the thesaurus worked correctly.
Keyword redirects took more effort, because our site uses the Presentation API. The keyword redirects configured in Workbench are not returned as supplement objects, but as content XML, so we needed to handle the XML content that contains the redirect URL.
If you have set up your application correctly, you should at least be able to see the redirects in jspref.
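Handling that content XML amounts to pulling the redirect URL out of the payload. A minimal sketch, where the element and attribute names (<redirect>, url) are hypothetical; inspect the XML your Workbench actually returns and adjust the lookups accordingly:

```python
import xml.etree.ElementTree as ET

def extract_redirect_url(content_xml):
    """Pull a redirect URL out of a content-XML payload, or return None.

    The element/attribute names here are hypothetical; inspect the XML
    your Workbench actually returns and adjust the lookups.
    """
    root = ET.fromstring(content_xml)
    # Look for a hypothetical <redirect url="..."/> element anywhere.
    node = root.find(".//redirect")
    if node is not None:
        return node.get("url") or (node.text or "").strip() or None
    return None
```

The caller would then issue an HTTP redirect when the function returns a URL, and render results normally when it returns None.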

  • Keyword Redirects with Stemming

    Hi all,
    I am trying to accomplish something with the redirect rules that cannot be accomplished through the IAP workbench - we want the users to be able to do this themselves, without input from the development team. Here is the scenario:
    Two redirect rules:
    Keyword: garage
    URL: /garage.html
    Keyword: garages
    URL: /garageCare.html
We have stemming enabled globally, and herein lies the problem. When searching for "garages", stemming is applied before the redirect rule (producing "garage"), and we are sent to the first URL instead of the second. I have been able to accomplish the above by going into default_redirects.xml and changing the ENABLE_STEMMING attribute for these two words from TRUE to FALSE. This works; however, we do not want to interfere with the users entering redirects via IAP Workbench, and we have no way of knowing whether our overridden values (ENABLE_STEMMING="FALSE") will be reverted when a change is made through Workbench. Is there something we can enable that lets users control stemming on an individual keyword-redirect basis, as we are able to do by editing the XML by hand?
    Thanks,
    Rob
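The manual ENABLE_STEMMING edit described above can at least be scripted so it re-applies after every Workbench change, rather than being done by hand. A sketch, assuming the attribute lives on KEYWORD_REDIRECT elements keyed by a REDIRECT_FROM attribute (names to be verified against your redirects DTD):

```python
import xml.etree.ElementTree as ET

def disable_stemming(redirects_xml, keywords):
    """Force ENABLE_STEMMING="FALSE" for the given redirect keywords.

    Element and attribute names are assumptions; verify them against
    the redirects DTD shipped with your Endeca release.
    Returns the number of entries changed.
    """
    tree = ET.parse(redirects_xml)
    changed = 0
    for entry in tree.getroot().iter("KEYWORD_REDIRECT"):
        if entry.get("REDIRECT_FROM") in keywords:
            entry.set("ENABLE_STEMMING", "FALSE")
            changed += 1
    tree.write(redirects_xml, encoding="UTF-8", xml_declaration=True)
    return changed
```

Hooked into the baseline update after the Workbench export, this would keep the overrides from silently reverting, though it does not give users per-redirect control in the UI.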

    Hi,
Did you try using the Thesaurus to do that instead of stemming?
An excerpt from the Admin Dev Guide: "Stemming equivalences are defined among single words. For example, stemming is used to produce an equivalence between the words automobile and automobiles (because the first word is the stem form of the second), but not to define an equivalence between the words vehicle and automobile (this type of concept-level mapping is done via the Thesaurus feature).
Stemming equivalences are strictly two-way (that is, all-to-all). For example, if there is a stemming entry for the word truck, then searches for truck will always return matches for both the singular form (truck) and its plural form (trucks), and searches for trucks will also return matches for truck. In contrast, the Thesaurus feature supports one-way mappings in addition to two-way mappings."
Hope that helps.
Regards,
Saleh
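The two-way versus one-way distinction quoted from the Admin Dev Guide can be made concrete with a small sketch of term expansion. The data structures are illustrative only, not how Endeca stores its dictionaries:

```python
def expand_two_way(term, pairs):
    """Two-way (all-to-all) expansion, as in stemming:
    either side of a pair maps to both sides."""
    terms = {term}
    for a, b in pairs:
        if term in (a, b):
            terms.update((a, b))
    return terms

def expand_one_way(term, mappings):
    """One-way expansion, as the Thesaurus also allows:
    only the source side maps to its targets."""
    terms = {term}
    terms.update(mappings.get(term, ()))
    return terms
```

Under two-way expansion "garages" always pulls in "garage", which is exactly why stemming defeats distinct redirects for the two forms; a one-way thesaurus entry would not.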

  • Endeca Commerce-can't support Chinese search at MDEX 6.4.0 also ?

    Dear experts:
I am doing a PoC based on the sample Discover project. After specifying the language as Chinese, I can't start Forge. It seems the data source is not ready, but I really have loaded it into the MDEX and can read it from the web apps. The error log is below:
    C:\Endeca\app\Disvover\control>load_baseline_test_data.bat
    C:\Endeca\app\Disvover\control>baseline_update.bat
    [01.25.13 16:04:01] INFO: Checking definition from AppConfig.xml against existin
    g EAC provisioning.
    [01.25.13 16:04:02] INFO: Definition has not changed.
    [01.25.13 16:04:02] INFO: Starting baseline update script.
    [01.25.13 16:04:02] INFO: Acquired lock 'update_lock'.
    [01.25.13 16:04:02] INFO: [ITLHost] Starting shell utility 'move_-toprocessing
[01.25.13 16:04:03] INFO: [ITLHost] Starting copy utility 'fetch_config_to_input_for_forge_Forge'.
[01.25.13 16:04:04] SEVERE: Utility 'fetch_config_to_input_for_forge_Forge' failed. Refer to utility logs in [ENDECA_CONF]/logs/copy on host ITLHost.
Occurred while executing line 19 of valid BeanShell script:
16| LockManager.removeFlag("baseline_data_ready");
17|
18| // fetch config files to forge input
19| Forge.getConfig();
20|
21| // archive logs and run ITL
22| Forge.archiveLogDir();
[01.25.13 16:04:04] SEVERE: Caught an exception while invoking method 'run' on object 'BaselineUpdate'. Releasing locks.
Caused by java.lang.reflect.InvocationTargetException
sun.reflect.NativeMethodAccessorImpl invoke0 - null
Caused by com.endeca.soleng.eac.toolkit.exception.AppControlException
com.endeca.soleng.eac.toolkit.script.Script runBeanShellScript - Error executing valid BeanShell script.
Caused by com.endeca.soleng.eac.toolkit.exception.EacComponentControlException
com.endeca.soleng.eac.toolkit.utility.Utility run - Utility 'fetch_config_to_input_for_forge_Forge' failed. Refer to utility logs in [ENDECA_CONF]/logs/copy on host ITLHost.
[01.25.13 16:04:04] INFO: Released lock 'update_lock'.
What I have configured:
For the data source:
I changed the property 'product.long_desc' of the first record to Chinese and loaded it into the MDEX via baseline_update and promote_content (it can be read from the web application).
For Forge:
I specified the encoding of the source data as UTF-8 and set the per-property language ID for the property 'product.long_desc' in the Forge pipeline.
For Dgidx:
I specified the language in DataIngest.xml as below:
<dgidx id="Dgidx" host-id="ITLHost">
<properties>
<property name="numLogBackups" value="10" />
<property name="numIndexBackups" value="3" />
</properties>
<args>
    <arg>-v</arg>
    <arg>--compoundDimSearch</arg>
    <arg>--lang</arg>
    <arg>zh-CN</arg>
    </args>
    <log-dir>./logs/dgidxs/Dgidx</log-dir>
    <input-dir>./data/forge_output</input-dir>
    <output-dir>./data/dgidx_output</output-dir>
    <temp-dir>./data/temp</temp-dir>
    <run-aspell>true</run-aspell>
    </dgidx>
    </spr:beans>
and added the file Disvover.spell_config.xml with the content below:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE SPELL_CONFIG SYSTEM "spell_config.dtd">
    <SPELL_CONFIG>
    <SPELL_ENGINE>
    <DICT_PER_LANGUAGE>
         <ESPELL/>
         </DICT_PER_LANGUAGE>
    </SPELL_ENGINE>
    </SPELL_CONFIG>
    For the Dgraph:
    specify the language in DgraphDefaults.xml as below:
<dgraph-defaults>
<properties>
    <property name="srcIndexDir" value="./data/dgidx_output" />
    <property name="srcIndexHostId" value="ITLHost" />
    <property name="srcPartialsDir" value="./data/partials/forge_output" />
    <property name="srcPartialsHostId" value="ITLHost" />
    <property name="srcCumulativePartialsDir" value="./data/partials/cumulative_partials" />
    <property name="srcCumulativePartialsHostId" value="ITLHost" />
    <property name="srcDgraphConfigDir" value="./data/workbench/dgraph_config" />
    <property name="srcDgraphConfigHostId" value="ITLHost" />
    <property name="numLogBackups" value="10" />
    <property name="shutdownTimeout" value="30" />
    <property name="numIdleSecondsAfterStop" value="0" />
    </properties>
<directories>
    <directory name="localIndexDir">./data/dgraphs/local_dgraph_input</directory>
    <directory name="localCumulativePartialsDir">./data/dgraphs/local_cumulative_partials</directory>
    <directory name="localDgraphConfigDir">./data/dgraphs/local_dgraph_config</directory>
    </directories>
<args>
    <arg>--threads</arg>
    <arg>2</arg>
    <arg>--whymatch</arg>
    <arg>--spl</arg>
    <arg>--dym</arg>
    <arg>--dym_hthresh</arg>
    <arg>5</arg>
    <arg>--dym_nsug</arg>
    <arg>3</arg>
    <arg>--stat-abins</arg>
    <arg>--lang</arg>
    <arg>zh-CN</arg>
    </args>
    <startup-timeout>120</startup-timeout>
    </dgraph-defaults>
    </spr:beans>
Dear experts, thank you in advance. Any advice or solution for this issue will be greatly appreciated!
    Best regards
    Vicky

Hi Michael,
I can't find the log directory mentioned in the error info, '[ENDECA_CONF]/logs/copy'.
Below is my log directory; could you please give some advice?
    C:\Endeca\app\Disvover\logs>dir
    Volume in drive C has no label.
    Volume Serial Number is 4CC1-459C
    Directory of C:\Endeca\app\Disvover\logs
    01/26/2013 12:10 AM <DIR> .
    01/26/2013 12:10 AM <DIR> ..
    01/25/2013 02:28 PM <DIR> dgidxs
    01/25/2013 02:28 PM <DIR> dgraphs
    01/26/2013 12:10 AM 115,343 Disvover.0.0.log
    01/25/2013 02:28 PM <DIR> forges
    01/24/2013 03:02 PM <DIR> logservers
    01/26/2013 12:10 AM <DIR> logserver_output
    01/26/2013 12:10 AM <DIR> provisioned_scripts
    01/23/2013 06:18 PM <DIR> report_generators
    1 File(s) 115,343 bytes
    9 Dir(s) 87,275,294,720 bytes free
    Thanks you and Regards
    Vicky

  • Endeca commerce 3.1.1 can't support Chinese segmentation also?

    Dear experts:
I am doing a PoC based on the sample Discover project. After specifying the language as Chinese, I can only search Chinese content between punctuation marks. Take the long description of a product, for example:
    优质性能
    品质画面: 在QVGA分辨率(640×480像素,软件增强)捕捉清晰的视频和图像.
    品质声音: 使用耳机享受清晰,层次分明的谈话。无需购买额外的设备.
    便利的
    支持流行的即时消息应用程序: 是进行Skype™视频理想伴侣的.
    通用管理面板:: 轻松安装在任何类型的显示器或笔记本电脑.
I can only get this product by searching for the keyword '优质性能', not '优质'.
Any response will be much appreciated.
What I have configured:
For the data source:
I changed the property 'product.long_desc' of the first record to Chinese and loaded it into the MDEX via baseline_update and promote_content (it can be read from the web application).
For Forge:
I specified the encoding of the source data as UTF-8 and set the per-property language ID for the property 'product.long_desc' in the Forge pipeline.
For Dgidx:
I specified the language in DataIngest.xml as below:
<dgidx id="Dgidx" host-id="ITLHost">
<properties>
<property name="numLogBackups" value="10" />
<property name="numIndexBackups" value="3" />
</properties>
<args>
    <arg>-v</arg>
    <arg>--compoundDimSearch</arg>
    <arg>--lang</arg>
    <arg>zh-CN</arg>
    </args>
    <log-dir>./logs/dgidxs/Dgidx</log-dir>
    <input-dir>./data/forge_output</input-dir>
    <output-dir>./data/dgidx_output</output-dir>
    <temp-dir>./data/temp</temp-dir>
    <run-aspell>true</run-aspell>
    </dgidx>
    </spr:beans>
and added the file Disvover.spell_config.xml with the content below:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE SPELL_CONFIG SYSTEM "spell_config.dtd">
    <SPELL_CONFIG>
    <SPELL_ENGINE>
    <DICT_PER_LANGUAGE>
    <ESPELL/>
    </DICT_PER_LANGUAGE>
    </SPELL_ENGINE>
    </SPELL_CONFIG>
    For the Dgraph:
    specify the language in DgraphDefaults.xml as below:
<dgraph-defaults>
<properties>
    <property name="srcIndexDir" value="./data/dgidx_output" />
    <property name="srcIndexHostId" value="ITLHost" />
    <property name="srcPartialsDir" value="./data/partials/forge_output" />
    <property name="srcPartialsHostId" value="ITLHost" />
    <property name="srcCumulativePartialsDir" value="./data/partials/cumulative_partials" />
    <property name="srcCumulativePartialsHostId" value="ITLHost" />
    <property name="srcDgraphConfigDir" value="./data/workbench/dgraph_config" />
    <property name="srcDgraphConfigHostId" value="ITLHost" />
    <property name="numLogBackups" value="10" />
    <property name="shutdownTimeout" value="30" />
    <property name="numIdleSecondsAfterStop" value="0" />
    </properties>
<directories>
    <directory name="localIndexDir">./data/dgraphs/local_dgraph_input</directory>
    <directory name="localCumulativePartialsDir">./data/dgraphs/local_cumulative_partials</directory>
    <directory name="localDgraphConfigDir">./data/dgraphs/local_dgraph_config</directory>
    </directories>
<args>
    <arg>--threads</arg>
    <arg>2</arg>
    <arg>--whymatch</arg>
    <arg>--spl</arg>
    <arg>--dym</arg>
    <arg>--dym_hthresh</arg>
    <arg>5</arg>
    <arg>--dym_nsug</arg>
    <arg>3</arg>
    <arg>--stat-abins</arg>
    <arg>--lang</arg>
    <arg>zh-CN</arg>
    </args>
    <startup-timeout>120</startup-timeout>
    </dgraph-defaults>
    </spr:beans>
    Best Regards
    Vicky

    Hi
    I can't see anything wrong with your configuration - I'd recommend you raise an SR at support.oracle.com for this.
    Michael

  • Keyword re-directs in a single endeca instance for different application

    Hi all,
We maintain a single Endeca application for two different websites. Now we have a requirement to set up keyword redirects for both, but the redirects set up in one application should not clash with the other. I tried setting them up in two different groups using Dev Studio, but I see no difference in the JSP reference application. For example, I set up a redirect for the word "Shampoo" in both sites, but at any time only one redirect is seen.
Please advise.
    Please advise.
    Regards
    Shreyas

    Hi Shreyas,
    Keyword redirects are global to an index.
You can try the following workarounds:
Option 1: I guess you are using Experience Manager; in that case, you can create a custom cartridge for the keyword redirects and play around with triggers to make it application-aware. This option is good if the number of keyword redirects is small.
Option 2: With the existing keyword redirects you can do something like the following, which is a dirty workaround: see if you can include the site name in the user's search term. For example, say you have site1 and site2 as two sites. To add the term "google" for site1 but not for site2, you would add it as "site1 google" with match-all or match-phrase. From the front end you would then query twice: first with "google" as the search term and then with "site1 google". You should get the site-specific keyword redirect only from the second query. The additional query may not be good for production environments, so this option should not be used with large volumes of data or traffic.
    Hope that makes sense!
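Option 2 above amounts to a two-pass lookup from the front end. A minimal sketch, where query_redirect stands in for whatever call your application uses to ask the engine for a redirect (a hypothetical callable, not an Endeca API):

```python
def site_aware_redirect(query_redirect, site, term):
    """Two-pass redirect lookup for an index shared by multiple sites.

    query_redirect -- callable taking a search term and returning a
                      redirect URL or None (hypothetical engine call)
    site           -- site prefix used when the redirect was authored,
                      e.g. "site1"
    term           -- the user's raw search term
    """
    # Pass 1: a plain lookup catches redirects shared by all sites.
    shared = query_redirect(term)
    # Pass 2: the site-prefixed lookup catches site-specific redirects,
    # which win over shared ones.
    specific = query_redirect(f"{site} {term}")
    return specific or shared
```

As the reply notes, the second round trip per search is the cost of this workaround.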

  • Promote Endeca Configurations from stgaing to Production

How do we promote the search configuration XMLs (rules, keyword redirects, thesaurus, etc.) from staging to production? emgr_update doesn't seem to help in version 3.1.2; one of the documents says it is no longer public.
Basically, we want behaviour similar to 'get_ws_settings' in Oracle Endeca Commerce, Tools and Frameworks 3.1.2 (the Workbench version, without Experience Manager).
    Please help. Thanks

Have you looked at the Configuration Migration Utility? It allows you to create a zip file that packages the workflow definition so you can move it from content server to content server.

  • Installing and running Endeca on Crunchbang (#!)

    Greetings-
I am trying to install and run Endeca on my Crunchbang system [see below for distro details].
I finished installing the packages and deploying a sample application from the deployment template successfully; however, initialize_services.sh fails.
    Please help me if you can figure out where I might be going wrong.
    The following are the steps I followed:
    I downloaded the following packages from this link https://edelivery.oracle.com/EPD/Download/get_form?egroup_aru_number=16289751
    V33316-01 Oracle Endeca Platform Services 6.1.3 for Linux x86-64
    V37711-01 Oracle Endeca Content Acquisition System 3.1.2 for Linux x86-64
    V37714-01 Oracle Endeca MDEX Engine 6.4.1 for Linux x86-64
    V37716-01 Oracle Endeca Tools and Frameworks with Experience Manager 3.1.2 for Linux x86-64
The Oracle Endeca Commerce Compatibility Matrix (January 17, 2013) does not cover the above versions.
    I ran the installation scripts and used default settings, as advised, when prompted.
    All package installation is in "/usr/local/endeca/" which has four subdirectories viz.
    CAS
    MDEX
    PlatformServices
    ToolsAndFramework
    $ENDECA_ROOT is /usr/local/endeca/PlatformServices/6.1.3
I tried to install the sample application from the deployment template using deploy.sh:
/usr/local/endeca/ToolsAndFramework/3.1.2/deployment_template/bin/deploy.sh
The installation succeeded with defaults, and the application was deployed in "/home/my_username/endeca/apps/".
    The initialize_services script gives the following error:
    my_username@crunchbang:~/endeca/apps/MyApp2/control$ ./initialize_services.sh
    ./initialize_services.sh: 11: [: unexpected operator
    Setting EAC provisioning and performing initial setup...
    [07.13.13 17:09:39] INFO: Checking definition from AppConfig.xml against existing EAC provisioning.
    [07.13.13 17:09:39] INFO: Setting definition for application 'MyApp2'.
    [07.13.13 17:09:39] SEVERE: Caught an exception while checking provisioning.
    Caused by com.endeca.soleng.eac.toolkit.exception.EacCommunicationException
    com.endeca.soleng.eac.toolkit.application.Application setDefinition - Caught exception while defining application 'MyApp2'.
    Caused by com.endeca.eac.client.EACFault
    sun.reflect.NativeConstructorAccessorImpl newInstance0 - null
In addition to the above:
http://crunchbang:8006/ifcr/admin.html
works for both "crunchbang" and "localhost", i.e. it takes me to the Workbench with "User Management" and "EAC Connection Settings" showing.
http://localhost:8006/endeca_jspref/controller.jsp
however returns the following error on entering "localhost", "15000":
    status >> invalid ENE location
    ENEConnectionException
    com.endeca.navigation.ENEConnectionException: Error establishing connection to retrieve Navigation Engine request 'http://localhost:15000/graph?node=0&offset=0&nbins=10&irversion=640'. Connection refused
    Linux Distro
    $ cat /etc/*-release
    PRETTY_NAME="Debian GNU/Linux 7 (wheezy)"
    NAME="Debian GNU/Linux"
    VERSION_ID="7"
    VERSION="7 (wheezy)"
    ID=debian
    ANSI_COLOR="1;31"
    HOME_URL="http://www.debian.org/"
    SUPPORT_URL="http://www.debian.org/support/"
    BUG_REPORT_URL="http://bugs.debian.org/"

Does anyone know of a good telephone support firm that could help me get everything working, including MySQL? It might be worth it.
    Thanks

  • Upgrading from Endeca IAP 2.1.0 to Tools and Frameworks 3.1.1

    Hi,
We are using Merchandising Workbench 2.1.0 and planning to migrate to Tools and Frameworks 3.1.1.
The Tools and Frameworks migration document says this migration is not supported.
Has anybody tried this migration?
Please help.
    Regards,
    Kishore.

    Hi Kishore,
I have done a couple of upgrades. First, I suggest you upgrade to 3.1.2 instead of 3.1.1: 3.1.2 fixes a few issues, didn't impact APIs (and thus any coding efforts underway), and also includes sample migration scripts for Page Builder/Experience Manager; 3.1.1 did not have those.
These scripts are samples only, as every deployment can be very different and of varying complexity, so treat them as your starting point. You mention IAP - that probably means you have only Rule Manager and not Experience Manager, so this part won't be as interesting to you.
Next, both 3.1.1 and 3.1.2 include out-of-the-box migration scripts for Rule Manager rules, the thesaurus, and keyword redirects. This script has worked well in my opinion. There are a few gotchas you will encounter - the cartridge templates it creates are not always clean, and they may not model the rules the way you would like.
Yes, they don't explicitly support the 2.1.0 to 3.1.1 migration, so you need to upgrade to 2.1.2 first. This is trivial, and I would just include this step in your overall migration plan. Most likely you will be upgrading Platform Services at the same time, and you have to upgrade MDEX (you are probably running 6.2.2 or lower). Upgrading MDEX alone gave us 10% faster query responses when running in backwards-compatibility mode, so we upgraded both of those components almost immediately and then worked on the Workbench migration.
There are lots of details to work through for this migration, even if you are not using Rule Manager or Page Builder/Experience Manager.

  • After installing Endeca Extension Error BEA-000386 BEA-090402

Hi,
After installing the Endeca Extensions according to the note "Installing Oracle E-Business Suite Extensions for Oracle Endeca, Release 12.1 V5" (Doc ID 1683053.1), Option 5 (Studio Managed Server) fails from startAllEndeca.sh with errors BEA-000386 and BEA-090402.
I've modified the three boot.properties files under /u01/Oracle/Middleware/user_projects/domains/endeca_domain, specifying username=endeca and password=welcome123, but it is still failing with the same error.
Any idea? Anything else that must be changed?
Thanks in advance,
Laura

    Hi Laura,
This forum is dedicated to Oracle Endeca Commerce. Please see the following post regarding the correct forum for Oracle Endeca Information Discovery questions. It looks like your question was mistakenly moved here by another member because the products share the same "Endeca" term.
    https://community.oracle.com/message/12877494#12877494
    Thanks,
    Alex

  • Endeca returns only 15 records, while TotalNumAggrERecs = 20

    Hi,
If I execute an Endeca query with a particular N value in C#, the Navigation object of the ENEQueryResults contains TotalNumAggrERecs = 20 but AggrERecs.Count = 15. How is it possible that Endeca returns a list of only 15 aggregated records when the total number of aggregated records is 20?
    According to the Endeca Basic Development Guide, you should use
    • the Navigation.TotalNumAggrERecs to get the number of aggregated records that matched the navigation query
    • the Navigation.AggrERecs to retrieve a list of aggregated records returned by the navigation query
    I would therefore assume that the number of AggrERecs would match the TotalNumAggrERecs.
    Extra info:
• other queries (the same code, but another N value) return results where result.Navigation.TotalNumAggrERecs and result.Navigation.AggrERecs.Count match. Also, a screenshot (not reproduced here) shows an exception in SyncRoot. I thought this exception caused only 15 records to be returned instead of 20, but it also occurs on queries where Navigation.TotalNumAggrERecs and Navigation.AggrERecs.Count are the same.
• the same query in the "6.1 Oracle Endeca - JSP Reference Implementation" tool returns 20 aggregated records.
• So far, my conclusion is that there are 20 records but Endeca returns only the first 15. I have, however, no clue how to solve this. Suggestions on how to investigate this issue are very welcome!
    Regards, Leonard

    This question is on the wrong forum - this forum is for the Endeca Information Discovery (EID) product - you are working with the Endeca Commerce product APIs.  See Technical Questions .

  • Initial Service script in Endeca Application

    Hi All,
After creating the Endeca application, there is a step that says to run initialize_services.sh. Its stated purpose is quoted below, but where can I see these files? Where are they located? Can you please tell me the exact use of this step?
    The initialize_services script creates the Record Store instances for product data,
    dimension values, precedence rules, and schema information with the names below. The Record
    Store instances are prefixed with the application name and language code specified in the
    deployment descriptor file. In this case, the application name is Discover and the language
    code is en:
     Discover_en_schema
     Discover_en_dimvals
     Discover_en_prules
     Discover_en_data

    Hi Bravo,
Creation of the CAS record store instances is done in C:\Endeca\ToolsAndFrameworks\3.1.0\reference\discover-data-pci\control\initialize_rs_feeds.bat, which is invoked by initialize_services.bat. This step is available when you create an Endeca application using discover-data-pci, which is mainly used for Product Catalog Integration with ATG. It creates four empty record stores (data, dimvals, schema, and precedence rules) in CAS.
If you are using ATG 10.1.1 or above to integrate with Endeca Commerce 3.1.0 or above, there are components in ATG, such as DataDocumentSubmitter, which populate the data into these record stores and then invoke the EndecaScriptsService to trigger the scripts that perform a baseline update. As part of the baseline update, Forge takes these CAS record stores as input and indexes the data.
    Thanks,
    Shabarinath Kande

  • Endeca cross domain problem

We are currently using Endeca Commerce 3.1 in a distributed environment, with MDEX installed on one server and Experience Manager on another, but both on the same domain.
When we use the RecordSpotlight cartridge in Experience Manager, the record-selection radio buttons are greyed out.
We followed the instructions under "Setting up a cross-domain policy file" in Appendix C of the "Tools and Frameworks Installation Guide" and added the following crossdomain.xml file under <MDEX_install_dir>/6.4.0/conf/dtd/xform, but the problem persists. We have also set the value of permitted-cross-domain-policies in crossdomain.xml to "none", "master-only", and "all"; none of them solved the problem.
    <?xml version="1.0"?>
    <!DOCTYPE cross-domain-policy SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
    <cross-domain-policy>
    <site-control permitted-cross-domain-policies="all"/>
    <allow-access-from domain="*.mydomain.com" />
    <allow-access-from domain="*.mydomain.ca" />
    <allow-http-request-headers-from domain="*" headers="SOAPAction"/>
    </cross-domain-policy>
    Could you please advise on how to enable those record selection radio buttons?
    Sincerely,
    Alex Luc

It looks like I've solved it. The parent domain was a sub-domain, and I was leaving off the "http://" for the allowDomain argument. Once I added it, the SWFs work.
