ATG/Endeca: RefinementMenu and Dimension Groups

Hi,
When I query the Assembler and get back the refinements, I don't get the dimension group they fall into.
Is there a way in ATG to configure the Assembler to return the DimGroups? I see the DimGroups in the jspref reference app, but not in the Assembler JSON or XML response.
Thanks for any suggestion.
Luc

This might explain why my RefinementMenuHandler is never called. I turned on debug logging on the NucleusAssemblerFactory component and I get this in the log:
2013-04-09 09:06:26,790 INFO [atg/endeca/assembler/NucleusAssemblerFactory] (Request Processor - requestURI=/site/guidedsearch) DEBUG Returning handler from defaultHandlerPath: /atg/endeca/assembler/cartridge/handler/ContentInclude
2013-04-09 09:06:26,797 INFO [atg/endeca/assembler/NucleusAssemblerFactory] (Request Processor - requestURI=/site/guidedsearch) DEBUG No handler found for cartridge type GuidedSearchService
2013-04-09 09:10:18,736 INFO [atg/endeca/assembler/NucleusAssemblerFactory] (Request Processor - requestURI=/site/guidedsearch) DEBUG Returning handler from defaultHandlerPath: /atg/endeca/assembler/cartridge/handler/NavigationContainer
2013-04-09 09:06:26,828 INFO [atg/endeca/assembler/NucleusAssemblerFactory] (Request Processor - requestURI=/site/guidedsearch) DEBUG Returning handler from defaultHandlerPath: /atg/endeca/assembler/cartridge/handler/Breadcrumbs
2013-04-09 09:06:26,847 INFO [atg/endeca/assembler/NucleusAssemblerFactory] (Request Processor - requestURI=/site/guidedsearch) DEBUG Returning handler from defaultHandlerPath: /atg/endeca/assembler/cartridge/handler/ResultsList
2013-04-09 09:06:26,862 INFO [atg/endeca/assembler/NucleusAssemblerFactory] (Request Processor - requestURI=/site/guidedsearch) DEBUG Returning handler from defaultHandlerPath: /atg/endeca/assembler/cartridge/handler/SearchAdjustments
2013-04-09 09:06:26,871 INFO [atg/endeca/assembler/NucleusAssemblerFactory] (Request Processor - requestURI=/site/guidedsearch) DEBUG Returning handler from defaultHandlerPath: /atg/endeca/assembler/cartridge/handler/DimensionSearchResults
2013-04-09 09:06:26,874 INFO [atg/endeca/assembler/NucleusAssemblerFactory] (Request Processor - requestURI=/site/guidedsearch) DEBUG No handler found for cartridge type ContentSlotList
There is no mention of the RefinementMenu. I even tried to force it through the handlerMapping property, but without success.
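For reference, here is roughly what I tried (a local sketch only; the component path, class name, and map syntax are my assumptions based on the OOTB handler paths visible in the log above):

# /atg/endeca/assembler/cartridge/handler/RefinementMenu.properties (sketch)
$class=com.endeca.infront.cartridge.RefinementMenuHandler
# scope and the MDEX/navigation wiring are omitted here; I copied them from the OOTB Breadcrumbs handler component

# NucleusAssemblerFactory.properties override (sketch)
handlerMapping+=RefinementMenu=/atg/endeca/assembler/cartridge/handler/RefinementMenu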
Could this be because I am using the GuidedSearch Framework instead of Experience Manager?
Thanks,
Luc

Similar Messages

  • How to create Dimension group in ATG-endeca ecommerce application

    We have a requirement where we have to specify dimensions and their schema from the BCC in ATG. The problem is that we want to create dimension groups, and we don't see anywhere in the documentation how to create them. Since these dimensions are created through FCM, we cannot see them in the /config/pipeline directory, only in the processing directory, so we don't see any means to create a dimension group through Dev Studio either. Please tell us how we can do it.

    We have to do it through the EP interface using the BP TMS business package:
    1. Connect EP to the SAP system.
    2. Configure all the iViews of the TMS business package so it can communicate with the SAP system (change only the SAP system name for all iViews).
    3. Assign the roles to the EP user.
    4. Run the application using TMS.

  • Creating hierarchical dimension for products in ATG - Endeca integration

    Hi,
    I need to generate hierarchical dimensions down to the product level; the OOB feature creates the dimension only down to the category level.
    If anyone has worked on this or has an idea, please provide the steps to be followed and a sample category-dim-output-config.xml to generate a hierarchical dimension down to products.
    Regards,
    Sudheer

  • Ordering Dimensions in ATG-Endeca

    Hi All,
    I have a running ATG-Endeca instance (10.1.1 / 3.1.0) and was wondering if anyone has successfully managed to order the dimensions from inside of ATG?
    Rex..

    I believe it's possible to explicitly order non-autogenerated dimension values by providing a dimval.display_order value in the dimval records sent to the CAS dimval record store. In ATG 10.1.2, the CategoryToDimensionOutputConfig does so (which is how the category dimension ends up displayed in the manually configured order in CRS).
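    For illustration, a dimension value record submitted to that record store carries properties roughly like these (property names from the standard Endeca dimval schema; the dimension name and values are made up):

        dimval.dimension_name = product.category
        dimval.spec           = cat10016
        dimval.display_name   = Shoes
        dimval.display_order  = 10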

  • ATG with Endeca SEO and Sitemap

    Hi,
    We are working on an ATG 10.2 integration with Endeca 3.1.2. We would be grateful if someone could provide the pros and cons of using Endeca SEO and Sitemap for this requirement. ATG out of the box does not seem to provide any API for integrating Endeca SEO, so we would need to recreate the whole Endeca SEO framework in ATG. Please provide inputs. We are using the ATG multisite feature, where Endeca has a language-based MDEX for each country.

    This has been done on a few projects, but if you wait for the upcoming version 11 releases of ATG and Endeca (probably in a week or two), you'll see that CRS now includes extensive use of the Endeca SEO modules.

  • Reg: Fact table and Dimension table in Data Warehousing

    Hi Experts,
    I'm not exactly getting the difference between the criteria which decide how to create a Fact table and Dimension table.
    This link http://stackoverflow.com/questions/9362854/database-fact-table-and-dimension-table states :
    Fact table contains data that can be aggregated.
    Measures are aggregated data expressions (e.g. Sum of costs, Count of calls, ...).
    Dimension contains data that is used to generate groups and filters.
    This is fine, but how does one decide which columns to consider for the Fact table and which columns for the Dimension table?
    Any help is much appreciated.
    Pardon me if this is not the correct place for this question. It is my first question in the new forum.
    Thanks and Regards,
    Ranit Biswas

    ranitB wrote:
    But my main doubt was - what is the criteria to differentiate between columns for Fact tables and Dimension tables? How can one decide upon the design?
    Columns of a fact table will often be 'scalar' attributes of the 'fact' data item. A dimension table will often be 'compound' attributes of a 'fact'.
    Consider employee information. The EMPLOYEE table can be a fact table. It might have scalar attribute columns such as: DATE_HIRED, STATUS, EMPLOYEE_ID, and so on.
    Other related information that can't be specified as a single attribute value would often be stored in a 'dimension' table: ADDRESS, PHONE_NUMBER.
    Each address requires several columns to define it: ADDRESS1, ADDRESS2, CITY, STATE, ZIP, COUNTRY. And an employee might have several addresses: WORK_ADDRESS, HOME_ADDRESS. That address info would be stored in a 'dimension' table and only the primary key value of the address record would be stored in the EMPLOYEE 'fact' table.
    Same with PHONE_NUMBER. Several columns are required to define a phone number and each employee might have several of them. The dimension tables are used to help 'normalize' the data in the employee 'fact' table.
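    As a rough sketch of that layout (generic SQL; the table and column names are invented for illustration):

        -- 'dimension' table holding the compound address attributes
        CREATE TABLE address_dim (
          address_id INT PRIMARY KEY,
          address1   VARCHAR(100),
          address2   VARCHAR(100),
          city       VARCHAR(50),
          state      VARCHAR(20),
          zip        VARCHAR(10),
          country    VARCHAR(30)
        );

        -- 'fact' table keeping the scalar attributes plus keys into the dimension table
        CREATE TABLE employee_fact (
          employee_id     INT PRIMARY KEY,
          date_hired      DATE,
          status          VARCHAR(10),
          home_address_id INT REFERENCES address_dim(address_id),
          work_address_id INT REFERENCES address_dim(address_id)
        );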
    And that EMPLOYEE table might also be a DIMENSION table for other FACT tables. A DEVELOPER table might have an EMPLOYEE_ID column with a value that points to a 'dimension' row in the EMPLOYEE dimension table.

  • StoreFront : Payment and Shipping group relationships are missing for some orders

    Hi Team,
    In our application, we are able to see the relationship between the payment group and the shipping group for some orders, but we are not able to find these relationships for other orders.
    We are verifying in the "dcspp_payship_rel" table, and we are wondering why this behavior happens only for some orders.
    Could you please suggest how to proceed?
    Regards,
    Babji...

    Hello.
    First of all, you must ensure that you are properly using transactions when you create or update orders.
    There are best practices for updating orders in ATG that must be followed to avoid loss of information.
    Follow the steps below (a rough Java sketch follows the list):
    Acquire a write lock using the ATG lock manager.
    Start the transaction.
    Synchronize on the Order object.
    Update the Order.
    End the synchronization.
    End the transaction.
    Release the lock.
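    A minimal Java sketch of those steps (illustrative only; in a real component the OrderManager, TransactionManager, and the order lock manager would be injected Nucleus properties, and you would add the proper exception handling described in the links below):

        import javax.transaction.TransactionManager;
        import atg.commerce.order.Order;
        import atg.commerce.order.OrderManager;
        import atg.dtm.TransactionDemarcation;
        import atg.service.lockmanager.ClientLockManager;

        public class SafeOrderUpdate {
          public void updateOrderSafely(Order pOrder, OrderManager pOrderManager,
                                        TransactionManager pTxManager,
                                        ClientLockManager pLockManager) throws Exception {
            String lockName = pOrder.getProfileId();
            pLockManager.acquireWriteLock(lockName);                  // 1. acquire the write lock
            try {
              TransactionDemarcation td = new TransactionDemarcation();
              td.begin(pTxManager, TransactionDemarcation.REQUIRED);  // 2. start the transaction
              try {
                synchronized (pOrder) {                               // 3. synchronize on the Order
                  // 4. modify the order here (payment/shipping groups, relationships, ...)
                  pOrderManager.updateOrder(pOrder);
                }                                                     // 5. end the synchronization
              } catch (Exception e) {
                pTxManager.setRollbackOnly();                         // mark for rollback on failure
                throw e;
              } finally {
                td.end();                                             // 6. end the transaction
              }
            } finally {
              pLockManager.releaseWriteLock(lockName);                // 7. release the lock
            }
          }
        }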
    Here are some links that should help you understand the steps to be followed to make a correct update of an order:
    https://atgoasis.wordpress.com/2014/08/28/best-practices-for-updating-an-order-in-atg-commerce-applications/
    http://www.digitalsanctuary.com/tech-blog/java/atg/design-pattern-for-updating-an-atg-order.html
    http://sumangalavijay.blogspot.com.br/2011/10/atg-update-order.html
    Oracle ATG Web Commerce - Managing Transactions in Oracle ATG Web Commerce
    Hope it helps you! =)

  • How to create index for articles&products in ATG+Endeca application?

    Hi.
    I'm a novice in Endeca.
    How do I create an ATG+Endeca application that supports search over both articles (created in Experience Manager) and products (from the ATG BCC)?
    And how do I index products and articles together?
    Where can I read about it?
    Thanks.

    Are you planning to create articles in ExpManager OR the content will come from some other system and you just want to manage the pages to display the content in Exp Manager?
    On a high level:
    If it is case #1 you have to integrate ATG & Endeca and write some cartridges for managing the content.
    If it is case #2 you have to integrate ATG & Endeca and modify the auto generated pipeline to pull in your content from the other system.
    Pankaj.

  • Table as fact and dimension

    Hi,
    Can one table act as a fact in one subject area and act as a dimension in another subject area? Thanks.

    Hi
    I confirm Stijn Gabriels' post.
    You don't have to create an alias on your physical table; otherwise the request will generate an alias in SQL for nothing! However, in your logical layer, you will create 2 logical tables: one for the fact, one for the dimension. Both of them will have the same source: your unique physical table.
    Let's take an example: suppose you have only 2 tables in your data warehouse: 1 fact table with degenerate dimension attributes (so a table with both fact and dimension data), which we'll call "revenue", and 1 dimension table for... "Time", for example. We'll call it "Time".
    Your conceptual model (on paper) is a star schema with 1 fact table (revenue_fact) and 2 dimension tables (time and revenue_carac).
    In your OBIEE physical layer :
    - you import the 2 tables "revenue" and "time" from your database.
    - you link "revenue" with "time"
    In your OBIEE logical layer :
    - you create a logical table called "Dim Time", based on the "Time" physical table and you do what you want with it (hierarchy...)
    - you create a logical table called "Dim Revenue Carac", based on the "revenue" physical table, and you do what you want with attributes
    - you create a logical table called "Fact revenue", based on the "revenue" physical table, and you do you what you want with measures and aggregation
    - you link the 2 logical dimension table with the logical fact table
    And that's all. Now, let's see which kind of SQL OBIEE will generate if you want to display the measure "revenue" with the attribute "revenue_carac" and the attribute "year".
    Select Sum(R.revenue_measure) , R.revenue_carac , T.year
    From revenue R , time T
    Where R.time_id = T.id
    Group by R.revenue_carac , T.year
    If you set an alias in your physical layer, the request will be this (and you don't want it):
    Select Sum(R1.revenue_measure) , R2.revenue_carac , T.year
    From revenue R1, revenue R2 , time T
    Where R1.time_id = T.id
    And R1.id = R2.id
    Group by R2.revenue_carac , T.year
    Same results, but with a useless join between two copies of the same physical table.

  • Fact and dimension table partition

    My team is implementing a new data warehouse. I would like to know when we should plan the partitioning of fact and dimension tables: before the data comes in or after?

    Hi,
    It is recommended to partition the Fact table (where we will have huge data volumes). Automate the partitioning so that each day a new partition is created to hold the latest data (splitting the previous partition into two). Best practice is to partition on transaction
    timestamps: load the incremental data into an empty staging table (Table_IN) and then switch that data into the main table (Table). Make sure both tables (Table and Table_IN) are on the same filegroup.
    Refer to the content below for detailed info.
    Designing and Administrating Partitions in SQL Server 2012
    A popular method of better managing large and active tables and indexes is the use of partitioning. Partitioning is a feature for segregating I/O workload within
    SQL Server database so that I/O can be better balanced against available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval
    and management is much quicker because only subsets of the data are used, meanwhile ensuring that the integrity of the database as a whole remains intact.
    Tip
    Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling
    lock escalation to a single partition, rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it with either the Lock Escalation option of Database Options page in SSMS or by using the LOCK_ESCALATION option
    of the ALTER TABLE statement.
    After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical
    scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance,
    and moving historical data from the active portion of a table to a partition with less activity.
    Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these
    steps:
    1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
    2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
    3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
    4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
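    A brief Transact-SQL sketch of the steps above (the partition function name, filegroups, and boundary dates are hypothetical; only Date_Range_PartScheme1 reappears in the example later in this section):

        -- Step 1: filegroups/files are added beforehand with ALTER DATABASE ... ADD FILEGROUP / ADD FILE.
        -- Step 2: a RANGE RIGHT partition function on a date column, one boundary per quarter.
        CREATE PARTITION FUNCTION Date_Range_PartFn1 (datetime)
            AS RANGE RIGHT FOR VALUES ('2012-01-01', '2012-04-01', '2012-07-01', '2012-10-01');
        -- Step 3: a partition scheme mapping the five resulting partitions to filegroups.
        CREATE PARTITION SCHEME Date_Range_PartScheme1
            AS PARTITION Date_Range_PartFn1
            TO (fg_old, fg_2012q1, fg_2012q2, fg_2012q3, fg_2012q4);
        -- Step 4 (creating the table ON the scheme) is shown in the example later in this section.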
    Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through
    an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
    Leveraging the Create Partition Wizard to Create Table and Index Partitions
    The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance and can be invoked by right-clicking
    any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify which columns to partition by reviewing all the columns available in the Available Partitioning Columns section located on the Select a Partitioning Column
    dialog box, as displayed in Figure 3.13. This screen also includes additional options such as the following:
    Figure 3.13. Selecting a partitioning column.
    The next screen is called Select a Partition Function. This page is used for specifying the partition function by which the data will be partitioned. The options
    include using an existing partition function or creating a new one. The subsequent page is called New Partition Scheme. Here a DBA will map the partitions of the table being partitioned to the desired filegroups, either by
    using an existing partition scheme or by creating a new one. The final screen is used for doing the actual mapping. On the Map Partitions page, specify the filegroups to be used for each partition and then enter a range for the values of the partitions. The
    ranges and settings on the grid include the following:
    Note
    By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partition everything in a column after a specific
    date). The data types are based on dates.
    Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding
    of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
    Enhancements to Partitioning in SQL Server 2012
    SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at
    least 16Gb of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language statements (DML) and Data
    Definition Language statements (DDL) may also run short of memory when processing on a large number of partitions.
    Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition
    level and, if so, can be used to perform their function on a subset of data in the partitioned table.
    Queries may also benefit from a new query engine enhancement called partition elimination. SQL Server uses partition elimination automatically if it is available.
    Here's how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query's WHERE clause
    filters on customer name looking for a name starting with 'S', the query engine knows that it needs only partition three to answer
    the request. Thus, it might greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query is not able to perform partition elimination.
    Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012
    samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions of SQL Server.
    Administrating Data Using Partition Switching
    Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When
    a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered
    index needs to be rebuilt from time to time to reestablish its fill factor setting.)
    Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered.
    You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH
    Transact-SQL statement. Both options enable you to ensure partitions are
    well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or combine partitions together. Because the ALTER TABLE...SWITCH statement
    does not actually move the data, a few prerequisites must be in place:
    • Partitions must use the same column when switching between two partitions.
    • The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes,
    index partitions, and indexed view partitions.
    • The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table
    or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
    • The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes
    (length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for ANSI_NULLS and QUOTED_IDENTIFIER properties.
    Clustered and nonclustered indexes must be identical. ROWGUID properties
    and XML schemas must match. Finally, settings for in-row data storage must also be the same.
    • The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT
    NULL are supported, NOT
    NULL is strongly recommended.
    Likewise, the ALTER TABLE...SWITCH statement
    will not work under certain circumstances:
    • Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints
    are allowed).
    • Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats.
    Triggers are allowed on tables but must not fire during the switch.
    • Indexes on the source and target table must reside on the same partition as the tables themselves.
    • Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to
    the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
    • Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source
    table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
    In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning
    and extra work will be required to even make partition switching possible, let alone efficient.
    Here’s an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1.
    We then create a new, nonpartitioned table identical to the partitioned table residing on the same filegroup. We finish up switching the data from the partitioned table into the nonpartitioned table:
    CREATE TABLE TransactionHistory_Partn1 (Xn_Hst_ID int, Xn_Type char(10))
        ON Date_Range_PartScheme1 (Xn_Hst_ID);
    GO
    CREATE TABLE TransactionHistory_No_Partn (Xn_Hst_ID int, Xn_Type char(10))
        ON main_filegroup;
    GO
    ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
    GO
    The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding
    window partition.
    Example and Best Practices for Managing Sliding Window Partitions
    Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that
    the TransactionHistory table is very active as sales transactions are first entered and are still very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we’d like to automatically group
    transactions into four partitions per year, basically containing one quarter of the year’s data each, in a rolling partitioning. Any transaction older than one year will be purged or archived.
    The answer to a scenario like the preceding one is called a sliding window partition because
    we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
    1. How data is handled varies according to the choice of LEFT or RIGHT partition function window:
    • With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6- to 9-months old (Q3), partition3 holds data that is 3- to 6-months old (Q2), and partition4 holds recent data less than 3-months old.
    • With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1 holds recent data.
    • Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5) of the partition range.
    • RIGHT range functions usually make more sense to most people because it is natural to start ranges at their lowest value and work upward from there.
    2. Assuming that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement to split empty partition5 into two empty partitions, 5 and 6.
    3. We use the SWITCH subclause of ALTER TABLE to switch out partition4 to a staging table for archiving or simply to drop and purge the data. Partition4 is now empty.
    4. We can then use MERGE to combine the empty partitions 4 and 5, so that we're back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
    5. We can use SWITCH to push the new quarter's data into the spot of partition1. (A rough Transact-SQL sketch of steps 2 through 5 follows.)
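    A rough Transact-SQL sketch of that cycle (the partition function name Date_Range_PartFn1, the staging table names, the boundary dates, and the partition numbers are hypothetical; only the table and scheme names come from the earlier example):

        -- 2. Split the empty rightmost partition so an empty one exists for the next quarter.
        ALTER PARTITION SCHEME Date_Range_PartScheme1 NEXT USED [PRIMARY];
        ALTER PARTITION FUNCTION Date_Range_PartFn1() SPLIT RANGE ('2013-01-01');

        -- 3. Switch the oldest populated partition out to an empty staging table
        --    (on the same filegroup) for archiving or purging.
        ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 2 TO TransactionHistory_Archive_Stage;

        -- 4. Merge away the now-empty boundary so the partition count stays constant.
        ALTER PARTITION FUNCTION Date_Range_PartFn1() MERGE RANGE ('2012-01-01');

        -- 5. Switch the new quarter's data, loaded into an empty staging table, into the new partition.
        ALTER TABLE TransactionHistory_Load_Stage SWITCH TO TransactionHistory_Partn1 PARTITION 5;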
    Tip
    Use the $PARTITION system
    function to determine where a partition function places values within a range of partitions.
    Some best practices to consider for using a sliding window partition include the following:
    • Load newest data into a heap, and then add indexes after the load is finished. Delete oldest data or, when working with very large
    data sets, drop the partition with the oldest data.
    • Keep an empty staging partition at the leftmost and rightmost ends of the partition range to ensure that the partition splits when
    loading in new data, and the merges after unloading old data, do not cause data movement.
    • Do not split or merge a partition already populated with data because this can cause severe locking and explosive log growth.
    • Create the load staging table in the same filegroup as the partition you are loading.
    • Create the unload staging table in the same filegroup as the partition you are deleting.
    • Don’t load a partition until its range boundary is met. For example, don’t create and load a partition meant to hold data that is
    one to two months old before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
    • Unload one partition at a time.
    • The ALTER TABLE...SWITCH statement
    issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
    Thanks Shiven:) If Answer is Helpful, Please Vote

  • How ATG,Endeca,Webcenter sites are integrated in eCommerce?

    Hi all, I just began studying ATG and don't know it very well yet. My company uses Oracle products like ATG, Endeca, and WebCenter Sites, but I don't understand:
    1. How they are integrated together.
    2. What their respective roles are in the whole eCommerce development.
    3. How they collaborate with each other.
    I would very much appreciate it if you could resolve my confusion.
    Sorry about my poor English.

    Hi Sarah
    What version of ATG is your company using?
    We can then point you to the right material.
    For example, prior to ATG 10.1.1 there was no product integration between ATG and Endeca; it would have been a custom integration.
    Thanks
    Gareth

  • Time and Dimension Consolidation

    Hi All
    Can someone help explain Time and Dimension Consolidation in the BICA link under Create Scorecard Metric?
    How does this setting impact target calculation? How do the Enter and None options work under dimension consolidation, and what options are available under time consolidation?
    Help is highly appreciated.
    Regards
    Nagesh

    Nagesh,
    There are some significant differences between how Time and Dimensions are handled in SSM.
    Time: InfoCubes have time characteristics for each time period. SSM has no characteristic for Time, which gives it greater flexibility to use different time settings. In BICA you are taking the most granular time characteristic to bring into SSM. Then in SSM you can aggregate or divide the measure value into any other time periodicity.
    Dimensions: Key Figures in InfoCubes are dimensioned by every characteristic in the Dimension Tables. In SSM, measures can have different dimensionality or, in some cases, no dimensionality. An easy example to better understand: in SSM, SALES can be dimensioned by Customer, Customer Group and Product, while COSTS can be dimensioned by just Customer Group and Product.
    The BICA connector that is used does not bring in all the internal features of BEx. There are internal calculations that BEx does that are not exposed to SSM through the BICA. That is why in some cases you can see some difference between the BEx output and the data in SSM.
    There is a Best Practices Guide for BICA for BW, which is the best source of information on the BICA connector and for understanding some of the key differences between data in BW and how to bring it into SSM.
    http://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/701b8da0-3135-2b10-929d-936e43164b06
    Although the document says it's for SSM 7.0, it applies to SSM 7.5 as well.
    Regards,
    Bob

  • ATG-Endeca integration issue

    I have a simple JSP that invokes the 'InvokeAssembler' droplet as below:
    <dsp:droplet name="InvokeAssembler">
      <dsp:param name="contentCollection" value="/content/Shared/PLP Page Collection/PLP passarela - feminino"/>
      <dsp:oparam name="output">
        <dsp:getvalueof var="test" vartype="com.endeca.infront.assembler.ContentItem" param="contentItem" />
        <br/>contentItem.contents: <dsp:valueof param="contentItem.contents" /><br/><br/>
        <br/><br/>contentItem: <c:out value="${test}" />
      </dsp:oparam>
    </dsp:droplet>
    And the 'contentCollection' value is copied from Experience Manager content collection xmgr.html url:
    http://localhost:8006/ifcr/sites/CRS/tools/xmgr.html#content/Shared/PLP%20Page%20Collection/PLP%20passarela%20-%20femini…
    The output for contentItem.contents is empty. I am not sure if there is something I missed in the JSP or if the content collection is empty in Experience Manager, but there are no errors on the ATG JBoss side; the debug log is below:
    21:50:11,035 INFO [AssemblerPipelineServlet] DEBUG Ignoring (passing on) request because mime type dynamo-internal/html is not allowed for assembly
    21:50:11,040 INFO [AssemblerPipelineServlet] DEBUG Ignoring (passing on) request for /passarela/test.jsp
    21:50:11,531 INFO  [AssemblerTools] DEBUG Assembling content for {@type=ContentSlot, ruleLimit=1, contentCollection=/content/Shared/PLP Page Collection/PLP passarela - feminino}
    21:50:13,289 INFO  [AssemblerTools] DEBUG Record filters: [product.priceListPair:precoVenda_precoTabela]
    21:50:13,290 INFO  [AssemblerTools] DEBUG FilterState.securityFilter:
    21:50:13,318 INFO  [AssemblerTools] DEBUG NavigationState: ?Nf=product.endDate%7CGTEQ+1.3769568E12%7C%7Cproduct.startDate%7CLTEQ+1.3769568E12&Nr=product.priceListPair%3AprecoVenda_precoTabela
    21:50:13,347 INFO [NucleusAssemblerFactory] DEBUG Returning handler from defaultHandlerPath: /atg/endeca/assembler/cartridge/handler/ContentSlot
    21:50:13,363 INFO  [AssemblerTools] DEBUG UserState segments: [sitegroup.grupoPassarela, site.passarelaHome]
    21:50:13,673 INFO  [AssemblerTools] DEBUG Assembled content {contents=[], @type=ContentSlot, atg:currentSiteProductionURL=/passarela, ruleLimit=1, contentCollection=/content/Shared/PLP Page Collection  /PLP passarela - feminino}
    But I have indexed data there, so it should return something. Does anyone have any idea?
    Thanks a lot.

    Hi friend,
    I got the same error in my case also. I am trying to integrate the latest Endeca installation with ATG 10.1.1. Could you please help me if you have found any remedies for this error?

  • How to restrict Sales office and Sales Group.

    Hi All,
    I want to restrict the users from changing the sales office and sales group in the sales order.
    Is there any standard way to achieve this, or do we need to do it with a user exit?
    Please help.
    Thanks,
    Pavan.

    Hi,
    there are no standard settings to restrict users from changing the sales office and sales group in the sales order,
    so you have to write the logic in a user exit:
    DATA: lt_user_list   TYPE STANDARD TABLE OF tvarvc,
           lw_user_list   TYPE tvarvc,
           lr_user        TYPE RANGE OF syuname,
           lw_user        LIKE LINE OF lr_user.
    IF screen-name EQ 'VBAK-VKBUR' OR screen-name EQ 'VBAK-VKGRP'.
    * IF sy-tcode EQ 'VA02'.
    **Get list of users who are allowed to change SO - only they can change sales office / sales group
         SELECT *
           FROM tvarvc
           INTO TABLE lt_user_list
          WHERE name = 'ZSD_VA02_ALLOWED'
            AND type = 'S'.
         IF sy-subrc = 0.
           LOOP AT lt_user_list INTO lw_user_list.
             lw_user-sign = lw_user_list-sign.
             lw_user-option = lw_user_list-opti.
             lw_user-low    = lw_user_list-low.
             lw_user-high   = lw_user_list-high.
             APPEND lw_user TO lr_user.
             CLEAR lw_user.
           ENDLOOP.
    **    If user is not in the users listed for change allowed
           IF sy-uname NOT IN lr_user.
             screen-input = 0.
           ELSE.
             screen-input = 1.
           ENDIF.
         ENDIF.
    ENDIF.
    Go to transaction STVARV; there, check the NAME 'ZSD_VA02_ALLOWED' and maintain the user IDs who are allowed to make the change.

  • In Answers am seeing "Folder is Empty" for Logical Fact and Dimension Table

    Hi All,
    I am working in OBIEE Answers. All of a sudden, when I clicked on a Logical Fact table it showed "folder is empty". I restarted all the services and tried again, but it still shows the same for the Logical Fact and Dimension tables, though I am able to see all my reports in Shared Folders. I restarted the machine too, but no change. Please help me resolve this issue.
    Thanks in Advance.
    Regards,
    Rajkumar.

    First of all, follow the forum etiquette :
    http://forums.oracle.com/forums/ann.jspa?annID=939
    Mark as answered (or helpful) the posts that other users gave you.
    And for your question, you must check the log for a possible corrupt catalog :
    OracleBIData_Home\web\log\sawlog0.log
