Best practices on the number of pipelines in a single project/app for forging

Hi experts,
I need a couple of clarifications from you regarding Endeca Guided Search for an enterprise application.
1) Say, for example, I have a web application, iEndecaApp, created by imitating the JSP reference application. All the necessary Presentation API jars are present in the WEB-INF/lib folder.
1.a) Do I need to configure anything else to run the application?
1.b) I have created the web app in Eclipse. Will I be able to run it from any third-party Tomcat server? If not, where do I have to put the WAR file to run the application successfully?
2) For the above web app "iEndecaApp" I have created an application named "MyEndecaApp" using the deployment template, so one generic pipeline is created. I need to integrate 5 different sources of data. To be precise:
i)CAS data
ii)Database table data
iii)Txt file data
iv)Excel file data
v)XML data.
2.a) What is the best practice for integrating all the data? Do I need to create 5 different pipelines (one for each source), or do I have to integrate all 5 sources in a single pipeline?
2.b) If I create 5 different pipelines, should they all reside in the single application "MyEndecaApp", or do I need to create 5 different applications using the deployment template?
Hope you guys will reply soon. Waiting for your valuable response.
Regards,
Hoque

Point number 1 is very much possible, i.e. running the JSP reference application from a server of your choice. I haven't tried it myself, but will shed some light on it once I do.
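Regarding 1.a: to the best of my knowledge, beyond the Presentation API jars in WEB-INF/lib the app mainly needs the host and port of a running MDEX engine (the JSP reference app reads these from web.xml context parameters). Below is a minimal, hedged sketch of the kind of query the reference application issues; the class names are the standard Presentation API ones, but the host/port values are placeholders for your own environment:

import com.endeca.navigation.ENEQueryResults;
import com.endeca.navigation.HttpENEConnection;
import com.endeca.navigation.Navigation;
import com.endeca.navigation.UrlENEQuery;

public class EndecaQuerySketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: point these at the MDEX engine (Dgraph) that the
        // deployment template provisioned for your application.
        HttpENEConnection conn = new HttpENEConnection("localhost", "15000");
        // "N=0" is the root navigation state in URL query syntax
        UrlENEQuery query = new UrlENEQuery("N=0", "UTF-8");
        ENEQueryResults results = conn.query(query);
        Navigation nav = results.getNavigation();
        System.out.println("Total records: " + nav.getTotalNumERecs());
    }
}

If that call succeeds from your container, then for 1.b the WAR should run on any third-party Tomcat (dropped into its webapps directory); as far as I know there is nothing container-specific in the Presentation API.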
Point number 2 - You must create 5 record adapters in the same pipeline diagram and then join them with the help of joiner components. The resulting records must be fed to the property mapper.
So 1 application, 1 pipeline, and all 5 data sources within that one application is the ideal case.
Logically, too, since the data are all related, they must share some join conditions, and you can't ask 5 different MDEX engines to serve you a combined result.
Hope this helps you.
<PS: This is to the best of my knowledge>
Thanks,
Mohit Makhija

Similar Messages

  • Best practice for number of result objects in webi

    Hello all,
    I am just wondering if SAP has any recommendation or best practice document regarding the number of fields in the Result Objects area for webi. We are currently running on XI 3.1 SP3; one of the end users is running a webi report with close to 20 objects/dimensions and 2 measures in the result objects. The report runs for 45-60 minutes and sometimes times out. The cube which stores the data has around 250K records, and the report would return pretty much all of them.
    Any recommendations/best practices?
    On a similar issue: our production system is around 250 GB. What would the memory on your server typically be? Currently we have 8 GB of memory on the SAP instance server.
    Thanks in advance.

    Hi,
    You mention cubes, so I suspect BW or MS AS. Yes, OLAP data access (ODA) to OLAP datasets is a struggle for WebIntelligence, which is best at consuming relational rowsets.
    Inefficient MDX queries can easily be generated by the webi tool, primarily due to substandard (or excessive) query and document design. Mandatory filters and focused navigation (i.e. targeted BI questions) are the best route to success.
    Here's an interesting article about "when is a webi doc too big": https://weblogs.sdn.sap.com/pub/wlg/18706
    Here's a best practice doc about webi report design and tuning on top of BW MDX: https://service.sap.com/~sapidb/011000358700000750762010E
    Optimization of the cube itself, including aggregates and cache warming, is important, but especially the use of Suppress Unassigned nodes in the BW hierarchy and "query stripping" in the webi document.
    Finally, the patch level of the BW (BW-BEX-OT-MDX) component is critical, i.e. anything lower than 7.01 SP09 is trouble (memory management, MDX optimization, functional correctness).
    Regards,
    H

  • Database best practice: max number of columns

    I have two questions that I would appreciate comments on...
    We have a table titled TRANSACTION with 160 columns and a view titled TRANSACTIONS_VIEW with 233 columns. This was designed by someone a while ago. I am wondering whether it is against best practice to have this many columns in a table. I have never before seen a table with this many columns and feel there must be a way to split the data into multiple tables to make it more manageable.
    My second question is on partitions. The above table TRANSACTION is partitioned by manually specifying partitions with max values on the transaction date, starting August 2008 through January 2010 at 1-month increments. Isn't it much better to specify automatic partitioning using the INTERVAL clause?

    kev374 wrote:
    thanks for the response, yes there are many columns that violate 3NF and that is why the column count is so high.
    Regarding the partition question, by "better" I meant that by using "interval" the partitions could be created automatically at the specified interval instead of having to create them manually.

    The key is to understand the logic behind these tables and columns. Why was it designed like this? If it's a business requirement, then 200-some columns are not bad; if it's a design flaw, 20 columns could be too much. It's not necessarily always good to have a strict 3NF design; sometimes, for various reasons, you can denormalize the tables to get better performance.
    As to the partitioning question, you still have to do the rolling-window (drop/add partitions as time goes by) type of partitioning scheme manually so far.
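    For the interval point, here is a hedged sketch of the DDL, executed over JDBC (Oracle 11g or later). Only the TRANSACTION table and its transaction-date partition key come from the post; the example column, connection URL, and credentials are made-up placeholders, and you would carry over the real column list:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class IntervalPartitionSketch {
        public static void main(String[] args) throws Exception {
            // With INTERVAL, Oracle creates monthly partitions on demand;
            // only the first (anchor) partition is declared explicitly.
            String ddl =
                  "CREATE TABLE transaction_new ( "
                + "  transaction_id   NUMBER, "      // placeholder column
                + "  transaction_date DATE "
                + ") "
                + "PARTITION BY RANGE (transaction_date) "
                + "INTERVAL (NUMTOYMINTERVAL(1, 'MONTH')) "
                + "(PARTITION p_first VALUES LESS THAN (DATE '2008-08-01'))";
            // Requires the Oracle JDBC driver on the classpath
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
                 Statement stmt = conn.createStatement()) {
                stmt.execute(ddl);
            }
        }
    }

    Dropping aged-out partitions at the other end of the rolling window remains a manual (or scripted) ALTER TABLE ... DROP PARTITION, as noted above.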

  • Best Practice: A J2EE Blue-Print for a Typical Web App

    Consider a typical synchronous Struts-based Web application which does a simple DB search and post. What are some of the main patterns and components that should be used if following the "industry best practices"?
    Does the following flow seem accurate?
    The Struts Action creates a TransferObject and passes it to a Business Delegate. The Delegate finds the appropriate BusinessObject, the BusinessObject uses the Data Access Object, the CRUD operation happens, and the result is sent back to the Action in the same TransferObject.
    Which of these components need an interface?
    What's the best way for these components to interact with each other (factory, etc.)?

    There are 3 tiers in a Java EE application. (Presentation, Business, Integration).
    The BusinessDelegate in this scenario would be a Presentation-tier business delegate. It would interact with a Session Facade that lives on the Business tier. The SessionFacade is the abstraction on the Business tier and the BusinessDelegate is the abstraction on the Presentation tier; those two are the objects in direct communication. This design enables low coupling between the actual implementations of each area. If done properly, you could go from EJB to Web Service to POJO business models without ever having to change anything in the Presentation tier.
    These object-oriented design patterns are primarily for Enterprise applications with extensive Quality-of-Service requirements.
    In your scenario, the Presentation-tier would contain a MVC-based web application, i.e. Struts. The business model and business/domain requirements would be implemented in the Business-tier.
    Presentation Tier - Struts Web Application
    Business Tier - (EJB | POJO | WEB SERVICES) Application
    Integration Tier - (Relational Database | File System | XML Database | EIS)
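    A hedged sketch of that flow in plain Java follows; every name here is illustrative rather than taken from any framework. The point is that the delegate and the facade share one interface, so the Presentation tier never sees the Business tier's implementation:

    // One file, TierDemo.java; all names are hypothetical.
    class AccountTO {                         // the TransferObject carried between tiers
        String accountId;
        String ownerName;
    }

    // Shared abstraction: the delegate (Presentation tier) and the
    // facade (Business tier) know each other only through this interface.
    interface AccountService {
        AccountTO findAccount(String accountId);
    }

    class AccountDAO {                        // Integration-tier stand-in (stubbed CRUD)
        AccountTO read(String accountId) {
            AccountTO to = new AccountTO();
            to.accountId = accountId;
            to.ownerName = "stub";            // a real DAO would hit the database
            return to;
        }
    }

    class AccountSessionFacade implements AccountService {   // Business tier
        private final AccountDAO dao = new AccountDAO();
        public AccountTO findAccount(String accountId) {
            return dao.read(accountId);
        }
    }

    class AccountBusinessDelegate {           // Presentation tier
        private final AccountService service;
        AccountBusinessDelegate(AccountService service) {
            this.service = service;           // a factory/ServiceLocator would supply this
        }
        AccountTO findAccount(String accountId) {
            return service.findAccount(accountId);
        }
    }

    public class TierDemo {
        public static void main(String[] args) {
            AccountBusinessDelegate delegate =
                new AccountBusinessDelegate(new AccountSessionFacade());
            System.out.println(delegate.findAccount("A-100").ownerName);
        }
    }

    So, to the interface question: it is the delegate/facade boundary that needs one, and a factory or ServiceLocator handing the delegate its AccountService keeps the wiring out of the Struts Action.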

  • Best Practice for Using Static Data in PDPs or Project Plan

    Hi There,
    I want to make custom reports using PDPs & Project Plan data.
    What is the best practice for using "Static/Random Data" (which is not available in MS Project 2013 columns) in PDPs & MS Project 2013?
    Should I add that data in a custom field (in MS Project 2013) or make PDPs?
    Thanks,
    EPM Consultant
    Noman Sohail

    Hi Dale,
    I have a Project Level custom field "Supervisor Name" that is used for Project Information.
    For the purpose of viewing that project-level custom field data in Project views, I have made a task-level custom field "SupName" and used the formula:
    [SupName] = [Supervisor Name]
    That shows Supervisor Name in Schedule.aspx
    ============
    Question: I want that project-level custom field "Supervisor Name" in the My Work views (Tasks.aspx).
    The field is enabled in Tasks.aspx, but the data is not present (blank column).
    How can I get the data into the My Work views?
    Noman Sohail

  • Best practice for putting together scenes in a Flash project?

    Hi, I'm currently working on a flash project with the following characteristics:
    using a PC
    2048x1080 pixels
    30 fps
    One audio file that plays (once) continuously across the whole project
    there are actions that relate to the audio, so the timing is important
    at least 10 scenes
    about 7 minutes long total
    current intent is for it to be played in a modern theater as a surprise
    What is the best practice for working on this project and then compiling it together?
    Do it all in one project file?
    Split the work into different project (xfl) files for each scene and then put it together when all the scenes are finalized?
    Use one project file but create different "scenes" for each respective scene?  I think this is the "classic" way (?).
    Make the scenes "movie clips" and then insert them into the timeline with the audio as its own layer?
    Other?
    I'm currently working on it by having it all in one project file. But I've noticed that there's some lag (or it gets choppy) at certain parts during playback, and the SWF history shows 3.1 MB with a yellow triangle and exclamation point symbol. Thanks in advance.

    You would only do that if it makes your job easier; generally speaking, it would not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols, because you can see their animation when scrubbing the main timeline. With movie clips you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movie clips.

  • Best practices for realtime communication between background tasks and main app

    I am developing (in fact, porting to a WinRT Universal App) an application connecting to Bluetooth medical devices. In order to support background connectivity, it seems best to use background tasks triggered by a device connection. However, some of these devices provide a stream of data which has to be passed to the main app in real time when it is active, i.e. to show an ECG on the screen. So my task ideally should receive and store data all the time (both background and foreground) and additionally make the main app receive it live when it is in the foreground.
    My question is: how do I make the background task pass real-time data to the app when it is active? The documentation talks about using storage, but that does not seem optimal for real-time messaging. Looking for best practices and advice. Platform is Windows 8.1 and Windows Phone 8.1.

    Hi Michael,
    Windows Phone apps have resource quotas. To prevent this from interfering with real-time communication functionality, background tasks using the ControlChannelTrigger and PushNotificationTrigger receive guaranteed resource quotas for every running task. You can find more information at
    https://msdn.microsoft.com/en-us/library/windows/apps/xaml/Hh977056(v=win.10).aspx; see the "Background task resource guarantees for real-time communication" section. ControlChannelTrigger is not supported on Windows Phone, so you can have a look at the PushNotificationTrigger class:
    https://msdn.microsoft.com/en-us/library/windows/apps/xaml/windows.applicationmodel.background.pushnotificationtrigger.aspx
    Regards,

  • Best practice - comparing NUMBER(4, 0) and VARCHAR2(5)

    I have two fields in two different views that I am comparing in a complex query. This is just one small part of the query.
    I want to make sure I am doing this the best way. First, here are the two fields:
    ZAS.ORG_ID NUMBER(4,0)
    ORG.IORG_NBR VARCHAR2(5)
    And here is the part of the query that I select the first field.
    (CASE WHEN ZAS.ORG_ID IS NULL THEN NULL ELSE (TO_CHAR(ZAS.ORG_ID, '0000')) END)
    As you can see, I do the same thing in the second part of the query. To help you understand, note that I am basically pulling back a bunch of data from two sources and comparing them.
    (CASE WHEN org.iorg_nbr IS NULL THEN NULL ELSE (TO_CHAR(org.iorg_nbr, '0000')) END)
    Please let me know if I need to supply any additional information. Any help would be greatly appreciated. Thanks!

    Hello,
    This:
    (CASE WHEN ZAS.ORG_ID IS NULL THEN NULL ELSE (TO_CHAR(ZAS.ORG_ID, '0000')) END)
    could simply be this:
    (TO_CHAR(ZAS.ORG_ID))
    And this:
    (CASE WHEN org.iorg_nbr IS NULL THEN NULL ELSE (TO_CHAR(org.iorg_nbr, '0000')) END)
    can simply be this:
    (org.iorg_nbr)
    No need to do anything with that, since it's already TRIMmed (no spaces), and you're converting ZAS.ORG_ID to a CHAR string, so they both will match without the extra '0's. And no need to explicitly handle NULLs at all.
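    A hedged sketch of the simplified comparison, run over JDBC; the view names, join shape, and connection details are assumptions for illustration only. Note this works only if IORG_NBR is stored without leading zeros, which is what the reply above assumes; if it is zero-padded, you would need TO_CHAR(zas.org_id, 'FM0000') instead:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class OrgCompareSketch {
        public static void main(String[] args) throws Exception {
            // NULL never compares equal in SQL, so rows with a NULL ORG_ID
            // simply fail the join; no explicit CASE/NULL handling needed.
            String sql =
                  "SELECT zas.org_id, org.iorg_nbr "
                + "FROM zas_view zas "                       // hypothetical view names
                + "JOIN org_view org "
                + "  ON TO_CHAR(zas.org_id) = org.iorg_nbr";
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1) + " matches " + rs.getString(2));
                }
            }
        }
    }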

  • Best practice for Number range buffering in SAP BI for FI Cubes

    Hello Experts,
    I have a question regarding number range buffering. What I have observed is that if we use a larger number of background processes while loading data, it creates locks on the NRIV table because of more calls for buffered numbers from multiple processes.
    But there is a clause in SNRO, when we buffer via 'Main Memory', that it should not be used for financial documents. I understand that this is for the serial assignment of numbers for documents and accounts on the backend ECC, but does this also hold true for the DIM IDs which we are buffering while loading data into an FI cube?
    I have looked at SAP Note 1398444, which talks about NRIVSHADOW and parallel buffering and different use-case scenarios.
    Which one of those should be used if I need to buffer my dimension while loading an FI cube in BI with the maximum number of background processes?
    Please help me understand the difference.

    hi,
    Well, the message is just a warning to signify that in case you load a larger number of new records than your existing buffer limit, you will have to increase it accordingly; if not, you can ignore it.
    If you want to understand the reason, it is like this:
    if there is not sufficient buffer space available, the process will be rolled back and the existing numbers will be lost, resulting in gaps between the document numbers; for applications like finance this is not desirable.
    See this doc for more details:
    http://help.sap.com/saphelp_nw04/helpdata/en/95/3d5540b8cdcd01e10000000a155106/frameset.htm
    Also, if you are not sure, ask your Basis team for an EarlyWatch Alert report; in it you can easily see whether number range buffering is really required or not, and if yes, for which objects/dimensions.
    Also check SAP Notes 449030 and 62077.
    hope it helps
    regards
    laksh

  • Best practice for adding images to a RH8 HTML project?

    I'm working on a Word to RH8 HTML conversion. The images in Word are SNAG images, but the resolution is poor and some of the images may need to be recreated from scratch. Going forward, I imagine I should work on getting them right in SnagIt 9. Any suggestions?

    For example, installation help topics dealing with install/uninstall screenshots for an application's client and server:
    client_dir_sum.gif
    client_dot_net.gif
    client_uninst_wlcm.gif
    server_wlcm.gif
    server_cfg_tomcat.gif
    server_success.gif
    server_uninst_wlcm.gif
    ...or, screenshots dealing with an Accounts window:
    acct_groups.gif
    acct_lookup.gif
    acct_lookup_btn.gif
    acct_templates.gif
    In other words, use cfg for configuration, dir for directory, btn for button, etc., and use either a function or application window as your lead, allowing you to see all related graphics in sequential order. If you've ever done any manual book indexing, think of this as Index entries/subentries, only the "entry" in file naming is your leading prefix (client_, acct_, etc.), and your subentries are the rest of each file name. Such as:
    Accounts
    configuring groups
    using lookup feature
    lookup button
    templates
    Use the same naming conventions for your topic file names as well; using this method eliminates the need for virtual folders. Many users on this forum use the folder structure, but editing/renaming/adding/deleting folders seems to very often create all sorts of grief. I find this naming structure to be less stressful.
    Good luck,
    Leon

  • Large ADF Applications - Best Practice

    Hi
    We have a single ADF project (one model, one view/controller) with the following model components:
    68 AMs
    387 VOs
    175 EOs
    This project is ever expanding, and we are suffering some well-known performance problems when opening JDeveloper or opening the view/controller project.
    Are there any best-practice guidelines on how to structure your ADF projects?
    e.g. what is the maximum recommended number of AMs/VOs/EOs in a single project?
    We have kept everything in a single project after some advice from Oracle to help us to re-use common modules easily.
    We use JDeveloper v10.1.3 and use JHeadstart v10.1.3 SU 1 to generate our view/controller layer, using multiple faces-config files.
    Thanks
    Denis

    Hi Denis,
    We have exactly the same problem in expanding our application, but we have a single AM and fewer EOs and VOs than you right now. There are some threads discussing this issue, but I haven't found a complete and standard solution yet. The only thing I know is that almost everything in ADF can be segmented: projects, application modules, faces-config.xml, ...
    Another very important thing is that segmenting an application is a tradeoff: it has some advantages, but brings problems in SCM, security, ...
    S/\EE|)

  • Best practice for how to access a set of wsdl and xsd files

    I've recently been poking around with the Oracle ESB, which requires a bunch of wsdl and xsd files from HOME/bpel/system/xmllib. What is the best practice for including these files in a BPEL project? It seems like a bad idea to copy all these files into every project that uses the ESB, especially if there are quite a few consumers of the bus. Is there a way I can reference this directory from the project so that the files can just stay in a common place for all the projects that use them?
    Bret

    Hi,
    I created a project (JDeveloper) with local xsd files and tried to delete and recreate them in the structure pane with references to a version on the application server. After reopening the project, I deployed it successfully to the bpel server. The process is working fine, but in the structure pane there is no information about any of the xsds anymore, and for the payload in the variables there is an exception (problem building schema).
    How does bpel know where to look for the xsd-files and how does the mapping still work?
    This cannot be the way to do it correctly. Do I have a chance to rework an existing project or do I have to rebuild it from scratch in order to have all the references right?
    Thanks for any clue.
    Bette

  • Best practice for image inserts

    I am planning to create new webhelp topics and edit existing
    topics in my project. Is it the best practice to place graphics
    files in the same folder as the HTML input files?

    I personally have always avoided the use of folders, maybe
    because most of my projects were large merged projects, which
    already contain multiple project "folders." Also, renaming/deleting
    these beasties later on can be a bear, as evidenced by many
    angst-ridden threads in this forum.
    Let RH place them in its own "virtual" Images folder and
    leave it at that. Unless...do you feel lucky?
    By the way, best practice (at least mine) is to name
    everything (projects, topics, graphics) with no spaces, very brief,
    and possibly with 2- to 3-character prefixes to more easily
    identify them (possibly application features, for example). Such
    as: sa_user_cfg.gif for a screenshot of the User Configuration
    window for System Administration.
    Good luck,
    Leon

  • Best practice for image resolution

    Although my source image files are high quality and look great in Photoshop etc., when they are viewed within a folio on the iPad the image quality drops. I'm using JPEG High when creating the folio, but it still looks pretty blurry. It's OK, but nowhere near as sharp as I feel it could be.
    There seems to be something going on during the conversion process that reduces the quality.
    Given that I have some nice, sharp hi-res masters, how should I be putting them into the folios to maximise image quality? I tried creating the folio at the highest quality and there was no difference. I've resaved the images at the size that they will appear in the folio.
    (*bangs head against the wall*)
    Thanks
    M


  • Help: Best Practice Baseline Package experience

    All,
    Can I ask for some feedback from folks who've used the Best Practice Baseline package before? I'm in a project implementing ECC 6.0, and we're planning to use this package to quickly set up and demo the system to the business. I have limited experience in config and system setup, so your advice is much appreciated.
    Specifically my questions:
    1.  Has anyone used the package for a similar purpose (quick setup to demo to business)?  Does it save a lot of time or should our team just configure the system manually?  Are there pitfalls I should be aware of?
    2.  I understand a limited amount of config / master data customisation can be done and there's even a wizard. Is it effective and error-free, i.e. will I be better off installing the base package and making changes later manually?
    3.  Assuming after the demo we have firmed business requirements, which approach would be faster and easier?  Configuring from scratch vs. using existing BP baseline then doing delta configuration?
    4.  Does the documentation tell me exactly what was configured and the settings used?
    Appreciate any help and insight you can offer, thanks!!

    Dear Jin,
    The following help link should answer your queries:
    http://help.sap.com/bp_grcv152/GRC_US/Documentation/SAP_APJ_BP_Study_2006_CS.ppt#418,1,Slide  of SAP Best Practice
    Regards,
    Naveen.
