Basics: Best practice when using a thesaurus?

Hi all,
I currently use a function which returns info for a search on our website; the function is used by the Java code to return hits:
CREATE OR REPLACE FUNCTION fn_product_search(v_search_string IN VARCHAR2)
RETURN TYPES.ref_cursor
AS
wildcard_search_string VARCHAR2(100);
search_results TYPES.ref_cursor;
BEGIN
OPEN search_results FOR
    SELECT DCS_PRODUCT.product_id,
           DCS_CATEGORY.category_id,
           hazardous,
           direct_delivery,
           standard_delivery,
           DCS_CATEGORY.short_name,
           priority
    FROM   DCS_CATEGORY,
           DCS_PRODUCT,
           SCS_CAT_CHLDPRD
    WHERE  NOT DCS_PRODUCT.display_on_web = 'HIDE'
    AND    contains(DCS_PRODUCT.search_terms, v_search_string, 0) > 0
    AND    SCS_CAT_CHLDPRD.child_prd_id = DCS_PRODUCT.product_id
    AND    DCS_CATEGORY.category_id = SCS_CAT_CHLDPRD.category_id
    ORDER BY SCORE(0) DESC,
             SCS_CAT_CHLDPRD.priority DESC,
             DCS_PRODUCT.display_name;
RETURN search_results;
END;
I want to develop this function so that it will use a thesaurus when no data is found.
I have been trying to find any documentation that might discuss 'best practice' for this type of query.
I am not sure whether I should just include the SYN call in this code directly, or whether the use of the thesaurus should be restricted to circumstances where the existing function does not return a hit against the search.
I want to keep overheads and response times to an absolute minimum.
Does anyone know the best logic to use for this?
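For illustration only, a minimal sketch of the "use the thesaurus only when the normal search finds nothing" variant might look like the following. The thesaurus name MY_THES, the COUNT(*) pre-check and the naive SYN() wrapping are assumptions for the sketch (not tested code), and they assume a single-term search string:
CREATE OR REPLACE FUNCTION fn_product_search (v_search_string IN VARCHAR2)
RETURN TYPES.ref_cursor
AS
    search_results  TYPES.ref_cursor;
    v_hits          NUMBER;
    v_query_string  VARCHAR2(4000) := v_search_string;
BEGIN
    -- First pass: count hits for the plain search string.
    SELECT COUNT(*)
    INTO   v_hits
    FROM   DCS_PRODUCT
    WHERE  contains(DCS_PRODUCT.search_terms, v_search_string, 0) > 0;

    -- Only expand through the thesaurus when the plain search found nothing.
    -- MY_THES is a placeholder for a thesaurus loaded with ctxload; wrapping
    -- the whole string in SYN() only works for a single word or phrase.
    IF v_hits = 0 THEN
        v_query_string := 'SYN(' || v_search_string || ', MY_THES)';
    END IF;

    OPEN search_results FOR
        SELECT DCS_PRODUCT.product_id,
               DCS_CATEGORY.category_id,
               hazardous,
               direct_delivery,
               standard_delivery,
               DCS_CATEGORY.short_name,
               priority
        FROM   DCS_CATEGORY,
               DCS_PRODUCT,
               SCS_CAT_CHLDPRD
        WHERE  NOT DCS_PRODUCT.display_on_web = 'HIDE'
        AND    contains(DCS_PRODUCT.search_terms, v_query_string, 0) > 0
        AND    SCS_CAT_CHLDPRD.child_prd_id = DCS_PRODUCT.product_id
        AND    DCS_CATEGORY.category_id = SCS_CAT_CHLDPRD.category_id
        ORDER BY SCORE(0) DESC,
                 SCS_CAT_CHLDPRD.priority DESC,
                 DCS_PRODUCT.display_name;

    RETURN search_results;
END;
Note that the no-hit case costs two Text queries; the reply below suggests pushing the expansion into the search string instead.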

Hi.
You are asking a lot ("... absolute minimum ... response times ...") of Oracle Text on 9.2.x.x.
First, text queries on 9.2 are much slower than on 10.x. Second, it is a bad idea to try to call query expansion functions directly from the application.
From my own experience, the best practice for thesaurus usage is:
1. Write a good search string parser which adds the thesaurus expansion functions (like NT, BT, RT, SYN, ...) directly into the result string passed through to the DRG engine (a rough sketch follows this reply).
2. Use effective text queries: do not use direct or indirect sorts (the DOMAIN_INDEX_NO_SORT hint can help).
3. Finally, write efficient application code. The code you show is inefficient.
Hope this helps.
WBR Yuri
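A rough sketch of what points 1 and 2 could look like in practice (MY_THES and :expanded_search_string are invented names; the parser itself lives in the application and is not shown):
-- Point 1: the application-side parser turns raw input such as
--   engine oil
-- into an expanded Oracle Text query string such as
--   SYN(engine, MY_THES) AND SYN(oil, MY_THES)
-- and passes the whole string to CONTAINS; the DRG engine does the
-- expansion, nothing calls the expansion functions separately.
--
-- Point 2: if score ordering is not strictly needed, drop the ORDER BY;
-- the DOMAIN_INDEX_NO_SORT hint tells the optimizer not to rely on the
-- domain index for sorting.
SELECT /*+ DOMAIN_INDEX_NO_SORT */
       product_id
FROM   DCS_PRODUCT
WHERE  contains(search_terms, :expanded_search_string, 0) > 0;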

Similar Messages

  • Best practice when using Tangosol with an app server

    Hi,
    I'm wondering what is the best practice when using Tangosol with an app server (Websphere 6.1 in this case). I've been able to set it up using the resource adapter, tried using distributed transactions and it appears to work as expected - I've also been able to see cache data from another app server instance.
    However, it appears that cache data vanishes after a while. I've not yet been able to put my finger on when, but garbage collection is a possibility I've come to suspect.
    Data in the cache survives the removal of the EJB, but somewhere later down the line it appears to vanish. I'm not aware of any expiry settings for the cache that would explain this (to the best of my understanding the default is "no expiry"), so GC came to mind. Would this be the explanation?
    If that would be the explanation, what would be a better way to keep the cache from being subject to GC - to have a "startup class" in the app server that holds on to the cache object, or would there be other ways? Currently the EJB calls getCacheAdapter, so I guess Bad Things may happen when the EJB is removed...
    Best regards,
    /Per

    Hi Gene,
    I found the configuration file embedded in coherence.jar. Am I supposed to replace it and re-package coherence.jar?
    If I put it elsewhere (in the "classpath") - is there a way I can be sure that it has been found by Coherence (like a message in the standard output stream)? My experience with WebSphere is that "classpath" is a rather vague concept; we use the J2CA adapter, which most probably has a different class loader than the EAR that contains the EJB, and I would rather avoid doing a lot of trial/error corrections to a file just to find that it's not actually being used.
    Anyway, at this stage my tests are still focused on distributed transactions/2PC/commit/rollback/recovery, and we're nowhere near 10,000 objects. As a matter of fact, we haven't had more than 1024 objects in these app servers. In the typical scenario where I've seen objects "fade away", there has been only one or two objects in the test data. And they both disappear...
    Still confused,
    /Per

  • Best practice when using auto complete in view layer

    Hello
    I have a question regarding the best way to store/cache data when using an auto complete function in the view layer.
    I know that there will be a lot of visitors that will use this function and I don't want to kill the application server, so I need some advice.
    It's about 6000 words that should be searchable... My first thought was to create a singleton bean that stores the current items and that I will iterate over, but that is a lot of "waste".
    I would be very glad if anyone could advise me on the best way to do this, and whether there is any de facto standard to use for auto completion in the "view layer".
    Thanks!
    Best Regards/D_S

    I don't know what your design is, but here are some ideas:
    To me, autocomplete means you have some user-specific data that the user entered previously, such as their home address, and some generic data that is not specific to any particular user. I would store all that in a database. For the user-specific data I would store their userID along with the data in the database. Then, when populating a JSP page, I would call up just the data specific to that user plus the generic data from the database, and store it as an array of some type in JavaScript, client-side. When the user clicks the autopopulate button, I would have that button call a JavaScript function that retrieves the data from the JavaScript array and populates the various textfields. All this is done client-side so the form does not have to be re-drawn.
    I question why you have 6000 items. Normally, autopopulate has at most only a few dozen items. If you still need 6000 items, I suggest adding a textfield to the form to filter the data down to a manageable amount. Example: rather than getting all names from a telephone book, put a textfield on the form that allows an end user to enter a letter from a to z, such as 'b', then only fetch last names from the phone book that begin with 'b'.

  • Need advise for best practice when using Toplink with external transaction

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to using external transactions, so we can make database operations and JMS operations within a single transaction.
    Some of our team tried out the TopLink support for external transactions and came up with the following initial recommendations.
    Since we are not familiar with using external transactions, I would like members of this forum, and experts, to help comment on whether these recommendations are indeed valid or in line with best practice. And for folks that have done this in their project, what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the following changes, the findSomeObject method will now read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As it generally is with this sort of question there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); other than that, the semantics of the calls and when you use a UnitOfWork are still dependent on the requirements of your application. For instance, I noticed that originally the findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached, this is only required if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. In order to use this, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
    --Gordon

  • Migration Best Practice When Using an Auth Source

    Hi,
    I'm looking for some advice on migration best practices or more specifically, how to choose whether to import/export groups and users or to let the auth source do a sync to bring users and groups into each environment.
    One of our customers is using an LDAP auth source to synchronize users and groups. I'm trying to help them do a migration from a development environment to a test environment. I'd like to export/import security on each object as I migrate it, but does this mean I have to export/import the groups on each object's ACLs before I export/import each object? What about users? I'd like to leave users and groups out of the PTE files and just export/import the auth source and let it run in each environment. But I'm afraid the UUIDs for the newly created groups will be different and they won't match up with object ACLs any more, causing all the objects to lose their security settings.
    If anyone has done this before, any suggestions about best practices and gotchas when using the migration wizard in conjunction with an auth source would be much appreciated.
    Thanks,
    Chris Bucchere
    Bucchere Development Group
    [email protected]
    http://www.bucchere.com

    The best practice here would be to migrate only the auth source through the migration wizard, and then do an LDAP sync on the new system to pull in the users and groups. The migration wizard will then just "do the right thing" in matching up the users and groups on the ACLs of objects between the two systems.
    Users and groups are actually a special case during migration -- they are resolved first by UUID, but if that is not found, then a user with the same auth source UUID and unique auth name is also treated as a match. Since you are importing from the same LDAP auth source, the unique auth name for the user/group should be the same on both systems. The auth source's UUID will also match on the two systems, since you just migrated that over using the migration wizard.

  • SAP best practice to use transportation component

    Hi,
    I need to know when the shipment should be created after delivery, and when a shipment should be created for a group of deliveries.
    Indeed, I would like to know the SAP best practices for using the transportation component. Please forward any related links which describe them.
    Thanks,
    Victor

    Dear Victor,
    The shipment document will be created after delivery creation but before Post Goods Issue (PGI).
    When the deliveries belong to the same route, you can create a shipment document for a group of deliveries.
    For more information, go through this help link; it will help you:
    http://help.sap.com/saphelp_47x200/helpdata/en/93/743c21546011d1a7020000e829fd11/frameset.htm
    I hope this will help you,
    Regards,
    Murali.

  • Best Practices for using Content Types

    We had a third-party vendor who migrated and restructured content and sites in SharePoint 2010. I noticed one unusual thing: in most cases they created separate site content types for every library in a site, i.e. even if two libraries contain the same set of metadata columns, they created a separate site content type by duplicating it from the first one, gave it a unique name and used it in the second library.
    My view of content types is that they are there for reusability, i.e. if another library needs the same set of metadata columns then I would just reuse the existing content type rather than create another content type with a different name by inheriting it from the first one with the same set of columns.
    When I asked the vendor the reason for this approach (for every library they created new content types, and for libraries needing the same set of metadata columns they just inherited from a custom site content type, created another duplicate one with the same set of metadata columns and gave it a different name, in most cases the name of the library), they said they did that to classify documents, which I did not agree with, because by creating two document libraries the classification is already done.
    I need some expert advice on this and will really appreciate it. I understand content types are useful and provide reusability, but:
    A) Do we need to create new site content types whenever we create a new library, even though we are not going to reuse them?
    B) What is best practice if a few libraries need the same set of metadata columns:
    1) Create a site content type and reuse it in those libraries? or
    2) Create a site content type and create new content types by inheriting from the site content type created first, just giving them different names even though all of them have the same set of columns?
    I need expert advice on this, but the following is my own opinion: I do not think point A) is good practice; we should create a site content type only when we think it will be reused, and we do not need to create a site content type every time we create a document library. I also do not think point 2) of B) is good practice.
    Dhaval Raval

    It depends on the nature of the content types and the libraries. If the document types really are shared between document libraries then use the same ones. If the content types are distinct and non overlapping items that have different processes, rules or
    uses then breaking them out into a separate content type is the way forward.
    As an example for sharing content types: Teams A and B have different document libraries. Both fill in purchase orders, although they work on different projects. In this case they use the same form and sharing a content type is the no question approach.
    As an example for different content types: A company has two arms, a consultancy where they send people out to client sites and a manufacturing team who build hardware. Both need to fill in timesheets but whilst the metadata fields on both are the same the
    forms are different and/or are processed in a different manner.
    You can make a case either way; I prefer to keep the content types simple and only expand out when there's a proven need and a user base with experience with them. It means that if you wanted to subdivide later you'd have more of a headache, but that's a risk I generally think works out.

  • Mobile App Best Practice When Using SQLite Database

    Hello,
    I have a mobile app that has several views.
    Each view calls a different method of a Database custom class that basically returns the array from a synchronous execute call.
    So, each view has a creationComplete handler in which I have something like this:
    var db:Database=new Database();
    var connectResponse:Object=db.connect('path-to-database');
    if(connectResponse.allOK) { //allOK is true if the connection was successful
       //Do stuff with data
    } else {
       //Present error notice
    }
    However, this seems redundant. Is it OK to do this once (connect to the database) in the main application file?
    Then do something like FlexGlobals.topLevelApplication.db?
    And generally speaking, can constants and other things that I would need throughout the app be placed in the main app? As a best practice, not technically, as technically it is possible.
    Thank you.

    No, I only connect once.
    I figured I wanted several views to use it, so I made it static and a singleton as I only have one database.
    I actually use synchronous calls, but there is a sync-with-remote-MySQL-database function, hence the EventDispatcher.
    ... although I am thinking it might be better to use async calls, dispatch a custom event and have the relevant views subscribe.

  • Best practices when using OEM with Siebel

    Hello,
    I support numerous Oracle databases and have also taken on the task of supporting Enterprise Manager (Grid Control). Currently we have installed the agent (10.2.0.3) on our Oracle database servers, so most of our targets are hosts, databases and listeners. Our company is also using Siebel 7.8, which is supported by the Siebel ops team. They are looking into purchasing the Siebel plug-in for OEM. The question I have is: is there a general guide or best practice for managing the Siebel plug-in? I understand that there will be agents installed on each of the servers that have Siebel components, but what I have not seen documented is who is responsible for installing them. Does the DBA team need an account on the Siebel servers to do the install, or can the Siebel ops team do the install and have permissions set on the agent so that it can communicate with Grid Control? Also, they will want access to Grid Control to see the performance of their components; how do we limit their access to only the Siebel targets, including what is available under the Siebel Services tab? Any help would be appreciated.
    Thanks.

    There is a Getting Started Guide, which explains about installation
    http://download.oracle.com/docs/cd/B16240_01/doc/em.102/b32394/toc.htm
    -- I presume there are two teams in your organization, viz. a DBA team which is responsible for installing the agent and owns Grid Control, and a Siebel ops team which is responsible for monitoring the Siebel deployment.
    Following is my opinion based on the above assumption:
    -- DBA team installs agent as a monitoring user
    -- Siebel ops team provides execute permission to the above user for the server manager (srvrmgr.exe) utilities and read permission to all the files under the Siebel installation directory
    -- DBA team provisions a new admin for the Siebel ops team and restricts the permissions for this user
    -- Siebel ops team configures the Siebel pack in Grid Control. [Discovery/Configuration etc]
    -- With the above set up Siebel ops team can view only the Siebel specific targets.
    Thanks

  • Best Practices when using MVMC to improve speed of vmdk download

    I've converted a number of machines already from ESXi 4.1 to Hyper-V 2012 successfully and learnt pretty much all the gotchas and potential issues to avoid along the way, but I'm still stuck with extremely slow downloading of the source vmdk files to the host I'm using for the MVMC. This is not so much an issue for my smaller VMs, but it will be once I hit the monster-sized ones.
    To give you an idea, on a 1Gb network it took me 3 hours to download an 80GB VM. Monitoring the network card on the Hyper-V host I have MVMC running on shows that I'm at best getting 30-40 Mb/s download, and there are large patches where that falls right down to 20 Kb/s or thereabouts before ramping back up to the Mb/s range again. There are no physical network issues that should be causing this as far as I can see.
    Is there some undocumented trick to get this working at an acceptable speed? 
    Copying large files from a windows guest VM on the esx infrastructure to the Hyper-V host does not have this issue and I get the full consistent bandwidth.

    It's VMware in general, is why... Ever since I can remember (which was ESX 3.5), if you copy using the web service from the datastore the speeds are terrible. Back in the 3.5 days the max speed was 10 Mbps. FastSCP came around and threaded it to make it fast.
    Backup software like Veeam goes faster only if you have a backup proxy that has access to all datastores running in VMware. It will then utilize the backend VMware pipe and VM network to move the machines, which is much faster.
    That being said, in theory, if you nested a Hyper-V server in a VMware VM just for conversions, it would be fast, provided the VM server has access to all the datastores.
    Oh, and if you look at MAT and MVMC, the reason why it's fast is because NetApp does some SAN offloading to get around VMware and make it array-based. So then it's crazy fast.
    As a side note, that was always one thing that has pissed me off about VMware.

  • Best practices when using collections in LR 2

    How are you using collections and collection sets? I realize that each photographer uses them differently, but I'm looking some inspiration because I have used them only very little.
    In LR 1 I used collections a bit like virtual folders, but that no longer works as naturally in LR 2.
    In LR 1 it was like:
    Travel (1000 images, all Berlin images)
    Travel / Berlin trip (400 images, all Berlin trip images)
    Travel / Berlin trip / web (200 images)
    Travel / Berlin trip / print (100 images)
    In LR 2 it could be done like this, but this somehow feels unnatural.
    Travel (Collection Set)
    Travel / Berlin trip (CS)
    Travel / Berlin trip / All (collection, 400 images)
    Travel / Berlin trip / web (collection, 200 images)
    Travel / Berlin trip / print (collection, 100 images)
    Or is this kind of use stupid, because the same could be done with keywords and smart collections, and it would be more transferable?
    Also, how heavily are you using Collections? I'm kind of on the edge now (should I start using them heavily or not), because I just lost all my collections because I had to build my library from scratch because of weird library/database problems.

    Basically, I suggest not using collections to replicate the physical folder structure, but rather to collect images independently of physical storage to serve a particular purpose. The folder structure is already available as a selection means.
    Collections are used to keep a user-defined selection of images as a persistent collection that can be easily accessed.
    Smart collections are based on criteria that are understood by the application and automatically kept up to date, again as a persistent collection for easy access. If this is based on a simple criterion, it can also, and perhaps even more easily, be done by use of keywords. If however it is a set of criteria with AND/OR combinations, or includes any other metadata field, the smart collection is the better way to do it. So keywords and collections in Lightroom are complementary to each other.
    I use (smart) collections extensively; check my website www.fromklicktokick.com where I have published a paper and an essay on the use of (smart) collections to add controls to the workflow.
    Jan R.

  • What is best practice for using .movelast .movefirst?

    I was told at some point that I should always execute a .MoveLast then a .MoveFirst before starting to work on a recordset, so that I was sure all records were loaded. But if there are NO records, I get a "no current record" error when the .MoveLast statement is executed in the following code. And if I use rstClassList.RecordCount before the .MoveLast, can I count on it being valid?
    Also, I was unable to paste this code into this post.  I had to re-type it.  Is that expected behavior? Not to be able to paste stuff in?
    ls_sql = "Select * from tblStudents"
    Set rstClassList = CurrentDb.OpenRecordset(ls_sql)
    rstClassList.MoveLast
    rstClassList.MoveFirst
    li_count = rstClassList.RecordCount
    TIA
    LAS

    You do a MoveLast in order to make the RecordCount accurate.  If you access it before that, the results are unreliable.
    You want to do a MoveLast, then a RecordCount, and only then a MoveFirst if the count is greater than zero.
    That being said, the DataWindow is how people normally work with a database from PowerBuilder.

  • New to 16x9 format, how get best quality when using standard dv footage?

    If I am capturing standard NTSC dv footage, and adding standard dimension digital photos, what capture and sequence settings should I use to get best looking 16x9 output? I know I can export as 16x9 and burn a 16x9 DVD in DVDSP, but I was thinking I should capture and edit that way as well to get best quality.
    Thus far, I've been editing everything as 4x3 and assuming the TVs these will be shown on will compensate, which is generally true. I would like, however, to give my clients the option of which output option is best for them.

    So, a client would have to give me 16x9 raw footage for me to produce a true 16x9 DVD?
    Yes.
    I know I can't produce HD DVDs, at least not cost effectively, and most people don't have players anyway.
    Right. None of us can. The movies from the store or Netflix aren't HD. HD has no current relevance in DVD production (other than as acquisition formats).
    I do want their footage to look as good as possible on those large-screen TVs.
    If you produce a good DVD it will look good. All the other DVDs that are 4:3 are just that - 4:3. Same with 4:3 television. If they were produced well they should look great on any screen.
    1. I'm starting with DV NTSC raw footage, a low-res codec.
    OK.
    2. I'm compressing to MPEG-2 to be playable on an SD-DVD player, a further low-res codec.
    OK. But you really mean "to be playable on a DVD player".
    3. Regardless of what I do, it's going to be low-res.
    Regardless of what you do, the resolution is limited by current DVD technology. That is, in the NTSC world, 720x480 whether it's 4:3 or 16:9.
    Does MPEG-2 always look crappy on an HDTV? On a 72dpi SDTV, it looks acceptable. Is there a workaround for HDTV?
    Whether or not it looks crappy is subjective. If Hollywood-produced DVDs and SD television look great on it, then it's possible to make it look great.
    Your real limitations - the ones you have control over, are:
    - quality of camera, glass, and related equipment
    - quality of lighting, audio, camerawork, production in general
    - recording format (DV ain't the greatest).
    - editing format (hint: you don't have to edit in DV)
    - quality of editing / editor
    - quality of MPEG2 encoding
    In your case I think the last one is the most relevant. If you aren't getting acceptable results you may need to learn more about MPEG2 and Compressor.
    BTW 72dpi is a print term, it has absolutely no meaning in video.
    Further question, if I'm doing a photo slideshow, not DV footage, can I leave it uncompressed or will it convert to MPEG-2 and stink regardless?
    To make a DVD movie it needs to be MPEG-2. That's the technology we have. But it doesn't have to stink. The MPEG-2 you use is the same MPEG2 the studios use with movie releases. The technology is the same, the difference is that they have people who really really know how to use it well.
    There is a way to make actual slideshows from JPEGs which play from a DVD or CD. I know little about this, but I suspect that the resolution is still limited to 720x480. Someone will correct me if that's wrong. But you don't have to put them through MPEG2 compression.
    You may benefit from finding out what 16:9 really means. All it means is that the display is wider. That's all. There is no implication of a quality increase, in fact from DVD playback the 16:9 picture will always appear a little softer than SD if viewed on a widescreen device. That's just the way it is.

  • Best practices for using Excel functions on Power Pivot Data

    Hi
    How do you suggest performing calculations on cells in a Power Pivot table? Obviously the ideal approach is to use a DAX measure. But given that DAX doesn't have every function, is there a recommended way of, e.g., adding an "extra" (i.e. just adjacent) column to a pivot table? (In particular I want to use BETA.INV.)
    I could imagine one option of adding some VBA that would update the extra column as the pivot table itself refreshed (and added more/fewer rows).
    thanks
    sean

    Hi Sean,
    I'm not sure exactly what your requirement is regarding this issue; maybe you can share some sample data or a scenario with us for further investigation. As we know, if we need to add an extra column to the PowerPivot data model, we can directly create a calculated column:
    calculated columns:
    http://www.powerpivot-info.com/post/178-how-to-add-dax-calculations-to-the-powerpivot-workbooks
    There are some differences between Excel and DAX functions; here are the lists for your reference:
    Excel functions (by category):
    http://office.microsoft.com/en-us/excel-help/excel-functions-by-category-HA102752955.aspx
    DAX Function Reference:
    http://msdn.microsoft.com/en-us/library/ee634396.aspx
    Hope this helps.
    Regards, 
    Elvis Long
    TechNet Community Support

  • Best way to use Models

    Hello,
    What is the best practice when using models? Consider a scenario where one project has several components, and each component has its own views, windows, etc., since the components are separated by functionality.
    Is it better to create a model and then add that model to the "used models" of each component? Or should I create a separate component that will handle the model and expose its data via context on its interface?
    regards,
    arnold

    Hello Arnold,
    I have read in one of the SAP documents that if models are used in different projects (different DCs), you should create a separate DC for the models and use it in all the projects.
    If models are used in one DC with different components, use them via used models.
    Regards,
    Sridhar
