Best Approach for Use Case

Experts,
I am creating a small POC for a search engine and am wondering about the best way to achieve the scenario below.
1) Assume that, as in Google, I am entering some data in a text box. I want the area below the text box to show matching records (leaving aside where the records are fetched from - DB or flat files) based on the user's input, and to change as and when new characters are entered or deleted.
I guess contextual events clubbed together with regions will help me here, but I need to know if I am thinking in the right direction.
2) My other page has a registration form which contains 3 logical sections, say Emp, Dept, Country. All the EOs have validations defined on them. Now the user can enter only Emp details and proceed, or only Dept details and proceed. In such a case, how do I skip the other sections' validations?
I guess subforms can help me here.
Please advise.
JDev 11.1.1.4 and beyond.
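For point 1, ADF Faces also provides the af:autoSuggestBehavior tag for exactly this show-results-as-you-type pattern. A minimal managed-bean sketch (the bean name, lookup method, and sample data below are illustrative assumptions, not from this thread):

// JSF page (sketch):
//   <af:inputText value="#{searchBean.query}">
//     <af:autoSuggestBehavior suggestedItems="#{searchBean.onSuggest}"/>
//   </af:inputText>
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.faces.context.FacesContext;
import javax.faces.model.SelectItem;
import oracle.adf.view.rich.model.AutoSuggestUIHints;

public class SearchBean {
    private String query;

    // Called on every keystroke; returns the items shown below the text box
    public List<SelectItem> onSuggest(FacesContext ctx, AutoSuggestUIHints hints) {
        String typed = hints.getSubmittedValue();
        List<SelectItem> items = new ArrayList<SelectItem>();
        for (String match : fetchMatches(typed)) {
            items.add(new SelectItem(match));
        }
        return items;
    }

    // Stand-in for the real lookup against a DB or flat file
    private List<String> fetchMatches(String prefix) {
        List<String> out = new ArrayList<String>();
        if (prefix == null || prefix.length() == 0) {
            return out;
        }
        for (String s : Arrays.asList("order", "orange", "oracle", "region")) {
            if (s.startsWith(prefix.toLowerCase())) {
                out.add(s);
            }
        }
        return out;
    }

    public String getQuery() { return query; }
    public void setQuery(String query) { this.query = query; }
}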

Hi,
You did not get the article. It explains a wizard-type interface with Next functionality, where validation is not fired at first but at commit time.
"When I commit, how do I make sure that only transaction-level validation is fired for the tab/section of one entity only?" That article doesn't meet your use case.
SkipValidation will only skip validation when you need to navigate (together with immediate="false", the default), but it won't allow you to commit unless and until your EO validates.
"I guess SkipValidation set to Custom would help me." Wrong.
"Let's say I have a page with 3 tabs, or a single page with 3 sections, each showing data from a different EO, and each EO has validations defined on it." In that case you need to suppress the validation of that EO using the flag workaround I explained in my previous post.
Just set the flag to 'DRAFT' to suppress the validation of that EO before commit.
Implementing it as an entity-level script validator would be better.
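A rough Java sketch of that flag workaround (the attribute and method names are hypothetical; the same guard could equally be written as a Groovy precondition on the validator):

// Method validator on an EntityImpl subclass (names are hypothetical).
// While StatusFlag is 'DRAFT' the rule passes unconditionally, so the row
// can be committed; once the flag is cleared, the real check fires.
public boolean validateSalaryPositive(oracle.jbo.domain.Number salary) {
    if ("DRAFT".equals(getAttribute("StatusFlag"))) {
        return true; // suppress validation for draft rows
    }
    return salary != null && salary.intValue() > 0;
}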
Regards,
Edited by: Santosh Vaza on Jun 29, 2012 10:51 AM

Similar Messages

  • Best approach for using Faces with growing children?

    Hi all,
    I'm a recent Aperture user with young kids (four-year-old twins), and I'm wondering how best to use Faces to identify the kids' faces.  I started working with about 6 months' worth of recent photos in Aperture (3.2.3) before importing my full iPhoto library.  Faces did quite a good job of identifying the kids in that sample.  I've just imported all of my iPhoto library, which includes photos back to when the kids were first born.  Faces is now making suggestions that seem pretty reasonable, but far from perfect.  I suspect that if I go through the process of training Faces to do a better job with the "baby faces" its performance would improve on the old photos I just added, but is that a bad idea?  I'm afraid that training Faces to recognize the baby version of someone will "broaden" the definition it's using, making recognition less accurate for new photos I add. I could tell Faces that the baby versions are different people, but that might be worse -- then I'd have two very similar face profiles competing to "claim" new faces.  Does anyone have any experience that might help?
    Thanks,
    Brad

    Let me start by stating:
    - I don't know
    - I don't think you'll get any help from Apple.
    That said, here's what I suggest.  The Faces parameters are biometric.  The human head changes the least of any body part over the course of a life.  Still, there is bound to be an age prior to which Faces identification works less well because the data is "smeared".  Similarly, after that age, Faces identification should work with the same level of accuracy.
    I would, for the present, ignore that.  Identify all Faces you have.  If Faces identification is sub-optimal, pick an age that you think corresponds to what I've laid out above, and create a new Face for all pictures of the individual prior to that age.  At that point you'll have two "Faces" for each individual: let's say "Robin Infant" and "Robin (post-infant)".
    While Aperture makes it easy to combine Faces (drag-drop in Faces View), I don't know an easy way to split named Faces.  It's easy enough to group the Images you want (filter for "Face is ... " and for "Date is before ... ").  From there, you will have to rename the Faces one-by-one.  This goes quickly by pasting the name in the name field.
    My guess is that the identification algorithm rejects the data from included faces that are outliers.  IOW, I don't think you can train Faces to be sloppy.
    Let us know what you find out.

  • Best Approach for using Data Pump

    Hi,
    I configured a new database and set it up with schemas that I imported from another production database. Now, before this database becomes the new production database, I need to re-import the schemas so that the data is up to date.
    Is there a way to use Data Pump so that I don't have to drop all the schemas first? Can I just export the schemas and somehow just overwrite what's in there already?
    Thanks,
    Nora

    Hi, you can use the NETWORK_LINK parameter to import data from another (remote) database.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#i1007380
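    For illustration, the invocation might look like this (the schema list and database link name are placeholders; TABLE_EXISTS_ACTION=REPLACE overwrites tables that already exist, so you don't have to drop the schemas first):
    impdp system SCHEMAS=hr,oe NETWORK_LINK=prod_link TABLE_EXISTS_ACTION=REPLACE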
    Regards.

  • Best approach for uploading documents using a custom web part - Client OM or REST API

    Hi,
    I am using my custom upload Visual Web Part for uploading documents to my document library with a lot of metadata.
    The columns include single line of text, drop-down list, lookup columns, and managed metadata (taxonomy) columns.
    So I would like to know which is the best approach for uploading.
    Currently I am using the traditional SSOM (server object model), and I would like to know the best approach for uploading files into document libraries.
    I have hundreds of sub-sites with 30+ document libraries within those sub-sites. Currently it takes a few minutes to upload the files in my dev environment; I am just wondering what would happen if the number of sub-sites reaches a hundred!
    I am looking at this from a performance perspective.
    my thought process is :
    1) Implement Client OM
    2) REST API
    Has anyone tried these approaches before, and which approach provides better performance?
    If anyone has sample source code or links, please share them.
    Also, are there any restrictions on the size of the uploaded file?
    any suggestions are appreciated!

    Try below:
    http://blogs.msdn.com/b/sridhara/archive/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010.aspx
    http://stackoverflow.com/questions/9847935/upload-a-document-to-a-sharepoint-list-from-client-side-object-model
    http://www.codeproject.com/Articles/103503/How-to-upload-download-a-document-in-SharePoint
    // Requires a reference to the client assemblies:
    // using Microsoft.SharePoint.Client;
    public void UploadDocument(string siteURL, string documentListName,
        string documentListURL, string documentName, byte[] documentStream)
    {
        using (ClientContext clientContext = new ClientContext(siteURL))
        {
            // Get the document list by title
            List documentsList = clientContext.Web.Lists.GetByTitle(documentListName);
            var fileCreationInformation = new FileCreationInformation();
            // Assign the content byte[], i.e. documentStream
            fileCreationInformation.Content = documentStream;
            // Allow overwrite of an existing document
            fileCreationInformation.Overwrite = true;
            // Upload URL
            fileCreationInformation.Url = siteURL + documentListURL + documentName;
            Microsoft.SharePoint.Client.File uploadFile =
                documentsList.RootFolder.Files.Add(fileCreationInformation);
            // Update the metadata for a field named "DocType"
            uploadFile.ListItemAllFields["DocType"] = "Favourites";
            uploadFile.ListItemAllFields.Update();
            clientContext.ExecuteQuery();
        }
    }
    If this helped you resolve your issue, please mark it Answered

  • What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for the various types of users. (We can store only the relevant data in a particular schema, and the number of records per table can be reduced by replicating tables, but then we also have to maintain all the data in another schema. That means updating two schemas in a given session - one schema for an individual user and another schema holding all the data - which may lead to update problems.)
    OR
    2. Using a single schema for all users.
    Note: in this case all users may access the same tables, and there may be many more records than in the previous case.
    Which is the best case?
    Please share your valuable ideas.

    That is true, but "I want a solution from you all" is like saying "I want you to tell me how to fix my friend's car" - there isn't enough detail here for anyone to answer.

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10g (10.2.0.3.0). OWB is installed on Linux.
    We have a number of mappings, and we have built process flows for the mappings as well.
    I would like to know the best approaches for incorporating re-start options in our process, i.e. on a failure of a mapping in a process flow.
    How do we recycle failed rows?
    Are there any built-in features/best approaches in OWB to implement the above?
    Do the runtime audit tables help us to build a re-start process?
    If not, do we need to maintain our own (custom) tables to hold such data?
    How have our forum members handled these situations?
    Any ideas?
    Thanks in advance.
    RI

    Hi RI,
    "How many mappings (range) do you have in a process flow?" Several hundred (100-300 mappings).
    "If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails?" Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails, the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the execution of mapping m2 from the Workflow monitor - open the diagram with the process flow, select mapping m2, click the Expedite button, and choose the Repeat option.
    "On re-start, will it run m1 again, then m2 and so on, or will it re-start at row 1 of m2?" You can specify the restart point. "At row 1 of m2" - I don't understand what you mean: all mappings run in set-based mode, so in case of error all table updates are rolled back (but there are several exceptions - for example, multiple target tables in a mapping without correlated commit, or an error in a post-mapping step - so you must carefully analyze the results of an error).
    "What will happen if m3 fails?" The process is suspended and you can restart execution from m3.
    "By running without failover and with max number of errors = 0, do we reduce recycled failed rows to zero (0)?" These settings guarantee that a mapping has only two possible results - SUCCESS or ERROR.
    "What is the impact if we have a large volume of data?" In my opinion, for large volumes set-based mode is the preferred data processing mode. With this mode you have the full range of enterprise features of the Oracle database - parallel query, parallel DML, nologging, etc.
    Oleg

  • Best Approach for Reporting on SAP HANA Views

    Hi,
    Kindly provide information on the best approach for reporting on HANA views for the architecture displayed below (diagram not reproduced here):
    We are looking for information mainly around the following points:
    There are two reporting options known to us:
    1. Reporting on HANA views through SAP BW (View > VirtualProvider > BEx > BI 4.1)
    2. Reporting on HANA views in ECC using BI 4.1 tools
    Which is the better option for reporting (please provide supporting reasons, i.e. advantages and limitations)? In case a better approach exists, please let us know.
    Also, what is the best approach for reporting in a mixed scenario where data from BW and from HANA views is to be used together?

    Hi Alston,
    To be honest, I did not understand the architecture that you laid out in your message.
    As far as I understood, you have one HANA instance with ERP and BW both running on it - or there might be two HANA instances, with ERP and BW running independently.
    Anyway, if you have HANA you have many options for presenting data using analytic views. You also have BW on HANA as an EDW. So for both you can use BO, and Lumira as well, for presenting data.
    Check this document as well: http://scn.sap.com/docs/DOC-34403

  • Best approach for IDOC - JDBC scenario

    Hi,
    In my scenario I am creating a sales order (ORDERS04) in the R/3 system, which needs to be replicated to a SQL Server system. I am sending the order to XI as an IDoc and want to use JDBC for sending the data to SQL Server. I need to insert data into two tables (header & details). Is this possible without BPM? Or what is the best approach for this?
    Thanks,
    Sri.

    Yes, this is possible without BPM.
    Just create the corresponding data type for the insertion.
    If the records to be inserted are different, there will be 2 different data types (one for header and one for detail).
    Do a multi-mapping where your source is mapped into the header and detail data types, and then send using the JDBC receiver adapter.
    For the structure of your data type for insertion, check this link:
    http://help.sap.com/saphelp_nw04/helpdata/en/7e/5df96381ec72468a00815dd80f8b63/content.htm
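    In outline, the receiver structure documented there looks like the sketch below (the message type, table, and field names are placeholders, not taken from your IDoc):
    <ns:OrderInsert>
      <StatementHeader>
        <dbTableName action="INSERT">
          <table>ORDER_HEADER</table>
          <access>
            <ORDER_ID>4711</ORDER_ID>
            <CUSTOMER>ACME</CUSTOMER>
          </access>
        </dbTableName>
      </StatementHeader>
      <StatementItem>
        <dbTableName action="INSERT">
          <table>ORDER_ITEM</table>
          <access>
            <ORDER_ID>4711</ORDER_ID>
            <ITEM_NO>10</ITEM_NO>
          </access>
        </dbTableName>
      </StatementItem>
    </ns:OrderInsert>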
    To access any database from XI, you will have to install the corresponding JDBC driver on your XI server.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3867a582-0401-0010-6cbf-9644e49f1a10
    Regards,
    Bhavesh

  • Best approach for RFC call from Adapter module

    What is the best approach for making an RFC call from a receiver file adapter module?
    1. JCo
    2. Is it possible to make use of the Mapping Lookup API classes to achieve this, or do those run in the mapping runtime environment only?
    3. Any other way?
    Has anybody ever tried this? Any pointers?
    Regards,
    Amol

    Hi ,
    The JCo lookup is internally the same as the JCo call; the only difference is that you are not hardcoding the system-related data in the code, so it is easier to maintain during transports.
    The JCo lookup code is also more readable.
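    For reference, a bare JCo call looks roughly like the sketch below (written against the standalone JCo 3 API; the destination name is a placeholder, and STFC_CONNECTION is just SAP's standard test RFC - in an adapter module you would read the connection data from a lookup rather than hardcode it):
    import com.sap.conn.jco.JCoDestination;
    import com.sap.conn.jco.JCoDestinationManager;
    import com.sap.conn.jco.JCoException;
    import com.sap.conn.jco.JCoFunction;
    public class RfcCallSketch {
        public static String ping() throws JCoException {
            // "MY_DEST" is a placeholder destination configured outside the code
            JCoDestination dest = JCoDestinationManager.getDestination("MY_DEST");
            JCoFunction fn = dest.getRepository().getFunction("STFC_CONNECTION");
            fn.getImportParameterList().setValue("REQUTEXT", "ping");
            fn.execute(dest); // synchronous RFC call
            return fn.getExportParameterList().getString("ECHOTEXT");
        }
    }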
    Regards
    Vijaya

  • What's Best Approach for Multitrack Classical Music?

    Can someone suggest the best approach for recording classical musicians onto
    four tracks? In this scenario, they play until they make a mistake on, say,
    measure 24, stop, then (take 2) go back to measure 20 and play until the next
    rough spot, and so on. Ultimately there may be 15 takes that all need to be
    trimmed and stitched together.
    In the old (tape) days, this was pretty basic editing. I would use a blade and block
    to cut out all the bad stuff on the multitrack tape, then I could mix. But how do I
    do this in Audition? (I use version 1.5.)
    I can't do the cuts in Edit View because the tracks would get out of sync.
    Assuming all the takes are in one session, in Multitrack View, this most basic of
    functions seems to elude me. What am I missing?

    Al the Drifter wrote:
    If you follow Steve's advice, and after doing the edits you discover
    that one instrument should come up 1 dB, you are screwed.
    I could be wrong about this in the classical music environment,
    where things are not close-mic'ed but if I am, I am confident Steve
    will correct me.  Ha.
    You always run the risk of small changes between takes - and that's where Audition 3 and the new improved crossfades score rather heavily. You won't notice 1dB on a single instrument across a fade though - it's hard to spot this as a jump, even, unless it's on pure tone. No, I very rarely close-mic stuff at all, although I did with a clavichord recently - it's seriously too quiet to mic any other way.
    jaypea500 wrote:
     when recording classical music, any engineer worth anything has the mix down pat as it's being recorded. 
    That's the way they used to work, certainly - but not nowadays, especially if it's done on location, which most classical recording is. What's more likely to happen is that you'd use decent mic preamps feeding straight into a multitrack, or even some software on a laptop. I generally record like that - but I also feed the multitrack outputs to a Yamaha mixer via ADAT, do a mix on that and record it back to a spare multitrack pair. I don't actually need to do that - but having a mix available from the multitrack that's pretty much there is good as far as being able to play back takes to conductors is concerned.
    Of course, one of the other reasons that classical sessions recorded on location aren't mixed on the spot is that the monitoring conditions are invariably far from ideal, and I'd have it that no engineer worth anything would ever risk a final mix done on location.
    But I only get paid to do all of this on a regular basis, so what would I know? Must be something though - my customers come back for more...

  • Best approach for building dialogs based on Java Beans

    I have a large number of Java Beans, each with several properties. These represent all the "data" in our system. We will now build a new GUI for the system, and I intend to reuse the beans as far as possible. My idea is to automatically generate the configuration dialogs for each bean using the java.beans package.
    What is the best approach for achieving this? Should I use PropertyEditors, should I build my own dialog generator using the Introspector class, or are there other suitable solutions?
    All suggestions and tips are very welcome.
    Thanks!
    Erik

    Definitely, it is better for you to use JTable. Why not try it?
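    For what it's worth, the Introspector route mentioned in the question can be sketched in a few lines (the Person bean here is a made-up stand-in for one of your data beans):
    import java.beans.BeanInfo;
    import java.beans.IntrospectionException;
    import java.beans.Introspector;
    import java.beans.PropertyDescriptor;
    public class DialogSketch {
        // Made-up bean standing in for one of the system's data beans
        public static class Person {
            private String name;
            private int age;
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public int getAge() { return age; }
            public void setAge(int age) { this.age = age; }
        }
        public static void main(String[] args) throws IntrospectionException {
            // Stop at Object.class so inherited properties like "class" are skipped
            BeanInfo info = Introspector.getBeanInfo(Person.class, Object.class);
            // Each PropertyDescriptor carries name, type, and accessors - enough
            // to generate one labelled input field per property in a dialog
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                System.out.println(pd.getName() + " : " + pd.getPropertyType().getSimpleName());
            }
        }
    }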

  • Best approach for IDs mapping..

    Hello,
    I'd like to ask about your experiences with a classic integration problem: the mapping of IDs (materials, partners...).
    What is the best approach for integration between SAP and other systems? Can you give me some hints?
    Thanks, Peter

    Hi Peter,
    you have 4 ways to do it:
    1. You can do it inside an integration process: an RFC call that checks a table of ID -> ID mappings (not so good, as you have to use an integration process, but very easy to build as this is standard).
    2. A table in R/3, changing the values in a user exit (you maintain the data in a table in R/3). This is the fastest way (no calls to other programs), but you have to create user exits, and this is not why you (or your client) bought XI.
    3. You can use this new RFC API:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/uuid/801376c6-0501-0010-af8c-cb69aa29941c
    which seems to be the best approach, as you don't need BPM for this and it's standard.
    4. Value mapping tables in XI...
    Regards,
    michal
    Message was edited by: Michal Krawczyk

  • Design Patterns, best approach for this app

    Hi all,
    I am starting with design patterns, and I would like to hear your opinion on what would be the best approach for this app.
    This is basically an app for data monitoring, analysis, and logging (voltage, temperature & vibration).
    I am using 3 devices for N channels (NI 9211A, NI 9215A, NI PXI 4472), all running at different rates, asynchronously.
    Signals are processed and monitored for logging at a rate specified by the user, and in real time as well.
    Individual devices can be initialized or stopped at any time.
    Basically I'm using 5 loops:
    1. GUI: Stop App, Reload Plot Names (Event handling)
    2. Chart & Log: Monitors data and starts/stops data logging at a time specified in the GUI (State Machine)
    3. Temperature DAQ monitoring @ 3 S/s (State Machine) - NI 9211A
    4. Voltage DAQ monitoring and scaling @ 1 kS/s (State Machine) - NI 9215A
    5. Vibration DAQ monitoring and analysis @ 25.6 kS/s (State Machine) - NI PXI 4472
    I have attached the files for review. Thanks in advance for taking the time.
    Attachments:
    V-T-G Monitor_Logger.llb (355 KB)

    mundo wrote:
    thanks Will for your response,
    So basically, could I apply a producer/consumer architecture to just the vibration analysis loop, or to all the data being collected by the Monitor/Logger loop?
    Is it OK to have individual loops for every DAQ device, as shown?
    Thanks.
    You could use the producer/consumer architecture to split the areas where you are doing both data collection and analysis in the same state machine. If one of these processes is not time-critical, or the data rate is slow enough, you could leave it in a single state machine. I admit that I didn't look through your code, but based purely on the descriptions above I would imagine that you could change the three collection state machines to use a producer/consumer architecture. I would leave your UI processing in its own loop, as well as the logging process. If the logging is time-critical, you may want to split that as well.
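    The thread is about LabVIEW, but the producer/consumer split itself is generic; here is a compact Java sketch of the idea (queue size, types, and loop counts are arbitrary):
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    public class ProducerConsumerSketch {
        public static void main(String[] args) {
            // A bounded queue decouples acquisition (producer) from analysis (consumer)
            final BlockingQueue<double[]> queue = new ArrayBlockingQueue<double[]>(64);
            Thread producer = new Thread(new Runnable() {
                public void run() {
                    try {
                        for (int i = 0; i < 100; i++) {
                            queue.put(new double[] { i, i * 0.5 }); // stand-in for a DAQ read
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            Thread consumer = new Thread(new Runnable() {
                public void run() {
                    try {
                        for (int i = 0; i < 100; i++) {
                            double[] sample = queue.take(); // blocks until data arrives
                            System.out.println(sample[0] + "," + sample[1]); // stand-in for analysis/logging
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            producer.start();
            consumer.start();
        }
    }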
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot

  • What is the Best Approach for System Defined Fields and Default values

    Hi ,
    Please let me know the best approach for providing a default value for the system-defined fields when creating a user, and how we can hide the system-defined fields at the time of user creation.

    You cannot provide default values for any attributes defined in the FormMetaData.xml file. You can only provide default values for fields defined under User Defined Fields.
    You can use entity adapters to populate some of the values, but you must supply an Organization, because there is an entity adapter (which you cannot modify) that verifies the organization name.
    -Kevin

  • What are the best practices for using the enhancement framework?

    Hello enhancement framework experts,
    Recently, my company upgraded to SAP NW 7.1 EhP6.  This presents us with the capability to use the enhancement framework.
    A couple of senior programmers were asked to deliver a guideline for use of the framework.  They published the following statement:
    "SAP does not guarantee the validity of the enhancement points in future releases/versions. As a result, any implemented enhancement points may require significant work during upgrades. So, enhancement points should essentially be used as an alternative to core modifications, which is a rare scenario.".
    I am looking for confirmation or contradiction to the statement  "SAP does not guarantee the validity of enhancement points in future releases/versions..." .  Is this a true statement for both implicit and explicit enhancement points?
    Is the impact of activated explicit and implicit enhancements on an SAP upgrade much greater than that of BAdIs and user exits?
    Are there any SAP-published guidelines/best practices for use of the enhancement framework?
    Thank you,
    Kimberly
    Edited by: Kimberly Carmack on Aug 11, 2011 5:31 PM

    Found an article that answers this question quite well:
    [How to Get the Most From the Enhancement and Switch Framework as a Customer or Partner - Tips from the Experts|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/c0f0373e-a915-2e10-6e88-d4de0c725ab3]
    Thank you Thomas Weiss!
