Dynamic queries best practice

Hello,
I want to create a report that the user can refine with radio buttons. As an example, referring to the APEX sample application's Products page, imagine adding radio buttons for [>] All prices [>] Under $500.
I think the technique demonstrated for building dynamic query reports (this is a 1.6 tutorial; link to it is here) is the way to do this, but I'm not sure it's the best approach.
I'm wondering about any design disadvantages and would appreciate comments. Also, if you know of a working example with source view in the public APEX workspaces, please post a link if it adds value beyond the tutorial I mentioned.
Thank you.
Albert

I use dynamic queries from time to time. Here is a working example you can have a look at:
http://htmldb.oracle.com/pls/otn/f?p=31517:1
Denes Kubicek
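
For the simple All prices / Under $500 filter described in the question, a dynamic query may not even be necessary; a single report query driven by a bind variable can handle it. A minimal sketch, assuming a hypothetical radio-group page item P1_PRICE_FILTER (static values ALL and UNDER_500) and the sample application's product table:

    -- Report region source (SQL). The WHERE clause reads the radio group
    -- directly, so no dynamic SQL is needed for this case.
    -- Item and table names are assumptions; adjust to your page.
    SELECT product_name, list_price
      FROM demo_product_info
     WHERE :P1_PRICE_FILTER = 'ALL'
        OR (:P1_PRICE_FILTER = 'UNDER_500' AND list_price < 500)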

Similar Messages

  • Dynamic Scheduling Best Practice -- IS-U-BF-PS E1DY E2DY

    I have been tasked with resolving several long-standing issues with my company's meter reading schedules. My question stems from the desire to implement the eventual corrections as close to a best-practice standard as possible.
    Near the end of 2009 I extended the dynamic schedule records out to the end of 2010 with transaction E1DY.
    At the beginning of 2010 I reported a program error which resulted in [Note 1411873|https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=0001411873&nlang=E&smpsrv=https%3a%2f%2fwebsmp103%2esap-ag%2ede&hashkey=9D07D6F4306CBF2AF0B69DEE0022142E| Schedule record: Previous period end in dynamic scheduling]
    I requested clarification of the comment:
    "In certain operational scenarios that are not explicitly forbidden (but which are strongly advised against), the end of the previous period of the first schedule record of a series of a year may not be calculated correctly in the dynamic scheduling"
    & was advised:
    It means cases where you don't have a full sequence of MRUs.
    The standard process of dynamic scheduling is designed so that you have several readings for every day (consolidated in meter reading units / MRUs).
    There was no further clarification other than the confirmation that the configuration existing in our system did not match this ideal condition.
    The Current Design of Dynamic schedules is as follows:
    1. No Budget Billing implemented at all. All Portions defined with a parameter record Without Budget Billing configured
    2. Several Groups of Monthly Portions allocated to Calendar Z3
    2a.     21 Monthly Portions
    2b.     21 Monthly Portions
    2c.     21 Monthly Portions
    2d.     21 Monthly Portions
    2e.     21 Monthly Portions
    2f.     20 Monthly Portions
    2g.     1 Monthly Portion
    2h.     1 Monthly Portion
    - Please note that this results in day 21 of 2a-2e not including day 1 of the 2f monthly portions as intended. This results in manual movement of the 20 monthly portions in transaction E2DY, one by one.
    - Please note that for portions in groups 2d and 2e there is a "gap" in the configuration where the factory calendar is not assigned for days 12 and 13 in the series, resulting in a gap in schedule record creation.
    3. Many Meter Reading Units are configured for each portion.
    My intended changes to the configuration are as follows:
    4. No change to Budget Billing
    5. All Groups of 21 Monthly Portions (2a - 2e) share the same configuration, so change all Meter reading units for (2a - 2e) to 2b (least change)
    6. 2g is configured the same as day 14 of groups 2a - 2e, move to 2b equivalent
    7. 2h is configured the same as day 15 of groups 2a - 2e, move to 2b equivalent
    8. 2f is configured on Calendar Z3, so update configuration to Calendar ZL
    9. Generate schedule records for Calendars Z3 & ZL
    Having read all the above, can anyone expert in the design and implementation of dynamic scheduling think of any issues which may arise from updating the configuration as described?
    If anything is unclear, let me know; I'm definitely interested in feedback to help ensure the corrections are made smoothly, and to clarify what the "operational scenarios that are not explicitly forbidden (but which are strongly advised against)" mentioned in the SAP Note actually were.
    Also as a final question, how feasible would it be to delete the unused portions after these changes are migrated?
    regards
    Daniel

    I have started on point 8 first:
    After moving all 2f portions to calendar ZL and re-entering the Meter Reading Units to resync the calendar configuration, I used E3DY to delete schedules on calendar ZL from a future date.
    This has eliminated the offending schedules on these portions from the Z3 Calendar.
    Point 9:
    Using E1DY to generate the schedules and E2DY to "merge" them with the end of the older schedules still on Calendar Z3 has resulted in the expected 20-day cycle.
    I am now dealing with the portions still on the Z3 calendar by regenerating them via E1DY and moving them to the correct dates via E2DY to verify the schedules.

  • Best Practice: Dynamically changing Item-Level permissions?

    Hi all,
    Can you share your opinion on the best practice for Dynamically changing item permissions?
    For example, given this scenario:
    Item Creator can create an initial item.
    After the item creator creates an item, the item becomes read-only for them. Other users can create items, but they can only see their own entries (Created By).
    At any point in time, other users can be given Read access (or any other access) by an Administrator to a specific item.
    The item is then given edit permission to a Reviewer and Approver. Reviewers can only edit, and Approvers can only approve.
    After the item has been reviewed, the item becomes read-only to everyone.
    I read that a List / Library can only hold a certain number of unique permissions before performance issues start to set in. Given the requirements above, it looks like item-level permission is unavoidable.
    Do you have certain ideas how best to go with this?
    Thank you!

    Hi,
    According to your post, my understanding is that you want to change item-level permissions.
    There is no out-of-the-box way to accomplish this with SharePoint.
    You can create a custom permission level using Visual Studio that allows users to add and view items, but not edit them.
    Then create a group with that custom permission level. Users in this group would be able to create and add items, but they could not edit them.
    On CodePlex there is a set of custom workflow activities, but by default it only provides four permission levels:
    Full Control, Design, Contribute and Read.
    You should also customize some permission levels for your scenario.
    What's more, when using SharePoint Designer 2013, you should use the 2010 platform to create the workflow with these activities:
    https://spdactivities.codeplex.com/wikipage?title=Grant%20Permission%20on%20Item
    Thanks & Regards,
    Jason
    Jason Guo
    TechNet Community Support

  • Best Practice on Creating Queries in Production

    We are a fairly new BI installation. I'm interested in the approach other installations take to creation of queries in the production environment. Is it standard to create most queries directly into the production system? Or is it standard to develop the queries in the development system and transport them through to production?

    Hi,
    Best practices apply to all developments, whether R/3, BI modelling or reporting: as per best practice, we develop in the development system, test in the test box, and finally deploy successful developments to production. For user analysis purposes, users can do ad-hoc analysis, or in some scenarios they create user-specific custom queries (sometimes referred to as X-queries, created by a super user).
    So it is always best to do all your development in the development box and then transport it to production after successful QA testing.
    Dev

  • Best practice multi-org, MW, SOA, Siebel authen with static or dynamic url

    All,
    My client's integration lead had a question about the current best practices for multi-org structures with MW, SOA and Siebel. Internally the client contact is being pressured to provide dynamic URLs for authentication (for each area and new addition…currently exposed web services include Acct, Payment, Contact, etc… currently 60-70 services). However, he would like to stay with his current process for web service integration and just add pos id, user id, org id, etc. to the message string that is passed.
    Please let me know what you think and why so I can pass this information along.

    Hi, we too are stuck with the same kind of issue. Please let me know if you found any solution for this. Your help is highly appreciated.
    Thanks,
    Ravi Kasu.
    [email protected]

  • Best Practices for highly dynamic features like Search

    For a project I need to implement a "Search" component, which will most probably use the Lucene that is built into CQ5. Since most of the other content on the site is dynamic but cached on the dispatcher, my concern is the load such a dynamic feature will create on the publish instance.
    What are the best practices to minimize the load on publish instance for such a scenario?

    One option is to have your search results display via AJAX rather than a full page request. That way most of the page is cached in dispatcher and only the AJAX request with the search results is dynamic.

  • Best Practices - Distributing Dynamic VI's with LV2011

    I'm distributing code which consists of a main program that calls existing (and future) VIs dynamically, but one at a time. The dynamically called VIs do not have input or output terminals. They run, one at a time, in a sub-panel in the main program. The main program needs to maintain a reference to the dynamically loaded VI so it can be sure the dynamically loaded VI has fully stopped before unloading it and calling a replacement VI. These VIs do not use Shared Variables or Globals, but may have a few VIs in common with the main program (it would be OK to duplicate these VIs in the release).
    With that background, what are the best practices these days for releasing dynamically loaded vi's (and their dependents)?
    If I use a Project Library (.lvlib), it would seem that I need to first build a .exe containing the top-level VIs (the ones to be dynamically loaded), so that a separate .lvlib can be generated which includes their dependencies. The contents of this .lvlib and a .lvlib containing the top-level VIs can then be merged into a single .lvlib, and then a packed library can be generated for distribution with the main .exe.
    This seems way too involved (but necessary?).
    My goal is to simply have a .exe for the main program, and some other structure containing the dynamically called VIs and their dependents. This seemed so straightforward when a .exe was really a .llb a few years ago.
    Thanks in advance for your feedback.

    A great source of information I've found since posting is here:
    http://zone.ni.com/devzone/cda/pub/p/id/1261
    regarding packed libraries. Bottom line - they automatically include the dependencies of the top-level dynamically linked VIs placed in the .lvlib from which the .lvlibp is built.
    I cannot seem to find an example of dynamically calling a VI within a packed library. If I use the old ".exe as .llb" method, I get an Error 7.

  • Where to find best practices for tuning data warehouse ETL queries?

    Hi Everybody,
    Where can I find some good educational material on tuning ETL procedures for a data warehouse environment? Everything I've found on the web regarding query tuning seems to be geared only toward OLTP systems. (For example, most of our ETL queries don't use a WHERE clause, so the vast majority of searches are table scans and index scans, whereas most index tuning sites are striving for index seeks.)
    I have read Microsoft's "Best Practices for Data Warehousing with SQL Server 2008R2," but I was only able to glean a few helpful hints that don't also apply to OLTP systems:
    often better to recompile stored procedure query plans in order to eliminate variances introduced by parameter sniffing (i.e., better to use the right plan than to save a few seconds and use a cached plan SOMETIMES);
    partition tables that are larger than 50 GB;
    use minimal logging to load data precisely where you want it as fast as possible;
    often better to disable non-clustered indexes before inserting a large number of rows and then rebuild them immediately afterward (sometimes even for clustered indexes, but test first) - see the sketch after this list;
    rebuild statistics after every load of a table.
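    For illustration, a minimal T-SQL sketch of the disable-load-rebuild pattern from the list above; the table, index, and staging names are hypothetical:

        -- Disable the nonclustered index so the bulk insert doesn't maintain it row by row.
        ALTER INDEX IX_FactSales_CustomerKey ON dbo.FactSales DISABLE;

        -- TABLOCK helps the insert qualify for minimal logging under the
        -- simple or bulk-logged recovery model.
        INSERT INTO dbo.FactSales WITH (TABLOCK) (SaleDate, CustomerKey, Amount)
        SELECT SaleDate, CustomerKey, Amount
        FROM   staging.FactSalesLoad;

        -- Rebuild the index and refresh statistics after the load.
        ALTER INDEX IX_FactSales_CustomerKey ON dbo.FactSales REBUILD;
        UPDATE STATISTICS dbo.FactSales WITH FULLSCAN;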
    But I still feel like I'm missing some very crucial concepts for performant ETL development.
    BTW, our office uses SSIS, but only as a glorified stored procedure execution manager, so I'm not looking for SSIS ETL best practices. Except for a few packages that pull from source systems, the majority of our SSIS packages consist of numerous "Execute SQL" tasks.
    Thanks, and any best practices you could include here would be greatly appreciated.
    -Eric

    Online ETL solutions are really one of the biggest challenges. To do this efficiently, you can read my blogs on online DWH solutions; by the end you will know how to configure an online DWH solution for ETL using the MERGE command of SQL Server 2008, along with some important concepts relevant to any DWH solution, such as indexing, de-normalization, etc.
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927061-data-warehousing-workshop-1-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927103-data-warehousing-workshop-2-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927173-data-warehousing-workshop-3-4-
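    For reference, a minimal sketch of a MERGE-based incremental load along those lines; all table and column names here are hypothetical:

        -- Upsert the staged extract into the target table in one statement.
        MERGE dbo.DimCustomer AS tgt
        USING staging.CustomerExtract AS src
            ON tgt.CustomerKey = src.CustomerKey
        WHEN MATCHED AND (tgt.CustomerName <> src.CustomerName OR tgt.City <> src.City) THEN
            UPDATE SET tgt.CustomerName = src.CustomerName,
                       tgt.City         = src.City
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (CustomerKey, CustomerName, City)
            VALUES (src.CustomerKey, src.CustomerName, src.City);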
    Kindly let me know if any further help is needed
    Shehap (DB Consultant/DB Architect) Think More deeply of DB Stress Stabilities

  • Best Practices - Dynamic Ranking, Dimension Values to Return, etc.

    The pinned post says non-technical questions belong on the Business Forum. I can't find an Endeca-specific business forum. If there is one, please tell me where to find it.
    My question is about dynamic ranking and the initial display of only the top N dimension values with a "More..." option to see the rest of them.
    What's the current wisdom on this from a usability point of view? Use it, don't use it? If using it, show how many values initially?
    Or, if not using it, do you instead set up a hierarchy of dimensions so that the user never has to look at 50 choices for something?
    This is not a technical question. What is the current wisdom? What are the best practices?
    Thanks!

    Dynamic ranking is a good choice only if the choices cannot be grouped further. In my practice, most content can be normalized and restricted to a very limited set of options. Dynamic ranking with "More..." is an easy way out and seems like a lazy take on content management.

  • Best practice on dynamically changing drop down in page fragment

    I have a search form, which is a page fragment that is shared across many pages. This form allows users to select a field from the drop-down list and search for a particular value, as seen in the screenshot here:
    http://photos1.blogger.com/blogger2/1291/174258525936074/1600/expanded.5.gif
    Please note that the search options are part of a page fragment embedded within a page - so that I can re-use it across many pages.
    The drop-down list changes based on the page in which this fragment is embedded. For the users page, it will be last name, first name, etc. For the parts page, it will be part number, part description, etc.
    Here is my code:
        // Build one drop-down option per column of the table on this page.
        Iterator it = getTableRowGroup1().getTableColumnChildren();
        Option options[] = new Option[getTableRowGroup1().getColumnCount()];
        int i = 0;
        while (it.hasNext()) {
            TableColumn column = (TableColumn) it.next();
            if (column.getSort() != null) {
                // Sortable column: use the sort field name as the option value.
                options[i] = new Option(column.getSort().toString(), column.getHeaderText());
            } else {
                options[i] = new Option(column.getHeaderText(), column.getHeaderText());
            }
            i++;
        }
        // Publish the options to the shared search bean backing the drop-down.
        search search = (search) getBean("search");
        search.getSearchDDFieldsDefaultOptions().setOptions(options);
    This code works, but it makes all fields of the table available in the drop-down. I want to be able to pick and choose. Maybe have an external properties file associated with each page, where I can list the fields available for the search drop-down?
    What is the best practice for loading the list of options available for the drop-down on each page (i.e. last name, first name, etc.)? I could have the code embedded in prerender of each page, use some sort of resource bundle for each page, or maybe use a bean?

    I have to agree with Pixlor and here's why:
    http://www.losingfight.com/blog/2006/08/11/the-sordid-tale-of-mm_menufw_menujs/
    and another:
    http://apptools.com/rants/menus.php
    Don't waste your time on them, you'll only end up pulling your hair out  :-)
    Nadia
    Adobe® Community Expert : Dreamweaver
    Unique CSS Templates |Tutorials |SEO Articles
    http://www.DreamweaverResources.com
    Book: Ultimate CSS Reference
    http://www.sitepoint.com/launch/005dfd4/3/133
    http://twitter.com/nadiap

  • Best Practices for Handling queries for searching XML content

    Experts: We have a requirement to count, across 4 M rows, the documents where a specific XML tag's value begins with a given string. I have a text index created, but the query is extremely slow when I use the CONTAINS operator.
    select count(1) from employee
    where contains(doc, 'scott% INPATH (/root/element1/element2/element3/element4/element5)') > 0
    What is Oracle's best-practice recommendation for querying / indexing such searches?
    Thanks

    Can you provide a test case that shows the structure of the data and how you've generated the index? Otherwise, the generic advice is going to be "use prefix indexing".
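
    For illustration, a minimal sketch of what a prefix-indexed Oracle Text index might look like for the EMPLOYEE.DOC column from the question; the preference name, index name, and prefix length settings are assumptions to adjust for your data:

        -- Wordlist preference that pre-indexes prefixes of 3..8 characters,
        -- so 'scott%'-style searches don't have to expand at query time.
        BEGIN
          ctx_ddl.create_preference('doc_wordlist', 'BASIC_WORDLIST');
          ctx_ddl.set_attribute('doc_wordlist', 'PREFIX_INDEX', 'YES');
          ctx_ddl.set_attribute('doc_wordlist', 'PREFIX_MIN_LENGTH', '3');
          ctx_ddl.set_attribute('doc_wordlist', 'PREFIX_MAX_LENGTH', '8');
        END;
        /

        -- PATH_SECTION_GROUP keeps INPATH() queries working against the XML structure.
        CREATE INDEX employee_doc_ctx ON employee (doc)
          INDEXTYPE IS CTXSYS.CONTEXT
          PARAMETERS ('SECTION GROUP ctxsys.path_section_group WORDLIST doc_wordlist');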

  • Best Simple Dynamic Form Content Practice

    I have an idea for a form with simple content subforms that I'm trying to put together, and I'm hoping I can do this in ColdFusion, but I'm not yet good enough at it to know the best practices for getting this done.
    I have a form with a general question followed by choices with radio buttons.  I'm looking for added form content to be displayed when one or more radio buttons are selected.  For example:
    What service are you looking for?
      ()  Hair
      ()  Nails
      ()  Shave
      ()  Wax
    Then if somone selected "Hair", under Hair more form content would be displayed.  For example:
    What service are you looking for?
      ()  Hair
              What are you doing
                   ()  getting ready for a wedding
                   ()  going on a date
                   Do you need a
                   ()  cut
                   ()  trim
                   ()  Style
                        ()  Mohawk
                        ()  Mullet
      ()  Nails
      ()  Shave
      ()  Wax
    Sort of like that, with the added content extending the form. I'm collecting the data in a simple MS Access DB and working with CS4. My server runs the latest version of CF.
    Sounds simple enough, but I'm not sure how to go about it and/or what the best practices are. Any guidance?
    Thank you for any help in advance!!!
    -- Dax

    Hmmm. Actually, maybe using radio buttons is not such a good idea, since I want subforms to populate for each selection and more than one (even all) can be selected, so maybe checkboxes. But here's the catch: what you're suggesting is that the user hit the submit button to return the sub-form, and that's not what I'm looking for. I'm actually looking for the form itself to expand with a selection automatically. In other words, the user checks a box, then the form expands with additional questions for their selection. Then, when the entire form is filled out as best it can be, the user submits the data. THAT's more like what I'm looking for.

  • SAP BO Dashboards 4.1 best practice on layout and components

    Dear SCN,
    I have a requirement to create a BO 4.1 dashboard with data & visualization based on an Excel sheet which is currently in use as a management dashboard. The current Excel dashboard has more than 100 KPIs in one view, which is readable only if you put it on a slide and view it full screen by running a slideshow.
    Question 1:
    1. Since the suggested size of the Xcelsius canvas is not more than 1024 x 768, so that it is viewable without a scroll bar in BI launchpad, in any browser, or in PDF, I am trying to confirm in this forum that a canvas size of 1024 x 768 is the recommended maximum for a clear view of the dashboard in any browser/BI launchpad. Please confirm, as it will help me design the KPIs and their visualization.
    Question 2:
    1. I am using the BICS connection and accessing the source data from BW. Because there are many KPIs, spanning roughly 10 cubes and 40 queries since the data is across different modules, I would like to know the recommended number of query/cube connections in a dashboard using BICS connectivity that does not affect performance.
    2. For the same dashboard using a BICS connection, what is the ideal number of components such as charts/scorecards/spreadsheet tables recommended to ensure better performance?
    I appreciate any answers that can help finalize the design for this dashboard, whose data and visualization requirements are very high compared to normal dashboards.
    Thanks and Regards
    Jana

    Hi Suman,
    Thanks for your answers. The answers and links you attached are helpful, and they answered my questions related to canvas size and connections.
    I am expecting some benchmark numbers, as per best practices, for the number of components to use to ensure the dashboard loads well. As increasing the number of components increases the size of the dashboard and the time needed to load data for the components, I am looking for a best-practice number, considering the points below.
    1. When I say the number of components, I am not counting components like labels, text boxes, combo boxes or list boxes. I am counting the components used for visualization and interactive drill-down on top of the visualized charts (e.g. column charts, pie charts, gauges).
    2. I am not going to use many calculations/formulas in my dashboard, as the values and structure are almost the same as the BEx query.
    3. I have around 10 to 12 connections.
    4. The data sets are not more than 900 rows in total. For any control, we will be binding only 100 rows at most, as the data for the KPIs is summarized at the year/month level in the BW layer.
    Since there are many KPIs, there are many visualizations, and we can't re-use the visualization charts for most of the KPIs. Currently I am ending up with ~35 charts/gauges, along with other label and selection controls, to show 100 KPIs with unique visualization requirements, and I am going for a tab-wise layout with dynamic visibility to accommodate and separate them logically.
    Hope these details give a clear picture of why I am looking for a benchmark on the number of components.
    I appreciate your help!
    Thanks and Regards
    Jana

  • Reflection Performance / Best Practice

    Hi List
    Is reflection best practice in the following situation, or should I head down the factory path? Having read http://forums.sun.com/thread.jspa?forumID=425&threadID=460054 I'm now wondering.
    I have a Web servlet application with a backend database. The servlet currently handles 8 different types of JSON data (there is one JSON data type for each table in the DB).
    Because JSON data is well structured, I have been able to write a simple handler, all using reflection, to dynamically invoke the Data Access Object and CRUD methods. So one class replaces 8 DAO's and 4 CRUD methods = 32 methods - this will grow as the application grows.
    Works brilliantly. It's also dynamic. I can add a new database table by simply subclassing a new DAO.
    The question is, is this best practice? Is there a better way? There are two sets of Class.forName(), newInstance(), getClass().getMethod(), invoke(); one for getting the DAO and one for getting the CRUD method...
    What is best practice here. Performance is important.
    Thanks, Len

    bocockli wrote:
    "What is best practice here. Performance is important."
    I'm going to ignore the meat of your question (sorry, there are others who probably have better insights there) and focus on this point, because I think it's important.
    A best practice, when it comes to performance, is: have clear, measurable goals.
    If your only performance-related goal is "it has to be fast", then you never know when you're done. You can always optimize some more. But you almost never need to.
    So you need to have a goal that can be verified. If your goal is "I need to be able to handle 100 update requests for Foo and 100 update requests for Bar and 100 read-only queries for Baz at the same time per second", then you have a definite goal and can check if you reached it (or how far away you are).
    If you don't have such a goal, then you'll be optimizing until the end of time and still won't be "done".

  • Table partitioning best practices

    Just looking for some ideas.
    We have a large information warehouse table that we are looking to partition on the 'service_period' id. We have been asked to partition on every month of every year, which right now will create approximately 70 partitions. The other problem is that this is a rolling or dynamic partition scheme, meaning we will have a 'new' partition value with each new month. I understand that 11g has rolling partition functionality, but we are not there yet.
    So right now we are looking for a best practice for this scenario. We are thinking of possibly creating a partition for each year and indexing the service period within each partition; maybe hash partitioning on the service period id (although that does not seem to group the service periods distinctly within each partition); or somehow creating the partitions dynamically via PL/SQL (creating the table with a basic partition and then running an ALTER TABLE on the data to create the proper number of partitions within a list partition).
    I am also wondering if there is a point at which a table has too many partitions. I am thinking 70 may be a little extreme, but I'm not sure. We are going to do some performance testing, but it would be nice to hear from the community. We have 5,000,000 rows over approx. 70 partitions, giving us approx. 70,000 records per partition. The other option would be to create the partitions based on year and then apply an index on top for the service period to reduce the number of partitions.
    Thanks in advance,
    Bruce Carson

    This is not a lot of data, so the effort of partitioning may not be worth the benefit you receive. 70 partitions is not unreasonable. Do you have performance problems? Do the majority of your queries reference service_period? Do you have a lot of full table scans?
    Partitioning strategies depend on the queries you plan to run, your data distributions, and your purge / archival strategy.
    Think about whether you should pre-create partitions for years in advance. Think about whether you should put every partition in a separate tablespace for easy purging and archival. Think about what indexing you will use (can you use local indexes, or do you need global?). Think about what data changes are happening. Are they all on the newest data? Can you compress older partitions with PCTFREE 0 to improve performance?
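
    To make the pre-created monthly partition idea concrete, a minimal Oracle range-partitioning sketch; the table, column, and tablespace names are hypothetical, and the partition list would be extended as far ahead as you pre-create:

        -- Monthly range partitions created in advance, each in its own tablespace
        -- for easy purging/archival, plus a local index on the service period.
        CREATE TABLE service_fact (
          service_period_id  NUMBER        NOT NULL,
          service_date       DATE          NOT NULL,
          amount             NUMBER(12,2)
        )
        PARTITION BY RANGE (service_date) (
          PARTITION p_2010_01 VALUES LESS THAN (DATE '2010-02-01') TABLESPACE ts_2010_01,
          PARTITION p_2010_02 VALUES LESS THAN (DATE '2010-03-01') TABLESPACE ts_2010_02,
          PARTITION p_max     VALUES LESS THAN (MAXVALUE)          TABLESPACE ts_default
        );

        CREATE INDEX service_fact_period_ix
          ON service_fact (service_period_id) LOCAL;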
