Do you know the best approach with data...?

I am considering the best approach for returning a result set from an EJB to my JSP page, but I don't know which approach is best. Please comment. (Since a ResultSet cannot be serialized, returning it directly is not an option.)
Approach A - Make a custom class with get/set methods representing each column in the result set, and use that class in the JSP. However, I find this tedious because whenever I add a column to the select statement, I have to add a class variable too.
Approach B - Manually copy the data from the result set into a Vector, then return the Vector to the JSP.
Approach C - Use a RowSet instead and return the RowSet to the JSP.
Many thanks to you all.

Hello,
Approach A is not recommended - you would have to leave the result set open, and so leave the connection to the database open.
Approach B is better.
Approach C - well, RowSets are new in 1.4 and I have not tried them yet. They look useful, but is your app running on 1.4?
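
For what it's worth, here is a minimal sketch combining Approaches A and B: the EJB copies each row into a serializable bean and returns a List, so the ResultSet and its connection can be closed before anything reaches the JSP. All names here are illustrative, and on JDK 1.4 you would drop the generics.

import java.io.Serializable;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class OrderDao {

    /** Serializable row bean (Approach A); the fields are illustrative. */
    public static class OrderRow implements Serializable {
        private int orderNo;
        private String customer;
        public int getOrderNo() { return orderNo; }
        public void setOrderNo(int v) { orderNo = v; }
        public String getCustomer() { return customer; }
        public void setCustomer(String v) { customer = v; }
    }

    /**
     * Copies the ResultSet into a List before anything database-bound is
     * returned, so the connection never leaks to the JSP. The List plays
     * the role of the Vector in Approach B.
     */
    public List<OrderRow> findOpenOrders(Connection con) throws Exception {
        List<OrderRow> rows = new ArrayList<OrderRow>();
        PreparedStatement ps = con.prepareStatement(
                "select order_no, customer from orders where status = 'O'");
        try {
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                OrderRow row = new OrderRow();
                row.setOrderNo(rs.getInt("order_no"));
                row.setCustomer(rs.getString("customer"));
                rows.add(row);
            }
        } finally {
            ps.close(); // closing the statement also closes its ResultSet
        }
        return rows;
    }
}

The JSP then simply iterates over the List. Adding a column still means touching the bean, which is the price of Approach A, but the connection handling stays safe.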

Similar Messages

  • What's the best approach to work with Excel, csv files?

    Hi gurus, I have a question for you. In your experience, what's the best approach to working with Excel or csv files that have to be uploaded through DataServices to your datawarehouse?
    Let's say your end user, who is not a programmer, creates a group of 4 Excel files with different calculations on a monthly basis, so that a set of reports can be generated from the datawarehouse once the files have been uploaded to tables in your DWH. The calculations vary from month to month. The user doesn't have a front-end to upload the Excel files directly to Data Services. The end user also needs to keep track of which person uploaded the files for a given month.
    1. The end user places their 4 Excel files in a shared directory that is visible to DataServices.
    2. DataServices executes a scheduled job that reads the four files and uploads them to the datawarehouse at a set time, let's say at 9:00pm.
    It makes me wonder... what happens if the user needs to present their reports immediately and can't wait until 9:00pm? Is it possible for the end user to execute some kind of action (outside the DataServices environment) so DataServices "knows" it has to process those files right now, instead of waiting for the night schedule?
    Is there a way for DS to track who uploaded those files?
    Would it be better to build a front-end for the end user so they can upload their four files directly to the datawarehouse?
    Waiting for your comments to resolve this dilemma.
    Best Regards
    Erika

    Hi,
    There are functions in DS that capture input files automatically. You could use the file_exists() or wait_for_file() function to do that. Schedule the job to run every few minutes and, if the file exists, run the load. This can be done by using a file name with a date and timestamp etc., or by moving the old files to an archive after each run so that DS waits for new files to show up.
    Check this - Selective Reading and Postprocessing - Enterprise Information Management - SCN Wiki
    Hope this helps.
    Arun
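
    For what it's worth, wait_for_file() boils down to a poll-for-trigger-file loop. Here is a minimal Java sketch of that pattern, with hypothetical paths and timings, in case you want to prototype the trigger outside DS:

    import java.io.File;

    // Poll for a trigger file the end user drops next to the uploads; once it
    // appears, start the load instead of waiting for the 9:00pm schedule.
    // The path, polling interval and timeout are all hypothetical.
    public class WaitForFile {
        public static void main(String[] args) throws InterruptedException {
            File trigger = new File("/shared/upload/ready.flag");
            long deadline = System.currentTimeMillis() + 60L * 60 * 1000; // give up after 1 hour
            while (System.currentTimeMillis() < deadline) {
                if (trigger.exists()) {
                    System.out.println("Trigger found - start the load job now");
                    trigger.delete(); // consume the trigger so the next run waits again
                    return;
                }
                Thread.sleep(60 * 1000); // check once a minute
            }
            System.out.println("No trigger - fall back to the scheduled run");
        }
    }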

  • What is the best approach to process data on a row-by-row basis?

    Hi Gurus,
    I need to code a stored procedure that processes sales_orders into invoices. I think I must process row by row, but if possible I don't want to use a cursor. The algorithm is below:
    for all sales_orders with status = "open"
        check the credit limit
        if over the credit limit -> insert row into log_table; process next order
        check for overdue invoices
        if there is an overdue invoice -> insert row into log_table; process next order
        check all order_items for stock availability
        if any item has insufficient stock -> insert row into log_table; process next order
        if all checks above pass:
            create invoice (header + details)
    end_for
    What is the best approach to process data row by row like this?
    Thank you for your help,
    xtanto

    Processing data row by row is not the fastest method out there. You'll be sending many more SQL statements to the database than needed. The advice is to use plain SQL and, if that's not possible or too complex, PL/SQL with bulk processing.
    In this case a SQL only solution is possible.
    The example below is oversimplified, but it shows the idea:
    SQL> create table sales_orders
      2  as
      3  select 1 no, 'O' status, 'Y' ind_over_credit_limit, 'N' ind_overdue, 'N' ind_stock_not_available from dual union all
      4  select 2, 'O', 'N', 'N', 'N' from dual union all
      5  select 3, 'O', 'N', 'Y', 'Y' from dual union all
      6  select 4, 'O', 'N', 'Y', 'N' from dual union all
      7  select 5, 'O', 'N', 'N', 'Y' from dual
      8  /
    Table created.
    SQL> create table log_table
      2  ( sales_order_no number
      3  , message        varchar2(100)
      4  )
      5  /
    Table created.
    SQL> create table invoices
      2  ( sales_order_no number
      3  )
      4  /
    Table created.
    SQL> select * from sales_orders
      2  /
            NO STATUS IND_OVER_CREDIT_LIMIT IND_OVERDUE IND_STOCK_NOT_AVAILABLE
             1 O      Y                     N           N
             2 O      N                     N           N
             3 O      N                     Y           Y
             4 O      N                     Y           N
             5 O      N                     N           Y
    5 rows selected.
    SQL> insert
      2    when ind_over_credit_limit = 'Y' then
      3         into log_table (sales_order_no,message) values (no,'Over credit limit')
      4    when ind_overdue = 'Y' and ind_over_credit_limit = 'N' then
      5         into log_table (sales_order_no,message) values (no,'Overdue')
      6    when ind_stock_not_available = 'Y' and ind_overdue = 'N' and ind_over_credit_limit = 'N' then
      7         into log_table (sales_order_no,message) values (no,'Stock not available')
      8    else
      9         into invoices (sales_order_no) values (no)
    10  select * from sales_orders where status = 'O'
    11  /
    5 rows created.
    SQL> select * from invoices
      2  /
    SALES_ORDER_NO
                 2
    1 row selected.
    SQL> select * from log_table
      2  /
    SALES_ORDER_NO MESSAGE
                 1 Over credit limit
                 3 Overdue
                 4 Overdue
                 5 Stock not available
    4 rows selected.
    Hope this helps.
    Regards,
    Rob.

  • What is the best approach to return large data from a stored procedure?

    No answers to my original post - maybe better luck this time, thanks!
    We have a stored proc (Oracle 8i) that:
    1) receives some parameters,
    2) performs computations which create a large block of data,
    3) returns this data to the caller.
    It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure with an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) into a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual), but I get the error "ORA-01460: unimplemented or unreasonable conversion requested".
    I suspect this is taking too much time; any tips on the best approach here? Is there a resource with REAL examples of returning large data?
    If I switch to CLOB, will it speed up the process, be compatible with the callers, etc.? All references to CLOB I have seen use trivial examples.
    Thanks for any help,
    Yoram Ayalon

    Create a new farm in the secondary Data Center at the same patch level with the desired configuration. Replicate the databases using the method of choice (Mirroring, AlwaysOn, etc.). Create a downtime window where you can then attach the databases to the new farm's Web Application(s)/Service Application(s).
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
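
    Back on the original Oracle question, here is a minimal JDBC sketch of the CLOB route, assuming a hypothetical procedure get_report(p_id IN NUMBER, p_doc OUT CLOB) that writes its working buffer into the OUT parameter instead of going through a temp table and REF CURSOR:

    import java.io.Reader;
    import java.sql.CallableStatement;
    import java.sql.Clob;
    import java.sql.Connection;
    import java.sql.Types;

    // Sketch of the CLOB alternative; get_report and its parameters are
    // hypothetical. The CLOB is streamed in chunks so the large block never
    // has to exist twice in memory on the client side.
    public class ClobCaller {
        public static String fetchReport(Connection con, int reportId) throws Exception {
            CallableStatement cs = con.prepareCall("{ call get_report(?, ?) }");
            try {
                cs.setInt(1, reportId);
                cs.registerOutParameter(2, Types.CLOB);
                cs.execute();
                Clob clob = cs.getClob(2);
                StringBuilder sb = new StringBuilder();
                Reader r = clob.getCharacterStream();
                char[] buf = new char[8192];
                int n;
                while ((n = r.read(buf)) > 0) {
                    sb.append(buf, 0, n);
                }
                return sb.toString();
            } finally {
                cs.close();
            }
        }
    }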

  • I have a MacBook Pro 5,4 running OS X 10.6.8 and Safari 5.1.10. A website I like has a known bug with 5.1.10 and recommends I install a newer version of Safari or use Firefox or Chrome. Just looking for advice on the best approach. Thanks!


    Unfortunately, Safari cannot be updated past 5.1.10 on a Mac running v10.6.8.
    So, the options are to upgrade to a newer OS X or use Firefox or Chrome.
    Be aware, Apple no longer supports Snow Leopard v10.6 > www.ibtimes.com/apple-kills-snow-leopard-os-x-106-no-longer-receives-security-updates-1558393
    See if your Mac can run v10.9 Mavericks > OS X Mavericks: System Requirements
    If so, you can download and install Mavericks for free from the App Store.
    Read prior to upgrading > Upgrading to 10.7 and above, don't forget Rosetta! | Apple Support Communities

  • What is the best approach to handle multiple FKs to a single table?

    If two tables can be joined to each other in more than one way, for example:
    MAIN table is (col1, col2,....coln, person_creator_id, person_modifier_id)
    PERSON table is (person_id, name, address,........ phone) etc
    At database level PERSON_CREATOR_FK and PERSON_MODIFIER_FK are defined.
    Objective is to create a report that shows
    col1, col2...coln, person creator name, person modifier name
    If the above two objects are imported with their FKs into an EUL and Discoverer Plus is used to create the above report, then on first inclusion of a person name Discoverer Plus will ask you to pick the join (provided the checkbox to disable this feature is not checked). Once you pick the 'person creator' join it will never allow you to pick the person modifier name.
    One solution is to create a custom folder with a query like:
    select col1, col2, ...coln,
           pc.name, pc.address, .... pc.phone,
           pm.name, pm.address, .... pm.phone
    from main m,
         person pc,
         person pm
    where m.person_creator_id = pc.person_id
    and m.person_modifier_id = pm.person_id
    A second solution is to import the PERSON folder twice into the EUL (optionally naming one person_creator and the other person_modifier) and manually define one join per folder, i.e. join MAIN with PERSON_CREATOR on person_creator_fk and join MAIN with PERSON_MODIFIER using person_modifier_fk.
    Now Discoverer Plus will let you drag Name from each person folder without needing to resolve multiple joins.
    The question is: which approach is better, or is there a better way?
    With solution 1 you will not be able to use functions on folder items.
    With solution 2 there is an EUL design overhead of including the same object multiple times and then manually defining all the joins (or deleting the unwanted ones), and this could be a problem when you have person_modifier and person_creator in nearly all tables. It becomes more complicated if the person table is further linked to other tables and users want to see that information too (for instance, if the person address is stored in a LOCATION table joined on location_id and users want to see both the creator's and the modifier's address - now you have to create multiple LOCATION folders).
    A third solution could be to register a function in Discoverer that returns the person name when a person_id is passed. This works perfectly for the above requirement, but a downside is that the report will run slower if filters on person names are needed (the function will then be used in the where clause). Also, this solution is very specific to the above scenario; it will not work if you want to give the report developer the freedom to pick any attribute from the person table (say the person table contains 50 attributes - then it's not a good idea to register 50 functions).
    Any comments/suggestion will be appreciated.
    thanks

    Hi
    In a roundabout way you have really answered your own question :-)
    In my opinion, the best approach (although by all means not the only approach - see below) would be to have the object loaded as two folders, with one join going to the first folder and the second join to the other folder. You would of course name the folders appropriately.
    Here's a workflow that I use all of the time and one that I teach when I'm giving Discoverer Administrator training. It might help you:
    1. Bring in the PERSON folder to begin with
    2. Make all necessary adjustments to bring it up to deployment standard. These adjustments would be: folder name (e.g. PERSON_CREATOR), item names, item placement, default positions, default aggregation and so on.
    3. Create or assign the required lists of values
    4. Create any required calculations
    5. Create any required conditions
    6. Create the first join from this folder to MAIN.
    7. Click on the heading for the folder and press CTRL-C.
    8. Click on the heading for the business area and press CTRL-V. A second copy of the folder, complete with all of the adjustments you made earlier will be inserted into the business area.
    Note: joins are not copied, everything else is.
    9. Rename this folder to, say, PERSON_MODIFIER
    10. Rename the items as appropriate
    11. Add a join from this folder to MAIN - you're done
    Other ideas that I have used and that work well would be to use a database view or create a complex folder. Either will work. In both cases you would need to join on some column other than the ones you referred to earlier.
    I hope this helps
    Best wishes
    Michael

  • I would like to know the best way to scan music into a PDF format which I can then sync with my iPad


    smgchandler,
    Any camcorder that can connect via USB or that can save to an SD card can interface with an iPad if you have the iPad Camera Connection Kit, so you can watch your videos on the new iPad's beautiful Retina display. You can find the kit on the Apple website or in most big-box retail stores.
    http://store.apple.com/us/product/MC531ZM/A?fnode=MTc0MjU4NjE

  • Although we have iPhones and iPads, this is our first Mac and we wanted to know the best way to set up multiple users with parental controls


    I'm assuming you have the latest OS X version, 10.9 "Mavericks." Here are some useful links from Apple that I think will help:
    http://support.apple.com/kb/PH14414
    http://support.apple.com/kb/PH14099
    http://support.apple.com/kb/PH14280
    and a video:
    http://support.apple.com/kb/VI28
    I suspect you have a newer iMac than the pre-2006 models this forum covers (the forum labels are unbelievably vague). If you can confirm that yours is newer than 2005, I can ask the Hosts to move you to the forum for current iMacs, which gets many more views.

  • What are the best approaches for mapping re-start in OWB?

    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10g (10.2.0.3.0). OWB is installed on Linux.
    We have a number of mappings, and we have built process flows for the mappings as well.
    I would like to know the best approaches to incorporate re-start options in our process, i.e. on failure of a mapping in a process flow.
    How do we recycle failed rows?
    Are there any built-in features/best approaches in OWB to implement the above?
    Do the runtime audit tables help us to build a re-start process?
    If not, do we need to maintain our own (custom) tables to hold such data?
    How did our forum members handle the above situations?
    Any ideas?
    Thanks in advance.
    RI

    Hi RI,
    How many mappings (range) do you have in a process flow?
    Several hundreds (100-300 mappings).
    If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails?
    Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails, the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the execution of mapping m2 from the Workflow monitor - open the diagram with the process flow, select mapping m2, click the Expedite button, and choose the Repeat option.
    On re-start, will it run m1 again and then m2 and so on, or will it re-start at row 1 of m2?
    You can specify the restart point. "At row 1 of m2" - I don't understand what you mean (all mappings run in set-based mode, so in case of an error all table updates are rolled back; there are several exceptions - for example multiple target tables in a mapping without correlated commit, or an error in post-mapping - where you must carefully analyze the results of the error).
    What will happen if m3 fails?
    The process is suspended and you can restart execution from m3.
    By running without failover and with max. number of errors = 0, you reduce the recycling of failed rows to zero. These settings guarantee only two possible results of a mapping - SUCCESS or ERROR.
    What is the impact if we have a large volume of data?
    In my opinion, for a large volume, set-based mode is the preferred data processing mode. With this mode you have the full range of enterprise features of the Oracle database - parallel query, parallel DML, nologging, etc.
    Oleg

  • What is the best approach to track deleted records

    Dear all,
    We have built a CMS platform based on SQL Server 2012 table structures, hosted in Azure.
    On top of this we have built some REST API methods in order to access the data from any type of client application.
    The issue we need to solve now is: what is the best way to track deleted records, so that client applications are informed through the web service about data deleted from our CMS?
    We were actually thinking of 2 paths:
    - having a kind of ghost table for each of our real tables, into which deleted records are inserted (physical delete). This would mean adding as many ghost tables as we have production tables.
    - adding an IsDeleted flag to each of our tables, set to true when a record is deleted from our CMS (logical delete). This would mean adding an IsDeleted field to each of our tables, and creating and updating all our stored procedures and web services to take that new filter criterion into account when fetching records. Quite a huge job.
    Would there be any other approach?
    We are looking for the best solution with minimum impact on our current solution.
    regards
    Your knowledge is enhanced by that of others.

    Hello,
    @Tom, based on your question
    "The question would be what do you need to do with the deleted records and how long do you need to keep them?"
    When records are deleted, I simply want to delete them and inform any client application about the deleted items in order to keep the data in sync. I will not have any reporting on deleted data!
    The only reason for tracking deleted table items is simply to inform client applications, through the web service sync, about the data to be ignored. Client applications keep a local cache of database records for performance reasons, and they must not use data from that local storage which has been reported as deleted by the SQL Server on Azure.
    Does this make sense?
    regards
    Your knowledge is enhanced by that of others.
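
    A minimal sketch of the IsDeleted (logical delete) option, with hypothetical table and column names: the delete becomes an UPDATE, and a companion query gives the sync web service the ids a caching client must purge since its last sync.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Timestamp;
    import java.util.ArrayList;
    import java.util.List;

    // Sketch of the logical-delete option. cms_item, IsDeleted and DeletedAt
    // are hypothetical; DeletedAt is what lets a client ask "what vanished
    // since my last sync?".
    public class SoftDeleteDao {

        public void deleteItem(Connection con, int itemId) throws Exception {
            PreparedStatement ps = con.prepareStatement(
                    "UPDATE cms_item SET IsDeleted = 1, DeletedAt = SYSUTCDATETIME() WHERE item_id = ?");
            try {
                ps.setInt(1, itemId);
                ps.executeUpdate();
            } finally {
                ps.close();
            }
        }

        /** Ids the client must drop from its local cache since lastSync. */
        public List<Integer> deletedSince(Connection con, Timestamp lastSync) throws Exception {
            List<Integer> ids = new ArrayList<Integer>();
            PreparedStatement ps = con.prepareStatement(
                    "SELECT item_id FROM cms_item WHERE IsDeleted = 1 AND DeletedAt > ?");
            try {
                ps.setTimestamp(1, lastSync);
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    ids.add(rs.getInt(1));
                }
            } finally {
                ps.close();
            }
            return ids;
        }
    }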

  • What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for the various types of users. (We can store only the relevant data in a particular schema, so the number of records per table is reduced by splitting the tables; but then we have to maintain all the data in another schema as well, and we would need to update two schemas in a given session - one schema for the individual user and another schema holding all the data - so there may be update problems.)
    OR
    2. Using a single schema for all users.
    Note: all users may access the same tables, and there may be many more records than in the previous case.
    Which is the best case?
    Please give your valuable ideas.

    That is true, but I want a solution from you all. I want you to tell me how to fix my friend's car.

  • What is the best approach to converting LV7.1 tags to LV2012 shared variables in multiple VIs?

    What is the best approach to upgrading from LV7.1/DSC tags to LV2012/DSC shared variables, in multiple VIs running on multiple platforms? Our system is composed of about 5 PCs running Windows 2000/LV7.1 Runtime, plus a PLC, and a main controller running XP/SP3/LV2012. About 3 of the PCs publish sensor information via tags across the LAN to the main controller. Only the main controller is currently being upgraded. Rudimentary questions:
    1. Will the other PCs running the 7.1 RTE (with tags) be able to communicate with the main controller running 2012 (shared variables)?
    2. Is it necessary to convert from tags to shared variables, or will the deprecated legacy tag VIs from LV7.1 work in LV2012?
    3. Will all the main controller VIs need to be incorporated into a project in order to use shared variables?
    4. Is the only way to do this to find all tag items and replace them with shared variable items?
    Thanks in advance with any information and advice!
    lb

    Hi lb,
    We're glad to hear you're upgrading, but because there was a fundamental change in architecture since version 7.1, there will likely be some portions that require a rewrite. 
    The RTE needs to match the version of DSC you're using. Also, the tag architecture used in 7.1 is not compatible with the shared variable approach used in 2012. Please see the KnowledgeBase article Do I Need to Upgrade My DSC Runtime Version After Upgrading the LabVIEW DSC Module?
    You will also need to convert from tags to shared variables.  The change from tags to shared variables took place in the transition to LabVIEW 8.  The KnowledgeBase Migrating from LabVIEW DSC 7.1 to 8.0 gives the process for changing from tags to shared variables. 
    Hope this gets you headed in the right direction.  Let us know if you have more questions.
    Thanks,
    Dave C.
    Applications Engineer
    National Instruments

  • What is the best approach to start with SMP 3.0?

    I have experience with the BPCS ERP and the AS400 (iSeries) programming language. I want to move into SAP Mobility. What would be the best approach to start with? Is learning SMP 3.0 directly a good idea?

    Hi Krishna,
    Good to know that you are moving towards learning SAP Mobility.
    Not sure if you have already read it: SMP 3.0 is a unified product combining SUP (Sybase Unwired Platform), Syclo Agentry and Mobiliser, so you have to decide which way you want to proceed. To understand a bit about each, I would direct you to go through these discussions:
    Types of SAP Mobile Technologies
    SAP Mobile App development
    fyi: SMP 3.0 hasn't been released to the market yet; it is still in the ramp-up process.
    For any queries, I would request you to post at these mobility-related forums:
    SAP Mobile Platform Developer Center
    SAP for Mobile
    Regards,
    Jitendra

  • What is the best approach to take a daily backup of an application from a CQ5 server?

    Hello,
    How do we maintain a daily backup of the data on the server?
    What is the best approach?
    Regards,
    Satish Sapate.

    The links Ryan shared should give enough information.
    In case you are backing up a large repository, note that the Data Store holds large binaries, which are stored only once. To reduce the backup time, exclude the datastore from the backup by following [1] (a CQ 5.3 example):
    [1] In order to remove the datastore from the backup you will need to do the following:
    Assuming your repository is under /website/crx/repository and you want to move your datastore to /website/crx/datastore
        stop the crx instance
        mv /website/crx/repository/shared/repository/datastore /website/crx/
        Then modify repository.xml by adding the new path configuration to the DataStore element.
    Before:
    <DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
    <param name="minRecordLength" value="4096"/>
    </DataStore>
    After:
    <DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
    <param name="path" value="/website/crx/datastore"/>
    <param name="minRecordLength" value="4096"/>
    </DataStore>
    After doing this you can safely run separate backups of the datastore while the system is running, without affecting performance very much.
    Following our example, you could use rsync to back up the datastore:
    rsync -av --ignore-existing /website/crx/datastore /website/backup/datastore

  • What is the best approach to finding high-frequency hits in a file?

    Let's say I have a text document that has millions of rows of information like "Name, address, last time checked in".
    What is the best approach if I wanted to find the top 5 people who appear the most on this huge list?
    Thanks!

    If it is not in a database and it's just one file...
    You can still put it into a DB.
    With all that data, what approach would be good in the realm of Java?
    I thought I already said that.
    Would Map still be the best choice?
    Simplest? Probably. Best? Only you can determine that.
    Would the complexity be n^2, since you would need to put everything in, then compare all the sizes?
    No, it should be O(2N) (which is really O(N)). Inserting into the map is O(N), and then iterating once over the entries and adjusting your running top 5 is O(N).
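
    A minimal sketch of the Map approach described above, assuming each row's first comma-separated field is the name: one O(N) pass to count, then a sort of the distinct names by count (the running top-5 described in the reply would avoid even the sort).

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Count how often each name appears, then print the 5 most frequent.
    public class TopFive {
        public static void main(String[] args) throws Exception {
            Map<String, Integer> counts = new HashMap<String, Integer>();
            BufferedReader in = new BufferedReader(new FileReader(args[0]));
            String line;
            while ((line = in.readLine()) != null) {
                String name = line.split(",")[0].trim(); // "Name, address, last checked in"
                Integer c = counts.get(name);
                counts.put(name, c == null ? 1 : c + 1);
            }
            in.close();

            List<Map.Entry<String, Integer>> entries =
                    new ArrayList<Map.Entry<String, Integer>>(counts.entrySet());
            Collections.sort(entries, new Comparator<Map.Entry<String, Integer>>() {
                public int compare(Map.Entry<String, Integer> a, Map.Entry<String, Integer> b) {
                    return b.getValue() - a.getValue(); // descending by count
                }
            });
            for (int i = 0; i < 5 && i < entries.size(); i++) {
                System.out.println(entries.get(i).getKey() + " : " + entries.get(i).getValue());
            }
        }
    }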

Maybe you are looking for

  • Family sharing music who work on different computers

    Hi everyone, here is the question, which will start with a long explanation. My desktop computer is the "main" computer of the house. I have about 35,000 songs on it in my iTunes. Everyone in the house (wife & son) uses my music (which is everyone's
  • Change pagination attribute

    Hi. How can I change the text value at the end of the report that belongs to the pagination? The text is "Next" or "Previous"; I want to change it.

  • Blocking invoice verification if it is before the GRN date

    Hi, I need to create a block so that the system does not allow invoice verification (MIRO) if it is before the GRN (MIGO) date... is this possible? Example: GRN date is 7th March 2007; the system allows invoice verification even on the 06t

  • Safari won't start.

    Okay, so a while back I had corrupted preferences that prevented any extra programs, save for the ones that came with my early-2008 MacBook Pro, from launching. I fixed it, but Safari never got fixed. Upon an attempted launch a message pops up saying "T

  • On OS X Utilities screen after frozen.... now what?
