Best way to handle XML data

I was hoping someone could tell me if I'm doing something stupid or going about this in a much harder way than needed.
I have some data that is stored in an XML file. Probably 150 items (it will stay around this number) with fewer than a dozen attributes per item.
The name of each of these will be loaded into a list on a JSP. When an item is clicked, its details will be displayed on the page below it.
Right now my plan is to:
1. Read the file in on application startup and store the items in a collection of beans in the session, or maybe at application scope.
2. Then, in the JSP, use AJAX to pass the item's ID to a servlet.
3. The servlet pulls the collection out of session or application scope, finds the correct item, and sends it back as XML.
4. Then either parse the XML or use JSON to convert it to an object.
5. Populate the fields on the JSP.
I'm just worried that I'm doing something incredibly inefficient or stupid.
With only 150 items, would I be better served just loading all the items into a multidimensional JavaScript array when the page loads and getting rid of AJAX altogether?
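For reference, here is roughly how I picture step 3 working - just a sketch, with ItemServlet and the Item bean as placeholder names:

import java.io.IOException;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ItemServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String id = req.getParameter("id");
        // the collection loaded at startup, stored at application scope
        List<Item> items = (List<Item>) getServletContext().getAttribute("items");
        resp.setContentType("application/json");
        for (Item item : items) {
            if (item.getId().equals(id)) {
                // hand-rolled JSON to keep the sketch dependency-free
                resp.getWriter().print("{\"id\":\"" + item.getId()
                        + "\",\"name\":\"" + item.getName() + "\"}");
                return;
            }
        }
        resp.sendError(HttpServletResponse.SC_NOT_FOUND);
    }
}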

Comments are shown below, preceded by *******:
I was hoping someone could tell me if I'm doing something stupid or going about this in a much harder way than needed.
**** Yes, I think it's much harder than it needs to be.
I have some data that is stored in an XML file. Probably 150 items (it will stay around this number) with fewer than a dozen attributes per item.
****** Normally, such data is stored in a database table (no XML involved). However, you can store it in a file if you want.
****** If you store it in a file, I suggest not using XML; it's an advanced topic. I suggest using a tab-delimited file instead (you can use a text editor to create it).
The name of each of these will be loaded into a list on a JSP. When an item is clicked, its details will be displayed on the page below it.
***** ok, so far
Right now my plan is to:
1. Read the file in on application startup and store the items in a collection of beans in the session, or maybe at application scope.
****** Your options are:
***** Request scope: you need to read the file each time the user needs the data. Not efficient.
****** Session scope: the data lasts as long as the user is logged on. Good idea (see the file-loading sketch below).
****** Application scope: the data lasts after the user logs off. Bad idea; it takes up memory permanently, and you must remove it from application scope when the last user logs off.
****** Store it in a database: the best idea, but it takes time to learn.
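****** A minimal sketch of the tab-delimited approach, assuming one item per line with tab-separated fields (the Item bean and the file layout are invented for illustration):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class Item {
    private final String id, name, description;
    public Item(String id, String name, String description) {
        this.id = id; this.name = name; this.description = description;
    }
    public String getId() { return id; }
    public String getName() { return name; }
    public String getDescription() { return description; }
}

class ItemLoader {
    // read the whole file once (e.g. on the user's first request) and
    // cache the resulting list in the session
    static List<Item> loadItems(String path) throws IOException {
        List<Item> items = new ArrayList<Item>();
        BufferedReader in = new BufferedReader(new FileReader(path));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.split("\t");       // id <TAB> name <TAB> description
                items.add(new Item(f[0], f[1], f[2]));
            }
        } finally {
            in.close();
        }
        return items;
    }
}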
2. Then, in the JSP, use AJAX to pass the item's ID to a servlet.
****** Don't use AJAX; it's an advanced topic. Learn JSP and XHTML first.
3. The servlet pulls the collection out of session or application scope, finds the correct item, and sends it back as XML.
****** Send it back via request.setAttribute() and read it on the JSP via <jsp:useBean>. XML is not the way to go.
4. Then either parse the XML or use JSON to convert it to an object.
****** Don't use XML.
5. Populate the fields on the JSP.
***** Yes. Note that the JSP should not contain business logic. Business logic should be done in a servlet. The servlet should put data in request scope
for the JSP to populate itself. The data objects put in request scope should contain just data, no business-logic functionality.
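***** For example, something along these lines (the class, attribute, and page names are invented):

import java.io.IOException;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ItemDetailServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String id = req.getParameter("id");
        List<Item> items = (List<Item>) req.getSession().getAttribute("items");
        for (Item item : items) {
            if (item.getId().equals(id)) {
                req.setAttribute("selectedItem", item);  // data only, no business logic
                break;
            }
        }
        // the JSP reads selectedItem from request scope and renders it
        req.getRequestDispatcher("/itemDetail.jsp").forward(req, resp);
    }
}

***** On the JSP, a <jsp:useBean id="selectedItem" scope="request" ... /> (or the EL expression ${selectedItem.name}) then fills in the fields.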
I'm just worried that I'm doing something incredibly inefficient or stupid.
***** The best way to learn is to try stuff out and make your next project better than the previous one.
With only 150 items, would I be better served just loading all the items into a multidimensional JavaScript array when the page loads and getting rid of AJAX altogether?
******** JavaScript is an advanced topic. Its main purpose on the JSP is to handle simple onclick events from HTML tags and to do some basic client-side validation.
***** I believe a book on JSP would be very helpful.

Similar Messages

  • Best way to extract XML data to DB

    Hello,
    In our work we need to extract data from XML documents into a database. Here are some extra notes:
    + There is no constant schema for the XML documents.
    + Some processing on the XML files is required.
    + The XML documents are very big.
    + The database is Oracle 8i or 9i (Enterprise Edition in both cases) and our framework is .NET.
    Our questions are:
    1. What is the best way to extract the data into the database under the above circumstances?
    2. In case there was a constant schema for the XML documents, would there be a better way?
    3. Is writing the data to text files first, and then loading it via SQL*Loader, an effective way?
    Any thoughts would be welcome. Thanks.

    Hi Nicolas,
    The answer depends on your actual storage method (binary XML, object-relational, CLOB?) and DB version.
    You can try XMLTable; it might be better in this case:
    SELECT x.elem1, x.elem2, ... , x.elem20
    FROM your_table t
       , XMLTable(
          '/an/xpath/to/extract'
          passing t.myxmltype
          columns elem1  varchar2(30) path 'elem1'
                , elem2  varchar2(30) path 'elem2'
                , elem20 varchar2(30) path 'elem20'
         ) x
    ;

  • What's the best way to handle all my data?

    I have a black box system that connects directly to a PC and sends 60 words of data at 10 Hz (worst-case scenario). The black box continuously transmits these words, which contain a large amount of data that is continuously updated from up to 50 participants (again, worst-case scenario),
    i.e. 60 words * 16 bits * 10 Hz * 50 participants = 480 kbps. All of this is via a UDP Ethernet connection.
    I have LabVIEW reading the data without any problem. I now want to manipulate this data and then distribute it to other PCs on a network via TCP/IP.
    My question is: what is the best way of storing my data locally on the interface PC so that clients can then request the information they require via TCP/IP? Each message that comes in via the Ethernet will relate to one of the participants, so I need to be able to check if I already have data about that participant - if I do then I can just update it, if I don't I need to create a record for the participant, and if I haven't heard from one for a while I will need to delete it. I don't want to create unnecessary network traffic. I also want to avoid global variables if possible - especially considering that I may have up to 3000 variables to play with.
    I'm not after a solution, just some ideas about how to tackle this problem... I thought I could perhaps create a database and have labview update a table with the data, adding a record for each participant. Alternatively is there a better way of storing all the data in memory besides global variables?
    Thanks in advance.

    Hi russelldav,
    one note on your data handling:
    When each of the 50 participants sends the same 60 "words", you don't need 3000 global variables to store them!
    You can reorganize those data into a cluster for each participant, and use an array of clusters to keep all the data in one "block".
    You can initialize this array at the start of the program for the max number of participants; there is no need to (dynamically) add or delete elements from this array...
    Edited:
    When all "words" have the same representation (I16?), you can use a 2D array instead of an array of clusters...
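    Since LabVIEW code is graphical there is no direct text form, but as a rough analogue in Java terms (all names here are invented for illustration), the idea is one pre-sized block of per-participant records instead of thousands of globals:

    // Rough textual analogue of "an array of clusters"; illustrative only.
    public class ParticipantStore {
        static final int MAX_PARTICIPANTS = 50;      // worst case from the question
        static final int WORDS = 60;                 // words per participant

        // the "cluster": one record per participant
        static class Participant {
            long lastHeardMillis;                    // lets stale participants be dropped
            short[] words = new short[WORDS];        // 60 I16 words from the black box
        }

        // one fixed-size "array of clusters", initialized once at startup
        final Participant[] all = new Participant[MAX_PARTICIPANTS];
        public ParticipantStore() {
            for (int i = 0; i < all.length; i++) {
                all[i] = new Participant();
            }
        }
    }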
    Message Edited by GerdW on 10-26-2007 03:51 PM
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome

  • Best way of handling large amounts of data movement

    Hi
    I'd like to know the best way to handle data in the following scenario:
    1. We have to create Medical and Rx claims tables for 36 months of data, about 150 million records each - the first-month baseline covering months 1, 2, 3, 4, ..., 34, 35, 36.
    2. We have to add the DELTA of month two to the 36-month baseline. But the application requirement is ONLY 36 months, even though the current size is then 37 months.
    3. Similarly, in the 3rd month we will have 38 months, and in the 4th month 39 months.
    4. At the end of the 4th month - how can I delete the first three months of data from the claim files without affecting performance? This is a 24x7 online system.
    5. Is there a way to create partitions of 3 months each that can then be deleted - e.g. delete partition number 1? If this is possible, what kind of maintenance activity needs to be done after deleting a partition?
    6. Is there any better way of handling the above scenario? What other options do I have?
    7. My goal is to eliminate the initial months' data from the system, as the requirement is ONLY 36 months of data.
    Thanks in advance for your suggestion
    sekhar

    Hi,
    You should use table partitioning to keep your data in monthly partitions. Search on table partitioning for detailed examples.
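    Just to illustrate (table and partition names are invented, and Oracle range partitioning is assumed), dropping the oldest month from a scheduled job could look like the sketch below; the UPDATE GLOBAL INDEXES clause keeps global indexes usable instead of letting them go UNUSABLE, which matters for a 24x7 system:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DropOldestPartition {
        public static void main(String[] args) throws Exception {
            // connection details are placeholders
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/claimsdb", "user", "password");
            Statement stmt = con.createStatement();
            try {
                // drop the oldest monthly partition; UPDATE GLOBAL INDEXES avoids
                // invalidating global indexes on the live table
                stmt.execute("ALTER TABLE medical_claims DROP PARTITION p_month_01 "
                           + "UPDATE GLOBAL INDEXES");
            } finally {
                stmt.close();
                con.close();
            }
        }
    }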
    Regards

  • Best way to handle calling the EP logon page

    I would like to be able to handle the following:
    1. Start EP in anonymous/guest mode (basically unauthenticated).
    2. Present an iView that allows a user to fill in some information, but when he/she clicks on a button it would first check if the user is logged on and, if not, call up the EP logon page/self-registration page. After logging in/self-registering, the iView would continue processing.
    What is the best way to handle this?
    Regards,
    Mel Calucin
    Bentley Systems, Inc.

    Hi,
    You have to download the standard com.sap.portal.logon "par" and modify the JSP file to simplify the logon page (maybe only input fields for user and password).
    Then upload the new .par file to the portal and update the authschems.xml file to redirect the default method to the new ".par".
    You can insert an "enter" button on the home page that launches a pop-up to a dummy page whose authentication scheme property is set to "default". The Portal then automatically shows the simplified login page in the little pop-up so the user can enter their login data. The dummy page needs to redirect the parent page to the portallauncher component and close itself.

  • Best way to handle text files in OD10g

    We have a requirement to store reports in text format into a database field, to be able to view the reports, and to print them if desired using Forms 10g. What is the best way to handle this?
    - define the field in the database as clob or blob?
    - if CLOB is the choice, what tools to use to upload CLOBs to the database (since webutil transfer is for blob only)?
    - in Forms 10g, can one use the Forms data type LONG for CLOB?
    - can you do Forms search on clob and blob fields?
    - how can reports that are stored in fields be viewed without first downloading to the client workstation?
    - in Forms 10g, what is the best way to view text files residing in local PCs: "host notepad myFile"?
    Thanks much for your reply!
    gk

    Take a deep breath. Relax. All is fine.
    iDVD does not look at the size of your video file; it looks at the length. iDVD can accommodate up to 2 hours of movie.
    iDVD gives you different options depending on the length of your movie. Although I won't agree with your friend about reducing the length of your movie to 15 minutes, if you could trim out a few minutes to get it under an hour, that setting in iDVD (Best Performance, though the new version may have renamed it) gives you the best quality. Still, any iDVD setting will give you good quality, even at 64 minutes.
    In FCE, export as QuickTime Movie, NOT any flavour of QuickTime Conversion. Select chapter markers if you have them. If everything is on one system, uncheck the Make Movie Self-Contained button. Drop the QT file into iDVD.

  • Best way to handle multiple currencies

    I have a requirement that users should be able to report against an OLAP cube in a currency of their choice (from a list of about 20) and was wondering what the best way to handle this might be.
    One option would be to have a currency dimension containing the list of valid currencies, and then to pre-calculate measures in each of the currencies and store them in the cube. However, the downside of this is that the resulting cube would be 20 times larger than a cube in a single currency, take longer to maintain, etc. I could of course partition the cube by currency to improve reporting performance, since users would only report in one currency at a time.
    Another alternative would be to dynamically calculate the measures based on exchange rates - I guess this could either be done in the cube itself or as part of the reporting code. However, since exchange rates are daily, this would obviously prevent me from aggregating data up the time dimension (all measures are at the day level).
    Is there any standard way of doing this, and what are the pros and cons?
    Thanks,
    Chris

    Sorry - messed up - I should have posted this in the OLAP forum.....

  • Best way to handle time taking MIS queries!

    Dear all,
    Recently we have developed some MIS reports that execute large queries and process a lot of data. These queries practically bring the database to a halt, and users experience very slow response times.
    What is the best way to handle these heavy-duty queries?
    Like having another server for MIS - and then, what is the best way to synchronize data on a daily basis?
    OR
    Creating procedures that execute at night and populate MIS tables, with reports using this pre-computed data?
    Any other better solution please?
    Thanks, Imran

    misterimran wrote:
    Dear all,
    Recently we have developed some MIS reports that execute large queries and process a lot of data. These queries practically bring the database to a halt, and users experience very slow response times.
    What is the best way to handle these heavy-duty queries?
    Like having another server for MIS - and then, what is the best way to synchronize data on a daily basis?
    Based on your requirement: Streams.
    Creating procedures that execute at night and populate MIS tables, with reports using this pre-computed data?
    I would not recommend this because of the maintenance involved; also, this is re-inventing the wheel.

  • Best way to handle tcMultipleMatchFoundException

    Can anyone tell me the best way to handle tcMultipleMatchFoundException during reconciliation?
    One way I know of is to manually correct the data. Apart from that, is there any other way?
    Thanks,
    Venkatesh.

    Hi,
    I've done a great deal of work with mobile accounts in Snow Leopard and I'm now having a "play" with Lion. To be honest you have to sit down and think about why you need mobile accounts.
    If your user only uses one computer, then you're safer having a local account backed up by a network Time Machine; this avoids the many, many woes that the Server's FileSyncAgent brings to the table.
    If your users are going to be accessing multiple computers on the network and leaving the network then a mobile account is good for providing a uniform user experience and access to files etc. However, your users will have to make a choice as to whether they want their iPhoto libraries on one Local machine (backed up by Time Machine) or whether they want their library to be hosted on the server and not part of the Mobile Home Sync schedule (adding ~/Pictures to the excluded items on the home sync settings).
    With the latter, users will be able to access their iPhoto libraries on any computer when they are within the network (as it's accessed from the users server home folder).
    With the first option the user would have their iPhoto library on one computer (say the laptop they used the most) but then would not be able to access it from other computers they log on to.
    iPhoto libraries are a pain, and I'm working hard to come up with a workaround. If your users moved over to using Aperture then you could include the Aperture library as part of the home sync thanks to Deeport (http://deepport.net/archives/os-x-portable-home-directories-and-syncing-flaw-with-bundles/)
    He does suggest that the same would work with iPhoto libraries - but it doesn't, for a number of mysterious reasons regarding how the OS recognizes the iPhoto bundle (it does so differently compared to Aperture).
    Hope this helps...

  • Best way to handle duplicate headings stemming from linked TOC book?

    What's the best way to handle duplicate topic titles stemming from TOC books that contain links to a topic that you want to have appear in the body? The problem I've had for years now is that the TOC generates one heading, and the topic itself generates another. This results in duplicate headings in the printed output.
    I have a large ~2500-topic project I have to print every release, and to date we've been handling it with post-build Word macros, but that's not 100% effective, so we're looking to fix this issue on the source side of the fence. On some of our smaller projects, we've actually marked the heading in the topic itself with the Online CBT and that seems to work. We're thinking of doing the same to our huge project unless there's a better way of handling this. Any suggestions?

    See the tip immediately above this link. http://www.grainge.org/pages/authoring/printing/rh9_printing.htm#wizard_page3
    The alternative is to remove the topic from the print layout so that it only generates by virtue of its link to the book.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • Best way to "page" through data?

    If I have XML, something like
    <people>
    <person>
    <firstname>john</firstname>
    <lastname>smith</lastname>
    </person>
    <person>
    <firstname>robert</firstname>
    <lastname>walker</lastname>
    </person>
    </people>
    And I create some bound fields to display this, what's the best way to "page" the data?
    I know I can create an XML variable of the data and then access person[0].firstname, but I need to have a button or other UI element that would choose person 0, 1, 2, 3, etc.
    Normally I'd use a list, but this application calls for a static screen where only the data changes when buttons are clicked.

    I use this method (this is simplified) as it doesn't require any sort of data services, and you can have change events etc. for editing/deleting. The slider is just for proof of concept; a prev/next button could just as easily do.
    <?xml version="1.0" encoding="utf-8"?>
    <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
    xmlns:s="library://ns.adobe.com/flex/spark"
    xmlns:mx="library://ns.adobe.com/flex/halo" minWidth="800" minHeight="400" creationComplete="initApp()" width="800" height="400">
    <fx:Declarations>
    <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <fx:Script>
    <![CDATA[
    import mx.collections.ArrayCollection;
    [Bindable] private var Arr:ArrayCollection=new ArrayCollection();
    private var Sel:int = 0;
    protected function initApp():void
    {
        // populate some sample data
        Arr.addItem({id:1,name:"John",surname:"Robertson",age:30});
        Arr.addItem({id:2,name:"Peter",surname:"Williams",age:35});
        Arr.addItem({id:3,name:"Jane",surname:"Brown",age:23});
        Arr.addItem({id:4,name:"Rebecca",surname:"Smith",age:42});
        Arr.addItem({id:5,name:"Susan",surname:"Reynolds",age:25});
        Arr.addItem({id:6,name:"Michael",surname:"Royce",age:23});
        Arr.addItem({id:7,name:"Jack",surname:"Jones",age:22});
        Arr.addItem({id:8,name:"Pete",surname:"Young",age:50});
        Arr.addItem({id:9,name:"Robert",surname:"Peters",age:39});
        hs.maximum = Arr.length - 1;   // one slider stop per record
        hs.value = 0;
        updateList();
    }
    protected function updateList():void
    {
        // show the record the slider currently points at
        Sel = int(hs.value);
        surname.text = Arr[Sel].surname;
        firstname.text = Arr[Sel].name;
        age.text = String(Arr[Sel].age);
    }
    ]]>
    </fx:Script>
    <s:Group width="497" height="114" horizontalCenter="0" verticalCenter="0">
    <s:TextInput id="surname" x="46" y="45"/>
    <s:TextInput id="firstname" x="182" y="45"/>
    <s:TextInput id="age" x="318" y="45"/>
    <s:HSlider id="hs" x="62" y="96" width="200" minimum="0" maximum="1" liveDragging="true" change="updateList()" value="-1"/>
      </s:Group>
    </s:Application>
    David

  • What is the best way to handle very large images in Captivate?

    I am just not sure of the best way to handle very large electrical drawings.
    Any suggestions?
    Thanks
    Tricia

    Is converting the colorspace asking for trouble?  Very possibly!  If you were to do that to a PDF that was going to be used in the print industry, they'd shoot you!  On the other hand, if the PDF was going online or on a mobile device – they might not care.   And if the PDF complies with one of the ISO subset standards, such as PDF/X or PDF/A, then you have other rules in play.  In general, such things are a user preference/setting/choice.
    On the larger question – there are MANY MANY ways to approach PDF optimization.  Compression of image data is just one of them.   And then within that single category, as you can see, there are various approaches to the problem.  If you extend your investigation to other tools such as PDF Enhancer, you'd see even other ways to do this as well.
    As with the first comment, there is no "always right" answer.  It's entirely dependent on the user's use case for the PDF, requirements of additional standard, and the user's needs.

  • (workflow question) - What is the best way to handle audio in a large Premiere project?

    Hey all,
    This probably applies to any version of Premiere, but just in case, I use CS4 (Master Collection).
    I've been wrestling with the best way to handle audio in my project to cut down on the time I spend working on it.
    This project I just finished was a 10-minute video for a customer, shot on miniDV (HVX-200) and cut down from 3 hours of tape.
    I edited my whole project down to what looked good, and then I decided I needed to clean up all the audio using Soundbooth, so I had to go in clip by clip, using the Edit in Soundbooth --> Render and Replace method on every clip. I couldn't find a way to batch-edit any audio in Soundbooth.
    For every clip, I performed similar actions:
    1) Both tracks of audio were recorded with two different microphones (two mono tracks), so I needed audio from only one track - I used Soundbooth to cut and paste the good track over the other track.
    2) amplified the audio
    3) cleaned up the background noise with the noise filter
    I am sure there has to be a better workflow option than what I just did (going clip by clip). Can someone give me some advice on how best to handle audio in a situation like this?
    Should I have just rendered out new audio for the whole tape I was using, and then edited from that?
    Should I have rendered out the audio after I edited the clips into one long track and performed the actions I needed on it? Or something entirely different? It was a very slow, tedious process.
    Thanks,
    Aza

    Hi, Aza.
    Given that my background is audio and I'm just coming into the brave new world of visual bits and bytes, I would second Hunt's recommendation regarding exporting the entire video's audio as one wav file, working on it, and then reimporting. I do this as one of the last stages, when I know I have the editing done, with an ear towards consistency from beginning to end.
    One of the benefits of this approach is that you can manage all audio in the same context. For example, if you want to normalize, compress or limit your audio, doing it a clip at a time will make it difficult for you to match levels consistently or find a compression setting that works smoothly across the board. It's likely that there will instead be subtle or obvious differences between each clip you worked on.
    When all your audio is in one file you can, for instance, look at the entire waveform, see that limiting to -6 dB would trim off most of the unnecessary peaks, trim it down, and then normalize it all. You may still have to do some tweaking here and there, but it gets you much farther down the road, much more easily. The same goes for reverb, EQ or other effects where you want the same feel throughout the entire video.
    Hope this helps,
    Chris

  • I am moving from PC to Mac. My PC has two internal drives and I have a 3Tb external. What is the best way to move the data from the internal drives to the Mac, and the best way to make the external drive read/write without losing data?


    Paragon even has a non-destructive conversion utility if you do want to change the drive.
    Hard to imagine using a 3TB drive that isn't NTFS. The Mac uses GPT as the default partition type, as well as HFS+.
    www.paragon-software.com
    Some general Apple Help www.apple.com/support/
    Also,
    Mac OS X Help
    http://www.apple.com/support/macbasics/
    Isolating Issues in Mac OS
    http://support.apple.com/kb/TS1388
    https://www.apple.com/support/osx/
    https://www.apple.com/support/quickassist/
    http://www.apple.com/support/mac101/help/
    http://www.apple.com/support/mac101/tour/
    Get Help with your Product
    http://docs.info.apple.com/article.html?artnum=304725
    Apple Mac App Store
    https://discussions.apple.com/community/mac_app_store/using_mac_apple_store
    How to Buy Mac OS X Mountain Lion/Lion
    http://www.apple.com/osx/how-to-upgrade/
    TimeMachine 101
    https://support.apple.com/kb/HT1427
    http://www.apple.com/support/timemachine
    Mac OS X Community
    https://discussions.apple.com/community/mac_os

  • What is the best way to mimic the data from production to other server?

    Hi,
    Here we use Streams and advanced replication to send the data for 90% of tables from production to another production database server, so if one goes down we can use the other. Is there any better option than using Streams and replication? We are having a lot of problems with Streams these days; they keep breaking and we get calls.
    I have heard about Data Guard but don't know what it is used for. Please advise on the best way to replicate the data.
    Thanks a lot.....

    RAC, Data Guard. The first one is active-active, that is, you have two or more nodes accessing the same database on shared storage and you get both HA and load balancing. The second is active-passive (unless you're on 11.2 with Active Standby or Snapshot Standby), that is one database is primary and the other is standby, which you normally cannot query or modify, but to which you can quickly switch in case primary fails. There's also Logical Standby - it's based on Streams and generally looks like what you seem to be using now (sort of.) But it definitely has issues. You can also take a look at GoldenGate or SharePlex.
