Best approach for a JTree with each node displaying its data in 2 rows

Hi,
I need directions on an approach for constructing a JTree; I am not sure whether this is even possible with JTree.
The format of the tree will be as shown below:
+ JTREE
- JTREE
  - Name1  Age1  Id1
    Address Details1 (should be a button)
  - Name2  Age2  Id2
    Address Details2 (should be a button)
The problem here is that each child node has two rows. The first row has three columns and the second row has a button that takes the user to a new screen on click. Any directions on how to approach this problem would be helpful.
Thanks for your help
Ravi

Hi,
Thanks for the suggestion. Will this approach work if I have to display a button in the second row and the content in the first row (can these two rows together be given as one tree node)?
Any sample example code will be useful.
thanks and regards
ravi
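
The earlier suggestion isn't quoted here, so purely as a sketch of one common Swing approach (the class names PersonNode and TwoRowRenderer are made up for illustration): install a custom TreeCellRenderer that returns a two-row JPanel - a label row for Name/Age/Id and a second row holding the button.

import java.awt.Component;
import java.awt.GridLayout;
import javax.swing.*;
import javax.swing.tree.DefaultMutableTreeNode;
import javax.swing.tree.DefaultTreeCellRenderer;
import javax.swing.tree.TreeCellRenderer;

// Hypothetical value object stored as the user object of each child node.
class PersonNode {
    final String name;
    final int age;
    final String id;
    PersonNode(String name, int age, String id) {
        this.name = name;
        this.age = age;
        this.id = id;
    }
}

// Renders each PersonNode as two rows: a label row and a button row.
class TwoRowRenderer implements TreeCellRenderer {
    private final DefaultTreeCellRenderer fallback = new DefaultTreeCellRenderer();

    @Override
    public Component getTreeCellRendererComponent(JTree tree, Object value, boolean selected,
            boolean expanded, boolean leaf, int row, boolean hasFocus) {
        Object userObject = ((DefaultMutableTreeNode) value).getUserObject();
        if (userObject instanceof PersonNode) {
            PersonNode p = (PersonNode) userObject;
            JPanel panel = new JPanel(new GridLayout(2, 1));
            panel.setOpaque(false);
            panel.add(new JLabel(p.name + "   " + p.age + "   " + p.id));
            panel.add(new JButton("Address Details"));  // painted only, see the note below
            return panel;
        }
        return fallback.getTreeCellRendererComponent(tree, value, selected, expanded, leaf, row, hasFocus);
    }
}

Install it with tree.setCellRenderer(new TwoRowRenderer()) and tree.setRowHeight(0) so each row takes the renderer's preferred height. One caveat: a renderer is only a "rubber stamp", so the painted button will not receive clicks by itself; you also need either a matching TreeCellEditor (plus tree.setEditable(true)) or a MouseListener on the tree that hit-tests the button's bounds and opens the new screen.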

Similar Messages

  • What is the best approach for dealing with this issue?

I have been advised by a Mac expert that the computer should be left running except for extended periods away, rather than, as I have been doing, shut down at the end of each day. Please explain the rationale for your response. Thanks.

    On mactalk.com.au there was recently a thread on "When did you last shut down your Mac".
    A lot of people don't.
Some people (mostly, I think, PC users converted to Mac, or old-timers who live in the past) do shut down daily.
    I myself have only shut my computer off when moving it physically, when there's an electrical storm, when I've been going on holidays, or when there's been some installation issue requiring a restart/coldstart. (I'm not counting "Restarts" for new software)
    OS X doesn't need to be shut down... though an occasional restart can be good for resetting things like virtual memory, etc.
    If you do leave it going, the CRON scripts only run if the computer is awake anyway (at like 3am in the morning), so if you set the computer to "sleep" after a time, then it's no different to shutting down in that regard.
    If you don't mind the extra time waiting for a boot up... then it doesn't really matter. If like me you're impatient, then don't shut down.

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10 G (10.2.0.3.0). OWB is installed on Linux.
    We have number of mappings. We built process flows for mappings as well.
I would like to know what the best approaches are to incorporate re-start options in our process, i.e. on the failure of a mapping in a process flow.
How do we re-cycle failed rows?
Are there any built-in features/best approaches in OWB to implement the above?
Do the runtime audit tables help us build a re-start process?
If not, do we need to maintain our own (custom) tables to hold such data?
How have other forum members handled the above situations?
    Any idea ?
    Thanks in advance.
    RI

    Hi RI,
How many mappings (range) do you have in a process flow? Several hundred (100-300 mappings).
If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails? Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails the process flow is suspended (the transition to m3 is not performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the execution of mapping m2 from the Workflow monitor - open the diagram with the process flow, select mapping m2, click the Expedite button and choose the Repeat option.
On re-start, will it run m1 again, then m2 and so on, or will it re-start at row 1 of m2? You can specify the restart point. "At row 1 of m2" - I don't understand what you mean: all mappings run in set-based mode, so in case of an error all table updates are rolled back
(there are several exceptions - for example multiple target tables in a mapping without correlated commit, or an error in a post-mapping - where you must carefully analyze the results of the error).
What will happen if m3 fails? The process is suspended and you can restart execution from m3.
By running without failover and with the maximum number of errors set to 0, you reduce the re-cycling of failed rows to zero. These settings guarantee only two possible results for a mapping - SUCCESS or ERROR.
What is the impact if we have a large volume of data? In my opinion, for large volumes set-based mode is the preferred processing mode.
With this mode you have the full range of enterprise features of the Oracle database - parallel query, parallel DML, nologging, etc.
    Oleg

  • What's Best Approach for Multitrack Classical Music?

    Can someone suggest the best approach for recording classical musicians onto
    four tracks? In this scenario, they play until they make a mistake on, say,
    measure 24, stop, then (take 2) go back to measure 20 and play until the next
    rough spot, and so on. Ultimately there may be 15 takes that all need to be
    trimmed and stitched together.
    In the old (tape) days, this was pretty basic editing. I would use a blade and block
    to cut out all the bad stuff on the multitrack tape, then I could mix. But how do I
    do this in Audition? (I use version 1.5.)
I can't do the cuts in Edit View because the tracks would get out of sync.
    Assuming all the takes are in one session, in multitrack view, this most basic of
    functions seems to elude me. What am I missing?

    Al the Drifter wrote:
    If you follow Steve's advice, and after doing the edits you discover
    that one instrument should come up 1db, you are screwed.
    I could be wrong about this in the classical music environment,
    where things are not close-mic'ed but if I am, I am confident Steve
    will correct me.  Ha.
    You always run the risk of small changes between takes - and that's where Audition 3 and the new improved crossfades score rather heavily. You won't notice 1dB on a single instrument across a fade though - it's hard to spot this as a jump, even, unless it's on pure tone. No, I very rarely close-mic stuff at all, although I did with a clavichord recently - it's seriously too quiet to mic any other way.
    jaypea500 wrote:
     when recording classical music, any engineer worth anything has the mix down pat as it's being recorded. 
    That's the way they used to work, certainly - but not nowadays, especially if it's done on location, which most classical recording is. What's more likely to happen is that you'd use decent mic preamps feeding straight into a multitrack, or even some software on a laptop. I generally record like that - but I also feed the multitrack outputs to a Yamaha mixer via ADAT, do a mix on that and record it back to a spare multitrack pair. I don't actually need to do that - but having a mix available from the multitrack that's pretty much there is good as far as being able to play back takes to conductors is concerned.
    Of course, one of the other reasons that classical sessions recorded on location aren't mixed on the spot is that the monitoring conditions are invariably far from ideal, and I'd have it that no engineer worth anything would ever risk a final mix done on location.
    But I only get paid to do all of this on a regular basis, so what would I know? Must be something though - my customers come back for more...

  • Best approach for building dialogs based on Java Beans

    I have a large amount of Java Beans with several properties each. These represent all the "data" in our system. We will now build a new GUI for the system and I intend to reuse the beans as far as possible. My idea is to automatically generate the configuration dialogs for each bean using the java.beans package.
What is the best approach for achieving this? Should I use PropertyEditors, or should I make my own dialog generator using the Introspector class, or are there any other suitable solutions?
    All suggestions and tips are very welcome.
    Thanks!
    Erik

    Definitely, it is better for you to use JTable. Why not try it?
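
For what it's worth, here is a minimal sketch of the Introspector route (the class name BeanDialogSketch and the use of plain text fields are assumptions for illustration): enumerate the bean's properties and build one labelled editor row per property. A real generator would pick the editor component per property type, or consult PropertyEditorManager.

import java.awt.GridLayout;
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JTextField;

// Builds a very simple configuration panel for any bean by introspection.
public class BeanDialogSketch {

    public static JPanel buildPanel(Object bean)
            throws IntrospectionException, ReflectiveOperationException {
        // Stop at Object.class so we only see the bean's own properties.
        BeanInfo info = Introspector.getBeanInfo(bean.getClass(), Object.class);
        JPanel panel = new JPanel(new GridLayout(0, 2, 4, 4));
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            if (pd.getReadMethod() == null) {
                continue; // skip write-only properties
            }
            Object value = pd.getReadMethod().invoke(bean);
            panel.add(new JLabel(pd.getDisplayName()));
            panel.add(new JTextField(value == null ? "" : value.toString()));
        }
        return panel;
    }
}

On OK you would walk the same PropertyDescriptors again and call the write methods with the (converted) field values; that conversion step is where PropertyEditors earn their keep.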

  • Best approach for uploading document using custom web part-Client OM or REST API

    Hi,
I am using my custom upload Visual Web Part for uploading documents into my document library along with a lot of metadata.
These columns include single line of text, dropdown list, lookup columns and managed metadata (taxonomy) columns.
So I would like to know which is the best approach for uploading.
Currently I am trying to use the traditional SSOM (server object model). I would like to know which is the best approach for uploading files into document libraries.
I have hundreds of sub sites with 30+ document libraries within those sub sites. Currently it takes a few minutes to upload the files in my dev environment, and I am wondering what would happen if the number of sub sites reaches a hundred.
I am looking at this from the performance perspective.
    my thought process is :
    1) Implement Client OM
    2) REST API
Has anyone tried these approaches before, and which approach provides better performance?
If anyone has sample source code or links, please share them.
Also, are there any restrictions on the size of the uploaded file?
    any suggestions are appreciated!

    Try below:
    http://blogs.msdn.com/b/sridhara/archive/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010.aspx
    http://stackoverflow.com/questions/9847935/upload-a-document-to-a-sharepoint-list-from-client-side-object-model
    http://www.codeproject.com/Articles/103503/How-to-upload-download-a-document-in-SharePoint
public void UploadDocument(string siteURL, string documentListName,
    string documentListURL, string documentName, byte[] documentStream)
{
    using (ClientContext clientContext = new ClientContext(siteURL))
    {
        // Get the document list
        List documentsList = clientContext.Web.Lists.GetByTitle(documentListName);
        var fileCreationInformation = new FileCreationInformation();
        // Assign the content byte[], i.e. documentStream
        fileCreationInformation.Content = documentStream;
        // Allow overwrite of the document
        fileCreationInformation.Overwrite = true;
        // Upload URL
        fileCreationInformation.Url = siteURL + documentListURL + documentName;
        Microsoft.SharePoint.Client.File uploadFile =
            documentsList.RootFolder.Files.Add(fileCreationInformation);
        // Update the metadata for a field named "DocType"
        uploadFile.ListItemAllFields["DocType"] = "Favourites";
        uploadFile.ListItemAllFields.Update();
        clientContext.ExecuteQuery();
    }
}
    If this helped you resolve your issue, please mark it Answered

  • Design Patterns, best approach for this app

    Hi all,
I am starting with design patterns, and I would like to hear your opinion on what would be the best approach for this app.
This is basically an app for data monitoring, analysis and logging (voltage, temperature & vibration).
I am using 3 devices for N channels (NI 9211A, NI 9215A, NI PXI 4472), all running at different rates, asynchronously,
and signals are processed and monitored for logging at a rate specified by the user, and in real time as well.
Individual devices can be initialized or stopped at any time.
Basically I'm using 5 loops.
    *1.- GUI: Stop App, Reload Plot Names  (Event handling)
    *2.- Chart & Log:  Monitors Data and Start/Stop log data at a specified time in the GUI (State Machine)
    *3.- Temperature DAQ monitoring @ 3 S/s  (State Machine)   NI 9211A
    *4.- Voltage DAQ monitoring and scaling @ 1K kS/s (State Machine) NI 9215A
    *5.- Vibration DAQ monitoring and Analysis @ 25.6 kS/s (State Machine) NI PXI 4472
    i have attached the files for review, thanks in advance for taking the time.
    Attachments:
V-T-G Monitor_Logger.llb (355 KB)

    mundo wrote:
    thanks Will for your response,
so, basically I could apply a producer/consumer architecture for just the vibration analysis loop? Or for all data being collected by the Monitor/Logger loop?
Is it OK to have individual loops for every DAQ device, as shown?
    thanks.
You could use the producer/consumer architecture to split the areas where you are doing both the data collection and the analysis in the same state machine. If one of these processes is not time critical, or the data rate is slow enough, you could leave it in a single state machine. I admit that I didn't look through your code, but based purely on the descriptions above I would imagine that you could change the three collection state machines to use a producer/consumer architecture. I would leave your UI processing in its own loop, as well as the logging process. If the logging is time critical you may want to split that as well.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot
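
LabVIEW's queue functions are the native way to build the producer/consumer split described above; purely to show the shape of the pattern outside of G code, here is a rough sketch in Java using a BlockingQueue (readFromDaq and analyzeAndLog are placeholders, not real driver calls):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Producer: the acquisition loop only reads samples and enqueues them.
// Consumer: the analysis/logging loop dequeues at its own pace.
public class ProducerConsumerSketch {
    public static void main(String[] args) {
        BlockingQueue<double[]> samples = new ArrayBlockingQueue<>(100);

        Thread acquisition = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    double[] block = readFromDaq();   // hypothetical DAQ read
                    samples.put(block);               // blocks if analysis falls behind
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread analysis = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    double[] block = samples.take();
                    analyzeAndLog(block);             // hypothetical processing/logging
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        acquisition.start();
        analysis.start();
    }

    private static double[] readFromDaq() { return new double[1024]; }
    private static void analyzeAndLog(double[] block) { /* placeholder */ }
}

The point of the split is that a slow analysis step never stalls acquisition until the queue itself is full, which is the same reason to separate collection from analysis in the LabVIEW state machines.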

  • What is the best approach for combining events?

    When I work on a wedding my current workflow involves creating a compound clip for each section of the video (e.g. reception, ceremony, dancing etc). Then I add the compound clip 'sequences' into a single project to add the chapter markers and export to a single master file.
    I like the idea of managing each section in a project rather than a compound clip now that projects are part of the library in 10.1, but is there a good way to combine multiple projects (for each section) into a single master project, or would I still need to copy the contents of each project and paste in the master project?
    Maybe I am best to continue with my current workflow.

    Just saw the discussion title - should have said "What is the best approach for combining projects"?

  • Which one is the best approach for responsive UI development option in SharePoint 2013

Which is the best approach for responsive UI development in SharePoint 2013:
device channels or responsive UI (HTML, CSS)?

    In practice you're probably going to end up with a combination. A couple of device channels for classes of device and then responsive UI within those channels to adjust to particular devices within the classes.
    Of course the real answer is as always 'it depends' as you'll need to pick the best option for each client based on their needs.

  • BAPI_SALESORDER_CREATEFROMDAT2 with each item having diff Ship to party

    Hi Abapers,
    Can anyone guide me in this scenario:
    Creation of sales order using BAPI_SALESORDER_CREATEFROMDAT2 with each item having different ship to party.
    Ex :
             Item Material Qty Delivery date Ship to party
             10   p-100     2     24.12.2011    1020
             20   p-100     4     26.12.2011    1050.
Can we use this BAPI for the present scenario? If yes, how do we pass the multiple ship-to parties? When I create the sales order through VA01 it gets created, but not with this BAPI.
    Thanks for supporting.

Pass table ORDER_PARTNERS with one ship-to entry per item, for example:
PARTN_ROLE = 'WE'    (ship-to party)
PARTN_NUMB = '1020'
ITM_NUMBER = '000010'
PARTN_ROLE = 'WE'
PARTN_NUMB = '1050'
ITM_NUMBER = '000020'
and so on.

What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
1. Using separate schemas for the various types of users. (We can store only the relevant data in a particular schema, so the number of records per table can be reduced by replicating tables, but we then also have to maintain all the data in another schema, so we would need to update two schemas in a given session - the user's own schema and the schema holding all the data - and that may lead to update problems.)
    OR
    2. Using single schema for all users.
Note: all users may access the same tables, and there may be many more records than in the previous case.
Which is the better approach?
Please share your valuable ideas.

"It is true, but I want a solution from you all" - that is like asking me to tell you how to fix your friend's car.

  • Best approach for setting up iCloud for a child

    Hi,
I have bought my son an iPad Air for his birthday. We already have an iMac in the house which is shared by us all, with each person having their own login. My original question was going to be how I can create an iCloud login for him. I tried creating one yesterday, but to create the iCloud login I needed an Apple ID, and it seems he is too young to have his own Apple ID. Is it possible for me to have the one Apple ID which we use for purchases, but for myself and my son to have separate iCloud logins?
But now that I think about this more, I am thinking of creating a separate Apple ID for him (maybe in my name), because I do not necessarily want him to access some of the movies and songs I have purchased from iTunes, as they are not appropriate for a child.
So what is the typical setup for Apple IDs and iCloud accounts for a family?
    Thanks
    Andy

Previously, before the introduction of the Family Sharing feature, I created Apple IDs for my kids under my name and with my information, since Apple requires the account creator/holder to be over 13. With the introduction of Family Sharing, Apple's policy has changed a bit, but I find the system they created still has some kinks to be worked out. My personal recommendation is to proceed just as you were thinking and make adjustments as Apple refines its policies.
Practically, you will have your son's device sign in to his ID under:
Settings - Messages
Settings - FaceTime
Settings - iCloud
You will share an ID in:
Settings - iTunes & App Store
You will disable automatic downloads on your and your son's devices,
and most importantly you will have to limit your exposure to your son using your credit card, as well as control his download options in Settings - General - Restrictions, where you can disable downloads by age or content type.

  • Best practice for dealing with Recordsets

    Hi all,
I'm wondering what the best practice is for dealing with data retrieved via JDBC as ResultSets, without involving third-party products such as Hibernate. I've been told NOT to use ResultSets throughout my applications since they take up resources and are expensive. I'm wondering which collection type is best to convert ResultSets into. The apps I'm building are web-based, using JSPs as the presentation layer, plus beans and servlets.
    Many thanks
    Erik

There is no requirement that DAOs have a direct mapping to database tables. One of the advantages of the DAO pattern is that the business layer isn't directly aware of the persistence layer. If the joined data is used in the business code as if it were an unnormalized table, then you might want to provide a DAO for the joined data. If the joined data provides a subsidiary object within some particular object, you might add the access method to the DAO for the outer object.
E.g., in a user permissioning system where:
1 user has many userRoles
1 role has many userRoles
1 role has many rolePermissions
1 permission has many rolePermissions
i.e. there is a many-to-many relationship between users and roles, and between roles and permissions.
The administrator needs to be able to add and delete permissions for roles and roles for users, so the CRUD for the rolePermissions table is probably most useful in the RoleDAO, and the CRUD for the userRoles table in the UserDAO. DAOs can also call each other.
During operation the system needs to be able to get all permissions for a user at login, so the UserDAO should provide a readPermissions method that does a rather complex join across the user, userRole, rolePermission and permission tables.
Note that if the system I just described were done with LDAP, a hierarchical database or an object database, the userRoles and rolePermissions tables wouldn't even exist; these are RDBMS artifacts, since relational databases don't understand many-to-many relationships. This is a good reason to avoid providing DAOs that give access to those tables.
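
A minimal sketch of that readPermissions idea (the table and column names are assumptions, not taken from the post); it also shows the ResultSet being converted into a plain List before it leaves the DAO, which answers the original collection-type question:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Part of a hypothetical UserDAO: loads all permission names for one user
// by joining across the two link tables described above.
public class UserDao {
    private final Connection connection;

    public UserDao(Connection connection) {
        this.connection = connection;
    }

    public List<String> readPermissions(long userId) throws SQLException {
        String sql =
            "SELECT DISTINCT p.name " +
            "FROM users u " +
            "JOIN user_roles ur ON ur.user_id = u.id " +
            "JOIN role_permissions rp ON rp.role_id = ur.role_id " +
            "JOIN permissions p ON p.id = rp.permission_id " +
            "WHERE u.id = ?";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setLong(1, userId);
            try (ResultSet rs = ps.executeQuery()) {
                List<String> permissions = new ArrayList<>();
                while (rs.next()) {
                    permissions.add(rs.getString("name"));
                }
                return permissions;
            }
        }
    }
}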

  • Approach for generating a report while having Func Specs of the Report

    Hi Experts,
Please tell me what the approach should be for generating a report when you have the functional specs for the report.
    Thnx
    Sid

    Hi,
First, you need to understand the business need and the related data.
Then check whether there is already a standard SAP report that meets the business need.
Try to identify the database tables and related fields required for the need,
and the relationships between those database tables so you can map the data.
Accordingly, prepare the technical specs or go ahead with the coding.
    Thanks.
    If this helps you reward with points.

  • Best setup for iMac with SSD & HDD? Best location of scratch & home folders

    Best setup for iMac with SSD and HDD? Best location of scratch & home folders?
    Computer:
    iMac 2.93 GHz Quad core i7, 8GB RAM, 1 TB HDD + 256 GB SSD
    There is not much info from Apple about the best way to set up an iMac with a Hard Drive and Solid state drive. I’ve looked at a few of the forum posts across the web and came up with a plan and lots of questions. (I do use photoshop frequently, but not on a professional level):
    1. I will keep OS and Applications on SSD
2. About moving the home folder: I saw some posts about moving the whole home folder, but it makes more sense to me to move only selected folders within the home folder to make the best use of the SSD. So I will keep the home folder on the SSD, but move certain folders (Documents/Music/iPhoto/Downloads) to the 1 TB HDD via instructions I found on the macintoshperformanceguide website:
cd                                                # start in the home folder
sudo cp -r Documents /Volumes/Master              # copy the folder to the HDD volume (named "Master" here)
sudo rm -rf Documents                             # remove the original only after verifying the copy
sudo ln -s /Volumes/Master/Documents Documents    # leave a symlink in its place
    3. I would like to get 8 more RAM when I can afford it
    4. I will attach an external hard drive for most of my documents and backup storage
    5. Now here is where I’m not sure what’s best:
a. Should I partition my internal 1 TB hard drive and use the first partition as a scratch disc for Photoshop and other applications? How big should that partition be? Is there any benefit to this if the drive is partitioned?
    b. Should I use an external drive as a scratch disc?
    c. Any advice on a good 1-2 TB external drive?
    d. Should I just leave things in factory settings?
Don't assume I know the basics - I got all of the above just by searching around. Any advice and commentary is appreciated. Thanks.
    Message was edited by: sfandtheworld

Thanks for the advice and the links. Yes, I would like to speed up PS as much as possible.
I wonder if putting the scratch disc on the same drive as the OS would cause them to interfere with each other? Even if they are on different partitions, they could not be accessed at the same time, or could they? That's why I was wondering if I should place the scratch disc on the internal HDD - but then I don't know how much to partition for it (or whether to partition at all).
Also, I have read in a few places that too much reading/writing wears an SSD down over time. Is this more of a theoretical concern? It doesn't quite make sense to me since it has no moving parts.
Thanks again for the advice... I'm gonna go digest those links.
