About data structure

hi all,
I have another data-structure question, this time concerning the JTree I asked about earlier. How do I define/design a data structure that will suit my JTree application?
My JTree takes in a lot of information as its data, with at most 4 levels including the root. The data is passed from the server and may not arrive in order. How do I handle that?
thanks a lot

hmm... I have modified the data structure class a bit, as described below:
I have finished all of the JTree part of my applet; the only remaining headache is how to define my JTree's data structure efficiently. Many different pieces of data can be passed to the applet to create the JTree, and they may not arrive in order, so how can I define a suitable data structure to handle the data coming into the JTree?
What I have done so far is create a class (think of it as the data object) to hold the data: the server passes data matching the class structure, and from each object I create a JTree node. In the data object I keep a field that records the level of the tree node, which lets me place it correctly in the JTree, but is there a better way?
thank you
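
For what it's worth, here is a minimal sketch of one common approach, assuming each record carries its own id and its parent's id (the field names and record layout here are my assumptions, not your actual class). Nodes whose parent has not arrived yet are parked until the parent shows up, so arrival order stops mattering, and no explicit level field is needed because depth falls out of the parent links:

    import java.util.*;
    import javax.swing.JTree;
    import javax.swing.tree.DefaultMutableTreeNode;

    // Builds a JTree from flat (id, parentId, label) records arriving in any order.
    public class UnorderedTreeBuilder {
        private final Map<String, DefaultMutableTreeNode> nodes = new HashMap<String, DefaultMutableTreeNode>();
        private final Map<String, List<DefaultMutableTreeNode>> orphans = new HashMap<String, List<DefaultMutableTreeNode>>();
        private final DefaultMutableTreeNode root = new DefaultMutableTreeNode("root");

        public UnorderedTreeBuilder() {
            nodes.put("root", root);
        }

        public void add(String id, String parentId, String label) {
            DefaultMutableTreeNode node = new DefaultMutableTreeNode(label);
            nodes.put(id, node);
            DefaultMutableTreeNode parent = nodes.get(parentId);
            if (parent != null) {
                parent.add(node);
            } else {
                // Parent not seen yet: park this node until it arrives.
                List<DefaultMutableTreeNode> waiting = orphans.get(parentId);
                if (waiting == null) {
                    waiting = new ArrayList<DefaultMutableTreeNode>();
                    orphans.put(parentId, waiting);
                }
                waiting.add(node);
            }
            // Adopt any children that arrived before this node did.
            List<DefaultMutableTreeNode> children = orphans.remove(id);
            if (children != null) {
                for (DefaultMutableTreeNode child : children) {
                    node.add(child);
                }
            }
        }

        public JTree toTree() {
            return new JTree(root);
        }
    }

Feeding every record through add() and calling toTree() at the end (or firing a model update after each batch) handles the 4-level case without any ordering guarantees from the server.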

Similar Messages

  • OC4J: marshalling does not recreate the same data structure on the client

    Hi guys,
    I am trying to use OC4J as an EJB container and have come across the following problem, which looks like a bug.
    I have a value object method that returns an instance of ArrayList with references to other value objects of the same class. The value objects have references to other value objects. When this structure is marshalled across the network, we expect it to be recreated as is but that does not happen and instead objects get duplicated.
    Suppose we have 2 value objects: ValueObject1 and ValueObject2. ValueObject1 references ValueObject2 via its private field, and ValueObject2 references ValueObject1. Both value objects are returned by our method in an ArrayList structure. Here is how it looks (the number after @ represents an address in memory):
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@2
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@1
    We would expect to see the same (except exact addresses) after marshalling. Here is what we get instead:
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@3
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@4
    It can be seen that objects get unnecessarily duplicated – the instance of ValueObject1 referenced by ValueObject2 is no longer the same instance as the one referenced by the ArrayList.
    This not only breaks referential integrity and the structure and consistency of the data, but dramatically increases the amount of information sent across the network. The problem was discovered when we found that a relatively small but complicated structure that serializes into a 142kb file requires about 20Mb of network communication. All this extra traffic is duplicated object instances.
    I have created a small test case to demonstrate the problem and let you reproduce it.
    Here is RMITestBean.java:
    package com.cramer.test;
    import javax.ejb.EJBObject;
    import java.util.*;
    public interface RMITestBean extends EJBObject {
        public ArrayList getSomeData(int testSize) throws java.rmi.RemoteException;
        public byte[] getSomeDataInBytes(int testSize) throws java.rmi.RemoteException;
    }
    Here is RMITestBeanBean.java:
    package com.cramer.test;
    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;
    import java.util.*;
    public class RMITestBeanBean implements SessionBean {
        private SessionContext context;
        SomeVO someVO;

        public void ejbCreate() {
            someVO = new SomeVO(0);
        }
        public void ejbActivate() {}
        public void ejbPassivate() {}
        public void ejbRemove() {}
        public void setSessionContext(SessionContext ctx) {
            this.context = ctx;
        }

        public byte[] getSomeDataInBytes(int testSize) {
            ArrayList someData = getSomeData(testSize);
            try {
                java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                objectOutputStream.writeObject(someData);
                objectOutputStream.flush();
                System.out.println(" serialised output size: "+byteOutputStream.size());
                byte[] bytes = byteOutputStream.toByteArray();
                objectOutputStream.close();
                byteOutputStream.close();
                return bytes;
            } catch (Exception e) {
                System.out.println("Serialisation failed: "+e.getMessage());
                return null;
            }
        }

        public ArrayList getSomeData(int testSize) {
            // Create array of objects
            ArrayList someData = new ArrayList();
            for (int i=0; i<testSize; i++) {
                someData.add(new SomeVO(i));
            }
            // Interlink all the objects
            for (int i=0; i<someData.size()-1; i++) {
                for (int j=i+1; j<someData.size(); j++) {
                    ((SomeVO)someData.get(i)).addValueObject((SomeVO)someData.get(j));
                    ((SomeVO)someData.get(j)).addValueObject((SomeVO)someData.get(i));
                }
            }
            // Print out the data structure
            System.out.println("Data:");
            for (int i = 0; i<someData.size(); i++) {
                SomeVO tmp = (SomeVO)someData.get(i);
                System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
                System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
                for (int j = 0; j<tmp.getValueObjectCount(); j++) {
                    SomeVO tmp2 = tmp.getValueObject(j);
                    System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
                    System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
                }
            }
            // Check the serialised size of the structure
            try {
                java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                objectOutputStream.writeObject(someData);
                objectOutputStream.flush();
                System.out.println("Serialised output size: "+byteOutputStream.size());
                objectOutputStream.close();
                byteOutputStream.close();
            } catch (Exception e) {
                System.out.println("Serialisation failed: "+e.getMessage());
            }
            return someData;
        }
    }
    Here is RMITestBeanHome:
    package com.cramer.test;
    import javax.ejb.EJBHome;
    import java.rmi.RemoteException;
    import javax.ejb.CreateException;
    public interface RMITestBeanHome extends EJBHome {
        RMITestBean create() throws RemoteException, CreateException;
    }
    Here is ejb-jar.xml:
    <?xml version = '1.0' encoding = 'windows-1252'?>
    <!DOCTYPE ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Enterprise JavaBeans 2.0//EN" "http://java.sun.com/dtd/ejb-jar_2_0.dtd">
    <ejb-jar>
    <enterprise-beans>
    <session>
    <description>Session Bean ( Stateful )</description>
    <display-name>RMITestBean</display-name>
    <ejb-name>RMITestBean</ejb-name>
    <home>com.cramer.test.RMITestBeanHome</home>
    <remote>com.cramer.test.RMITestBean</remote>
    <ejb-class>com.cramer.test.RMITestBeanBean</ejb-class>
    <session-type>Stateful</session-type>
    <transaction-type>Container</transaction-type>
    </session>
    </enterprise-beans>
    </ejb-jar>
    And finally the application that tests the bean:
    package com.cramer.test;
    import java.util.*;
    import javax.rmi.*;
    import javax.naming.*;
    public class RMITestApplication {
        final static boolean HARDCODE_SERIALISATION = false;
        final static int TEST_SIZE = 2;

        public static void main(String[] args) {
            Hashtable props = new Hashtable();
            props.put(Context.INITIAL_CONTEXT_FACTORY, "com.evermind.server.rmi.RMIInitialContextFactory");
            props.put(Context.PROVIDER_URL, "ormi://lil8m:23792/alexei");
            props.put(Context.SECURITY_PRINCIPAL, "admin");
            props.put(Context.SECURITY_CREDENTIALS, "admin");
            try {
                // Get the JNDI initial context
                InitialContext ctx = new InitialContext(props);
                NamingEnumeration list = ctx.list("comp/env/ejb");
                // Get a reference to the home object, which we use to create the EJB object
                Object objJNDI = ctx.lookup("comp/env/ejb/RMITestBean");
                // Now narrow it to an RMITestBeanHome object
                RMITestBeanHome testBeanHome = (RMITestBeanHome)PortableRemoteObject.narrow(objJNDI,RMITestBeanHome.class);
                // Create the RMITestBean remote interface
                RMITestBean testBean = testBeanHome.create();
                ArrayList someData = null;
                if (!HARDCODE_SERIALISATION) {
                    // ############################### Alternative 1 ##############################
                    // ## This relies on marshalling serialisation ##
                    someData = testBean.getSomeData(TEST_SIZE);
                    // ############################ End of Alternative 1 ##########################
                } else {
                    // ############################### Alternative 2 ##############################
                    // ## This gets a serialised byte stream and de-serialises it ##
                    byte[] bytes = testBean.getSomeDataInBytes(TEST_SIZE);
                    try {
                        java.io.ByteArrayInputStream byteInputStream = new java.io.ByteArrayInputStream(bytes);
                        java.io.ObjectInputStream objectInputStream = new java.io.ObjectInputStream(byteInputStream);
                        someData = (ArrayList)objectInputStream.readObject();
                        objectInputStream.close();
                        byteInputStream.close();
                    } catch (Exception e) {
                        System.out.println("Serialisation failed: "+e.getMessage());
                    }
                    // ############################ End of Alternative 2 ##########################
                }
                // Print out the data structure
                System.out.println("Data:");
                for (int i = 0; i<someData.size(); i++) {
                    SomeVO tmp = (SomeVO)someData.get(i);
                    System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
                    System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
                    for (int j = 0; j<tmp.getValueObjectCount(); j++) {
                        SomeVO tmp2 = tmp.getValueObject(j);
                        System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
                        System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
                    }
                }
                // Print out the size of the serialised structure
                try {
                    java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                    java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                    objectOutputStream.writeObject(someData);
                    objectOutputStream.flush();
                    System.out.println("Serialised output size: "+byteOutputStream.size());
                    objectOutputStream.close();
                    byteOutputStream.close();
                } catch (Exception e) {
                    System.out.println("Serialisation failed: "+e.getMessage());
                }
            } catch (Exception ex) {
                ex.printStackTrace(System.out);
            }
        }
    }
    The parameters you might be interested in playing with are HARDCODE_SERIALISATION and TEST_SIZE, defined at the beginning of RMITestApplication.java. HARDCODE_SERIALISATION is a flag that specifies whether Java serialisation should be used to pass the data across, or whether we should rely on OC4J marshalling. TEST_SIZE defines the size of the object graph and the ArrayList structure. The bigger this size, the more dramatic the effect you get from data duplication.
    The test case outputs the structure both on the server and on the client and prints out the size of the serialised structure. That gives us a sufficient comparison, as both the structure and its size should be the same on the client and on the server.
    The test case also demonstrates that the problem is specific to OC4J. Standard Java serialisation does not suffer from the same flaw. However, using standard serialisation the way I did in the test case code is generally unacceptable, as it breaks the transparency benefit and complicates the interfaces.
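    As a side note, that last claim is easy to check in isolation with a snippet like the following (my own, independent of the test case): plain Java serialisation writes each object once and uses back-references afterwards, so shared references survive the round trip.

    import java.io.*;
    import java.util.*;

    // Demonstrates that ObjectOutputStream preserves shared references:
    // the same instance written twice is read back as the same instance.
    public class IdentityDemo {
        public static void main(String[] args) throws Exception {
            List<Object> list = new ArrayList<Object>();
            Object shared = new ArrayList<Object>();
            list.add(shared);
            list.add(shared); // same instance twice

            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(list);
            out.close();

            ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()));
            List<?> copy = (List<?>) in.readObject();
            System.out.println(copy.get(0) == copy.get(1)); // prints true
        }
    }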
    To run the test case:
    1) Modify the provider URL parameter value on line 15 of RMITestApplication.java for your environment.
    2) Deploy the bean to the server.
    3) Run RMITestApplication on a client PC.
    4) Compare the outputs on the server and on the client.
    I hope someone can reproduce the problem and give their opinion, and possibly point to the solution if there is one at the moment.
    Cheers,
    Alexei

    Hi,
    Eugene, wrong end user recovery. Alexey is referring to client desktop end user recovery, which is entirely different.
    Alexy - As noted in the previous post:
    http://social.technet.microsoft.com/Forums/en-US/bc67c597-4379-4a8d-a5e0-cd4b26c85d91/dpm-2012-still-requires-put-end-users-into-local-admin-groups-for-the-purpose-of-end-user-data?forum=dataprotectionmanager
    Each recovery point has user permissions tied to it, so it's not possible to retroactively give users permissions. Implement the below and going forward all users can restore their own files.
    This is a hands off solution to allow all users that use a machine to be able to restore their own files.
     1) Make these two cmd files and save them in c:\temp
     2) Using Windows Task Scheduler, schedule addperms.cmd to run daily. Any new users that log onto the machine will automatically be able to restore their own files.
    <addperms.cmd>
     Cmd.exe /v /c c:\temp\addreg.cmd
    <addreg.cmd>
     set users=
     echo Windows Registry Editor Version 5.00>c:\temp\perms.reg
     echo [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\ClientProtection]>>c:\temp\perms.reg
     FOR /F "Tokens=*" %%n IN ('dir c:\users\*. /b') do set users=!users!%Userdomain%\\%%n,
     echo "ClientOwners"=^"%users%%Userdomain%\\bogususer^">>c:\temp\perms.reg
     REG IMPORT c:\temp\perms.reg
     Del c:\temp\perms.reg
    Regards, Mike J. [MSFT]

  • What is the best data structure for loading an enterprise Power BI site?

    Hi folks, I'd sure appreciate some help here!
    I'm a kinda old-fashioned gal and a bit of a traditionalist, building enterprise data warehouses out of Analysis Service hypercubes with a whole raft of MDX for analytics.  Those puppies would sit up and beg when you asked them to deliver up goodies
    to SSRS or PowerView.
    But Power BI is a whole new game for me.  
    Should I be exposing each dimension and fact table in the relational data warehouse as a single Odata feed?  
    Should I be running Data Management Gateway and exposing each table in my RDW individually?
    Should I be flattening my stars and snowflakes and creating a very wide First Normal Form dataset with everything relating to each fact? 
    I guess my real question, folks, is what's the optimum way of exposing data to the Power BI cloud?  
    And my subsidiary question is this: am I right in saying that all the data management, validation, cleansing, and regular ETL processes are still required before the data is suitable to expose to Power BI?
    Or, to put it another way, is it not the case that you need to have a clean and properly structured data warehouse
    before the data is ready to be massaged and presented by Power BI? 
    I'd sure value your thoughts and opinions,
    Cheers, Donna
    Donna Kelly

    Dear All,
    My original question was: 
    what's the optimum way of exposing data to the Power BI cloud?
    Having spent the last month faffing about with Power BI – and reading about many people’s experiences using it – I think I can offer a few preliminary conclusions.
    Before I do that, though, let me summarise a few points:
    Melissa said “My initial thoughts:  I would expose each dim & fact as a separate OData feed” and went on to say “one of the hardest things . . . is
    the data modeling piece . . . I think we should try to expose the data in a way that'll help usability . . . which wouldn't be a wide, flat table ”.
    Greg said “data modeling is not a good thing to expose end users to . . . we've had better luck with is building out the data model, and teaching the users
    how to combine pre-built elements”
    I had commented “. . . end users and data modelling don't mix . . . self-service so
    far has been mostly a bust”.
    Here at Redwing, we give out a short White Paper on Business Intelligence Reporting.  It goes to clients and anyone else who wants one.  The heart
    of the Paper is the Reporting Pyramid, which states:  Business intelligence is all about the creation and delivery of actionable intelligence to the right audience at the right time
    For most of the audience, that means Corporate BI: pre-built reports delivered on a schedule.
    For most of the remaining audience, that means parameterised, drillable, and sliceable reporting available via the web, running the gamut from the dashboard to the details, available on
    demand.
    For the relatively few business analysts, that means the ability for business users to create their own semi-customised visual reports when required, to serve
    their audiences.
    For the very few high-power users, that means the ability to interrogate the data warehouse directly, extract the required data, and construct data mining models, spreadsheets and other
    intricate analyses as needed.
    On the subject of self-service, the Redwing view says: although many vendors want to sell self-service reporting tools to the enterprise, the facts of the matter are these:
    • 80%+ of all enterprise reporting requirements are satisfied by corporate BI . . . if it’s done right.
    • Very few staff members have the time, skills, or inclination to learn and employ self-service business intelligence in the course of their activities.
    I cannot just expose raw data and tell everyone to get on with it.  That way lies madness!
    I think that clean and well-structured data is a prerequisite for delivering business intelligence. 
    Assuming that data is properly integrated, historically accurate and non-volatile as well, then I've just described
    a data warehouse, which is the physical expression of the dimensional model.
    Therefore, exposing the presentation layer of the data warehouse is – in my opinion – the appropriate interface for self-service business intelligence.
    Of course, we can choose to expose perspectives as well, which is functionally identical to building and exposing subject data marts.
    That way, all calculations, KPIs, definitions, and even field names are all consistent, because they all come from the single source of truth and not from spreadmart hell.
    So my conclusion is that exposing the presentation layer of the properly modelled data warehouse is – in general - the way to expose data for self-service.
    That’s fine for the general case, but what about Power BI?  Well, it’s important to distinguish between new capabilities in Excel, and the ones in Office 365.
    I think that to all intents and purposes, we’re talking about exposing data through the Data Management Gateway and reading it via Power Query.
    The question boils down to what data structures should go down that pipe. 
    According to "Create a Data Source and Enable OData Feed in Power BI Admin Center", the possibilities are tables and views. I guess I could have repeating data in there, so it could be a flattened structure of the kind Melissa doesn’t like (and neither do I).
    I could expose all the dims and all the facts . . . but that would mean essentially re-building the DW in the PowerPivot DM, and that would be just plain stoopid.  I mean, not a toy system, but a real one with scores of facts and maybe hundreds of dimensions?
    Fact is, I cannot for the life of me see what advantages DMG/PQ
    has over just telling corporate users to go directly to the Cube Perspective they want, that has already all the right calcs, KPIs, security, analytics, field names . . . and most importantly, is already modelled correctly!
    If I’m a real Power User, then I can use PQ on my desktop to pull mashup data from the world, along with all my on-prem data through my exposed Cube presentation layer, and PowerPivot the
    heck out of that to produce all the reporting I’d ever want.  It'd be a zillion times faster reading the data directly from the Cube instead of via the DMG, as well (I think Power BI performance sucks, actually).
    Of course, your enterprise might not
    have a DW, just a heterogeneous mass of dirty unstructured data.  If that’s the case,
    choosing Power BI data structures is the least of your problems!  :-)
    Cheers, Donna
    Donna Kelly

  • Lease Out Contract - Master Data Structural Relations

    Dear Gurus,
    I am new to FX RE and if my question is too simple, please excuse me!
    I am researching the possibilities for the master data structure of lease-in contracts and have some serious confusion.
    The documentation says that objects can be assigned to a contract individually, or many objects can be related to one contract. At the same time it is said that rental units are assigned only to lease-out contracts.
    How should I proceed if my company has many lease-in contracts for different objects? Is it a must for the contract to be assigned to an object, or can it be managed just as a contract? I saw that from the RE Navigator you can create a contract; does that mean that in FX RE the contract itself is considered a master data record?
    Any opinions will be highly appreciated.
    Many thanks in advance!

    Dear Kumar,
    Sorry if I've been confusing. I will try to make it more clear.
    I am working for a telco and I am researching the possibility of implementing FX-RE internally. We will not need all the functionalities of the module, only the ones that allow you to manage your lease-in contracts. We've got lots of base station contracts (as you can imagine) where our company is a tenant in a lease contract.
    I did some research in relation to the master data structure and can't work out how our master data should be structured. The business requirements are handling the advance payments, accruals and postings, giving and receiving notices, adjustments of rent, etc. Is it possible to create the contract within the system and set all the conditions directly in it, or do we need to have another master record other than the contract itself?
    My confusion comes from the fact that the standard solution for lease-outs suggests creating a rental unit first and then assigning the contract to it. But since we will be tenants, we don't need more detailed information on these objects; we just need the conditions in the contract that will let us handle all the activities that arise in connection with it.
    I hope that I put the question more clearly this time.
    Can you also tell me if it is possible to create a template and, after entering the conditions for the particular contract, generate a hard copy of it from the system?
    Many thanks, and sorry if I can't make myself more clear!

  • What's the easiest way to move app data and data structures to a server?

    Hi guys,
    I've been developing my app locally with Apex 4.2 and Oracle 11g XE on Windows 7. It's getting close to the time to move the app to an Oracle Apex server. I imagine Export/Import is the way to move the app. But what about the app tables and data (those tables/data like "customer" and "account" created specifically for the app)? I've been using a data modeling tool, so I can run a DDL script to create the data structures on the server. What is the easiest way to move the app data to the server? Is there a way to move both structures and data in one process?
    Thanks,
    Kim

    There's probably another way to get here, but, in SQL Developer, on the tree navigation, expand the objects down to your table, right click, then click EXPORT... there you will see all the options. This is a tedious process and it sucks IMO, but yeah, it works. It sucks mostly because 1) it's one table at a time, and 2) if your data model is robust and has constraints, sequences, and triggers, then you'll have to disable them all for the insert and hope that you can re-enable constraints, etc. without a glitch (good luck, unless you have only a handful of tables).
    I prefer using the Oracle command-line EXP to export an entire schema, then on the target server I use IMP to import the schema. That way, it's near exact. This makes life messy if you develop more than one application in a single schema, and I've felt that pain -- however -- it's a whole lot easier to drop tables and other objects than it is to create them! (Thus, even if EXP/IMP moved more than you wanted to "move", just blow away what you don't want on the target after the fact.)
    You could use Oracle's Data Pump method too.
    Alternatively, what can be done, IF you have access to both servers from your SQL developer instance (or if you can tnsping them both already from the command line, you can use SQL*PLUS), is run a script that will identify your apex apps' objects (usually by prefix to object names, like EBA_PROJ_%, etc) and do all the manual work for you. I've created a script that does exactly this so that I can move data from dev to prod servers over a dblink. It's tricky because of the order that must be executed to disable constraints and then re-enable them, and of course, trickier if you don't consistently prefix ALL of your "application objects"... (tables, views, triggers, sequences, functions, procs, indexes, etc)

  • SAP ABAP Proxy - recursive data structure problem

    Hi,
    For our customer we are trying to bind SAP to GW Calendar using GW Web Services. We tried to generate an ABAP Proxy from the groupwise.wsdl file, but there is a problem: GW uses recursive data structures, which ABAP Proxy cannot use. Is there any simple solution to this problem?
    Best regards
    Pawel

    At least I don't have a clue about ABAP Proxy.
    You are pretty much on your own unless someone
    else has tried it.
    Preston
    >>> On Tuesday, August 03, 2010 at 8:26 AM,
    pawelnowicki<[email protected]> wrote:
    > Hi,
    >
    > For our customer we try to bind SAP with GW Calendar using GW Web
    > Services. We tried to generate ABAP Proxy from groupwise.wsdl file but
    > there are problems: GW uses recursive data structures what ABAP Proxy
    > can not use. Is there any simple solution to this problem?
    >
    > Best regards
    > Pawel

  • Looking for suitable and robust data structure?

    I have an application where I have to give the user an option to correlate the
    response returned by a method call with other methods' arguments.
    I have provided a JTree structure where the user can see methods and their
    responses and can correlate them by selecting the appropriate arguments
    where these values could be used.
    I used a data structure to store such relations:
    a HashMap where every key is a unique number representing a method,
    and the value is a matrix.
    In each matrix, the first element of every row is the Response object and
    all remaining elements in the row are Argument objects. :)
    Now maybe someone would ask: since for every method there is always only one return type,
    why such a matrix, with a Response object as the first element of each row?
    The reason is that I gave the user an option to relate not only the response as a whole Object
    to different arguments of different methods; the user can also select a field of the response
    object if it is a complex type, or select any element if the response is a Collection object,
    to relate with arguments.
    I am not sure whether this structure is appropriate or whether I could use a more robust
    and memory-efficient structure.
    Please give me your useful suggestions.

    J2EER wrote:
    > I have an application where i have to give user an option to correlate the
    > response returned by a method call with other methods arguments.
    I really don't understand you.
    > I have provided a jtree struture where user can see methods and their
    > responses and can correlate them by selecting appropriate arguments
    > where these values could be used.
    Huh?
    > I used a data structure to store such relations like
    Err, what happened to the rest of your sentence? Or is the next part the rest of it?
    > A hashmap where every key is a unique number representing method number
    > and value ia a matrix.
    Where does this matrix come from? In my book, a matrix is a 2-dimensional structure. What do you mean by it?
    > In each matrix, the first element of every row would be the Response object and
    > all remaining elments in the row would be Argument objects. :)
    > Now maybe someone would ask, for every method there is always only one return type
    > then why such a matrix with Reponse object as first element of each row.
    You're still talking about methods? If so, don't all methods have just one return type?
    > The reason is that i gave user an option to not only relate response as a whole Object to
    > different arguments of different methods but user can select a field of this response object if this
    > is a complex type or user can select any element if the response is a Collection object to relate
    > with arguments.
    > I am not sure whether this structure is more appropriate or i can use more robust
    > and memory efficient structure.
    > Please give me your useful suggestions.
    Perhaps you could give a more detailed explanation and, more importantly, provide some examples of what you mean.
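    For later readers, here is a minimal sketch of the structure as I read the original post. Response and Argument stand in for the poster's own classes, and all names here are guesses at intent rather than real code from the application:

    import java.util.*;

    // Response and Argument are placeholders for the poster's own classes.
    class Response { }
    class Argument { }

    class CorrelationStore {
        // Method number -> rows of [response (or a field/element of it), arg, arg, ...].
        private final Map<Integer, List<Object[]>> relations = new HashMap<Integer, List<Object[]>>();

        void relate(int methodNumber, Object responseOrPart, Argument... args) {
            Object[] row = new Object[args.length + 1];
            row[0] = responseOrPart; // whole response, one field of it, or one collection element
            System.arraycopy(args, 0, row, 1, args.length);
            List<Object[]> rows = relations.get(Integer.valueOf(methodNumber));
            if (rows == null) {
                rows = new ArrayList<Object[]>();
                relations.put(Integer.valueOf(methodNumber), rows);
            }
            rows.add(row);
        }

        List<Object[]> rowsFor(int methodNumber) {
            List<Object[]> rows = relations.get(Integer.valueOf(methodNumber));
            return rows == null ? Collections.<Object[]>emptyList() : rows;
        }
    }

    A Map of Integer to List<Object[]> avoids a fixed-size matrix and keeps rows of different lengths cheap; whether it is the most memory-efficient choice depends on how many relations each method really has.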

  • Efficient data structure to implement simple text editor?

    I was given this problem in an interview:
    What data structure would you use to implement a simple text editor that does these 4 functions:
    a) goto(line number)
    b) insert(char input,location)
    c) delete(location)
    d) printAll() //print entire file
    Given that I'm such a newb, I was stumped. I came up with making a 2D array that would allow O(1) time for goto, but O(n) for everything else (shifting everything in the array). There were other downfalls too, dealing with space issues and such, but he wanted me to optimize this data structure. I then came up with a linked list of arrays, but that had similar problems.
    But thinking about it further is driving me a little crazy, so I'm wondering if you guys have any suggestions on how to answer this question...
    one thing that came to mind afterwards is to implement the data structure as a binary tree, where each node contains:
    class Node {
        char theChar;
        int position; // ie 0 = first character in the file, and 81 could be the first character in the 2nd line
        Node left, right, parent;
    }
    so how it works is the cursor would know where the location was, so I would know where to delete and insert within the tree.
    insert {
        // search for location to insert after (log n)
        // create new character node and append to current node
        // increment position for all subsequent children
    }
    delete(x) {
        // search for x position
        // if found, remove node and decrement position value for all children
    }
    goto(line #) {
        // return line # * 80 (or whatever max length for a single line)
    }
    the major problem I see here is balancing the tree after every insert/delete.
    Thanks in advance.

    One of many great things emacs has given us is the gap buffer:
    http://en.wikipedia.org/wiki/Gap_buffer
    To see a Java implementation of this you can look in the Java SDK source. The document model (I forget exactly what it's called) used in Swing uses a gap buffer.
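    To make the idea concrete, here is a minimal gap-buffer sketch in Java (my own illustration of the technique, not the Swing implementation). The text lives in one array with a movable "gap" at the cursor: inserting or deleting at the cursor is O(1), and only moving the cursor costs O(distance).

    // Minimal gap buffer: text is stored in buf with an unused "gap" in the middle.
    class GapBuffer {
        private char[] buf = new char[16];
        private int gapStart = 0;        // first index of the gap (== cursor position)
        private int gapEnd = buf.length; // one past the last index of the gap

        int length() { return buf.length - (gapEnd - gapStart); }

        // Slide the gap so it starts at pos; cost is proportional to the distance moved.
        void moveGap(int pos) {
            if (pos < gapStart) {        // shift text rightwards, across the gap
                int n = gapStart - pos;
                System.arraycopy(buf, pos, buf, gapEnd - n, n);
                gapStart = pos;
                gapEnd -= n;
            } else if (pos > gapStart) { // shift text leftwards, across the gap
                int n = pos - gapStart;
                System.arraycopy(buf, gapEnd, buf, gapStart, n);
                gapStart = pos;
                gapEnd += n;
            }
        }

        void insert(char c, int pos) {
            if (gapStart == gapEnd) grow();
            moveGap(pos);
            buf[gapStart++] = c;         // O(1) once the gap sits at the cursor
        }

        void delete(int pos) {           // delete the character at pos
            moveGap(pos + 1);
            gapStart--;                  // just widen the gap
        }

        private void grow() {            // double the array, keeping the text around a new gap
            char[] bigger = new char[buf.length * 2];
            System.arraycopy(buf, 0, bigger, 0, gapStart);
            int tail = buf.length - gapEnd;
            System.arraycopy(buf, gapEnd, bigger, bigger.length - tail, tail);
            gapEnd = bigger.length - tail;
            buf = bigger;
        }

        public String toString() {
            return new String(buf, 0, gapStart) + new String(buf, gapEnd, buf.length - gapEnd);
        }
    }

    goto(line) can stay a computed offset as in your sketch, and printAll() is just toString(). Consecutive edits at one spot are cheap because the gap is already there, which is exactly the typing pattern of an editor.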

  • Open ended data structure for retrieving SQL data

    I have a flex project that reads data from a database and
    sends it to my flex app to display in a chart. I need to do this in
    Actionscript since I have to dynamically create different charts
    based on the database definitions. Up til now, I have inserted the
    data for each series into a separate
    ArrayList(java)/ArrayCollection(flex) in this way:
    BindingUtils.bindProperty(lineSeries, "dataProvider", series,
    "pointList");
    lineSeries.xField="point1";
    lineSeries.yField="point2";
    The series variable is a custom Actionscript/Java object
    which contains an ArrayCollection called pointList. And the array
    is made up of custom DataObjects with two fields, point1 &
    point2.
    But in the new paradigm, I'd basically like to create a big
    array of arrays similar to a database table and then just point the
    xField to one column, and the yField to another column. Anyway, it
    seems like the problem is that my point1 and point2 in the current
    implementation are variable names, but with this new dynamic
    structure, I won't be able to create variables on the fly. So how
    do I let flex know to look at column 3, for example for the yField?
    The attached code is from the Flex documentation and is an example
    of what I'd like to emulate (except I need to do it in
    actionscript). I'd like to be able to select "month" or "amount",
    etc. But coming from the Java side, there is no way for me to name
    the array contained in each column.

    I guess I wasn't clear enough. I am actually serializing a
    java object and mapping it to an actionscript object via blazeDS. I
    know about how the objects translate, for example for
    ArrayCollections in flex, I use ArrayLists in java. The problem is
    that I'm not sure how to get the associative aspects of the arrays
    in flex into my java data structures in a way that will translate
    to the way flex can define arrays, such as Month:"January".
    It seems that in flex, you can create an Array and assign it
    an element like {month:"January", amount:"450"} and it will create
    an Object with variables for month and amount. But I don't know how
    to do this on the fly in Java (or in Actionscript for that matter)
    when I have no way of knowing beforehand how many variables my
    object will need. Each series will need an x and y, though usually
    the x will be dates which are the same for each series. So if I had
    10 series sharing an axis, I'd need 11 variables in my object-
    date, series1, series 2...series 10.
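    One way to get there on the Java side is sketched below, under the assumption that BlazeDS's default type mapping applies (a java.util.Map is deserialised as a plain ActionScript Object; the class and method names here are mine): build each row as a Map, so the set of "column variables" is decided at runtime.

    import java.util.*;

    // Rows as Maps: the column set can vary at runtime, and BlazeDS sends
    // each Map across as a plain ActionScript Object with those properties.
    public class DynamicRows {
        public static List<Map<String, Object>> buildRows(String[] columns, Object[][] values) {
            List<Map<String, Object>> rows = new ArrayList<Map<String, Object>>();
            for (Object[] rowValues : values) {
                Map<String, Object> row = new LinkedHashMap<String, Object>();
                for (int c = 0; c < columns.length; c++) {
                    row.put(columns[c], rowValues[c]); // column name decided at runtime
                }
                rows.add(row);
            }
            return rows;
        }

        public static void main(String[] args) {
            String[] cols = { "month", "series1", "series2" };
            Object[][] vals = { { "January", 450, 320 }, { "February", 500, 410 } };
            System.out.println(buildRows(cols, vals));
            // prints [{month=January, series1=450, series2=320}, ...]
        }
    }

    Each row then arrives in Flex as an object with month, series1, ... properties, so xField/yField can name any column without declaring variables ahead of time.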

  • Is timesten still using T-tree as data structure?

    I just came across this paper - http://www.memdb.com/paper.pdf . The researcher does some experiments showing that a concurrent B-tree algorithm is actually faster than T-tree operations. What do you think about this paper? Do you think he is using an inefficient algorithm to access the T-tree? Or does TimesTen already know the limitations of T-trees and has it changed the internal data structure?

    Yes, we are aware of the comparisons between T-Trees, concurrent B-trees etc. At the moment TimesTen still uses T-trees but this may change in the future :-)
    Chris

  • Clusters as data structures

    I am looking for the best and simplest way to create and manage data structures in Labview.
    Most often I use clusters as data structures, however the thing I don't like about this approach is that when I pass the cluster to a subroutine, the subroutine needs a local copy of the cluster.
    If I change the cluster later (say I add a new data member), then I need to go through all the subroutines with local copies of the cluster and make the edit of the new member (delete/save/relink to sub-vi, etc).
    On a few occasions in the past, I've tried NI GOOP, but I find the extra overhead associated with this approach cumbersome, I don't want to have to write 'get' and 'set' methods for every integer and string, I like being able to access the cluster/object data via the "unbundle by name" feature.
    Is there a simple or clever way of having a single global reference to a data object (say a cluster) that is shared by a group of subroutines and which can then be used as a template as an input or output parameter? I might guess the answer is no because Labview is interpreted and so the data object has to be passed as a handle, which I guess is how GOOP works, and I have the choice of putting in the extra energy up front (using GOOP) or later (using clusters if I have to edit the data structure). Would it be advisable to just use a data cluster as a global variable?
    I'm curious how other programmers handle this. Is GOOP pretty widely used? Is it the best approach for creating maintainable LV software ?
    Alex

    Alex,
    Encapsulation of data is critical to maintaining a large program. You need global, but restricted, access to your data structures. You need a method that guarantees serial, atomic access so that your exposure to race conditions is minimized. Since LabVIEW is inherently multi-threaded, it is very easy to shoot yourself in the foot. I can feel your pain when you mention writing all those get and set VIs. However, I can tell you that it is far less painful than trying to debug a race condition. Making a LabVIEW object also forces you to think through your program structure ahead of time - not something we LabVIEW programmers are accustomed to doing, but very necessary for large program success. I have used three methods of data encapsulation.
    NI GOOP - You can get NI GOOP from the tutorial Graphical Object Oriented Programming (GOOP). It uses a code interface node to store the strict typedef data cluster. The wizard eases maintenance. Unfortunately, the code interface node forces you through the UI thread any time you access data, which dramatically slows performance (about an order of magnitude worse than the next couple of methods).
    Functional Globals - These are also called LV2 style globals or shift register globals. The zip file attached includes an NI-Week presentation on the basics of how to use this approach with an amusing example. The commercial Endevo GOOP toolkit now uses this method instead of the code interface node method.
    Single-Element Queues - The data is stored in a single element queue. You create the database by creating the queue and stuffing it with your data. A get function is implemented by popping the data from the queue, doing an unbundle by name, then pushing the data back into the queue. A set is done by popping the data from the queue, doing a bundle by name, then pushing the data back into the queue. You destroy the data by destroying the queue with a force destroy. By always pulling the element from the queue before doing any operation, you force any other caller to wait for the queue to have an element before executing. This serializes access to your database. I have just started using this approach and do not have a good example or lots of experience with it, but can post more info if you need it. Let me know.
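    (For readers coming from text-based languages: the single-element-queue pattern corresponds roughly to the Java sketch below. This is purely an analogy of the pattern, not LabVIEW code. The one element in the queue is the shared data itself, and every reader or writer must take it out before acting, which serialises access.)

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Single-element-queue pattern: take() empties the queue, so every other
    // caller blocks until the data is put back. That serialises all access.
    class SingleElementStore<T> {
        private final BlockingQueue<T> q = new ArrayBlockingQueue<T>(1);

        SingleElementStore(T initial) {
            q.add(initial);
        }

        // "Get": pop the element, then push it straight back.
        T get() throws InterruptedException {
            T data = q.take(); // blocks while any other caller holds the data
            q.put(data);
            return data;
        }

        // "Set": pop the element, replace it with the new value.
        void set(T newData) throws InterruptedException {
            q.take();
            q.put(newData);
        }
    }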
    This account is no longer active. Contact ShadesOfGray for current posts and information.

  • Implement a program by using data structure

    I've learned linked lists, B-trees, red-black trees and hash tables. Now it's time for me to implement a program using one of these. I've no idea how to start, and what kind of program should I write? What kind of program uses data structures? I want some simple example program to make me understand. Can anyone help me?
    thank 4 u r time,
    makio

    > what sort of program do use data structure? let's say
    > i want to use binary search on data, how can i?
    As nasch said: pretty much any program uses data structures.
    Okay, let's say you want to try binary search. You could do that on an array, on a singly linked list, on a doubly linked list, probably on others that I can't think of right now.
    If you just want practice using the data structures you've learned, then why not write a binary search over several different ones? That will give you a feel for how they differ in usage and performance.
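    To make that concrete, here is about the smallest possible starting point, a plain binary search over a sorted int array (generic practice code, nothing project-specific). Re-implementing the same search over a linked list is a good way to feel the performance difference:

    // Binary search over a sorted int array: O(log n) lookups.
    public class BinarySearchDemo {
        static int search(int[] sorted, int key) {
            int lo = 0, hi = sorted.length - 1;
            while (lo <= hi) {
                int mid = (lo + hi) >>> 1;       // unsigned shift avoids overflow
                if (sorted[mid] < key) {
                    lo = mid + 1;
                } else if (sorted[mid] > key) {
                    hi = mid - 1;
                } else {
                    return mid;                  // found
                }
            }
            return -1;                           // not found
        }

        public static void main(String[] args) {
            int[] data = { 2, 3, 5, 7, 11, 13 };
            System.out.println(search(data, 7)); // prints 3
            System.out.println(search(data, 4)); // prints -1
        }
    }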
    Or, you can just pick any kind of program that interests you. Unless it's trivially small and simple, there will probably be a use for one or more of the data strux you've learned.
    A chat program needs data structures to hold curently connected users, ongoing conversations, maybe a queue for messages waiting to be sent.
    A card game needs to be able to search to see if a given card is in a given hand. Needs to hold all the cards in a deck that has to be shuffled. Dealing is like popping off a stack.
    A database of students and their courses might want to keep something in an ordered tree--by last name for instance.
    What applications interest you?

  • Storing a lot of data in an indexed data structure for quick access.

    I'm designing an app. which will need to store a large amount of data in memory. Records will be flowing into the app. via a socket. The app will receive about 30 records/second which is about 108,000 records/hour and about 600,000 records/day. I need to store the records in an indexed data structure so that I can access them quickly. For example, at 9:00am I will need to access records received at 8:30am, 8:35am, 8:40am, etc. This program will be multithreaded and as I understand Vector is the only data structure that is thread safe. Is Vector my only choice? How do I access objects in a Vector using an index? Is there something better that I can use?

    > Is Vector my only choice?
    If you want to access the objects by key then you should use something like a HashMap. But if you want to access them by an array index then an ArrayList would be more appropriate.
    > as I understand Vector is the only data structure that is thread safe
    You can get a thread-safe version of any Collection object by using the Collections.synchronizedCollection method.
    > How do I access objects in a Vector using an index?
    I'd suggest you read the API documentation. And probably the Sun tutorial on Collections at http://java.sun.com/docs/books/tutorial/collections/index.html
    > 600,000 records/day.
    Unless you plan to dump old data after a short period of time, you may want to consider using a database to avoid running out of memory.
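    To combine the two suggestions above in one hedged sketch (all names here are illustrative): bucket the records by arrival minute in a synchronized Map, so "everything received at 8:30am" becomes a single lookup rather than a scan.

    import java.util.*;

    // Index records by their arrival minute; Map access is synchronized, and
    // the compound check-then-put is guarded by locking the map itself.
    public class RecordStore {
        private final Map<Long, List<String>> byMinute =
                Collections.synchronizedMap(new HashMap<Long, List<String>>());

        public void add(long timestampMillis, String record) {
            Long minute = Long.valueOf(timestampMillis / 60000L); // bucket key
            synchronized (byMinute) {
                List<String> bucket = byMinute.get(minute);
                if (bucket == null) {
                    bucket = new ArrayList<String>();
                    byMinute.put(minute, bucket);
                }
                bucket.add(record);
            }
        }

        // All records received during the minute containing timestampMillis.
        public List<String> receivedAt(long timestampMillis) {
            synchronized (byMinute) {
                List<String> bucket = byMinute.get(Long.valueOf(timestampMillis / 60000L));
                return bucket == null ? new ArrayList<String>() : new ArrayList<String>(bucket);
            }
        }
    }

    At 600,000 records/day the database suggestion above still stands, though; an in-memory index like this only buys time.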

  • Need help on efficient searches using hashes on large #s of data structures

    I have a massive amount of data and I need a way to access it using generated indexes instead of traditional searching such as binary chop as I require a fast response time.
    I have 100,000 Array data structures each containing roughly 10 to 60 elements. The elements of the Arrays do not store any raw data they just hold references to objects. I have about 10,000 unique objects which are referenced to by the arrays.
    What I need to do is supply a number of objects as the query and the system should return all the arrays which contain two or more different objects from the query.
    So for example>
    Given query objects> "DER", "ERE", "YPS"
    and arrays> [DER, SFR, PPR]
         [PER, ERE, SWE, YPS]
         [ERE, PPD, DER, YPS, SWE]     
         [PRD, LDF, WSA, MMD]
    It should return arrays 2 and 3.
    I had a look at HashMaps but I'm not entirely sure if that's what I need.
    Would I need to hash each Array and then hash the query objects and look for overlaps in the two hashes, or something similar?
    Thanks In Advance

    For a single query, it's more efficient to do less work, so don't create intermediate objects, don't create an index of objects to sets of possibles, and stop counting matches when you find enough:
    final HashSet<String> querySet = new HashSet<String>(Arrays.asList(query));
    final Set<String> results = new LinkedHashSet<String>();
    // find matches using scan of all arrays
    for (String[] array : data) {
        int matchCount = 0;
        for (String elt : array) {
            if (querySet.contains(elt)) {
                ++matchCount;
                if (matchCount >= 2) {
                    results.add(Arrays.asList(array).toString());
                    break;
                }
            }
        }
    }
    Using random data, 100,000 arrays of selections from a list of 10,000 trigrams, scanning all the arrays takes about as long as the set code, but doesn't require generation of the map, which can take up a lot of memory if you pre-create the sets (I didn't complete a run doing that; although it should be faster to query, it used too much memory and my computer started swapping, so I gave up after a few minutes).
    Running queries of size 2 to 25, times in ms:
    linear scan of all arrays in data set:
    elapsed: 2196
    creating index of trigram to arrays containing trigram:
    elapsed: 4635
    query using set union and index lookup:
    elapsed: 2195
    query using direct scan and index lookup:
    elapsed: 164
    So if I was doing one query per dataset, I'd just use a simple scan as above. If you're doing lots of queries on the same dataset, then create a Map<String, List<String[]>> to index the arrays, and the lookup will save quite a bit of time.
    Creating the index:
    dataSets = new HashMap<String, List<String[]>>();
    for (String trigram : trigrams)
        dataSets.put(trigram, new ArrayList<String[]>());
    for (String[] array : data)
        for (String trigram : array)
            dataSets.get(trigram).add(array);
    Finding matches using a scan of only the arrays that share at least one element with the query:
    final HashSet<String> querySet = new HashSet<String>(Arrays.asList(query));
    final Set<String> results = new LinkedHashSet<String>();
    for (String trigram : query) {
        for (String[] array : dataSets.get(trigram)) {
            for (String elt : array) {
                // compare by value rather than reference, so equal strings
                // that are not the same instance still count
                if (!elt.equals(trigram) && querySet.contains(elt)) {
                    results.add(Arrays.asList(array).toString());
                    break;
                }
            }
        }
    }

  • [DME] Document about data mapping for HSBC ifile format (PP,ACH, COS)

    Dear SAP experts,
    I'm implementing an automatic payment module through HSBC in an SAP project.
    My customer wants to streamline their payments to vendors using automated
    payments with HSBC.
    I have a document with the ifile description, which shows me information about the
    structure and data format of the HSBC ifile. I know how to create a DME format tree,
    but the HSBC format seems to be very complicated: it contains a lot of fields, and
    a lot of them I don't really understand.
    I'm sure that many of you have done this before. Do you have any documents about
    data mapping for the HSBC file format? I have a lot of fields left undone, so any
    relevant documents would be very helpful for me.
    Hope you can help.
    Thanks in advance.
    Maxielight.

    Hello Maxielight,
    I too am working on this IFILE format for transmitting to HSBC for Indian INR payments, and am using DMEE.
    I'm struggling with a couple of fields in relation to PP.
    1st is the 'Record Count' in the File header, which is just a count of the total number of lines in the file.
    2nd is the 'Total number of instructions in batch' in the Batch Header section. I'm trying to count using aggregation via a reference node ID, but because the Batch Header is level 1 and I need to wait for '2nd Party Details for PP', which is at a lower level, I always get an error when I run the check: 'aggregation not permitted', because the field I'm using is at a lower level, or the nodes are not in the same segment.
    I've also tried creating a new segment with 'Delay output' but still get a similar error about the nodes being in different segments.
    How did you get past this issue?
    We are not using COS; instead we will print our own cheques in the office, so sorry, I cannot offer advice there.
    Any advice you can offer would be appreciated.
    Thanks,
    Steve
