Data Structure Problems

Can anyone tell me how to solve these problems? Please help me out.
1) Design an FSA which accepts all strings over {a,b} which end in b, and in which every b (except for the last one) is immediately followed by two or more a's.
2) Convert the following NFA into a deterministic finite state automaton, using the subset construction algorithm.
    a,b (self-loop on state 0)
    --> 0 --a--> 1 --b--> 2 --a--> 3
State 3 is accepting.
3) Describe (in English) the language which the regular expression (a|ba)* denotes.
4) Using the techniques discussed in class, convert the regular expression in the previous problem to an FSA with epsilon transitions.
5) Now convert the FSA from the previous problem to an equivalent FSA without epsilon transitions.

Yeah -- you could WRITE A JAVA PROGRAM to do it.
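Since the reply stops at "write a program", here is a rough Java sketch of the subset construction from problem 2, assuming the diagram means that state 0 loops to itself on both a and b, followed by the chain 0 --a--> 1 --b--> 2 --a--> 3 with state 3 accepting. It explores the reachable subsets of NFA states and prints each DFA transition; nothing here is specific to this NFA beyond the delta table.

    import java.util.*;

    public class SubsetConstruction {
        public static void main(String[] args) {
            char[] alphabet = {'a', 'b'};
            // delta[state][symbol] = NFA successor states (symbol index: 0='a', 1='b')
            int[][][] delta = {
                {{0, 1}, {0}},   // state 0: self-loop on a,b plus 0 --a--> 1
                {{}, {2}},       // state 1: 1 --b--> 2
                {{3}, {}},       // state 2: 2 --a--> 3
                {{}, {}}         // state 3: accepting, no outgoing transitions
            };
            Set<Set<Integer>> seen = new HashSet<>();
            Deque<Set<Integer>> work = new ArrayDeque<>();
            Set<Integer> start = new TreeSet<>(Arrays.asList(0));
            seen.add(start);
            work.add(start);
            while (!work.isEmpty()) {                      // classic worklist loop
                Set<Integer> s = work.poll();
                for (int c = 0; c < alphabet.length; c++) {
                    Set<Integer> next = new TreeSet<>();   // DFA state = set of NFA states
                    for (int q : s)
                        for (int r : delta[q][c]) next.add(r);
                    System.out.println(s + " --" + alphabet[c] + "--> " + next
                            + (next.contains(3) ? "   (accepting)" : ""));
                    if (seen.add(next)) work.add(next);    // enqueue newly discovered subsets
                }
            }
        }
    }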

Similar Messages

  • SAP ABAP Proxy - recursive data structure problem

    Hi,
For our customer we are trying to bind SAP to GW Calendar using GW Web Services. We tried to generate an ABAP Proxy from the groupwise.wsdl file, but there is a problem: GW uses recursive data structures, which ABAP Proxy cannot use. Is there any simple solution to this problem?
    Best regards
    Pawel

At least I don't have a clue about ABAP Proxy. You are pretty much on your own unless someone else has tried it.
Preston

  • Data Structures Problems

    hey friends...
Can anyone help me find solved problems for Data Structures? I've read the materials but I still want to practice. Please tell me if anyone knows of any good sources.
    Thanks...

    You should know that Data Structures refers to any structure that allows for the storage and retrieval of data. Thus if you have questions you need to be more specific.
    ASK A QUESTION!
    DeltaCoder
    "Change your mind, your body will follow.

  • OC4J: marshalling does not recreate the same data structure onthe client

    Hi guys,
    I am trying to use OC4J as an EJB container and have come across the following problem, which looks like a bug.
    I have a value object method that returns an instance of ArrayList with references to other value objects of the same class. The value objects have references to other value objects. When this structure is marshalled across the network, we expect it to be recreated as is but that does not happen and instead objects get duplicated.
Suppose we have 2 value objects: ValueObject1 and ValueObject2. ValueObject1 references ValueObject2 via a private field, and ValueObject2 references ValueObject1. Both value objects are returned by our method in an ArrayList structure. Here is how it looks (the number after @ represents an address in memory):
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@2
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@1
We would expect to see the same (except for the exact addresses) after marshalling. Here is what we get instead:
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@3
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@4
    It can be seen that objects get unnecessarily duplicated – the instance of the ValueObject1 referenced by the ValueObject2 is not the same now as the instance that is referenced by the ArrayList instance.
This not only breaks the referential integrity, structure and consistency of the data, but also dramatically increases the amount of information sent across the network. The problem was discovered when we found that a relatively small but complicated structure that serializes into a 142kb file required about 20Mb of network communication. All this extra info is duplicated object instances.
    I have created a small test case to demonstrate the problem and let you reproduce it.
    Here is RMITestBean.java:
    package com.cramer.test;
    import javax.ejb.EJBObject;
    import java.util.*;
public interface RMITestBean extends EJBObject {
    public ArrayList getSomeData(int testSize) throws java.rmi.RemoteException;
    public byte[] getSomeDataInBytes(int testSize) throws java.rmi.RemoteException;
}
    Here is RMITestBeanBean.java:
    package com.cramer.test;
    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;
    import java.util.*;
public class RMITestBeanBean implements SessionBean {
    private SessionContext context;
    SomeVO someVO;

    public void ejbCreate() {
        someVO = new SomeVO(0);
    }

    public void ejbActivate() {}

    public void ejbPassivate() {}

    public void ejbRemove() {}

    public void setSessionContext(SessionContext ctx) {
        this.context = ctx;
    }

    public byte[] getSomeDataInBytes(int testSize) {
        ArrayList someData = getSomeData(testSize);
        try {
            java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
            java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
            objectOutputStream.writeObject(someData);
            objectOutputStream.flush();
            System.out.println(" serialised output size: " + byteOutputStream.size());
            byte[] bytes = byteOutputStream.toByteArray();
            objectOutputStream.close();
            byteOutputStream.close();
            return bytes;
        } catch (Exception e) {
            System.out.println("Serialisation failed: " + e.getMessage());
            return null;
        }
    }

    public ArrayList getSomeData(int testSize) {
        // Create array of objects
        ArrayList someData = new ArrayList();
        for (int i = 0; i < testSize; i++) {
            someData.add(new SomeVO(i));
        }
        // Interlink all the objects
        for (int i = 0; i < someData.size() - 1; i++) {
            for (int j = i + 1; j < someData.size(); j++) {
                ((SomeVO) someData.get(i)).addValueObject((SomeVO) someData.get(j));
                ((SomeVO) someData.get(j)).addValueObject((SomeVO) someData.get(i));
            }
        }
        // Print out the data structure
        System.out.println("Data:");
        for (int i = 0; i < someData.size(); i++) {
            SomeVO tmp = (SomeVO) someData.get(i);
            System.out.println("Object[" + Integer.toString(i) + "] = " + tmp);
            System.out.println("Object[" + Integer.toString(i) + "]'s some number = " + tmp.getSomeNumber());
            for (int j = 0; j < tmp.getValueObjectCount(); j++) {
                SomeVO tmp2 = tmp.getValueObject(j);
                System.out.println(" getValueObject[" + Integer.toString(j) + "] = " + tmp2);
                System.out.println(" getValueObject[" + Integer.toString(j) + "]'s some number = " + tmp2.getSomeNumber());
            }
        }
        // Check the serialised size of the structure
        try {
            java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
            java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
            objectOutputStream.writeObject(someData);
            objectOutputStream.flush();
            System.out.println("Serialised output size: " + byteOutputStream.size());
            objectOutputStream.close();
            byteOutputStream.close();
        } catch (Exception e) {
            System.out.println("Serialisation failed: " + e.getMessage());
        }
        return someData;
    }
}
    Here is RMITestBeanHome:
    package com.cramer.test;
    import javax.ejb.EJBHome;
    import java.rmi.RemoteException;
    import javax.ejb.CreateException;
public interface RMITestBeanHome extends EJBHome {
    RMITestBean create() throws RemoteException, CreateException;
}
    Here is ejb-jar.xml:
    <?xml version = '1.0' encoding = 'windows-1252'?>
    <!DOCTYPE ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Enterprise JavaBeans 2.0//EN" "http://java.sun.com/dtd/ejb-jar_2_0.dtd">
<ejb-jar>
  <enterprise-beans>
    <session>
      <description>Session Bean ( Stateful )</description>
      <display-name>RMITestBean</display-name>
      <ejb-name>RMITestBean</ejb-name>
      <home>com.cramer.test.RMITestBeanHome</home>
      <remote>com.cramer.test.RMITestBean</remote>
      <ejb-class>com.cramer.test.RMITestBeanBean</ejb-class>
      <session-type>Stateful</session-type>
      <transaction-type>Container</transaction-type>
    </session>
  </enterprise-beans>
</ejb-jar>
    And finally the application that tests the bean:
    package com.cramer.test;
    import java.util.*;
    import javax.rmi.*;
    import javax.naming.*;
public class RMITestApplication {
    final static boolean HARDCODE_SERIALISATION = false;
    final static int TEST_SIZE = 2;

    public static void main(String[] args) {
        Hashtable props = new Hashtable();
        props.put(Context.INITIAL_CONTEXT_FACTORY, "com.evermind.server.rmi.RMIInitialContextFactory");
        props.put(Context.PROVIDER_URL, "ormi://lil8m:23792/alexei");
        props.put(Context.SECURITY_PRINCIPAL, "admin");
        props.put(Context.SECURITY_CREDENTIALS, "admin");
        try {
            // Get the JNDI initial context
            InitialContext ctx = new InitialContext(props);
            NamingEnumeration list = ctx.list("comp/env/ejb");
            // Get a reference to the Home Object which we use to create the EJB Object
            Object objJNDI = ctx.lookup("comp/env/ejb/RMITestBean");
            // Now cast it to an RMITestBeanHome object
            RMITestBeanHome testBeanHome = (RMITestBeanHome) PortableRemoteObject.narrow(objJNDI, RMITestBeanHome.class);
            // Create the remote interface
            RMITestBean testBean = testBeanHome.create();
            ArrayList someData = null;
            if (!HARDCODE_SERIALISATION) {
                // ############################### Alternative 1 ##############################
                // ## This relies on marshalling serialisation ##
                someData = testBean.getSomeData(TEST_SIZE);
                // ############################ End of Alternative 1 ##########################
            } else {
                // ############################### Alternative 2 ##############################
                // ## This gets a serialised byte stream and de-serialises it ##
                byte[] bytes = testBean.getSomeDataInBytes(TEST_SIZE);
                try {
                    java.io.ByteArrayInputStream byteInputStream = new java.io.ByteArrayInputStream(bytes);
                    java.io.ObjectInputStream objectInputStream = new java.io.ObjectInputStream(byteInputStream);
                    someData = (ArrayList) objectInputStream.readObject();
                    objectInputStream.close();
                    byteInputStream.close();
                } catch (Exception e) {
                    System.out.println("Serialisation failed: " + e.getMessage());
                }
                // ############################ End of Alternative 2 ##########################
            }
            // Print out the data structure
            System.out.println("Data:");
            for (int i = 0; i < someData.size(); i++) {
                SomeVO tmp = (SomeVO) someData.get(i);
                System.out.println("Object[" + Integer.toString(i) + "] = " + tmp);
                System.out.println("Object[" + Integer.toString(i) + "]'s some number = " + tmp.getSomeNumber());
                for (int j = 0; j < tmp.getValueObjectCount(); j++) {
                    SomeVO tmp2 = tmp.getValueObject(j);
                    System.out.println(" getValueObject[" + Integer.toString(j) + "] = " + tmp2);
                    System.out.println(" getValueObject[" + Integer.toString(j) + "]'s some number = " + tmp2.getSomeNumber());
                }
            }
            // Print out the size of the serialised structure
            try {
                java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                objectOutputStream.writeObject(someData);
                objectOutputStream.flush();
                System.out.println("Serialised output size: " + byteOutputStream.size());
                objectOutputStream.close();
                byteOutputStream.close();
            } catch (Exception e) {
                System.out.println("Serialisation failed: " + e.getMessage());
            }
        } catch (Exception ex) {
            ex.printStackTrace(System.out);
        }
    }
}
The parameters you might be interested in playing with are HARDCODE_SERIALISATION and TEST_SIZE, defined at the beginning of RMITestApplication.java. HARDCODE_SERIALISATION is a flag that specifies whether Java serialisation should be used to pass the data across, or whether we should rely on OC4J marshalling. TEST_SIZE defines the size of the object graph and the ArrayList structure; the bigger this size, the more dramatic the effect you get from data duplication.
The test case outputs the structure both on the server and on the client, and prints out the size of the serialised structure. That gives us a sufficient comparison, as both the structure and its size should be the same on the client and on the server.
The test case also demonstrates that the problem is specific to OC4J. The standard Java serialisation does not suffer the same flaw. However, using the standard serialisation the way I did in the test case code is generally unacceptable, as it breaks the transparency benefit and complicates interfaces.
To run the test case:
1) Modify the provider URL parameter value (the Context.PROVIDER_URL property near the top of RMITestApplication.java) for your environment.
2) Deploy the bean to the server.
3) Run RMITestApplication on a client PC.
4) Compare the outputs on the server and on the client.
    I hope someone can reproduce the problem and give their opinion, and possibly point to the solution if there is one at the moment.
    Cheers,
    Alexei

    Hi,
Eugene, wrong end user recovery. Alexei is referring to client desktop end-user recovery, which is entirely different.
Alexei - As noted in the previous post:
http://social.technet.microsoft.com/Forums/en-US/bc67c597-4379-4a8d-a5e0-cd4b26c85d91/dpm-2012-still-requires-put-end-users-into-local-admin-groups-for-the-purpose-of-end-user-data?forum=dataprotectionmanager
Each recovery point has user permissions tied to it, so it's not possible to retroactively give users permissions. Implement the below, and going forward all users can restore their own files.
    This is a hands off solution to allow all users that use a machine to be able to restore their own files.
     1) Make these two cmd files and save them in c:\temp
     2) Using windows scheduler – schedule addperms.cmd to run daily – any new users that log onto the machine will automatically be able to restore their own files.
    <addperms.cmd>
     Cmd.exe /v /c c:\temp\addreg.cmd
    <addreg.cmd>
     set users=
     echo Windows Registry Editor Version 5.00>c:\temp\perms.reg
     echo [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\ClientProtection]>>c:\temp\perms.reg
     FOR /F "Tokens=*" %%n IN ('dir c:\users\*. /b') do set users=!users!%Userdomain%\\%%n,
     echo "ClientOwners"=^"%users%%Userdomain%\\bogususer^">>c:\temp\perms.reg
     REG IMPORT c:\temp\perms.reg
     Del c:\temp\perms.reg
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • What is the best data structure for loading an enterprise Power BI site?

    Hi folks, I'd sure appreciate some help here!
I'm a kinda old-fashioned gal and a bit of a traditionalist, building enterprise data warehouses out of Analysis Services hypercubes with a whole raft of MDX for analytics. Those puppies would sit up and beg when you asked them to deliver up goodies to SSRS or PowerView.
But Power BI is a whole new game for me.
Should I be exposing each dimension and fact table in the relational data warehouse as a single OData feed?
Should I be running Data Management Gateway and exposing each table in my RDW individually?
Should I be flattening my stars and snowflakes and creating a very wide First Normal Form dataset with everything relating to each fact?
I guess my real question, folks, is: what's the optimum way of exposing data to the Power BI cloud?
And my subsidiary question is this: am I right in saying that all the data management, validation, cleansing, and regular ETL processes are still required before the data is suitable to expose to Power BI?
Or, to put it another way, is it not the case that you need a clean and properly structured data warehouse before the data is ready to be massaged and presented by Power BI?
    I'd sure value your thoughts and opinions,
    Cheers, Donna
    Donna Kelly

    Dear All,
    My original question was: 
    what's the optimum way of exposing data to the Power BI cloud?
Having spent the last month faffing about with Power BI, and reading about many people's experiences using it, I think I can offer a few preliminary conclusions.
Before I do that, though, let me summarise a few points:
Melissa said "My initial thoughts: I would expose each dim & fact as a separate OData feed" and went on to say "one of the hardest things . . . is the data modeling piece . . . I think we should try to expose the data in a way that'll help usability . . . which wouldn't be a wide, flat table".
Greg said "data modeling is not a good thing to expose end users to . . . we've had better luck with building out the data model, and teaching the users how to combine pre-built elements".
I had commented ". . . end users and data modelling don't mix . . . self-service so far has been mostly a bust".
Here at Redwing, we give out a short White Paper on Business Intelligence Reporting. It goes to clients and anyone else who wants one. The heart of the Paper is the Reporting Pyramid, which states: business intelligence is all about the creation and delivery of actionable intelligence to the right audience at the right time.
For most of the audience, that means Corporate BI: pre-built reports delivered on a schedule.
For most of the remaining audience, that means parameterised, drillable, and sliceable reporting available via the web, running the gamut from the dashboard to the details, available on demand.
For the relatively few business analysts, that means the ability for business users to create their own semi-customised visual reports when required, to serve their audiences.
For the very few high-power users, that means the ability to interrogate the data warehouse directly, extract the required data, and construct data mining models, spreadsheets and other intricate analyses as needed.
On the subject of self-service, the Redwing view says: although many vendors want to sell self-service reporting tools to the enterprise, the facts of the matter are these:
- 80%+ of all enterprise reporting requirements are satisfied by corporate BI . . . if it's done right.
- Very few staff members have the time, skills, or inclination to learn and employ self-service business intelligence in the course of their activities.
I cannot just expose raw data and tell everyone to get on with it. That way lies madness!
I think that clean and well-structured data is a prerequisite for delivering business intelligence. Assuming that data is properly integrated, historically accurate and non-volatile as well, then I've just described a data warehouse, which is the physical expression of the dimensional model.
Therefore, exposing the presentation layer of the data warehouse is, in my opinion, the appropriate interface for self-service business intelligence.
Of course, we can choose to expose perspectives as well, which is functionally identical to building and exposing subject data marts.
That way, all calculations, KPIs, definitions, and even field names are all consistent, because they all come from the single source of truth, and not from spreadmart hell.
So my conclusion is that exposing the presentation layer of the properly modelled data warehouse is, in general, the way to expose data for self-service.
That's fine for the general case, but what about Power BI? Well, it's important to distinguish between the new capabilities in Excel, and the ones in Office 365. I think that to all intents and purposes, we're talking about exposing data through the Data Management Gateway and reading it via Power Query. The question boils down to what data structures should go down that pipe.
According to Create a Data Source and Enable OData Feed in Power BI Admin Center, the possibilities are tables and views. I guess I could have repeating data in there, so it could be a flattened structure of the kind Melissa doesn't like (and neither do I).
I could expose all the dims and all the facts . . . but that would mean essentially re-building the DW in the PowerPivot DM, and that would be just plain stoopid. I mean, not a toy system, but a real one with scores of facts and maybe hundreds of dimensions?
Fact is, I cannot for the life of me see what advantages DMG/PQ has over just telling corporate users to go directly to the Cube Perspective they want, which already has all the right calcs, KPIs, security, analytics, field names . . . and most importantly, is already modelled correctly!
If I'm a real Power User, then I can use PQ on my desktop to pull mashup data from the world, along with all my on-prem data through my exposed Cube presentation layer, and PowerPivot the heck out of that to produce all the reporting I'd ever want. It'd be a zillion times faster reading the data directly from the Cube instead of via the DMG, as well (I think Power BI performance sucks, actually).
Of course, your enterprise might not have a DW, just a heterogeneous mass of dirty unstructured data. If that's the case, choosing Power BI data structures is the least of your problems! :-)
    Cheers, Donna
    Donna Kelly

  • A robust data structure for a histogram of events over time.

    Hi all,
    I've been thinking for days, I am still lost on how to solve the following:
Let's say that we have a list of events that occur over a certain time interval. Each event has a start timestamp and a finish timestamp. The events occur in no particular order and span periods of various lengths.
With respect to the histogram, we mark the time period of an event occurring with a frequency of one. When events overlap over a time period, we mark that time period with a frequency equal to the number of overlapping events.
Given this raw data, we want to examine the data's frequency at various granularities such as seconds, minutes, hours, days, weeks, months and years, for ranges of 60 sec, 60 min, 24 hrs, 7 days, and 4 wks respectively.
So, what is a good intermediate (general) data structure that can store all the various events such that they can be easily transformed to a specific granularity? My goal here is to read the event list once, so that histograms of the event data with respect to various timeframes can be calculated.
Would a timeline-like data structure that stores aggregated frequencies solve my problem? With a fine granularity, say seconds, and a long time range, the data structure is bound to overflow.
    Any help is greatly appreciated!
    Brian

Your data structure would be in terms of classes. You need a class such as:
class AnEvent {
    String eventName;
    Date startTime;
    Date endTime;
}
In your main program, have a Vector object, which can store unlimited objects and will not overflow. Store your event objects in this Vector as they happen. After your time interval of events is complete and you have the final Vector, you can pass the Vector to various methods in your main program, or pass it to specialized graph-drawing classes (which you would have to write) to draw the various histograms.
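To make the timeline idea from the question concrete, here is a minimal Java sketch, assuming epoch-second timestamps: each event contributes +1 at its start and -1 just past its finish, and a running sum over the sorted deltas gives the overlap count at every instant. One pass over the raw events builds the structure, and any coarser granularity (minutes, hours, ...) can be derived from it afterwards; the class and method names are just illustrative.

    import java.util.*;

    public class OverlapHistogram {
        // deltas.get(t) = net change in the number of active events at time t
        private final TreeMap<Long, Integer> deltas = new TreeMap<>();

        public void addEvent(long startSec, long finishSec) {
            deltas.merge(startSec, 1, Integer::sum);       // event becomes active
            deltas.merge(finishSec + 1, -1, Integer::sum); // event stops being active
        }

        // Concurrency level at every change point; to bucket into minutes,
        // hours, etc., walk these entries and take the max (or sum) per bucket.
        public void printCounts() {
            int active = 0;
            for (Map.Entry<Long, Integer> e : deltas.entrySet()) {
                active += e.getValue();
                System.out.println("t=" + e.getKey() + "  active=" + active);
            }
        }

        public static void main(String[] args) {
            OverlapHistogram h = new OverlapHistogram();
            h.addEvent(10, 20);
            h.addEvent(15, 30);   // overlaps the first event from 15..20
            h.printCounts();      // active count: 1 at t=10, 2 at t=15, 1 at t=21, 0 at t=31
        }
    }

The structure stays proportional to the number of events rather than the number of seconds in the range, which addresses the overflow worry.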

  • Efficient data structure to implement simple text editor?

    I was given this problem in an interview:
    What data structure would you use to implement a simple text editor that does these 4 functions:
    a) goto(line number)
    b) insert(char input,location)
    c) delete(location)
    d) printAll() //print entire file
Given that I'm such a newb, I was stumped. I came up with a 2D array that would allow O(1) time for goto, but O(n) for everything else (shifting everything in the array). There were other downfalls too, dealing with space issues and such, but he wanted me to optimize this data structure. I then came up with a linked list of arrays, but that had similar problems.
But thinking about it further is driving me a little crazy, so I'm wondering if you guys have any suggestions on how to answer this question...
One thing that came to mind afterwards is to implement the data structure as a binary tree, where each node contains:
class Node {
    char theChar;
    int position; // ie 0 = first character in the file, and 81 could be the first character in the 2nd line
    Node left, right, parent;
}
So how it works is: the cursor would know where the location was, so I would know where to delete and insert within the tree.
insert {
    // search for location to insert after (log n)
    // create new character node and append to current node
    // increment position for all subsequent children
}
delete(x) {
    // search for x position
    // if found, remove node and decrement position value for all children
}
goto(line #) {
    // return line # * 80 (or whatever max length for a single line)
}
The major problem I see here is balancing the tree after every insert/delete.
Thanks in advance.

    One of many great things emacs has given us is the gap buffer:
    http://en.wikipedia.org/wiki/Gap_buffer
To see a Java implementation of this you can look in the Java SDK source. The document model (I forget exactly what it's called) used in Swing uses a gap buffer.
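For reference, here is a bare-bones gap buffer sketch in Java, assuming a fixed-capacity buffer for brevity (a real editor grows the array when the gap closes). The point is that insert and delete are O(1) once the gap sits at the cursor, and moving the gap costs only as much as the cursor actually moves:

    public class GapBuffer {
        private char[] buf = new char[64];     // fixed capacity for the sketch
        private int gapStart = 0;              // first free slot
        private int gapEnd = buf.length;       // one past the last free slot

        private void moveGapTo(int pos) {      // pos is a logical text index
            while (pos < gapStart)             // shift chars right across the gap
                buf[--gapEnd] = buf[--gapStart];
            while (pos > gapStart)             // shift chars left across the gap
                buf[gapStart++] = buf[gapEnd++];
        }

        public void insert(char c, int pos) {
            moveGapTo(pos);
            buf[gapStart++] = c;               // O(1) once the gap is in place
        }

        public void delete(int pos) {          // delete the char just before pos
            moveGapTo(pos);
            if (gapStart > 0) gapStart--;      // widen the gap; no shifting needed
        }

        public String text() {
            return new String(buf, 0, gapStart)
                 + new String(buf, gapEnd, buf.length - gapEnd);
        }

        public static void main(String[] args) {
            GapBuffer gb = new GapBuffer();
            String s = "hello";
            for (int i = 0; i < s.length(); i++) gb.insert(s.charAt(i), i);
            gb.delete(5);                      // removes the trailing 'o'
            System.out.println(gb.text());     // prints "hell"
        }
    }

This fits the interview question well because typical edits cluster around the cursor, so the expensive gap moves are rare in practice.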

  • Data Structure for New General Ledger for mySAP 2004

    Guys,
    We are preparing to upgrade to mySAP 2004.
mySAP 2004 doesn't have a data migration tool for upgrading the "classic" ledger to the new ledger with its advanced functionality. To utilize all the new functionality, data should be migrated to the new ledger, so a data migration tool would have to be developed (in ABAP). What is the difference in the data structure of the new ledger?
If anybody has already been through a mySAP 2004 upgrade and could share some example programs, my e-mail is: [email protected]
    Thanks in advance,
    Mike

Now I have the same problem. Does anyone have a solution for this?

  • Open ended data structure for retrieving SQL data

I have a Flex project that reads data from a database and sends it to my Flex app to display in a chart. I need to do this in ActionScript since I have to dynamically create different charts based on the database definitions. Up til now, I have inserted the data for each series into a separate ArrayList (Java) / ArrayCollection (Flex) in this way:
BindingUtils.bindProperty(lineSeries, "dataProvider", series, "pointList");
lineSeries.xField = "point1";
lineSeries.yField = "point2";
The series variable is a custom ActionScript/Java object which contains an ArrayCollection called pointList, and the array is made up of custom DataObjects with two fields, point1 & point2.
But in the new paradigm, I'd basically like to create a big array of arrays, similar to a database table, and then just point the xField to one column and the yField to another column. Anyway, it seems like the problem is that my point1 and point2 in the current implementation are variable names, but with this new dynamic structure, I won't be able to create variables on the fly. So how do I let Flex know to look at column 3, for example, for the yField? The attached code is from the Flex documentation and is an example of what I'd like to emulate (except I need to do it in ActionScript). I'd like to be able to select "month" or "amount", etc. But coming from the Java side, there is no way for me to name the array contained in each column.

I guess I wasn't clear enough. I am actually serializing a Java object and mapping it to an ActionScript object via BlazeDS. I know how the objects translate; for example, for ArrayCollections in Flex, I use ArrayLists in Java. The problem is that I'm not sure how to get the associative aspects of the arrays in Flex into my Java data structures in a way that will translate to the way Flex can define arrays, such as month:"January".
It seems that in Flex, you can create an Array and assign it an element like {month:"January", amount:"450"}, and it will create an Object with variables for month and amount. But I don't know how to do this on the fly in Java (or in ActionScript, for that matter) when I have no way of knowing beforehand how many variables my object will need. Each series will need an x and y, though usually the x will be dates, which are the same for each series. So if I had 10 series sharing an axis, I'd need 11 variables in my object: date, series1, series2, ... series10.
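One way to get the associative behaviour without compile-time variable names, sketched here on the Java side under the assumption that BlazeDS applies its usual translation of java.util.Map to a dynamic ActionScript Object (so a row built as a Map with key "month" should come out as an object whose month property xField/yField can name). The column names below are made up for illustration:

    import java.util.*;

    public class DynamicSeriesBuilder {
        public static ArrayList<Map<String, Object>> buildRows() {
            ArrayList<Map<String, Object>> rows = new ArrayList<>();
            // Column names discovered at runtime, e.g. from database metadata
            String[] columns = {"date", "series1", "series2"};
            Object[][] data = {{"Jan", 450, 300}, {"Feb", 520, 310}};
            for (Object[] values : data) {
                Map<String, Object> row = new HashMap<>();
                for (int c = 0; c < columns.length; c++)
                    row.put(columns[c], values[c]);   // no compile-time fields needed
                rows.add(row);
            }
            return rows;
        }

        public static void main(String[] args) {
            // Each row prints like {date=Jan, series1=450, series2=300}
            System.out.println(buildRows());
        }
    }

On the Flex side the chart could then set xField="date" and yField to whichever series name applies, since the field is just a string key rather than a declared variable.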

  • IMac (March 2009) - Invalid Node Structure problem

    Hi All
I was using my iMac as normal yesterday, when suddenly the system ground to a halt (something I had never seen since switching to OS X). As I had work to do, after about an hour I restarted, expecting it to be an app misbehaving or something straightforward, but on restart the same thing happened almost straight away.
So, I restarted again, only for the iMac to get stuck on the blue screen which follows the grey 'cog' screen. After looking through these discussions, and some other Mac forums, I booted from the installer disk and tried to run Disk Utility, which found issues and couldn't repair the disk.
I then tried single-user mode and fsck, which reported the aforementioned Invalid Node Structure problem. After checking a few more forums, I thought I would try to erase the disk and reinstall Mac OS. I restored from my Time Machine backup, and finally got it to start, but the system moved at a snail's pace, and wouldn't open any apps or files.
    I've tried again with fsck, and also fsck_hfs -r /dev/disk0s2 but all I get is the same error message:
    disk0s2: I/O error.
    Invalid Node Structure
    (4, 38403)
    ** Volume check failed.
    /dev/rdisk0s2 (hfs) EXITED WITH SIGNAL 8
    So, does anyone know if there is anything I can do to save the iMac? Or does it need a new HD? It is my primary work computer, so I really need to get it back, and after buying it in the UK, am now in France for 6 weeks, so it's difficult for me to take it to an Apple specialist.
    Any help would be greatly appreciated, thanks in advance,
    Daniel

As the last user stated, DiskWarrior may be able to correct it; however, if you search the net, you will see there is more than one user with this problem. I had the same problem, but my machine is out of warranty. It started like you are saying. I ran DiskWarrior, which corrected the problem long enough to boot the system. In a matter of 10 minutes or so, the system started pausing (apps hung, but recovered in a matter of seconds). I was checking the disk with fsck_hfs and fsck while booted from a USB drive with a maintenance install of 10.5.6. After running fsck the 5th time with different options suggested from the internet, I rebooted with my DiskWarrior DVD only to find the drive NOT mounted. Disk Utility sees it (and can even attempt to run disk repair, but doing so locks up the system). DiskWarrior doesn't even see the disk to run a directory repair, although in the "check S.M.A.R.T. status" it sees it as a SATA device, and SAYS IT'S NORMAL.
Since my iMac was out of warranty, I followed the online instructions to replace the hard drive (not too difficult) and everything is fine now.
The old drive still won't mount, but another utility I bought (Data Rescue II) has been able to quick-scan it and access all the data, even though it won't pass an fsck, fsck_hfs or DiskWarrior check. Anyway, get your data back while you can. It IS the hard drive itself, and there are lots of other iMac users online with the same issue (can you say: WHY is my iMac SOOO HOT on the Apple logo on the back, right where the drive is? Insufficient cooling, in my opinion, but hey, some people might want to lay their computers face down and fry eggs on them or something...).
    -SD

  • AS3 Data Coercion Problem

From PHP server RemoteObject AMF3 serialization, I receive a photo object per below into my Flex client:
package WPhoto {
    [Bindable]
    [RemoteClass(alias="WPhoto.PhotoObj")]
    public class PhotoObj {
        public var photo:*;
        public var photoWidth:int;
        public var photoHeight:int;
    }
}
The 'photo' property above is a String of one-byte (UTF-8) '.jpg' photo data. Due to PHP's rather limited set of primitive data types, this is the best I can do per returning photo data to the Flex client. (Extreme tries at PHP data structure chicanery haven't worked for me.)
I need the photo property in a DisplayObject format such that I can render the photo as a child of my App's DisplayObject. The AS3 Loader class performs this task when sourced from an AS3 ByteArray:
o Loader.loadBytes(bytes:ByteArray, context:LoaderContext = null):void
  Loads from binary data stored in a ByteArray object.
My dilemma is "how to" convert a UTF-8 string of photo data into a ByteArray. I've been through the ByteArray.writeXXX() class methods without finding anything close to meeting my requirements. AS3 String readers expect UTF-16 data, so AS3 string reading is not a solution. AS3 has no Byte data type; accordingly, reading my photo data UTF-8 string with a "for loop" into an AS3 UTF-16 string, or whatever, is not viable. Is there a single-byte reader to 'whatever' writer I'm not aware of?
If I were sourcing photos from a Java-based server, my photo rendering dilemma would be over. The server-to-client packets would be java.lang.Byte[] => flash.utils.ByteArray, and voilà, we could easily and directly render photos via the Loader.loadBytes() method.
I could tentatively extend one of the AS3 ByteArray class write methods if I thought there was a way to read UTF-8 streams. Before digging down into the ByteArray class on a feasibility basis, I thought it best to first ask the experts: 1) can ByteArray be extended for my requirement, or 2) is there a better way to approach this which I have not thought of?
    Pete Mackie
    Seaquest Software
    Adobe Community Expert - Flex

    Hi,
    Thanks for the quick reply.
I simply don't get it.
I have a root class which adds two different children of different classes. One child dispatches an event, and the other is supposed to get it. Both of them are now supposed to be in the instance hierarchy, or am I wrong?
Logically, I'd figure this should work smoothly and simply without any need for any looping or such.
Right now, the root class can receive the event dispatched by child1, but I can't make child2 receive any event dispatched by either the root or child1.
I'd appreciate it if you could show me the way...
    thanks again,
    EZ42

  • Implementing a Graph Data Structure

    First time posting here, I was introduced to LabVIEW while participating in FIRST.
I was reading about graph data structures and wanted to see if I could use them in LabVIEW, as they looked like a good way to relate information. Unfortunately, it appears that the straightforward way I attempted will not work. I tried creating a typedef which consisted of a cluster of two elements: an array of linked nodes and an array of edge weights. Alas, I found you can't have recursive data types, as the array of linked nodes in the typedef needed to be of the same type as the typedef itself. After a bit of searching I know why this is, but I was wondering if there is a way to get around it. From my research, it seems like using a root class and a child class is one possible, but advanced, way of doing it.
I am currently thinking of just representing the linked nodes of a node as an array of index numbers which you can use to get the referenced nodes from the graph array. This means that the graph array cannot be sorted or otherwise modified in a way that invalidates the index numbers; you can only add objects onto the end. Is this thinking right, or is there a different way to go about this?
    Solved!
    Go to Solution.

Not an easy problem, as recursion is not native to LV programming (much less so than in any other language I have used except assembler).
But the solution to your index number problem is to use 'keys'. Each node should have a unique key (a number), so you can address it (search for the key) even if the array is sorted in some way.
Again, this is not native to LV, which means you need to program the logic other languages use for this yourself (pseudo-pointers, handles, keys, hashes).
From a practical side: I would not implement the graph in LV itself but access a graph from some other source (a database?) via an interface (ActiveX, .NET). To learn a bit more about how to do it, try playing around with the tree control (it is a limited graph).
    Felix
    www.aescusoft.de
    My latest community nugget on producer/consumer design
    My current blog: A journey through uml
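To illustrate the 'keys' suggestion above in text form (in Java, since LabVIEW block diagrams don't paste well here), the sketch below stores nodes in a map keyed by a stable unique id, so links survive any sorting or reshuffling of the node collection. All names are illustrative:

    import java.util.*;

    public class KeyedGraph {
        static class Node {
            final int key;                               // stable unique id
            final List<Integer> neighborKeys = new ArrayList<>();
            final List<Double> edgeWeights = new ArrayList<>();
            Node(int key) { this.key = key; }
        }

        private final Map<Integer, Node> nodes = new HashMap<>();
        private int nextKey = 0;

        public int addNode() {
            int k = nextKey++;                           // keys are never reused
            nodes.put(k, new Node(k));
            return k;
        }

        public void addEdge(int fromKey, int toKey, double weight) {
            Node n = nodes.get(fromKey);                 // O(1) lookup by key
            n.neighborKeys.add(toKey);
            n.edgeWeights.add(weight);
        }

        public static void main(String[] args) {
            KeyedGraph g = new KeyedGraph();
            int a = g.addNode(), b = g.addNode();
            g.addEdge(a, b, 2.5);                        // links survive any reordering
            System.out.println("node " + a + " -> " + g.nodes.get(a).neighborKeys);
        }
    }

In LabVIEW terms, the map would be an array of clusters plus a search on the key field (or a variant-attribute lookup), which is exactly the pseudo-pointer logic the reply describes.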

  • Updating a hierarchical data structure from an entry processor

    I have a tree-like data structure that I am attempting to update from an AbstractProcessor.
    Imagine that one value is a collection of child value keys, and I want to add a new child node in the tree. This requires updating the parent node (which contains the list of child nodes), and adding the child value which is a separate entry.
    I would rather not combine all bits of data into one value (which could make for a large serialized object), as sometimes I prefer to access (read-only) the child values directly. The child and the parent values live in the same partition in the partitioned cache, though, so get access should be local.
    However, I am attempting to call put() on the same cache to add a child value which is apparently disallowed. It makes sense that a blocking call is involved in this operation, as it needs to push out this data to the cluster member that has the backup value for the same operation, but is there a general problem with performing any kind of re-entrant work on Coherence caches from an entry processor for any value that is not the value you are processing? I get the assertion below.
    I am fine with the context blocking (preventing reads or writes on the parent node value) until the child completes, presuming that I handle deadlock prevention myself due to the order in which values are accessed.
    Is there any way to do this, either with entry processors or not? My code previously used lock, get and put to operate on the tree (which worked), but I am trying to convert this code to use entry processors to be more efficient.
    2008-12-05 16:05:34.450 (ERROR)[Coherence/Logger@9219882 3.4/405]: Assertion failed: poll() is a blocking call and cannot be called on the Service thread
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:30)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:1)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.put(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
         at com.tangosol.net.cache.CachingMap.put(CachingMap.java:928)
         at com.tangosol.net.cache.CachingMap.put(CachingMap.java:887)
         at com.tangosol.net.cache.NearCache.put(NearCache.java:286)
         at com.conduit.server.properties.CLDistributedPropertiesManager$UpdatePropertiesProcessor.process(CLDistributedPropertiesManager.java:249)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.invoke(DistributedCache.CDB:20)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onInvokeRequest(DistributedCache.CDB:50)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$InvokeRequest.run(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:130)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:637)

    Hi,
Reentrant calls to the same Coherence service are very much recommended against.
For more about this, please see the following Wiki page:
    http://wiki.tangosol.com/display/COH34UG/Constraints+on+Re-entrant+Calls
    Best regards,
    Robert
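In other words, nothing inside process() may call back into a cache on the same service. One common restructuring, sketched here under the assumption that the child entry can simply be written from the client before the parent is updated (ParentNode and the key/value types are hypothetical stand-ins for the poster's tree classes, not Coherence API):

    import com.tangosol.net.NamedCache;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;
    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;

    public class AddChildExample {
        // Hypothetical parent value: holds the keys of its children.
        public static class ParentNode implements Serializable {
            private final List<Object> childKeys = new ArrayList<Object>();
            public List<Object> getChildKeys() { return childKeys; }
        }

        // The processor touches only the entry it was invoked on.
        public static class AddChildKeyProcessor extends AbstractProcessor
                implements Serializable {
            private final Object childKey;
            public AddChildKeyProcessor(Object childKey) { this.childKey = childKey; }

            public Object process(InvocableMap.Entry entry) {
                ParentNode parent = (ParentNode) entry.getValue();
                parent.getChildKeys().add(childKey);   // no cache.put() in here
                entry.setValue(parent);                // write back through the entry
                return null;
            }
        }

        // Client side: write the child first, then link it into the parent.
        public static void addChild(NamedCache cache, Object parentKey,
                                    Object childKey, Object childValue) {
            cache.put(childKey, childValue);
            cache.invoke(parentKey, new AddChildKeyProcessor(childKey));
        }
    }

Writing the child before linking it keeps the structure consistent if the second step fails (an unreferenced child is harmless, a dangling key is not); whether that ordering is acceptable depends on the application's consistency needs.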

  • Error in creating New Data Structures

    Hi Gurus,
I tried creating a new data structure in the R3 server using RSO2. I was planning to use the purchasing table (EKBE). But when I tried saving the structure, this error occurred:
Invalid extract structure template EKBE of DataSource MM_EKBE
Diagnosis:
You have tried to generate an extract structure with the template structure EKBE. This was not successful, because the template structure references quantity or currency fields (for example, the field MENGE) from another table.
Procedure:
Generate a view or a DDIC structure based on the template structure which does not contain the unsupported fields.
- I have no idea what the problem is or what I should do based on the procedure. If anyone can help, it would be much appreciated. Thank you in advance
    - KIT

    Hi Kristian,
The table EKBE has quantity and currency fields that reference fields from other tables for their definitions.
You need to include these referenced fields in the structure you use as the template for the DataSource. To do this, create a custom dictionary object: a view.
In the view, include the reference currency and quantity fields from the reference tables, and create a join between EKBE and each reference table using the data elements that are the same.
For example, for the quantity key figures in EKBE, the unit of measure is sourced from table MARA, so you need to have MARA included in the view, with the join on the material in EKBE and the material in MARA.
You will find the reference quantity and currency fields on the Reference currency/quantity tab in the table definition.
You need to do this for all your quantity and currency references.
Once that is done, use the ZVIEW to create your DataSource.
    Hope it helps,
    Best regards,
    Sunmit.

  • Trees Data Structures in Java

Hi all!
Currently I'm developing Java software that requires a tree data structure. Searching through the Java API I've found classes like TreeMap and TreeSet, but although these classes seem to use a tree structure behind the scenes, what they really provide is a linear (sorted) data structure, not a tree data structure. My question is whether there is a public and free Java tree data structure that permits activities such as:
- Search for a particular node (efficiently)
- Insert children at any node
- Maintain one Object in each node
Simple as that! There must be something like this in Java, but I just can't find it and I'm too lazy to implement it :) ! Can anyone help me?
Thanks in advance!

    Mel,
I've seen javax.swing.tree.TreeModel, and it seems more like an implementation of the Controller part of Swing's MVC (Model View Controller) paradigm for JTree. It has listeners generated for manipulation events, for example. It uses javax.swing.tree.DefaultMutableTreeNode for its nodes, and maybe I could use this class for the nodes of my tree, since it has many useful operations. However, I still miss a node-searching operation that this class doesn't have.
Yes, my problem is a simple tree, but it is simple now and it might become much more complex, demanding more operations on the tree and especially efficient searching algorithms. That's why I'm looking for a complete solution, and I prefer something that is already done and well tested :) !
Thanks for your tip!
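For what it's worth, a general tree of the kind asked for is only a few lines to write. A minimal generic sketch, assuming depth-first search is acceptable (keep a separate HashMap from values to nodes alongside the tree if lookups must be faster than O(n)):

    import java.util.*;

    public class TreeNode<T> {
        private final T value;                    // one payload Object per node
        private final List<TreeNode<T>> children = new ArrayList<>();

        public TreeNode(T value) { this.value = value; }

        // Insert a child under this node and return it, so calls can chain.
        public TreeNode<T> addChild(T childValue) {
            TreeNode<T> child = new TreeNode<>(childValue);
            children.add(child);
            return child;
        }

        // Depth-first search; O(n), since a general tree has no ordering to exploit.
        public TreeNode<T> find(T target) {
            if (Objects.equals(value, target)) return this;
            for (TreeNode<T> c : children) {
                TreeNode<T> hit = c.find(target);
                if (hit != null) return hit;
            }
            return null;
        }

        public static void main(String[] args) {
            TreeNode<String> root = new TreeNode<>("root");
            root.addChild("a").addChild("a1");    // children at any depth
            root.addChild("b");
            System.out.println(root.find("a1") != null);  // prints true
        }
    }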
