Dictionary Structure---data structure of Dictionary

I don't know how to build the data structure for a dictionary...
Somebody said it was built with a binary tree, but I don't know how.
Can somebody help me?
Thanks for reading my topic.

A dictionary is not a tree/hierarchical structure. Have you ever opened a dictionary before?! If you have even once, you already know the structure of a dictionary. Now, how to build that in Java is another question. I'm not sure of your requirements, but you could start by creating an object that represents a single term, its definitions, usages, etc. Then you can create a List of them (and sort it alphabetically), as in the sketch below. Simple enough?

Similar Messages

  • How do I create a structure in the ABAP Dictionary?

    I need a structure for a FORM. How can I create a structure in the ABAP Dictionary?
    Thanks!!

    Hi Carlos,
    Go to SE11 (ABAP Dictionary).
    Select the Data Type radio button.
    Specify the name for your structure.
    Note that a structure in the customer namespace should start with Z or Y.
    Press the Create button.
    In the next dialog box, select the Structure radio button.
    You will then get to the structure screen.
    Give a meaningful description for your structure.
    Add the relevant fields and finally activate.
    Regards,
    Abdul

  • Inconsistency in the Dictionary for the structure "MC02M_0ITM".

    Hi Expert,
    We are applying EHP3 for ERP 6.0. I am getting the ABAP dump DDIC_TYPE_INCONSISTENCY below.
    Runtime Errors DDIC_TYPE_INCONSISTENCY
    Date and Time          29.08.2011 10:12:27
    Short text
         Inconsistency in the Dictionary for the structure "MC02M_0ITM".
    What happened?
         Error in the SAP kernel.
         The current ABAP "SAPLMCEX" program had to be terminated because the
         ABAP processor detected an internal system error.
    Error analysis
         There is an internal system error that cannot be eliminated with ABAP/4 means.
    How to correct the error
         The internal system error cannot be fixed by ABAP means only.
         You may be able to find a solution in the SAP note system. If you have
         access to the SAP note system, try searching for the following terms:
          "DDIC_TYPE_INCONSISTENCY" " "
          "SAPLMCEX" or "LMCEXTOP"
    I can see there is an error on table MC02M_0ITM:
    Table MC02M_0ITM could not be activated (E - Field ZZUNDTSORT in table MC02M_0ITM is specified twice. Please check).
    When I run the program RSDDCHECK in our system to check the structure "MC02M_0ITM", there are some errors.
    Any idea how I can solve this kind of error? Is there some report that helps with this?
    Regards

    Hi Srikishan,
    Thanks for the reply.
    There is just one field ZZUNDTSORT on the table.
    When I try to check the table I get this log:
    Check table/structure MC02M_0ITM
               Check table MC02M_0ITM (REZ/29.08.11/11:14)
               Enhancement category 3 possible, but include or subty. not yet classified
               Field ZZAFHENTFORM in table MC02M_0ITM is specified twice. Please check
               Field ZZAFHKVANTUM in table MC02M_0ITM is specified twice. Please check
               Field ZZAFHTERMFRA in table MC02M_0ITM is specified twice. Please check
               Field ZZAFHTERMTIL in table MC02M_0ITM is specified twice. Please check
               Field ZZAFREGNDATO in table MC02M_0ITM is specified twice. Please check
               Field ZZAFREGNKVAMTUM in table MC02M_0ITM is specified twice. Please check
               Field ZZAFREGNMEINS in table MC02M_0ITM is specified twice. Please check
               Field ZZALTMATNR in table MC02M_0ITM is specified twice. Please check
               Field ZZALTMPRIS in table MC02M_0ITM is specified twice. Please check
               Field ZZANSOEGNR in table MC02M_0ITM is specified twice. Please check
               Field ZZANTHA in table MC02M_0ITM is specified twice. Please check
               Field ZZBK_DUMMY1 in table MC02M_0ITM is specified twice. Please check
               Field ZZBK_DUMMY2 in table MC02M_0ITM is specified twice. Please check
               Field ZZBK_GRKERN in table MC02M_0ITM is specified twice. Please check
               Field ZZBK_KNKERN in table MC02M_0ITM is specified twice. Please check
               Field ZZBK_MELDR in table MC02M_0ITM is specified twice. Please check
               Field ZZBK_OLIEIND in table MC02M_0ITM is specified twice. Please check
               Field ZZBK_RENS in table MC02M_0ITM is specified twice. Please check
               Field ZZBK_RENTFRO in table MC02M_0ITM is specified twice. Please check
               Field ZZBK_RENTIL in table MC02M_0ITM is specified twice. Please check
               Field ZZBK_TOR in table MC02M_0ITM is specified twice. Please check
               Field ZZBK_TORTIL in table MC02M_0ITM is specified twice. Please check
               Field ZZBPROT in table MC02M_0ITM is specified twice. Please check
               Field ZZBUREN in table MC02M_0ITM is specified twice. Please check
               Field ZZBVAND in table MC02M_0ITM is specified twice. Please check
               Field ZZFORBELNR in table MC02M_0ITM is specified twice. Please check
               Field ZZFORFALDSDATO in table MC02M_0ITM is specified twice. Please check
               Field ZZFORVUDB in table MC02M_0ITM is specified twice. Please check
               Field ZZGJAHR in table MC02M_0ITM is specified twice. Please check
               Field ZZKONCEPTNR in table MC02M_0ITM is specified twice. Please check
               Field ZZLAESSELV in table MC02M_0ITM is specified twice. Please check
               Field ZZLAGERLEJDATO in table MC02M_0ITM is specified twice. Please check
               Field ZZLAGERLEJSATS in table MC02M_0ITM is specified twice. Please check
               Field ZZLAGERSATSDATO in table MC02M_0ITM is specified twice. Please check
               Field ZZLAGERSATSDATO2 in table MC02M_0ITM is specified twice. Please check
               Field ZZLAGERSATSDATO3 in table MC02M_0ITM is specified twice. Please check
    Regards

  • Inconsistency in the Dictionary for the structure "VXO2APPL"

    Hi gurus,
    I started with the SP stack upgrade to 10 (Install SAP EHP1 for SAP NetWeaver 7.0 Stack 10 with Kernel 7.20).
    First I did the kernel upgrade from 701 patch 111 to 720 patch 105, which was successful.
    Now when I try to upgrade the stack it gives me the error below:
    Runtime Errors         DDIC_TYPE_INCONSISTENCY
    Date and Time          22.02.2012 14:16:39
    Short text
         Inconsistency in the Dictionary for the structure "VXO2APPL".
    What happened?
         Error in the SAP kernel.
         The current ABAP "SAPLSDOC" program had to be terminated because the
         ABAP processor detected an internal system error.
    Error analysis
         There is an internal system error that cannot be eliminated with ABAP/4 means.
    Trigger Location of Runtime Error
         Program                                 SAPLSDOC
         Include                                 LSVRXPIN
         Row                                     1.096
    **************************************************************************
    I am able to log in to the system in client 000, but when I enter any tcode I get this error (I am only able to run ST22 and no other tcode).
    Anyone who can help or has got the same problem earlier?
    Thanks
    Siddharth

    Hi Siddharth,
    Is the kernel upgrade to 720 really required for the support package stack upgrade?
    The information clearly states "Error in the SAP kernel", so I would instead suggest upgrading your SAP system kernel to the latest 701 patch level, 150.
    Once you revert the kernel to 701, try applying the support packages via client 000.
    Let us know the results.
    Regards,

  • Inconsistency in the Dictionary for the structure "FAGLPOSX" - GL Planning

    Hi,
    We are in ECC 6.0. After inputting plan data for GL using TCode GP12N, I am trying to run the report F.01. However the system is giving the error "Inconsistency in the Dictionary for the structure "FAGLPOSX"". Has anyone faced this problem before? Do I have to activate something before I start planning? Do help if you have a solution for the same.
    Thanks and Regards
    Shivram.

    Hi,
    Solved!! I had incorrectly added EBELN in the additional fields for line item display config. This resulted in the EBELN field appearing twice in FAGLPOSX and hence the error.
    Rgds
    Shivram.

  • Simple Transformation to deserialize an XML file into ABAP data structures?

    I'm attempting to write my first simple transformation to deserialize
    an XML file into ABAP data structures and I have a few questions.
    My simple transformation contains code like the following
    <tt:transform xmlns:tt="http://www.sap.com/transformation-templates"
                  xmlns:pp="http://www.sap.com/abapxml/types/defined" >
    <tt:type name="REPORT" line-type="?">
      <tt:node name="COMPANY_ID" type="C" length="10" />
      <tt:node name="JOB_ID" type="C" length="20" />
      <tt:node name="TYPE_CSV" type="C" length="1" />
      <tt:node name="TYPE_XLS" type="C" length="1" />
      <tt:node name="TYPE_PDF" type="C" length="1" />
      <tt:node name="IS_NEW" type="C" length="1" />
    </tt:type>
    <tt:root name="ROOT2" type="pp:REPORT" />
        <QueryResponse>
        <tt:loop ref="ROOT2" name="line">
          <QueryResponseRow>
            <CompanyID>
              <tt:value ref="$line.COMPANY_ID" />
            </CompanyID>
            <JobID>
              <tt:value ref="$line.JOB_ID" />
            </JobID>
            <ExportTypes>
              <tt:loop>
                <ExportType>
                   I don't know what to do here (see item 3, below)
                </ExportType>
              </tt:loop>
            </ExportTypes>
            <IsNew>
              <tt:value ref="$line.IS_NEW"
              map="val(' ') = xml('false'), val('X') = xml('true')" />
            </IsNew>
          </QueryResponseRow>
          </tt:loop>
        </QueryResponse>
        </tt:loop>
    1. In a DTD, an element can be designated as occurring zero or one
    time, zero or more times, or one or more times. How do I write the
    simple transformation to accommodate these possibilities?
    2. In trying to accommodate the "zero or more times" case, I am trying
    to use the <tt:loop> instruction. It occurs several layers deep in the
    XML hierarchy, but at the top level of the ABAP table. The internal
    table has a structure defined in the ABAP program, not in the data
    dictionary. In the simple transformation, I used <tt:type> and
    <tt:node> to define the structure of the internal table and then
    tried to use <tt:loop ref="ROOT2" name="line"> around the subtree that
    can occur zero or more times. But every variation I try seems to get
    different errors. Can anyone supply a working example of this?
    3. Among the fields in the internal table, I've defined three
    one-character fields named TYPE_CSV, TYPE_XLS, and TYPE_PDF. In the
    XML file, I expect zero to three elements of the form
    <ExportType exporttype='csv' />
    <ExportType exporttype='xls' />
    <ExportType exporttype='pdf' />
    I want to set field TYPE_CSV = 'X' if I find an ExportType element
    with its exporttype attribute set to 'csv'. I want to set field
    TYPE_XLS = 'X' if I find an ExportType element with its exporttype
    attribute set to 'xls'. I want to set field TYPE_PDF = 'X' if I find
    an ExportType element with its exporttype attribute set to 'pdf'. How
    can I do that?
    4. For an element that has a value like
    <ErrorCode>123</ErrorCode>
    in the simple transformation, the sequence
    <ErrorCode>  <tt:value ref="ROOT1.CODE" />  </ErrorCode>
    seems to work just fine.
    I have other situations where the XML reads
    <IsNew value='true' />
    I wanted to write
    <IsNew>
            <tt:value ref="$line.IS_NEW"
            map="val(' ') = xml('false'), val('X') = xml('true')" />
           </IsNew>
    but I'm afraid that the <tt:value> fails to deal with the fact that in
    the XML file the value is being passed as the value of an attribute
    (named "value"), rather than the value of the element itself. How do
    you handle this?

    Try the code below:
    data l_xml_table2 type table of xml_line with header line.
    " l_xml_table holds the XML document; w_filename is the target path.
    " If the path points to the application server, write the file with OPEN DATASET;
    " otherwise download it to the frontend with cl_gui_frontend_services=>gui_download.
    " (The comparison literal in the IF below was truncated in the original post.)
    if w_filename(02) = '...'.
      open dataset w_filename for output in binary mode.
      if sy-subrc = 0.
        l_xml_table2[] = l_xml_table[].
        loop at l_xml_table2.
          transfer l_xml_table2 to w_filename.
        endloop.
      endif.
      close dataset w_filename.
    else.
      call method cl_gui_frontend_services=>gui_download
        exporting
          bin_filesize = l_xml_size
          filename     = w_filename
          filetype     = 'BIN'
        changing
          data_tab     = l_xml_table
        exceptions
          others       = 24.
      if sy-subrc <> 0.
        message id sy-msgid type sy-msgty number sy-msgno
                   with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      endif.
    endif.

  • Error in creating New Data Structures

    Hi Gurus,
    I tried creating a new data structure in the R/3 server using RSO2. I was planning to use the purchasing table EKBE. But when I tried saving the structure, this error occurred:
    Invalid extract structure template EKBE of DataSource MM_EKBE
    Diagnosis:
    You have tried to generate an extract structure with template structure EKBE. This was not successful, because the template structure references quantity or currency fields, for example the field name MENGE from another table.
    Procedure:
    Generate a view or a DDIC structure based on the template structure which does not contain the unsupported fields.
    - I have no idea what the problem is or what I should do based on the procedure. If anyone can help, it would be much appreciated. Thank you in advance.
    - KIT

    Hi Kristian,
    The table EKBE has quantity and currency fields that reference fields from other tables to define them.
    You need to include these referenced fields in the structure you add to the DataSource as the template.
    To do this, create a custom dictionary view.
    In the view, include the reference currency and quantity fields from the reference tables.
    Also create a join between EKBE and the reference table using the data elements that are the same.
    For example, for the quantity key figure MENGE in EKBE, the unit of measure would be sourced from MARA.
    So you need to have table MARA included in the view.
    The join would be on the material in EKBE and the material in MARA.
    You will find the reference quantities and currencies in the Reference currency/quantity tab of the table definition.
    You need to do this for all your quantity and currency references.
    Once that is done, use the Z view to create your DataSource.
    Hope it helps,
    Best regards,
    Sunmit.

  • How to view structure data?

    I want to view the data of structure SOS_OBJLST. Table data can be viewed, e.g. with transactions SE16 and SE17. Is there a similar kind of transaction to view structure data?

    Hello Mayank,
    I'll help you out. Follow this.
    Let me tell you the basic difference between a table and a structure:
    Table: stores values.
    Structure: does not store values; it is just a skeleton for the table.
    To identify which table any field in a structure is picked from:
    Go to transaction SE84, choose ABAP Dictionary -> Fields -> Structure Fields, enter your field names and execute.
    You will get all the tables that the structure is made of.
    Thank you

  • OC4J: marshalling does not recreate the same data structure onthe client

    Hi guys,
    I am trying to use OC4J as an EJB container and have come across the following problem, which looks like a bug.
    I have a value object method that returns an instance of ArrayList with references to other value objects of the same class. The value objects have references to other value objects. When this structure is marshalled across the network, we expect it to be recreated as is but that does not happen and instead objects get duplicated.
    Suppose we have 2 value objects: ValueObject1 and ValueObject2. ValueObject1 references ValueObject2 via its private field, and ValueObject2 references ValueObject1. Both value objects are returned by our method in an ArrayList structure. Here is how it looks (the number after @ represents an address in memory):
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@2
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@1
    We would expect to see the same (except exact addresses) after marshalling. Here is what we get instead:
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@3
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@4
    It can be seen that objects get unnecessarily duplicated – the instance of the ValueObject1 referenced by the ValueObject2 is not the same now as the instance that is referenced by the ArrayList instance.
    This does not only break referential integrity, structure and consistency of the data but dramatically increases the amount of information sent across the network. The problem was discovered when we found that a relatively small but complicated structure that gets serialized into a 142kb file requires about 20Mb of network communication. All this extra info is duplicated object instances.
    I have created a small test case to demonstrate the problem and let you reproduce it.
    Here is RMITestBean.java:
    package com.cramer.test;
    import javax.ejb.EJBObject;
    import java.util.*;
    public interface RMITestBean extends EJBObject {
        public ArrayList getSomeData(int testSize) throws java.rmi.RemoteException;
        public byte[] getSomeDataInBytes(int testSize) throws java.rmi.RemoteException;
    }
    Here is RMITestBeanBean.java:
    package com.cramer.test;
    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;
    import java.util.*;
    public class RMITestBeanBean implements SessionBean {
        private SessionContext context;
        SomeVO someVO;

        public void ejbCreate() {
            someVO = new SomeVO(0);
        }

        public void ejbActivate() { }
        public void ejbPassivate() { }
        public void ejbRemove() { }

        public void setSessionContext(SessionContext ctx) {
            this.context = ctx;
        }

        public byte[] getSomeDataInBytes(int testSize) {
            ArrayList someData = getSomeData(testSize);
            try {
                java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                objectOutputStream.writeObject(someData);
                objectOutputStream.flush();
                System.out.println(" serialised output size: "+byteOutputStream.size());
                byte[] bytes = byteOutputStream.toByteArray();
                objectOutputStream.close();
                byteOutputStream.close();
                return bytes;
            } catch (Exception e) {
                System.out.println("Serialisation failed: "+e.getMessage());
                return null;
            }
        }

        public ArrayList getSomeData(int testSize) {
            // Create array of objects
            ArrayList someData = new ArrayList();
            for (int i=0; i<testSize; i++)
                someData.add(new SomeVO(i));
            // Interlink all the objects
            for (int i=0; i<someData.size()-1; i++)
                for (int j=i+1; j<someData.size(); j++) {
                    ((SomeVO)someData.get(i)).addValueObject((SomeVO)someData.get(j));
                    ((SomeVO)someData.get(j)).addValueObject((SomeVO)someData.get(i));
                }
            // print out the data structure
            System.out.println("Data:");
            for (int i = 0; i<someData.size(); i++) {
                SomeVO tmp = (SomeVO)someData.get(i);
                System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
                System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
                for (int j = 0; j<tmp.getValueObjectCount(); j++) {
                    SomeVO tmp2 = tmp.getValueObject(j);
                    System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
                    System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
                }
            }
            // Check the serialised size of the structure
            try {
                java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                objectOutputStream.writeObject(someData);
                objectOutputStream.flush();
                System.out.println("Serialised output size: "+byteOutputStream.size());
                objectOutputStream.close();
                byteOutputStream.close();
            } catch (Exception e) {
                System.out.println("Serialisation failed: "+e.getMessage());
            }
            return someData;
        }
    }
    Here is RMITestBeanHome:
    package com.cramer.test;
    import javax.ejb.EJBHome;
    import java.rmi.RemoteException;
    import javax.ejb.CreateException;
    public interface RMITestBeanHome extends EJBHome {
        RMITestBean create() throws RemoteException, CreateException;
    }
    Here is ejb-jar.xml:
    <?xml version = '1.0' encoding = 'windows-1252'?>
    <!DOCTYPE ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Enterprise JavaBeans 2.0//EN" "http://java.sun.com/dtd/ejb-jar_2_0.dtd">
    <ejb-jar>
    <enterprise-beans>
    <session>
    <description>Session Bean ( Stateful )</description>
    <display-name>RMITestBean</display-name>
    <ejb-name>RMITestBean</ejb-name>
    <home>com.cramer.test.RMITestBeanHome</home>
    <remote>com.cramer.test.RMITestBean</remote>
    <ejb-class>com.cramer.test.RMITestBeanBean</ejb-class>
    <session-type>Stateful</session-type>
    <transaction-type>Container</transaction-type>
    </session>
    </enterprise-beans>
    </ejb-jar>
    And finally the application that tests the bean:
    package com.cramer.test;
    import java.util.*;
    import javax.rmi.*;
    import javax.naming.*;
    public class RMITestApplication {
        final static boolean HARDCODE_SERIALISATION = false;
        final static int TEST_SIZE = 2;

        public static void main(String[] args) {
            Hashtable props = new Hashtable();
            props.put(Context.INITIAL_CONTEXT_FACTORY, "com.evermind.server.rmi.RMIInitialContextFactory");
            props.put(Context.PROVIDER_URL, "ormi://lil8m:23792/alexei");
            props.put(Context.SECURITY_PRINCIPAL, "admin");
            props.put(Context.SECURITY_CREDENTIALS, "admin");
            try {
                // Get the JNDI initial context
                InitialContext ctx = new InitialContext(props);
                NamingEnumeration list = ctx.list("comp/env/ejb");
                // Get a reference to the Home Object which we use to create the EJB Object
                Object objJNDI = ctx.lookup("comp/env/ejb/RMITestBean");
                // Now cast it to an InventoryHome object
                RMITestBeanHome testBeanHome = (RMITestBeanHome)PortableRemoteObject.narrow(objJNDI,RMITestBeanHome.class);
                // Create the Inventory remote interface
                RMITestBean testBean = testBeanHome.create();
                ArrayList someData = null;
                if (!HARDCODE_SERIALISATION) {
                    // ############################### Alternative 1 ##############################
                    // ## This relies on marshalling serialisation ##
                    someData = testBean.getSomeData(TEST_SIZE);
                    // ############################ End of Alternative 1 ##########################
                } else {
                    // ############################### Alternative 2 ##############################
                    // ## This gets a serialised byte stream and de-serialises it ##
                    byte[] bytes = testBean.getSomeDataInBytes(TEST_SIZE);
                    try {
                        java.io.ByteArrayInputStream byteInputStream = new java.io.ByteArrayInputStream(bytes);
                        java.io.ObjectInputStream objectInputStream = new java.io.ObjectInputStream(byteInputStream);
                        someData = (ArrayList)objectInputStream.readObject();
                        objectInputStream.close();
                        byteInputStream.close();
                    } catch (Exception e) {
                        System.out.println("Serialisation failed: "+e.getMessage());
                    }
                    // ############################ End of Alternative 2 ##########################
                }
                // Print out the data structure
                System.out.println("Data:");
                for (int i = 0; i<someData.size(); i++) {
                    SomeVO tmp = (SomeVO)someData.get(i);
                    System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
                    System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
                    for (int j = 0; j<tmp.getValueObjectCount(); j++) {
                        SomeVO tmp2 = tmp.getValueObject(j);
                        System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
                        System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
                    }
                }
                // Print out the size of the serialised structure
                try {
                    java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                    java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                    objectOutputStream.writeObject(someData);
                    objectOutputStream.flush();
                    System.out.println("Serialised output size: "+byteOutputStream.size());
                    objectOutputStream.close();
                    byteOutputStream.close();
                } catch (Exception e) {
                    System.out.println("Serialisation failed: "+e.getMessage());
                }
            } catch(Exception ex) {
                ex.printStackTrace(System.out);
            }
        }
    }
    The parameters you might be interested in playing with are HARDCODE_SERIALISATION and TEST_SIZE defined at the beginning of RMITestApplication.java. The HARDCODE_SERIALISATION is a flag that specifies whether Java serialisation should be used to pass the data across or we should rely on OC4J marshalling. TEST_SIZE defines the size of the object graph and the ArrayList structure. The bigger this size is the more dramatic effect you get from data duplication.
    The test case outputs the structure both on the server and on the client and prints out the size of the serialised structure. That gives us sufficient comparison, as both structure and its size should be the same on the client and on the server.
    The test case also demonstrates that the problem is specific to OC4J. The standard Java serialisation does not suffer the same flaw. However using the standard serialisation the way I did in the test case code is generally unacceptable as it breaks the transparency benefit and complicates interfaces.
    To run the test case:
    1) Modify provider URL parameter value on line 15 of the RMITestApplication.java for your environment.
    2) Deploy the bean to the server.
    4) Run RMITestApplication on a client PC.
    5) Compare the outputs on the server and on the client.
    I hope someone can reproduce the problem and give their opinion, and possibly point to the solution if there is one at the moment.
    Cheers,
    Alexei

    Hi,
    Eugene, that is the wrong kind of end-user recovery. Alexey is referring to client desktop end-user recovery, which is entirely different.
    Alexy - As noted in the previous post:
    http://social.technet.microsoft.com/Forums/en-US/bc67c597-4379-4a8d-a5e0-cd4b26c85d91/dpm-2012-still-requires-put-end-users-into-local-admin-groups-for-the-purpose-of-end-user-data?forum=dataprotectionmanager
    Each recovery point has user permissions tied to it, so it's not possible to retroactively give the users permissions. Implement the steps below and, going forward, all users can restore their own files.
    This is a hands off solution to allow all users that use a machine to be able to restore their own files.
     1) Make these two cmd files and save them in c:\temp
     2) Using windows scheduler – schedule addperms.cmd to run daily – any new users that log onto the machine will automatically be able to restore their own files.
    <addperms.cmd>
     Cmd.exe /v /c c:\temp\addreg.cmd
    <addreg.cmd>
     set users=
     echo Windows Registry Editor Version 5.00>c:\temp\perms.reg
     echo [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\ClientProtection]>>c:\temp\perms.reg
     FOR /F "Tokens=*" %%n IN ('dir c:\users\*. /b') do set users=!users!%Userdomain%\\%%n,
     echo "ClientOwners"=^"%users%%Userdomain%\\bogususer^">>c:\temp\perms.reg
     REG IMPORT c:\temp\perms.reg
     Del c:\temp\perms.reg
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • Diff B/w ABAP Dictionary and Data Dictionary

    What is the difference between the ABAP Dictionary and the Data Dictionary?

    Hi,
    Both are the same.
    Please check this online document; perhaps it may help.
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCDWBDIC/BCDWBDIC.pdf
    Regards,
    Ferry Lianto

  • BAPIs / Function Modules to access org. structure data?

    Hi,
    I want to integrate the SAP org structure with an external BPM system.
    Does anybody know how to access org structure data via RFC or web services?
    Thanks in advance.

    The purchasing organizations and purchasing groups can be found in the T024E and T024 tables.
    Regards
    Anurag

  • Xcode, how do you take an app from an idea to a data model, data structure, or class structure?

    Hey everyone!
    I'm a beginner xcoder and computer engineering sophomore and have literally spent every hour of the past few weeks reading as much as I possibly can about becoming a developer.
    I've found plenty of documentation and guides on how to do very specific things in Xcode and Objective-C, but am still looking for more information on one topic: how do you decide which data models/structures/controllers etc. to use?
    I mean, you personally. I've been looking for a resource on this and have not been able to find one. I just finished Apple's iOS Developers Guide and they mention "choosing a data model" but do not describe them in much detail.
    The following is what is provided in the guide:
    ● Choose a basic approach for your data model:
    ● Existing data model code—If you already have data model code written in a C-based language, you
    can integrate that code directly into your iOS apps. Because iOS apps are written in Objective-C, they
    work just fine with code written in other C-based languages. Of course, there is also benefit to writing
    an Objective-C wrapper for any non Objective-C code.
    ● Custom objects data model—A custom object typically combines some simple data (strings, numbers,
    dates, URLs, and so on) with the business logic needed to manage that data and ensure its consistency.
    Custom objects can store a combination of scalar values and pointers to other objects. For example,
    the Foundation framework defines classes for many simple data types and for storing collections of
    other objects. These classes make it much easier to define your own custom objects.
    ● Structured data model—If your data is highly structured—that is, it lends itself to storage in a
    database—use Core Data (or SQLite) to store the data. Core Data provides a simple object-oriented
    model for managing your structured data. It also provides built-in support for some advanced features
    like undo and iCloud. (SQLite files cannot be used in conjunction with iCloud.)
    ● Decide whether you need support for documents:
    The job of a document is to manage your app’s in-memory data model objects and coordinate the storage
    of that data in a corresponding file (or set of files) on disk. Documents normally connote files that the user
    created but apps can use documents to manage non user facing files too. One big advantage of using
    documents is that the UIDocument class makes interacting with iCloud and the local file system much
    simpler. For apps that use Core Data to store their content, the UIManagedDocument class provides similar
    support.
    I suppose my question boils down to, how do you decide which structures to use? If you can provide an example of an app idea and how its implemented that would be very helpful and much appreciated!
    For example, to implement the idea of an app which allows users to progress through levels of knowledge of a certain subject and rewarding them with badges and such (this is not an actual app just a whim) how would one model that?
    Thanks in advance for all your help!!!

    SgtChevelle wrote:
    how do you decide which structures to use?
    Trial and error.
    I wish I had a better answer for you, but that pretty much encapsulates it. There is some, but not much, good wisdom out there, but it takes a significant amount of experience to be able to recognize it. The software development community is currently afflicted with a case of copy-and-paste-itis. And the prognosis is poor.
    The solution is to be brutal to yourself and others. Focus on what you need and ignore everything else. Remember that other people have their own needs and methods and they might not be applicable to you. Apple, for example, can hire thousands of programmers, set them to coding for six months, pick the best results, and have the end users spend their own time and money to test it. If you don't have Apple's resources and power, think twice about adopting Apple's approach. And I am talking from a macro to a micro perspective. Apple's sample and boilerplate code is just junk. Don't assume you can't do better. You can.
    Unfortunately, all this takes time and practice. You can read popular books, but never assume that anyone knows more than you do. Maybe they do and maybe they don't. It takes time to figure that out. Just do your best, ignore the naysayers, and doubt other people even more than you doubt yourself.

  • Error saving data structure CE11000 (please read log) message number KX 655

    While activating the data structure in the operating concern of CO-PA, SAP gives the following errors:
    1.Error saving data structure CE11000 (please read log)
    Message no. KX655
    2.Error saving table CE01000
    Message no. KX593
    3. In the log: Reference field CE31000-REC_WAERS for CE31000-VVQ10001 has an incorrect type.
    Please suggest.

    Hey,
    Below tables are related to application logs
    BAL_AMODAL  :                   Application Log: INDX table for amodal communication
    BALC        :                   Application Log: Log or message context            
    BALDAT      :                   Application Log: Log data                          
    BALHANDLE   :                   Application Log: Lock object dummy table           
    BALHDR      :                   Application log: log header                        
    BALHDRP     :                   Application log: log parameter                     
    BAL_INDX    :                   Application Log: INDX tables                       
    BALM        :                   Application log: log message                       
    BALMP       :                   Application log: message parameter                 
    BALOBJ      :                   Application log: objects                           
    BALOBJT     :                   Application log: object texts                      
    BALSUB      :                   Application log: sub-objects                       
    BALSUBT     :                   Application log: Sub-object texts                  
    -Kiran
    *Please mark useful answers

  • Can I automate the creation of a cluster in LabView using the data structure created in an autogenerated .CSV, C header, or XML file?

    Can I automate the creation of a cluster in LabView using the data structure created in an auto generated .CSV, C header, or XML file?  I'm trying to take the data structure defined in one or more of those files listed and have LabView automatically create a cluster with identical structure and data types.  (Ideally, I would like to do this with a C header file only.)  Basically, I'm trying to avoid having to create the cluster by hand, as the number of cluster elements could be very large. I've looked into EasyXML and contacted the rep for the add-on.  Unfortunately, this capability has not been created yet.  Has anyone done something like this before? Thanks in advance for the help.  

    smercurio_fc wrote:
    Is this something you're trying to do at runtime? Clusters are fixed data structures so you can't change them programmatically. Or, are you just trying to create some typedef cluster controls so that you can use them for coding? What would your clusters basically look like? Perhaps another way of holding the information like an array of variants?
    You can try LabVIEW scripting, though be aware that this is not supported by NI. 
     Wow! Thanks for the quick response! We would use this cluster as a fixed data structure; there is no need to change the structure during runtime. The cluster would be a cluster of clusters with multiple levels. There would be no pattern as to how deep these levels go, or how many elements would be in each.
     Here is the application. I would like to be able to autocode a Simulink model file into a DLL. The model DLL would accept a Simulink bus object of a certain data structure (a bus of buses), pick out which elements of the bus are needed for the model calculation, and then pass the bus object on. I will then take the DLL file and use the DLL VI block to pass a cluster into the DLL block (with an identical structure to the bus in Simulink). To save time, I would like to auto-generate the C header file using Simulink to define the bus structure and then have LabVIEW read that header file and create the cluster automatically.
     Right now I can do everything but the automatic creation of the cluster. I can manually build the cluster to match the Simulink model bus structure and it runs fine, but this is only for an example model with a small structure. I need to make the cluster creation automated so it can handle large structures with minimal brute force. Thanks!

  • What is the best data structure for loading an enterprise Power BI site?

    Hi folks, I'd sure appreciate some help here!
    I'm a kinda old-fashioned gal and a bit of a traditionalist, building enterprise data warehouses out of Analysis Service hypercubes with a whole raft of MDX for analytics.  Those puppies would sit up and beg when you asked them to deliver up goodies
    to SSRS or PowerView.
    But Power BI is a whole new game for me.  
    Should I be exposing each dimension and fact table in the relational data warehouse as a single Odata feed?  
    Should I be running Data Management Gateway and exposing each table in my RDW individually?
    Should I be flattening my stars and snowflakes and creating a very wide First Normal Form dataset with everything relating to each fact? 
    I guess my real question, folks, is what's the optimum way of exposing data to the Power BI cloud?  
    And my subsidiary question is this:  am I right in saying that all the data management, validation, cleansing, and regular ETTL processes are still required
    before the data is suitable to expose to Power BI?  
    Or, to put it another way, is it not the case that you need to have a clean and properly structured data warehouse
    before the data is ready to be massaged and presented by Power BI? 
    I'd sure value your thoughts and opinions,
    Cheers, Donna
    Donna Kelly

    Dear All,
    My original question was: 
    what's the optimum way of exposing data to the Power BI cloud?
    Having spent the last month faffing about with Power BI – and reading about many people’s experiences using it – I think I can offer a few preliminary conclusions.
    Before I do that, though, let me summarise a few points:
    Melissa said “My initial thoughts:  I would expose each dim & fact as a separate OData feed” and went on to say “one of the hardest things . . . is
    the data modeling piece . . . I think we should try to expose the data in a way that'll help usability . . . which wouldn't be a wide, flat table ”.
    Greg said “data modeling is not a good thing to expose end users to . . . we've had better luck with is building out the data model, and teaching the users
    how to combine pre-built elements”
    I had commented “. . . end users and data modelling don't mix . . . self-service so
    far has been mostly a bust”.
    Here at Redwing, we give out a short White Paper on Business Intelligence Reporting.  It goes to clients and anyone else who wants one.  The heart
    of the Paper is the Reporting Pyramid, which states:  Business intelligence is all about the creation and delivery of actionable intelligence to the right audience at the right time
    For most of the audience, that means Corporate BI: pre-built reports delivered on a schedule.
    For most of the remaining audience, that means parameterised, drillable, and sliceable reporting available via the web, running the gamut from the dashboard to the details, available on
    demand.
    For the relatively few business analysts, that means the ability for business users to create their own semi-customised visual reports when required, to serve
    their audiences.
    For the very few high-power users, that means the ability to interrogate the data warehouse directly, extract the required data, and construct data mining models, spreadsheets and other
    intricate analyses as needed.
    On the subject of self-service, the Redwing view says: although many vendors want to sell self-service reporting tools to the enterprise, the facts of the matter are these:
    • 80%+ of all enterprise reporting requirements are satisfied by corporate BI . . . if it's done right.
    • Very few staff members have the time, skills, or inclination to learn and employ self-service business intelligence in the course of their activities.
    I cannot just expose raw data and tell everyone to get on with it.  That way lies madness!
    I think that clean and well-structured data is a prerequisite for delivering business intelligence. 
    Assuming that data is properly integrated, historically accurate and non-volatile as well, then I've just described
    a data warehouse, which is the physical expression of the dimensional model.
    Therefore, exposing the presentation layer of the data warehouse is – in my opinion – the appropriate interface for self-service business intelligence.
    Of course, we can choose to expose perspectives as well, which is functionally identical to building and exposing subject data marts.
    That way, all calculations, KPIs, definitions, and even field names are all consistent, because they all come from the single source of the truth and not from spreadmart hell.
    So my conclusion is that exposing the presentation layer of the properly modelled data warehouse is – in general - the way to expose data for self-service.
    That’s fine for the general case, but what about Power BI?  Well, it’s important to distinguish between new capabilities in Excel, and the ones in Office 365.
    I think that to all intents and purposes, we’re talking about exposing data through the Data Management Gateway and reading it via Power Query.
    The question boils down to what data structures should go down that pipe. 
    According to
    Create a Data Source and Enable OData Feed in Power BI Admin Center, the possibilities are tables and views.  I guess I could have repeating data in there, so it could be a flattened structure of the kind Melissa doesn’t like (and neither do I). 
    I could expose all the dims and all the facts . . . but that would mean essentially re-building the DW in the PowerPivot DM, and that would be just plain stoopid.  I mean, not a toy system, but a real one with scores of facts and maybe hundreds of dimensions?
    Fact is, I cannot for the life of me see what advantages DMG/PQ
    has over just telling corporate users to go directly to the Cube Perspective they want, that has already all the right calcs, KPIs, security, analytics, field names . . . and most importantly, is already modelled correctly!
    If I’m a real Power User, then I can use PQ on my desktop to pull mashup data from the world, along with all my on-prem data through my exposed Cube presentation layer, and PowerPivot the
    heck out of that to produce all the reporting I’d ever want.  It'd be a zillion times faster reading the data directly from the Cube instead of via the DMG, as well (I think Power BI performance sucks, actually).
    Of course, your enterprise might not
    have a DW, just a heterogeneous mass of dirty unstructured data.  If that’s the case,
    choosing Power BI data structures is the least of your problems!  :-)
    Cheers, Donna
    Donna Kelly
