Create data structures on hosted site

Hi,
I'm using a hosted site for my development. With a data modeling tool, I created a DDL script to create the data structures. How, within Apex, do I run the script?
Tnx,
Kim

Hello,
You can run those scripts directly (copy and paste, if the size is < 32,767 characters) from the SQL Workshop -> SQL Commands page.
See http://docs.oracle.com/cd/E23903_01/doc/doc.41/e21677/sql_proc.htm#autoId4
Alternatively, you can upload scripts (if they are large) and run them from the script repository. See the following links for more info.
http://docs.oracle.com/cd/E23903_01/doc/doc.41/e21677/sql_rep.htm#autoId5
http://docs.oracle.com/cd/E23903_01/doc/doc.41/e21677/sql_rep.htm#BABJBFDJ
Regards,
Hari

Similar Messages

  • What is the best data structure for loading an enterprise Power BI site?

    Hi folks, I'd sure appreciate some help here!
    I'm a kinda old-fashioned gal and a bit of a traditionalist, building enterprise data warehouses out of Analysis Services hypercubes with a whole raft of MDX for analytics.  Those puppies would sit up and beg when you asked them to deliver up goodies
    to SSRS or PowerView.
    But Power BI is a whole new game for me.  
    Should I be exposing each dimension and fact table in the relational data warehouse as a single OData feed?  
    Should I be running Data Management Gateway and exposing each table in my RDW individually?
    Should I be flattening my stars and snowflakes and creating a very wide First Normal Form dataset with everything relating to each fact? 
    I guess my real question, folks, is what's the optimum way of exposing data to the Power BI cloud?  
    And my subsidiary question is this:  am I right in saying that all the data management, validation, cleansing, and regular ETL processes are still required
    before the data is suitable to expose to Power BI?  
    Or, to put it another way, is it not the case that you need to have a clean and properly structured data warehouse
    before the data is ready to be massaged and presented by Power BI? 
    I'd sure value your thoughts and opinions,
    Cheers, Donna
    Donna Kelly

    Dear All,
    My original question was: 
    what's the optimum way of exposing data to the Power BI cloud?
    Having spent the last month faffing about with Power BI – and reading about many people’s experiences using it – I think I can offer a few preliminary conclusions.
    Before I do that, though, let me summarise a few points:
    Melissa said “My initial thoughts:  I would expose each dim & fact as a separate OData feed” and went on to say “one of the hardest things . . . is
    the data modeling piece . . . I think we should try to expose the data in a way that'll help usability . . . which wouldn't be a wide, flat table ”.
    Greg said “data modeling is not a good thing to expose end users to . . . we've had better luck with is building out the data model, and teaching the users
    how to combine pre-built elements”
    I had commented “. . . end users and data modelling don't mix . . . self-service so
    far has been mostly a bust”.
    Here at Redwing, we give out a short White Paper on Business Intelligence Reporting.  It goes to clients and anyone else who wants one.  The heart
    of the Paper is the Reporting Pyramid, which states:  Business intelligence is all about the creation and delivery of actionable intelligence to the right audience at the right time.
    For most of the audience, that means Corporate BI: pre-built reports delivered on a schedule.
    For most of the remaining audience, that means parameterised, drillable, and sliceable reporting available via the web, running the gamut from the dashboard to the details, available on
    demand.
    For the relatively few business analysts, that means the ability for business users to create their own semi-customised visual reports when required, to serve
    their audiences.
    For the very few high-power users, that means the ability to interrogate the data warehouse directly, extract the required data, and construct data mining models, spreadsheets and other
    intricate analyses as needed.
    On the subject of self-service, the Redwing view says:  Although many vendors want to sell self-service reporting tools to the enterprise, the facts of the matter are these:
    • 80%+ of all enterprise reporting requirements are satisfied by corporate BI . . . if it’s done right.
    • Very few staff members have the time, skills, or inclination to learn and employ self-service business intelligence in the course of their activities.
    I cannot just expose raw data and tell everyone to get on with it.  That way lies madness!
    I think that clean and well-structured data is a prerequisite for delivering business intelligence. 
    Assuming that data is properly integrated, historically accurate and non-volatile as well, then I've just described
    a data warehouse, which is the physical expression of the dimensional model.
    Therefore, exposing the presentation layer of the data warehouse is – in my opinion – the appropriate interface for self-service business intelligence.
    Of course, we can choose to expose perspectives as well, which is functionally identical to building and exposing subject data marts.
    That way, all calculations, KPIs, definitions, and even field names are all consistent, because they all come from the single source of truth, and not from spreadmart hell.
    So my conclusion is that exposing the presentation layer of the properly modelled data warehouse is – in general - the way to expose data for self-service.
    That’s fine for the general case, but what about Power BI?  Well, it’s important to distinguish between new capabilities in Excel, and the ones in Office 365.
    I think that to all intents and purposes, we’re talking about exposing data through the Data Management Gateway and reading it via Power Query.
    The question boils down to what data structures should go down that pipe. 
    According to Create a Data Source and Enable OData Feed in Power BI Admin Center, the possibilities are tables and views.  I guess I could have repeating data in there, so it could be a flattened structure of the kind Melissa doesn’t like (and neither do I). 
    I could expose all the dims and all the facts . . . but that would mean essentially re-building the DW in the PowerPivot DM, and that would be just plain stoopid.  I mean, not a toy system, but a real one with scores of facts and maybe hundreds of dimensions?
    Fact is, I cannot for the life of me see what advantages DMG/PQ
    has over just telling corporate users to go directly to the Cube Perspective they want, that has already all the right calcs, KPIs, security, analytics, field names . . . and most importantly, is already modelled correctly!
    If I’m a real Power User, then I can use PQ on my desktop to pull mashup data from the world, along with all my on-prem data through my exposed Cube presentation layer, and PowerPivot the
    heck out of that to produce all the reporting I’d ever want.  It'd be a zillion times faster reading the data directly from the Cube instead of via the DMG, as well (I think Power BI performance sucks, actually).
    Of course, your enterprise might not
    have a DW, just a heterogeneous mass of dirty unstructured data.  If that’s the case,
    choosing Power BI data structures is the least of your problems!  :-)
    Cheers, Donna
    Donna Kelly

  • Can Plan Layout in Muse be carried through to a site's folder structure when hosted on a 3rd party site?

    I've reworked a site in Muse (desktop version only, at this stage).  Some of my pages in the old site had very high organic search results in Google, so I need to keep the file structure of the new site the same.  Since I published the Muse version of my site a few days ago, the number of visitors has literally dropped to just 10% of what it usually is!
    Muse seems to create only a flat site (all pages are created directly in the root folder), and the Plan structure in Muse does NOT carry through to the file structure of the site when published with "Upload to FTP host".  I am concerned that the Plan layout is only concerned with menu auto-fill functionality, and does not reflect any physical structure.
    Any solutions?  Am I doing something wrong, or is this a limitation of Muse?
    Many thanks.

  • Can I automate the creation of a cluster in LabView using the data structure created in an autogenerated .CSV, C header, or XML file?

    Can I automate the creation of a cluster in LabView using the data structure created in an auto generated .CSV, C header, or XML file?  I'm trying to take the data structure defined in one or more of those files listed and have LabView automatically create a cluster with identical structure and data types.  (Ideally, I would like to do this with a C header file only.)  Basically, I'm trying to avoid having to create the cluster by hand, as the number of cluster elements could be very large. I've looked into EasyXML and contacted the rep for the add-on.  Unfortunately, this capability has not been created yet.  Has anyone done something like this before? Thanks in advance for the help.  

    smercurio_fc wrote:
    Is this something you're trying to do at runtime? Clusters are fixed data structures so you can't change them programmatically. Or, are you just trying to create some typedef cluster controls so that you can use them for coding? What would your clusters basically look like? Perhaps another way of holding the information like an array of variants?
    You can try LabVIEW scripting, though be aware that this is not supported by NI. 
    Wow!  Thanks for the quick response!  We would use this cluster as a fixed data structure.  No need to change the structure during runtime.  The cluster would be a cluster of clusters with multiple levels.  There would be no pattern as to how deep these levels would go, or how many elements would be in each.
    Here is the application.  I would like to be able to autocode a Simulink model file into a DLL.  The model DLL would accept a Simulink bus object of a certain data structure (bus of buses), pick out which elements of the bus are needed for the model calculation, and then pass the bus object on.  I then will take the DLL file and use the DLL VI block to pass a cluster into the DLL block (with identical structure to the bus in Simulink).  To save time, I would like to auto generate the C header file using Simulink to define the bus structure and then have LabView read that header file and create the cluster automatically.
    Right now I can do everything but the auto creation of the cluster.  I can manually build the cluster to match the Simulink model bus structure and it runs fine.  But this is only for an example model with a small structure.  I need to make the cluster creation automated so it can handle large structures with minimal brute force. Thanks!

  • How to create a reference instead of complex type for record in data structure type in BizTalk schema

    I have created the record in a BizTalk schema, but I want to set its Data Structure Type as a reference, e.g. Recordname (Reference), instead of recordname (Complex type).
    Please help me to sort this out
     

    You can add a reference to an existing type only.
    You might have to manually edit the XSD file to change the xmlns definition to xmlns:tns. Then, after you define the record type, set the data type to tns:.... (this creates the XML Type). Later in your schema you can instantiate this with a name and
    then set the type to tns:.... (Reference).
    Regards.

  • How to use "create data" for temporary structures?

    Hi,
    I need to create a structure dynamically based on user provided structure definition. However, my test code throws an exception. I would appreciate it if someone can suggest me an alternative. The code follows.
    Thank you in advance for your help.
    Pradeep
    * The following string is actually passed as a parameter
    data: myLineStructure type string value
    'begin of mystruct,
        mara-matnr type mara-matnr,
        mara-mstae type mara-mstae,
        makt-maktx type makt-maktx,
    end of mystruct.'.
    data: tableLine type ref to data.
    field-symbols: <line> type any.
    create data tableLine type (myLineStructure).
    assign tableLine->* TO <line>.

    Hi Pradeep,
    First of all you need to create a field catalog from the user-provided structure definition. Then you pass it to create a dynamic internal table, and finally create a line type from that table. You can copy and paste this code into the ABAP editor; it should serve your purpose.
    TYPE-POOLS : SLIS.
    DATA: mylinestructure TYPE string.
    DATA: it_fieldcat TYPE lvc_t_fcat,
          is_fcat LIKE LINE OF it_fieldcat.
    DATA: new_line TYPE REF TO data.
    FIELD-SYMBOLS : <line> TYPE ANY,
                    <fs_table> TYPE STANDARD TABLE.
    CONCATENATE 'begin of mystruct,'
                 'mara-matnr type mara-matnr,'
                 'mara-mstae type mara-mstae,'
                 'makt-maktx type makt-maktx,'
                 'end of mystruct.' INTO mylinestructure.
    DATA: tableline TYPE REF TO data,
          lines TYPE i,
          off TYPE i,
          off1 TYPE i,
          tabname TYPE ddobjname,
          fieldname TYPE dfies-fieldname,
          ftype(50) TYPE c.
    DATA : BEGIN OF itab OCCURS 0,
           field(30),
           END OF itab.
    DATA :f_tab LIKE dfies OCCURS 0,
          wa_f_tab LIKE dfies,
          f_len TYPE dd01v.
    * Split the definition string at commas into one line per field;
    * the 'begin of' and 'end of' lines are dropped below.
    SPLIT mylinestructure AT ',' INTO TABLE itab.
    DELETE itab INDEX 1.
    DESCRIBE TABLE itab LINES lines.
    DELETE itab INDEX lines.
    * For each field line, separate the field name from its 'tab-field' type reference.
    LOOP AT itab.
      FIND 'type' IN itab-field MATCH OFFSET off.
      off = off - 1.
      is_fcat-fieldname = itab-field+0(off).
      off = off + 6.
      ftype             = itab-field+off.
      FIND '-' IN ftype MATCH OFFSET off1.
      tabname = ftype+0(off1).
      off1 = off1 + 1.
      fieldname = ftype+off1.
      * Look up the referenced field's dictionary attributes.
      CALL FUNCTION 'DDIF_FIELDINFO_GET'
        EXPORTING
          tabname        = tabname
          fieldname      = fieldname
        TABLES
          dfies_tab      = f_tab[]
        EXCEPTIONS
          not_found      = 1
          internal_error = 2
          OTHERS         = 3.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.
      READ TABLE f_tab INDEX 1 INTO wa_f_tab.
      * Read the field's domain to get its data type and internal length.
      CALL FUNCTION 'DDIF_DOMA_GET'
        EXPORTING
          name          = wa_f_tab-domname
        IMPORTING
          dd01v_wa      = f_len
        EXCEPTIONS
          illegal_input = 1
          OTHERS        = 2.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.
      is_fcat-datatype = f_len-datatype.
      is_fcat-intlen = f_len-leng.
      APPEND is_fcat TO it_fieldcat.
    ENDLOOP.
    * Build the dynamic internal table from the field catalog, then derive a line from it.
    CALL METHOD cl_alv_table_create=>create_dynamic_table
      EXPORTING
        it_fieldcatalog = it_fieldcat[]
      IMPORTING
        ep_table        = tableline.
    ASSIGN tableline->* TO <fs_table>.
    CREATE DATA new_line LIKE LINE OF <fs_table>.
    ASSIGN new_line->* TO <line>.

  • Create the data structure

    hi,
    please let me know how to create the data structure.
    thanks
    suja

    Go to SE11.
    Select the "Data type" radio button.
    Give a name starting with "Z", for example ZFIRST_DATA.
    Press "Create".
    Select "Structure" and give your own description.
    Then add your fields.
    Save and activate.

  • Error in creating New Data Structures

    Hi Gurus,
    I tried creating a new data structure in the R/3 server using RSO2. I was planning to use the purchasing table (EKBE). But when I tried saving the structure, this error occurred:
    Invalid extract structure template EKBE of DataSource MM_EKBE
    Diagnosis:
    You have tried to generate an extract structure with the template structure EKBE. This was not successful, because the template structure references quantity or currency fields, for example the field MENGE, from another table.
    Procedure:
    Generate a view or a DDIC structure based on the template structure which does not contain the unsupported fields.
    - I have no idea what the problem is or what I should do based on the procedure. If anyone can help it would be much appreciated. Thank you in advance
    - KIT

    Hi Kristian,
    The table EKBE has quantity and currency fields that reference fields from other tables for their definition.
    You need to include these referenced fields in the structure you use as the template for the DataSource.
    To do this, create a custom dictionary view.
    In the view, include the reference currency and quantity fields from the referenced tables.
    Also create a join between EKBE and the referenced table using the data elements that are the same.
    For example, for quantity key figures in EKBE the unit of measure would be sourced from MARA.
    So you need to have table MARA included in the view.
    The join would be on the material in EKBE and the material in MARA.
    You will find the quantity and currency references on the Reference currency/quantity tab in the table definition.
    You need to do this for all your quantity and currency references.
    Once that is done, use the ZVIEW to create your DataSource.
    Hope it helps,
    Best regards,
    Sunmit.

  • I have a Mac with OS X 10.4.11 and have created a website in iWeb 08 ... in order to get it to my host site for it to go live, I am having difficulty finding an FTP client to get me there. Can anyone suggest what I should do?

    I have a Mac with OS X 10.4.11 and have created a website in iWeb 08 ... in order to get it to my host site for it to go live, I am having difficulty finding an FTP client to get me there. Can anyone suggest what I should do?

    Check whether any of the below have an older version that works on your 10.4.11 by contacting the author:
    http://www.rbrowser.com/
    http://rsug.itd.umich.edu/software/fugu
    http://cyberduck.io/
    http://panic.com/transmit/
    http://fetchsoftworks.com/
    Or use Applications -> Utilities -> Terminal to do command line FTP:
    http://www.dummies.com/how-to/content/how-to-use-ftp-from-terminal-to-transfer-mac-files.navId-400509.html
    Upgrading to 10.6.8 should still give you mostly the same compatibility as 10.4.11. Upgrading to 10.7 and above won't.

  • Dynamically create data type for structure

    Hello Experts.
    How do I create a dynamic data type for structures? For example:
    data lv_struc_name type strukname.
    lv_struc_name = get_struct_name( ). " this method returns the structure name ('ct_struc')
    Now I want to create a data object whose type is the structure named in lv_struc_name (ct_struc).
    thanks
    Tim

    Hi,
    here is a link to a really good presentation about generic programming, ABAP351:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b332e090-0201-0010-bdbd-b735e96fe0ae
    It contains examples of how to create dynamic structures and tables.
    Cheers

  • My website- created on iWeb (Mac) now uploaded to hosting site, does not show content on 4 pages of photos, though Safari shows everything. ???

    All the photos were saved on jpeg format. You can see the text on each page of concern, but not the photos. I have added music to each page of concern with a button to manipulate on/off, and pasted jpeg photos to them which you can see, and music plays with no problem. I re-built the site and uploaded again to hosting site, and am having same problem. Safari shows all pages, text, jpeg photos/music without any problem. So I have a site of 6 pages, and only 2 are completely loadable with Firefox. I have made all updates that have come from Firefox. My Mac is OS X version 10.5.8

    Make sure that you use the latest version of the iWeb software to generate the galleries. Older versions of iWeb had problems in current Firefox versions.
    A good place to ask questions and advice about web development is at the mozillaZine Web Development/Standards Evangelism forum.
    The helpers at that forum are more knowledgeable about web development issues.
    You need to register at the mozillaZine forum site in order to post at that forum.
    See http://forums.mozillazine.org/viewforum.php?f=25

  • Is there a Java API for Tree data structure?

    Hi,
    I am wondering whether there is any Java API to work with tree-based data structures.
    I have read some forums / sites that teach how to create tree-based objects using collections.
    But I wanted to know whether there is any core Java API that can be used directly for tree-based operations (like binary trees or other types of trees).
    Please comment on this doubt.

    I searched using Google and elsewhere but have not found one.
    Suggestion: why not start building one yourself? It's a good exercise, isn't it?
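    For what it's worth, the core JDK does include tree-backed collections (java.util.TreeMap, java.util.TreeSet), but those are sorted maps/sets rather than a general-purpose tree API. Rolling your own is straightforward; a minimal sketch of a generic n-ary tree node (the class and method names are illustrative, not from any standard API) might look like this:
    import java.util.ArrayList;
    import java.util.List;
    // Minimal generic n-ary tree node: each node holds a value and its children.
    public class TreeNode<T> {
        private final T value;
        private final List<TreeNode<T>> children = new ArrayList<TreeNode<T>>();
        public TreeNode(T value) { this.value = value; }
        public TreeNode<T> addChild(T childValue) {
            TreeNode<T> child = new TreeNode<T>(childValue);
            children.add(child);
            return child;
        }
        public T getValue() { return value; }
        public List<TreeNode<T>> getChildren() { return children; }
        // Depth-first traversal, printing each value indented by its depth.
        public void print(int depth) {
            StringBuilder indent = new StringBuilder();
            for (int i = 0; i < depth; i++) indent.append("  ");
            System.out.println(indent + String.valueOf(value));
            for (TreeNode<T> child : children) child.print(depth + 1);
        }
        public static void main(String[] args) {
            TreeNode<String> root = new TreeNode<String>("root");
            root.addChild("left").addChild("left-leaf");
            root.addChild("right");
            root.print(0);
        }
    }
    A binary tree is just the special case where each node keeps at most two children (left/right fields instead of a list).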

  • RT - How can I make data transfer to host faster?

    Hi 
    I have created a program that acquires data on FPGA, then transfers it to RT so RT can send it up to the Windows host once the UI is connected.
    The program is based on a message-type structure, as in NI's examples.
    However, I have one problem: the data to the Windows host is not transferred fast enough.
    Pic1 shows the offending bit of code, and FPGA.png what's hiding in the subVI.
    FPGA is meant to acquire two cycles of a sine wave. In this case that is approximately 40 ms of data at a 25 us sampling rate (40 kS/s, 8 channels). Later the data is sent to the host. As you can see, I have a dedicated stream for data only and a different one for message passing.
    If I disable the code in the case structure, this loop executes in 39-40 ms; however, when I start sending the data to the host the rate drops to 57-60 ms per iteration, and that sooner or later leads to a buffer overflow.
    I have experimented with passing the acquired data to a different loop using a queue, but that wasn't faster. I have also tried pipelining, but that did not speed it up either. Would you have any suggestions how I could improve my transfer rates?
    Thank you.
    Bartosz
    Attachments:
    Pic1.png (100 KB)
    FPGA.png (75 KB)

    Hi barteklul,
    There are 2 main methods of achieving fast data transfer between RT and a Host PC:
    1. Using "Shared Variables": they allow you to transfer data deterministically from
    RT to the Host PC, and are usually used to monitor data.
    Please have a look at this article:
    http://zone.ni.com/reference/en-XX/help/370622J-01/lvrtconcepts/rt_projectvariable/
    This is the example code:
    https://decibel.ni.com/content/docs/DOC-15928
    The only disadvantage of this method is that if, for example, RT produces data really fast,
    your Host PC can miss some samples.
    2. In order to receive all data from RT
    (guaranteed 100% data transfer without missing any samples), "Network Streams" should be used.
    It is a little bit more complex to implement, but if you want to store data to a file, "Network Streams"
    are strongly suggested. Note, however, that "Network Streams" are not deterministic.
    For more details about network streams please read this article:
    http://www.ni.com/white-paper/12267/en/
    At the moment I see that you are using "Network Streams" to transfer data, and I can also see that you have some timing on RT
    which can slow down the data transfer rate. I suggest transferring data to the Host PC as soon as it comes into the FIFO.
    Also, if you are transferring data just to monitor it, I suggest you try the Shared Variables method.
    I hope you will find this information useful!
    Kind Regards,
    Max
    Applications Engineer
    National Instruments

  • OC4J: marshalling does not recreate the same data structure on the client

    Hi guys,
    I am trying to use OC4J as an EJB container and have come across the following problem, which looks like a bug.
    I have a value object method that returns an instance of ArrayList with references to other value objects of the same class. The value objects have references to other value objects. When this structure is marshalled across the network, we expect it to be recreated as is but that does not happen and instead objects get duplicated.
    Suppose we have 2 value objects: ValueObject1 and ValueObject2. ValueObject1 references ValueObject2 via a private field, and ValueObject2 references ValueObject1. Both value objects are returned by our method in an ArrayList structure. Here is what it looks like (the number after @ represents an address in memory):
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@2
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@1
    We would expect to see the same (except exact addresses) after marshalling. Here is what we get instead:
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@3
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@4
    It can be seen that objects get unnecessarily duplicated – the instance of the ValueObject1 referenced by the ValueObject2 is not the same now as the instance that is referenced by the ArrayList instance.
    This does not only break referential integrity, structure and consistency of the data but dramatically increases the amount of information sent across the network. The problem was discovered when we found that a relatively small but complicated structure that gets serialized into a 142kb file requires about 20Mb of network communication. All this extra info is duplicated object instances.
    I have created a small test case to demonstrate the problem and let you reproduce it.
    Here is RMITestBean.java:
    package com.cramer.test;
    import javax.ejb.EJBObject;
    import java.util.*;
    public interface RMITestBean extends EJBObject {
        public ArrayList getSomeData(int testSize) throws java.rmi.RemoteException;
        public byte[] getSomeDataInBytes(int testSize) throws java.rmi.RemoteException;
    }
    Here is RMITestBeanBean.java:
    package com.cramer.test;
    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;
    import java.util.*;
    public class RMITestBeanBean implements SessionBean {
        private SessionContext context;
        SomeVO someVO;
        public void ejbCreate() {
            someVO = new SomeVO(0);
        }
        public void ejbActivate() {}
        public void ejbPassivate() {}
        public void ejbRemove() {}
        public void setSessionContext(SessionContext ctx) {
            this.context = ctx;
        }
        public byte[] getSomeDataInBytes(int testSize) {
            ArrayList someData = getSomeData(testSize);
            try {
                java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                objectOutputStream.writeObject(someData);
                objectOutputStream.flush();
                System.out.println(" serialised output size: "+byteOutputStream.size());
                byte[] bytes = byteOutputStream.toByteArray();
                objectOutputStream.close();
                byteOutputStream.close();
                return bytes;
            } catch (Exception e) {
                System.out.println("Serialisation failed: "+e.getMessage());
                return null;
            }
        }
        public ArrayList getSomeData(int testSize) {
            // Create array of objects
            ArrayList someData = new ArrayList();
            for (int i=0; i<testSize; i++)
                someData.add(new SomeVO(i));
            // Interlink all the objects
            for (int i=0; i<someData.size()-1; i++)
                for (int j=i+1; j<someData.size(); j++) {
                    ((SomeVO)someData.get(i)).addValueObject((SomeVO)someData.get(j));
                    ((SomeVO)someData.get(j)).addValueObject((SomeVO)someData.get(i));
                }
            // print out the data structure
            System.out.println("Data:");
            for (int i = 0; i<someData.size(); i++) {
                SomeVO tmp = (SomeVO)someData.get(i);
                System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
                System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
                for (int j = 0; j<tmp.getValueObjectCount(); j++) {
                    SomeVO tmp2 = tmp.getValueObject(j);
                    System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
                    System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
                }
            }
            // Check the serialised size of the structure
            try {
                java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                objectOutputStream.writeObject(someData);
                objectOutputStream.flush();
                System.out.println("Serialised output size: "+byteOutputStream.size());
                objectOutputStream.close();
                byteOutputStream.close();
            } catch (Exception e) {
                System.out.println("Serialisation failed: "+e.getMessage());
            }
            return someData;
        }
    }
    Here is RMITestBeanHome:
    package com.cramer.test;
    import javax.ejb.EJBHome;
    import java.rmi.RemoteException;
    import javax.ejb.CreateException;
    public interface RMITestBeanHome extends EJBHome {
        RMITestBean create() throws RemoteException, CreateException;
    }
    Here is ejb-jar.xml:
    <?xml version = '1.0' encoding = 'windows-1252'?>
    <!DOCTYPE ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Enterprise JavaBeans 2.0//EN" "http://java.sun.com/dtd/ejb-jar_2_0.dtd">
    <ejb-jar>
    <enterprise-beans>
    <session>
    <description>Session Bean ( Stateful )</description>
    <display-name>RMITestBean</display-name>
    <ejb-name>RMITestBean</ejb-name>
    <home>com.cramer.test.RMITestBeanHome</home>
    <remote>com.cramer.test.RMITestBean</remote>
    <ejb-class>com.cramer.test.RMITestBeanBean</ejb-class>
    <session-type>Stateful</session-type>
    <transaction-type>Container</transaction-type>
    </session>
    </enterprise-beans>
    </ejb-jar>
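    (A note for anyone trying to compile this test case: the SomeVO class itself is not included in the post. Any serialisable value object matching the calls used above would do; a minimal sketch, assuming nothing beyond what the code actually uses, might be:)
    package com.cramer.test;
    import java.io.Serializable;
    import java.util.ArrayList;
    // Hypothetical stand-in for the poster's value object: just a number plus a
    // list of references to other SomeVO instances. It must be Serializable so it
    // can be marshalled/serialised as in the test case.
    public class SomeVO implements Serializable {
        private int someNumber;
        private ArrayList valueObjects = new ArrayList();
        public SomeVO(int someNumber) { this.someNumber = someNumber; }
        public int getSomeNumber() { return someNumber; }
        public void addValueObject(SomeVO vo) { valueObjects.add(vo); }
        public SomeVO getValueObject(int index) { return (SomeVO) valueObjects.get(index); }
        public int getValueObjectCount() { return valueObjects.size(); }
    }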
    And finally the application that tests the bean:
    package com.cramer.test;
    import java.util.*;
    import javax.rmi.*;
    import javax.naming.*;
    public class RMITestApplication {
        final static boolean HARDCODE_SERIALISATION = false;
        final static int TEST_SIZE = 2;
        public static void main(String[] args) {
            Hashtable props = new Hashtable();
            props.put(Context.INITIAL_CONTEXT_FACTORY, "com.evermind.server.rmi.RMIInitialContextFactory");
            props.put(Context.PROVIDER_URL, "ormi://lil8m:23792/alexei");
            props.put(Context.SECURITY_PRINCIPAL, "admin");
            props.put(Context.SECURITY_CREDENTIALS, "admin");
            try {
                // Get the JNDI initial context
                InitialContext ctx = new InitialContext(props);
                NamingEnumeration list = ctx.list("comp/env/ejb");
                // Get a reference to the Home Object which we use to create the EJB Object
                Object objJNDI = ctx.lookup("comp/env/ejb/RMITestBean");
                // Now cast it to an InventoryHome object
                RMITestBeanHome testBeanHome = (RMITestBeanHome)PortableRemoteObject.narrow(objJNDI,RMITestBeanHome.class);
                // Create the Inventory remote interface
                RMITestBean testBean = testBeanHome.create();
                ArrayList someData = null;
                if (!HARDCODE_SERIALISATION) {
                    // ############################### Alternative 1 ##############################
                    // ## This relies on marshalling serialisation ##
                    someData = testBean.getSomeData(TEST_SIZE);
                    // ############################ End of Alternative 1 ##########################
                } else {
                    // ############################### Alternative 2 ##############################
                    // ## This gets a serialised byte stream and de-serialises it ##
                    byte[] bytes = testBean.getSomeDataInBytes(TEST_SIZE);
                    try {
                        java.io.ByteArrayInputStream byteInputStream = new java.io.ByteArrayInputStream(bytes);
                        java.io.ObjectInputStream objectInputStream = new java.io.ObjectInputStream(byteInputStream);
                        someData = (ArrayList)objectInputStream.readObject();
                        objectInputStream.close();
                        byteInputStream.close();
                    } catch (Exception e) {
                        System.out.println("Serialisation failed: "+e.getMessage());
                    }
                    // ############################ End of Alternative 2 ##########################
                }
                // Print out the data structure
                System.out.println("Data:");
                for (int i = 0; i<someData.size(); i++) {
                    SomeVO tmp = (SomeVO)someData.get(i);
                    System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
                    System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
                    for (int j = 0; j<tmp.getValueObjectCount(); j++) {
                        SomeVO tmp2 = tmp.getValueObject(j);
                        System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
                        System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
                    }
                }
                // Print out the size of the serialised structure
                try {
                    java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                    java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                    objectOutputStream.writeObject(someData);
                    objectOutputStream.flush();
                    System.out.println("Serialised output size: "+byteOutputStream.size());
                    objectOutputStream.close();
                    byteOutputStream.close();
                } catch (Exception e) {
                    System.out.println("Serialisation failed: "+e.getMessage());
                }
            } catch(Exception ex){
                ex.printStackTrace(System.out);
            }
        }
    }
    The parameters you might be interested in playing with are HARDCODE_SERIALISATION and TEST_SIZE, defined at the beginning of RMITestApplication.java. HARDCODE_SERIALISATION is a flag that specifies whether Java serialisation should be used to pass the data across or we should rely on OC4J marshalling. TEST_SIZE defines the size of the object graph and the ArrayList structure. The bigger this size is, the more dramatic the effect you get from data duplication.
    The test case outputs the structure both on the server and on the client and prints out the size of the serialised structure. That gives us sufficient comparison, as both structure and its size should be the same on the client and on the server.
    The test case also demonstrates that the problem is specific to OC4J. The standard Java serialisation does not suffer the same flaw. However using the standard serialisation the way I did in the test case code is generally unacceptable as it breaks the transparency benefit and complicates interfaces.
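    (As a quick aside illustrating that last point: plain java.io serialisation preserves shared references within a single stream, which is exactly what the OC4J marshalling above fails to do. A self-contained sketch, independent of the EJB code in this post:)
    import java.io.*;
    import java.util.ArrayList;
    // Demonstrates that standard Java serialisation keeps back-references: the
    // same instance written twice within one stream is read back as one object.
    public class SharedRefDemo {
        public static void main(String[] args) throws Exception {
            ArrayList list = new ArrayList();
            ArrayList shared = new ArrayList();   // any serialisable object will do
            list.add(shared);
            list.add(shared);                     // second reference to the same instance
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(list);
            oos.close();
            ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
            ArrayList copy = (ArrayList) ois.readObject();
            ois.close();
            // Prints true: referential identity survived serialisation.
            System.out.println(copy.get(0) == copy.get(1));
        }
    }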
    To run the test case:
    1) Modify the provider URL parameter value in RMITestApplication.java for your environment.
    2) Deploy the bean to the server.
    3) Run RMITestApplication on a client PC.
    4) Compare the outputs on the server and on the client.
    I hope someone can reproduce the problem and give their opinion, and possibly point to the solution if there is one at the moment.
    Cheers,
    Alexei

    Hi,
    Eugene, wrong end user recovery.  Alexey is referring to client desktop end user recovery, which is entirely different.
    Alexey - As noted in the previous post:
    http://social.technet.microsoft.com/Forums/en-US/bc67c597-4379-4a8d-a5e0-cd4b26c85d91/dpm-2012-still-requires-put-end-users-into-local-admin-groups-for-the-purpose-of-end-user-data?forum=dataprotectionmanager
    Each recovery point has user permissions tied to it, so it's not possible to retroactively give the users permissions.  Implement the below, and going forward all users can restore their own files.
    This is a hands off solution to allow all users that use a machine to be able to restore their own files.
     1) Make these two cmd files and save them in c:\temp
     2) Using windows scheduler – schedule addperms.cmd to run daily – any new users that log onto the machine will automatically be able to restore their own files.
    <addperms.cmd>
     rem Run addreg.cmd in a shell with delayed variable expansion enabled (/v),
     rem which the !users! variable below relies on.
     Cmd.exe /v /c c:\temp\addreg.cmd
    <addreg.cmd>
     set users=
     rem Build a .reg file that lists every profile under c:\users as a client owner.
     echo Windows Registry Editor Version 5.00>c:\temp\perms.reg
     echo [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\ClientProtection]>>c:\temp\perms.reg
     FOR /F "Tokens=*" %%n IN ('dir c:\users\*. /b') do set users=!users!%Userdomain%\\%%n,
     echo "ClientOwners"=^"%users%%Userdomain%\\bogususer^">>c:\temp\perms.reg
     REG IMPORT c:\temp\perms.reg
     Del c:\temp\perms.reg
    Regards, Mike J. [MSFT]

  • Suggestions for most efficient way to create a second app or site with minor rebranding

    I've been asked to create a new Foundation 2010 site that's going to be 90% the same as one for another division. The current website uses custom masterpages and some other customization. My understanding is that the differences are going to be a different
    URL and some minor branding. They are external presence sites. I don't believe it's using search or much else in the way of services, custom wsps, etc. Very simple sites.
    I'm thinking of backing up the dbs, restoring using different mdf/ldf and db names, creating a new app and attaching, but not sure if the GUIDs will conflict. I'm also thinking that what the masterpages call in _layouts may need to be changed, which should
    be pretty simple.
    Anything I'm overlooking?
    Also, apologies for using apps and sites interchangeably, I'm aware of the differences. I was actually asked to create a new site and assign a new URL to it, but if that's doable it sounds more complex to me.
    Thanks,
    Scott

    Hi,
    According to your post, my understanding is that you want to create a second app or site with minor rebranding.
    I recommend creating a test environment from the existing production environment.
    For more information, you can refer to:
    Moving content between SharePoint environments
    Copy SharePoint production data to a test environment
    Build a SharePoint 2010 Test/Development Farm
    Thanks,
    Linda Li
    TechNet Community Support

Maybe you are looking for

  • How do I move my music from my iphone 4s to my new computer?

    Every time I go under the iPhone menus and click on music and sync, it says that it is either going to delete all the songs that are currently in my phone right now. Or that it is going to replace all the 230 songs that I do have with the 3 cd's that i

  • Poor Quality iTunes Purchase

    Hi all. Purchased the Complete Led Zeppelin as soon as it was available in the ITMS. I listened to a few of the albums yesterday and noticed there were some tracks that had a loud popping noise at the cut in between songs. Today I listened to a few mo

  • Compatibility? TV is not widescreen, but is HD

    I have a Sony KV-32HS510 TV that I am wondering if it will be compatible with Apple TV. It's a standard format (4:3) screen, but it is HD. You have to watch the HD channels with bars across the top. Does anybody know if this type of TV works with Apl

  • Disk Suite/ Solaris 8 Upgrade Problems

    Hello, I am trying to upgrade from Solaris Sparc 7 to 8 and I have Sun Disk Suite mirroring the boot device. When I try to upgrade the installation fails. Is there a way to upgrade from 7 to 8 without breaking the mirror and if not is there a utility

  • Problem in placing hibernate.cfg.xml

    Hi, I have written one EJB project with Hibernate in my NetWeaver. Now I have a problem in placing hibernate.cfg.xml, so tell me where to place it. Give some examples related to that and some related links.