Data model 3x size increase

I have one Excel .xlsx file containing one flat table, about 2 MB in size, that I imported into another .xlsx file.
I first imported it only to the data model in Power Pivot and saved the file as .xlsx = 4 MB (without showing it in an Excel sheet).
I then removed the data model, imported the table directly to an Excel sheet instead, and saved the file as .xlsb = 1.4 MB.
What is the reason for the roughly 3x size difference?

Perhaps for this particular data set, the raw sheet data is compressed more compactly in the .xlsx (which is a zip file) than when it's stored in the data model?
Ehren
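
Ehren's zip-compression hypothesis is easy to check yourself: an .xlsx is a zip archive, so you can list each part's compressed and raw sizes and see which part (the sheet XML vs. the data model blob, typically under xl/model/) accounts for the growth. A minimal sketch; the workbook name here is a placeholder:

```python
import zipfile

def part_sizes(workbook):
    """Map each part of an Office Open XML package to (compressed, raw) byte sizes."""
    with zipfile.ZipFile(workbook) as zf:
        return {info.filename: (info.compress_size, info.file_size)
                for info in zf.infolist()}

# Usage ("Book1.xlsx" is a placeholder name):
# for name, (comp, raw) in sorted(part_sizes("Book1.xlsx").items(),
#                                 key=lambda kv: -kv[1][0]):
#     print(f"{name}: {comp:,} compressed / {raw:,} raw")
```

Sorting by compressed size shows directly whether the data model part or the worksheet XML dominates the saved file.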

Similar Messages

  • Acrobat Pro 9 file size increases 10x

    When I edit a PDF file in Acrobat Pro 9, even the smallest edit like the date, my document size increases tenfold. Another user, on the same version, makes the same edit and it does not change much at all. I have uninstalled/reinstalled 3 times, tried Reduce File Size and many other things, but nothing seems to work. Has anyone else experienced this and found a resolution?

    That is my bad... I must have been in a hurry trying to beat the snow storm and misposted. How can I move or delete this and get it into the correct forum?
    Thanks

  • BI Publisher 10g - Limit to the size of the data model?

    Hello,
    We are using OBIEE 10g 10.1.3.4.1 in our organization and have a problem when working on a large report in BI Publisher. It seems that once our data model gets large enough, we are no longer able to edit the report. When we click on the Edit link to load the report editor, the side menu that lists the data model, parameters, bursting, etc does not load, preventing us from making any changes to the report.
    At first we thought it was a timeout issue and changed the following:
    $J2EE_HOME/applications/xmlpserver/xmlpserver/js/XDOReportModel.js
    Property name: XDOReportModel.TIMEOUTMSEC
    Set the value in milliseconds, e.g. XDOReportModel.TIMEOUTMSEC = 15000
    After making the change and restarting the services, it did not help. Is there a different place for the timeout of loading the report editor? What about some kind of cap on the size of the data model? We have one large data template that is used to generate a report. I tried breaking the large data set into multiple, smaller data sets (still using a data template) and that didn't seem to help. Is there a limit to the size of the data model BI Publisher can return?
    Any help would be greatly appreciated! Thanks!

    Hello,
    I think you should create a RecordStore with RMS and use it like a string array. You can store a huge amount of data, and on recent devices the access time when requesting data from the RecordStore is very similar to having it stored in RAM.
    And if it is not persistent data, you can delete the RecordStore when closing the MIDlet.

  • Increased Object Type support within Data Modeler

    Hi,
    This is a question mainly for the Data Modeler development team as well as the Tools group. First of all, the production version of Data Modeler 3.0 is very nice, and the improvements in the DataTypes area over what was in EA1 are great. Thanks for adding Domain support to the definition of Attributes and Parameters for Methods; that makes life much simpler and opens the door for a much wider range of object type modeling and ease of implementation, since consistency can now be maintained much more easily via Domains. Also, the support for straightening of lines is awesome. This is the best implementation I have seen, even better than what we had in the Design Editor in Oracle Designer; kudos again to the team that built the diagrammer interfaces.
    My question is: going forward, will Data Modeler continue to be enhanced to finally fully support Oracle Objects and the object-relational (OR) model? I ask this because as users of Oracle's RDBMS, we do not have a tool that allows us to fully model and implement the features provided by the Server Group in the area of object-relational features and functionality. Oracle Designer had some OR modeling capability, but the end of life on that product stopped all further development. One of the common excuses is that there is no market demand for object-relational modeling. I would argue that the lack of market demand for object-relational applications is because we have never had a tool that fully supports Oracle's object-relational model. Nor have we had a development tool that allows us to implement a fully developed Oracle object-relational model. The Tools group stopped the development of Oracle Forms support for nested objects and collections. The LOV functionality for REF columns is pretty awesome, but we can't get to nested collections, much less a nested collection of REFs.
    So as a member of the end user community, we are stuck in this Catch 22. The Server Group continues to support and enhance the Object - Relational model, but the Tools and Modeling Groups haven't kept up with the tools to model or implement the functionality and features supported by the database. As a result we have a partially built set of tools that allows us to scratch the surface of the Object - Relational model but are not able to take full advantage of the powerful features of inheritance, inherited and extended collections, the ability to inherit and then extend methods, etc...
    The object-relational model was introduced with Oracle8; 10g solved the type evolution problem, which made production implementation of object types realistic. My question is: is Data Modeler finally going to fully support Oracle's object-relational model? If we as the end user community finally get a tool that does fully support the OR model, we'll finally be able to model, build and deploy applications that will in turn create the market demand which is lacking for the object-relational model.
    Thanks in advance,
    ScottK

    Phil,
    Yes, having the ability to include an attribute from a data type as part of the PK, UK and/or index will be very, very nice. Doing that opens a number of doors on the design, modeling and implementation sides. It would allow us to model and then create a data type which has all the common attributes for housekeeping information within each entity -> table. This is something we tried to do with Designer, but it was pretty involved script-wise. By being able to create specialized housekeeping datatypes and then using 'add attribute' with the scripts provided with DM 3.0, it simplifies the model after the fact. Think about it: we'll be able to have a datatype with all the attributes for created on/by and modified on/by, along with all the methods to populate those attributes (formerly columns); then we can include that datatype in the entity as an attribute called, say, 'row_audit' with the datatype 'common_'. Now in one pass we have the place to hold the information of who created and modified the row, as well as the date and time, along with all the code required to populate and maintain that information. Everything modeled, defined and written in one place. No more having to include or write the PL/SQL across several different modules; rather, the calls to the method, a constructor, are within the insert or update statements for that table. If we take that train of thought and extend it to include the housekeeping functions for relational primary keys, then we can create one datatype with the attribute and methods to create a single-column primary key. Once that is pulled into an entity, it can be defined as part of the unique identifier and then pushed down as a primary key and the resulting foreign keys. We then have all the code to enforce and populate primary and foreign keys wrapped up in one datatype with its supporting method as a constructor. Think of the other possibilities that would be available once we can include nested attributes within keys and indexes.
    It makes some models and designs much simpler, more maintainable, more scalable (less code to execute), etc...
    I have been thinking about this. I see y'all have provided the options for IN, OUT, IN/OUT, COPY and NOCOPY as modifiers for the parameters on methods. We also have the option for COPY / NOCOPY in the constructor method for a type. Will that be included?
    Lastly, since the Server Group solved the type evolution problem, that coupled with the ability to include a datatype as the definition for a column really opens the design doors. For example, by using a datatype of 'common_', as mentioned above, if a system requirement appears after the system is built for the name of the source system to be included with the 'row_audit' information, we can modify the datatype 'common_' to include these new attributes and methods, and then use the ALTER TYPE syntax provided by the Server and Language groups to effectively extend all the tables to include this new system requirement. Until the Server group solved the type evolution problem, this wasn't possible and we had to write several 'alter table add column' scripts. This is all replaced with the ALTER TYPE... cascade data command now. Once DM 3.0 includes the ability to reference embedded attributes, design and modeling life will be great.
    Thanks in advance,
    ScottK

  • How the queries gets processed in data model of report

    In a report, suppose there are four queries in the data model.
    In the first query there is a group where we are calculating the value of a formula column, c_currency.
    In c_currency, for a set of values, we are calculating :p_curr.
    :p_curr is a global variable, i.e. a user parameter.
    :p_curr is then used in the format trigger of the next group, in the second query.
    Suppose the first query executes for 15 sets of values.
    Will :p_curr retain the values for all 15 sets of values, or will the value be overwritten each time?
    And will the first query execute for the first set of values and pass it to the next query before executing for the 2nd set, or will it execute for all sets and then pass :p_curr to the next query?

    This has nothing to do with the number of tables in your query. Search for the error message in the Reports help and you find this:
    Cause:     You are attempting to default a layout that, if created using current settings, would be too large to fit within the defined height and width of a report page.
    Action:     Go back to the Report Wizard and reduce the values of default settings (e.g., shorten some field widths). Alternatively, you can increase the size of a report page using the Report property palette. Then redefault your layout. Continue making adjustments as necessary.

  • Demantra - can it run with two Data Models in parallel?

    Hello,
    we face the following situation: we have items that have to be planned on a daily basis; many other items, however, only need weekly planning.
    Demantra only allows one base time unit in its data model. If we choose "daily" for all products, the system size will increase enormously.
    If we choose "weekly" for all products, we cannot plan our daily products for specific days of the week.
    The idea is now to set up a Demantra system with two data models in parallel. Is this possible? Would they reside in different databases or database users?
    The Business Modeller allows you to create new models, but can they run in parallel? If they can, how would the user access the different data models?
    Thanks for any hints on this!

    Can you please elaborate on this issue?
    Thanks

  • Some observations on my first use of Data Modeler Beta

    First of all, I can see this tool has a lot of promise.
    I hope Oracle keeps at it, it could turn into a real winner if all the features I see being worked on mature.
    Thanks!
    Here are a few observations on things that I found non-obvious or tedious to do.
    1. When designing an entity, I want to give it a name, a definition, attributes and keys. I want that process to be quick and require as little mouse-clicking/navigation fiddling as possible. The current way of defining an attribute's datatype and size is painfully slow. I have to click to get a pop-up. Then I have to click to choose from a set of categories. Then I have to click on a dropdown list. If I try to use the down-arrow on the dropdown list, it works, but not if I go past the one I want. The up-arrow won't take me backwards in the list, so more clicking. It's just a nasty, slow interface for a simple task that I have to do a thousand times in a data model. If I need to change the size of something, back I go through the entire process all over again.
    That makes it doubly slow to work in the most natural way, which is to list the attributes and the datatypes, then come back and refine the sizes once the model is maturing and relatively stable.
    2. Adding an additional attribute requires a mouse click instead of a down-arrow. That means I have to take my hands off the keyboard to add a new attribute. Maybe there is some shortcut key that does that, but I have better things to do than memorize non-standard keyboard mappings. Make the down/up arrows work as they should.
    3. Adding the comment that describes the attributes is not quite as slow, but still requires more keystrokes/mouse movements than it should. It's hard enough to get developers to document their attributes, don't discourage them.
    4. I can't see the list of attributes, data types, sizes, key/mandatory settings, and the comment at one time while editing the entity. Makes it harder to grasp what all the attributes mean at a glance, which slows down the modeling and the comprehension of an existing model.
    5. All the entities I created had primary keys with columns in them. But when I tried to have it build a physical model, it complained that some relationships had no columns in them. For the life of me, I couldn't figure out how to fix that. Never had that problem in any other case tool.
    6. Getting it to generate DDL was awkward to find. Make it something obvious, like a button on the toolbar that says "Generate DDL".
    7. Apostrophes in the Comments in RDBMS are not escaped, so the generated DDL won't run.
    8. For the ease of use/speed of use testing on high-volume key tasks, make the developers do the task 1000 times in a row. Make them use long names that require typing, not table A with columns c1, c2 and c3. Long before they get to iteration 1000 they will have many ideas on how to make that task easier and faster to do.
    9. Make developers use names of things that are the maximum length allowed. For example, for a table name in oracle, the max length of the name is 30 characters. The name of one testing table should be AMMMMMMMMMMMMMMMMMMMMMMMMMMMMZ. That's a capital A followed by 28 capital M's and a capital Z. For numbers, use the pattern 1555559. If the developers can't see the A and Z or 1 and 9 in the display area for the name in the default layout for the window, they did the display layout wrong. For places where the text can be really long, choose a "supported visible length" for that field and enter data in the pattern AMMMMMMMMMMMMMMMQMMMMMMMMZ, where Q is placed at the supported visible length. if the A and Q don't show, the layout is wrong.
    10. SQL Developer has quite a few truly gooberish UI interaction designs, and I can see some of that carrying over to the Data Modeler tool. I really recommend getting a Windows UI expert to design the UI, not a Java expert. I've seen a lot of very productive Windows user interfaces and extremely few Java interfaces suitable for high-speed data entry. Give the UI expert the authority to tell the Java programmers "I don't want to hear about Java coding internals - make the user interface perform this way." I think the technical limitations in Java UIs are much smaller than the mindset limitations I've seen in all too many programmers. That, and making the developers use their code 1000 times in a row to perform key tasks, will cause the UI to get streamlined considerably.
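
The boundary-name patterns in points 8 and 9 can be sketched as small helpers for generating such test names (a sketch; the exact lengths are whatever the tool under test supports):

```python
def boundary_name(max_len):
    """Name of exactly max_len chars with visible first/last sentinels (A...Z)."""
    assert max_len >= 2
    return "A" + "M" * (max_len - 2) + "Z"

def sentinel_name(total_len, visible_len):
    """M-filled name with A/Z at the ends and a Q at the supported visible length."""
    chars = ["M"] * total_len
    chars[0], chars[-1] = "A", "Z"
    chars[visible_len - 1] = "Q"     # if the Q doesn't show, the layout is wrong
    return "".join(chars)

print(boundary_name(30))      # Oracle's 30-char name limit: A, 28 Ms, Z
print(sentinel_name(40, 25))  # long text with a Q marking the visible cutoff
```

If the A and Z (or the Q) are clipped in the default window layout, the display area fails the test described above.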
    Thanks, and keep up the good work!

    Dear David,
    Again thank you for your valuable and highly appreciated feedback. Find included a more elaborated answer to your observations:
    *1. When designing an entity, I want to give it a name, a definition, attributes and keys.*
    In the new Early Adopter Release, a "SQL developer like" property window has been added for Entities and Tables. For each attribute/column you will have the ability to add name, datatype, Primary Key, Mandatory and comment from one and the same screen
    *2. Adding an additional attribute requires a mouse click instead of a down-arrow.*
    An enhancement request has been created
    *3. Adding the comment that describes the attributes is not quite as slow, but still requires more keystrokes/mouse movements than it should.*
    In the new Early Adopter Release, a "SQL developer like" property window has been added for Entities and Tables. For each attribute/column you will have the ability to add name, datatype, PK, M and comment from one and the same screen
    *4. I can't see the list of attributes, data types, sizes, key/mandatory settings, and the comment at one time while editing the entity. Makes it harder to grasp what all the attributes mean at a glance, which slows down the modeling and the comprehension of an existing model.*
    See former answers. For meaning of attributes you can also use the Glossary and Naming Standardization facilities: see Tools Option menu, Glossary and General Options for naming standards
    *5. All the entities I created had primary keys with columns in them. But when I tried to have it build a physical model, it complained that some relationships had no columns in them. For the life of me, I couldn't figure out how to fix that. Never had that problem in any other case tool.*
    A bug report has been created. The issue will most probably be solved in the next Early Adopter release.
    *6. Getting it to generate DDL was awkward to find. Make it something obvious, like a button on the toolbar that says "Generate DDL".*
    An enhancement request has been created.
    *7. Apostrophes in the Comments in RDBMS are not escaped, so the generated DDL won't run.*
    A bug report has been created
    *8. For the ease of use/speed of use testing on high-volume key tasks, make the developers do the task 1000 times in a row. Make them use long names that require typing, not table A with columns c1, c2 and c3. Long before they get to iteration 1000 they will have many ideas on how to make that task easier and faster to do.*
    I apologize, but I don't clearly understand what you mean by ease of use/speed of use here.
    *9. Make developers use names of things that are the maximum length allowed.*
    Our relational model is intended not just for Oracle, but also DB2, SQL Server and in the future maybe other database systems, which means that we can't tailor it to just one of them. However, you can set maximum name lengths by right-clicking on the diagram and selecting Model Properties; there you can set naming options. Here you can also use the Glossary and Naming Standardization facilities: see the Tools Options menu, Glossary and General Options for naming standards.
    *10. SQL Developer has quite a few truly gooberish UI interaction designs and I can see some of that carrying over to the Data Modeler tool.*
    Fully agree. As you will see in our next Early Adopter release we have started to use SQL Developer like UI objects.
    Edited by: René De Vleeschauwer on 17-nov-2008 1:58

  • "An error occured while working on the Data Model in the workbook" on some workbooks published to Power BI site

    Hello,
    I am using the Power BI for Office 365, and I have published several Excel 2013 workbooks having Power Pivot Data Models.
    I have a problem on some of the workbooks, once a slicer is selected, I get the error: "An error occurred while working on the Data Model in the workbook" and the slicers do not affect the charts.
    Some workbooks work perfectly fine. I am using the same user for all workbooks when creating and publishing. I tried with small workbooks less than 10 MB in size and larger workbooks over 10 MB. There is no rule: some workbooks larger than 10 MB work perfectly with the slicers affecting the charts, and some don't. Similarly for the smaller sizes.
    Any ideas of how I can debug the cause of the issue? 
    Appreciate any feedback, 
    Thanks,
    Grace

    Hi Grace,
    I assume that the experience in the Excel client is working fine, right?
    Are you getting a correlation id with the error?
    Please send us a bit more information / samples to reproduce over email to
    this address.
    thanks,
    Guy
    GALROY

  • Beginners guide to PowerPivot data models

    Hi,
    I've been using PowerPivot for a little while now but have finally given in to the fact that my lack of knowledge about data modelling is causing me all kinds of problems.
    I'm looking for recommendations on where I should start learning about data modelling for PowerPivot (and other software, e.g. Tableau, Chartio etc). By data modelling I mean how I should best organise all the data that I want to analyse, which is coming from multiple sources. In my case my primary sources right now are:
    Our main MySQL database
    Google Analytics Data
    Google Adwords data
    MailChimp data
    Various excels
    I have bought two books - "Dax Formulas for PowerPivot", which is great but sparse on data modelling information, and "Microsoft Excel 2013 - Building Data Models with PowerPivot", which looks excellent but starts off at, I believe, too advanced a level.
    Where should a beginner with no experience of data modelling, but intermediate/advanced experience of Excel go to learn skills for PowerPivot Data modelling?
    By far the main issue is that our MySQL databases are expansive and include hundreds of tables across multiple databases, and we need to be able to utilise data from all of them. I imagine that I somehow need to come up with an intermediary layer between the databases and PowerPivot which extracts and flattens the main data into fewer, more important tables, but I would have no idea how to do this.
    Also, to be clear, I am not looking at ways of modelling the MySQL database itself - our developers are happy with the database relationships etc. It is just the modelling of that data within PowerPivot and how best to import that data.
    Recommendations would be absolutely brilliant; it's a fantastic product but right now I'm struggling to make the most of it.
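
One hedged sketch of the flattening layer described above (table and column names are invented; in practice the source would be MySQL rather than the in-memory sqlite3 stand-in used here): join the normalised tables into one wide result set and export that for PowerPivot.

```python
import sqlite3

# Stand-in for the real MySQL schema: two small normalised tables.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, country TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Acme', 'SE'), (2, 'Globex', 'US');
    INSERT INTO orders VALUES (10, 1, 99.5), (11, 1, 12.0), (12, 2, 45.0);
""")

# Flatten: one wide table carrying only the fields the analysis needs.
flat = con.execute("""
    SELECT o.id AS order_id, c.name AS customer, c.country, o.amount
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    ORDER BY o.id
""").fetchall()

for row in flat:
    print(row)  # each row is self-contained, ready to export (e.g. CSV) for import
```

The same join-and-widen idea scales to views or a nightly extract job: PowerPivot then imports a handful of wide tables instead of hundreds of normalised ones.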

    Thanks for the recommendations. I am aware of the last two of those, and http://www.powerpivotpro.com/ in particular has proved very useful (TechNet less so).
    I will take a look at SQLBI in more detail, but from a very casual browse it seems like this too is targeted more at experienced users. Their paid courses may definitely prove useful though.
    I think what I'm getting at is that there are probably an increasing number of people like myself who have fallen into PowerPivot without a traditional background in databases and data modelling. In my case I have a small business of 15 employees, and we were using Excel and PivotTables to do some basic analysis before soon discovering that our data was too complicated and that I needed something. PowerPivot definitely seems to solve that issue and I'm having much better success now than I was without it. I also feel quite competent with DAX and actually building tables from the PowerPivot data model.
    What I'm lacking is the very first step of cleaning and preparing raw data for import, then importing it into PowerPivot and setting up an efficient model. I have to be honest that your links above did bring Power Query to my attention, and it seems like a brilliant tool and one of the missing links. I would however still like to see a beginners' guide to data import and model set-up, as I don't think I've yet come across one, either in book or online form, which explains the fundamentals well.

  • Error to open a data model in Report builder (Word)

    Dear all,
    I'm having trouble when I try to open a data model in Report Builder (Word). Does anyone know about this problem?
    The message is: "An error has occurred. Check the settings and try again."
    Any suggestions?
    Thanks, all!

    I've also got this error several times. Usually the reason is an error in the Publisher query (or data template). It's better to first test (view) that you get proper XML output in Publisher, and only after that try to create an RTF template. If this doesn't work, I usually start from the beginning and first make a very simple report, then try the template, and if it works, gradually add elements on the Publisher side. Sometimes I have no idea why it didn't work in the first place, when it then works after beginning from the simple report.

  • Creating PDF from ERD (Data Modeler) in readable format

    I created an entity relationship diagram (ERD) using Toad Data Modeler. I want to save/print it to PDF, which I can do just fine. However, it cuts off tables and is very hard to read. I'm looking for a format to use that will not cut off the tables. I've played around with various settings, but can't get it to look right. Are there specific settings I should be targeting so it won't cut off the diagram? Thank you!

    Thank you.  I've been playing around with the page size, but it still wants to cut off some of my tables, almost as if I can't get the diagram to size into the size of the PDF margins.  I can't seem to find the magic combination so I figured I must be missing a parameter setting somewhere.  Thank you for your time.
    Brian

  • Support for 11x17 paper in Oracle SQL Developer Data Modeler 4.0.0.833

    In the last versions of version 3 of this product, the ability to print your data model on 11x17 paper was included by default. This support was even in some of the beta 4 versions of the data modeler. But in the release of the production version of this tool, the 11x17 paper printing option has disappeared. Can you please add this back into the product, because those of us with large data models but no plotter need this paper size.
    Thanks,

    Hi,
    What printer do you use? You need to set it as the default printer for your system, and you should get ledger size as an option if the printer supports it.
    Otherwise you can print the diagram to a PDF file and then use Adobe Reader or Foxit Reader to print it across separate pages - it's "Poster" in Adobe and the scaling type "Tile large pages" in Foxit.
    Philip

  • SQL Developer 3.0 data modeler print-to-pdf not working

    I am working in Windows XP. I am running SQL Developer version 3.0.04, which now includes a full-featured version of the Data Modeler. I have run into a problem with this version, which did not occur in the stand-alone version of the Data Modeler.
    When I try to print a data model diagram [File -> Data Modeler -> Print Diagram -> To PDF file], a PDF file is generated. When I try to open that PDF file with Adobe Reader (version 9.4.3) I get the following error message:
    "Adobe Reader could not open 'filename.pdf' because it is either not a supported file type or because the file has been damaged (for example it was sent as an email attachment and wasn't correctly decoded)."
    Furthermore, the size of the file in question is 0 bytes. When I use the same process in the stand-alone version, a readable PDF file is created whose size is 858 bytes.
    The 'print to IMAGE file' option works well in both versions.
    Any help resolving this would be appreciated.

    I am having almost the same problem. A PDF file is generated, about 400K, but it is blank.

  • SQL Developer Data Modeler Printing on Plotter

    When I select a plotter to print to in the stand alone version of Data Modeler and select paper size greater than 11x17 the margin settings are uncontrollable. For example if I select 17x22 and press ok the margins change to Left 5.166, Right 1, Top 1, Bottom 6.166. If I change these to 1,1,1,1 it has no effect. If I select larger paper it just gets worse. Anyone else having this problem?

    Since I didn't get a response here, I went and left feedback for the product and got the following message from Sue Harper:
    Thank you for your feedback on our Data Modeling preview release.
    Your query was:
    When I select a plotter to print to in the stand alone version of Data Modeler and select paper size greater than 11x17 the margin settings are uncontrollable. For example if I select 17x22 and press ok the margins change to Left 5.166, Right 1, Top 1, Bottom 6.166. If I change these to 1,1,1,1 it has no effect. If I select larger paper it just gets worse.
    We have a bug logged for printing to plotter support.
    Sue

  • How to transfer data from a Model Node to a Value Node

    Hi Friends,
    I am having a problem creating a check box in a table.
    I am getting a model node from the ECC system (Zmmoa_Pending_Getlist_Input).
    This is the path where the attributes are available:
    Zmmoa_Pending_Getlist_Input-Output-outtab
    Under outtab all the attributes are available.
    My requirement is to display a check box. So I am doing it like this: I create one more value node (OutTab_1); under this value node I put the node attributes, plus one check box attribute of datatype Boolean (i.e. alongside the outtab attributes).
    Now I get data from the model node and send that data to the value node (through this node the data is displayed in table format, each row having a check box).
    I have written this code, but the data is not getting into the value node table.
    In FirstView, in the Submit button handler:
    public void onActionGetData(com.sap.tc.webdynpro.progmodel.api.IWDCustomEvent wdEvent)
    {
        //@@begin onActionGetData(ServerEvent)
        //$$begin ActionButton(1164287125)
        //wdThis.wdGetExamp2CompController().checkSRA();
        wdThis.wdGetExamp2CompController().checkBox();
        wdThis.wdFirePlugToSV();
        //$$end
        //@@end
    }
    This is the code written in the component controller (CC):
    public void checkBox( )
    {
        //@@begin checkBox()
        //Date today = new Date(System.currentTimeMillis());
        IWDMessageManager mes = wdComponentAPI.getMessageManager();
        try
        {
            Zmmoa_Pending_Getlist_Input input1 = new Zmmoa_Pending_Getlist_Input();
            wdContext.nodeZmmoa_Pending_Getlist_Input().bind(input1);
            wdContext.currentZmmoa_Pending_Getlist_InputElement().modelObject().execute();
            wdContext.nodeOuttab().invalidate();
            //mes.reportSuccess("Input:" + wdContext.nodeOuttab().size());
            for (int i = 0; i < wdContext.nodeOuttab().size(); i++)
            {
                //mes.reportSuccess("Input Of I:" + wdContext.nodeOutput().size());
                IPrivateExamp2Comp.IOuttabElement elem = wdContext.nodeOuttab().getOuttabElementAt(i);
                //wdComponentAPI.getMessageManager().reportSuccess("elem::  " + elem);
                IPrivateExamp2Comp.IOutTab_1Element result = wdContext.nodeOutTab_1().createOutTab_1Element();
                result.setCheckBox(false);
                result.setConf_Shp_Date(elem.getConf_Shp_Date());
                wdComponentAPI.getMessageManager().reportSuccess("Conf_Shp_Date::  " + elem.getConf_Shp_Date());
                result.setExpt_Shp_Date(elem.getExpt_Shp_Date());
                result.setMaterial(elem.getMaterial());
                result.setMatl_Desc(elem.getMatl_Desc());
                result.setOa_Quantity(elem.getOa_Quantity());
                result.setOpn_Quantity(elem.getOpn_Quantity());
                result.setPo_Item(elem.getPo_Item());
                result.setPo_Number(elem.getPo_Number());
                result.setPo_Status(elem.getPo_Status());
                result.setPur_Group(elem.getPur_Group());
                result.setStat_Date(elem.getStat_Date());
                result.setQuantity(elem.getQuantity());
                wdContext.nodeOutTab_1().addElement(result);
            }
        }
        catch (Exception e)
        {
            mes.reportException(e.getMessage(), false);
        }
        //@@end
    }
    Can you help me transfer data from the Model Node to the Value Node? Using the value node, the data should display in table format with a check box in each row.
    I need the data displayed in table format with a check box.
    Regards
    Vijay Kalluri

    Hi Vijay,
    To copy values from a Model Node to a Value Node, use the copyElements() method of the WDCopyService API. To achieve this, the names and types of the attributes in the Value Node should be the same as the Model Node attributes.
    Example:
    Model Node <----> Value Node
    ---Name - String          ---Name - String
    ---Number - Integer       ---Number - Integer
    then use the following statement:
    WDCopyService.copyElements(wdContext.node<ModelNode>(),wdContext.node<ValueNode>());
    This will copy all the values.
    Regards,
    Poojith MV
