Header/Item extractor data modelling question

Hi,
I have installed the Business Content for CRM Activities. The Business Content provides a separate extractor and cube for the header and item levels.
However, for our reporting requirements we would like to use a single cube. We don't consider a MultiCube a valid solution, since the item and header cubes of course don't have all fields in common, which leads to the famous "not assigned" behaviour in BEx.
Therefore I'm trying to enrich each item line with all header fields. I don't want to write a user exit to enhance the item extractor on the DataSource side, since this would be a lot of work and I'm also not sure it would be delta-compatible; I had bad experiences with this on a previous project.
So I would like to find a solution on the BW (data modelling) side, but at the moment I don't see any good alternatives. The only thing I can think of is to build one ODS/master data object (keyed on transaction number) that contains all the header information, and to read this ODS/master data object while loading the item lines from the ODS to the cube. This way I can copy all header fields to the item lines. But this will force me to do full loads between the item ODS and the "global" cube, which of course I would like to avoid (if a header field changes, I want that change reflected in all item lines as well).
Does anyone have better solutions?
Thanks

What I finally did is the following:
1) Create a new cube with all characteristics and key figures from header and item level.
2) Load the header ODS into the new cube.
3) Load the item ODS into the new cube via a start routine that enriches the item records with all header characteristics that are not yet available on item level (see the sketch after this list).
4) Build a query and put constant selection on a restricted key figure that contains the following:
a) Product (item level) ==> CONSTANT SELECTION enabled
b) Nr. of activities (header level)
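For reference, a minimal sketch of what such a start routine could look like is shown below. The ODS technical name (ZCRM_HDR, with active table /BIC/AZCRM_HDR00) and the field names (CRM_OBJID, ACTIV_CAT, PRIORITY) are invented for the illustration; the real Business Content objects will differ, so treat this as a pattern for the header lookup rather than the delivered code:

* Start routine (item ODS -> cube): enrich every item record with
* header attributes read from the active table of the header ODS.
* All object and field names are examples only.
FIELD-SYMBOLS: <fs_item> LIKE LINE OF DATA_PACKAGE.

TYPES: BEGIN OF ty_hdr,
         crm_objid(10) TYPE c,      " transaction number
         activ_cat(4)  TYPE c,      " header characteristic 1
         priority(1)   TYPE c,      " header characteristic 2
       END OF ty_hdr.

DATA: lt_hdr TYPE STANDARD TABLE OF ty_hdr,
      ls_hdr TYPE ty_hdr.

IF NOT DATA_PACKAGE[] IS INITIAL.
* Read all header records needed for this data package in one go
  SELECT crm_objid activ_cat priority
    FROM /bic/azcrm_hdr00            " active table of the header ODS
    INTO TABLE lt_hdr
    FOR ALL ENTRIES IN DATA_PACKAGE
    WHERE crm_objid = DATA_PACKAGE-crm_objid.
  SORT lt_hdr BY crm_objid.

* Copy the header fields onto every item record
  LOOP AT DATA_PACKAGE ASSIGNING <fs_item>.
    READ TABLE lt_hdr INTO ls_hdr
         WITH KEY crm_objid = <fs_item>-crm_objid
         BINARY SEARCH.
    IF sy-subrc = 0.
      <fs_item>-activ_cat = ls_hdr-activ_cat.
      <fs_item>-priority  = ls_hdr-priority.
    ENDIF.
  ENDLOOP.
ENDIF.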
In BEx everything now looks fine with the "Nr. of activities" header key figure, even when I filter on material (which only exists on item level), thanks to the constant selection. However, my result row is behaving a bit disappointingly. I hope that can be solved.
Let me give an example (I'm even excluding the header line in this query):
1) All fine:
   Transaction nr   Nr. of activities
   6000             1
   =================================
   Result row       1
2) Drill-down to material:
   Transaction nr   Material   Nr. of activities
   6000             12D        1
   6000             13D        1
   =============================================
   Result row                  2
==> As you can see, the result row changes to 2 after doing the drill-down.
Note that I am using "calculate result as summation". If I use "nothing defined" the result is fine in this case, but then it is not correct when I filter on e.g. a material number: it will then show the number of activities for the WHOLE query.
Is there anything I can do to get the result row right in all circumstances while using constant selection?
Thanks

Similar Messages

  • Data Modeler: Relational data model questions

    1. Can a different notation be specified for relational data models' constraints? Specifically, I'd like crow's feet. BTW, the docs show crow's feet and parent pointer (with the arrowhead), but there's no such thing in the actual modeler.
    2. Is there any way to manually route FK constraint lines?
    3. When forward engineering from logical, is there any way to indicate the preferred name for keys and indexes (primary, unique, foreign)?
    4. Mandatory/optional indicator on tables: what exactly does 'N' or 'A' stand for? I can understand 'N' meaning "Not optional", but 'A'? Wouldn't it be simpler to use '*' and 'o' like in the logical?
    Man, do I ever miss Designer!
    Thanks,
    Patrick

    Here's one more question:
    I've transformed several super/sub entities to relational, and some of the tables do not allow me to open Properties (on the table). I can use the navigator to open column properties, but cannot open table properties (neither from diagrammer nor from navigator). Some of the tables are two or three subtype levels deep, and I haven't figured out why some open and some don't.

  • ADF BC: Active data model question

    Hello,
    can somebody confirm that the row iterator of a detail view object obtained from the master view via a view link is different from the row iterator used by the active data model (getDetails() in the application module)?
    My problem is:
    I have a detail view object with multiple updateable entities. When I create a new row like this:
       Row row = masterViewRow.getDetails().createRow();
       masterViewRow.getDetails().insertRow(row);
    I don't see this row on the web page, probably because a view object with multiple updateable entities doesn't notify other iterators about changes by default.
    I have to use
       Row row = appModule.getDetails().createRow();
       appModule.getDetails().insertRow(row);
    to make it work.
    Can I omit the use of the application module and work with the details through the master view object only, and still have the changes visible in the layout?

    Nobody can answer this.
    Where are all the ADF experts?
    Please help solve this issue.

  • SQL Query Data Model Question

    Hi All,
    I'm new to XML Publisher, so I'll try to explain as best I can the problem I'm having creating a data model for my report.
    I'm using Database XE and have installed XML Publisher. As XE is extremely flexible in creating apps, I'm having huge trouble finding a reporting solution for my application. I have created an app to track and control time and expense, from which I want to generate timesheets, expense sheets, invoices etc.
    Having done some reading I think XML is the best approach for me; however, I'm having huge trouble with my data model.
    EXAMPLE:
    I have 2 tables to start with: the first table holds client information and the second table holds project information. One client can have many projects.
    When I create the data model using SQL, I get the obvious problem that when a client has more than one project my XML structure is incorrect.
    SELECT gc.name AS CLIENT,
           gp.name AS PROJECT
    FROM   gte_client gc,
           gte_project gp
    WHERE  gc.client_id = gp.client_id(+)
    The above SQL creates the following XML:
    <ROWSET>
      <ROW>
        <NAME>Symatrix Ltd</NAME>
        <NAME>Symatrix Pre-Sales</NAME>
      </ROW>
      <ROW>
        <NAME>Aston Carter</NAME>
        <NAME>MOD</NAME>
      </ROW>
      <ROW>
        <NAME>Symatrix Ltd</NAME>
        <NAME>Fujitsu</NAME>
      </ROW>
    </ROWSET>
    The obvious problem here is that the client Symatrix appears twice, as there are 2 projects.
    I have tried to write SQL/XML and I think I'm beginning to understand the structure of the XML functions; however, XML Publisher doesn't appear to understand the syntax when creating a SQL query for the data model.
    I hope all this makes sense.
    Kind Regards
    Glen

    Hi All,
    OK, so with a little searching through documents and articles on the web I found this information:
    http://blogs.oracle.com/xmlpublisher/2006/05/05#a38
    This details the exact thing I'm trying to achieve, so anyone else who searches this forum with a similar problem should check it out.
    Regards
    Glen

  • Data Model Question

    I am new to XML Publisher. What I'm still not getting is how to define a data model containing a parent-child relationship (e.g. customers - orders - order_lines). Do I have to use some tool to create it?
    Denes Kubicek

    Hi. In my opinion, option A is the most straightforward solution.
    One thing to keep in mind is how the % figures will roll up, say over the entity hierarchy. The % accounts will simply sum as you roll up the entity hierarchy, which is generally incorrect. So, at a parent level you will need to back-calculate the % figure. I hope this makes sense. I am pointing this out to you because I am in the process of building a similar solution and clean forgot about this. I have had to re-engineer the solution slightly to correct it.
    Hope this helps.
    Sean

  • Canonical data model  Question?

    How can XI support a canonical data model? We don't want point-to-point mappings; we always want to map to and from a central message/data model which will be based on the canonical data model.
    regards,
    tony

    You can support that by using a sequence of message mappings within an interface mapping. With that you can canonicalize your message mappings, where you map from the legacy model to the canonical model and from the canonical model to the other legacy model. Your interface mapping, however, would always be point to point, containing the sequence of mappings to and from the canonical model.
    best regards
    christine

  • Item Master Data Form Question

    screenshot
    http://mx1.beautybase.com/share/master.jpg
    I have a user-defined table (i.e. Colours) connected to the Item Master Data form (-150). This table is a combo box on the form. Whenever I populate this combo I specify Code (i.e. 1) and Name (i.e. Red) values. Now if my form is in search mode and I bring up the list of items by using * for the item number (Form 1003), I can see a matrix with a number of records, but unfortunately the column with colours displays the Code instead of the Name. Please let me know whether it is possible to display the Name value.
    thanks

    Thanks a lot.
    I think I have the latest version of SBO incl. SP 14:
    SBO 2004A (6.70.187) SP:00 PL14
    But I had no luck even when setting the description check box
    (see screenshot):
    http://mx1.beautybase.com/share/master_2.jpg
    What shall I do, any ideas?

  • Complex data model question regarding master data

    Hi Experts,
    I have a requirement to add master data retail price and master data cost into BW.
    Both amounts are based on plant and material.  How do I model this?
    Here is how the records are coming in via the datasource:
    Plant  Material  UofM  Cost   CostFrom  CostTo  Retail  RetailFrom  RetailTo
    0001   1000321   EA    $1.00  5/1/07    5/9/07  $2.99   6/6/07      6/9/07
    0001   1000321   CS    $8.00  5/5/07    5/7/07  $9.99   8/6/07      9/9/07
    Here's what I am thinking....
    1.  Add UofM, Cost, and Retail prices as attributes of the 0MAT_PLANT InfoObject,
        and make those attributes time-dependent.
    2.  Use ABAP to split up the cost info and the retail info, and use the valid dates above as the time-dependent from/to dates in 0MAT_PLANT.
    Example:
    0MAT_PLANT attributes:
    Plant  Material  UofM  Cost   ValidityFrom  ValidityTo  Retail
    0001   1000321   EA    $1.00  5/1/07        5/9/07
    0001   1000321   CS    $8.00  5/5/07        5/7/07
    0001   1000321   EA           6/6/07        6/9/07      $2.99
    0001   1000321   CS           8/6/07        9/9/07      $9.99
    After looking at the above, I need to compound UofM with 0MAT_PLANT somehow.  How would I do that?
    Is this the best way to model this?
    Thanks,
    Chris

    Chris,
    I wouldn't include those amounts as attributes for 0MAT_PLANT. In general it doesn't make too much sense, except in some very specific cases, to make a Key Figure an Attribute of a Characteristic.
    I wouldn't either modify the 0MAT_PLANT key to compound it with UOM... Not a good idea, in my opinion.
    In your case, those values can change over time, depending on the validity period. So you could have several records with different validity periods for the same Material and Plant combination.
    I'd rather create an ODS with these values. The key fields would be 0MAT_PLANT and the validity dates and the data fields would be the amounts.
    You could use and include this ODS in any Multicube or get the values based on the validity periods by using ABAP routines if you need to.
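    For illustration only, such a lookup routine could read the ODS active table with the validity period in the WHERE clause, roughly as sketched below. The ODS name ZMAT_PRC, its active table /BIC/AZMAT_PRC00 and the field names are invented for the example; only the general pattern is the point here:

    * Read the cost/retail price valid on the record's posting date.
    * All object and field names are examples only.
    DATA: l_cost   TYPE p DECIMALS 2,
          l_retail TYPE p DECIMALS 2.

    SELECT SINGLE cost retail
      FROM /bic/azmat_prc00            " active table of the price ODS
      INTO (l_cost, l_retail)
      WHERE plant     = COMM_STRUCTURE-plant
        AND material  = COMM_STRUCTURE-material
        AND validfrom <= COMM_STRUCTURE-pstng_date
        AND validto   >= COMM_STRUCTURE-pstng_date.

    IF sy-subrc <> 0.
      CLEAR: l_cost, l_retail.         " no valid price found
    ENDIF.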
    Another thing, as SAP recommends, an InfoObject with more than 500,000 records shouldn't be modeled as Master Data. And 0MAT_PLANT is a perfect candidate for this situation.
    So I'd advise to go with the ODS solution.
    Hope this helps.
    Regards,
    Luis

  • Dimension table design Data modeling question

    Hi Experts,
    Sorry if I am putting my question in the wrong forum; if so, please suggest an appropriate one.
    I need your opinion on the existing design of our 10-year-old data warehouse.
    There is one dimension table with a structure like the following:
    Dimension table
      Dimension_Key        NUMBER     (THIS IS NOT A PRIMARY KEY)
      Natural_Key          NUMBER     (from source)
      Source_Name          CHARACTER
      Current_Record_Ind   CHAR(1)
      From_Date            DATE
      To_Date              DATE
      ... many other columns; if any of them changes, a new current record is created and the previous one is marked 'H' (historical)
    Data is stored in the dimension table like this:
    Dimension_Key  Natural_Key  Source_Name  Current_Record_Ind  From_Date    To_Date
    1              10001        Source1      H                   1-jan-2005   31-may-2005
    1              10001        Source1      H                   1-jun-2005   12-dec-2011
    1              10001        Source1      C                   13-dec-2011  NULL
    2              20002        Source1      H                   1-jun-2001   12-dec-2011
    2              20002        Source1      C                   13-dec-2011  NULL
    The problem I see in this design is that there is no surrogate key: if any attribute changes, the new record is inserted by first taking the dimension key based on (natural_key, source_name, current_record_ind).
    Shouldn't it be stored like the following, based on data-warehousing principles?
    Dimension_Key  Natural_Key  Source_Name  Current_Record_Ind  From_Date    To_Date
    1              10001        Source1      H                   1-jan-2005   31-may-2005
    2              10001        Source1      H                   1-jun-2005   12-dec-2011
    3              10001        Source1      C                   13-dec-2011  NULL
    4              20002        Source1      H                   1-jun-2001   12-dec-2011
    5              20002        Source1      C                   13-dec-2011  NULL
    Please let me know the pros and cons of the current design.

    And what if you have both features, something like this:
    Lineno  Dimension_Key  Natural_Key  Source_Name  Current_Record_Ind  From_Date    To_Date
    1       1              10001        Source1      H                   1-jan-2005   31-may-2005
    2       1              10001        Source1      H                   1-jun-2005   12-dec-2011
    3       1              10001        Source1      C                   13-dec-2011  NULL
    I mean, just add a new column and populate it with the required ORDER BY clause. Because, as I understand it, in the second example you just added a new column which is something like a line number.
    Regards
    Girish Sharma

  • Data Modeling with BPC

    Hello, does anybody know of a paper that explains the data modeling process in BPC?
    Thanks
    Sergio

    Hi Sergio-
    Key to the data modeling question is the recognition that BPC utilizes a true account-based data model. This means only one key figure is available in your modeling design.
    The properties of the individual dimensions are key for much of the BPC-delivered functionality. For example, Currency Translation requires several specific properties across multiple applications to work correctly.
    Also, there is functionality to move data from Application to Application (SAP will soon be posting a How To guide for this!), but currently there is no capability to move data (using the available delivered BPC tools) from AppSet to AppSet. However, if you consider the creation of custom process chains and the power of the NetWeaver BW tools, you are essentially able to manipulate records from/to anywhere.
    Regards,
    Sheldon

  • SQL Developer Data Modeling - import from Oracle Designer Model

    Hi,
    I want to try to import a model from a Designer repository.
    The first step is to create a connection to the repository (9i version).
    I have created one, however any attempt to test the connection or to go to the next step simply finishes with error "4", without any valuable message.
    Any idea or suggestion ?
    Thanx.

    A clarification of the Data Modeling feedback application and this forum: the developers are part of the SQL Developer development team, but as for all our features, each developer has a focus area, so the Data Modeling developers will tend to answer those questions. There will not be a separate Data Modeling forum, so once we're in production all SQL Developer Data Modeling questions will be on this forum. We want to encourage all users to get into the habit of tagging posts. Having the posts tagged means that users can skip them if they're not their focus area, and users can search using the tags. The more tags are used in all the forums the better. You may well find the answers to your questions are outside this forum!
    As SQL Developer Data Modeling is in its first early adopter phase, we have provided a feedback application where users can provide feedback, log bugs and ask questions. We did the same for SQL Developer before its initial production release and found this to be very useful for the product.
    I'm happy for "How To" questions to be on this forum, as the answers benefit the broader community.
    Sue Harper

  • An error occurred while loading the model for the item or data source 'EntityDataSource'.

    Hi Team,
    We are trying to configure Power View in SharePoint 2013. We did the entire configuration by following the steps below. When we open an Excel 2013 workbook containing a Power View sheet (using Excel Web Services) deployed in a SharePoint document library, we get the error below.
    “An error occurred while loading the model for the item or data source
    'EntityDataSource'. Verify that the connection information is correct and that you have permissions to access the data source.”
    Steps followed for configuring Power View in SharePoint Server 2013 (single-machine setup):
    Installed SharePoint Server 2013 Enterprise edition
    Did not configure it at this time
    Installed SQL Server 2012 with a default instance installation (selected all features)
    Later installed SP1 for SQL Server 2012
    After that installed Analysis Services in SharePoint mode (SQL Server PowerPivot for SharePoint)
    Again installed another Analysis Services instance and selected the tabular data model mode
    Then ran the SharePoint 2013 configuration wizard for farm configuration (only for central admin creation)
    Later we ran PowerPivot for SharePoint 2013 (spPowerpivot.msi)
    Then, using the PowerPivot Configuration Tool for 2013, completed all the steps, configured the Excel service and registered the above two Analysis Services instances.
    Questions/doubts:
    Are the steps we followed above correct for configuring Power View in SharePoint Server 2013 Enterprise edition?
    Is SQL Server 2012 Enterprise edition with SP1 sufficient to view Power View sheets in SharePoint 2013, or do we need to use the SQL Server 2012 SP1 CTP3 software for the Power View feature as mentioned in the link below?
    http://technet.microsoft.com/en-us/library/jj219634.aspx
    (We don't have an option to download the CTP3 software from the Microsoft site now.)
    Or do we need to install Microsoft SQL Server 2012 With Power View For Multidimensional Models CTP?
    http://www.microsoft.com/en-sg/download/details.aspx?id=35822
    A few blogs say that the SQL Server 2012 instance should have been installed with SP1 in one go (not separately).
    http://www.microsoft.com/en-in/download/details.aspx?id=35575
    Error detail :
    <detail><ErrorCode xmlns="http://www.microsoft.com/sql/reportingservices">rsCannotRetrieveModel</ErrorCode><HttpStatus xmlns="http://www.microsoft.com/sql/reportingservices">400</HttpStatus><Message
    xmlns="http://www.microsoft.com/sql/reportingservices">An error occurred while loading the model for the item or data source 'EntityDataSource'. Verify that the connection information is correct and that you have permissions to access
    the data source.</Message><HelpLink xmlns="http://www.microsoft.com/sql/reportingservices">http://go.microsoft.com/fwlink/?LinkId=20476&amp;EvtSrc=Microsoft.ReportingServices.Diagnostics.Utilities.ErrorStrings&amp;EvtID=rsCannotRetrieveModel&amp;ProdName=Microsoft%20SQL%20Server%20Reporting%20Services&amp;ProdVer=11.0.3000.0</HelpLink><ProductName
    xmlns="http://www.microsoft.com/sql/reportingservices">Microsoft SQL Server Reporting Services</ProductName><ProductVersion xmlns="http://www.microsoft.com/sql/reportingservices">11.0.3000.0</ProductVersion><ProductLocaleId
    xmlns="http://www.microsoft.com/sql/reportingservices">127</ProductLocaleId><OperatingSystem xmlns="http://www.microsoft.com/sql/reportingservices">OsIndependent</OperatingSystem><CountryLocaleId xmlns="http://www.microsoft.com/sql/reportingservices">1033</CountryLocaleId><MoreInformation
    xmlns="http://www.microsoft.com/sql/reportingservices"><Source>ReportingServicesLibrary</Source><Message msrs:ErrorCode="rsCannotRetrieveModel" msrs:HelpLink="http://go.microsoft.com/fwlink/?LinkId=20476&amp;EvtSrc=Microsoft.ReportingServices.Diagnostics.Utilities.ErrorStrings&amp;EvtID=rsCannotRetrieveModel&amp;ProdName=Microsoft%20SQL%20Server%20Reporting%20Services&amp;ProdVer=11.0.3000.0"
    xmlns:msrs="http://www.microsoft.com/sql/reportingservices">An error occurred while loading the model for the item or data source 'EntityDataSource'. Verify that the connection information is correct and that you have permissions to access
    the data source.</Message><MoreInformation><Source>Microsoft.ReportingServices.ProcessingCore</Source><Message msrs:ErrorCode="rsErrorOpeningConnection" msrs:HelpLink="http://go.microsoft.com/fwlink/?LinkId=20476&amp;EvtSrc=Microsoft.ReportingServices.Diagnostics.Utilities.ErrorStrings&amp;EvtID=rsErrorOpeningConnection&amp;ProdName=Microsoft%20SQL%20Server%20Reporting%20Services&amp;ProdVer=11.0.3000.0"
    xmlns:msrs="http://www.microsoft.com/sql/reportingservices">Cannot create a connection to data source 'EntityDataSource'.</Message><MoreInformation><Source>Microsoft.AnalysisServices.SPClient</Source><Message>We
    cannot locate a server to load the workbook Data Model.</Message><MoreInformation><Source></Source><Message>We cannot locate a server to load the workbook Data Model.</Message><MoreInformation><Source>Microsoft.Office.Excel.Server.WebServices</Source><Message>We
    cannot locate a server to load the workbook Data Model.</Message></MoreInformation></MoreInformation></MoreInformation></MoreInformation></MoreInformation><Warnings xmlns="http://www.microsoft.com/sql/reportingservices"
    /></detail>
    Please help me in  configuring Power View for SharePoint 2013.
    Thanks in advance.
    Pavan Kumar

    I'll bet that I know the answer to this one - and you probably won't like it ;)
    Here's the telling portion of that error:  "We cannot locate a server to load the workbook Data Model."
    This error is thrown in two scenarios; the first, when you haven't registered an analysis server.  The second appears to be the issue you're having:
    PowerPivot requires SQL Server 2012 SP1.  Microsoft issued a release of SQL Server 2012 SP1 that really wasn't SP1...
    Check the version of SQL Server that you are running. It should be version 11.0.3000 or greater. If it's not, you have the wrong "SP1" installed. The correct one is here.
    Here's the bad news: you can't simply redeploy the correct one, because the installer already thinks that SP1 is installed. You'll have to back up (or detach and copy) your databases, then re-install SQL Server with the correct SP1, then restore or reattach the databases. Once you've done that, PowerPivot and Power View will work properly.
    I sincerely hope this resolves your issues - we wasted TWO WEEKS of our time with this problem!

  • Why should we load header data first and then we load item level data?

    Hi BW guru`s,
    I have a small point of confusion about data loading.
    Why should we load header data first and then load item-level data?
    Is there any particular reason?
    Scenario: first I uploaded 2LIS_11_VAHDR sales document header data from R/3 to BW using LO Cockpit extraction, then I loaded 2LIS_11_VAITM. This is the normal procedure we usually follow.
    I have a question: if I load the 2LIS_11_VAITM data first from R/3 to BW and then load 2LIS_11_VAHDR using LO Cockpit extraction, what will happen?
    Regards,
    Venkat

    There is no difference in doing it the other way.
    The load sequence comes into play only during activation: if you map the same fields from the two DataSources, you might want the previous value to be overwritten by data from the next DataSource.
    That is when you should care about loading one DataSource before the other.
    To answer your question: it is not a rule that header data should come first.

  • Basic questions on data modeling

    Hi experts,
    I have some basic questions regarding data modeling within MDM. I understand the available table types and the concept of lookup fields. I know that the MDM data modeling concept is different from the relational concept, but having a strong database background my first step was to design a relational data model which I would like to transfer to an MDM repository. Unfortunately I didn't find good information material on this. So here are some questions; maybe you can help me:
    1) Is it the right approach to model n:m relationships with multivalued lookup fields? E.g. main table Users with a lookup field from subtable SapAccounts (a user can have accounts in different SAP systems, that means more than one account).
    2) Does a record always have to be unique in MDM repositories (e.g. should we use Auto IDs in every table, or do we have to mark a combination of fields as unique)? Is a composite key of 2 or more fields represented by marking these fields as unique?
    3) Is the concept of relationships in MDM only based on relationships between single records (not valid for all records in a table)? Is it necessary to define all relationships similar to the relational data model in MDM? Is there something similar to referential integrity in MDM?
    4) Is it possible to change the main table to a sub table later on if we realize that it also has to be used as a lookup table for another table (when extending the data model), or do we have to create a new repository from scratch?
    Thank you for your answers.
    Regards, bd

    Yes, you are correct. It is quite difficult to map a relational database to an MDM one. But then again, MDM is not 'just' a database; it holds much more 'master' information compared to any relational DB.
    1) Is it the right approach to model n:m relationships with multivalued lookup fields? E.g. main table Users with a lookup field from subtable SapAccounts (a user can have accounts in different SAP systems, that means more than one account).
    Yes. Here you need to use multivalued (MV) lookup tables, or you can also try qualified tables if it gets more complex.
    2) Does a record always have to be unique in MDM repositories (e.g. should we use Auto IDs in every table, or do we have to mark a combination of fields as unique)? Is a composite key of 2 or more fields represented by marking these fields as unique?
    The concept of uniqueness differs here in that you also have something called Display Fields (DF). A combination of DFs can also be treated as unique. For instance, while importing records, if you select these DFs as a combination you will eliminate any possible duplicates based on that combination. An Auto ID is one way to have a unique id once a record is within MDM, while you use UF or DF to eliminate any possible duplicates at import level.
    3) Is the concept of relationships in MDM only based on relationships between single records (not valid for all records in a table)? Is it necessary to define all relationships similar to the relational data model in MDM? Is there something similar to referential integrity in MDM?
    Hmm... good one. Referential integrity: what I assume you are talking about is that if you have relationships between tables, then removing a record is not possible because it is a foreign key for some record. MDM does not allow that, as relationships within MDM are physical and not conceptual. For instance, a material can have components. Now if the material does not exist, then any relationship to components is not worthwhile to maintain, hence the relationship is eliminated. In the relational model, relationships are more conceptual. So with MDM's usage of lookups and the main table you do not need to maintain these kinds of relationships on your own.
    4) Is it possible to change the main table to a sub table later on if we realize that it also has to be used as a lookup table for another table (when extending the data model), or do we have to create a new repository from scratch?
    No. It is not possible to convert the main table. There is only one main table and it cannot be changed.
    I went for the same option but it did not work. What I suggest is to look at your legacy systems one by one and see what fields in general can be classified as Master, Reference or Transactional - you will start getting answers immediately.

  • Questions on Subviews and Import/Export in Data Modeler v3 EA1.

    I have a few questions about the capabilities of Data Modeler v3 EA1:
    1) Is it possible to rename subviews? I would like more meaningful names than Relational_x - Subview_x.
    2) Is it possible to save documents at subview level?
    3) Is it possible to import/export subsets of data?
    4) I'm having problems importing an Erwin 7 .xml file; are there any known problems with this import?
    Judy

    Hi Judy,
    1) To rename a subview just right click on it in the browser tree and select "Properties". In the properties dialog change the name and click OK button.
    2) You can save a subview as a new Data Modeler design - from the File menu select Export -> To Data Modeling Design. In the newly opened dialog select the subview you want to export and click the OK button.
    3) After saving a subview as a new design (see answer #2), it can be imported into some other design (File -> Import -> Data Modeler Design).
    4) What kind of problems do you have with import of Erwin 7.* xml file?
    Regards,
    Ivan
