Data Modeler reporting repository & process model metadata

Hi,
From the reporting repository schema, can one query the "Source Target mapping" information provided in Transformation tasks in the Process Model?
Is this feature supported in the latest 4.1 release?
Thanks in advance!

Thank you Philip,
We are doing reports in TOAD and Cognos 8.
So far so good. One of the pieces of information I need out of the Oracle Data Modeler reporting schema is the columns in foreign keys. Foreign keys are not usually enforced in data marts, but data analysts must know what the foreign keys are in order to join tables.
There is a note in Relational Model Elements.pdf: "Foreign keys of DMR_CONSTR_INDEX_COLUMNS table are not valid since one column is used to refer to objects in two different tables".
This raises the question: is there a valid foreign key in DMRS_CONSTR_INDEX_COLUMNS to join it to DMRS_FOREIGNKEYS, or is the model documented incorrectly?
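The kind of query I am trying to write looks roughly like this; it is only a sketch, and the join columns shown are guesses rather than documented names, which is exactly what I need confirmed:

    -- Sketch only: the join columns below are assumptions, not taken from the documentation.
    SELECT fk.name          AS foreign_key_name,
           cic.column_name  AS fk_column
    FROM   dmrs_foreignkeys          fk
    JOIN   dmrs_constr_index_columns cic
           ON cic.constraint_id = fk.object_id   -- assumed join columns
    ORDER  BY fk.name;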
The repository adds tremendous value to Data Modeler.
Gerhardt

Similar Messages

  • "Oracle SQL Developer Data Modeler reporting repository required"

    Hi all,
    Trying to make use of Data Modeler Reports from within SQL Dev EA 2.1, but every time I try to connect to a DB I get the following message "Oracle SQL Developer Data Modeler reporting repository required."
    Any idea how to go about sorting this out?
    Thanks

  • SQL Data Modeler reporting repository data model

    SQL Data Modeler (SDM) reporting repository data model
    I exported several models to the reporting schema and reviewed the reports. So far so good. The reports I need are not included, so I'm creating my own. I imported the reporting schema into SDM to explore the model and see how to join the tables. Has anyone already done this exercise and produced a data model for the reporting schema with primary keys, foreign keys and relationships? It would save me a pile of work if I could download the model and import it into SDM.
    Gerhardt

  • How queries get processed in the data model of a report

    In a report there are, for example, four queries in the data model.
    In the first query there is a group in which we calculate the value of a formula column, c_currency.
    In c_currency, for each set of values, we calculate :p_curr.
    :p_curr is a global variable, i.e. a user parameter.
    :p_curr is then used in the format trigger of the next group, in the second query.
    Suppose the first query executes for 15 sets of values.
    Will :p_curr retain the values for all 15 sets of values, or will the value be overwritten each time?
    Or will the first query execute for the first set of values and pass :p_curr to the next query, then execute for the second set? Or will it execute for all sets and then pass :p_curr to the next query?

    This has nothing to do with the number of tables in your query. Search for the error message in the Reports help and you find this:
    Cause:     You are attempting to default a layout that, if created using current settings, would be too large to fit within the defined height and width of a report page.
    Action:     Go back to the Report Wizard and reduce the values of default settings (e.g., shorten some field widths). Alternatively, you can increase the size of a report page using the Report property palette. Then redefault your layout. Continue making adjustments as necessary.

  • Report locale, subtemplates and Data Model

    Hi,
    I have the following situation:
    We have 4 templates according to language, with french the default language:
    - FR: template.rtf
    - NL: template_nl.rtf
    - DE: template_de.rtf
    - EN: template_en.rtf
    The report locale is enough for BI Publisher to select the correct template, so that's OK.
    Some data from our database is also based on locale (e.g. street names and cities).
    Question 1: is it possible to use the current locale as a parameter for the (SQL) data model in some way (e.g. by using a specific parameter name)? For now we set the report locale to the correct language AND we send that same value through as a parameter. It would be cleaner to just set the report locale and be able to use that value in some way.
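    To make the current workaround concrete, the data model query today looks roughly like this (the table and column names are illustrative, and P_LANG is our own explicitly passed parameter, not a BI Publisher built-in):

        SELECT street_name,
               city_name
        FROM   address_translations         -- illustrative table name
        WHERE  language_code = :P_LANG;     -- same value as the report locale, passed separately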
    Question 2: all of the above templates have a footer that is reused; each also shares the same data model, but the data is again based on the current locale,
    so: footer.rtf, footer_nl.rtf, footer_de.rtf and footer_en.rtf. Do I need to import all of these into their respective files, or will localisation work (I doubt it)?
    Question 3: is there a way to implicitly pass all parameters that were passed to the main template on to the subtemplates? (E.g. the LANG parameter is passed to template_nl.rtf, which can use it in its data model; is there a way to automatically make LANG available to footer_nl.rtf's data model, or do I need to explicitly pass it as a parameter in the <?call-template: footer?>?) Again, if the report locale were available to the data model in some way, this would be easy.
    Thanks in advance

    Hello,
    Probably the easiest would be to use an OnDemand Process and some AJAX to pull your data into the extjs object.
    Here's an example of an OnDemand Process.
    http://apex.oracle.com/pls/otn/f?p=11933:11
    For data, the easiest at the moment would probably be to use the Oracle XML functions to generate your data in XML and use that as your data feed.
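    As a rough illustration of that idea (the EMP table here is just the standard demo table, used as a placeholder):

        -- Generate an XML feed with Oracle's SQL/XML functions
        SELECT XMLELEMENT("rows",
                 XMLAGG(
                   XMLELEMENT("row",
                     XMLFOREST(e.empno AS "id",
                               e.ename AS "name")))) AS xml_feed
        FROM   emp e;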
    You might want to take a look at the new Interactive Reports included in APEX 3.1; while extjs has a lot of functions, there is a bit of setup involved, whereas with Interactive Reports all you need is a SQL query: http://www.oracle.com/technology/products/database/application_express/html/irrs.html
    You should search the forum for extjs, as there have been quite a few successful integrations of APEX with extjs / YUI / jQuery etc.
    Regards,
    Carl
    blog : http://carlback.blogspot.com/
    apex examples : http://apex.oracle.com/pls/otn/f?p=11933:5

  • How to print Data Model in Report??

    Does anyone know how to print the Data Model?
    Thanks for your help.
    - Frank

    Hi Venkat,
    Thanks a lot. I am checking Appendix K.
    I want to know: if I want to customize the report as per the client requirement, can you please let me know which template I should use if I require remittance advice as well as check print data on the layout.
    The following is the list of fields I require:
    Vendor ID
    Check Date
    Check number
    invoice date
    invoice/ CR memo number
    invoice description
    invoice gross amount
    invoice discount amount
    invoice net amount
    total gross amount
    total discount amount
    total net amount
    logo
    company name and info
    bank name and info
    check number
    check amount spelled out
    check date
    check amount numeric
    payee name
    CEO signature
    MICR check number
    MICR routing number
    MICR bank account number
    mailing return address
    mailing address
    Venkat, it would be a great help if you could let me know the exact process to follow for the check printing report.
    I worked on BI Publisher reports in 11i, but in 12i the process is a bit different.
    Also, if you have any template ready, can you please send it to my mail ID? I forwarded you the test mail from my official ID or on [email protected]
    Looking forward to your kind response.
    Regards
    Ratnesh

  • Sharepoint 2013 Reporting Services & OLAP Cubes for Data Modeling.

    I've been using PowerPivot & PowerView in Excel 2013 Pro for some time now, so I am eager to get set up with SharePoint 2013 Reporting Services.
    Before setting up Reporting Services, I have just one question to resolve.
    What are the benefits/differences of using a normal flat table set up, compared to an OLAP cube?
    Should I base my Data Model on an OLAP Cube or just Connect to tables in my SQL 2012 database?
    I realize that OLAP Cubes aggregate data making it faster to return results, but am unclear if this is needed with Data Modeling for Sharepoint 2013.
    Many thanks,
    Mike

    So yes, PV is an in-memory cube. When data is loaded from the data source, it's cached in memory and stored (compressed) in the Excel file. (The same concept applies to SSAS Tabular mode: it loads from the source and is cached in memory, but is also stored, compressed, in data files in case the server reboots or something similar.)
    As far as performance, tabular uses memory but has a shorter load process (no ETL, no cube processing), while OLAP/MDX uses less memory by requiring ETL and cube processing. Technically, tabular uses column compression, so memory consumption will depend on the type of data (numeric data is GREAT, text not as much). The decision to use OLAP (MDX) or Tabular (DAX) is just dependent on the type of load and your needs: both platforms CAN do realtime queries (ROLAP in multidimensional, or DirectQuery for tabular), or can use their processed/in-memory cache (MOLAP in multidimensional, xVelocity for tabular) to process queries.
    If you have a cube, there's no need to reinvent the wheel (especially since there's no way to convert/import the BIDS/SSDT project from MDX to DAX). If you have SSAS 2012 SP1 CU4 or later, you can connect PV (from Excel OR from within SP) directly to the MDX cube.
    Generally, the benefit of PP is for the power users who can build models quickly and easily (without needing to talk to the BI dept). SharePoint lets those people share the reports with a team. If it's worthy of including in an enterprise warehouse, it gets handed off to the BI folks who vet the process and calculations, but by that time the business has already received value from the self-service (Excel) and team (SharePoint) analytics, and the BI team has less effort since the PP model includes data sources and calculations: aside from verifying the sources and calculations, BI can just port the effort into the existing enterprise ETL / warehouse / cubes / reports. Shorter dev cycle.
    I'll be speaking on this very topic (done so several times already) this weekend in Chicago at SharePoint Saturday!
    http://www.spschicagosuburbs.com/Pages/Sessions.aspx
    Scott Brickey
    MCTS, MCPD, MCITP
    www.sbrickey.com
    Strategic Data Systems - for all your SharePoint needs

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult and we need some help. We want to be able to compare side-by-side actual measures with their corresponding objectives/targets. Sounds simple. But, our objectives are static (not able to be aggregated) with multi-dimensionality and multi-levels. We need some best practice tips on how to design our data model and repository properly so that we can see the objective/target for a measure regardless of the dimensions that are used in the criteria and regardless of the level.
    Here are some more details:
    Example of existing objective table:

    Dimension1   Dimension2   Dimension3   Obj1   Obj2   Quarter
    NULL         NULL         NULL         .99    1.8    1Q13
    DIM1VAL1     NULL         NULL         .99    2.4    1Q13
    DIM1VAL1     DIM2VAL1     NULL         .98    2.41   1Q13
    DIM1VAL1     DIM2VAL1     DIM3VAL1     .97    2.3    1Q13
    DIM1VAL1     NULL         DIM3VAL1     .96    1.9    1Q13
    NULL         DIM2VAL1     NULL         .97    2.2    1Q13
    NULL         DIM2VAL1     DIM3VAL1     .95    2.0    1Q13
    NULL         NULL         DIM3VAL1     .94    3.1    1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure they could get back a different objective. They could add Dimension1 and get 99%. They could add Dimension1 and Dimension2 and get 98%. They could add all three dimensions and get 97%. They could add zero dimensions (highest grain) and get 99%. Using our existing structure if we were to add a new dimension to the mix the possible combinations would grow dramatically. (Not flexible)
    - We would like our final solution to be flexible enough so that we could view objectives with altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
    Could anyone share a similar situation where you have implemented a data model structure with the proper repository joins to handle showing side-by-side objectives/targets where the objectives were static and could be displayed at differing levels with flexible dimensions as described?
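    To make the lookup behaviour concrete, this is roughly the matching logic the model needs to reproduce; the table and column names simply mirror the example above and are illustrative only:

        -- Pick the objective row whose populated dimensions match the query context,
        -- treating NULL as "matches anything", and prefer the most specific row.
        SELECT obj1, obj2
        FROM  (SELECT o.obj1, o.obj2
               FROM   objectives o                                  -- illustrative table name
               WHERE  o.quarter = :quarter
               AND    (o.dimension1 = :dim1 OR o.dimension1 IS NULL)
               AND    (o.dimension2 = :dim2 OR o.dimension2 IS NULL)
               AND    (o.dimension3 = :dim3 OR o.dimension3 IS NULL)
               ORDER  BY (CASE WHEN o.dimension1 IS NULL THEN 0 ELSE 1 END)
                       + (CASE WHEN o.dimension2 IS NULL THEN 0 ELSE 1 END)
                       + (CASE WHEN o.dimension3 IS NULL THEN 0 ELSE 1 END) DESC)
        WHERE  ROWNUM = 1;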
    Any help would be greatly appreciated.

    Hi, yes, this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS Topic, which will in turn insert the data into a DB. Alternatively, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. Is there any chance of altering that, i.e. any way, by changing config files, to make the data go to a custom schema created by a user instead of that Reports schema? I don't know if it can be done. My problem is that when I configure the JMS Topic for the sensor actions, I see blank data coming through; for some reason the data is not getting posted. I have used an ESB with a routing service based on the schema which I am monitoring. Can anyone help?

  • SQL Developer Data Modeler Repository

    Hi,
    I would like to know how to save all my applications into the Data Modeler Repository instead of doing it piece by piece and having to create a dmd file for every single application I imported into Data Modeler.
    In Oracle Designer, everything is in the repository and I can run reports against the tables and views belonging to that repository. So, I can produce a pdf documentation out of it. Can we do the same with Data Modeler?
    Thank you!

    Hello Anonymous,
    I'll start with your first comment, "I would like to know how to save all my applications into the Data Modeler Repository". The repository is just for reporting purposes, as Dimitar points out, so saving all your applications in the Data Modeler is simply File - Save. This action saves the current design (with all its models) to the file system. The main premise here is that the Data Modeler is a file-based product and all models and designs are saved locally to the file system. This is a positive move for many customers who are now building applications in Java or using any of the application development environments that work with a variety of files. They are taking all the artifacts and placing them under version control using open source versioning tools like Subversion. Using the Data Modeler, they can do the same.
    For reporting you have a choice, one of which has already been explained. The action of creating a schema before you export the design is a one-off step. Following on from this, you can export new versions of the model to the same repository. The reason we did this was that many in the Designer audience wanted to write and run their own reports, so here you have a set of tables against which you can write whatever SQL query you want.
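    For example, a custom report can start from something as simple as the query below; the table and column names here are purely illustrative, and the real ones should be taken from the reporting schema documentation:

        -- Illustrative only: check the documented reporting schema for the actual names
        SELECT t.name     AS table_name,
               t.comments
        FROM   dmrs_tables t            -- assumed reporting schema table
        ORDER  BY t.name;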
    The other reporting option is to use the integrated reports in the product and you do not have to export the design or open another product or write SQL.
    Have you used the Design Rules in the Data Modeler? These are a set of "reports" for quality assurance purposes. There are many of these reports, at all levels of the design, which verify that your model is defined according to a set of database rules and general business rules, and if you like, you can add your own rules.
    I attended ODTUG last week in LA and was really pleased to listen to a number of talks by well seasoned Designer users who have successfully transitioned to the Data Modeler and who had many positive experiences.
    Yes, the tools are different, but the one tool was not designed to replace the other: Data Modeler does none of the application design and generation that Designer does. Those customers who have wanted to replace one tool with the Data Modeler (people are moving to the Data Modeler from different tools) have reported to me that they are pleased with the change.
    Regards
    Sue Harper
    Product Manager

  • Data Modeler goes to Oracle Database Repository ?

    Hi Philip,
    Thank you very much for your quick answer. Hope that this forum is the correct one right now.
    To us it seems to be impossible to use the Data Modeler v3.1 with SVN.
    The recent designer-toolbox hasn't been updated by Oracle since 2007.
    That's why we have created various interfaces and workarounds of our own:
    A cg-refcode interface with multilingualism, a table and module API, and a Hyperion application
    to generate DDL statements from the CASE repository (using user extensions for the new v10g features).
    These interfaces would not be possible with an XML structure and would have to be discarded.
    Central data modeling is a strategic approach for our company (financial) and has been
    classified with risk class 1.
    Because of this fact, I'd like to ask again:
    Is there a timetable for converting Oracle's Data Modeler from XML structure to database?
    By the way: this was promised by Mrs Sue Harper in 2007.
    Regards,
    Reinhard

    Hi Reinhard,
    "To us it seems to be impossible to use the Data Modeler v3.1 with SVN." Why is that? Is it forbidden to use SVN in your company, or do you have other reasons?
    "Is there a timetable for converting Oracle's Data Modeler from XML structure to database?" Not for a functional repository. We have the reporting repository, and our target is for all information to be there. You can have a history of your designs (a snapshot at a specific time) in the reporting repository and then run your tools against it.
    That's for the moment. I'm not going to make other promises.
    Philip

  • M:N relationships within a dimension: Standard process vs. BI Data model

    Hi,
    I just completed a review of "Multi-Dimensional Modeling with BI" from this link:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/6ce7b0a4-0b01-0010-52ac-a6e813c35a84 and I have a quick question here:
    On page 36 of this link, the author noted that
    "According to the standard process, color should be in the master data table for material, like material type. But this is not possible because the material is the unique key of the master data table. We cannot have one material with multiple colors in the master data table."
    i.e. my understanding is that, based on the Standard Process, it is NOT possible to place two characteristics with an M:N relationship in the SAME dimension, but with the BI Data Model this, the author points out, is possible
    "due to the usage of surrogate keys (DIM-IDs) in the dimension tables allowing the same material several times in the dimension table", i.e. even though material is the unique key of the dimension.
    1.
    What is being referred to here as "Standard Process", since the document is on "... modeling with BI"?
    2.
    It goes on to discuss "Designing M:N relationships using a compound attribute" as a solution to the M:N relationship in a dimension.
    What is the need to address this problem with compound attributes if characteristics in M:N relationships within a dimension, such as material and color, are not a problem in the BI Data Model?
    3.
    Can you help explain the underlined cautions of the following guidelines for compound attributes (with examples if possible, please):
    "If you can avoid compounding - do it!
    Compound attributes always mean there is an overhead with respect to:
    Reporting - you will always have to qualify the compound attributes within a query
    Performance
    Compounding always implies a heritage of source systems, and just because it makes sense within the source systems does not necessarily mean that it will also make sense in data warehousing."
    Thanks

    Hi Amanda,
    In a dimension table, any number of semantically related dimension attributes are stored in a hierarchy (a parent-child, 1:N relationship). If an M:N relationship exists between dimension attributes, they are normally stored in different dimension tables.
    I checked the document. On page 36 the author noted that
    "According to the standard process, color should be in the master data table for material, like material type. But this is not possible because the material is the unique key of the master data table. We cannot have one material with multiple colors in the master data table."
    1.
    What is being referred to here as "Standard Process", since the document is on "... modeling with BI"?
    The first thing to note is that the diagram shown there is the classic star schema. The extended star schema never stores master data in dimension tables; it stores master data in separate master data tables, and the dimension tables store only dimension IDs and SIDs. Nowadays the classic star schema is obsolete.
    The standard process is that anything which describes a master data object can be added as an attribute of that master data. Suppose Employee is the master data; then phone number can be one of its attributes. That is the standard process, but it cannot be followed every time, for the reason already explained: the material is the unique key of the master data table.
    2.
    It goes on to discuss "Designing M:N relationships using a compound attribute" as a solution to the M:N relationship in a dimension. What is the need to address this problem with compound attributes if characteristics in M:N relationships within a dimension, such as material and color, are not a problem in the BI Data Model?
    Because we use a compounding characteristic to define the master data uniquely, and we load the compounding characteristic separately, independently of the master data; the compounding characteristic has its own master data tables. That is how the problem is resolved.
    3.
    Can you help explain the underlined cautions of the following guidelines for compound attributes?
    For a compounding characteristic you have to load the compounded master data separately, which is an overhead. Moreover, during query execution two tables are accessed, which may cause a performance issue. Performance can be affected when compounded characteristics are used extensively, particularly when a large number of characteristics are included in a compounding. In most cases, the need to compound is discovered during data modeling.
    Regards,
    Debjani

  • Data Modeler version 2.0.0 - Export to reporting schema error

    Hi, I have the Oracle SQL Developer Data Modeler version 2.0.0 installed.
    When I try to export a diagram to the reporting schema, I get the following error: "Error occurred while exporting to Reporting Schema.See the log file for details".
    The log file has no errors.
    The repository owner is DBA.
    The connection is ok.
    Can you help me?
    Thanks in advance
    Paolo.

    Sorry about the log. I looked at the one at the bottom of the window; I didn't know about the one in the directory.
    I purged the log and tried again with an account that has only the CONNECT and RESOURCE privileges.
    At the bottom of the message you'll find the log.
    The java.exe referred to is the one in the java directory of the Data Modeler (I downloaded the version with Java included).
    Thanks for the answer
    Paolo.
    SQL> create user p identified by p default tablespace f2 temporary tablespace temp;
    User created
    SQL> grant connect,resource to p;
    Grant succeeded
    2009-10-23 09:53:10,921 [main] INFO ApplicationView - Oracle SQL Developer Data Modeler Version: 2.0.0 Build: 570
    2009-10-23 09:53:11,546 [main] WARN AbstractXMLReader - There is no file with default domains (path: domains name: defaultdomains)
    2009-10-23 09:56:36,984 [Thread-4] ERROR ReportsHandler - Error Exporting to Reporting Schema:
    java.sql.SQLException: Missing IN or OUT parameter at index:: 10
         at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
         at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:110)
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:171)
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:227)
         at oracle.jdbc.driver.OraclePreparedStatement.processCompletedBindRow(OraclePreparedStatement.java:1737)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3376)
         at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3462)
         at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1061)
         at oracle.dbtools.crest.exports.reports.RSCheckConstraint.export(Unknown Source)
         at oracle.dbtools.crest.exports.reports.RSColumns.export(Unknown Source)
         at oracle.dbtools.crest.exports.reports.RSRelationalModel.export(Unknown Source)
         at oracle.dbtools.crest.exports.reports.ReportsHandler.export(Unknown Source)
         at oracle.dbtools.crest.swingui.ControllerApplication$ExportToReportsSchema$1.run(Unknown Source)

  • Repository Data Model

    Does anybody have the repository data model available? And/or has anybody reverse engineered the core hierarchy tables?
    thank you
    jiri
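    For anyone who wants to reverse engineer the joins themselves, one approach (assuming the reporting schema actually declares its constraints, which it may not) is to query the data dictionary from the repository owner's account; a sketch:

        -- List each foreign key constraint, its columns and the table it references.
        -- Returns nothing if the reporting schema does not declare the constraints.
        SELECT c.table_name,
               c.constraint_name,
               cc.column_name,
               cc.position,
               r.table_name AS referenced_table
        FROM   user_constraints  c
        JOIN   user_cons_columns cc ON cc.constraint_name = c.constraint_name
        JOIN   user_constraints  r  ON r.constraint_name  = c.r_constraint_name
        WHERE  c.constraint_type = 'R'
        ORDER  BY c.table_name, c.constraint_name, cc.position;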

  • Data Modeler : Modifying the Table Report layout

    Hi ,
    I'm using SQL Data Modeler (DM) 4.0 EA.
    I used the File -> Reports option to generate a report on tables, and in the report:
    1) I couldn't see the column data type (refer to the table below). Is there an option to include the Data Type in the report?
    No  Column Name  PK  FK  M  Data Type  DT kind
    1   ID1          P       Y  (10)       LT
    2   ID2              F   Y  (10)       LT
    3   ID3                  Y  (1)        LT
    4   ID4                  Y  (10)       LT
    5   ID5              F      (20,4)     LT

    (The Domain Name, Formula (Default Value), Security and Abbreviation columns are empty.)
    2) In the options to manage the Table report (Reports -> Manage), currently I can only remove a heading (like Descriptions, Notes, Columns, Column Comments). Is there any way to customize the report so that I can remove some columns (mostly the empty ones) and add the column comments to the column table rather than to a separate column comments table?
    Please let me know if there is an option to customize the report.
    Note: I even tried generating the report using search-then-report, but it doesn't give a complete report (I was only able to generate a report on column or table name or constraints, not all together).
    Thanks,
    Srinivasan.K

    Thanks for the immediate reply.
    I checked the DM again; the report comes out fine for the tables which have a data type. But now all my data types have changed to UNKNOWN, and it looks like this is why my table report is coming out without the "Data Type".
    Can you please help me fix this unknown data type issue?
    Table report for the ones with proper data types:
    Columns

    No  Column Name  PK  FK  M  Data Type       DT kind
    1   ID1                  Y  NUMERIC (10)    LT
    2   SCRIPT               Y  VARCHAR (1024)  LT
    3   UPGRADE_S            Y  VARCHAR (10)    LT
    4   UPGRADE_D            Y  VARCHAR (250)   LT
    5   UPGRADING_F          Y  VARCHAR (20)    LT
    6   TIMESTAMP2           Y  Timestamp       LT

    (The Domain Name, Formula (Default Value), Security and Abbreviation columns are empty.)

  • Copy data models and reports from BW 3.1 to NW 2004s

    Hi experts,
    Our client has two BW servers: BW 3.1 and BI 7. BW 3.1 contains lots of data models and reports. And the BI 7 server is newly installed.
    Now we want to copy these data models and reports from BW 3.1 to the new BI 7 server. Are there any solutions for this?
    Thank you very much in advance.

    Hi Frank,
    Sounds like a cross-version transport is needed.
    This is a solution we have used to do what you want to do:
    Create and release a transport as per normal.
    Copy the transport files from the source system (BW 3.1), e.g. /usr/sap/trans/data & /usr/sap/trans/cofiles, to the same folders on the target system.
    Basis help is needed here.
    From here onwards, using stms_import should help you in the normal manner.
    Works a treat.
    I have transported the following, all correctly appearing as 3.x data models in NW 2004s:
    DSO objects
    Cubes
    Transfer/Update rules
    Reports
    Cheers,
    Pom
