Best practices: Subversion and Data Modeler

Hello, I'm looking for some best practices regarding Subversion and Data Modeler.
A team of 10 analysts creates several releases of our product over time.
Within one release you'll find several change requests.
The application itself contains about 700 tables, so performance is important.
I want to establish a lean working method where analysts can focus on their job: design.
So far I'm thinking of creating one trunk containing the DB model; let's call it v17.00.
Analysts could create their designs in separate projects grouped by change request, e.g. CR1234.
When development starts, I would compare the trunk model with their change request design to generate the ALTER script.
Afterwards I would import their design (CR1234) into the trunk.
Note: it's possible that a change request gets cancelled - that's why I opt for a design per change request.
This way of working seems much leaner to me than setting up branches and merging.
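For illustration, here is a minimal sketch of how that per-change-request cycle could be scripted, assuming the svn command-line client is on the path. The repository URL, paths and CR number are placeholders, and the Data Modeler compare that produces the ALTER script stays a manual step:

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CrWorkflow {

    static final String REPO = "https://svn.example.com/repo"; // placeholder URL

    // Run one svn command and fail loudly if it does not succeed.
    static void svn(String... args) throws IOException, InterruptedException {
        List<String> cmd = new ArrayList<>();
        cmd.add("svn");
        cmd.addAll(Arrays.asList(args));
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IllegalStateException("svn failed: " + Arrays.toString(args));
        }
    }

    public static void main(String[] args) throws Exception {
        String cr = "CR1234"; // change request id, one design project each

        // 1. Give the change request its own design project, next to the
        //    trunk model rather than as a branch of it.
        svn("mkdir", "--parents", REPO + "/designs/" + cr,
            "-m", "Design project for " + cr);

        // 2. The analyst checks it out and commits designs there; a
        //    cancelled CR is simply never imported into the trunk.
        svn("checkout", REPO + "/designs/" + cr, "work/" + cr);

        // 3. When development starts: check out the trunk, run the Data
        //    Modeler compare against the CR design to generate the ALTER
        //    script, copy the agreed design into the trunk, then commit.
        svn("checkout", REPO + "/trunk/model", "work/trunk");
        // ... manual compare and copy of the design files happens here ...
        svn("commit", "work/trunk", "-m", "Import design " + cr + " into trunk");
    }
}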
My opinion, being a novice Subversion user, is that setting up branches and merging is "more complex" and might cause frustration for designers.
Does anyone have a similar setup or advice?
kr
chris

Hi Sam,
Let me add my two cents here. When speaking about MAN deployments, the name of the game is MPLS, so I guess you are using the same on your Cat 6500s and connecting your customers on 3550s using VLANs.
Regarding your questions:
a) Upgrading Ethernet to L3 for traffic shaping: this is basically done at the 3550, so I suppose that's what you intend to do. Also, you will be letting spokes talk only to the hub site, so inter-VLAN routing, at least between the hub and each spoke, will be required. The other way is to configure P2P circuits between the hub site with VLAN mapping (per spoke) and the spoke sites with port mapping; in this scenario inter-VLAN routing is not a necessity.
b) Security: this depends on what exact architecture you have deployed. In my case I have simply installed a gateway router with BGP peering to the PEs; a separate VRF along with redistribution does the trick.
Hope I addressed the query correctly; let me know if that helped.
Cheers
~sultan

Similar Messages

  • Links for SQL Developer and Data Modeler not working?

    Hi folks,
    I tried to access the new SQL Developer and Data Modeler links posted in the April 1 message (and on the Oracle site) from a couple of machines with no luck (empty zip files) - is there an update to the links?
    Thanks!
    Tomo

    In Firefox, a zero-length zip. In IE, it loses the connection to downloads.oracle.com.

  • Data Model best Practices for Large Data Models

    We are currently rolling out Hyperion IR 11.1.x and are trying to establish best practices for BQYs and how to display these models to our end users.
    So far, we have created an OCE file that limits the selectable tables to only those that are within the model.
    Then we created a BQY that brings the tables into a data model, created MetaTopics for the main tables, and integrated the descriptions via lookups in the MetaTopics.
    This seems to be OK; however, any time I try to add items to a query, as soon as I add columns from different tables, the app freezes up, hogs a bunch of memory and then closes itself.
    Obviously this isn't acceptable to give to our end users, so I'm asking for suggestions.
    Are there settings I can change to get around this memory-hogging issue? Do I need to use a smaller model?
    And in general, how are you all deploying this tool to your users? Our users are accustomed to a pre-built data model, so they can just click, add the fields they want, and hit submit. How do I get close to that ideal with this tool?
    thanks for any help/advice.

    I answered my own question. In the case of the large data model, the tool by default was attempting to calculate every possible join path to get from table A to table B (even though there is a direct join between them).
    In the data model options, I changed the join setting to use the join path with the least number of topics. This skipped the extraneous steps and allowed me to proceed as normal.
    Hope this helps anyone else who may bump into this issue.

  • Report locale, subtemplates and Data Model

    Hi,
    I have the following situation:
    We have 4 templates according to language, with French as the default language:
    - FR: template.rtf
    - NL: template_nl.rtf
    - DE: template_de.rtf
    - EN: template_en.rtf
    The report locale is enough for BI Publisher to select the correct template, so that's OK.
    Some data from our database is also based on locale (e.g. street names and cities).
    Question 1: is it possible to use the current locale as a parameter for the (SQL) data model in some way (e.g. by using a specific parameter name)? For now we set the report locale to the correct language AND we send that same value through as a parameter. It would be cleaner to just set the report locale and be able to use that value in some way.
    Question 2: all of the above templates have a footer that's being reused; each also shares the same data model, but the data is again based on the current locale.
    So: footer.rtf, footer_nl.rtf, footer_de.rtf and footer_en.rtf. Do I need to import all of these in their respective files, or will localisation work (I doubt it)?
    Question 3: is there a way to implicitly pass all parameters that were passed to the main template on to the subtemplates? (E.g. the LANG parameter is passed to template_nl.rtf, which can use it in its data model; is there a way to automatically make LANG available to footer_nl.rtf's data model, or do I need to explicitly pass it as a parameter in the <?call-template: footer?>?) Again, if the report locale were available to the data model in some way, it would be easy.
    Thanks in advance

    Hello,
    Probably the easiest would be to use an OnDemand Process and some AJAX to pull your data into the extjs object.
    Here's an example of an OnDemand Process.
    http://apex.oracle.com/pls/otn/f?p=11933:11
    For data the easiest at the moment would be probably to use the Oracle XML functions to generate your data in XML and use that as your data feed.
    You might want to take a look at the new Interactive Reports included in APEX 3.1. While extjs has a lot of functions, there is a bit of setup involved, whereas with Interactive Reports all you need is a SQL query: http://www.oracle.com/technology/products/database/application_express/html/irrs.html
    You should search the forum for extjs, as there have been quite a few successful integrations of APEX and extjs / YUI / jQuery etc.
    Regards,
    Carl
    blog : http://carlback.blogspot.com/
    apex examples : http://apex.oracle.com/pls/otn/f?p=11933:5

  • Integration of SQL Developer and Data Modeling - concerns.

    In another thread about OSDM connections, I commented that the development teams for OSDM and the core product seem to be separate, and Sue responded.
    Sue Harper wrote:
    A clarification of the Data Modeling feedback application and this forum: the developers are part of the SQL Developer development team, but as for all our features, each developer has a focus area, so the Data Modeling developers will tend to answer those questions.

    The reason I said that they seem to be separate is that OSDM seems to be put together completely differently. I hit another example today. I downloaded a fresh copy of OSDM without the JDK. On starting it up, it complained it couldn't find a JRE and quit. There was no prompt to specify the location as there is with SQL Developer, and there are no configuration files to edit.
    SQL Developer already has issues with different components doing the same things differently. The number of problems has been reduced, but it has taken some time, and I am concerned that the introduction of a large body of new code will cause things to revert to the bad old days in terms of quality.
    I may or may not end up using OSDM, but I want to have the choice. I don't want components I don't use becoming part of the application and adding bloat. The migration components are a case in point: the "migration workbench" ought to be a completely separate add-in rather than a core component, and I hope this is done with OSDM.

    Jim,
    "I don't want components I don't use becoming part of the application and adding bloat." Absolutely. and we agree. SQL Developer Data Modeling will be a separate standalone product so that users who don't want to use SQL Developer, but who do want to model will have that option. The nice thing about an integrated solution is that if you are doing database work and then want to do some modeling, you are in one tool. Our current plans are to release the standalone product in the initial production release and then follow that with the extension to SQL Developer depending on the customer demand. We do plan to release a read only viewer as part of SQL Developer 2.0. This will be an extension, which like the Migration extension, you can switch off. and from your note, you'd prefer the reverse, that these new extensions are pieces you choose to include and not choose to exclude.
    I'm not certain we'll get agreement on this, but I'll be interested. Just this morning I was updating a thread were the request was for all the pieces to be there working out of the box.
    Sue

  • Flex and Data Model plugin

    Where can I find documentation about the (LiveCycle) Data Model plugin for Flash Builder 4.6? And, specifically, about the filter option's criteria and query?

    Thanks Matt,
    One question: I see this code:
    import flash.net.LocalConnection;
    private var conn:LocalConnection;
    private function InitConn():void {
        conn = new LocalConnection();
        conn.client = this; // route incoming calls to this object
        try {
            // "taskConnection" is just a named local channel between SWFs
            // on the same machine - not a database/ODBC connection.
            conn.connect("taskConnection");
        } catch (error:ArgumentError) {
            trace("Can't connect."); // the channel name is already in use
        }
    }
    Where is "taskConnection"? Is this an ODBC type connection?
    Rgds
    JFB

  • Discoverer and Data Modeler Integration

    Discoverer has the ability to load tables or views into a BA from the Designer dictionary. Is there such an ability with Data Modeler or Oracle SQL Developer?

    Hello
    Interesting question. As far as I know this is not possible. You might want to ask Oracle Support though, as it may be something that has not been tried yet but works.
    Best wishes
    Michael

  • Forum for SQL Developer and Data Modeler?

    I have searched all the available forums for a while now but did not find a dedicated forum for the Oracle SQL Developer tool.
    Where do I post questions about it?
    Is there another, separate forum for SQL Developer Data Modeler?
    Peter

    https://forums.oracle.com/forums/category.jspa?categoryID=501

  • Best Practice to handle Data Refresh & Hierarchy

    Hi,
    During a recent discussion with one of our BI user groups, the question was raised as to what the best practices are for handling the following two issues.
    Issue 1:
    If entries are posted to prior periods in SAP R/3 (outside of the daily auto-refresh range), the current process is that the user group will ask us to conduct a manual refresh in BI for the prior periods which are affected.
    Question: Is it possible to set up a trigger in the system, so that BI knows which periods are changed and automatically refreshes data for those periods?
    Issue 2:
    If a hierarchy used in the reports is modified, there might be an adverse impact on the financial data the user group reports. The current process we have in place is to run a group of BI reports for both the current year and the prior year to make sure nothing is impacted, but there are limitations to this process. What if there is no impact on the current or prior year, but on the years before that?
    Question: What other global companies do to minimize such reporting impact, especially when they have hundreds of complex reports?
    If someone has any info on this, please share it.
    Thanks all for your support.
    Regards,
    Murali

    Hi,
    1) I would recommend you consider doing another delta init w/o data transfer for all years. (Of course, you first need a quiet time in the source system, and then you need to pull the existing data in the delta queue.)
    If you don't and/or can't have delta loads for some reason, find the table(s) for the old periods in the source system and check with your DBAs to see if they can create a "database trigger". With that, when the table is changed, they can send the new/changed record to another copy table that you can create an extract against (see the sketch after this reply).
    2) When a major report needs a change, or when we need to change an infoobject/hierarchy that will impact many reports, we ask our business analysts/super users/reporting groups to test the key reports. This is of course after we run some queries in our QA environment to compare the results (before and after the change).
    I would get consensus from all the regions before touching the global reports, and have them send me their agreement on it. If everybody is happy, we make the change, ask the users to test the query and give us a sign-off in QA, then send the query/infoobject/hierarchy to production and regenerate the queries in production if necessary. If you are not too comfortable making this change, at least try to avoid the key dates/weeks; maybe send the change to production on a Friday afternoon so that if it doesn't work you can change it over the weekend, etc.
    Cheers
    Tansu
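    For illustration, a sketch of the copy-table trigger idea from point 1, with invented table and column names and Oracle-style DDL; in practice the source-system DBAs would run the DDL directly rather than through JDBC:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ChangeCaptureSetup {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//erp-host:1521/ERP", "dba_user", "secret");
                 Statement st = con.createStatement()) {

                // Copy table that the BW extractor reads from (and then clears).
                st.executeUpdate(
                    "CREATE TABLE fi_doc_changes "
                    + "(doc_id NUMBER, fiscal_period VARCHAR2(7), amount NUMBER)");

                // Capture postings to prior periods as they happen.
                st.executeUpdate(
                    "CREATE OR REPLACE TRIGGER trg_fi_doc_changes "
                    + "AFTER INSERT OR UPDATE ON fi_documents FOR EACH ROW "
                    + "BEGIN "
                    + "  INSERT INTO fi_doc_changes "
                    + "  VALUES (:NEW.doc_id, :NEW.fiscal_period, :NEW.amount); "
                    + "END;");
            }
        }
    }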

  • Best practice for returning data from EJBs

    I have an EJB that runs a query on a backend database, and I want to return the data to my Java GUI. Ideally I would like to pass a ResultSet back, but I don't think they are serializable, so this isn't an option.
    What's considered the best way to pass database results back from EJBs to a front-end Java application?
    Thanks for any ideas you guys have

    If you want type-safety, define a VO (value object) that maps to your result-set, extract the data from the result set into the VO, and return an array of the data. Yes, it's extra work on the "back-end," but that's what the back-end is for. Just make sure your client.jar has the VO in it, as well as the Home and Remote interfaces.
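    For illustration, a minimal sketch of that pattern with invented field names; the EJB runs its query, calls fromResultSet before closing the connection, and returns the serializable array through the remote interface:

    import java.io.Serializable;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class CustomerVO implements Serializable {

        private static final long serialVersionUID = 1L;

        private final long id;      // invented columns, for illustration only
        private final String name;

        public CustomerVO(long id, String name) {
            this.id = id;
            this.name = name;
        }

        public long getId() { return id; }
        public String getName() { return name; }

        // Drain the ResultSet on the server side; the returned array is
        // serializable and safe to send back to the client.
        public static CustomerVO[] fromResultSet(ResultSet rs) throws SQLException {
            List<CustomerVO> rows = new ArrayList<>();
            while (rs.next()) {
                rows.add(new CustomerVO(rs.getLong("id"), rs.getString("name")));
            }
            return rows.toArray(new CustomerVO[0]);
        }
    }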

  • Best Practice for loading data into BW: CSV vs XML?

    Hi Everyone,
    I would like to get some of your thoughts on which file format would be best or most efficient for pushing data into BW: CSV or XML?
    Also, what are the advantages/disadvantages?
    Appreciate your thoughts.

    XML is used only for small data volumes - it's more that it is easier to do it via XML than to build an application for the same, provided the usage is low.
    Flat files are used for HUGE data loads (non-SAP), so the choice of data format would definitely be flat files.
    Also, XML files are transformed into a flat-file-type format, with each tag referring to a field, and the size of an XML file grows large depending on the number of fields (see the toy comparison below).
    Arun
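    To make that size overhead concrete, a toy comparison with invented field names; the tags repeat on every row, so the overhead multiplies with both row count and field count:

    public class FormatSizeDemo {
        public static void main(String[] args) {
            String csv = "1000,ACME,2011-05-01,2500.00";
            String xml = "<record><custId>1000</custId><name>ACME</name>"
                       + "<postingDate>2011-05-01</postingDate>"
                       + "<amount>2500.00</amount></record>";
            System.out.println("CSV bytes: " + csv.length()); // 28
            System.out.println("XML bytes: " + xml.length()); // 116, ~4x the CSV
        }
    }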

  • Best practice to move data to new data warehouse partitions

    Our DW is about 500GB and expected to double in the next three years.
    The largest table has 290m rows, but some fact tables have as few as 1k rows.
    We are also migrating to SQL Server 2012 (from 2008 R2) by building new servers, and will split SSRS and the DBMS.
    I was thinking I would only partition the larger fact tables, but my dilemma is moving the data to the new servers.
    I'm trying to avoid having to load each table manually, so would moving the tables to be partitioned to a different database on the current server be a viable option? And what about all the current subscriptions and SSRS reports?
    Then at some point (we keep only 5 years of data) we will need to start archiving, so I wanted this physical design to assist the archiving process. It seems crazy to partition every fact table - or is that the better strategy?
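    For what it's worth, a minimal sketch of partitioning only the big fact table by year; the names, boundary dates and connection details are invented, and switching old-year partitions out is then one route to the archiving mentioned above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PartitionSetup {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:sqlserver://newdw;databaseName=DW", "dwadmin", "secret");
                 Statement st = con.createStatement()) {

                // Yearly boundaries; ALL TO ([PRIMARY]) keeps the sketch simple,
                // real filegroup placement would be decided with the DBA team.
                st.executeUpdate(
                    "CREATE PARTITION FUNCTION pfYear (date) AS RANGE RIGHT "
                    + "FOR VALUES ('2010-01-01','2011-01-01','2012-01-01','2013-01-01')");
                st.executeUpdate(
                    "CREATE PARTITION SCHEME psYear AS PARTITION pfYear ALL TO ([PRIMARY])");

                // Only the 290m-row fact table is partitioned; the tiny
                // fact tables are just copied across unchanged.
                st.executeUpdate(
                    "CREATE TABLE FactSales (SaleDate date NOT NULL, Amount money) "
                    + "ON psYear (SaleDate)");
            }
        }
    }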

    Hi,
    I will move this thread to SQL Server Data Warehousing forum for further discussion.
    Here is a link to article on partitioning on Microsoft SQL Server:
    Strategies for Partitioning Relational Data Warehouses in Microsoft SQL Server
    http://technet.microsoft.com/en-us/library/cc966457.aspx
    Thanks.
    Tracy Cai
    TechNet Community Support

  • Troubles with App Module and Data Model

    Hi All!
    I have trouble inserting a new record in a VO.
    My situation is as follows:
    I have an AM with two instances of one VO. One instance I use to perform searches and the second to insert new records.
    As I understand it, if I create instances in the AM by giving different aliases to the same VO, they must be independent. Yes or no? (See the sketch after this post.)
    My problems happen when I deploy the app to OAS in local mode. The app is stateless.
    When I try to insert a new record I always get one of the following situations:
    1. PK = 1. This attribute is generated by a trigger and sequence on the server. Update after insert in the EO is on.
    2. **** Updating Attribute: CustSid Value is -******
    Application Error
    Return
    Error Message: JBO-26030: Failed to lock the record, another user holds the lock.
    Seems like an update, not an insert!
    After that I can't edit the first record in the first VO.
    3. From time to time I get other messages.
    When I run this app in the JDeveloper debugger, everything works fine!
    By the way, this page appears in a frame.
    When I try it without the frame, everything works fine on OAS too.
    I think it's a problem with AM state. Isn't it?
    Any help will be appreciated!
    Mike
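    For reference, a small sketch of the two-instance setup being described, with invented instance names: two usages of the same VO definition in an AM do get independent row sets and query state, so the answer to "yes or no?" is yes:

    import oracle.jbo.ApplicationModule;
    import oracle.jbo.Row;
    import oracle.jbo.ViewObject;

    public class VoInstanceDemo {

        public static void demo(ApplicationModule am) {
            ViewObject search = am.findViewObject("CustomersSearch"); // for querying
            ViewObject insert = am.findViewObject("CustomersInsert"); // for new rows

            // Querying one instance leaves the other instance untouched.
            search.setWhereClause("NAME LIKE 'A%'");
            search.executeQuery();

            // Insert through the second instance; with update/refresh after
            // insert on the EO attribute, the trigger-assigned PK (CustSid in
            // the post) is read back from the database after the commit.
            Row row = insert.createRow();
            row.setAttribute("Name", "New customer");
            insert.insertRow(row);
            am.getTransaction().commit();
        }
    }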

    In principle, there is only one way to do my task!? I can add LVAIKolekcijaDati via APasesDati as an instance and move the APasesDati child elements under that. Am I right? And since my jspx page was created from the APasesDati instance and I now want to use my new LVAIKolekcijaDati via APasesDati, do I need to recreate my jspx page from the beginning, or can I change something in my jspx page (or somewhere else) so that I do not have to recreate it and can still use LVAIKolekcijaDati via APasesDati in my page?
    Best regards, Debuger!

  • Data Models and Data Flow diagrams.

    Hi Gurus,
    Can anybody brief me on the concept of data models and data flow diagrams and their development, with illustrations? And whose responsibility is it - a technical or a functional consultant's - to translate business requirements and functional specifications into technical specifications, data flow diagrams and data models?
    Your valuable answers will be rewarded.
    Thanks in advance.

    Hi,
    Concept of Data Models
    A data model, or data modelling, is basically how you define or design your BW architecture based on business requirements. It deals with designing and creating an efficient BW architecture while sticking to standard practices.
    Multi-Dimensional Modeling with SAP NetWeaver BI
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/6ce7b0a4-0b01-0010-52ac-a6e813c35a84
    /people/githen.ronney3/blog/2008/02/13/modeling-strategies
    Modeling the Data Warehouse Layer with BI
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3668618d-0c01-0010-1ab5-aa75c3a4dfc2
    /people/gilad.weinbach2/blog/2007/02/23/a-beginners-guide-to-your-first-bi-model-in-nw2004s
    Data Flow Diagrams
    This shows the path of data flow for each individual object in BW: how data gets loaded into that object and how it goes out of the object, etc.
    Right-click on the data target > Show data flow.
    It shows all the intermediate layers through which data comes into that particular object.
    Responsibility of a Technical or a Functional consultant
    This is generally done in the design phase itself by a senior technical consultant with the help of a functional consultant, or by a techno-functional consultant interacting with the business.
    Hope this helps.
    Thanks,
    JituK

  • Beginners' guide to PowerPivot data models

    Hi,
    I've been using PowerPivot for a little while now but have finally given in to the fact that my lack of knowledge about data modelling is causing me all kinds of problems.
    I'm looking for recommendations on where I should start learning about data modelling for PowerPivot (and other software, e.g. Tableau, Chartio etc). By data modelling I mean how I should best organise all the data that I want to analyse, which is coming from multiple sources. In my case my primary sources right now are:
    Our main MySQL database
    Google Analytics Data
    Google Adwords data
    MailChimp data
    Various excels
    I have bought two books: "DAX Formulas for PowerPivot", which is great but sparse on data modelling information, and "Microsoft Excel 2013 - Building Data Models with PowerPivot", which looks excellent but starts off at, I believe, too advanced a level.
    Where should a beginner with no experience of data modelling, but intermediate/advanced experience of Excel, go to learn skills for PowerPivot data modelling?
    By far the main issue is that our MySQL databases are expansive and include hundreds of tables across multiple databases, and we need to be able to utilise data from all of them. I imagine that I somehow need to come up with an intermediary layer between the databases and PowerPivot which extracts and flattens the main data into fewer, more important tables, but I have no idea how to do this (see the sketch below).
    Also, to be clear, I am not looking at ways of modelling the MySQL database itself - our developers are happy with the database relationships etc. It's just the modelling of that data within PowerPivot and how best to import that data.
    Recommendations would be absolutely brilliant; it's a fantastic product but right now I'm struggling to make the most of it.
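    One possible shape for that intermediary layer, sketched with invented table names: a flattened reporting view (or a nightly job that materialises it into a table) that PowerPivot then imports in one go instead of hundreds of base tables:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ReportingViewSetup {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/shop", "report", "secret");
                 Statement st = con.createStatement()) {

                // One wide, analysis-ready view instead of many joins
                // inside the PowerPivot model itself.
                st.executeUpdate(
                    "CREATE OR REPLACE VIEW v_sales_flat AS "
                    + "SELECT o.order_id, o.order_date, c.customer_name, "
                    + "       p.product_name, oi.quantity, oi.unit_price "
                    + "FROM orders o "
                    + "JOIN customers c ON c.customer_id = o.customer_id "
                    + "JOIN order_items oi ON oi.order_id = o.order_id "
                    + "JOIN products p ON p.product_id = oi.product_id");
            }
        }
    }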

    Thanks for the recommendations. I am aware of the last two of those, and http://www.powerpivotpro.com/ in particular has proved very useful (TechNet less so).
    I will take a look at SQLBI in more detail, but from a very casual browse it seems like this too is targeted more at experienced users. Their paid courses may well prove useful though.
    I think what I'm getting at is that there are probably an increasing number of people like myself who have fallen into PowerPivot without a traditional background in databases and data modelling. In my case I have a small business of 15 employees, and we were using Excel and PivotTables to do some basic analysis before soon discovering that our data was too complicated and that I needed something more. PowerPivot definitely seems to solve that issue, and I'm having much better success now than I was without it. I also feel quite competent with DAX and with actually building tables from the PowerPivot data model.
    What I'm lacking is the very first step: cleaning and preparing raw data for import, importing it into PowerPivot, and setting up an efficient model. I have to be honest that your links above did bring Power Query to my attention, and it seems like a brilliant tool and one of the missing links. I would however still like to see a beginners' guide to data import and model set-up, as I don't think I've yet come across one, either in book or online form, which explains the fundamentals well.
